My blog has moved and can now be found at http://blog.aniljohn.com
No action is needed on your part if you are already subscribed to this blog via e-mail or its syndication feed.
Saturday, April 17, 2010
I had the opportunity earlier in the week to attend the 9th Symposium on Identity and Trust on the Internet (IDtrust 2010) which was held at NIST.
Given that a lot of the work I am currently doing centers on externalized, policy-driven authorization using Attribute Based Access Control (ABAC) and the profiling and deployment of Enterprise Attribute Services, I found a paper [PDF] and presentation [PDF] given by Ivonne Thomas from the Hasso-Plattner-Institute for IT-Systems Engineering to be very interesting.
As an aside, one of the best explanations on conveying what ABAC is all about, particularly to business owners, was given by a colleague who works for the DOD in this particular domain (Thanks Ken B).
“Consider if you will, the following two situations.
You are standing in line at the Grocery store and a little old lady in a walker comes up to you and demands your driver’s license and proof-of-insurance! You will be making a particular decision at that time. Now, consider if the same question was asked of you with red and blue lights blinking behind you and someone with a badge and a gun is knocking on your windshield asking for the same information.
We make these types of decisions all the time in our lives based on a real time evaluation of who is asking the question, what they want access to, and the context in which the question is being asked. ABAC is how we could do the same thing in the electronic world. Making a real-time access control decision based on attributes of the subject, the attributes of the resource and the attributes of the environment/context.”
I love this explanation and have shamelessly stolen and used it to great effect in multiple situations.
Coming back to the paper, given that Attributes are used to make these critical access control decisions, how does one judge the “trust-worthiness” and/or “authoritative-ness” of each attribute that is used to make the decision? How could one convey these qualities to a Relying Party so that it can make a nuanced access control decision?
On the authentication front, we have an existing body of work that can be leveraged such as the OMB E-Authentication Guidance M-04-04 [PDF] which defines the four Levels of Assurance (LOA) for the US Federal Government and the attendant NIST SP 800-63 [PDF] that defines the technologies that can be used to meet the requirements of M-04-04. In particular, you have the ability to use SAML Authentication Context to convey the LOA statements in conformance with an identity assurance framework.
The paper, which I think has a misleading title, uses the Authentication Context approach as an example and defines an extension to the SAML 2.0 schema for what the authors term an “Attribute Context”, which can be applied to each Attribute value. The authors define the parts as:
- Attribute Context This data element holds the attribute context, which is comprised of all additional information to the attribute value itself. This element is the upper container for all identity metadata.
- Attribute Data Source This data element indicates the source from which the attribute value was originally received and is part of the Attribute Context. This can be, for example, another identity provider, an authority such as a certificate authority, or the user himself who entered the data.
- Verification Context This data element holds the verification context, which comprises all information related to the verification of an identity attribute value. The Verification Context is one specific context within the Attribute Context.
- Verification Status This data element indicates the verification status of an identity attribute value, which should be one of “verified”, “not verified” or “unknown”. The verification status is part of the verification context.
- Verification Context Declaration The verification context declaration holds the verification process details. Such a detail could, for example, be the method that was used to verify the correctness of the attribute. Further extensions are possible and should be added here. The verification context declaration, together with the verification status, makes up the verification context.
I know of many folks who are working on the policy side of this question of how to judge the “authoritative-ness” of an Attribute, under multiple topics such as “Attribute Assurance”, “Attribute Practice Statements”, “Authority Services” and so on. But I have often thought about how one would go about conveying these types of assertions using current technology. This extension seems to provide an elegant mechanism for doing just that:
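To make the idea concrete, here is a minimal sketch (in Python, using the standard library's ElementTree) of what a saml:Attribute carrying such a context might look like. The ac: namespace URI, the element names, and the data-source and method URNs are hypothetical stand-ins based on the terminology above, not the authors' actual schema:

```python
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
AC = "urn:example:attribute-context"  # hypothetical extension namespace

ET.register_namespace("saml", SAML)
ET.register_namespace("ac", AC)

stmt = ET.Element(f"{{{SAML}}}AttributeStatement")
attr = ET.SubElement(stmt, f"{{{SAML}}}Attribute", Name="givenName")
value = ET.SubElement(attr, f"{{{SAML}}}AttributeValue")
value.text = "Ivonne"

# The Attribute Context wraps all metadata about the value itself
ctx = ET.SubElement(attr, f"{{{AC}}}AttributeContext")
source = ET.SubElement(ctx, f"{{{AC}}}AttributeDataSource")
source.text = "urn:example:idp:registration-office"

# The Verification Context carries the status plus the process details
vctx = ET.SubElement(ctx, f"{{{AC}}}VerificationContext")
status = ET.SubElement(vctx, f"{{{AC}}}VerificationStatus")
status.text = "verified"
decl = ET.SubElement(vctx, f"{{{AC}}}VerificationContextDeclaration")
method = ET.SubElement(decl, f"{{{AC}}}VerificationMethod")
method.text = "urn:example:verification:in-person-government-id"

print(ET.tostring(stmt, encoding="unicode"))
```

A Relying Party receiving this assertion can inspect the per-attribute verification status before deciding how much weight to give the value.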
The extensions proposed by the authors integrate nicely into a standard saml:AttributeStatement and convey metadata about individual attributes to a Relying Party, which can then make a more nuanced access control decision.
I think this is a great beginning and would love to see the authors submit this to the OASIS Security Services (SAML) TC so that it can become part and parcel of the SAML 2.0 specification. I would also love to see a Profile come out of the OASIS SSTC that would define a consistent set of Verification Context Declarations. In particular I believe that the concept of referencing “Governing Agreements” as defined in the current “SAML 2.0 Identity Assurance Profile, Version 1.0” (which is in public review) has applicability to this work as well.
Sunday, February 21, 2010
To be conformant to SPML v2 means that the SPML interface (Provisioning Service Provider / PSP) MUST:
- Support the set of Core operations
- a discovery operation {listTargets} on the provider
- basic operations {add, lookup, modify, delete} that apply to objects on a target
- Support basic operations for every schema entity that a target supports
- Support modal mechanisms for asynchronous operations
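To make the wire format concrete, here is a minimal sketch of a core lookup request built with Python's ElementTree; the target and PSO identifiers are purely illustrative:

```python
import xml.etree.ElementTree as ET

SPML = "urn:oasis:names:tc:SPML:2:0"
ET.register_namespace("spml", SPML)

# A core lookup operation: ask the PSP for the object identified by
# this PSO-ID on a given target (values here are illustrative only).
req = ET.Element(f"{{{SPML}}}lookupRequest", executionMode="synchronous")
ET.SubElement(req, f"{{{SPML}}}psoID", ID="user:jdoe", targetID="hr-directory")

print(ET.tostring(req, encoding="unicode"))
```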
There are additional “Standard” operations described in the OASIS SPML v2 Specification [Zip]. The key thing to keep in mind is that each operation adds a data management burden onto the provider, so the choice of whether or not to implement them should be considered very carefully.
From the perspective of deployment topologies, the PSP could be deployed separately from the Target or could very well be integrated tightly with the Target, e.g. an SPML-compliant web service interface on a target system.
One of the frustrating items for me when enquiring about SPML support in products has been the lack of clarity and visibility around exactly what has been implemented. All too often, vendors seem to have cherry-picked a subset of operations (whether from the Core or from the Standard list) and used that to claim SPML support. I would be very curious to see if anyone can claim full SPML v2 compliance.
A particular use case for SPML that I am currently working on deals with the “batch” movement of attributes from multiple systems to a central repository. The typical flow is as follows:
- Per organizational policy & relationship to user, attributes are assigned in their home organization and/or business unit (Org A / Org B / …)
- Org A must move those users and/or their attributes to a central repository (Repository X) on a regular basis
- Repository X acts as the authoritative source of attributes of users from multiple organizations / business units and can provide those attributes to authenticated and authorized entities in both a real-time request/response mode and a sync-take-offline-use mode.
Some points to keep in mind are:
- Org A / B / … may have, and all too often do have, their own existing identity and provisioning systems as well as associated governance processes in place.
- The organizations and the repository may or may not be under the same sphere of control and as such cannot mandate the use of the same piece of provisioning software and associated connectors on both ends of the divide.
- The systems where the organizations store the attributes of their users may not necessarily be directory based systems.
- The Repository may or may not be directory based system.
- Identity / Trust / Security are, as you may imagine, rather important in these types of transactions.
To meet these needs, we are currently profiling SPML to support the Core SPML Operations as well as the optional “BATCH” capability. The “ASYNC” capability is something that we are more than likely going to support as well, since it provides a mechanism for the provider to advertise support for asynchronous operations rather than have a request for an async operation fail on the requester with an error “status=’failed’” and “error=’unsupportedExecutionMode’”.
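The fallback behavior described above can be sketched as requester-side logic; send_request here is a hypothetical transport function standing in for the actual SOAP exchange, not part of any SPML toolkit:

```python
# Prefer asynchronous execution, but fall back to synchronous mode if the
# provider rejects it with the unsupportedExecutionMode error. Responses
# are modeled here as simple dicts keyed by SPML's status/error fields.

def submit(send_request, body):
    response = send_request(body, execution_mode="asynchronous")
    if (response.get("status") == "failed"
            and response.get("error") == "unsupportedExecutionMode"):
        # Provider does not support ASYNC; retry synchronously.
        response = send_request(body, execution_mode="synchronous")
    return response
```

Advertising the ASYNC capability up front (e.g. via listTargets) lets a requester avoid this round trip entirely.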
Keep in mind that the end result will satisfy more than just the one use case that I noted above. In fact, it satisfies many other use cases that we have that deal with both LACS and PACS scenarios. In addition, the profile will also bring in the pieces that are noted as out of scope in the SPML standard, i.e. the profiling of the security protocols that are used to assure the integrity, confidentiality and trust of these exchanges. Fortunately, we can leverage some of the previous work we have done in this space for that aspect.
Saturday, February 13, 2010
Mark Diodati at the Burton Group kicked off this conversation in his blog post "SPML Is On Life Support..." Other folks, notably Nishant Kaushik ("SPML Under the Spotlight Again?"), Ingrid Melve ("Provisioning, will SPML emerge?") and Jeff Bohren ("Whither SPML or wither SPML?") bring additional perspectives to this conversation. There is also some chatter in the Twitter-verse around this topic as well.
As someone who has been involved in both the standards process as well as end user implementation, I have a semi-jaded perspective to offer on what it takes for vendors to implement interfaces that are standards based in their tooling/products. First of all, let it be clearly understood that Standards are beautiful things (and there are many of them) but a Standard without vendor tooling support is nothing more than shelf-ware. So in the case of Standards Based Provisioning, in order to get that tooling support, multiple things need to happen:
- First and foremost, do NOT let a vendor drive your architecture! User organizations need to break out of the "vicious cycle" that exists by first realizing that there are choices beyond the proprietary connectors that are being peddled by vendors, and secondly by stepping up and defining provisioning architectures in a manner that prioritizes open interfaces, minimizes custom connectors and promotes diversity of vendor choice. Map vendor technology into your architecture and not the other way around, because if you start from what a vendor's product gives you, you will always be limited by that vendor's vision, choices and motivations.
- Bring your use cases and pain points to the Standards development process and invest the time and effort (Yes, this is often painful and time consuming!) to incorporate your needs into the base standard itself. I am finding that often the Technical Committees in Standards Organizations are proposed and driven by vendors and not end users. But in cases where there is a good balance between end users and vendors, the Standard reflects the needs of real people (The Security Services/SAML TC at OASIS often comes to mind as a good example).
- Organizations need to incorporate the need for open standards into their product acquisition process. This needs to go beyond "Product X will support SPML" to explicit use cases as to which portions of the standard are important and relevant. Prototype what you need and be prepared to ask tough, detailed questions and ask for conformance tests against a profile of the Standard.
- Be prepared to actively work with vendors who treat you like an intelligent, strategic partner and are willing to invest their time in understanding your business needs and motivations. These are the folks who see the strategic value and business opportunities in supporting open interfaces and standards, realize they can turn and burn quicker than the competition, and compete on how fast they can innovate and on customer satisfaction versus depending on product lock-in. They are out there, and it is incumbent upon organizations to drive the conversation with those folks.
Moving on, let me reiterate the comments that I made on Mark's blog posting:
"The concern with exposing LDAP/AD across organizational boundaries is real and may not be resolved at the technology level. Applying an existing cross-cutting security infrastructure to a SOAP binding (to SPML) is a proven and understood mechanism which is more acceptable to risk averse organizations.
I would also add two additional points:
- More support for the XSD portion of SPML vs. DSML in vendor tooling. There are a LOT of authoritative sources of information that are simply NOT directories.
- There needs to be the analog of SAML metadata in the SPML world (or a profile of SAML metadata that can be used with SPML) to bootstrap the discovery of capabilities. The "listTargets" operation is simply not enough."
While I do resonate with the "pull" model interfaces noted by Mark in his posting, I do believe that exposing LDAP(S)/AD interfaces, either directly or via Virtual Directories, outside organizational boundaries is a non-starter for many organizations.
At the same time, I believe there exist options in the current state of technology to provide a hybrid approach that can incorporate both the pull model as well as the application of cross-cutting security infrastructure into the mix. The architecture that we are currently using incorporates a combination of both Virtual/Meta Directory capabilities as well as an XML Security Gateway to provide policy enforcement (security and more) when exposed to the outside.
I will also reiterate that there needs to be more support for the XSD portion of SPML vs. DSML. A lot of the authoritative sources of user information that I am dealing with are simply not found in directory services but in other sources such as relational databases, custom web services and sometimes proprietary formats in addition to LDAP/AD.
I hope to post some of the use cases for standards-based provisioning, as well as the details of some of the profiling that we are doing on SPML to satisfy those use cases, in future blog posts. Looking forward to further conversations around this topic.
Friday, August 14, 2009
I had a great time at Burton Group's Catalyst Conference this year. Spent my time between the Identity Management, SOA and Cloud sessions, and also had an opportunity to attend the Cloud Security & Identity SIG session.
As the fast-thinking, slow-talking, and always insightful Chris Haddad notes on the Burton APS Blog (Chris... enjoyed the lunch and the conversation): "Existing Cloud Computing's momentum is predominantly focused on hardware optimization (IaaS) or delivery of entire applications (SaaS)".
But the message that I often hear from Cloud vendors is:
- We want to be an extension of your Enterprise
- We have deep expertise in certain competencies that are not core to your business, and as such you should let us integrate what we bring to the table into your Enterprise
... and variations on this theme.
But in order to do this, an Enterprise needs to have a deep understanding of its own core competencies, have clearly articulated its capabilities into distinct offerings, and have gone through some sort of a rationalization process for its existing application portfolio. In effect, have done a very good job of Service Orient-ing themselves!
But we are also hearing at the same time that SOA has lost its bright and shiny appeal and that most SOA efforts, with rare exceptions, have not been successful. For the record, success in SOA to me is not about building out a web services infrastructure, but about getting true value and clear and measurable ROI out of the effort.
So to me, it would appear that without an organization getting Service Orientation right, any serious attempt they make on the cloud computing end will end up as nothing more than an attempt at building a castle on quicksand.
The other point that I noted was that while there were discussions around Identity and Security of Cloud offerings (they still need to mature a whole lot more, but the discussion was still there), there was little to no discussion around visibility and manageability of cloud offerings. A point that I brought up in questions and in conversations on this topic was that while people's appetites for risk vary, one of the ways to evaluate and potentially mitigate risk is to provide more real-time visibility into cloud offerings. If a cloud vendor's offerings are to be tightly integrated into an Enterprise, and I now have a clear dependency on them, I would very much want to have a clear awareness of how the cloud offerings were behaving.
From a technical perspective, what I was proposing was something very similar in concept to the monitoring (and not management) piece of what WS-Management & WSDM brought to the table on the WS-* front. In effect, a standardized interface that all cloud vendors agree to implement that provides health and monitoring visibility to the organizations that utilize their services. In short, I do not want an after-the-fact report on your status sent to me by e-mail or pulled up on a web site; I want real-time visibility into your services that my NOC can monitor. There was a response from some vendors that they have this interface internally for their own monitoring. My response back to them is to expose it to your customers, and work within the cloud community to standardize it such that the same interface exists as I move from vendor to vendor.
Sunday, September 21, 2008
In the physical world, when an attacker is preparing to assassinate someone or bomb a target, the first thing they will do is determine how best to set up that attack. This initial phase of the set-up is called 'pre-operational surveillance'.
Unfortunately, the default configuration of most web services allows a potential attacker to do the digital equivalent of pre-operational surveillance very easily. In the digital world, these types of threats are often classified under the category of 'Information Disclosure Threats'. There are two in particular (there are more) that I would like to call attention to:
- SOAP Fault Error Messages
- WSDL Scanning/Foot-Printing/Enumeration
1. SOAP Fault Error Messages
All too often, detailed fault messages can provide information about the web service or the back-end resources used by that web service. In fact, one of the favorite tactics of attackers is to try to deliberately cause an exception or fault in a web service in the hope that sensitive information such as connection strings, stack traces and other information may end up in the SOAP fault. Mark O'Neill has a recent blog entry 'SOAP Faults - Too much information' in which he points to a vulnerability assessment that his company did of a bank, where the fault messages provided information that enabled an attacker to understand the infrastructure the bank was running and presumably to further tailor the attack.
The typical mitigation for this type of information disclosure is the implementation of the 'Exception Shielding Pattern' as noted in the Patterns & Practices Book 'Web Service Security' [Free PDF Version] which can be used to "Return only those exceptions to the client that have been sanitized or exceptions that are safe by design. Exceptions that are safe by design do not contain sensitive information in the exception message, and they do not contain a detailed stack trace, either of which might reveal sensitive information about the Web service's inner workings." (FULL DISCLOSURE: I was an external, unpaid, technical reviewer of this book).
You can either implement this pattern in software or use a hardware device like a XML Security Gateway to implement this pattern. Mark utilized a Vordel Security GW, but this is something that can be implemented by all devices in this category. I have direct experience with Layer 7 as well as Cisco/Reactivity Gateways and happen to know that they support this functionality and I don't doubt that IBM/DataPower and others in this space support it as well.
Note that this does not imply that the errors that occur are not caught or addressed, but simply that they are not propagated to the end-user.
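A minimal sketch of the Exception Shielding pattern (not tied to any particular SOAP stack): the full details are logged server-side under a correlation id, while the caller sees only a generic, safe-by-design fault:

```python
import logging
import uuid

logger = logging.getLogger("service")

def shielded(handler):
    """Wrap a service handler so callers only ever see sanitized faults."""
    def wrapper(request):
        try:
            return handler(request)
        except Exception:
            # Log the full details (stack trace, connection strings, etc.)
            # server-side, keyed by a correlation id...
            fault_id = str(uuid.uuid4())
            logger.exception("fault %s", fault_id)
            # ...but return only a generic, safe-by-design fault message.
            return {"fault": "Internal error", "faultId": fault_id}
    return wrapper
```

The correlation id lets support staff find the real error in the logs without any of it ever reaching the requester.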
2. WSDL Scanning/Foot-Printing/Enumeration
Appendix A of 'NIST 800-95: Guide to Secure Web Services' provides a listing of common attacks against web services, and you will note that there are many references to the information that can be found in a WSDL that can lend itself to a variety of attacks including Reconnaissance Attacks, WSDL Scanning, Schema Poisoning and more.
And in the 'Security Concepts, Challenges, and Design Considerations for Web Services Integration' article at the "Build Security In" web site sponsored by the DHS National Cyber Security Division, it notes that "An attacker may footprint a system’s data types and operations based on information stored in WSDL, since the WSDL may be published without a high degree of security. For example, in a world-readable registry, the method’s interface is exposed. WSDL is the interface to the web services. WSDL contains the message exchange pattern, types, values, methods, and parameters that are available to the service requester. An attacker may use this information to gain knowledge about the system and to craft attacks against the service directly and the system in general."
The type of information found in a WSDL, which can be obtained simply by appending ?WSDL to the end of a service endpoint URL, can be an extremely useful source of info for an attacker seeking to exploit a weakness in a service, and as such it should not be provided, or the capability should simply be turned off.
There are multiple ways of mitigating this type of attack, which include turning off the automatic ?WSDL generation at the SOAP stack application level or configuring the intermediary that is protecting the service end-point. For example, most XML Security Gateways by default turn off the ability to query the ?WSDL on a service end-point.
I consider this to be a very good default.
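As an illustration of the intermediary behavior (a real XML Security Gateway does this via configuration, not custom code), here is a sketch as a WSGI-style middleware that rejects ?WSDL queries before they reach the service endpoint:

```python
# Requests carrying a ?wsdl query string are rejected up front; everything
# else is passed through to the protected service unchanged.

def block_wsdl(app):
    def middleware(environ, start_response):
        query = environ.get("QUERY_STRING", "").lower()
        if query == "wsdl" or query.startswith("wsdl&"):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"WSDL retrieval is disabled on this endpoint"]
        return app(environ, start_response)
    return middleware
```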
When this option is implemented, there are often a variety of questions that come up that I would like to take a quick moment to address.
Q. If you turn off the automatic WSDL generation capabilities (i.e. ?WSDL) how are developers supposed to implement a client that invokes the web service?
There are two ways. (1) Publish the WSDL and the associated XML Schema and Policy files in an Enterprise Registry/Repository that has the appropriate Access Control Mechanisms on it, so that a developer can obtain a copy of the WSDL/Schema/Policy documents at design time. (2) Provide the WSDL/Schema/Policy files out of band (e.g. a Zip file, or a protected web site) to the developer.
Oh yes, there is always the run-time binding question that comes up here as well. What I will say is that run-time binding does not mean "run time proxy generation + dynamic UI code generation + glue code", but simply that the client-side proxy and the associated UI and glue code are generated at design time, and that the end-point the client points to may be a dynamic lookup from a UDDI-compliant Registry. I've done this before, and it does not require any run-time lookup of a web service's WSDL.
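A sketch of this approach: the proxy class stands in for design-time generated code, and lookup_endpoint is a hypothetical registry query function; only the endpoint address is resolved dynamically, never the WSDL:

```python
# The proxy, marshalling and glue code are fixed at design time from the
# contract; the registry supplies nothing but the address to call.

class OrderServiceClient:
    """Design-time generated proxy; no run-time WSDL retrieval."""

    def __init__(self, endpoint):
        self.endpoint = endpoint  # resolved from the registry at startup

    def get_order(self, order_id):
        # In generated code this would build and post the SOAP request;
        # here it just shows that only the address is dynamic.
        return {"endpoint": self.endpoint, "orderId": order_id}

def make_client(lookup_endpoint, service_key):
    # lookup_endpoint is a hypothetical UDDI (or other registry) query.
    return OrderServiceClient(lookup_endpoint(service_key))
```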
There is an additional benefit to this method as well. Have you ever gone through the process of defining a WSDL and Schema using best practices for web services interoperability, implemented a service using that WSDL and Schema, and then looked at the auto-generated WSDL? You may be surprised to find that the automatically generated WSDL is, in a majority of cases, not as clean or easy to follow, and in some cases may indeed be wrong. The best practice for developing interoperable web services recommends following a contract-first approach. This requires that the "contract", i.e. the WSDL and the Schema, be something that is developed with a great deal of care given to interoperability. Since the automatic generation of WSDL is platform-specific, there is always the possibility of platform-specific artifacts ending up in the contract documents, which is not what you intended.
Q. What about those existing/legacy services that do a run time lookup? Won't those break?
The question that needs to be asked at this point is why these services are doing a run time lookup, is there value being added by this capability in this client, and are there alternatives that will enable the client to provide the same functionality without compromising security?
As an example take the case of a BEA Weblogic client. If you will look at the documentation that BEA provides on building a Dynamic client you will note that they provide two different approaches, one that uses a dynamic WSDL lookup and another that does not. The interesting thing about this is that the approach that uses the WSDL makes a run-time lookup of a Web Service's WSDL which will end up breaking if the ?WSDL functionality is turned off. But the alternative approach of building a dynamic client provides the same functionality without the run-time WSDL lookup.
From what I can see, from a functional perspective there is no difference between the two approaches. Given that one of the things you want to do when developing web services, or any software for that matter, is to minimize the number of external dependencies, I would choose the second option of NOT doing a run-time WSDL lookup in this particular case. What is regrettable is that the default configuration in BEA's tooling appears to be the run-time WSDL option (or so I have been informed), which leads to issues when folks develop clients using the default options in their tools.
Mitigating these information disclosure threats requires both developers and operational support folks to understand their shared responsibility for security. Developers need to understand that security should be part of the software development lifecycle and is not something that is bolted on at the end or 'thrown over the wall' for someone else to take care of. Operational folks need to understand that a layered defense-in-depth strategy is needed and that secure coding practices of developers are an essential component of any operational environment. In particular, the mentality of "Firewalls and SSL will save us all" needs to change for all parties concerned.
Sunday, September 7, 2008
Notes from an on-going online discussion to self, for use as a reference and for discussion:
"SOA is an architectural style, and an architectural style is a set of principles. Gartner has enumerated five principles that constrain SOA:
- modular
- distributable
- described
- sharable
- loosely coupled
To the degree a system exhibits all five, the more it qualifies as representing the SOA style"
- Nick Gall, Gartner
"SOA Principles of Service Design:
- Service Contracts
- Service Coupling
- Service Abstraction
- Service Reusability
- Service Autonomy
- Service Statelessness
- Service Discoverability
- Service Composability"
- Thomas Erl, SOA Principles of Service Design
"From my perspective, the overarching principle governing SOA is separation of concerns. This principle helps you determine how to factor functionality into services. Thomas Erl discusses service factoring and granularity in the SOA Fundamentals section of his book rather than treating SoC as a principle"
- Anne Thomas Manes, Burton Group
"The 4 tenets of Indigo as defined by Don Box, which has now been morphed into the Microsoft tenets of SOA:
- Boundaries are explicit
- Services are autonomous
- Services share schema and contract, not class
- Service compatibility is determined based on policy"
- Don Box, A Guide to Developing and Running Connected Systems with Indigo
"The 10 Principles of SOA, as expanded on the above 4 tenets, by Stefan Tilkov:
- Explicit boundaries
- Shared contract and schema, not class
- Policy-driven
- Autonomous
- Wire formats, not programming language APIs
- Document-oriented
- Loosely coupled
- Standards-compliant
- Vendor-independent
- Metadata-driven"
- Stefan Tilkov, innoQ
I've been using a combination of Anne's separation of concerns, Thomas Erl's principles and selected bits from the OASIS SOA-RM in the SOA class that I teach but the variations above look to be great fodder for some discussions!
Sunday, March 9, 2008
As part of my SOA class, we are currently going over some of the principles of service design. In particular, we were going over the principle of abstraction. The example of technology abstraction that I used in class was a remote control.
The funny thing for me has been that just recently my 10+ year old Pioneer AV receiver, which is part of my home entertainment system, finally started having problems after years of excellent service. I had to replace it with a new Onkyo AV receiver that really has more options than I know what to do with. So I spent some time two nights ago, after the kids and wife had gone to bed, swapping out this component. But the greatest thing for me was that when they went to watch TV and to listen to the radio the next day, they did not have to do anything differently!
Everything just worked using the same interface that they have always been used to, down to using the same key presses, because I had invested some time in consolidating my "service interface" to one programmable and extendable universal remote. So, the only additional thing I had done was to update the firmware in the remote control to now point to the new receiver on the back-end.
I would definitely consider this a practical example of the implementation of the principle of abstraction.
Wednesday, February 27, 2008
Many people believe that an Enterprise Service Bus (ESB) is a must have component of a SOA infrastructure. The usual argument put forth is that if you want security, manageability and reliability in your environment, you must have something that looks like a "bus" in your environment.
I have a slightly different perspective on this. From my experience, there are other components that do an outstanding job when it comes to security functionality. In addition, an ESB really can't manage services that are not "plugged-in" to it (you need something like a WSM product). And finally, with the approval of and support for WS-ReliableMessaging as an OASIS standard, you no longer need some proprietary messaging technology to provide reliable messaging. You can leverage the support for the standard built into the basic service platform itself. So my experience has been that you do not need a "bus" in the middle through which all traffic should flow and all things in your enterprise should be connected to.
But at the same time, where I have seen the value of an ESB is from the perspective of its ability to easily tap into a variety of back-end systems and expose them using a contracted web service interface. So in my world, the ESB provides me ease of use when it comes to tapping into custom or Enterprise class systems (ERP, RDBMS, Mainframe) and "service-enabling" them. So an ESB is simply a type of Service Platform which can be used to build services and not a bus to which everything is connected. The service created in this manner can be treated like any other service that you build or buy, and can be secured and managed just like you would any other service.
In this model, I really did not see much value in having an ESB, given that we have a pretty comprehensive existing, heterogeneous SOA infrastructure that is designed to work together in a standards-compliant manner and provides pretty much all of the functionality that an ESB is touted to provide. The only exception would be if there existed some back-end system that I could not natively tap into from a standard service platform and needed the facilities of an ESB to ease the connection into that proprietary or legacy system.
But I had the opportunity yesterday to listen to Anne Thomas Manes of the Burton Group at the "Pragmatic SOA Governance" workshop that was put together by Michael Meehan and his crew from TechTarget.com. Great event, BTW!
What Anne's comments opened my eyes to was taking what I had above to the next step. From her perspective, an ESB is the new generation of application server, and what it brings to the table is the ability for an organization to be resilient to application protocol changes. So an ESB provides the ability to leverage the same core business logic that is used to build a capability and expose it over multiple service protocols/interfaces. And if a new protocol needs to be supported, it is simply a matter of the ESB supporting it. Keep in mind that the end product is still a contracted service interface. But in this case, that interface is not limited to SOAP and can be one of many others that may be much more performant or optimized for a particular domain.
Conceptually, I can buy into this. Will have to see how well it does in real life.
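A rough sketch of Anne's point, with entirely hypothetical names: the core business logic is written once and is protocol-agnostic, and each protocol binding is a thin adapter over it, so supporting a new protocol means adding an adapter rather than rewriting the logic.

```python
# Sketch of one capability exposed over multiple protocol bindings.
# All names here are hypothetical illustrations, not a real API.
import json

def get_order_status(order_id):
    """Core business logic, written once, protocol-agnostic."""
    return {"order": order_id, "status": "shipped"}

def soap_binding(order_id):
    """Hypothetical SOAP-style rendering of the same capability."""
    result = get_order_status(order_id)
    return ("<OrderStatusResponse><order>%s</order><status>%s</status>"
            "</OrderStatusResponse>" % (result["order"], result["status"]))

def json_binding(order_id):
    """Hypothetical REST/JSON-style rendering of the same capability."""
    return json.dumps(get_order_status(order_id))

print(soap_binding("42"))
print(json_binding("42"))
```

The contracted interface each consumer sees is unchanged; only the adapter layer grows as new protocols are supported.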
Wednesday, November 28, 2007
Sunday, November 25, 2007
Recently, a lot of interest has been shown in SOA (Service Oriented Architectures). In these systems, there are multiple services, each with its own code and data, and the ability to operate independently of its partners. In particular, atomic transactions with two-phase commit do not occur across multiple services because this necessitates holding locks while another service decides the outcome of the transaction. This talk proposes there are a number of seminal differences between data inside a service and data sent into the space outside of the service boundary. The act of unlocking data as a copy of it is sent in the message means the interpretation of the received message must include the understanding that this data is unlocked. This changes how the data can be used.
We then consider objects, SQL, and XML as different representations of data. Each of these models has strengths and weaknesses when applied to the inside and outside of the service boundary. The talk concludes that the strength of each of these models in one area is derived from essential characteristics underlying its weakness in the other area.
Source: Presentation by Pat Helland, "Data on the Inside versus Data on the Outside," at TechEd EMEA in Barcelona
Pat Helland's "Data on the Outside vs. Data on the Inside" paper has always been one of those must-read items for me when it comes to Service Orientation. He recently gave a presentation on the topic at TechEd EMEA in Barcelona and has posted the slides. Definitely worth checking out...
Thursday, November 22, 2007
Slides and notes from two presentations on REST and SOAP at QCon:
Very different viewpoints. I enjoyed both.
Tuesday, November 20, 2007
I was giving a presentation and demo today about Policy Based Management in a Web Services environment. The particular use case I was demonstrating was the ability to, by policy, change the type of authentication tokens that were accepted by a web service (from none, to hard-coded, to leveraging an existing identity store, to X.509 Certs etc.) depending on the level of assurance needed, without modifying the web service code.
The mechanism I was using as the Policy Enforcement Point (PEP) in my demonstration was an XML Security Gateway. XML Security Gateways are useful devices for a variety of reasons, but they typically have drawbacks as well. The major one is that if you have XML Security Gateways from multiple vendors, you typically cannot define policies in the Policy Administration Point (PAP) of one vendor and push them out to the Gateways (PEPs) of another vendor. This issue becomes even more pronounced when you consider that other pieces of web services infrastructure, such as Web Service Management (WSM) products and ESBs, also have their own unique consoles for administration.
When you question the vendors on this, the typical answer you get is that they are waiting for WS-Policy (and the associated domain-specific languages under WS-Policy) to be approved and adopted to alleviate this issue. In the meantime, of course, if you need that central administration, "just standardize on our product."
I'll buy that to a certain extent, but what about support for those standards that have been out there for a while and have traction in the community? e.g. SAML and XACML.
One of the reasons that Cisco's acquisition of Reactivity and Securent interested me was that it brought together the possibility of an XML Security Gateway (acting as a PEP) backing against an XACML-based fine-grained authorization service (PDP). I was not aware of anyone who supported this use case out of the box, although I am aware of folks who have requested this functionality and of vendors who have either custom-modified their products to enable it or have put it on their feature roadmap.
But I was recently made aware of at least one potential out of the box support for this capability by Mark O'Neill, CTO of Vordel. Mark pointed me to Vordel's XACML PEP Support, as well as a case study and information on interoperating with various XACML PDPs. Very interesting!
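To make the PEP/PDP division of labor concrete, here is a minimal sketch in Python. The attribute names, the rule, and the function shapes are all hypothetical illustrations; a real deployment would exchange XACML request/response messages between the Gateway and the authorization service rather than passing Python dictionaries.

```python
# Minimal sketch of a PEP delegating access control decisions to a PDP.
# Attribute names and the policy rule below are made-up illustrations.

def pdp_decide(subject, resource, environment):
    """Policy Decision Point: evaluate attributes against a policy."""
    # Hypothetical rule: clinicians may access patient records while on call.
    if (subject.get("role") == "clinician"
            and resource.get("type") == "patient-record"
            and environment.get("on_call")):
        return "Permit"
    return "Deny"

def pep_enforce(request):
    """Policy Enforcement Point: gather attributes, ask the PDP, enforce."""
    decision = pdp_decide(request["subject"], request["resource"],
                          request["environment"])
    if decision != "Permit":
        raise PermissionError("access denied by policy")
    return "access granted"

request = {
    "subject": {"role": "clinician"},
    "resource": {"type": "patient-record"},
    "environment": {"on_call": True},
}
print(pep_enforce(request))  # access granted
```

The point of the split is that the Gateway (PEP) never embeds policy logic; swapping or updating the policy happens entirely at the PDP.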
Sunday, November 4, 2007
Monday, October 29, 2007

Monday, September 17, 2007
When designing schemas, one strives for modularity, which allows one to build XML schemas that are composed of other schema documents. The keywords that make this possible are include, import and redefine. Most folks who are used to schemas are familiar with import, and if you want to maximize interoperability you should stay away from redefine, since it is not implemented on a consistent basis.
That leaves the include keyword. When you use include, one of the following should be true:
- Both schema documents (The including schema and the included schema) must have the same target namespace
- Neither of the schema documents should have a target namespace
- The including schema document has a target namespace, and the included schema does not.
In the last case, all components of the included schema document take on the namespace of the including schema document. The included schema document is sometimes referred to as a chameleon schema as its namespace changes depending on where it is included.
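A minimal illustration of the chameleon case, using made-up file names and types; the two documents below would live in separate files:

```xml
<!-- common.xsd: no targetNamespace, so it is a chameleon schema -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:simpleType name="USStateCode">
    <xs:restriction base="xs:string">
      <xs:length value="2"/>
    </xs:restriction>
  </xs:simpleType>
</xs:schema>

<!-- main.xsd: the include causes USStateCode to take on this
     document's target namespace, so it can be referenced via tns: -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:tns="urn:example:main"
           targetNamespace="urn:example:main">
  <xs:include schemaLocation="common.xsd"/>
  <xs:element name="state" type="tns:USStateCode"/>
</xs:schema>
```

If a different schema with a different target namespace includes the same common.xsd, the same type takes on that namespace instead, which is exactly the reuse property described above.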
A best practice that I normally follow is to use chameleon schemas for common, reusable types so that I don't have to namespace qualify some very common schema types that I normally end up using across multiple schemas.
I recently ran into an issue when actually working on this, in that a particular .NET tool I was using did not seem to understand the use of option (3), i.e. chameleon schemas. Since I know the guys who developed the tool, and they are considered pretty much experts in the field, I was not that surprised when I pinged them on this and got back an answer that it was a known issue.
According to them, the issue exists because of a lack of support in the .NET API itself and (this is way too low level for me) has to do with the ServiceDescriptionImporter (SDI) class not working properly. So you would have issues if you tried to use wsdl.exe with chameleon schemas in .NET 1.1 and 2.0. Not sure if the issue exists under WCF.
The workaround for this, which I implemented, was to qualify the included schema with the same namespace as the including schema. Not ideal, but it got me to where I needed to be.
Hopefully this is an issue that will be fixed.
Sunday, September 2, 2007
Service Component Architecture (SCA) is something that has been popping up on my radar for some time now, but I've been having a hard time getting a clear idea of what SCA is all about from the vendor presentations and from the specifications themselves.
In particular I was interested in how it relates to SOA and Web Services, but what I had heard to date and what I took away from the various presentations/readings made me put it on the back-burner as a "new application thing from a bunch of Java vendors".
I just changed my mind on this after reading David Chappell's "Introducing SCA [PDF]" white-paper. It is a clear, vendor-neutral and most excellent description of what SCA is all about and the various pieces that make up SCA. In particular, it sets the stage for understanding how various vendors who jump on the SCA bandwagon may choose to focus on or implement one or more of the pieces of what SCA is in total.
I would also add that if, after reading the white-paper, you are interested in the standardization efforts around SCA, you should check out the OASIS OpenCSA efforts.
In short, as noted in the white-paper "The reality today is clear: Anyone who's interested in the future of application development should also be interested in SCA." Read!
Sunday, July 1, 2007
You don't get much! Rest with the small caps, of course. The program starts (depending on whether or not you have something going on during breakfast) any time between 7 and 8:30 a.m. and goes all the way through 6 p.m. Then there are networking and interoperability events that usually go on until 9 p.m. All in all, a very full program with very little slack or fluff.
Allrighty, now that I have made my lame joke, I did want to mention the "REST Easy" workshop that was given by Pete Lacey. I personally found it to be very enjoyable and educational. Pete is passionate, articulate and takes no prisoners on this topic. You might as well have named the workshop "SOAP based web services are the spawn of evil and should be staked through the heart ASAP!"
As I mentioned to Pete afterwards, I am not in the OR camp (i.e. WS-* OR REST) as I believe that there is a place for both. I also think that 10 years from now we will be using a strange fusion of the two approaches and arguing about something else! In any case, I do believe that REST offers definite potential benefits if you can wrap your head around it and learn how to apply its constraints correctly in building solutions. I, for one, intend to dedicate some time to do just that. You can never have enough tools in your toolbox!
UPDATE: Humor often does not come through very well in writing (and the above, now crossed-out sentence was meant to be humorous). But based on Pete's comments below, I want to make sure that the readers of this blog posting do not get the wrong impression. A significant portion (> 95%) of the time was spent on REST principles themselves, examples of an actual REST solution with code samples, and a lot more, not on picking on WS-*. To think otherwise would be very unfair to Pete, and that is not my intent at all. My only intent with the above comment was to note that if you believe the premise and the promise of REST as presented in the workshop, you will come away with an aversion to the complexity that is inherent in the current state of WS-*. Which, of course, is why I noted above that I would indeed be investing time in learning more about REST.
Friday, June 29, 2007
Today was a good day!
Gave my presentation today and got some incredibly good engagement and feedback that I need to follow up on. It appears that a lot of folks share the trials and tribulations we are going through as we deploy our SOA environment, so sharing how we are accomplishing what we need to do, along with some of the best practices we have identified, opened up a floodgate of ideas for possible collaboration, which was exactly what we were hoping for!
The sessions as usual were outstanding, and I ended up spending the evening having some intense and wide-ranging conversations with both Anne Thomas Manes and Jonathan Chaitt from Disney. Anne is the SOA track lead for the Catalyst conference and really did a great job of putting together a strong selection of folks (Burton, end users, vendors, independents) while keeping it all real. I also really enjoyed chatting with the Disney folks. They are doing some really fine work in the area of fine-grained authorization and are folks I hope to keep in touch with.
Also attended both an OASIS XACML Interop event as well as a WS-I Basic Security Profile Interop Session which really opens up some possibilities for some of the things that we are considering.
All in all, an excellent day, topped off by an awesome personally guided walking tour of downtown SF (some amazingly beautiful buildings out here) by a rather remarkable gentleman I met the last time I was out here, who just so happens to be a former SF resident.
Wednesday, June 27, 2007
Trends driving enterprise IT
- Today's toys = tomorrow's tools
- SaaS as a new business model
- Semantic disparity
- Integration of collaboration into business apps
- Virtualization
- Automating regulatory compliance and governance
- more...
Organizations should build general-purpose reusable infrastructure based on standards to ensure management, consistency etc. Tension between building for today and architecting for tomorrow. Realize that tech is fleeting.
Growing resistance to super-platform from best of breed. Innovations in raising level of abstraction and in the pursuit of simplicity.
Super-platform vendors are not just selling app servers but SOA/BPM platforms. More "stuff" in the core platform. But also more specialization in the areas of:
- Domain-specific languages
- OSS rebel framework
- Mobile frameworks
- UI frameworks
- Others..
Increasing simplicity
- REST/WS-*/POX
- Dynamic vs. compiled languages
- 80/20 rule specialized frameworks (e.g., Rails)
- Lightweight containers
Increasing abstraction
- Model-driven development
- Declarative languages
- Data services
- Infrastructure services
Assume heterogeneity at the core. Pursue simplicity and abstraction. Invest in infrastructure (SDLC, Governance, Runtime, Security, Data) to provide separation of concerns, increase productivity and efficiency and provide better governance and consistency.
Don't let the vendor dictate your strategy
- Design own infrastructure
- Identify functional capabilities and map vendor tech into them
- Best innovation from startup community
- Focus on principles and patterns and recognize that technology is fleeting
- Separation of concerns between Apps and Infrastructure
- More...
I am attending the Burton Group Catalyst conference in San Francisco this week. I flew out here on Monday and attended some workshops over the last couple of days on Identity Federation Technologies, Application Security and, my personal favorite, Pete Lacey's workshop on REST.
Today is also the first day that I am feeling relatively human, as over the last three days I've been suffering from what felt like all the symptoms of the flu. Liberal amounts of rest combined with regular doses of various pain killers seem to have improved the situation. Which is a good thing, since I am scheduled to give a case study presentation on Thursday:
SOA and Security: A Pattern-based Approach
A critical part of building out a SOA runtime infrastructure is the requirement to directly address the threats to message exchanges that exist in a non-benign environment. As JHU/APL is building out its SOA infrastructure for our GIG Testbed environment, we are taking a measured and hopefully realistic approach to web service security that is leveraging best practices from the community that are embodied in various security patterns.
To the greatest extent possible, we are mapping various applicable security patterns to physical implementations using components of a SOA runtime infrastructure such as mediation and web service management systems in combination with applicable security and WS-* standards. This presentation will provide an overview of this effort, with drill downs into some specific patterns and their corresponding implementations, as well as provide insight into some of the related but non-technical issues such as governance and building a community of practice around this effort.
Thursday, June 7, 2007
"I've set my bozo bit for WS and SOA types who are repositioning themselves as REST stalwarts. Spotting a bandwagons is not an indicator of competence. " - REST Person
"REST is now the hot chick in town. Its on the uptick of the hype curve. Atom is going to be taking over soon. Until we get past the top of the hype curve its impossible to have intelligent, analytical, critical conversations with the fanatics." - WS-* Person
From the perspective of someone who just wants to get things done, this is simply MAD. <sigh>
Thursday, April 19, 2007
I had a chance to geek out over dinner with a couple of friends, Ken Laskey and Chris Bashioum, as well as a colleague of theirs (Rob Mikula) from MITRE. We got together to talk about SOA Governance since both Ken and Chris are fellow members on the OASIS SOA-RM TC and the three of them team teach a SOA course at MITRE that heavily leverages the SOA-RM.
Unsurprisingly, the conversation ranged across the board from SOA adoption, granularity of services, performance impact of composite services and possible ways to mitigate them, the role of the UDDI protocol, data model extensibility in Repositories, WS-Policy, Consent of the Governed and how it applies to SOA Governance, the role of a Center of Excellence in the adoption and operation of a SOA and more...
A discussion that we were having also provided me with a way forward in something that I've been struggling with regarding the SOA course that I will be teaching for Johns Hopkins University. What type of project/exercise work can the students work on for the class? What Ken, Chris and Rob do in their two day class is to have their students work through a case study on integrating multiple information systems using a SOA approach. Given that I have a semester's worth of time, a case study with drill downs in specific and relevant areas running the gamut from governance and requirements to actual implementation of services could be very useful in driving home the lecture/discussion points while at the same time providing me with a mechanism to gauge if the students are actually grokking the information. Will have to give some serious thought on how to go about structuring this.
All in all, an immensely enjoyable evening!
Tuesday, April 10, 2007
I will be teaching a graduate degree class on Service Oriented Architecture via the Johns Hopkins University's Engineering Programs for Professionals. The exact date is uncertain, but I expect it to be either in the Fall of this year or the Spring of next year. Here is the class description:
605.702 Service Oriented Architecture
This course will explore SOA concepts and design principles, interoperability standards, security considerations as well as runtime and governance infrastructure for SOA implementations.
Web services will be used as an example of implementation technology for SOA and as such, the exploration of runtime infrastructure will focus on standards based support for SOA requirements in modern service platforms such as .NET/WCF and Java/Axis2, the role of mediation systems such as XML Security Gateways and ESBs, as well as how Registries, Repositories and Web Service Management capabilities map into an implementation of a SOA.
Given its focus on shared capabilities, SOA involves more than technology. Therefore, additional topics will include the impact of SOA on culture, organization, and governance.
If you thought the definition of SOA sounded familiar, you would not be mistaken.
I am really looking forward to teaching this class, if for no other reason than that I have found that in the process of teaching (or preparing and giving a presentation) and interacting with the audience, I often learn a great deal as well. Given the rapid change in this topic area, this course will more than likely morph from semester to semester as our shared understanding of what SOA is and how best it can be implemented advances.
UPDATE (4/11/07): The course description was recently updated to be a bit more descriptive (or buzzword compliant - take your pick).
Saturday, April 7, 2007
I recently came across this resource from Sun. According to the web site:
This tutorial explains how to develop web applications using the Web Service Interoperability Technologies (WSIT). The tutorial describes how, when, and why to use the WSIT technologies and also describes the features and options that each technology supports.
WSIT, developed by Sun Microsystems, implements several new web services technologies including Security Policy, WS-Trust, WS-SecureConversation, Reliable Messaging, Data Binding, Atomic Transactions, and Optimization. WSIT was also tested in a joint effort by Sun Microsystems, Inc. and Microsoft with the expressed goal of ensuring interoperability between web services applications developed using either WSIT or the Windows Communication Foundation (WCF) product.
[...]
The Web Services Interoperability Technology Tutorial addresses the following technology areas:
- Bootstrapping and Configuration
- Message Optimization
- Reliable Messaging (WS-RM)
- Web Services Security 1.1 (WS-Security)
- Web Services Trust (WS-Trust)
- Web Services Secure Conversation (WS-Secure Conversation)
- Data Contracts
- Atomic Transactions (WS-AT)
- SOAP/TCP
This looks to be a rather good resource to learn about Web Service Interoperability between Sun and Microsoft stacks using some of the advanced WS-* standards and specifications. The highlight above regarding the joint Sun/Microsoft testing is mine. Worth checking out.
Wednesday, April 4, 2007
I had the opportunity today to give a presentation on SOA and its relationship to Net-Centricity to various folks in my organization. During the Q&A session that followed the briefing, there was a question regarding service versioning.
Just to provide some context, in my briefing one of the items that I touched on is the concept of Loose Coupling and how that enables the abstraction of interface from implementation and gives a service provider the ability to change out their implementation without affecting the service consumer.
To paraphrase the question "I have a service that is being used by multiple parties, and I am changing the implementation of that service but not the interface. (1) From a testing and certification perspective, what should I do? (2) What mechanisms exist to communicate this change to all the folks who are using my service?"
The interesting variation that this particular question posed was that changing out the implementation in this example was NOT about changing implementation technology but changing the processing algorithms/business logic associated with the implementation.
My answer to (1) was that if the algorithm/business logic change had the effect of changing the expected result (as compared to the original implementation), at that point I would consider this implementation to be a whole new service and would consciously break the interface. For example, in the case of a web service implementation, I would change/update the namespace of the schema such that it would break compatibility with existing service consumers. I would also have to have this service tested and certified as though it were a new service.
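As a sketch of what consciously breaking the interface via the schema namespace might look like, assuming entirely hypothetical namespaces and element names:

```xml
<!-- v1: existing consumers bind to this namespace -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:orders:v1">
  <xs:element name="OrderStatus" type="xs:string"/>
</xs:schema>

<!-- v2: the changed business logic ships under a new namespace, so
     existing consumers fail fast instead of silently getting the
     new behavior through an unchanged interface -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:orders:v2">
  <xs:element name="OrderStatus" type="xs:string"/>
</xs:schema>
```

Since namespace-qualified messages against v1 will not validate against v2, the break is explicit and visible to every consumer at the interface level.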
But before I do that, I would have to notify the consumers that are using my service that I am about to make this change, which relates to my answer to (2). AFAIK, at present there is no standardized, automated way of notifying all existing service consumers that I am about to change out my implementation. So in the current state of technology, I would have to set up a mechanism/process as part of the original client provisioning for how I, as a service provider, would communicate changes and updates of importance to my service consumers.
The example I pointed to was how Google implements its AdWords API versioning strategy:
The AdWords API supports multiple recent versions of the WSDL to allow developers time to migrate to the most recent version. Once an earlier version of the WSDL has been replaced by an updated version, the older version will be supported for four months after the launch of the newer version.
During this period, the AdWords API will continue to provide developer access to and documentation support for any version dating back two months.
You can tell which version of the WSDL you are accessing based on the access URL namespace, which includes the version number. Versions are named with the letter 'v' followed by whole numbers (v5, v6, etc.).
The Release Notes summarizes changes between versions. In addition, new versions and shutdowns of older version are announced via the AdWords API Blog.
In addition to this documentation, whenever we release a new version of the AdWords API, new versions and older version shutdowns will be announced via the AdWords API Blog.
In the above example, the communication mechanism is the AdWords API Blog and it is incumbent upon service consumers to subscribe to it to keep updated on what is going on with the API. And Google provides a 4 month window in which they run both the old version and the new version side-by-side to give you time to move from one to the other.
But I have to admit that this is a situation that I have not personally run into (change in implementation logic, no change in interface), so I am basing my answers on various community best practices and conversations with folks who have had to do this. If you have run into this particular situation before, I would be very interested in knowing how you handle this in your organization, especially any info you can share on the governance policies and processes that you have put into place to communicate upcoming changes.
Sunday, April 1, 2007
There is an interesting article over at Thomas Erl's SOA Magazine site by Cory Isaacson titled "High Performance SOA with Software Pipelines".
In the article, Isaacson notes that "Distributed service-oriented applications, by their nature, take advantage of multi-CPU and multi-server architectures. However, for software applications to truly leverage multi-core platforms, they must be designed and implemented with an approach that emphasizes concurrent processing".
He identifies and explains current approaches to dealing with concurrency in applications such as:
- Symmetric Multi-Processing, in which an SMP server operating system manages the workload distribution across multiple CPUs.
- Automated Network Routing in which service requests are routed to individual servers in a pool of redundant servers.
- Clustering Systems in which multiple servers share common resources over a private "cluster interconnect".
- Grid Computing in which applications are divided into sub-tasks that can execute independently.
... as well as the various limitations associated with the current approaches. He also identifies a new approach, based on a methodology called software pipelines, which can enable businesses to achieve the benefits of concurrent processing without major redevelopment effort. I found it to be fascinating reading as I personally have not done much work with multi-threaded applications or grid computing.
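As a toy illustration of the pipelines idea (not Isaacson's actual methodology), here is a minimal Python sketch in which each stage runs concurrently in its own thread and hands work downstream through a queue; the stage functions are hypothetical stand-ins for real processing steps:

```python
# A toy "software pipeline": each stage runs in its own thread and
# passes work downstream through a queue, so the stages execute
# concurrently on different items at the same time.
import queue
import threading

SENTINEL = object()  # signals end-of-stream to a stage

def stage(func, inbox, outbox):
    """Run func over items from inbox, forwarding results to outbox."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)  # propagate shutdown downstream
            return
        outbox.put(func(item))

def run_pipeline(items, funcs):
    """Chain the given stage functions with queues and collect results."""
    queues = [queue.Queue() for _ in range(len(funcs) + 1)]
    threads = [threading.Thread(target=stage,
                                args=(f, queues[i], queues[i + 1]))
               for i, f in enumerate(funcs)]
    for t in threads:
        t.start()
    for item in items:
        queues[0].put(item)
    queues[0].put(SENTINEL)
    results = []
    while True:
        out = queues[-1].get()
        if out is SENTINEL:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

# Two-stage pipeline: transform, then enrich.
print(run_pipeline([1, 2, 3], [lambda x: x * 10, lambda x: x + 1]))
# [11, 21, 31]
```

Because each stage owns exactly one thread, ordering is preserved while stages overlap in time, which is the essence of pipeline-style concurrency.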
As an aside, the challenges of programming for multi-core chips and how to make that easier for the developer were a key theme in Bill Gates's keynote address at the recent Microsoft MVP Summit.
As I have noted before, performance engineering to me is something that should be considered in an end-to-end manner. Currently, in the web service world, there are folks who are tackling this problem using hardware (e.g. XML Security Gateways) and binary encoding approaches, but IMHO, not a lot of work is being done to provide best practices for optimizing the design of the services themselves.
An exception to the rule, and an excellent source of information on performance engineering that I always point to, are the first three chapters in the PAG Perf & Scale book. So, for your reading pleasure, let me point to them once more:
Just to be clear, it does not matter if you are in the .NET camp or the Java Camp or any of the other language/platform camps, the information above is equally applicable and relevant.
Saturday, March 31, 2007
One of the first things that is brought up when one talks of web services interoperability is the Web Services Interoperability Organization (WS-I) and conformance to the WS-I basic profile, and how that ensures interoperability (Allrighty, I am deliberately choosing not to talk about how WS-I punted on XML Schema profiling and how you can build web services that are WS-I basic profile compliant but NOT interoperable).
Many folks have the impression that the WS-I is a standards organization. It is important to note that it is not; it is a coalition of vendors. There is a rather interesting blog post by Erik Johnson, the former chair of the WS-I XML Schema Planning Working Group, that sheds some light on some of the internal processes at this organization.
As someone who is actively involved in the standards work that is going on at OASIS, it is always fascinating for me to get insight into how other organizations work with specifications and standards in the SOA and Web Services space.
Friday, March 23, 2007
Patterns are your friends. Patterns keep you from having to reinvent the wheel and allow you to leverage best practices. Patterns provide a common vocabulary that can be used to share information between folks who often come from different backgrounds. I like patterns!
I was one of the external reviewers for the PAG book on Web Service Security Patterns, so using a pattern based approach is something that I am very much following as part of the design and deployment of a SOA runtime infrastructure.
Yesterday, a colleague and I were discussing one of the design decisions we made in configuring our environment to enable access for external applications and services to web services within our private network. The enjoyable part of the conversation for me was in using a pattern as a common mechanism of communication to discuss the rationale for the decision, given that our backgrounds are a bit different (He comes from the Network/Comms background and I from the AppDev side).
In particular, the pattern that we used in this instance is the Perimeter Service Router Pattern. Here is a bit of detail on the pattern (follow the link for complete info):
Context
External applications require access to one or more Web services that are deployed within a private network. Access to the Web services and resources in the private network is restricted to authenticated users. External applications should not have access to resources used by the Web services in the private network.
Problem
How do you make Web services in a private network available to external applications without exposing resources in the private network?
Forces
Any of the following conditions justifies using the solution described in this pattern:
- Internal Web services and dependent resources may be targeted by attackers who are external to the network. The organization must protect Web services on the internal network, so that any attacks do not affect the internal Web services or dependent resources.
- Attackers can gain information about the internal network, and use it to compromise the network. The organization must not reveal information about the internal network infrastructure that can be useful to attackers.
The following condition is an additional reason to use the solution:
- External clients need reliable access to fixed service endpoints. The location of a Web service's internal implementation may need to change dynamically to cater for the availability of dependent resources, or to cater for maintenance and batch processing windows. External clients should be unaffected by these changes.
Solution
Design a Web service intermediary that acts as a perimeter service router. The perimeter service router provides an external interface on the perimeter network for internal Web services. It accepts messages from external applications and routes them to the appropriate Web service on the private network.
The realization of this pattern for us was NOT in software but in hardware. We used a XML Security Gateway as the realization of the Perimeter Service Router pattern.
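To make the routing idea concrete, here is a minimal sketch (purely illustrative; the paths and internal URL are made up) of the intermediary's core behavior: external clients see only stable perimeter endpoints, while the router alone knows the internal topology.

```python
# Minimal sketch of the Perimeter Service Router pattern: an intermediary
# that exposes stable external endpoints and forwards each request to the
# current internal service location. All endpoint names are hypothetical.

class PerimeterServiceRouter:
    def __init__(self):
        # External path -> internal service URL; only the router knows
        # the internal topology, so it is never revealed to callers.
        self._routes = {}

    def register(self, external_path, internal_url):
        self._routes[external_path] = internal_url

    def route(self, external_path, message):
        internal_url = self._routes.get(external_path)
        if internal_url is None:
            raise LookupError(f"no internal service mapped to {external_path}")
        # In a real deployment this step would authenticate the caller and
        # forward the SOAP message over the private network.
        return {"forwarded_to": internal_url, "payload": message}

router = PerimeterServiceRouter()
router.register("/services/orders", "http://10.0.0.5:8080/OrderService")
result = router.route("/services/orders", "<soap:Envelope>...</soap:Envelope>")
print(result["forwarded_to"])  # internal location is resolved per request
```

Because the internal mapping can be changed without touching the external path, internal services can move (for maintenance windows, failover, etc.) without external clients noticing, which is exactly the third "force" listed above.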
Tuesday, March 6, 2007
I was recently interviewed, along with Todd Biske and some other folks, by Phil Windley for an article on SOA and Governance which will be published in the March 5th print issue of InfoWorld. A lot of my thinking in this area has been influenced by a combination of my background in operational IT, my current work in the SOA space, as well as my participation in the standards process as a member of the OASIS SOA-RM TC.
Do read the "Teaming up for SOA" article and let me know what you think.
Sunday, February 25, 2007
I recently had the opportunity to look at some of the details of
Web Services for Remote Portlets (WSRP), which is a web services protocol for aggregating content and interactive web applications from remote sources. As an aside, if you are interested in a quick tutorial, I would recommend the OASIS WSRP TC's Web Services for Remote Portlets 1.0 Primer.
The interesting thing with this standard is that it is built on top of a few fundamental standards such as XML, SOAP and WSDL. But with WSRP, every single web service has the same set of operations (See graphic).
This is very similar to the REST architectural constraint of uniform interfaces, which means that all resources present the same interface to clients. As noted in Steve's excellent REST Article:
"A significant advantage of the uniform interface constraint lies in the area of scalability. For a client to correctly interact with a SOA service, it must understand the specifics of both that service’s interface contract and data contract. But for a client to invoke a REST service, it must understand only that service’s specific data contract: the interface contract is uniform for all services."
To apply this to WSRP, both the interface contract and the data contract are uniform for all WSRP services, and as such the consumer of the WSRP service is a generic construct. For example, pretty much all of the major portal implementations supply, at a minimum, a WSRP consumer portlet that can bring in a remote WSRP service without any coding.
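A toy sketch of why that matters, loosely modeled on WSRP's getServiceDescription/getMarkup operations (the producer classes and markup below are invented stand-ins, not the real protocol): because every producer exposes the same operations, one generic consumer can aggregate any of them without producer-specific code.

```python
# Sketch of the uniform-interface point: both producers expose the same
# operations, so the aggregating consumer never depends on a concrete
# producer type. The producers and their markup are made up.

class WeatherProducer:
    def getServiceDescription(self):
        return {"portlets": ["weather"]}
    def getMarkup(self, portlet):
        return f"<div>Forecast for today ({portlet})</div>"

class NewsProducer:
    def getServiceDescription(self):
        return {"portlets": ["headlines"]}
    def getMarkup(self, portlet):
        return f"<div>Top stories ({portlet})</div>"

def aggregate(producers):
    # The consumer codes only against the uniform operations -- the
    # interface contract is identical for every producer.
    page = []
    for producer in producers:
        for portlet in producer.getServiceDescription()["portlets"]:
            page.append(producer.getMarkup(portlet))
    return "\n".join(page)

print(aggregate([WeatherProducer(), NewsProducer()]))
```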
While I understand that REST is about more than uniform interfaces, I wonder what the REST folks would have to say about WSRP.
Wednesday, February 21, 2007
Sunday, February 18, 2007
I was reading through the research article "Adaptive QoS for Mobile Web Services through Cross-Layer Communication" in the current issue of IEEE Computer Magazine, in which the authors propose the WS-QoS framework, an approach to unifying Quality of Service (QoS) for web services across the transport, computing and application layers. It is an interesting read.
Per the article, the discovery, selection and invocation process consists, at a high level, of:
- Service Provider registers with a UDDI based registry. Each service has a unique interface key.
- Potential Service Consumer queries an "offer broker" (this is a new entity in the mix) for services that match a specific interface key AND QoS requirements.
- The offer broker acts as the middle man in identifying the "best match" between the QoS requirements of the Service Consumer and potential Service Providers who are registered in the UDDI based registry.
- The Service Consumer directly invokes the identified best match Service Provider.
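A rough sketch of the broker's matching step is below. The matching rule (lowest delay among offers that satisfy every stated requirement) is my own simplification for illustration; the paper's actual algorithm may differ, and the registry entries are invented.

```python
# Hedged sketch of the "offer broker" step: given a service interface key
# and a consumer's QoS requirements, pick the best-matching registered
# provider. Registry contents and the tie-break rule are illustrative only.

registry = [
    {"interface_key": "quote-v1", "endpoint": "http://a.example/quote",
     "qos": {"delay": 5, "jitter": 3}},
    {"interface_key": "quote-v1", "endpoint": "http://b.example/quote",
     "qos": {"delay": 2, "jitter": 4}},
]

def best_match(interface_key, requirements):
    # Keep only offers for the right interface whose advertised QoS meets
    # every requirement (lower numbers are better for delay/jitter here).
    candidates = [
        offer for offer in registry
        if offer["interface_key"] == interface_key
        and all(offer["qos"].get(k, float("inf")) <= v
                for k, v in requirements.items())
    ]
    # Tie-break on lowest delay; the consumer then invokes this endpoint directly.
    return min(candidates, key=lambda o: o["qos"]["delay"], default=None)

offer = best_match("quote-v1", {"delay": 5})
print(offer["endpoint"])  # -> http://b.example/quote
```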
The manner in which QoS is codified is based on a specific XML Schema.
For example, for the Transport Segment you could have something like:
<operationQoSInfo name="myOperation">
  ...
  <transportQoSPriorities>
    <delay>5</delay>
    <jitter>3</jitter>
    ...
  </transportQoSPriorities>
  ...
</operationQoSInfo>
For Servers it could be:
<serverQoSMetrics>
  ...
  <requestsPerSecond>30</requestsPerSecond>
  ...
</serverQoSMetrics>
And at the App Layer you could have something that codifies facets like compression and decompression and other items.
As a thought exercise, given that the point of using web services is all about interoperability, I went through what would need to happen, in terms of standards and vendor support, to make all this real.
- Given the amount of work going on around WS-Policy, wrap up the QoS information as a domain policy language for QoS under the WS-Policy umbrella
- The direct integration of the "offer broker" functionality into the Registry/Repository
- Built in support from the networking vendors that can map the codified policies into the appropriate technology specific network mechanism such as priorities for expedited forwarding, assured forwarding, best-effort etc.
- Built in support from the server OS vendors that can map the codified policies into server performance levels. And given that a lot of folks are using virtualization in their computing tier, support from those folks as well.
- Last but not least, agreement and profiling of the specification(s) between all of the web service stack vendors.
I am sure that I have grossly over-simplified a lot of things in the above and probably gotten some of it completely wrong. But the essence remains. Beyond this being a technically challenging problem, there needs to be a significant amount of agreement between a lot of vendors, as well as the incorporation of a variety of these technologies into the various product sets (Vendor Politics, Oh My!).
It is going to be a while! <sigh>
Friday, February 16, 2007
via Dims:
"For a while now, we've been building up a Stack comparison page at the Apache WS Wiki site:
http://wiki.apache.org/ws/StackComparison
Yes, You can edit it and update existing information on that page. Just click on the link at the top right corner and create an account for yourself ......"
Not a complete listing by any means (In particular it is currently missing the WCF/.NET 1.1/2.0 as well as the stacks from BEA and IBM), but a good starting point, especially if you are interested in the Open Source Web Service stacks.
Tuesday, February 13, 2007
Both Paul Fremantle in the comments on my previous entry and Sanjiva Weerawarana in his blog entry confirm that the option of moving away from serialization to handling the raw XML messaging is something that was designed into the Axis2 core, but not something that the majority of developers seemed to be comfortable with. Very much appreciate the information. Would love to see some tutorials around this (if they are not there already) on either the WSO2 or Axis2 sites.
Sanjiva also noted the need to define a benchmark for testing that takes into account a lot more of the factors that I noted in my previous entry and offered to host it as an open source project.
So to start, what is needed would be some sort of a "real" application against which the tests could be run. Hmm... I'll throw one out for consideration. Have you thought about running your tests against the WS-I sample application?
According to the WS-I web site:
"The Sample Application presents a high-level, interoperable example of a supply chain management application in the form of a set of Use Cases that demonstrate use of Web services that conform to the Basic Profile 1.0."
Currently the Sample Application has been implemented by BEA Systems, Bowstreet, Corillian, IBM, Microsoft, Nokia, Novell, Oracle, Quovadx, SAP and Sun Microsystems (On a variety of web services stacks to be sure). Source code is available for download on the WS-I site.
Please keep in mind that I am throwing this out after about 5 minutes of thinking and have not really explored any of the details such as possible licensing restrictions by WS-I etc. Something to consider....
Sunday, February 11, 2007
A reference to the SOA Practitioners' guide, which is hosted on BEA's Dev2Dev site, came across on one of the lists that I am on. According to the web site:
SOA is relatively new, so companies seeking to implement it cannot tap into a wealth of practical expertise. Without a common language and industry vocabulary based on shared experience, SOA may end up adding more custom logic and increased complexity to IT infrastructure, instead of delivering on its promise of intra and inter-enterprise services reuse and process interoperability. To help develop a shared language and collective body of knowledge about SOA, a group of SOA practitioners created this SOA Practitioners' Guide series of documents. In it, these SOA experts describe and document best practices and key learnings relating to SOA, to help other companies address the challenges of SOA. The SOA Practitioners' Guide is envisioned as a multi-part collection of publications that can act as a standard reference encyclopedia for all SOA stakeholders.
The guide is available in three parts:
- SOA Practitioners Guide Part 1—Why Services-Oriented Architecture? This guide provides a high-level summary of SOA.
- SOA Practitioners Guide Part 2—This guide covers the SOA Reference Architecture, which provides a worked design of an enterprise-wide SOA implementation with detailed architecture diagrams, component descriptions, detailed requirements, design patterns, opinions about standards, patterns on regulation compliance, standards templates and potential code assets from members.
- SOA Practitioners Guide Part 3—This guide introduces the Services Lifecycle and provides a detailed process for services management through the service lifecycle, from inception through to retirement or repurposing of the services. It also contains an appendix that includes organization and governance best practices, templates, comments on key SOA standards, and recommended links for more information.
I've not had a chance to go through these documents in any great detail, but I do note that two of the reviewers for the documents are Brenda Michelson of Elemental Links, Inc. and Steve Jones of Capgemini Group who are both smart, competent people in the SOA space who spent some time on this effort. Looks like I'll have to dedicate some time to read these documents.
Ben Moreland, the Director for Foundation Services at The Hartford, has a great article on SOA Governance up on eBizQ. The Hartford is an organization in the financial sector that is at the forefront of SOA adoption and implementation on the commercial side of the house.
The keys to their success have been their strong Enterprise Architecture and Governance programs. Case in point: in a presentation that Ben gave recently, he noted that some time ago (2-3 yrs?) The Hartford sequestered both their Senior Business Executives and Enterprise Architects for an extended period of time (I believe it was around 4 months!) to hammer out a strategic plan for how they were going to employ technology to drive business value. Their approach to SOA is based on that strategic plan and is a clear indicator of how serious these folks are about executing on that all too often mythical "Business/IT alignment" everyone talks about!
From the Article:
"Some people use SOA governance to mean service lifecycle governance—that is, governing the lifecycle of services from creation through deployment. Others take it to mean applying runtime policies to services. But is there more to SOA governance than this? Shouldn’t governance with SOA ultimately be about delivering on your business and SOA objectives? Finally, without a common understanding of what governance means, are organizations that adopt SOA simply setting themselves up for failure?"
The article identifies the Key Leverage Points of SOA Governance as People, Financial, Portfolio, Operations, Architecture, Technology and Projects, and as noted in the article:
The key thing to understand is that you can only achieve the change necessary for SOA success by putting policies and processes in place around all of the key leverage points denoted above—people, application portfolio, services portfolio, projects, services, enterprise architecture, enterprise technology platforms, and operations. If you put these policies in place—that is, if you govern your SOA journey wisely— you will be able to deliver on your SOA strategy and business objectives.
All in all, an excellent article from a practitioner and not a talking head.
Check it out!
There is currently a war of words going on regarding the performance of some of the Java web service stacks including Axis2, XFire and JAX-WS 2.1 FCS.
Instinctively, I think that this type of testing is asking the wrong questions and I am trying to articulate why that is so.
To start with, these tests seem to completely sidestep the design considerations that are associated with the development of any serious enterprise-class web service. Those design considerations [Microsoft PAG: Improving Web Services Performance] include:
- Design chunky interfaces to reduce round trips.
- Prefer message-based programming over RPC style.
- Use literal message encoding for parameter formatting.
- Prefer primitive types for Web services parameters.
- Avoid maintaining server state between calls.
- Consider input validation for costly Web methods.
- Consider your approach to caching.
- Consider approaches for bulk data transfer and attachments.
- Avoid calling local Web services.
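To illustrate the first guideline, here is a small sketch contrasting a chatty design with a chunky, message-based one. The service and field names are invented, and a simple counter stands in for network round trips.

```python
# Sketch of "chunky" vs "chatty" interfaces: one message-based call that
# carries all needed data replaces several fine-grained calls, cutting
# round trips. remote_call is a stand-in for an actual network hop.

calls = {"count": 0}

def remote_call(payload):
    calls["count"] += 1  # each invocation represents one round trip
    return {"ok": True, "echo": payload}

# Chatty: one round trip per field.
def update_customer_chatty(cid, name, email, phone):
    remote_call({"id": cid, "name": name})
    remote_call({"id": cid, "email": email})
    remote_call({"id": cid, "phone": phone})

# Chunky: a single message-based call carrying the whole update.
def update_customer_chunky(cid, name, email, phone):
    remote_call({"id": cid, "name": name, "email": email, "phone": phone})

update_customer_chatty(1, "Ada", "ada@example.org", "555-0100")
chatty = calls["count"]
calls["count"] = 0
update_customer_chunky(1, "Ada", "ada@example.org", "555-0100")
print(chatty, calls["count"])  # -> 3 1
```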
Secondly, this type of benchmarking tends to focus people on the immediacy and synchronous nature of web services rather than designing the system for asynchronous operation. In a real production system, all too often the chunk of time that is taken up by the processing associated with the business logic that the web service is fronting may be a significant factor in the performance overhead associated with the web service. And as you go through your perf optimization, it may very well make sense to optimize that piece e.g. database calls (as that gets you the biggest bang for the buck) as you do your end to end performance engineering.
Lastly, this all seems to be about the relative performance of the various data binding frameworks that are out there (ADB, JiBX, XMLBeans etc.), which in turn brings up all the nasty interoperability issues related to serialization/de-serialization that stem from the impedance mismatch between XML Schema and the language of your choice (Java, C#, ...). This is something that I have had a great deal of interest in, especially in trying to find workarounds to ensure interoperability. But more and more, I am becoming frustrated by this particular aspect of web services and am moving more towards avoiding serialization entirely and processing a message directly. This would also allow me to utilize some of the more powerful capabilities like XSLT/XPath etc.
Of course, this also moves me away from the web services mainstream and the "ease of use" argument that can be made due to the tooling support for XML to Object Mappings by the various vendors. One of the things on my list of near term to-do's is to explore how hard/easy it would be to go down this path using some of the modern web service stacks such as WCF and Axis2. I really think that in the long term, it would be much more beneficial to me to go down this path and will more than likely also help out as I try to come up to speed on REST (Another one of my to-do's).
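As a small illustration of that message-centric style (the order document below is invented for the example), you can pull just the values you need out of the raw XML with path-style queries, without generating any binding classes at all.

```python
# Working with the raw XML message instead of binding it to language
# objects: query the document for the fields of interest and leave the
# rest untouched. The message shape here is made up.
import xml.etree.ElementTree as ET

message = """
<order xmlns="urn:example:orders">
  <item sku="A-100"><qty>2</qty></item>
  <item sku="B-200"><qty>1</qty></item>
</order>
"""

ns = {"o": "urn:example:orders"}
root = ET.fromstring(message)
# No generated stubs, no schema-to-class mapping: just query the document.
total = sum(int(item.find("o:qty", ns).text)
            for item in root.findall("o:item", ns))
print(total)  # -> 3
```

The same approach sidesteps the schema-to-object impedance mismatch entirely, since unexpected or extended content in the message simply gets ignored rather than breaking a deserializer.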
Sunday, January 28, 2007
The latest edition of Thomas Erl's [Editor] SOA Magazine is out.
Service Elicitation: Defining the Conceptual Service
Fundamental to any SOA delivery project is the definition of services. More specifically, the ability to define what constitutes a service and how logic should be partitioned and represented across a collection of services. The ambitious goal of SOA to achieve unity between business and technology domains further makes service definition a critical step along a typical SOA roadmap. This is the second article in a series dedicated to exploring the functional side of SOA. It explores several ways to properly describe a service in a stage called "service elicitation," essentially the process of extracting services from business knowledge...
SOA and EDA: Using Events to Bridge Decoupled Service Boundaries
The distinction between service-oriented architecture (SOA) and event-driven architecture (EDA) can be traced down to message patterns. Understanding the implications of common exchange patterns, such as request-and-reply and publish-and-subscribe, helps determine both fundamental differences and commonality in these two complementary architectural models. It is appropriate and desirable to use the acronyms SOA and EDA to make this distinction, because both of these architectural styles are positioned in the same domain; SOA focusing on the decomposition of business functions and EDA focusing on business events. This article explores the differences between these two models and specifically studies how EDA patterns can be used to connect decoupled service domains...
SOA and the Emergence of Business Technology: How Business Services are Changing the IT Landscape
Globalization is having a tremendous impact on IT. Fueled by technological change and innovation IT is becoming more capable than ever of establishing itself as a true partner to business, a trend that is creating the opportunity for a new breed of IT professional: one that is both technology and business savvy. In this article we discuss the genesis of this accelerating wave of change, how it has been responsible for and relates to the service-oriented architectural model, and how it is contributing to a new field we can call "business technology"...
One of the questions that is often asked by certain folks is about the relationship between Enterprise Architecture and Service Oriented Architecture. Some folks believe that SOA is the new version of EA; others, that the disciplines are distinct. My personal belief is that they are mutually supporting disciplines, and the level of maturity that an organization has achieved in one will directly impact its ability to implement the other.
Given this the following quote, from Anne Thomas Manes of the Burton Group, really resonated with me:
"SOA also applies at the enterprise architecture level -- helping the [Enterprise Architects] optimize the application portfolio and data architecture. Nearly every large organization has way too many applications that implement the same capabilities and way too many data structures that represent the same information. The cost of ownership of managing and maintaining a bloated application and database portfolio keeps fixed annual costs very high, reduces the available funds for new projects, and severely limits the flexibility and agility of the organization. From the [Enterprise Architecture] perspective, the goal is to dramatically reduce duplication of application functionality and data structures by implementing shared capabilities as services and designing standard data structures for interfacing with those services. [Enterprise Architects] should be defining priorities for SOA projects.
When it comes time to design a specific application, the goal is to analyze the required capabilities of the application, identify capabilities that have already been implemented, and identify capabilities that other systems might need. These shared capabilities should be implemented as services -- not re-implemented in every application that needs them. Also any volatile capability should be implemented as a service to increase separation of concern and to enable easier management."
Figured I'd put it on the blog as I'm sure that I will be reusing this particular explanation in the future and it will be easier to point to it here with attribution given to Anne.
Sunday, January 7, 2007
I was reading Todd's entry on "Services for Managing the Network" in which he comments on an article by the F5 folks which talks about a unified way to manage both services and network components through web service interfaces.
This is something that I've been thinking about for a while as well. To me, a SOA runtime infrastructure should allow us to monitor, manage and administer across network, computing and service resources using standardized policies. In this ideal world, the appropriate domain experts (Security, Networking, QoS, SLA etc.) define the policies for their domain in a centralized manner, push those policies out to distributed appliances and service platforms across the Enterprise where they can be enforced, and collect metrics on what is going on in the environment.
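A rough sketch of that model follows; all class and policy names are illustrative, not drawn from any product or standard. A central administration point pushes domain policies out to distributed enforcement points and gathers metrics back from them.

```python
# Illustrative sketch of centralized policy administration with distributed
# enforcement: domain experts define policies once, the policies are pushed
# to every enforcement point, and metrics flow back for monitoring.

class EnforcementPoint:
    def __init__(self, name):
        self.name = name
        self.policies = {}   # domain -> rule (a predicate over the request)
        self.metrics = []

    def enforce(self, request):
        allowed = all(rule(request) for rule in self.policies.values())
        self.metrics.append({"request": request, "allowed": allowed})
        return allowed

class PolicyAdministrationPoint:
    def __init__(self, points):
        self.points = points

    def push(self, domain, rule):
        # e.g. the security team pushes one policy to every appliance
        for point in self.points:
            point.policies[domain] = rule

    def collect_metrics(self):
        return [m for p in self.points for m in p.metrics]

gateway = EnforcementPoint("xml-gateway")
pap = PolicyAdministrationPoint([gateway])
pap.push("security", lambda req: req.get("authenticated", False))

print(gateway.enforce({"authenticated": True}))   # -> True
print(gateway.enforce({"authenticated": False}))  # -> False
print(len(pap.collect_metrics()))                 # -> 2
```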
In some ways the greater challenge is not technical, but cultural. It lies in trying to provide a common frame of reference and understanding to folks who come from different backgrounds (NetOps/Transport folks, DataCenter/Computing Infrastructure folks, Service folks) on the impact of deploying a SOA runtime infrastructure. It is a challenge that I face on a regular basis and one that requires the most fundamental of skills - communication and the ability to see the other person's point of view.
On the technical side of the house, the challenge is wrapped up in the phrase "standardized policies". To reach this stage requires two separate things to happen:
- The ratification of standards that address these various aspects of management
- Adoption of these standards by various vendors at the Network, Computing and Service layers
At the current stage of technology, some of what I am discussing above is possible by using a combination of WSM, Mediation Systems, Registry/Repository, Platforms and Network Devices. But unfortunately, given that a lot of the standards are not finalized, it requires one to use specific vendor products (where vendors have established interop relationships) that are using proprietary mechanisms in the absence of established standards.
So what are some of the standards that we should be tracking and urging our vendors to support in the policy and management space?
- Policy - WS-Policy to start with. But keep in mind that WS-Policy is simply a container and still requires the creation of multiple domain specific languages that will address areas such as SLAs and QoS etc.
- Provisioning - Adoption of Service Provisioning Markup Language (SPML) v 2.0. Keep in mind that this deals purely with user provisioning and not with service provisioning. Current service provisioning is, to a great extent, a manual process.
- Management & Reporting - The convergence of WSDM and WS-Management. Note that this has a dependency on the convergence of WS-Eventing (WS-Management needs this) and WS-Notification (WSDM needs this) into WS-EventNotification.
As you can see above, there is a lot of work that still needs to be done in this space and significant competition among the various vendor factions regarding what these standards should be. As you are building out your infrastructure, I would highly recommend that you question your vendors on their support for existing standards, their tracking and participation in the standards process, and their roadmap for support of future standards, so that in the end you have the ability to monitor, manage and administer your environment in a holistic manner.
Thursday, January 4, 2007
This is a great introductory article on REST by Steve Vinoski of IONA titled "REST Eye for the SOA Guy" (PDF) that has been published in the current issue of IEEE Internet Computing. For more in-depth info, check out the RESTwiki .
P.S. My kids were kind enough, in the recent season of sharing, to share a rather nasty cold with me. Towards the end of the day yesterday, my co-workers were making the "warding-off-evil" signs in my direction, so I am taking the day off today. Now that the medication has kicked in, I am using the temporary relief as an excuse to catch up on some technical reading which, unfortunately, I can tolerate only for short bursts. <sigh>
Wednesday, January 3, 2007
The latest edition of Thomas Erl's [Editor] SOA Magazine is out.
Implications of SOA on Business Strategy and Organizational Design
The need to somehow change the way we do business as a prerequisite to unlocking the transformative potential (and resulting competitive advantage) inherent in technological innovation is becoming increasingly recognized. The scope of discussion this time around however moves beyond organizational efficiencies to whole of market efficiencies, and the strategic implications this has in terms of planning and organizational design. Many business leaders have grown progressively indignant towards the over-sold and under-delivered powers of technology to affect their bottom line - these, the same people who are responsible for setting strategic direction, business planning, and capital investment. This is the first in a series of articles targeting the business community. It explores the implications of SOA on strategic planning and organizational design - from a business perspective.
Commercializing Services: Web Services Distribution Channels and SOA
Exposing web services to the outside world is much more complex than creating and maintaining services geared towards internal consumption. While internally focused projects have their technical challenges, outwardly focused web services initiatives bring to the fore a whole host of non-IT related issues such as business strategy and marketing. Those who proceed with such projects with the same mindset that made their internal projects successful run a significant risk of failing. Web services initiatives aimed at serving the needs of non-captive customers and partners are akin in effort to that of creating a new business channel and not merely a systems integration project. In order to be successful in these efforts, you must clearly understand your organization's objectives, your customer's needs and the Web Services Distribution Ecosystem...
AJAX: Bringing SOA to the Front Lines
A service-oriented architecture (SOA) can provide enterprises with significant benefits, including the ability to reuse application functionality and to interconnect heterogeneous applications to create new composite ones. However, a critical component to the realization of SOA benefits is that users throughout the extended enterprise can efficiently access and interact with key resources. Otherwise you cannot fully leverage your infrastructure investment. Using AJAX rich internet applications (RIAs) as the presentation tier, however, can significantly enhance the impact of SOA. This article explains how companies can link their employees, customers and partners, with a scalable, flexible interface to efficiently interact with service-oriented resources.
Always an interesting read.
Tuesday, January 2, 2007
I am a member of the OASIS SOA Reference Architecture Subcommittee which is part of the SOA-RM (Reference Model) Technical Committee. We had a F2F meeting before the holidays and one of the items that came up during our discussion was the need to engage the wider community to make sure that the work we are doing is relevant and applicable to implementers, and to solicit feedback for incorporation into this ongoing work. So I asked our chair if I could blog about this work and he said sure (Thanks Frank!), provided that I mention that this is a work in progress.
So, this is a work in progress
On a serious note, comments/corrections/additions/pointers/hints/smoke signals are very welcome and I or any other member of the TC can act as your conduit and make sure that it is presented to the TC at large. Please feel free to leave comments on this blog entry or contact me directly. Needless to say, if your organization is part of OASIS, we are a friendly bunch of folks doing some interesting and complex work, and would very much welcome your direct participation!
On to the topic at hand. A particular interest of mine in the SOA-RA is the area of governance and we had a discussion on this topic that I wanted to share.
SOA & GOVERNANCE
The starting point of the discussion was the definition of SOA as defined in the SOA-RM which states that "Service Oriented Architecture (SOA) is a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains."
But when we speak of traditional IT governance, it usually means governance applied within the Enterprise; within a single ownership domain if you will. But in the case of a SOA implementation it needs to be applied across ownership domains, across Enterprises. And that requires a different set of carrots and sticks, perhaps something much more contractual in nature rather than something direct. And that in turn brings to light the fact that what one organization considers governance will be completely different from what another organization considers governance.
At this point, I proposed a definition of governance that is consistent with the above and has resonated very well with me. Requiring no original thought on my part, I quoted Anne Thomas Manes of the Burton Group, who has said “Governance refers to the processes that an enterprise puts in place to ensure that things are done right, where "right" means in accordance with best practices, architectural principles, government regulations, laws, and other determining factors. SOA governance refers to the processes used to govern adoption and implementation of SOA.” With the exception of the adoption bit, the committee members agreed that this was a good working definition. This also tied in very nicely with an earlier comment by a colleague, Ken Laskey of MITRE, that "Governance for SOA [...] is likely to parallel governance for traditional commerce", and that "There will be a range of governance depending on the perceived needs of the participants."
One of the items on my to-do list is to research the governance practices of large enterprises, especially ones in which the business units have a great deal of autonomy, to distill some lessons on what works and what does not work. At this point in time, I personally have not seen examples of SOA implementations that span Enterprises. Or rather Enterprises that are equivalent in authority/power/influence. Any examples you can share would be very appreciated.
As we progressed along this path, one of the items that became much clearer is that governance by its very nature implies the authority to govern. That authority can be formal or informal and could be codified in an explicit manner or implied. But in all cases, there is the concept of authority. Given this, implementing SOA governance requires:
- Formulation of policies that are appropriate to the domain
- The ability to enforce the policies
- The ability to obtain metrics on what is working and what is not
- Implementing feedback [and adjudication] processes that can adjust the existing policies as needed
<aniltj - personal comments>
Speaking for myself, and not for the committee at large, one of the items that we need to keep in mind regarding governance is that it should not just be the big hammer. It should also be the mechanism for providing motivators for moving to and doing the right things in a SOA. Not just the de-motivators. And the reality as regards SOA governance is that it should be an extension of your existing IT governance, where you add the SOA-specific bits. I think the challenge here will be figuring out where that amorphous line is. It does not make sense for the SOA RA to document IT governance components, but there is definitely overlap and mutual support. Just as with EA and SOA.
Above all, I think we need to realize that when we speak of formulating SOA policies, we are dealing with people and behavior and culture and not just technology. Which means it is messy and imprecise. As the old saying goes, "Technology is easy. People? That's hard!"
</aniltj - personal comments>
Again, a work in progress. Input and comments are solicited and welcome.
UPDATE: 1/3/07 - incorporation of off-line comments.
UPDATE: 1/4/07 - I just noticed that OASIS also has a public SOA-RM Comment Listserv, which folks can use to provide feedback as well. Please use whichever mechanism works for you.
Wednesday, December 27, 2006
As we move beyond the infancy of SOA, there is general consensus that it is not just about the technology but about using technology to solve critical problems that are facing businesses/agencies/organizations.
But as ever, we operate in a non-benign environment, and the realization of the Architecture requires one to consider the myriad of threats that can be brought to bear on a SOA implementation.
I am trying to graphically represent some of the threats that can be brought to bear against the exchange of messages in a SOA, e.g., a SOA implemented using web services.
There are two that I explicitly did not put on the graphic, and those are:
- Unauthorized Service Consumers
- Rogue Service Producers
Not because they are not important, but simply because I'm still trying to figure out a way to represent them on this graphic in a clean manner.
This is only the starting point for a discussion of security threats in a SOA, and there has been some work done to date on various security design patterns that can be used to mitigate these threats.
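To make one of those mitigations concrete, here is a hypothetical Python sketch (not from the post) of protecting message exchanges against tampering by attaching a keyed hash to each message, loosely analogous to what WS-Security signatures provide for SOAP messages. The shared key and message format are invented for illustration.

```python
# Toy sketch of message integrity protection: a keyed HMAC detects
# tampering in transit, assuming the key was exchanged out-of-band.
import hashlib
import hmac

SHARED_KEY = b"demo-shared-secret"  # hypothetical out-of-band shared key

def sign_message(body: str) -> tuple[str, str]:
    """Return the message body plus a hex HMAC-SHA256 tag over it."""
    tag = hmac.new(SHARED_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return body, tag

def verify_message(body: str, tag: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    expected = hmac.new(SHARED_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

body, tag = sign_message("<order><item>widget</item></order>")
assert verify_message(body, tag)                  # untampered message passes
assert not verify_message(body + "<!-- -->", tag) # altered message is rejected
```

Real WS-Security goes much further (XML canonicalization, certificates, timestamps to counter replay), but the basic detect-the-alteration idea is the same.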
This is definitely an area that I am going to be exploring in much greater detail.
Saturday, December 23, 2006
Joe McKendrick has a list of Ten companies where SOA made a difference in 2006. These are companies that moved beyond the pilot stage into live deployments and are seeing results. They include:
- eBay
- IBM
- Wachovia Bank
- Harley Davidson
- Hewlett Packard
- Ameriprise Financial
- Amazon
- Citigroup
- OnStar
- Dreamworks Animation SKG
A good list to point to when asked about examples of successful SOA implementations.
Update (12/26/06): Eight more companies (International Truck, MedicAlert, Experian, Washington Group, Siemens AG, The Hartford, FBI, Monster).
Tuesday, December 12, 2006
The SOA Reference Model (SOA-RM) was approved as an OASIS Standard on October 23, 2006. Currently we are working on the SOA Reference Architecture (SOA-RA), and today was the first of three days of Face 2 Face Meetings for the RA work. Long and interesting day with a group of smart people.
I have traditionally had to be concrete and implementation focused (Make the rubber meet the road and not the sky!), so one of the challenges that I have as part of the process of working on the SOA-RA is in trying to distill my experience and lessons into something that can contribute to a body of work that exists at a higher level of abstraction, and applies to a wide range of implementations.
Wednesday, December 6, 2006
I had a chance to attend a Special Technical Session at the OMG Technical Meeting held in Arlington, VA today on Emerging Standards for SOA. Enjoyed the talk on linking Web Service Specification Languages and Semantic Technologies by a friend of mine, Chris Bashioum of MITRE, as well as a most excellent briefing by Toufik Boulez of Layer 7 Technologies on WS-Policy. There were also very informative presentations on WS-Security and the WS-I Security Profile, as well as SCA, SDO, and various other topics.
As always it was also an opportunity to renew old acquaintances and make new ones. Surprisingly, a much more informative and enjoyable day than I expected!
Friday, November 24, 2006
Last Call Working Draft Review for Basic XML Schema Patterns for Databinding. Here is some more information on this W3C Working Group:
"The W3C XML Schema Patterns for Databinding Working Group, part of the W3C Web Services Activity, has released two working drafts for review. The mission of this Working Group is to define a set of XML Schema patterns that will be efficiently implementable by the broad community who use XML databindings. Patterns which may prove useful to model include abstractions of structures common across a wide variety of programming environments, such as hash tables, vectors, and collections.
There are several ways of representing such abstracted data structures and Web Services toolkits are currently using ad hoc technologies to infer the most suitable language mapping when processing XML Schemas. Agreeing on a set of XML Schema patterns for which databinding optimizations can be made will facilitate the ability of Web services and other toolkits to expose a more comprehensible data model to the developer.
The WG has published a First Public Working Draft for "Advanced XML Schema Patterns for Databinding Version 1.0." This document defines an advanced set of example XML Schema 1.0 constructs and types in the form of concrete XPath 2.0 expressions. These patterns are known to be in widespread use and considered to be compatible with databinding implementations. Implementers of databinding tools may find these patterns useful to represent simple and common place data structures. Ensuring tools recognize at least these simple XML Schema 1.0 patterns and present them in terms most appropriate to the specific language, database or environment will provide an improved user experience when using databinding tools.
The WG has also issued a Last Call Working Draft for the "Basic XML Schema Patterns for Databinding Version 1.0" specification. A databinding tool generates a mapping between XML 1.0 documents which conform to an XML Schema 1.0 schema and an internal data representation. For example, a Web services databinding tool may use XML Schema 1.0 descriptions inside a WSDL 2.0 or WSDL 1.1 document to produce and consume XML and SOAP messages in terms of data structures in a programming language or data held inside a database."
Given that the impedance mismatch between XML Schema and Language Types is one of the major causes of Interoperability problems in web services toolkits, this work and these documents are definitely worth checking out.
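As a small, hypothetical illustration of that impedance mismatch (not from the W3C documents): even a simple `xs:dateTime` value has to be mapped by hand into a language-native type, and the normalization step below is exactly the kind of detail on which databinding toolkits diverge. The document and element names are invented.

```python
# Toy example: binding an xs:dateTime lexical value to a Python datetime.
# xs:dateTime permits a trailing 'Z' for UTC, which datetime.fromisoformat
# (before Python 3.11) does not accept, so a binding layer must normalize
# the lexical form before the mapping works at all.
from datetime import datetime, timezone
import xml.etree.ElementTree as ET

doc = "<order><placed>2006-11-24T09:30:00Z</placed></order>"
placed_text = ET.fromstring(doc).findtext("placed")

placed = datetime.fromisoformat(placed_text.replace("Z", "+00:00"))
assert placed == datetime(2006, 11, 24, 9, 30, tzinfo=timezone.utc)
```

Multiply this by unions, nillable elements, unbounded decimals, and the like, and it becomes clear why agreed schema patterns that toolkits are known to handle well are worth having.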
Thursday, November 16, 2006
LOL! From Pete Lacey at the Burton Group:
SG: [....] From here on in we pass around coarse-grained messages—you like that term, coarse-grained?. Messages that conform to an XML Schema. We call the new style Document/Literal and the old style RPC/Encoded.
Dev: XML Schema?
SG: Oh, it’s all the rage. Next big thing. Take a look.
Dev: (Reads XML Schema spec). Saints preserve us! Alexander the Great couldn’t unravel that.
SG: Don’t worry about it. Your tools will create the schema for you. Really, its all about the tooling.
Dev: How are the tools gonna do that?
SG: Well, they will reflect on your code (if possible) and autogenerate a compliant schema.
Dev: Reflect on my code? I thought it was all about documents, not serialized objects.
SG: Didn’t you hear me? It’s all about the tools. Anyway, we can’t expect you to write XML Schema and WSDL by hand. Besides, its just plumbing. You don’t need to see it.
Dev: Whoa, back up. What was that word? Wizzdle?
SG: Oh, haven’t I mentioned WSDL? W-S-D-L. Web Services Description Language. It’s how you specify the data types, parameter lists, operation names, transport bindings, and the endpoint URI, so that client developers can access your service. Check it out.
Dev: (Reads WSDL spec). I trust that the guys who wrote this have been shot. It’s not even internally consistent. And what’s with all this HTTP GET bindings. I thought GET was undefined.
SG: Don’t worry about that. Nobody uses that. Anyway, your tools will generate a WSDL, and in the WSDL will be the schema.
Dev: But shouldn’t it be the other way ‘round? Shouldn’t I design the contract first and then generate the code?
SG: Well, yeah, I guess that sounds right in principle. But that’s not so easy to do, and very few SOAP stacks support WSDL-first development. Just let the tools worry about it.
This is so darn funny, especially when you consider that it is so true! Go read the entire thing, please!
Tuesday, November 14, 2006
From the announcement:
Apache Axis2 is a complete re-design and re-write of the widely used Apache Axis engine and is a more efficient, more scalable, more modular and more XML-oriented Web services framework. It is carefully designed to support the easy addition of plug-in "modules" that extend its functionality for features such as security and reliability.
Major Changes Since 1.0:
- Significantly improved documentation
- Significantly improved support for POJO services and clients
- Significantly improved support for Spring services
- Significantly improved Axis Data Binding (ADB) to increase schema coverage and overall stability
- Improved service lifecycle model
- Improved JMS support
- Improved handler and module interfaces
- Improved Eclipse and Idea plugins
- New Attachments API for sending & receiving MTOM and SwA attachments
- Built in support for WS-Policy via Apache Neethi
- Added support for unwrapping Web service requests
- Fixed tons of small and not-so-small bugs
- Major refactoring of release structure to make usage easy
Known Issues and Limitations in 1.1 Release:
- Unwrapping of response messages (coming in 1.2)
- JSR 181/183 Annotation support (coming in 1.2)
- JaxMe and JAXBRI data binding support is experimental
I have updated my "Install and configure Apache Tomcat/Axis for web service development on Windows XP SP2" post for Axis2 1.1.
Monday, October 30, 2006
Per the soapUI web site, this free tool ..."includes integrated support for the WS-I organizations Basic Profile validation tools for 2 situations:
- Validating WSDL definitions - from the Interface Menu with the "Check WSI Compliance" option. This will run the WS-I Test Tools and validate the WSDL definition accordingly.
- Validating SOAP request/response messages - from within the Request Editors response popup with the "Check WS-I Compliance" option
In either case, you first need to download either the java or C# version (soapui will use whichever is available) of the WS-I Interoperability Testing Tools 1.1 from the WS-I deliverables page."
Found via Mark's blog.
Sunday, October 29, 2006
Jon Udell of InfoWorld has a podcast with John Schneider, the CTO of AgileDelta, who is "...evangelizing Efficient XML, an alternate binary syntax for XML.". In the podcast they ".. discuss the motivations for this proposed W3C standard, its theoretical foundations, and its uses." It is an enjoyable podcast and John is an articulate proponent of the need for an alternate binary representation of XML.
The intent with this approach is to use a binary encoding instead of the XML Infoset encoding as the transfer mechanism (which is meant to reduce the size of the message on the wire among other benefits). The Efficient XML Interchange Working Group at the W3C has been tasked with gathering use cases and approaches to binary encodings. As of March 15, 2006 the following proposals have been submitted to this working group:
As a technologist, I have to admit that this is an approach that offers a technically elegant solution that addresses the content bloat that can be associated with XML and the need to transfer data over a limited bandwidth.
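To make the content-bloat argument concrete, here is a hypothetical Python sketch (my own toy example, nothing like a real EXI encoder) comparing the same record serialized as XML text versus a naive fixed-layout binary encoding. Real binary XML formats are far more sophisticated; this only illustrates why the size reduction is attractive.

```python
# Toy size comparison: one sensor-style record as XML text vs. a naive
# fixed-layout binary encoding (network byte order: int32 + two float64s).
import struct

record = {"id": 42, "lat": 38.8895, "lon": -77.0353}

xml_form = (
    "<reading><id>%d</id><lat>%f</lat><lon>%f</lon></reading>"
    % (record["id"], record["lat"], record["lon"])
)
binary_form = struct.pack("!idd", record["id"], record["lat"], record["lon"])

assert len(binary_form) == 20          # 4 + 8 + 8 bytes
assert len(binary_form) < len(xml_form)
print(len(xml_form), len(binary_form))  # binary is a fraction of the XML size
```

Of course, the binary form above is only intelligible to a peer that knows the exact layout, which is precisely the interoperability concern discussed next.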
But I do have concerns…
To me interoperability is achieved through a combination of open standards AND the adoption and implementation of those standards in vendor tooling such that by following accepted and adopted practices you have a means for exchanging information across platforms, languages and toolsets in a seamless manner. I do not believe that Binary XML meets this criterion, in the current timeframe, as:
- It is not an accepted standard for content encoding in the web services world (noted by its lack of adoption in vendor toolkits from IBM, Microsoft, BEA and a host of others).
- The binary encoding support is not in the technology roadmap of any vendor (other than Sun perhaps for Fast Infoset) and
- In the current world, in order for a binary encoding based exchange to happen, it requires a custom encoder/decoder on both ends of the conversation, i.e. there is no out-of-the-box support for it in the current web services stacks.
The way these approaches deal with Interoperability is to use the binary encoding, instead of the regular XML Infoset encoding, when it detects that the other endpoint supports it, which of course requires remote endpoints to support that particular binary encoder/decoder to get the benefits. Otherwise you are just down-selecting, from a performance or size perspective, and using regular XML web services. Some binary XML folks define this ability to dynamically switch between binary encoding and XML encoding as being interoperable. I do not, as in my view you do NOT get both interoperability AND performance when you choose this approach. You get one or the other.
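That dynamic switch can be sketched in a few lines of hypothetical Python (the codec names are invented stand-ins, not real negotiation protocol identifiers): the sender uses a binary codec only when the peer advertises support for the same one, and otherwise falls back to plain XML.

```python
# Toy sketch of encoding negotiation: binary only when both ends share a
# codec, interoperable XML otherwise. You get performance OR universal
# interoperability on any given exchange, not both.
def choose_encoding(our_codecs: set[str], peer_codecs: set[str]) -> str:
    """Pick a shared binary codec if one exists, else fall back to XML."""
    shared = our_codecs & peer_codecs & {"efficient-xml", "fast-infoset"}
    return sorted(shared)[0] if shared else "xml-1.0"

# Peer supports the same binary codec: smaller messages, but pairwise only.
assert choose_encoding({"efficient-xml"}, {"efficient-xml"}) == "efficient-xml"
# Peer is a stock toolkit: interoperable, but no size benefit.
assert choose_encoding({"efficient-xml"}, {"xml-1.0"}) == "xml-1.0"
```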
The only way you would get BOTH interoperability AND performance is if one or more of these binary encoding schemes actually became a publicly accepted web service standard and was adopted by the platform vendors like IBM, Microsoft, BEA, and Sun into their web service implementations. We are not even close on that one, as of yet.
There is an even more critical issue that comes into play when choosing to use binary encodings. A SOA implemented using web services is not just about point to point web services. It is about web services that are passing messages via intermediaries that may perform various actions on those web service messages (e.g. Content Mediation that allows a message format to be transformed from one format to another, Security Mediation which allows enforcement of security policies etc.) The caveat in this case is that these Intermediaries need to understand the data format in order to act and process these messages. They do so in the current state of technology because these Intermediaries (ESBs, XML Gateways, and Orchestration Engines etc.) are built to publicly supported standards. Binary XML is currently NOT standardized, which means that these types of technologies have no visibility into and cannot act on and enforce policies on the messages that are using binary encodings!
Does this mean that this will always be the case? No! But if and until there is a standard around this, and that standard is widely supported by the vendors who build the tools that allow us to build services, this will at best be a proprietary point-to-point solution to a very specific problem that requires you to step away from standards that promote interoperability.
In all fairness, driving towards a standard is precisely what folks like John Schneider are working towards at the W3C and he notes in the podcast that we are, optimistically, more than 2 years away from any type of standard. Then, of course, we have to wait for the standard to be implemented by the various vendors.
That said, I do look forward to the time when there is indeed a binary XML standard that is baked into the all the web service stacks because the limited bandwidth use case that the solution is targeting is a very real one.
UPDATE (April 2007): Per the W3C EXI Status Page, as of mid November 2006, the group has selected Efficient XML to be the basis for the proposed encoding specification. Present work centers around integrating some features from the other measured format technologies into Efficient XML, particularly variations for both more efficient structural and value encodings. Additionally, the first public working draft of the specification of this format is expected in early May 2007.
Saturday, October 28, 2006
About a month ago, I had an opportunity to attend a 2 day SOA workshop that was held by Thomas Erl at the University of British Columbia, Vancouver. Thomas, for those who do not know, is the best-selling author of "Service-Oriented Architecture: Concepts, Technology and Design".
The mix of people in the class was interesting in that you had implementers, you had technology vendors, and you had business executives who by their very involvement and questions brought unique perspectives to the class. This in turn reinforced the fact that SOA, to be successful, is about the business and not about technology. Thomas did a nice balancing act in addressing the various perspectives, which is not a trivial job!
The enjoyable thing for me was his balanced perspective that combined independent and vendor-neutral research on SOA with practical consulting experience. With the amount of hype that surrounds SOA these days, that is a much needed and important perspective.
The negatives? Two days is simply not enough to cover this topic in a great deal of depth (From what I understand he gives longer workshops as well). So when it got to some of the more interesting aspects, he had to punt, which was frustrating in some places. The other is more personal. I have been to some of the most beautiful cities in North America (San Francisco, Vancouver to name two) to attend conferences. The depressing thing is that the only beauty I got to see was looking out the window or the inside of a hotel! I have to plan these things better! 
I wanted to thank everyone who showed up for my presentation last Thursday. I was pleasantly surprised to see Yasser Shohoud who used to be a local and wrote one of the first and best books on web services back in the day. Given that he is a pretty smart guy, he got snapped up by Microsoft to work on the Indigo/WCF product team and then eventually moved out to work as a Technology Architect at the Microsoft Technology Center in Austin, Texas. Good to see you again, Yasser!
I had a great time and really enjoyed the interaction during and after the presentation. It was interesting for me to give a presentation that was stressing an architecture/business focused, standards based, vendor-neutral approach to SOA to an audience that was composed of MS technology developers and architects as well as Microsoft employees at a Microsoft Technology Center facility! The credit for establishing a forum that allows such diverse views definitely goes to Geoff Snowman, a local MS Technology Architect who is very involved in this space.
We actually had a pretty healthy discussion about the role of ESBs in a SOA Environment. I recently came across this "ESB Market Report" that seems relevant to that conversation, so I thought I'd point to it. I resonate most closely with the comments made by Anne Thomas Manes in that article:
ESBs are just one type of SOA intermediary; other types are Web services management products, XML gateways, pureplay mediators like SOA Software and Apache Synapse, and platforms, according to Anne Thomas Manes, vice president and research director at Midvale, Utah-based Burton Group Inc. She believes ESBs are best suited for complex aggregation and transformation of data, legacy access and orchestration. "I don't typically recommend using an ESB for mediation, they're more in the platform category," she said. "I typically recommend an XML gateway or Web services management product. Both have stronger security mediation."
My one point of ambivalence with Anne's comments is with the usage of ESBs for orchestration functionality. Orchestration to me has always been something that happens from a single perspective: a service or an application orchestrates a series of actions. As such, orchestration is to me a technical implementation of a Task-Based Service and is not something that should be in the Infrastructure. I have not been convinced that Orchestration Engines in the "Cloud" are the right way to go.
To the person who had questions about WS-Notification and WS-Eventing, let me point you to:
There are some politics being played in this (Surprise!). Check out this entry from Joe McKendrick to see the details.
As to the gentleman who brought up the SOA == Web Services question, I would like to provide a couple of resources that demonstrate the application of SOA principles but do NOT use Web Services.
SOA is an Architectural Style, Web Services are a standards based middleware platform for building Services. One does not equal the other.
Saturday, October 21, 2006
I am scheduled to do a presentation about "SOA, Interoperability and Web Services" to the Capital Area Microsoft Integration and Connected Systems User Group (MICSUG) on Thursday, 10/26/06.
I was originally going to focus on just the current status of the various WS-* specifications. But given my interests, I've decided to broaden the scope a tad bit beyond simple web services integration by placing web services and the associated standards and specifications within the broader context of SOA. So my intention is to start out with SOA design principles, move onto what you need to be aware of in a web service implementation of a SOA (WS-* standards and specifications come into play here) and finish up with pitfalls you have to watch out for and some guidance around emerging best practices.
Tuesday, October 17, 2006
Sunday, October 8, 2006
When thinking about applying the Principle of Loose Coupling to a SOA, folks naturally gravitate towards and understand the separation of Interface from Implementation and the attendant corollary of information hiding, such that the Interface is not a direct reflection of the implementation.
Another equally important aspect of Loose Coupling is the separation of Infrastructure functionality from the Business Logic. There are always going to be cross-cutting concerns that apply across service implementations in a SOA, such as Authentication, Auditing and Logging, Transformations, Crypto and more. Now, each service could implement these aspects on its own, but in an Enterprise setting where one needs consistency of and visibility into multiple service implementations, it is important to abstract these aspects into the infrastructure and not leave it up to each service to decide how and when it will implement this functionality.
If you put the right type of SOA management infrastructure in place, you have the ability to centrally manage the creation and versioning of policies and to push out these policies to distributed Policy Enforcement Points. This type of infrastructure is critically important to the Run-Time Governance that is needed in a SOA as it provides the ability to administer, monitor and control your environment in a policy driven manner. Of course, Governance goes beyond just these run-time aspects, and in building a service eco-system you want the right folks to be in charge of the right pieces e.g. You want the Security folks to be in charge of defining security policies and verifying compliance with those policies. The abstraction of infrastructure functionality that can be commonly used and leveraged gives a mechanism for doing just that.
The reality of any Enterprise is that there are going to be both "Rock-Stars" and "Lounge-Singers" who are going to be building Business services, and as such they should be spending their time focusing on business functionality and not on core infrastructure capabilities that are generic and need to be implemented in a consistent manner.
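A hypothetical sketch (my invention, not a real SOA product) of what factoring out a cross-cutting concern looks like in miniature: the authentication check and audit logging live in one shared wrapper, and the business function carries none of it. In a real SOA this lives in shared infrastructure such as a Policy Enforcement Point rather than in code each service copies; the decorator just makes the separation concrete.

```python
# Toy sketch: auth + audit as a reusable cross-cutting wrapper, kept
# entirely out of the business logic itself.
import functools

AUDIT_LOG = []  # stand-in for centralized audit infrastructure

def enforce_policy(func):
    @functools.wraps(func)
    def wrapper(caller, *args, **kwargs):
        if caller not in {"alice", "bob"}:  # stand-in authentication check
            AUDIT_LOG.append(("DENY", caller, func.__name__))
            raise PermissionError(caller)
        AUDIT_LOG.append(("ALLOW", caller, func.__name__))
        return func(caller, *args, **kwargs)
    return wrapper

@enforce_policy
def get_balance(caller, account_id):
    """Pure business logic: no auth or logging code in sight."""
    return {"account": account_id, "balance": 100}

assert get_balance("alice", "acct-1")["balance"] == 100
assert AUDIT_LOG[-1] == ("ALLOW", "alice", "get_balance")
```

The point of pushing this into infrastructure, rather than a per-service decorator, is that the policy (who is allowed, what gets logged) can then be versioned and changed centrally without touching any service code.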
Tuesday, September 5, 2006
I had the pleasure of giving a presentation on “SOA, Interoperability and Web Services” to the Central Maryland Association of .NET Professionals today. Very well organized event and a good crowd. Lots of familiar faces as well as new ones. Thanks to everyone who came out. Just a quick note that links to web services interoperability resources that I mentioned during my talk can be found on the Resources section of the home page of this blog.
On a separate note, it would appear that my old friend Matt Fisher, who is the leader of the OWASP DC-Maryland Chapter and in attendance, is in no danger whatsoever of running out of work!
There was a section of the meeting where one of the sponsors of the meeting, a local placement agency, was asked what the hot areas of the local job market are. In particular he was asked if there was a demand for developers with application security skills in the MD-DC-VA area. The unfortunate and very disappointing answer was a resounding No! (Note to self: Find and subscribe to Matt’s Blog).
Saturday, September 2, 2006
There was an article a couple of days ago at SearchWebServices.com titled “SOA with J2EE and .NET: Possible, but not easy” that I took a bit of exception to given that I have a great deal of interest in web services interoperability across platforms and technologies. I responded in the article comments but am reiterating that response here as well:
“If this article had come out 5 years ago, I would have less of an issue with this. I would say that interoperability is not only possible, but that there are accepted best practices for how to accomplish this. Most vendors now have a pretty good interop story to tell BUT it is NOT the default choice in their Tooling! Heck, I can build non-interoperable services with both sides being Java/J2EE.
The reality is that if you follow the right practices, you can build interoperable services in any web service toolkit, but if you follow the vendor's path of least resistance, i.e. let the vendor's tools do the thinking for you, you will be going down a path that leads to lock-in on that vendor's way of doing things. And the right practices start with a Top-Down/Contract-First style of development, WS-I compliance (which does not address schema support in toolkit issues), and an understanding of some of the mine fields in the area of XML schema design for web services.
Here are some pointers that may help:
There is still a lot of work that needs to be done in the web services Interop arena, especially in the area of the advanced WS-* specifications. But, at the same time, let us not make the problem harder than it needs to be either!
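The Contract-First mindset mentioned above can be sketched in a few hypothetical Python lines (the element names are invented, and a real contract would be a full XML Schema, not a list): the message contract is agreed before any code, and every incoming document is checked against it rather than against whatever shape an object serializer happens to emit.

```python
# Toy contract-first sketch: the contract (here reduced to required
# elements, standing in for a real XML Schema) comes first; incoming
# documents are validated against it before any processing.
import xml.etree.ElementTree as ET

ORDER_CONTRACT = ["customer", "item", "quantity"]  # agreed first, code second

def validate_order(xml_text: str) -> list[str]:
    """Return the contract elements missing from the document."""
    root = ET.fromstring(xml_text)
    return [name for name in ORDER_CONTRACT if root.find(name) is None]

good = ("<order><customer>acme</customer><item>widget</item>"
        "<quantity>3</quantity></order>")
bad = "<order><customer>acme</customer></order>"

assert validate_order(good) == []
assert validate_order(bad) == ["item", "quantity"]
```

Because both sides code to the contract rather than to each other's toolkits, either end can change its implementation language or stack without breaking the exchange, which is the interoperability point being made here.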
Friday, September 1, 2006
I had noted some time back how IBM donating its implementation of WSDM to Apache did not have much impact beyond good PR given the confusion that existed around the competing WSDM and WS-Management specifications and how any vendors that were working to support either of those specs in their tooling were holding off until the two specs were reconciled.
That reconciliation process is now moving forward and the first milestone out of that work has now been published. Here is a portion of the executive summary from that document:
On March 15, 2006, HP, Intel, IBM and Microsoft announced the intention to reconcile the WSDM and WS-Man specifications into a single standard for management of system resources using Web services. As the work reaches certain milestones, portions of it will be shared with the Web service community to solicit feedback. This document summarizes the first of these milestones: the publication of a first draft of Web Services Resource Transfer (WS-RT) specification, the publication of the Service Modeling Language specification and an update to the WS-Metadata Exchange (WS-MEX) specification.
The WSDM/WS-Man Reconciliation – Overview and Guidance document “… highlights the work to date on the management reconciliation roadmap and presents its status and resolutions in several ways. With a diverse set of parties interested in the progress of this work, the Migration document presents the status of the reconciliation work in a variety of levels of technical detail – ranging from a very high-level overview all the way down to a developer’s guide for code migration”.
Thursday, August 31, 2006
“The SOA Magazine” is a new bi-monthly online publication, edited by Thomas Erl, “… dedicated to publishing specialized SOA articles, case studies, and papers by industry experts. The common criteria for contributions is that each explore a distinct aspect of service-oriented computing”.
The first issue is currently online and has the following three articles:
A Survey of the Technical Landscape
by Cyrille Thilloy
This article is the first in a series exploring the various technologies and options available to an enterprise when establishing a SOA roadmap. It introduces the notion of a SOA infrastructure in the enterprise and enriches it with the models and frameworks currently available to SOA implementers. By describing the typical SOA components and frameworks, it defines the baseline for the enterprise SOA. Subsequent articles will further explore the methodology for SOA governance based on the formal description of the service purposes...
SOA Infrastructure: Mediation and Orchestration
by Satadru Roy
A services development platform such as a Web services runtime stack or an application server may no longer be enough to support the complex infrastructure requirements of a services ecosystem. This is the first of a four-part series in which we'll examine the most common SOA infrastructure requirements, their various degrees of complexity and how organizations can take an incremental approach towards SOA infrastructure software adoption. We begin by covering two categories of SOA infrastructure requirements, mediation and orchestration...
An SOA Practices Checklist for Building Implementation Roadmaps
by Nitin Gandhi
It's been well documented how service-oriented computing can help organizations achieve strategic benefits that can ultimately result in streamlined IT environments and increased profit margins. However, to successfully carry out the process of incorporating SOA into an organizational environment requires that various IT groups be coordinated in an effort to adopt SOA in a standardized manner. This is where accepted practices become useful. This article provides a master list of common practices...
Looks to be a good read!
Wednesday, July 12, 2006
I participated yesterday in a SOA Forum presentation by Steve Graham [IBM] on their approach to SOA Governance. IBM’s approach to this is pretty holistic, and they fully believe in addressing SOA Governance as an extension to IT Governance. As I have noted before, I am in full agreement with this and do think that an organization that already has an established governance process in place will find adding SOA Governance an incremental process.
The interesting item for me was the discussion that took place in the Q&A session after the presentation. The premise of that discussion was around the following:
- Organization does not have existing governance policies and processes
- Organization is embarking on a SOA initiative
- Organization is implementing SOA governance as part of the initiative
- Organization is looking to use the SOA initiative to retro-fit governance back into IT
- Many Organizations are doing this
I am a bit unsure about this approach. In a lot of ways, I think the companies that follow this path focus on the technologies that go into implementing and managing the governance processes and believe that the acceptance and use of said technologies equals implementing governance. The problem, as I see it, revolves around the fact that governance really is NOT about technology, it is about culture. As such, governance requires a change in mind-set and buy-in and support within the organizational culture, either because they have seen the light or because they want to stay out of jail. The easy part in this is the technology. The hard part is the Policies that need to be defined and the processes that need to be put into place with the appropriate carrots and sticks. Y’know, the people part!
Instinctively, I find it hard to accept that an organization that could not implement an IT governance process is suddenly going to buy into the need for governance simply because they are embarking on a SOA initiative. On the other hand, can the value of governance be demonstrated to the organization as a whole by doing the right thing within the SOA initiative? Which in turn leads to changes in the larger organizational culture? I would be very curious to get any feedback from folks who have or are going down this path.
[Now playing: White Flag - Life For Rent]
Friday, June 23, 2006
In continuing the conversation, Brenda asks:
How about that "elusive" governance? How do you sell Governance to IT and Business Leadership? Where do you get the money for people, process and tools? How do you convince people "Governance is good for you" rather than "Governance is a roadblock"?
Governance is fast becoming one of my favorite topics because it is so critical to the success of a SOA and I am dealing with various aspects of it on a daily basis. This is SO NOT a technology issue (like much of SOA) but one of culture.
How do you convince people "Governance is good for you" rather than "Governance is a roadblock"?
I'm going to paraphrase one of my favorite folks in this space, Anne Thomas Manes of the Burton Group, who I have heard answer this particular question more than once: "By making sure that the path that implements Governance is the path of least resistance!". What she means of course is to make it easy for folks to do the right thing by making sure that doing the right thing is the least onerous course of action.
You could do this by perhaps reducing the hoops one has to jump through if you are following the right process or by providing rewards and incentives (what those would be depends on the culture of your organization) for doing the right thing. I think that it is also important that the folks who are going to be "governed" have a say in what the governance policies should be either by having an opportunity to provide revisions/feedback to the processes or by being part of the team that is coming up with the governance processes. Having a say in how things are going to be implemented gives people a sense of shared ownership in the process.
How do you sell Governance to IT and Business Leadership? Where do you get the money for people, process and tools?
I know that in my environment, facets like Security, Interoperability, Traceability and Metrics are critically important and as such those are things that I would be hammering on to get buy-in. In general, I do not know if I would actually sell "Governance" as a concept because it is such a loaded term. My inclination would be to sell the benefits of Governance which at a high level is the concrete articulation of what your organization considers the right thing/best practices. The ability to apply these best practices in a consistent manner across your SOA initiative is a powerful incentive. On the other hand it could very well be about the "Stay out of Jail" card that is needed to make sure that you are doing the right thing by certain industry specific compliance requirements. Again, a lot depends on the culture of your organization.
UPDATE 6/24/06: Todd Biske has some thoughts to offer on this topic (and more) as well.
Wednesday, June 21, 2006
In Brenda's comments in the ongoing discussion, Mark Griffin follows up with… "Is it possible to focus on the business but forget that at the end of the day you still have to deliver what you promised? This would be the loosely coupled, reusable and agile services that make up that business process."
I think it is indeed possible to get absorbed in the mapping of business processes to the detriment of everything else. I personally think that one of the core benefits of a SOA implementation is re-usability, which in turn enables agility. As such, a lot of how I approach this is colored by that viewpoint. Given this, my approach to business process is not as an end in itself but as a way to identify reusable aspects of a business so that they can be factored out into services.
Approaching SOA from both the Top-Down/Business Process perspective as well as the Bottom-Up/Service Factoring perspective allows for the identification of the re-usable aspects in a business process and to realize that re-usability using the service implementation technology of your choice. In short, this helps you build the right type of services that are reusable across the Enterprise. I would add that in order to provide the standards based loose coupling in the current state of technology, I would be utilizing web services to implement the SOA.
Tuesday, June 20, 2006
Todd has a pointer to Brenda Michelson who is facilitating an online discussion around “..how to prepare your IT organization for service orientation”.
I concur with Todd’s comments and add that the one thing that I noted with MarkG’s question was the emphasis on “… prepare your IT organization..”. The issue that I often see with IT or a Technology Group driving a SOA implementation is that there is more than likely going to be an emphasis on technology implementation rather than the improvement of business processes. As a technologist it pains me to admit this, but SOA is NOT about technology. It is, among other things, about identifying the capabilities that are offered by the business unit, having a clear understanding of the processes that make up those capabilities, and making those processes available for reuse via standardized interfaces to the enterprise at large. In short, unless you fit the identification, design and development of services within the larger context of what the business is all about, it is very possible to end up with a bunch of services and not a SOA.
Where IT driving the bus makes sense is in the design and development of Infrastructure Services. What I mean by that is that in any implementation of a SOA there are going to be cross-cutting concerns that should be centrally managed and abstracted away from the business. Implementation of a security infrastructure is a good example of this type of functionality. These types of implementations, which should have very low coupling with business processes, are something that makes sense for an IT/Technology Organization to drive.
As to the governance that was mentioned, it does not necessarily follow that an organization that has a deep expertise in AppDev translates to an organization that has a good governance process in place. I do think that it makes sense that if the culture already has a good governance process in place as part of its Enterprise Architecture, adding the bits that enable governance for SOA is an incremental addition and not a big culture shift. And this addition very much improves the chances of success of the SOA implementation, given that lack of governance has been cited as one of the prime reasons for the failure of SOA implementations.
Sunday, June 18, 2006
I saw the recent announcement that IBM had donated its implementation of WSDM (Web Services Distributed Management) to Apache and was also assigning dedicated development resources to the effort.
The two competing web services management specifications in the area are WSDM, which is backed by BEA, HP, IBM, Oracle and others, and WS-Management which is backed by AMD, BMC, Dell, Intel, Microsoft, Sun and others.
But back in early 2005, in a moment of shared clarity, the two competing vendor groups decided to merge the two specifications, which is excellent news for the user community. The reality underscoring that decision, though, is that any vendor who was working on incorporating either WSDM or WS-Management into their tooling/product stopped working on it. In short, the WSDM and WS-Management specifications as they existed are effectively dead! Everyone is waiting for the merged specs to come out so that they can implement them, which is probably still 18-24 months out.
So what does it mean when IBM touts contributing WSDM to Apache? Nothing more than PR. Move along now, nothing to see here...
Burton Group put on its annual Catalyst conference last week and they did a fabulous job. This was my first Catalyst, and I spent most of my time in the Application Platform Strategies (APS) content area, which is the home for SOA, with occasional forays into the Security and Risk Management, Identity and Privacy Strategies and Collaboration and Content Strategies areas.
Anne Thomas Manes, who is the Research Director for the APS area, put together a stellar lineup of folks that talked to the successes, challenges and the current thinking on SOA. I liked the way that presentations were clearly broken up into Burton Group POVs, End User Case Studies and Vendor POVs because you got to hear about problems, solutions and the way forward from different perspectives. I was gratified to note that even in the Vendor POV presentations, there was more of a focus on problem/solutions and less on marketing. Kudos to Anne for making this happen.
On a personal note, it was great to finally meet Anne in person. Even though we move in similar virtual circles and have communicated electronically before, this was the first time that we met in person. She is as knowledgeable in person, on an amazingly wide variety of topics, as she is virtually.
The key points that were hammered home again and again by folks who are really in the trenches of SOA implementations are that it is NOT about technology but about the business/mission, and that governance plays a critical role in the success of a SOA implementation. One of the most valuable takeaways for me from the conference was the two hours that I spent on Friday in a SOA Governance BOF moderated by Anne, which included both Burton folks as well as end users. I came away with a lot of great information and insight that is directly applicable to my environment.
There were just too many good presentations from the Burton folks to list, but one particular item that caught my attention, and that I will have to follow up on, was Chris Howard's point about the importance of modelling, especially at the business/mission level, in the SDLC. The fact that Chris participated in the SOA Governance BOF and that I gave him a ride back to the airport to catch our respective red-eye flights only reinforced this.
Finally got a chance to meet Todd Biske from A.G. Edwards Technology Group in person. He gave an excellent presentation on their SOA infrastructure and governance implementation. Todd is another one of those people who moves in the same virtual circles as I do and it was great to finally meet, hang out and share war stories. There were many other memorable presentations including ones from Rob Vietmeyer from DISA, Benjamin Moreland from The Hartford, Jeff Barr from Amazon, Gregor Hohpe from Google and Barry Briggs from Microsoft. On a side note, Barry's "The Process-centric Organization" brief was very good and in some ways surprising. Surprising, at least to me, in that it was at a level of abstraction and maturity that you rarely get from Microsoft given its inclination to cater to the Developer and not the Enterprise Architect.
All in all, an excellent use of my time. My one regret was that this was my first time in San Francisco and I really did not get to see any of it. With sessions sometimes starting at 7:30 a.m. and going on till 6 p.m., followed by networking sessions even later, and a 3-hour time zone difference, the days were just too full to do any sightseeing. I'll have to plan better next time.
Saturday, May 27, 2006
Sanjiva has announced the release of “Tungsten” which is the first product from WSO2 and brings together an integrated stack of Apache web services technologies which include:
- Apache Axis2 (SOAP)
- Apache Axiom (High performance XML)
- Apache Rampart/Apache WSS4J (WS Security)
- Apache Sandesha2 (WS Reliable Messaging)
- WS-Addressing
- Apache Neethi (WS Policy)
- Apache XML Schema
- Apache Derby (Database)
- Hibernate (Persistence)
- Jetty (HTTP server)
- Apache Tomcat
Of course, the product is completely open source and is licensed under the Apache Software License. It currently has support for the following WS-* specs:
- SOAP 1.1/1.2
- WSDL 1.1
- MTOM, XOP & SOAP with Attachments
- WS-Addressing
- WS-Security
- WS-SecurityPolicy
- WS-ReliableMessaging
- WS-Policy
- WS-Policy Attachment
I do have to check this out!
Tuesday, May 23, 2006
I had the opportunity to attend the SOA for E-Government Conference which is being held in Virginia.
The keynote speaker for today was Ron Schmelzer, the author of “Service Orient or Be Doomed! How Service Orientation Will Change Your Business” (Excellent Book, BTW!). I’ve interacted with Ron in various virtual SOA communities before, so he actually recognized my name when I stood up to ask a question, which was flattering. And he must have thought my question was semi-intelligent, since I got the book as a gift for asking a good/relevant question. I already have a copy, but since I do enjoy his work, I made sure that I got his autograph on the book.
Two other people that I ran into and ended up having a conversation around Standards and Profiling were Greg Lomow, co-author of “Understanding SOA with Web Services” (another book that I like), and Andrew Townley, who is currently the Principal Architect for the SOA backbone of the Irish Government’s e-gov initiative, both of whom are currently with BearingPoint.
Friday, May 12, 2006
A working group at the W3C that is looking at addressing the impedance mismatch that occurs when trying to map XML Schema to language implementation is the “XML Schema Patterns for Databinding Working Group”. They have published a working draft [1] that should interest folks who are interested in web services Interop.
Abstract
This specification provides a set of simple XML Schema 1.0 patterns which may be used to describe XML representations of commonly used data structures. The data structures described are intended to be independent of any particular programming language or database or modelling environment.
[1] http://www.w3.org/TR/xmlschema-patterns/
[Now playing: Ankhiyan Na Maar - Ek Khiladi Ek Haseena]
Wednesday, April 19, 2006
Tom Glover (IBM), currently serving as the president & chairman of the Board for WS-I, has some news on his blog about what the WS-I will be pursuing in the future. In particular, check out the following blog posts:
To paraphrase a recent e-mail, what this in effect means is that “… effectively, WS-I has agreed to produce profiles that cover the scope of the IBM/Ford/Daimler RAMP profile. The RAMP profile composes on the WS-I Basic Profile 1.1 and WS-I Basic Security Profile 1.0 and adds the following specifications: (1) WS-Addressing (2) WS-Reliable Messaging and (3) WS-Secure Conversation. The main difference in the proposed work in WS-I is that WS-Addressing will instead be included in an amended version of the WS-I Basic Profile 1.1 (called Basic Profile 1.2).
Additionally, to address interoperability of attachments support, support for MTOM/XOP in a SOAP 1.1 context will be considered. Once the rechartered Basic Profile WG completes its work on the Basic Profile 1.2, it would then begin work on a Basic Profile 2.0 that is based on SOAP 1.2 and MTOM/XOP.”
This is good news indeed as I expect to be leveraging a lot of this in my environment going forward.
[Now playing: Kaho Na Kaho - Murder]
Sunday, March 26, 2006
It would appear that IBM and others are proposing that WS-I take up working on a new Profile for Reliable Asynchronous Messaging. The scope of the Profile is:
Take a look at the RAMP section of IBM developerWorks site for more information.
[Now playing: Do Pal - Veer-Zaara]
Sunday, February 5, 2006
Software
JDK & Tomcat Application Server
Apache Axis (NOT Recommended for new Web Service Development)
Apache Axis2
Eclipse IDE
Installation
- Install the JDK - De-select Demos and Source Code; Choose defaults for everything else
- Add the following system environment variables
- JAVA_HOME=C:\Program Files\Java\jdk1.5.0_09 (NOTE: Substitute the appropriate path here)
- Add “%JAVA_HOME%\bin” to the PATH
- Verify "javac" from the command prompt works!
- Reboot
- Install Apache Tomcat into "C:\DevTools\Tomcat5.5"
- Use "Full" Install for a DEV Environment
- Choose Port 80 as the HTTP Connector Port (Verify that you do not have another web server running on Port 80)
- Choose a userid/password for the Administrator Login
- Choose the previously installed JDK (NOT the JRE) when asked a path to a Java JVM
- If you expect to provide the web services to remote machines, configure the windows firewall to let in ports 80/443
- Reboot
Axis (NOT Recommended for new Web Service Development)
- Install Axis into "C:\DevTools\axis-1_4"
- Add the following system environment variables
- AXIS_HOME=C:\DevTools\axis-1_4
- AXIS_LIB=%AXIS_HOME%\lib
- AXISCLASSPATH=%AXIS_LIB%\axis.jar;%AXIS_LIB%\commons-discovery-0.2.jar;
%AXIS_LIB%\commons-logging-1.0.4.jar;%AXIS_LIB%\jaxrpc.jar;%AXIS_LIB%\saaj.jar;
%AXIS_LIB%\log4j-1.2.8.jar;
- Copy "xercesImpl.jar" & "xml-apis.jar" from "Xerces-J-bin.2.8.1.zip" to "C:\DevTools\axis-1_4\webapps\axis\WEB-INF\lib"
- Copy "activation.jar" from "jaf-1_1-fr.zip" to "C:\DevTools\axis-1_4\webapps\axis\WEB-INF\lib"
- Copy "mail.jar" from "javamail-1_4.zip" to "C:\DevTools\axis-1_4\webapps\axis\WEB-INF\lib"
- Copy "xmlsec-1.3.0.jar" from "xml-security-bin-1_3_0.zip" to "C:\DevTools\axis-1_4\webapps\axis\WEB-INF\lib"
- Copy "wss4j-1.5.0.jar" from "wss4j-bin-1.5.0.zip" to "C:\DevTools\axis-1_4\webapps\axis\WEB-INF\lib"
- Register Axis with Tomcat by creating "axis.xml" in "C:\DevTools\Tomcat5.5\conf\Catalina\localhost"
- "axis.xml" should contain just one line - "<Context docBase="C:\DevTools\axis-1_4\webapps\axis" />"
- Reboot
- Run "http://localhost/axis/happyaxis.jsp" and verify all needed components are installed
- Run "http://localhost/axis/EchoHeaders.jws?method=list" and verify there are no errors
Axis2
- Unzip the Axis2 Standard Binary Distribution into "C:\DevTools\axis2-1.3"
NOTE: Substitute the appropriate path to the current version of Axis2
- Add the following system environment variables
- AXIS2_HOME=C:\DevTools\axis2-1.3
- Add the Axis2 command-line tools directory to the PATH
- [NOTE: You are doing all this so that you have access to the command line utilities like wsdl2java etc.]
- Copy the axis2.war to the "C:\DevTools\Tomcat5.5\webapps" directory
- Browse to http://localhost/axis2/axis2-web/HappyAxis.jsp and verify that all needed components are installed
- Copy any needed modules to "C:\DevTools\Tomcat5.5\webapps\axis2\WEB-INF\modules"
- Restart Tomcat
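With the stack installed, a quick smoke test is to deploy a trivial POJO service. A minimal sketch follows; the class and method names are my own inventions, and the exact packaging (an .aar archive with a services.xml describing the class, or a bare class in the services area) depends on your Axis2 version:

```java
// A minimal POJO that Axis2 can expose as a web service once it is
// packaged and deployed (e.g. in an .aar with a services.xml).
// Class and method names here are illustrative, not prescribed by Axis2.
public class EchoService {

    // Axis2 maps a public method like this to a web service operation.
    public String echo(String message) {
        return "Echo: " + message;
    }
}
```

Once deployed, the operation should show up in the service listing at http://localhost/axis2/.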
IDE Installation
- Unzip the Eclipse WTP All-In-One Package into "C:\DevTools"
- Install any needed Axis2 Plug-ins
Tuesday, January 31, 2006
I am in the process of troubleshooting some connectivity issues between an Apache Axis based web service and a .NET (1.1) service consumer. I am not all the way there, but each step is bringing me closer to the solution. Here is an issue that I ran into and was able to resolve, which I am documenting here in the hopes that others will learn from my mistake. BTW, this particular issue does not have anything to do with interoperability; just with me taking some things for granted and not thinking through the process.
The service connection is over an SSL channel, but I was getting an exception that informed me that “The underlying connection was closed. Could not establish trust relationship with remote server”. What the message was telling me was that I was not even getting to the first step, which is establishing an active SSL connection with the remote service! But I could browse to and invoke the service over SSL from my browser!
The issue and its resolution were straightforward once I thought it through. My Enterprise, like other large-scale Enterprises, has its own Certificate Authority. As such, the service was protected by an SSL certificate generated from the Enterprise Certificate Authority. I could browse to the service with my browser because I had installed the Enterprise Root Certificates into the browser some time ago. But that installation put the certificates into the Personal store and not the Local Machine store. This is important, since the web service consumer was NOT running under my credentials but under a low-privilege service account.
The solution was to import the Root Certificates for my Certificate Authority into the “Trusted Root Certification Authorities” list for the Local Machine. And yes, I was absolutely positive that I DID trust this particular Root Certificate! Once I did that, I was good to go.
[Now playing: Chham Se Woh (Remix) - Dus]
Monday, January 30, 2006
If you are working with web services, especially across platforms, the ability to troubleshoot SOAP traffic using a tracing tool is absolutely critical. I personally have been using the SOAP debugger that is built into XMLSpy for a while and have been very happy with it, but I’ve run into an issue that I hope someone can provide a resolution to.
But before I get to that, I wanted to point to some freeware SOAP tracing/debugging tools that I’ve used and have really liked.
Now on to the issue that I am having. Some of the web services that I am working with are outside my environment and require me to authenticate with an HTTP proxy that requires explicit credentials. I have updated my client to programmatically authenticate against the proxy. But how can I do a SOAP trace on this connection? The tools that I am using behave like an HTTP proxy to start with, so the only way I can see this working is if the tool itself supports HTTP proxy authentication.
From what I can see, the SOAP debugger in XMLSpy, ProxyTrace and TcpTrace do not support this functionality. WebServiceStudio 2.0 says that it does, but I am getting consistent exceptions when using it. I sent an email to the developer of the tool to enquire how he does HTTP proxy authentication, or whether he was willing to provide the source (he had it on GotDotNet at one point, then pulled it) so that I could make the mods myself, but have not received any replies.
So would anyone have any recommendations for a SOAP Tracing tool that supports HTTP Proxy Authentication? BTW, it MUST support the setting of explicit authentication credentials and not just pick up the settings from the browser proxy settings. Or am I over-thinking this and there is an easier way to accomplish what I am trying to do?
[Now playing: Kaisi Paheli Zindagaani - Parineeta]
Thursday, January 19, 2006
I recently had the need to consume a web service that required that I authenticate against a HTTP Proxy. The credentials required for authentication against the proxy were NOT my default login credentials but an explicit user-name/password that was provided by the proxy owner.
Here is the working C# code used by the service consumer:
_ws.Url = "http://url.tothe.webservice";
if (_useProxy)
{
System.Net.WebProxy _wp = new System.Net.WebProxy(_proxyUri);
System.Net.NetworkCredential _creds =
new System.Net.NetworkCredential(_proxyUsername, _proxyPassword);
// Bind the credentials to the resolved proxy address and the
// "Negotiate" authentication scheme via a CredentialCache
System.Net.CredentialCache _cache = new System.Net.CredentialCache();
_cache.Add(_wp.Address, "Negotiate", _creds);
_wp.Credentials = _cache;
_ws.Proxy = _wp;
}
Here _ws is my web service proxy class, and _proxyUri is the Uri of the proxy server, in the "the.proxy.server:8000" format.
[Now playing: Chaiyya Chaiyya - Dil Se]
Monday, December 12, 2005
The Microsoft patterns & practices web services security guide is now online. A PDF format version is also available.
“This guide will help you quickly make the most appropriate security decisions in the context of your Web service's requirements while providing the rationale and education for each option. A scenario-driven approach is provided to demonstrate situations where different security patterns are successful. The guide also combines a series of decision matrices to assist you in applying your own criteria to use the Web service security patterns to meet the requirements of your environment.”
This document provides a scenario and patterns based approach to web services security. As such the information transcends platforms. I was one of the external reviewers of this book and found it to be a VERY welcome product from the PAG!
Sunday, December 11, 2005
I had the opportunity last week to do a guest lecture on “Service Oriented Architecture (SOA), Web Services and Interoperability” at the Johns Hopkins University Whiting School of Engineering. I would like to thank the instructor (Thanks, Jeff!) for the invitation and for the folks in the class for the dialog. I had fun and I hope everyone who came got something out of it as well.
I've written previously about the mismatch that sometimes happens when trying to map XML to a language implementation class. Some of the issues that I've run into are probably worth documenting.
- Inconsistent support for the <choice> content group in JAX-RPC
When you are defining content groups in a schema, you can use Sequence, Choice or All. We had a case where the most appropriate one to use would have been the <Choice> content group. Unfortunately JAX-RPC implementations do not support this XSD construct consistently. [Note to self: Need to find out if the latest release of Apache Axis (1.3.x) has improved support for this]
UPDATE (12/16/05): Was informed today that the JAX-RPC implementation of Apache Axis 1.2.1 supports <choice>.
The work around for this was to use a <Sequence> content group but mark the elements as optional i.e. use minOccurs=0.
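The before/after can be sketched in schema terms; the element and type names below are invented for illustration:

```xml
<!-- Hypothetical names. Instead of a <choice>, which JAX-RPC tools
     support inconsistently... -->
<xs:complexType name="ContactMethod">
  <xs:choice>
    <xs:element name="Email" type="xs:string"/>
    <xs:element name="Phone" type="xs:string"/>
  </xs:choice>
</xs:complexType>

<!-- ...use a <sequence> of optional elements: -->
<xs:complexType name="ContactMethod">
  <xs:sequence>
    <xs:element name="Email" type="xs:string" minOccurs="0"/>
    <xs:element name="Phone" type="xs:string" minOccurs="0"/>
  </xs:sequence>
</xs:complexType>
```

Note the trade-off: the sequence-of-optionals form also allows zero or both elements to appear, so the "exactly one" constraint has to be enforced in application code.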
- Lack of support for an empty instance of the anyURI data type in .NET
In .NET v1.1, there is a problem with schema validation of the xs:anyURI data type in the .NET System.Xml.XmlValidatingReader. From an explanation of this by Dare Obasanjo [MSFT] some time ago, it would appear that the schema validation engine in the .NET Framework uses the System.Uri class for parsing URIs. This class doesn't consider an empty string to be a valid URI, which is why .NET schema validation considers an empty instance of anyURI to be invalid according to its schema.
The long and the short is that this is supposed to be fixed in the .NET 2.0 [Note to self: Verify!]. The work around for this is to change the data type from xs:anyURI to xs:string.
- Lack of support for XSD Schema Substitution Groups in the .NET XML Serializer.
In some of the schemas that I work with, substitution groups are used extensively. One of the workarounds for this would be to use the <choice> content group. The problem, of course, is that we cannot, because of the first issue: JAX-RPC support! So we are in effect caught between a rock and a hard place.
The work around for this was to go through each of the elements that use a substitution group, make an explicit choice on which one we will use/support, and modify the schema to reflect that. [Note to Self: Bug Microsoft to support this construct]
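A hypothetical sketch of that workaround (all names invented):

```xml
<!-- Before: a substitution group, which the .NET XML Serializer
     does not support. -->
<xs:element name="Payment" type="PaymentType"/>
<xs:element name="CreditCardPayment" type="CreditCardPaymentType"
            substitutionGroup="Payment"/>
<xs:element name="CheckPayment" type="CheckPaymentType"
            substitutionGroup="Payment"/>

<!-- After: commit to one member and reference it directly where the
     head element used to be referenced. -->
<xs:element name="Order">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="CreditCardPayment" type="CreditCardPaymentType"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
```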
- Lack of support in Java for default values and optional attributes.
The optional and default attributes in a schema are simply ignored when you generate the server-side bindings. On the other hand, I really am not a fan of validation adding items that will be passed on the wire without my explicit say-so, hmm..
- In the Java type model, all object references can be null but in .NET 1.1 value types always have a value and as such cannot be set to null. Only reference types can be null. Nullable types in .NET 2.0 address this issue.
Be aware of and avoid the usage of nillable=”true” for value types! Another recommendation that I’ve seen is to define a complex type to wrap a value type.
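A sketch of the wrapper recommendation (names invented; the fragments are meant to sit inside an enclosing <xs:sequence>):

```xml
<!-- Rather than marking a value type nillable, which .NET 1.1
     cannot map to null... -->
<xs:element name="Quantity" type="xs:int" nillable="true"/>

<!-- ...wrap it in a complex type and make the wrapper optional, so
     absence of the element signals "no value" on both platforms. -->
<xs:element name="Quantity" minOccurs="0">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="Value" type="xs:int"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
```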
Just to close out, here is a listing of Cross-Platform Safe Types:
XML Schema | .NET - CLR | Java
xs:string | String | String
xs:int* | int | int
xs:dateTime* | DateTime | Calendar
xs:float* | float | float
xs:double* | double | double
xs:boolean* | bool | boolean
xs:enumeration** | Enum | Enum
* avoid nillable=”true”
** avoid QName values (colons in data)
From those who have experienced the pain of working through Interop, I would appreciate any corrections, additions and work-arounds to the listing.
Saturday, December 10, 2005
I am a big fan of Top-Down/Contract-First Style of web service development as a mechanism to improve web services interoperability. I recently came across a series of articles from IBM on improving interoperability between J2EE and .NET. I wanted to explicitly call out their summary of best practices for web services interoperability:
- Design the XSD and WSDL first, and program against the schema and interface.
- If at all possible, avoid using the RPC/encoded style.
- Wrap any weakly-typed collection objects with simple arrays of concrete types as the signature for Web service methods.
- Avoid passing an array with null elements between Web services clients and servers.
- Do not expose unsigned numerical data types in Web services methods. Consider creating wrapper methods to expose and transmit the data types.
- Take care when mapping XSD types to a value type in one language and to a reference type in another. Define a complex type to wrap the value type and set the complex type to be null to indicate a null value.
- Because base URIs are not well-defined in WSDL documents, avoid using relative URI references in namespace declarations.
- To avoid conflicts resulting from different naming conventions among vendors, qualify each Web service with a unique domain name. Some tools offer custom mapping of namespaces to packages or provide refactoring of package names to resolve this problem.
- Develop a comprehensive test suite for Web Services Interoperability Organization (WS-I) conformance verification.
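The third practice (wrapping weakly-typed collections in arrays of concrete types) can be illustrated with a hypothetical Java service class; the names are mine, not from the IBM articles:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "wrap weakly-typed collections" practice: the service
// method exposes a concrete String[] rather than a raw collection,
// which both JAX-RPC and .NET toolkits map cleanly. Names are invented.
public class CustomerService {

    // The internal implementation is free to use collections...
    private final List<String> customers = new ArrayList<String>();

    public CustomerService() {
        customers.add("Alice");
        customers.add("Bob");
    }

    // ...but the web service method signature uses a typed array.
    public String[] listCustomerNames() {
        return customers.toArray(new String[0]);
    }
}
```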
Here are links to the articles:
Dino recently posted an entry on the usage of Date and Time values when dealing with web services interop. He also points to a W3C Note on this topic.
This entry reminded me of an earlier article by Dan Rogers [MSFT] that provides great insight into this issue regardless of platform and provides best practices for handling this particular DateTime issue if working in .NET. A key take-away from the above article as regards to DateTime usage in web services in .NET is “When using the .NET Framework version 1.0 and 1.1, DO NOT send a DateTime value that represents UCT time thru System.XML.Serialization. This goes for Date, Time and DateTime values. For Web services and other forms of serialization to XML involving System.DateTime, always make sure that the value in the DateTime value represents current machine local time. The serializer will properly decode an XML Schema-defined DateTime value that is encoded in GMT (offset value = 0), but it will decode it to the local machine time viewpoint.”
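On the Java side, one defensive habit in the same spirit is to serialize dateTime values with an explicit UTC designator, so that no receiver has to guess the time zone. A minimal sketch (the class name is my own):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class XsdDateTime {

    // Format a Date as an XML Schema dateTime in UTC with an explicit
    // 'Z' designator, removing any time zone ambiguity on the wire.
    public static String toXsdUtc(Date d) {
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f.format(d);
    }
}
```

For example, toXsdUtc(new Date(0L)) yields "1970-01-01T00:00:00Z".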
BTW, Dan is one of the creators of XSDObjectGen which is a must have tool if working with web services on the .NET platform. If you look at the documentation for XSDObjectGen, you will note the following gem: “Special handling logic for DateTime types: DateTime serialization properties are generated and illustrate the recommended best practice for working with local and universal time values that will properly serialize on another computer in another time zone without loss in precision.” Excellent!
Sunday, December 4, 2005
I am doing a bit of research on Web Service Service Level Agreements (SLAs). I do realize that pretty much all Web Service Management vendors have some mechanism for specifying SLAs in their products, which they in turn monitor and enforce using their tooling. But there does not appear to be a standardized language that can be used to specify SLAs in a SOA; i.e., each vendor has its own proprietary mechanism for specifying SLAs.
At this point, I really am not looking at a mechanism for providing a machine readable SLA language as much as trying to gain an understanding of what should be specified using such a language. I am familiar with SLAs as it relates to a public facing web application/site and want to explore what the criteria should be on a public facing web service. A couple of resources that were pointed out by various folks in online discussions include:
I’ve also come across the following series of articles (again from IBM) that seem relevant as well:
Service Level Agreements are a rather important facet of any enterprise-class “service” that is offered for consumption, but there seems to be a lack of information on this topic as it relates to a SOA (at least as far as my Google skills are concerned). Is IBM the only one out there thinking about this topic, with some degree of seriousness, at the Enterprise/Strategic level?
Any help or pointers to resources would be very much appreciated.
Thursday, November 10, 2005
Back in June of this year (June 21–22), the W3C “… held a Workshop to gather and assess experiences of using the XML Schema 1.0 Recommendation on 21-22 June at the Oracle Conference Center in Redwood Shores, CA, USA. More than 30 participants attended to represent diverse communities including end users and vertical standards organizations through to vendors and the W3C XML Schema Working Group. The participants shared implementation stories and expertise and considered ways to move forward in addressing XML Schema 1.0 interoperability, errata, and clarifications.”
As has been noted before, inconsistent support for the full XML Schema specification in vendor web service toolkits is an issue that has been brought up time and again when it comes to web services interoperability. The practical experiences of the folks who came to the table for this workshop give one great insight into this issue.
Here are some pointers for you to check out:
- W3C Chair’s Summary Report
- Minutes of Day 1
- The presentations by the individual submitters are great and include ones from WS-I, BT, OAGi, HL7, QuickTree, SAP, Semeiosis, Microsoft x2, IBM, Sun, Oracle, Rogue Wave and Acord and others….
- Minutes of Day 2
In addition to the presentation slide decks linked from the above location, there are also more detailed, text-based submissions to the workshop available on the W3C site. For example, a PDF version of WS-I’s submission.
There are some additional resources that came out of this effort that are important going forward:
In short, this is an awesome resource for folks who are into building interoperable web services.
P.S. My thanks go to Anne, who in responding to a question, pointed to some of the resources here.
Sunday, October 16, 2005
I will be giving a presentation on "Service Oriented Architecture (SOA), Web Services and Interoperability" at the IEEE Computer Society (Baltimore Chapter) on Thursday, October 20.
Here is the abstract:
“Learn about the components of a Service Oriented Architecture (SOA) and the role Web Services play in a SOA implementation. Web services are touted as the distributed computing technology that can be used to integrate systems, services and applications that live on diverse technology platforms. While true to a great degree, there are implementation issues regarding interoperability that often come into play when seeking to integrate the various systems. Learn about the standards as well as design and development approaches that will improve the chances of interoperability across platforms and technologies.”
All are welcome to attend [You do NOT need to be an IEEE or Computer Society member to attend], and as the Vice-Chair of the IEEE Computer Society [Baltimore], I urge you to come and check us out.
Location and Directions can be found on the Chapter Site.
Wednesday, October 12, 2005
I came across the following chart on web services adoption (especially of the WS-* specs) in the industry. It was put together by Kirill and Simon as part of their Indigo/Java Interop presentation at the PDC.

Saturday, October 8, 2005
As you probably noted in my last blog entry, if I am on the .NET platform, I tend to use XSDObjectGen a lot. XSDObjectGen is an excellent tool on the .NET platform that will improve your chances of interop with other platforms and technologies. It has now been updated to 1.4.2. Check it out.
While I was visiting Microsoft last week, I had a chance to meet Dan Rogers, who was one of the two guys who actually built that particular tool. Awesome guy. I spent some time with him talking about web services, schema design and versioning, and more, and he provided me with a wealth of information on those topics (Thanks Dan!).
I did not realize it until much later into the conversation that Dan also was basically one of the inventors of UDDI and was one of the folks who spent a lot of time at OASIS hammering out the 1.0 and 2.0 specs. He also seemed to have a lot of familiarity with the BizTalk side of the house. From what he mentioned, XSDObjectGen was something that was originally developed for some work he was doing with BizTalk, but turned out to be useful in its own right.
UPDATE: As noted by Dan in the comments, XSDObjectGen has been updated to v1.4.2.1. This is a bug-fix release. Same link as given above.
Platform and technology agnostic exchange of messages such that they can be utilized from any language or platform. That is the promise of integration via web services. The contract-first style of development is a web service development style that will improve the chance of interoperability when the technologies and platforms on both ends of the wire are different.
My environment happens to be very heterogeneous and, as such, the web services that I develop MUST interop with other SOAP stacks. My normal design and development style is pretty straightforward:
- Design the message schema for the web service.
I tend to use XMLSpy a whole lot in this phase. The interesting thing is that there are many different ways to define XML Schemas, and the design choice can seriously impact the generation of implementation classes in the technology of your choice. There are different schema design styles such as the Russian Doll, Venetian Blind and Garden of Eden that can be followed. There are also some guidelines for XML Schema design that will tend to improve web services interoperability. I tend to be a fan of the Garden of Eden style for web services.
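As a rough illustration of the Garden of Eden style (the namespace and element names below are invented), every element and every type is declared globally, which is what lets a code generator emit a reusable class per type. The sketch validates an instance against such a schema using the JDK's built-in validator:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.SchemaFactory;

public class GardenOfEdenDemo {
    // Garden of Eden style: all elements AND all types are global.
    // Namespace and names are made up for illustration.
    static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'"
      + "  targetNamespace='urn:example' xmlns:tns='urn:example'"
      + "  elementFormDefault='qualified'>"
      + "  <xs:element name='Person' type='tns:PersonType'/>"
      + "  <xs:element name='Name' type='xs:string'/>"
      + "  <xs:complexType name='PersonType'>"
      + "    <xs:sequence><xs:element ref='tns:Name'/></xs:sequence>"
      + "  </xs:complexType>"
      + "</xs:schema>";

    static boolean validates(String xml) {
        try {
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                .newSchema(new StreamSource(new StringReader(XSD)))
                .newValidator()
                .validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(validates(
            "<Person xmlns='urn:example'><Name>Anil</Name></Person>"));
    }
}
```

In a Russian Doll schema, by contrast, the inner declarations would be anonymous and local to `Person`, which tends to produce nested, single-use generated classes.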
- Generate a WS-I Basic Profile compliant WSDL from the message schema.
While I occasionally do use a wrench, I am not a plumber. Hand-generating WSDL is a dark and mysterious art prone to many pitfalls. Generating WS-I Basic Profile compliant WSDL by hand and tweaking it is simply voodoo! As such, I rely on tooling to help me with this. My particular tool of choice here is WSCF from my friend Christian Weyer. He has versions that work from the command line, as a Visual Studio .NET plug-in, or as an Eclipse plug-in. So whether I am on the .NET or the Java platform, I am covered. After I am done with the generation, I bring the WSDL into XMLSpy for any minor tweaking or corrections, as well as to make sure that the WSDL validates.
- Generate the Service Stub
If I am using Apache Axis, I use WSDL2Java for this purpose. If I am on the .NET platform, I use a combination of WSCF and XSDObjectGen. I first generate my implementation classes using XSDObjectGen by pointing it at the message schema. Then I generate the shell for the web service methods using WSCF and hook it up to the data types generated by XSDObjectGen. Some folks may question why I do this, when WSCF can generate the data types as well. There are a couple of answers, with the most relevant being that I simply find it easier to work with the code that XSDObjectGen generates.
- Generate the Client Proxy
Use WSDL2Java if using Apache Axis to generate your client proxy. On the .NET platform, I use WSDL.exe to generate the client proxy, BUT I go into the proxy class and comment out all of the data types. I then use XSDObjectGen to generate the data types from the XSD files and hook them up to the proxy class, for the same reasons I noted above for service stub generation. I do not use WSCF here for another reason: it seems to enforce Pascal casing for generated type names even if the XSD uses camel casing, and I prefer to keep the casing consistent between my XSD and code.
- Unit Tests to Verify Interop
Trust but verify! Since I am building interoperable web services, I do not want my consumers to be the ones to discover issues with my services. To that end, I make sure that I write unit tests in the platforms and technologies at both ends of the wire to verify interoperability. I.e., if I am writing a .NET web service and I expect that my consumers are going to be .NET and Apache Axis based, I make sure I write unit tests in both .NET and Java/Axis to exercise the service. I would do the same if I were deploying an Axis based web service.
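The spirit of those cross-platform checks can be sketched roughly as follows. The namespace and element name are invented, and a real test would call the live service from each toolkit rather than operate on captured strings; the point is that the same assertion runs against responses from different stacks, which may differ cosmetically (prefixing, whitespace) but must agree on the infoset:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class InteropCheck {
    // Pull the text of a result element out of a captured SOAP response,
    // so one assertion can run against captures from different stacks.
    // The namespace and element name are made up for illustration.
    static String resultText(String soapXml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document d = f.newDocumentBuilder()
                      .parse(new InputSource(new StringReader(soapXml)));
        return d.getElementsByTagNameNS("urn:example", "EchoResult")
                .item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        // Two captures of "the same" message: one uses a default namespace,
        // the other an explicit prefix. The infoset is identical.
        String dotNetStyle =
            "<EchoResult xmlns='urn:example'>hello</EchoResult>";
        String axisStyle =
            "<ns1:EchoResult xmlns:ns1='urn:example'>hello</ns1:EchoResult>";
        System.out.println(resultText(dotNetStyle).equals(resultText(axisStyle)));
    }
}
```

Comparing at the infoset level, rather than string-comparing raw XML, is what keeps such tests from failing on harmless serialization differences between toolkits.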
Wednesday, August 17, 2005
I saw a couple of pattern implementations coming across on Microsoft Download and tracked them back to two very, very interesting articles by John Evdemon:
1) Service Patterns and Anti-Patterns
“This paper, the first of a multi-paper series, provides some fundamental principles for designing and implementing Web services. The paper starts with a brief review of SOA concepts and leads up to a detailed discussion of several patterns and anti-patterns that developers can leverage when building Web services today. Guidance in this paper is applicable to any programming language or platform for which Web services can be developed and deployed.”
Some items in this article really resonated with me, such as “Avoid blurring the line between public and private data representations.” and “Service consumers must map their data representations to the schema used by the service. Some consumers may find mapping to a service's schema to be a lossy process.”, as we were just today having an internal discussion about data models in SOA and how very important it is to understand the differences between Service Level Data Models and Internal Data Models. In an SOA implemented using web services, a data translation has to occur between the Internal Data Model and the Service Level one on both the Provider and the Consumer sides.
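A minimal sketch of that boundary translation, with invented types that are not from the article: the internal model uses rich platform types, the service-level model stays flat and schema-shaped, and the mapping happens at the edge. The consumer performs the mirror-image mapping on its side.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class BoundaryMapping {
    // Internal model: rich platform types (illustrative only).
    static class Customer {
        String name;
        LocalDate customerSince;
        Customer(String name, LocalDate since) {
            this.name = name;
            this.customerSince = since;
        }
    }

    // Service-level model: flat, lowest-common-denominator types that
    // mirror the message schema rather than the internal object graph.
    static class CustomerMessage {
        String name;
        String customerSince; // xsd:date carried as ISO-8601 text
    }

    // The translation that has to occur at the provider boundary.
    static CustomerMessage toMessage(Customer c) {
        CustomerMessage m = new CustomerMessage();
        m.name = c.name;
        m.customerSince = c.customerSince.format(DateTimeFormatter.ISO_LOCAL_DATE);
        return m;
    }

    public static void main(String[] args) {
        Customer c = new Customer("Anil", LocalDate.of(2005, 8, 17));
        System.out.println(toMessage(c).customerSince);
    }
}
```

Keeping the two models separate is what prevents internal types from leaking into the public contract, which is exactly the "blurring the line" anti-pattern the article warns about.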
I do have issues with the assertion that if you “…comply with the WS-I Basic Profile [it will enable the service] to be consumed by any platform or programming language that supports Web services.” My problem with this statement comes from painful experience gained while working with today’s SOAP toolkits to build interoperable web services. The reality is that WS-I improves the possibility of interop by mandating how specific artifacts such as a WSDL should be structured, but WS-I punts when it comes to XML Schema. They basically say that the SOAP toolkit should support the approved XML Schema standard and leave it at that.
The problem here is that no SOAP toolkit has full support for the entire XML Schema standard, so when it comes to implementation specifics there is often the possibility that a schema artifact supported in one toolkit is not supported in another. An example is support for substitution groups, which exists in Apache Axis but not in .NET. There are examples that go in the other direction as well. Check out my earlier summary of a SOAPBuilders discussion if you wish to know more about this.
I do love the fact that the article advocates Top Down / WSDL First web service development (…just keep the above caveat in mind when you are designing your message schema).
The associated code for the patterns mentioned in the article can be found at:
2) Service Versioning
“This paper is the second in a multi-paper series focusing on service design principles. This paper surveys some fundamental guidelines for versioning schemas and Web services. Six specific principles for versioning Web services are identified.”
Hmm… I need to think a bit more about this. It is also something of great interest to me and is being actively worked on in my space as well. I did note that some of the Message Versioning recommendations seem to tie in very nicely with the “XML Schema best practices” document from HP.
Tuesday, July 19, 2005
Some time back (as in late March of this year), there was a very interesting thread discussing web services interop on the SOAPBuilders listserv. The great thing was that a lot of folks participated in that discussion, including folks from IBM, Microsoft, WS-I and more, and some great real-life issues were brought up.
I had at that time made a note to myself to somehow document them, if for nothing else but for my own benefit. I am sure the discussion thread is archived so you can check out who said what if you need to, but here are some of the important (at least to me) points that were made during that discussion:
- Most InterOp challenges occur at the point where SOAP implementations attempt to map language objects into XML data and vice versa. A key factor affecting InterOp is the impedance mismatch between language type systems and the XML type system.
- The impedance mismatch in question is caused by various web service stacks implementing a subset of the W3C XML Schema… and they are not all the same subset!
- Worse, the toolkits implement the same subset in different ways. My favourite example is .NET 1.0's problems with handling xsi:nil for value types like integers and enums, and how that interacts poorly with Axis' tendency to use xsi:nil everywhere. You can argue about which platform is wrong, but at the end of the day we're still left with it being difficult to build interoperable applications.
- ..just to clarify -- Anil said that you shouldn't have problems if you are using the same technology/platform -- and by that he refers to a particular SOAP implementation -- not just a language. You may experience interop problems going Java-to-Java if, for example, you are using Apache Axis on one side and Sun JWSDP on the other.
- An example on the Java side is that the <choice> content group is not supported in JAX-RPC while an example on the .NET side would be that Substitution groups are not supported by the .NET Xml Serializer.
- The idea behind the WS-I Basic Profile was to make sure that a service description defines an unambiguous wire format. One of the first decisions taken by the WS-I was to drop SOAP encoding in favor of XML Schema to express the wire format for data types. Further, the Basic Profile expressly allows the use of all types and constructs in the W3C XML Schema 1.0 Specification 2nd Edition.
Of course, lots of toolkits have problems with XML Schema -- especially around language bindings. But this is a thorny problem to solve. The text of the XML Schema spec can be hard to digest and type systems are complex subjects. Even with its shortcomings, there aren't really any major ambiguities or internal inconsistency in the XML Schema spec itself. This is a different situation than with some of the original web service specs, which partly led to the creation of WS-I Basic Profile.
So how could this problem be resolved? You could "profile away" XML Schema features and disallow the use of constructs that cause toolkits to choke. But many (including me) think this is a bad idea. What would you remove and for what language/toolkit problem (since toolkit support for XML schema varies widely)?
Would you eliminate the use of xs:nonPositiveInteger because there is no directly related type in most languages? That would mean those who use XML Schema validation would have to implement that particular bounds checking in their own code.
The other issue is that there are lots of people (including me) who don't want to see the capabilities of XML and Infosets reduced for the sake of easier binding to languages. But there are also lots of people who just want object-based (for example) code to remote in an interoperable way via SOAP.
So two bad outcomes of all this would be (a) winners and losers get picked or (b) multiple web service standards emerge (perhaps one emphasizing full "XML capabilities", one emphasizing easy language bindings).
I'm still hopeful that the toolkits will simply continue to improve their support of the XML Schema spec. Unsupported schema constructs are OK as long as the toolkit allows a developer to mitigate the problem. For example, don't error out (or worse, blow up) because there's an xs:redefine present -- expose the construct as an XML node type and let the developer process it in their code.
- > I am curious to find out if the WS-I or anyone else currently have
> pointers to any documentation from the various toolkit vendors that
> show exactly which schema artifacts are currently unsupported by their
> products.
I don't know of any matrix comparing schema features and toolkit support. I wouldn't expect the WS-I to take that effort on because it wants to avoid certifying toolkits directly. But I've heard a lot of people *wishing* for such a matrix -- in the spirit of the earlier SoapBuilders interop grids.
- > But therefore I wonder: given the importance that XSD bindings have
> for interoperability, how is that the WS-I BP doesn't put any limit
> into the XSD constructs? If they do, toolkits would agree what it
> should be supported. In this sense, maybe the limit could be those
> constructs that were defined in the SOAP encoding soap1.1 section 5.
So which XML Schema features would you pick to eliminate? Coming up with an XML Schema sweet spot would mean deciding which toolkits and/or languages "matter" more than others.
Also, many of the WS-* specs use XML Schema to describe message formats – not to mention that XML Schema itself has a schema. So, in eliminating something like derivations, redefinitions, or the xs:choice compositor you can actually cause the web services stack itself to get internally inconsistent (self-non-conformant? Hmm.).
Anyway, I seriously doubt the WS-I would reconsider SOAP encoding. It was tossed from consideration in the BP very early on and with broad consensus. XML Schema – which wasn't a standard when the SOAP spec was initially written, BTW – was considered a superior type system for describing XML documents. So, XML Schema was adopted even though it set the effectiveness of existing toolkits back.
I find this last bit interesting. The WS-I is basically run by the major toolkit vendors. And although nothing gets published without affirmation by the whole membership, the WS-I Board approves what gets voted on. Board discussions and votes are secret and it only takes 2 "no" votes out of 9 to defeat a motion to publish material.
Despite all this, the WS-I published the BP even though these same vendors knew it would exacerbate areas where their toolkits were pretty weak.
- > although XSD is very powerful for describing type semantics, current
> toolkits don't support all the constructs and this produces impedance
> mismatches.
The impedance mismatch is between the XSD types (hierarchies) and OO language types (rich object graphs), regardless of the abilities of the current toolkits -- the type systems are fundamentally different.
I actually think that the major interoperability issues are caused from the other direction -- current toolkits aren't very good at mapping rich object graphs to hierarchical structures. Many toolkits attempt to treat SOAP/WSDL as just another distributed object system (similar to CORBA, RMI, and DCOM). The toolkits focus on generating XSD definitions from code -- and lots of developers try to expose language-specific object types (Java hashmaps, .NET Datasets, etc) through their SOAP interfaces. This approach often results in interoperability challenges.
If developers start with WSDL descriptions and XSD types and generate code from them, the interop issues are definitely lessened. And if the toolkit doesn't support a specific XSD construct, the toolkit can always resort to DOM.
- > Can any language implement any XSD type? If the answer is not, here
> there is an interoperability problem.
I understand the argument, but respectfully disagree. The purpose of XML Schema in WSDL is to describe the format of an XML message. As long as that format is unambiguously described, interoperability is maintained.
WSDL / XML Schema was never intended to prescribe -- or even describe -- a programming model.
- > If I am a Java programmer that wants to create a service that sends
> say, a hashmap, by using WSDL-first I would define somehow a XSD type
> (something like a list of key-value pairs), but then the toolkit will
> not generate the hashmap type, so I would have to program the hashmap
> behaviour myself. Isn't this an inconvenience? As a programmer, I
> would prefer to use predefined types.
Not all languages have hashmaps, and not all hashmaps are the same (e.g., C++ STL vs Java). If you don't care about non-Java languages, then you don't need the interop provided by WSDL-first.
But if you don't care about non-Java, why not just use RMI?
- My point of view was that by doing the WSDL-first approach, the service implementation (whether it is Java, .NET…) cannot take advantage of the language capabilities. This means that you are going to get a mapping of the XSD-defined types: arrays, lists, etc., but not a mapping of more complex structures like trees (unless you do the parsing yourself using DOM, as was said before). But if your language supports graph types, IMHO you can't take advantage of that using WSDL-first.
- Right. And my point was that if you want to support clients written in many different languages, then this is the price you have to pay.
- Or, alternatively, you must build an abstraction layer between your WSDL interface and your internal object model.
- In our enterprise we do not consider support via DOM to be support; it is a vendor cop-out that is slightly better than no support, and too many vendors use DOM support to claim support.
In our strictly WSDL-first-development environment, we have found the interoperability problems associated with varying xsd support to be so significant that we maintain and enforce internal standards regarding allowed/disallowed XSD constructs. Furthermore, we have found toolkit xsd support variability and implementation quality to be a significant problem and, therefore, we have standards that strictly limit the WS toolkits that may be used. We allow three toolkits and life would be improved if we could reduce it to one.
In my view, the state of WS interoperability is reminiscent of CORBA interoperability. And like CORBA interoperability, these problems will be solved over time.
- > I'm still curious what are the existing interop issues for ints and
> dateTimes?
> I see it mentioned from time to time, but what are the concrete
> examples?
I know that in .NET 1.x DateTime types are "value types", which means that there is no notion of a NULL value or empty content. So there is no way to interpret an empty node in a message to a DateTime.
In .NET 2.0, all value types get a "default" value that will let NULL get interpreted at least in a consistent way.
I don't know the issue with int values. However, in .NET 1.x some integer-ish XML Schema types -- like xs:nonNegativeInteger -- get cast as strings in generated types. I never got a good reason why.
- For ints, it's the problem when you want to make them optional. So it's not strictly fair to say it's int interop; it's optional parameters that are value types in Java and/or C#.
- Right, nullable value types were not supported in .NET 1.0/1.1; this was fixed in 2.0 by introducing Nullable<T>.
- nil and minOccurs='0' typically get handled in different ways on different platforms. If you're lucky, the platforms you care about handle them in ways that don't cause trouble. Unfortunately, today Java tools tend to serialize nulls explicitly as xsi:nil='true' unless told otherwise, whilst .NET tends either to do that or to not serialize the element at all, depending on whether it is a value type or a reference type. This gets better in .NET 2.0, but that is still in beta, so I expect people to run into this with .NET 1.1 for quite a while (an xsi:nil='true' on an element that maps to an int in .NET will cause it to barf).
>I've never personally had a dateTime problem, which in retrospect
>surprises me. Our users have a lot of confusion about timezones, but
>the interop is actually working the way it is supposed to.
Steve Loughran has written a number of times about problems with dateTime. I've never fully understood the issue he talks about, although lots of users get confused over timezones and whether their toolkit works with UTC or local times (as most platforms' DateTime datatype typically doesn't retain TZ info).
One other problem I regularly see is that xsd:long is used in a few services but has no usable mapping in COM. COM itself does support the equivalent of an xsd:long with the VT_I8 type, but every COM environment except C++ lacks support for that type. This, for example, makes it tough to use the Google AdWords API with PocketSOAP.
One final problem I haven't seen mentioned, and I suspect most vendors don't even consider it an issue, is the evolution of code-generated proxies. For example, code written against an Axis 1.1 generated proxy is not necessarily going to work when the proxy is regenerated with Axis 1.2 (as the xsd -> java type mapping rules are now different). Or with .NET, if the WSDL is evolved to contain two services instead of one, .NET will start using the binding name instead of the service name for its generated classes, breaking any existing code.
-
>Barring the null issue, I think the interop on basic constructs (value
>types, structs, arrays) has been fairly satisfactory. But I could be
>proven wrong...
Between Java and .NET, I think you're right that basic interop works. And I apologize for continuing to bring up the null issues, it's just a specific example I understand really well.
But the whole world isn't Java and .NET. In Perl, SOAP::Lite did a pretty rough job with document/literal up until the 0.65 release a few months ago, and even then it still fakes it in a lot of places. In Python, SOAPpy doesn't really do document/literal at all (although you can fake it talking to Axis); ZSI is better, but I don't have much experience with it. PHP now offers us three choices for SOAP but last I checked none of them really worked right. gSOAP for C is apparently quite good at talking to Axis in doc/lit.
... It's not so much that it's impossible to make doc/lit services interop, it's that it's difficult. That it requires a lot of expertise. This isn't off the shelf technology; users have to understand a lot about XML, and Schema, and they have to fiddle with the code and their usage of it. And the debugging experience for them is awfully confusing.
A very interesting and educational discussion. Hopefully it is of value.
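The null handling that came up repeatedly in the thread can be sketched on the receiving side. This is a rough illustration (the element name is invented, and real toolkits do this inside their deserializers): a nilled element must map to a type that can carry "absent", never to a parse of empty content into an int, which is the failure mode described above.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class NilDemo {
    static final String XSI = "http://www.w3.org/2001/XMLSchema-instance";

    // Map an element declared as an int to a nullable type: a nilled
    // element becomes null instead of a doomed Integer.parseInt("").
    static Integer readInt(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Element e = f.newDocumentBuilder()
                     .parse(new InputSource(new StringReader(xml)))
                     .getDocumentElement();
        if ("true".equals(e.getAttributeNS(XSI, "nil"))) return null;
        return Integer.valueOf(e.getTextContent().trim());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readInt("<age>42</age>"));
        System.out.println(readInt(
            "<age xmlns:xsi='" + XSI + "' xsi:nil='true'/>"));
    }
}
```

This is exactly the distinction .NET 1.x value types could not express and that Nullable<T> later made possible on that side of the wire.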
Sunday, July 10, 2005
If you are into Contract-First web service development, this is definitely a must-have tool. It is a souped-up version of the xsd.exe tool that comes with the framework.
“XSDObjectGen is similar in purpose to the XSD.exe tool that ships with Visual Studio .NET. The key difference is that XsdObjectGen creates sample code that can be used to explore the XML serialization attributes in the .NET framework. The sample code that is generated also demonstrates a “contract first” approach to programming that eliminates the need to focus on XML for interop and classes for internal work. The classes generated are fully supported .NET framework code, and there are no runtime libraries or licenses required beyond the normal .NET framework licensing terms.”
I am not sure if they updated the code-base for the tool since the initial release, but the associated documentation has been significantly updated with information on Schema-First development methodologies. In particular, the support for popular XML schema structures has been updated as part of this tool. It “….Supports (inheritance, groups, abstract types, elements, attributes, complexType, simpleType, enumeration, any, anyType). Limited support for “interesting” schema constructs (choice, substitution groups, complex type restriction, complex type extension using regular expressions). No support for “questionable” schema constructs (union, redefine).”
In particular, I am looking forward to exploring the support for substitution groups (It is not as seamless as I would like, but is better than the lack of support that currently exists).
Check it out!
Sunday, June 19, 2005
On a listserv that I am on, the conversation recently turned to SOAP debugging tools. The focus was primarily security-specific, so the requirements were not simply to view the SOAP traffic but to actually intercept or generate traffic to a service. The following tools came up as recommendations on the list.
- SOAPDebugger - a simple, generic SOAP client
Extremely simple SOAP client built on Axis. Feed it a WSDL, and it will display the available methods and let you fill in the input parameters before performing the call.
- WebServiceStudio 2.0
.NET Web service Studio is a tool to invoke web methods interactively. The user can provide a WSDL endpoint. On clicking button Get the tool fetches the WSDL, generates .NET proxy from the WSDL and displays the list of methods available. The user can choose any method and provide the required input parameters. On clicking Invoke the SOAP request is sent to the server and the response is parsed to display the return value.
There were a couple of others mentioned as well, but since no URLs were provided (and I could not find them by searching by name), I am not going to include them. Please note that I do not have any extensive experience with the above tools, so I cannot give you a full report on the extent of their capabilities.
I currently use the SOAP Debugger that is built into XMLSpy, which does a superb job. But that most definitely is not a free product.
Thursday, June 16, 2005
UDDI is used extensively in my environment for Web Services Discovery (Both for design and run-time use cases). Currently Windows 2003 provides a UDDI v 2.0 compliant implementation. But earlier this year UDDI v 3.0 was approved by OASIS. The leader in the field, Systinet, already has a v 3.0 implementation out there. In addition, they have made a commitment to move away from using proprietary schemas for describing the Taxonomies to using OWL for future versions.
Does anyone out there know when we can expect an upgrade to v 3.0 from Microsoft and if there is any move to use OWL on the Taxonomy front?
Saturday, June 4, 2005
Excellent! Got a ping from Christian today that the latest version of WSCF is out!
Find out more on Christian's blog entry and go through the updated walkthrough. The latest version of CoDe magazine has an article by Christian that showcases Contract First Web Services Dev using WSCF 0.5 as well.
One of the interesting things that I am looking forward to checking out, especially given my own trials and tribulations along these lines, is the new Web Services Help and Documentation page. According to the docs:
- WSCF now provides a Web service help page that the user can use when going to the .asmx endpoint with his browser. In 0.4 code generation just disabled this ‘documentation’ feature.
- The documentation page provides the same test and documentation features as the original one; but it disallows calling ?WSDL on the endpoint.
Also definitely looking forward to the WSDL round-tripping, and support for the <service> element.
I did not see any mention of support for SOAP faults in this version. I don't think it is there, as that would definitely be a feature that would be mentioned, so I'll look forward to it in a future release.

Thursday, June 2, 2005
Earlier this year, Microsoft did a whole month of webcasts that dealt with Interoperability. They now have an archive of those webcasts linked from one location. Here is the description:
"Focus on interoperability through this series of webcasts. Discover why it matters to your business, learn common strategies and methods, and obtain guidance on specific implementation scenarios between the major platform players. See how Microsoft embraces interoperability on many levels—through our products today, with the new generation of XML-enabled software, through technology and IP licensing, and in our partnerships with companies that are dedicated to helping software products work together.
Webcasts are organized by job role and platform, allowing you to navigate to the webcasts that address your specific interests."
Check it out...
Monday, May 23, 2005
Two gems from the patterns & practices folks on the Interop front have been released on GotDotNet.
The Microsoft WS-I Basic Security Profile 1.0 Sample Application: Preview Release for the .NET Framework version 1.1 has been developed to demonstrate the essential programming elements required to achieve interoperability of secure Web services between endpoints created by different vendors. This release includes the sample application that Microsoft produced for the WS-I BSP Sample Application Working Group (SAWG), built using the .NET Framework 1.1 and Microsoft Web Services Enhancements (WSE) 2.0 SP3. The theme of the application is a retail supply chain management system in which the retailer sells consumer electronics via an imaginary supply chain using secured Web services optimized for interoperability.
The Microsoft Basic Security Profile (BSP) 1.0 Sample Application Guide describes the design and implementation of the sample application and steps you through the process of installation and usage. The guide also describes how the sample application uses WSE 2.0 SP3 to provide interoperable secure Web services based on the WS-I Basic Security Profile version 1.0.
Chapters include:
1 - Introduction
2 - Installation
3 - Walkthrough
4 - Application Architecture
5 - Policy
6 - Designing for Interop
7 - Interoperability Guidance
8 - Appendix A - Enterprise Library Integration
9 - Appendix B - WS-I Sample Application Messages
Saturday, May 14, 2005
Once we have properly defined our Schema, the next step in the process is to use a platform-specific tool to generate the helper classes that map the XML Schema to platform code and vice versa. Some choices on the .NET side are XSD.exe, XSDObjectGen or WSCF, and on the Apache Axis side WSDL2Java. To make sure that the generated helper classes are indeed "helping" you, it is worthwhile exploring the various XML Schema design styles and seeing how particular choices impact the helper class generation.
The design styles that you will most often run into have very interesting and memorable names such as the Russian Doll, Salami Slice, Venetian Blind and Garden of Eden. I am not going to attempt to provide examples of this as others have done it much more lucidly than I ever could. In particular I would point you to "Schema Design Rules for UBL... and Maybe for You" by Eve Maler [Sun] for a good overview of the above design styles.
I do note that many folks recommend either the Venetian Blind or the Garden of Eden style of schema design when it comes to Web Services.
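For a rough sense of the difference, here is a sketch of the same tiny document type in two of the styles; the element and type names are my own invention, not taken from the referenced paper:

```xml
<!-- Russian Doll: one global element; everything else is nested and anonymous,
     so nothing inside is reusable by other schemas -->
<xs:element name="Order">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="Id" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>

<!-- Venetian Blind: one global element backed by a named global type;
     named types tend to map more cleanly onto generated helper classes -->
<xs:element name="Order" type="OrderType"/>
<xs:complexType name="OrderType">
  <xs:sequence>
    <xs:element name="Id" type="xs:string"/>
  </xs:sequence>
</xs:complexType>
```

The Garden of Eden style goes one step further and makes every element global as well as every type, maximizing reuse at the cost of verbosity.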
So what have your experiences been in this area? Is there a particular style you prefer and if so why?
Everything starts from the data and the only truth that exists is on the wire!
I've been doing a lot of work these days with web services that are designed to be interoperable. The primary driver is the existence of multiple web service stacks in my environment, including Apache Axis, BEA, .NET and more. I am not going to get into the Code-First vs. WSDL/Schema-First debate. I personally believe that you can significantly minimize the number of interop issues if you start from the Schema rather than the Code, provided that you start from a constrained subset of the schema that is supported by your target web service stacks.
As such, it is very important that you follow good practices when designing your schema. I've come across some resources that I use and recommend and thought that I would share them.
Tuesday, May 10, 2005
The majority of the web service work that I am involved with has platform/technology interop in the mix. As such, I lean towards the WSDL-First style of web service development. I am also pragmatic enough to try to start out with a subset of schema that is supported on both ends of the pipe. More on that later.
Since I generate the WSDL first, I really do not want the auto generated WSDL to show up on the .NET Web service documentation page. So I actually would like the "Service Description" link to point to my manually generated WSDL instead of the one generated by "MyWebService.asmx?WSDL".
I was under the impression that setting the Location property of the WebServiceBindingAttribute would enable me to point to my WSDL. In effect, map the "Service Description" link to my WSDL location.
But unfortunately that does not seem to be happening. Here is the relevant code:
[WebServiceAttribute(
    Namespace = "urn:my:service:2005:04:16",
    Description = "This is a DEV instance of myService.")]
[WebServiceBindingAttribute(
    Name = "myWebService",
    Namespace = "urn:my:service:2005:04:16",
    Location = "myWebService.wsdl")]
public class MyWebService : System.Web.Services.WebService
{
    // web methods elided
}
I am missing something obvious. Any ideas?
UPDATE: I got this to work... after a fashion. The trick is to make sure that you specify the WebServiceBindingAttribute Name (i.e. "myWebService") as the Binding on the SoapDocumentMethodAttribute for EVERY SINGLE web method.
[WebMethodAttribute(
    Description = "Not Implemented")]
[System.Web.Services.Protocols.SoapDocumentMethodAttribute(
    Action = "urn:my:service:2005:04:16:setSomething",
    Binding = "myWebService",
    Use = System.Web.Services.Description.SoapBindingUse.Literal,
    ParameterStyle = System.Web.Services.Protocols.SoapParameterStyle.Bare)]
public string setSomething(SomeType setSomething)
{
    // A string-returning method needs to return or throw;
    // throwing matches the "Not Implemented" description above.
    throw new System.NotImplementedException();
}
When you do this, the Service Description link on the documentation page still points to the auto-generated WSDL location, but instead of the complete WSDL, what you will get is a WSDL import statement that looks like this:
<wsdl:import namespace="urn:my:service:2005:04:16" location="myWebService.wsdl" />
which points to my manually generated WSDL. Very nice!
The only issue that I am running into is that one of my Web Methods is simply NOT showing up at all on the documentation page. <sigh> I think it has something to do with the Request Type that I am passing in.
Tuesday, December 7, 2004
Sunday, November 21, 2004
Now this shows maturity in the industry!
Microsoft has invited Sun, IBM, BEA and the Open Source folks to talk about what it would take to make all of the various vendor technologies work together in the customer environment. It would appear that a majority of them, with the notable exception of IBM and the Open Source guys, have accepted!
They are going to kick off a series of about 40 webcasts in January. Find out more about it and pre-register.
Very, Very Cool!
Tuesday, October 26, 2004
Friday, September 10, 2004
The Enterprise Development Reference Architecture is our reference architecture for creating a service-oriented architecture. Previously codenamed Shadowfax, it provides code and documentation on Microsoft’s recommendations for the architecture of enterprise-ready systems. This is Microsoft's most complete guidance to date on what a service-oriented architecture should really look like.
You can get more information online here.
This particular piece of architectural guidance is a very big deal for us; we really believe that it defines the future of application development. It’s the culmination of the work Pat Helland has been doing for the last couple of years. Pat is one of Microsoft’s “big thinkers” on architecture. Among other things, he created the architecture for Microsoft Transaction Server, laying out most of the concepts of a modern application server for the first time. Recently, he’s been focused on the concept of message passing as a basis for building heterogeneous systems.
Locally, we have a seminar next week that will cover many of these concepts. It’s being presented by one of our local regional directors, Vishwas Lele of AIS, and the title is Understanding Service-Oriented Architecture. It will start with an overview of the SOA concept, and then drill down in detail into the Shadowfax architecture.
This seminar will be in the DC Office on Wednesday, September 15th, at 9:00 AM. Registration is here.
I do SO wish that I could go to this one, especially because of the topic in question as well as the presenter. Vishwas Lele was one of my fellow presenters at DevDays 2004 and did a great job. I am sure he will do a bang-up job here as well. Unfortunately, I have a prior commitment :-(
Thursday, September 2, 2004
New article by Pat Helland explores Service Oriented Architecture, and the differences between data inside and data outside the service boundary. Additionally, he examines the strengths and weaknesses of objects, SQL, and XML as different representations of data, and compares and contrasts these models.
Tuesday, August 31, 2004
Article on how to use WSE 2.0 to implement security, trust, and secure conversations in Web services architecture. See the security-related changes since WSE 1.0.
Sunday, July 11, 2004
Interoperability and Integration using Web Services - An Industry Perspective - Level 200
http://go.microsoft.com/fwlink/?linkid=31082
July 12, 2004, 11:00 AM - 12:30 PM Pacific Time
Simon Guest, Program Manager, Microsoft Corporation
Are you writing applications using IBM WebSphere or BEA WebLogic? Are you wondering if and how you can interoperate with Microsoft® .NET using Web services? In this webcast, Simon Guest will summarize his previous webcast with an overview of some of the best practices, recommendations and strategies for achieving interoperability using Web Services. In addition, Simon will be joined by three industry architects - John Evdemon, Drew Gude and Mauro Regio - to discuss how Web services interoperability is becoming a reality in the public sector, manufacturing and healthcare verticals.
Application Decomposition for SOA Based Systems - Level 300
http://go.microsoft.com/fwlink/?linkid=31084
July 13, 2004, 11:00 AM - 12:30 PM Pacific Time
Paddy Srinivasan, Program Manager, Microsoft Corporation
This webcast will deal with the concept of decomposing applications to enable architecting better service oriented applications. Integrating software applications is the new mantra of the Web services world but in order to achieve this, business services and boundaries should be clearly identified and broken down into autonomous entities. Decomposition of applications based on business logic is critical in maximizing the benefits of a service orientation. This webcast will use a supply chain system as the example for illustrating the concept. It will also look into some of the design patterns and Microsoft technologies that are applicable.
patterns & practices Live: Test Driven Development - Level 200
http://go.microsoft.com/fwlink/?linkid=31155
July 15, 2004, 11:00 AM - 12:30 PM Pacific Time
Jim Newkirk, Development Lead, Microsoft Corporation
In his book Test-Driven Development: By Example, Kent Beck defines Test-Driven Development (TDD) as driving software development with automated tests. He goes further by stating that TDD is governed by two simple rules: write new code only if an automated test has failed, and eliminate duplication. The implications of these two simple rules can be a profound change to the way that software is written. Most of the literature to date has bundled TDD with Extreme Programming (XP). However, the benefits of using TDD are not limited to XP and can be realized in any programming methodology. This webcast will provide an introduction to TDD, demonstrating how it works and what benefits it provides when used with Microsoft® .NET. The examples shown will use Visual C#® and NUnit.
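To make the two rules concrete, here is a minimal red/green sketch. I'm using Python's unittest purely for brevity (the webcast itself uses Visual C# and NUnit), and the `add` function is an invented example, not from the webcast:

```python
import unittest

# Rule 1: write new code only if an automated test has failed,
# so the test comes first and fails (red) ...
class AddTests(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# ... then write just enough code to make it pass (green).
def add(a, b):
    return a + b

# Rule 2: eliminate duplication -- refactor freely, re-running the
# tests after each change so behavior is preserved.

if __name__ == "__main__":
    unittest.main(exit=False)
```

The same cycle maps directly onto NUnit: a `[Test]` method asserting the expected result, then the simplest production code that makes it pass.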
Saturday, June 26, 2004
I've been looking at information on creating interoperable web services as well as tools, and came across these resources.
Articles & Online Books
Tools
Sunday, June 20, 2004
Programming with Web Services Enhancements 2.0 – Level 200
Friday, July 09, 2004, 9:00AM-10:30AM Pacific Time (GMT-7, US & Canada)
http://go.microsoft.com/fwlink/?LinkId=31150
If you develop Web services, you’ll want to find out how Microsoft’s Web Services Enhancements 2.0 (WSE 2.0) can make your life easier as a developer. WSE 2.0 provides advanced Web services capabilities, including a policy framework, enhanced security model, message-oriented programming model, and support for multiple hosting environments. This webcast will show how to leverage these powerful features in WSE 2.0 and will focus primarily on security.
MSDN TV: Security in WSE 2.0
http://msdn.microsoft.com/msdntv/episode.aspx?xml=episodes/en/20040617WSEJB/manifest.xml
Celebrating the launch of the Web Service Enhancements (WSE) 2.0 at Tech·Ed 2004, Benjamin Mitchell and John Bristowe talk about the advanced Web services specifications that it supports, focusing on WS-Security.
WSE 2.0 Tracing Utility
http://mtaulty.com/blog/archive/2004/05/25/433.aspx
"WSE 2.0 has tracing facilities that can be switched on via the configuration file which traces messages to a text file but I wanted something that looked a little bit more like the SOAPTrace tool that shipped with he SOAP toolkit so that I can more easily use it for demos and so on...... what I wrote uses WSE2.0 SOAP messaging to trace WSE2.0 messaging (be that ASMX or SOAP messaging)"
Monday, June 14, 2004
Ron Jacobs has a recent blog entry in which he discusses the trade-offs involved in the design of the ShadowFax reference application and he asks a question -
"What price would you pay (in terms of performance) and what gains would you need to get to consider this trade a fair deal?"
The issue with the question is that you REALLY need more information on the application before you can answer it. Once you have that information you can:
- Set performance objectives
- Do performance modeling
- Know the cost and validate the performance models
- Do performance testing
- ... and tune the performance as an iterative process until you meet your performance objectives.
The point is that you cannot answer this question in isolation. You need to balance performance, security, architectural flexibility and developer productivity when designing, developing and deploying an application. And a lot of the information that goes into making a balanced decision is driven by the business/technical requirements of the application.
So the answer is indeed “It depends!”
Friday, June 4, 2004
This article published on MSDN discusses various approaches to solving eight key challenges faced by companies when implementing a service-oriented architecture.
The challenges as described are:
- Service identification. What is a service? What is the business functionality to be provided by a given service? What is the optimal granularity of the service?
- Service location. Where should a service be located within the enterprise?
- Service domain definition. How should services be grouped together into logical domains?
- Service packaging. How is existing functionality within legacy mainframe systems to be re-engineered or wrapped into reusable services?
- Service orchestration. How are composite services to be orchestrated?
- Service routing. How are requests from service consumers to be routed to the appropriate service and/or service domain?
- Service governance. How will the enterprise exercise governance processes to administer and maintain services?
- Service messaging standards adoption. How will the enterprise adopt a given standard consistently?
Wednesday, May 26, 2004
There are also a couple of hands-on labs that are available:
Tuesday, May 11, 2004
Date: Thursday, May 13, 2004
Time: 11:00AM-12:30PM Pacific Time (GMT-7, US & Canada)
Description: Today's business applications rarely live in isolation. Users and customers expect instant access to data and functions that may be spread across multiple independent systems. Therefore, these disparate systems have to be integrated to allow a coordinated flow of data and functionality across the enterprise. Despite advances in EAI and Web Services tools, creating robust integration solutions is not without pitfalls. For example, the asynchronous nature of most message-based integration solutions is different from the synchronous world of application development and requires architects and developers to adopt new design, development and testing strategies. This webcast examines how design patterns can help developers build successful integration solutions. The patterns have been harvested from years of actual integration projects using messaging, Web Services and EAI tools.
Register for the Patterns & Practices Live Webcast @
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032250502&Culture=en-US
[Now Playing: Kuch To Hua Hai - Kal Ho Naa Ho]
Monday, April 26, 2004
patterns & practices Live: Shadowfax - Level 300 [1]
April 29, 2004, 11:00 AM - 12:30 PM Pacific Time
Ron Jacobs, Product Manager, Microsoft Corporation
Service-oriented architecture (SOA) is the latest architectural wave. Find out what the buzz is about and what Microsoft is doing to help you understand and implement SOA with .NET. Shadowfax is an upcoming patterns & practices reference architecture that will help you build SOA solutions on .NET quickly and easily. This sneak peek will give you insight into the project while it is still early in development and provide you the opportunity to influence its direction.
[1] http://go.microsoft.com/fwlink/?linkid=25888
BTW, I got my PAG Button. Do you? *
Remember, web site buttons are the bumper stickers of the 21st Century.
* Yes, it is Promote The PAG week @ SecureCoder
[Now Playing: Chalte Chalte (2) - Mohabbatein]
Tuesday, April 13, 2004
The spring issue of the Microsoft Architects Journal is out. Topics covered include:
- Metropolis (Online)
- Service Oriented Architecture – Considerations for Agile Systems
- Service Oriented Architecture Implementation Challenges
- Business Patterns for Software Engineering Use – Part 1
- Messaging Patterns in Service Oriented Architecture – Part 1
UPDATE: A comment that was left by a reader (PReddy) seemed to indicate that the direct link above did not work. I just tried it, and it worked for me. Another option you may want to try is to follow the link from the Architects Journal Home page @
http://msdn.microsoft.com/architecture/journal/ and see if that works for you (if the above link does not work)
[Now Playing: Maahi Ve - Kal Ho Naa Ho]
Thursday, February 12, 2004
The TechEd site recently posted the session names for the Architecture Track as well as the abstract for the Architecture PreConference Session. Get more info @ the TechEd home page.
[DevHawk]
The track topics are pretty extensive. Should be very educational.
- .NET and J2EE Strategies for Interoperability
- Bridging the Gap Between IT and Application Developers
- Building Applications with the patterns & practices Application Blocks
- Data in Services Oriented Architecture
- Defense in Depth with Microsoft Systems Architecture
- Enterprise Information Integration and Entity Aggregation in Service Oriented Architecture
- Enterprise Solution Patterns
- Exploring A Service Oriented Architecture: Thomson Financial Case Study
- Factoring in a Service Oriented World
- Improving Application Performance and Scalability
- Managing Service Oriented Architecture Using Existing Platforms
- Metropolis : Building Applications in the Service-Oriented Enterprise
- Metropolis: Envisioning the Service-Oriented Enterprise
- Metropolis: Using Information in the Service-Oriented Enterprise
- Office Developer: Architecting Solutions and Managed Service Layers
- Realizing Services Oriented Architectures
- Service-Oriented Business Architecture: A Conceptual Model
- Services For Unix: Migrating and Extending UNIX Applications on Windows
- Smart Client Architecture Principles
- Tools for Architecture: Designing for Deployment
- Tools for Architecture: Developing Service Oriented Systems
- Visual Studio "Whidbey": Managing the Enterprise Build Process with MSBuild
- Windows Forms: Architecting and Building Smart Client Applications in .NET
[Now Playing: Mere Khwabon Main - Dilwale Dulhania Le Jayenge]