
Sunday, April 1, 2007

There is an interesting article over at Thomas Erl's SOA Magazine site by Cory Isaacson titled "High Performance SOA with Software Pipelines". In the article, Isaacson notes that "Distributed service-oriented applications, by their nature, take advantage of multi-CPU and multi-server architectures. However, for software applications to truly leverage multi-core platforms, they must be designed and implemented with an approach that emphasizes concurrent processing".

He identifies and explains current approaches to dealing with concurrency in applications such as:

  • Symmetric Multi-Processing, in which an SMP server operating system manages the workload distribution across multiple CPUs.
  • Automated Network Routing in which service requests are routed to individual servers in a pool of redundant servers.
  • Clustering Systems in which multiple servers share common resources over a private "cluster interconnect".
  • Grid Computing in which applications are divided into sub-tasks that can execute independently.

... as well as the various limitations associated with the current approaches. He also identifies a new approach, based on a methodology called software pipelines, which can enable businesses to achieve the benefits of concurrent processing without major redevelopment effort. I found it to be fascinating reading as I personally have not done much work with multi-threaded applications or grid computing.
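To make the software pipelines idea a bit more concrete, here is a minimal sketch of the general pattern: work flows through a series of stages, each stage runs on its own thread, and items are handed downstream through queues so that independent requests are processed concurrently across stages. This is only an illustration of the concept under my own assumptions; the stage functions and wiring here are hypothetical and are not Isaacson's actual methodology or API.

```python
import threading
import queue

SENTINEL = object()  # marks the end of the input stream

def stage(func, inbox, outbox):
    """Consume items from inbox, apply func, and push results downstream."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)  # propagate shutdown to the next stage
            break
        outbox.put(func(item))

def run_pipeline(items, funcs):
    """Wire one thread per stage, feed items through, collect the results."""
    queues = [queue.Queue() for _ in range(len(funcs) + 1)]
    threads = [
        threading.Thread(target=stage, args=(f, queues[i], queues[i + 1]))
        for i, f in enumerate(funcs)
    ]
    for t in threads:
        t.start()
    for item in items:
        queues[0].put(item)
    queues[0].put(SENTINEL)
    results = []
    while True:
        out = queues[-1].get()
        if out is SENTINEL:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

# Hypothetical three-stage pipeline: enrich, transform, then format.
results = run_pipeline(range(5), [lambda x: x + 1, lambda x: x * 2, str])
print(results)  # ['2', '4', '6', '8', '10']
```

The appeal of the pattern is that each stage is ordinary sequential code; the concurrency lives in the wiring between stages, which is why it promises parallelism without a major redevelopment effort.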

As an aside, the challenge of programming for multi-core chips, and how to make it easier for developers, was a key theme in Bill Gates's keynote address at the recent Microsoft MVP Summit.

As I have noted before, performance engineering to me is something that should be considered in an end-to-end manner. Currently, in the web service world, there are folks tackling this problem with hardware (e.g., XML security gateways) and with binary encoding approaches, but IMHO not a lot of work is being done to provide best practices for optimizing the design of the services themselves.

An exception to the rule, and an excellent source of information on performance engineering that I always point to, is the first three chapters of the PAG Perf & Scale book. So, for your reading pleasure, let me point to them once more:

Just to be clear, it does not matter whether you are in the .NET camp, the Java camp, or any other language/platform camp; the information above is equally applicable and relevant.


4/1/2007 2:27 PM Eastern Daylight Time  |  Comments [1]
Monday, April 23, 2007 5:46:51 PM (Eastern Daylight Time, UTC-04:00)
One problem I have with this is being dependent on so many servers being up and running all the time. Especially in development environments where things go down a LOT. If you have 20 services all on a different JVM (server), when one of em goes down your application may be in trouble and your development team may be stalled. I know this may not be exactly related to your post but it is something that I struggle with related to SOA on multiple servers.