
Sunday, March 13, 2011

In many conversations I have had with folks who potentially need the services of an Identity Oracle, especially regarding how it could help with assurances of identity, I have encountered a two-part reaction that I find to be a very interesting indicator of what we, as a community, need to focus on to make this concept real and viable.

The first part of the reaction is typically about the “many security holes” in the concept and the “changes to existing business processes” needed to leverage the capability.  The second part of the reaction takes place a bit later, when we get into discussing identity proofing and bring up the example of US Government PIV cards (Smart Cards issued to US Government employees and contractors) or non-federally issued PIV-I cards, both of which have a transparent, publicly documented, and consistent identity proofing process. The same set of folks are then quite comfortable with potentially changing their business processes to accept the PIV/PIV-I card as a proxy for identity proofing that has been done by someone else.

What that combination of reactions confirmed for me is that the issue is not about technology/security holes (since the Identity Oracle is a business and NOT a technology) or about changing business practices (since the second part of the reaction requires that change as well), but about the level of comfort and confidence one can place in the relationships between the Identity Oracle and the entities that need to interact with it.  I prefer not to use the word “Trust” in this context because the definition is ambiguous at best (see Gunnar Peterson’s “Lets Stop ‘Building Naïveté In’ - Wishing You a Less Trustful 2011” blog post), but would instead like to focus on the contractual aspects of what can be articulated, measured and enforced, as both Gunnar in his blog and Scott David in my earlier “Identity Oracles – A Business and Law Perspective” blog post noted.

This tension between the technical and the business also came up in the reactions (@independentid, @NishantK, @IDinTheCloud, @mgd) to my original post on Identity Oracles, so I would like to explicitly address it in this post.

How does the traditional “pure tech” Identity and/or Attribute Provider operate and what if any are the constraints placed upon it?

From a technical interaction perspective, you have:

  1. The Person presents to the Relying Party some token that binds them to a unique identifier
  2. The Relying Party uses that unique identifier to call out to the Identity/Attribute Provider to retrieve attributes of the Person
  3. The Identity/Attribute Provider interacts with Authoritative Sources of information about the Person and returns the requested information to the Relying Party
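As a rough sketch, the three steps above might look like the following. All of the class and field names here are invented for illustration (this is not a real provider API); the point is that the raw attributes themselves flow to the Relying Party:

```python
# Hypothetical sketch of the traditional identity/attribute-provider flow.
# Names and record layout are illustrative only.

AUTHORITATIVE_SOURCE = {
    "user-1234": {"name": "Jane Doe", "dob": "1980-04-02", "ssn": "123-45-6789"},
}

class AttributeProvider:
    """Returns raw attributes about a Person to any contracted Relying Party."""
    def get_attributes(self, unique_id):
        # The provider hands the Relying Party the Person's actual PII.
        return AUTHORITATIVE_SOURCE[unique_id]

class RelyingParty:
    def __init__(self, provider):
        self.provider = provider

    def handle_transaction(self, token):
        unique_id = token["subject"]                     # step 1: token binds Person to an identifier
        attrs = self.provider.get_attributes(unique_id)  # step 2: callout for attributes
        return attrs                                     # step 3: RP now holds the raw PII

rp = RelyingParty(AttributeProvider())
pii = rp.handle_transaction({"subject": "user-1234"})
print("ssn" in pii)  # True: the Relying Party sees the Person's private attributes
```

Note that nothing in this flow structurally prevents the provider or the Relying Party from retaining or reselling what they receive; any restriction has to come from the contractual relationships discussed next.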

Now let us look at this from a non-technical interaction perspective:

  • A contractual relationship exists between the Authoritative Sources and the Identity/Attribute Provider
  • A contractual relationship exists between the Identity/Attribute Provider and the Relying Party
  • A contractual relationship exists between the Person and the Relying Party
  • NO contractual relationship exists between the Person and Identity/Attribute Provider

Privacy Implications

  • The Relying Party typically click-wraps its privacy and information-release terms in its interactions with the Person
  • The identity/attribute provider, as a business entity which needs to make money, is dependent on Relying Parties for its revenue stream
  • The identity/attribute provider, as the entity in the middle, has visibility into the transactions that are conducted by the Person and has significant financial pressure on it to monetize that information by selling it to third parties (or even to the Relying Party). For more information on this extremely sophisticated and lucrative market in private information, please read the recent series of investigative articles from the Wall Street Journal.
  • Given the lack of a contractual relationship between the Person and the Identity/Attribute Provider, the Person has little to no visibility into, or control over, how this transactional information, which can be used to build a very detailed profile of the Person, is used.

How does an Identity Oracle operate and what if any are the constraints placed upon it?

From a technical interaction perspective, you have:

  1. Person establishes a relationship with the Identity Oracle, which verifies their identity and potentially other information about them via its relationship to Authoritative Sources. The Identity Oracle provides the person with token(s) that allow the person to vouch for their relationship with the Identity Oracle in different contexts (Potentially everything from a Smart Card when you need very high assurances of identity to some token that asserts something about the person without revealing who they are)
  2. When the Person needs to conduct a transaction with the Relying Party, they present the appropriate token, which establishes their relationship to the Identity Oracle
  3. The Relying Party asks the Identity Oracle, “Am I allowed to offer service X to the Person with token Y from you under condition Z?” The Identity Oracle answers “Yes” or “No”
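For contrast with the traditional flow, here is a minimal sketch of the Oracle interaction. Again, every name here is invented for illustration; the essential difference is that only a boolean answer ever leaves the Oracle:

```python
# Hypothetical sketch of the Identity Oracle flow. Names, the token format,
# and the condition syntax are all invented for this illustration.

class IdentityOracle:
    """Answers policy questions about a Person without disclosing attributes."""
    def __init__(self, records):
        self._records = records   # PII stays inside the Oracle
        self._tokens = {}

    def enroll(self, unique_id):
        # Step 1: the Person establishes a relationship and receives a token.
        token = f"token-for-{unique_id}"
        self._tokens[token] = unique_id
        return token

    def may_offer(self, service, token, condition):
        # Step 3: "Am I allowed to offer service X to the holder of token Y
        # under condition Z?" Only a yes/no answer is returned.
        person = self._records[self._tokens[token]]
        if condition == "age>=21":
            return person["age"] >= 21
        return False

oracle = IdentityOracle({"user-1234": {"age": 34, "ssn": "123-45-6789"}})
token = oracle.enroll("user-1234")                            # step 1
answer = oracle.may_offer("open-account", token, "age>=21")   # steps 2-3
print(answer)  # True, yet the Relying Party never sees the age or the SSN
```

The design choice this sketch highlights: the Relying Party outsources the attribute check entirely, so it has nothing privacy-sensitive to store, leak, or monetize.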

Now let us look at this from a non-technical interaction perspective:

  • A contractual relationship exists between the Authoritative Sources and the Identity Oracle
  • A contractual relationship exists between the Identity Oracle and the Relying Party
  • A contractual relationship exists between the Person and the Relying Party
  • A contractual relationship exists between the Person and Identity Oracle

Privacy Implications

  • The Relying Party typically click-wraps its privacy and information-release terms in its interactions with the Person, but in many cases does not need to collect privacy-sensitive information from the Person
  • The Relying Party can potentially outsource some functions as well as transfer liability for incorrect responses to the Identity Oracle
  • The Identity Oracle, as a business entity which needs to make money, has multiple revenue streams including the Relying Party as well as the Person, not to mention value added services it can offer to the Person
  • The Identity Oracle, as the entity in the middle, has visibility into the transactions that are conducted by the Person, BUT is constrained by its contractual relationship with the Person both to protect the transactional information it has visibility into, and to provide Relying Parties only metadata about the private information it knows about the Person

Some of the critical points that bear emphasizing with regard to the Identity Oracle concept are:

  • Privacy protection of both PII and transactional information, with visibility and control by the Person
  • Allocation of responsibility and liability across Relying Parties, Identity Oracles and Persons
  • Ability to conduct transactions ranging from those requiring very high assurances of identity to those that are completely anonymous
  • Ability to conduct transactions across multiple modalities including in-person, internet/web, mobile devices and more
  • Ability to leverage existing technologies such as SAML, XACML, Smart Cards, OTPs and more

I hope that this blog post has been helpful in articulating the differences between a traditional identity/attribute provider and the identity oracle, and provides a case for the community to focus more on defining and articulating the contractual and business process aspects of the relationships of the parties involved, while simultaneously working on the supporting technology.

Tags:: Architecture | Security
3/13/2011 1:11 PM Eastern Standard Time  |  Comments [1]  |  Disclaimer  |  Permalink   
Wednesday, March 2, 2011

Reminder:  The Identity Oracle idea is NOT mine, but I have become convinced that it, or something like it, needs to exist in a healthy Identity Eco-System.  The concept was originally proposed by Bob Blakley and expanded upon by him and others at Gartner/Burton Group.  I am simply trying to gather the information that exists in a variety of places into one cohesive narrative, and adding my own perspective to move the conversation forward on this topic.

One of the aspects of the Identity Oracle is that it is not a technology but a business that proposes to address the relationship between Subjects, Relying Parties and Authoritative Sources of Information via mechanisms such as Contract Law. I am not a lawyer and I do not play one on TV. So when I had questions about the viability of the Identity Oracle from a Law and Business perspective, I pinged Scott David at K&L Gates. Scott and I have ended up at a lot of the same identity-focused events in recent months, and I have really enjoyed conversing with him about the intersection of Identity, Privacy and Law.  As someone who is passionate about those topics, and who works in the domain, he brings a critical insight to this discussion.

My request to Scott was to read my previous blog entry on Identity Oracles and answer if the concept was “… feasible or is it a Utopian vision that is a bridge too far?”  The short version of the answer that I got was:

“I agree with much of the strategy of what you suggest in the blog, but I have some comments on tactics”

But because the long version of his answer is so very thought provoking, I am posting it here, with his permission. I do take some liberties below by commenting on Scott’s words and providing external links to some of his references.

Here is Scott, in his own words:

Anil – The following are my personal comments to your blog entry. They do not reflect the views of my firm (K&L Gates LLP) or any of its clients.

I guess I would say you are "getting warmer," but there are some underlying assumptions on the legal side in the path that you outline that will likely prevent achieving internet scale through the path described.

With some changes in assumptions and design and deployment tactics, however, the market-oriented system that you contemplate can, I think, be built to accommodate the needs of global data/identity systems.

If we treat law as a technology (just as "language" is a "technology") in need of standardization, and look at law from a systems, information science, thermodynamics, AND economic incentives perspective, the following additional points quickly suggest themselves as requiring accommodation in internet scale systems.

1) You are right-on with your emphasis on contract law. Massively interoperable systems require Rules standardization (not just technical standardization) on a broad scale. The most system-relevant rules (the only ones on which system users can rely) will be those that are enforceable. Those are called legal duties. They arise in two ways: by legislation (regulation or other government action) or by contract. There is no single international legal jurisdiction (see the Peace of Westphalia, 1648), so legislation and regulation alone cannot drive standardization. The international law is the law of contracts (minimum coverage of treaties aside).

Standardized, enforceable, international contracts involving remote parties dealing in valuable intangibles/data are entered into literally every second . . .that activity takes place in the current financial markets. Existing financial and other market structures offer a great deal of insight into the likely functioning of future data/information/identity services markets. Lots to discuss here.

There is another reason to rely on contract law. Due to the limited reach of US and other sovereign nation legal jurisdiction in this context, neither the US, nor any other country, can "force" adoption of internet scale data/identity rules.

There is a solid advantage for the US (and other jurisdictions that have reliable legal/political systems), however, and it is the same one that permits U.S. financial markets to maintain ascendancy in the world markets (despite recent deflections). It is the strong "system support value" derived from the US tradition of deference to the "rule of law." To the extent that the US and other similar jurisdictions are able to "attach" their ideas (manifested in their local data/identity-system-supporting laws) of how to structure data/identity systems to the broad and deep "trust" that is placed in their respective legal/political systems worldwide, it will enhance the appeal of those systems, and the efficacy and authority of persons and institutions that are responsible for such systems.

It is for this reason, incidentally, that OIX processes were organized based on a variety of US and international trusted, developed "market" models (in a variety of self-regulatory settings), and why they focus on reliable, predictable, transparent processes, etc. Systems that offer the best solutions will enjoy the broadest adoption. Reliability and predictability are currently at a premium due to system fragmentation and so are highly desirable at present. In fact, the data/identity system harm "trifecta," i.e., "privacy," "security," and "liability," can all be seen as merely symptoms of lack of reliability and predictability, due to a lack of standardized legal structure at the core of nascent data/identity markets. Core enforceable legal structure yields reliability, predictability and a form of "trust."

I had never given much thought to this, but once Scott articulated this point, the focus on Contract Law, which can be international in scope, vs. Legislation, which is local, makes sense. There are also familiar elements here regarding the concept of “Comparability” vs. “Compliance” (where the former model is preferred) that Dr. Peter Alterman from NIH has often spoken of with regard to Identity Trust Frameworks.

2) You are correct that it is not a technology issue. I introduced the alliterative concept of "Tools and Rules" early on as a rhetorical device to put laws on par with technology in the discussion (which still takes place mainly among technologists). As a former large software company attorney once said "in the world of software, the contract is the product." He did not intend to diminish the efforts of software programmers, just to call out that providing a customer with a copy of a software product without a license that limits duplication would undermine the business plan (since without the contract, that person could make 1 million copies). Similarly, in the future markets for data/identity services, the contract is the product. This is key (see below).

As a technologist it is sometimes hard for me to admit that the truly challenging problems in the Identity and Trust domain are not technical in nature but in the domain of Policy. To paraphrase the remarks of someone I work with from a recent discussion “We need to get policy right so that we can work the technical issues”.

3) Your discussion is based on a property paradigm. There is much to discuss here. The property paradigm does not scale without first establishing some ground rules.

First, the concept of private property was adopted by the Constitution's framers who were familiar with the work of Gladstone (who believed that without property laws, every man must act as a "thief"). Those laws work very well where the asset is "rivalrous," i.e., it can only be possessed/ controlled by one person. This works for all physical assets. For intangible assets, rivalrousness requires a legal regime (e.g., copyright, patent, etc. to create the ability to exclude, since there is no asset physicality to "possess" as against all other claimants to the same asset). The analysis is then, what legal regime will work to support the interactions and transactions in the particular intangible assets involved here (be it identified as "data," "information," "identity" etc.). Data is non-rivalrous (see discussion in 5 below).

I believe that this is a "resource management" type situation (like managing riparian, aquifer, fisheries, grazing or other similar rights) that lends itself to that type of legal regime, rather than a traditional "property" regime. In this alternative, the "property" interest held by a party is an "intangible contract right," rather than a direct interest in physical property. That contract right entitles the party to be the beneficiary of one or more duties of other people to perform actions relating to data in a way that benefits the rights holder. For instance, a "relying party" receives greater benefit (and an IDP is more burdened) at LOA 3 than at LOA 2. The "value" of the contract right is measured by the value to the party benefited by the duty.

The resource management structure emphasizes mutual performance promises among stakeholders, rather than underlying property interests. Briefly, consider a river with three types of user groups (40 agricultural (irrigation) users upstream, 2 power plants midstream (cooling), and a city of 100,000 residential water users downstream (consumption and washing, etc.)). Each relies on different qualities of the water (irrigation supports plant metabolism (stomata turgidity, a hydrogen source for manufacturing complex carbohydrates in photosynthesis, etc.), power plants use water for its thermal capacity, and residents use it for supporting human metabolism (consumption) and as a fairly "universal solvent" (for washing, etc.)). When there is plenty of water in the river, there is no conflict and each user can use it freely without restriction. When there is too little water, or conflicting usage patterns, there can be conflicting interests. In that situation, it is not property interests, per se, that are applied to resolve the conflicts, but rather mutually agreed upon duties documented in standard agreements that bind all parties to act in ways consistent with the interests of other parties.

Like water, data is a resource that has many different user groups (among them data subjects, relying parties and identity providers), with needs sometimes in conflict. Notably, because data is not a physical resource, the "scarcity" is not due to physical limitation of the resource, but rather is due to the exertion of the rights of other parties to restrict usage (which is indistinguishable legally from a physical restriction).

The property paradigm can be employed for certain forms of intellectual property, such as copyrights, but those systems were not designed to accommodate large "many to many" data transfers. Arrangements such as BMI/ASCAP (which organize music licensing for public radio play, etc.) are needed to help those systems achieve scale.

In any event, there is also a question of ownership where "data" is generated by an interaction (which is most (or all?) of the time). Who "owns" data about my interactions with my friends, me or them? If both parties "own" it, then it is more of a rights regime than a "property" regime as that term is generally understood. Who owns data about my purchase transactions at the supermarket, me or the store? It takes two to tango. We will be able to attribute ownership of data about interactions and relationships to one or the other party (in a non-arbitrary fashion) only when we can also answer the question "who owns a marriage?", i.e., never. You quote Bob Blakley who speaks about "your" information. I take that to be a casual reference to the class of information about someone, rather than an assertion of a right of exclusive possession or control. If it is the latter, it seems inconsistent with the indications that the database will be an "asset" of the Identity Oracle. That separation could be accomplished through a rights regime.

There is also the linguistics based problem of "non-count nouns." Certain nouns do not have objects associated with them directly. Gold and water are good examples. I don't say "I have a gold." or I have a water." In order to describe an object, it needs a "container/object convention" ("a gold necklace" or "a glass of water.") Data is a non-count noun. When it is put in a "container" (i.e., when it is observed in a context), it becomes "information." It makes no sense for me to point to a snowbank and say "there is my snowball in that snowbank." Instead, I can pick up a handful of snow (separate it out from the snowbank) and then make that declaration. Similarly, in the era of behavioral advertising, massive data collection and processing, it makes little sense to say, "there is my personal information in that data bank" (unless the data is already correlated in a file in a cohesive way, or is an "inventory control" type number such as an SSN). It takes the act of observation to place data in the information "container."

As a result, it will take more to allow parties to exert any type of "property" interests in data (even those property interests under a contract "rights regime."). First, you need to make a data "snowball" (i.e., observe it into the status of "information") from the mass of data.

The paradigm of resource allocation allows DATA to flow, while permitting rules to measure (and restrict or charge for, etc.) information. When we talk, I will share with you the concept of when limitations, measurement, valuation, monetization might be applied. Briefly, when the data is "observed" by a party, I call it a "recognition" event. That observation will always be in a context (of the observer) and be for that observer's subjective purposes. At the point of observation, data is "elevated" to information (the "Heisenberg synapses" in your brain may be firing at this notion). It is at that point that it is the "difference that makes a difference" (to quote Bateson). The first reference to "difference" is the fact that data is carried by a "state change" in a medium. The second reference to "difference" in the Bateson quote is the fact that the data matters to the observer (it has value either monetarily or otherwise). Anyway, this data/information distinction I think lends itself to a system that can allow data to "flow" but can offer appropriate "measurement" at the point of "use," i.e., observation, that can form the basis of legal structures to value, monetize, limit, restrict, protect, etc. the information that the data contains.

This works well with context-based limitation. Ask me about the example using data held by my banker under Gramm Leach Bliley.

The resource allocation and “non-count nouns” concepts are very interesting to me and are something I need to digest, think about and explore a lot more.

4) Bilateral agreements, individually negotiated agreements won't scale. Standard form agreements are used in every market (financial, stock, commodities, electrical grid) where remote parties desire to render the behavior of other participants more reliable and predictable. Even the standardized legal rules of the Uniform Commercial Code (passed in all 50 states) offers standard provisions as a baseline "virtual interoperable utility" for various sub-elements of larger commercial markets (the UCC provides standard terms associated with sales of goods, commercial paper, negotiable instruments, etc. that have established standard legal duties in the commercial sector since the 1940s. . .and establish broad legal duty interoperability that makes information in the commercial sector "flow").

Standard form agreements permit remote parties without direct contractual privity to be assured about each other's performance of legal duties. This reduces "risk" in the environment of the organism (either individual or entity), since it makes the behavior of other parties more reliable and predictable. This saves costs (since parties don't have to anticipate as many external variables in planning), and so has value to parties. The concept of contract "consideration" is the measure of the value to a party for receiving promises of particular future behavior (legal duties) from another party.

The creation of a "risk-reduction territory" through the assignment of standardized legal duties to broad groups of participants is called a "market" in the commercial sector, it is called a "community" in the social sector, and it is called a "governance structure" in the political sector. Those duties can be established by contract or by legislation/regulation. In the present case (as noted above) contract is the likely route to the establishment of duties. Since all three sectors are using a shared resource, i.e., data, improvement of the reliability, predictability and interoperability in any one of the three sectors will yield benefits for participants in all three sectors. An example of this relationship among user groups is evidenced by the willingness of the government authorities to rely on the commercial sector for development of data/identity Tools and Rules.

Standard form agreements enable the creation of either mediated markets (such as those mediated by banks (match capital accumulation to those with borrowing needs), or brokers (match buy and sell orders), etc.), or unmediated markets (such as the use of standard form mortgages or car loan documents to enable the securitization (reselling) of receivables in those markets).

5) Centralized operation and enforcement won't scale. Steven Wright, the comedian, says that he has "the largest seashell collection in the world, he keeps it on beaches around the earth." This is amusing because it stretches the "ownership" concept beyond our normal understanding. Data is seashells. It will be impossible (or at least commercially unreasonable) to try to vacuum all (or even a large portion of) data into a single entity (whether commercial or governmental).

In fact, on page 90 of Luciano Floridi's book "Information - A very short introduction." (Oxford Press) (strongly recommended), the author notes that information has three main properties that differentiate it from other ordinary goods. Information is "non-rivalrous" (we can both own the same information, but not the same loaf of bread), "non-excludable" (because information is easily disclosed and sharable, it takes energy to protect it - how much energy?. . .see wikileaks issues), and "zero marginal cost" (cost of reproduction is negligible). Of these, the non-excludability characteristic suggests that a distributed "neighborhood watch" type system (more akin to the decentralization we observe in the innate and learned immune systems of animals), offers a path to enforcement that is probably more sound economically, politically, mathematically and thermodynamically than to attempt to centralize operation, control and enforcement. That is not to say that the "control reflex" won't be evidenced by existing commercial and governmental institutions. . .it will; it is simply to suggest that each such entity would be well advised to have "Plan B" at the ready.

This does not mean that data (even as "seashells") cannot be accessed centrally; it can due to the gross interoperability of scaled systems based on standardization of tools and rules. The key is "access rights" that will be based on enforceable, consensus-based agreement (and complementary technology standards). This analysis will naturally expand to topics such as ECPA reform, future 4th amendment jurisprudence and a host of related areas, where group and individual needs are also balanced (but in the political, rather than the commercial user group setting). The analysis of those civil rights/security-related issues will benefit from using a similar analysis to that relied upon for configuration of commercial systems, since both will involve the management of a single "data river" resource, and since the requirements imposed on private persons to cooperate with and assist valid governmental investigations will be applied with respect to the use of such commercial systems.

In this context it is critical to separate out the system harms caused by bad actors (that cause intentional harm) and negligent actors (that cause harm without intention). Intentional actors will not be directly discouraged by the formality of structured access rights, which they will likely violate with impunity just as they do now. The presence of structured, common rules provides an indirect defense against intentional actors, however, since it gives the system "1000 eyes." In other words, since much intentional unauthorized access is caused by fooling people through "social engineering" (in the online context) and "pretexting" (in the telco context), those paths to unauthorized access will be curtailed by a more standardized system that is more familiar to users (who are less likely to be fooled). Security can be built right into the rights, incentives and penalties regime (remind me to tell you about the way they handled the "orange rockfish" problem in one of the pacific fisheries). Again, there is much to discuss here as well.

Also, your business emphasis seems exactly right. Due to the energy requirements to maintain security and system integrity (resist entropy?), the system can only scale if there are incentives and penalties built into the system. Those incentives and penalties need to be administered in a way so that they are distributed throughout the system. The standardized contract model anticipates that. Ultimately, the adoption ("Opt in") curve will be derived from whether or not participation is sufficiently economically compelling for business (in their roles as IDPs, RPs and data subjects), and offers similarly compelling benefits to individuals (in similar roles). This returns the analysis to the "resource management" model.

6) As noted above, there are different user groups that use the same data resources. These include those groups in the gross categories of commercial, social and governmental users. Thus, for example, when I post to a social network a personal comment, that social network may "observe" that posting for commercial purposes. That can be conceived of as a "user group conflict" (depending on the parties’ respective expectations and “rights”) to be resolved by resort to common terms. The good news is that because all user groups are working with a common resource (data), improvement of the structuring for any one user group will have benefits for the other users of the resource as well.

In short, I agree with much of the strategy of what you suggest in the blog, but I have some comments on tactics.

There is a lot of information and many concepts here, and while much of it is something that I can map to my domain (the lack of scalability of bilateral agreements and central enforcement, and more), there are others that I have not had to deal with before, so I am slowly working my way through them. But in either case, I wanted to expose this to the larger community so that it can become part of the conversation that needs to happen on this topic.  I, for one, am really looking forward to further conversations with Scott on this topic!

Tags:: Architecture | Security
3/2/2011 10:43 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, February 27, 2011

The concept of the Identity Oracle is something that I have been giving a lot of thought to recently. It has been driven by a combination of factors including current projects, maturity of both policy conversations and technology, as well as a desire to move the art of the possible forward at the intersection of identity and privacy.  My intention is to use this blog post to provide pointers to past conversations on this topic in the community, and to use that as a foundation for furthering the conversation.

When it comes to information about people (who they are, what they are allowed to do, what digital breadcrumbs they leave during their daily travels, etc.), there exist in the eco-system both sources of information as well as entities that would find value in utilizing this information for a variety of purposes.  What will be critical to the success of the identity eco-system is to define, as a starting point, the qualities and behavior of the "entity-that-needs-to-exist-in-the-middle" between these authoritative sources of information and the consumers of such information.  I believe the Identity Oracle to be a critical piece of that entity.

So, what is an Identity Oracle?

Bob Blakley, currently the Gartner Research VP for Identity and Privacy, coined the phrase "Identity Oracle", and provided a definition in a Burton Catalyst 2006 presentation:

  • An organization which derives all of its profit from collection & use of your private information…
  • And therefore treats your information as an asset…
  • And therefore protects your information by answering questions (i.e. providing meta-identity information) based on your information without disclosing your information…
  • Thus keeping both the Relying Party and you happy, while making money.

That is as succinct a definition as I've seen in the many conversations on this topic since that time, and since I have no desire to re-invent the wheel, this is as good a starting point as any.

The key point to note here is that this is NOT technology but a business, and as such, if there is any hope for this to work, this business needs a viable business model, i.e. something that makes it money.  As Bob notes, some of the questions that need to be answered by the current eco-system denizens such as Identity Providers, Attribute Providers and Relying Parties include:

  • Paying for the Identity Provider server and the service it provides.
  • Convincing Relying Parties that they should rely on information provided by a third party (the Identity Provider) rather than maintaining identity attribute information themselves.
  • Assigning liability when a Relying Party asserts that a claimed identity attribute is incorrect.
  • Assigning liability when a subject claims that the wrong identity attribute claim was released to a Relying Party.
  • Making subjects whole when a security failure “leaks” subject identity attributes directly from the Identity Provider.
  • Assigning liability and making subjects whole when a security failure “leaks” subject identity attributes from a Relying Party.

I will add the following to the above list:

  • Making subjects whole when the Identity/Attribute Provider's desire to monetize its visibility into the transactional information across multiple Relying Parties overrides its responsibility to protect the subject's personal information.

As always, whenever something like this is proposed, there is a tendency for technologists to try and map it to technology implementations; in this case, to technologies such as Security Token Services, Claims Transformers and Agents, Minimal Disclosure Tokens and Verified Claims. In his "What the Identity Oracle Isn't" blog post, Bob provides a clear example of why such a technology-focused view is incomplete at best by walking through an example of an Identity Oracle based transaction:

A human – let’s call him “Bob” – signs up for an account with the Identity Oracle.  The Identity Oracle collects some personal information about Bob, and signs a legally binding contract with Bob describing how it will use and disclose the information, and how it will protect the information against uses and disclosures which are not allowed by the contract.  The contract prescribes a set of penalties – if Bob’s information is used in any way which is not allowed by the contract, the Identity Oracle PAYS Bob a penalty: cash money.

When Bob wants to get a service from some giant, impersonal corporation (say “GiCorp”) whose business depends in some way on Bob’s identity, Bob refers GiCorp to the Identity Oracle; GiCorp then goes to the Identity Oracle and asks a question.  The question is NOT a request for Bob’s personal information in any form whatsoever (for example, the question is NOT “What is Bob’s birthdate”). And the Identity Oracle’s response is NOT a “minimal disclosure token” (that is, a token containing Bob’s personal information, but only as much personal information as is absolutely necessary for GiCorp to make a decision about whether to extend the service to Bob – for example a token saying “Bob is over 18”).

Instead, GiCorp’s request looks like this:
“I am allowed to extend service to Bob only if he is above the legal age for this service in the jurisdiction in which he lives.  Am I allowed to extend service to Bob?”

And the Identity Oracle’s response looks like this: a simple “Yes” or “No”, with no personal information attached.

The Identity Oracle, in normal operation, acts as a trusted agent for the user and does not disclose any personal information whatsoever; it just answers questions based on GiCorp’s stated policies (that is, it distributes only metadata about its users – not the underlying data).

The Identity Oracle charges GiCorp and other relying-party customers money for its services.  The asset on the basis of which the Identity Oracle is able to charge money is its database of personal information.  Because personal information is its only business asset, the Identity Oracle guards personal information very carefully.

Because disclosing personal information to relying-party customers like GiCorp would be giving away its only asset for free, it strongly resists disclosing personal information to its relying-party customers.  In the rare cases in which relying parties need to receive actual personal data (not just metadata) to do their jobs, the Identity Oracle requires its relying-party customers to sign a legally binding contract stating what they are and are not allowed to do with the information.  This contract contains indemnity clauses – if GiCorp signs the contract and then misuses or improperly discloses the personal information it receives from the Identity Oracle about Bob, the contract requires GiCorp to pay a large amount of cash money to the Identity Oracle, which then turns around and reimburses Bob for his loss.

This system provides Bob with much stronger protection than he receives under national privacy laws, which generally do not provide monetary damages for breaches of privacy.  Contract law, however, can provide any penalty the parties (the Identity Oracle and its relying party customers like GiCorp) agree on.  In order to obtain good liability terms for Bob, the Identity Oracle needs to have a valuable asset to which GiCorp strongly desires access.  This asset is the big database of personal data, belonging to the Identity Oracle, which enables GiCorp to do its business and allows the Identity Oracle to charge for its services.

The Identity Oracle provides valuable services (privacy protection and transaction enablement) to Bob, but it also provides valuable services to GiCorp and other relying-party customers.  These services are liability limitation (because GiCorp no longer has to be exposed to private data which creates regulatory liability and protection costs for GiCorp) and transaction enablement (because GiCorp can now rely on the Identity Oracle as a trusted agent when making decisions about what services to extend to whom, and it may be able to get the Identity Oracle to assume liability for transactions which fail because the Oracle gave bad advice).

The important take-aways for me from the above are (1) the contextual and privacy-preserving nature of the question being asked and answered, (2) the allocation and assumption of liability, and (3) the redress mechanisms that rely on contract law rather than privacy legislation.
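To make the question/answer pattern concrete, here is a minimal sketch of my own (not from Bob's post; the class names, the per-jurisdiction policy table, and the age policy are all hypothetical): the Oracle holds the personal data, and the relying party learns only a boolean answer.

```python
# Illustrative sketch: an Identity Oracle answers a relying party's policy
# question ("Am I allowed to serve this subject?") without releasing the
# underlying personal data -- not even a minimal-disclosure claim.
from dataclasses import dataclass
from datetime import date

@dataclass
class Subject:
    birthdate: date
    jurisdiction: str

# Hypothetical per-jurisdiction legal-age table the relying party's policy uses
LEGAL_AGE = {"US-VA": 18, "US-NY": 21}

class IdentityOracle:
    def __init__(self):
        self._subjects = {}  # personal data never leaves this store

    def enroll(self, subject_id, subject):
        self._subjects[subject_id] = subject

    def is_allowed(self, subject_id, min_age_for):
        """Answer yes/no only; no birthdate or 'over 18' claim is disclosed."""
        s = self._subjects[subject_id]
        age = (date.today() - s.birthdate).days // 365
        return age >= min_age_for(s.jurisdiction)

oracle = IdentityOracle()
oracle.enroll("bob", Subject(birthdate=date(1990, 1, 1), jurisdiction="US-VA"))
# "GiCorp" asks its question and learns only the answer, not Bob's birthdate
answer = oracle.is_allowed("bob", lambda j: LEGAL_AGE[j])
```

Note that the relying party never sees the `Subject` record; the contract-law and liability machinery described above is what makes this boundary trustworthy in practice, not the code.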

This approach, I believe, addresses some of the issues that are raised by Aaron Titus in his “NSTIC at a Crossroads” blog post and his concepts around “retail” and “wholesale” privacy in what he refers to as the current Notice and Consent legal regime in the United States.

Currently, one of the things that I am thinking over, and having conversations with others about, is whether the Fair Information Practice Principles (FIPPs) [Transparency, Individual Participation, Purpose Specification, Data Minimization, Use Limitation, Data Quality and Integrity, Security, Accountability and Auditing], found in Appendix C of the June 2010 DRAFT release of the National Strategy for Trusted Identities in Cyberspace (NSTIC), can be adopted as the core operating principles of an Identity Oracle, and, as noted in the example above, whether these operating principles could be enforced via contract law to the benefit of the Identity Eco-System as a whole.

Tags:: Architecture | Security
2/27/2011 6:22 PM Eastern Standard Time  |  Comments [1]  |  Disclaimer  |  Permalink   
Sunday, December 12, 2010

I am doing a bit of research into what it would take to deploy Sharepoint 2010 as a DMZ-facing portal that accepts Federated Credentials.  Here are some materials I’ve come across that may help others who may be doing the same:

From MS PDC10 Presentation “How Microsoft Sharepoint 2010 was built with Windows Identity Foundation”:

Classic Authentication

  • NT Token (Windows Identity) >>> SPUser

Claims-based Authentication

  • NT Token (Windows Identity)
  • ASP.NET Forms Based Authentication (SQL, LDAP, Custom …)
  • SAML 1.1++
  >>> SAML Token (Claims Based Identity) >>> SPUser

More details regarding the above can be found at the MS Technet page on Authentication methods supported in SP2010 Foundation.

Windows Identity Foundation (WIF) which is the RP piece integrated with Sharepoint 2010 (SP2010) does NOT support the SAML Protocol. It only supports the WS-Federation Passive profile with SAML tokens for Web SSO.

The alternative, getting SP2010 to work with a SAML2 IdP, requires the deployment and use of ADFS 2:

  • Configure ADFS 2 as a SAML2 SP that accepts attributes/claims from an external SAML2 IdP
    • Define the SAML2 IdP as a SAML2 Claims Provider within ADFS 2
    • Exchange federation metadata between SAML2 IdP and ADFS 2 SP
  • Configure the WIF based application (i.e. the SP2010 application) as an RP which points to ADFS 2.0 as the Sharepoint-STS (SP-STS) to which the web apps externalize Authentication

Of course, this implies that you need to deploy another server in the DMZ that is hosting the ADFS 2 bits.

In order to configure SP2010 Authentication to work with SAML Tokens:

  1. Export the token-signing certificate from the IP-STS. This certificate is known as the ImportTrustCertificate. Copy the certificate to a server computer in the SharePoint Server 2010 farm.
  2. Define the claim that will be used as the unique identifier of the user. This is known as the identity claim. Many examples of this process use the user e-mail name as the user identifier. Coordinate with the administrator of the IP-STS to determine the correct identifier because only the owner of the IP-STS knows which value in the token will always be unique per user. Identifying the unique identifier for the user is part of the claims-mapping process. Claims mappings are created by using Windows PowerShell.
  3. Define additional claims mappings. Define which additional claims from the incoming token will be used by the SharePoint Server 2010 farm. User roles are an example of a claim that can be used to permission resources in the SharePoint Server 2010 farm. All claims from an incoming token that do not have a mapping will be discarded.
  4. Create a new authentication provider by using Windows PowerShell to import the token-signing certificate. This process creates the SPTrustedIdentityTokenIssuer. During this process, you specify the identity claim and additional claims that you have mapped. You must also create and specify a realm that is associated with the first SharePoint Web applications that you are configuring for SAML token-based authentication. After the SPTrustedIdentityTokenIssuer is created, you can create and add more realms for additional SharePoint Web applications. This is how you configure multiple Web applications to use the same SPTrustedIdentityTokenIssuer.
  5. For each realm that is added to the SPTrustedIdentityTokenIssuer, you must create an RP-STS entry on the IP-STS. This can be done before the SharePoint Web application is created. Regardless, you must plan the URL before you create the Web applications.
  6. Create a new SharePoint Web application and configure it to use the newly created authentication provider. The authentication provider will appear as an option in Central Administration when claims mode is selected for the Web application.

You can configure multiple SAML token-based authentication providers. However, you can only use a token-signing certificate once in a farm. All providers that are configured will appear as options in Central Administration. Claims from different trusted STS environments will not conflict.
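Since steps 2 through 4 are done in Windows PowerShell, a sketch of what they might look like follows. This is illustrative only: the certificate path, claim URIs, realm, provider name and sign-in URL are placeholders, so verify the cmdlet parameters against the TechNet documentation for your farm before using.

```powershell
# Sketch only -- illustrative names and URIs; adjust for your environment.
# Import the IP-STS token-signing certificate (step 1/4).
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\idp-signing.cer")
New-SPTrustedRootAuthority -Name "ExternalIdP" -Certificate $cert

# Step 2: the identity claim -- e-mail is used as the unique identifier here.
$email = New-SPClaimTypeMapping `
  -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" `
  -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming

# Step 3: an additional claim mapping (role), usable for permissioning.
$role = New-SPClaimTypeMapping `
  -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" `
  -IncomingClaimTypeDisplayName "Role" -SameAsIncoming

# Step 4: create the SPTrustedIdentityTokenIssuer for the first web app's realm.
New-SPTrustedIdentityTokenIssuer -Name "ExternalIdP" `
  -Description "SAML token-based authentication provider" `
  -Realm "urn:sharepoint:portal" -ImportTrustCertificate $cert `
  -ClaimsMappings $email,$role `
  -SignInUrl "https://idp.example.org/signin" `
  -IdentifierClaim $email.InputClaimType
```

The resulting provider then shows up as an option in Central Administration when you create the claims-mode web application (step 6).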

The SP2010 Authentication Flow then becomes:

  1. User attempts to access Sharepoint web application
  2. User redirected to Sharepoint STS
    - Validate AuthN Token (if user already has been AuthN w/ IdP)
    - Augment claims, if need be
  3. Post Token {SP-Token} to Sharepoint Web Application
  4. Extract Claims and construct IClaimsPrincipal

I still have a list of outstanding questions I am working thru, some of which are:

  • Can the built-in SP-STS do direct Authentication of X.509 Credentials for SP2010?
    • What "front-end" protocols are supported by this SP-STS? (WS-Fed Passive Profile only?)
    • Is there any MS "magic sauce" added to this SP-STS that "extends" the standards to make it work with SP2010?
    • Can the built-in SP-STS do just in time provisioning of users to SP2010? Is it needed?
  • When using ADFS 2 with SP2010, does ADFS 2 replace the built-in SP-STS or does it work in conjunction with the SP-STS? i.e. if using ADFS 2, can the built-in SP-STS be disabled?
    • Can ADFS 2 do direct Authentication of X.509 credentials?
    • Can ADFS 2 do just in time provisioning of users to SP2010? Is it needed?
  • Does this SP-STS need to be ADFS 2.0 or can it be any STS that can do SAML2 to WS-Fed token transformation on the RP side?
  • If it can be any STS, how do I register a non-Microsoft STS w/ SP2010? i.e. How do I register it as a "SPTrustedIdentityTokenIssuer"
  • Where can I find the metadata on the SP2010 side that can be exported to bootstrap the registration of a SP2010 RP App with an external IdP?

Part of the issue I am working through is the difference in terminology between Microsoft and …everyone else… :-) used to describe the same identity infrastructure components. Walking through some of the ADFS 2.0 Step-by-Step and How To Guides, especially the ones that show interop configurations with Ping Identity PingFederate and Shibboleth 2, does help, but not as much as I had hoped.  The primary limitation of the guides is that they walk through the wizard-driven UI configuration without explaining why things are being done, or providing explanations of the underlying protocols that are supported and the implementation choices that are made.

Tags:: Architecture | Security
12/12/2010 3:57 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Tuesday, December 7, 2010

Inputs to access control decisions include information about the subject, information about the resource, environmental/contextual information, and more, often expressed as attributes/claims. But how do you determine what those attributes/claims should be, especially as it relates to information about the subject?

The typical way that I have seen folks handle this is a bottom-up approach: get a whole bunch of the folks who manage and maintain directory services, lock them in a room, and throw away the key until they can come to some type of agreement on a common set of attributes everyone can live with, based on their knowledge of relying party applications. This often is not …ah… optimal.

The other approach is to start at the organizational policy level and identify a concrete set of attributes that can fully support the enterprise’s policies. My team was tasked with looking at the latter approach on behalf of the DHS Science and Technology Directorate. The driving force behind it was coming up with a conceptual model that remains relevant not just within an Enterprise but also across them, i.e. in a Federation.

A couple of my team members, Tom Smith and Maria Vachino, led the effort, which resulted in a formal peer-reviewed paper that they presented at the 2010 IEEE International Conference on Homeland Security [PPTX] last month. The actual paper is titled “Modeling the Federal User Identity, Credential, and Access Management (ICAM) decision space to facilitate secure information sharing” and can be found on IEEE Xplore.


Providing the right information to the right person at the right time is critical, especially for emergency response and law enforcement operations. Accomplishing this across sovereign organizations while keeping resources secure is a formidable task. What is needed is an access control solution that can break down information silos by securely enabling information sharing with non-provisioned users in a dynamic environment.

Multiple government agencies, including the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) are currently developing Attribute-Based Access Control (ABAC) solutions to do just that. ABAC supports cross-organizational information sharing by facilitating policy-based resource access control. The critical components of an ABAC solution are the governing organizational policies, attribute syntax and semantics, and authoritative sources. The policies define the business objectives and the authoritative sources provide critical attribute attestation, but syntactic and semantic agreement between the information exchange endpoints is the linchpin of attribute sharing. The Organization for the Advancement of Structured Information Standards (OASIS) Security Assertion Markup Language (SAML) standard provides federation partners with a viable attribute sharing syntax, but establishing semantic agreement is an impediment to ABAC efforts. This critical issue can be successfully addressed with conceptual modeling. S&T is sponsoring the following research and development effort to provide a concept model of the User Identity, Credential, and Access Management decision space for secure information sharing.

The paper itself describes the conceptual model, but we have taken the work from the conceptual stage to the development of a logical model, which was then physically implemented using a Virtual Directory which acts as the backend for an Enterprise’s Authoritative Attribute Service.

Tags:: Architecture | Security
12/7/2010 9:28 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Friday, October 22, 2010

Information Sharing and Cybersecurity are hot button topics in the Government right now and Identity, Credentialing and Access Management are a core component of both those areas. As such, I thought it would be interesting to take a look at how the US Federal Government’s Identity, Credentialing and Access Management (ICAM) efforts around identity federation map into the Authentication, Attribute Exposure and Authorization flows that I have blogged about previously.

[As I have noted before, the entries in my blog are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer, except where explicitly stated. As such, what I am about to say is simply my informed opinion, and may or may not be what the FICAM Gov't folks intend or believe]

When I think of the components of Identity Federation, I tend to bucket them into the 3 P’s: Protocol, Payload and Policy:

  1. Protocol
    What are the technical means agreed to by all parties in a federation by which information is exchanged? This will typically involve decisions regarding choices and interoperability profiles that relate to HTTP, SOAP, SAML, WS-Federation, OpenID, Information Cards etc. In the past I’ve also referred to this as the “Plumbing”. ICAM calls these “Identity Schemes”.

    Federal ICAM Support for Authentication Flows

    Federal ICAM Support for Attribute Exposure Flows

    Federal ICAM Support for Authorization Flows

  2. Payload
    What is carried on the wire? This typically involves attribute contracts that define how a subject may be defined, the additional attributes needed in order to make access control decisions etc.

    Federal ICAM Support
    ICAM remains agnostic to the payload and leaves it up to the organizations and communities of interest that are utilizing the ICAM profiles to define their attribute contracts.

    In Appendix A of the ICAM Backend Attribute Exchange* (BAE) [PDF] there was an attempt made to define the semantics of a Federal Government wide Attribute Contract but none of the attributes are required. Currently there is a Data Attribute Tiger Team that has been stood up under the ICAMSC Federation Interoperability Working Group which is working to define multiple attribute contracts that can potentially be used as part of an Attribute Exposure mechanism.
  3. Policy
    The governance processes that are put into place to manage and operate a federation as well as adjudicate issues that may come up. In the past I’ve referred to this as “Governance” but thought that Policy may be much more appropriate.

    Federal ICAM Support
    • Which protocols are supported by ICAM is governed by the FICAM Identity Scheme Adoption Process [PDF]. Currently supported protocols include OpenID, IMI and SAML 2.0.
    • FICAM, through its Open Identity Initiative, has put into place a layer of abstraction regarding the certification and accreditation of the non-Government Identity Providers (IdPs) allowed to issue credentials that can be utilized to access Government resources. This layer is known as a Trust Framework Provider. Trust Framework Providers are responsible for assessing non-Government IdPs. The process by which an organization becomes a Trust Framework Provider is known as the Trust Framework Provider Adoption Process [PDF]. Currently supported Trust Framework Providers include OIX and Kantara.

* The ICAM Backend Attribute Exchange (BAE) v1.0 [PDF] document that I am linking to here is rather out of date. The architecture components of this document are still valid, but the technical profile pieces have been OBE (Overcome By Events) and are significantly out of date. The ICAMSC Architecture Working Group is currently working on v2 of this document, incorporating the lessons learned from multiple pilots between Government Agencies/Departments as well as implementation experience from COTS vendors such as Layer 7, Vordel and BiTKOO, who have implemented BAE support in their products. Ping me directly if you need further info.

Tags:: Architecture | Security
10/22/2010 2:27 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, October 10, 2010

After the blog posts on Authentication and Attribute Exposure options in the federation of identities, this post is going to focus on putting it all together for authorization.  The caveats noted in the earlier posts apply here as well.

Authorization – Front Channel Attribute Based Access Control

  • Clear separation of security boundaries
  • Clear separation between Authentication and Authorization
  • Resource B needs attributes of Subject A to make access control decision
  • Resource B accepts Subject A mediating attribute delivery from authoritative sources to Resource B

1) Subject A’s attributes are gathered as part of the cross-domain brokered authentication Flows

2) Subject A’s attributes are presented as part of one of the cross-domain brokered authentication flows

3) PDP B makes an access control decision based on attributes that have been gathered and presented

  • While Broker A and Attribute Service A are logically separate, physical implementation may combine them.
  • While PDP B is logically separate from Resource B, physical implementation may be as an externalized PEP or internalized code

An example of this is an IdP or SP initiated Web Browser SSO in which the subject authenticates to an IdP in its own domain and is redirected to the SP. The redirect session contains both an authentication assertion and an attribute assertion. The SP validates the authentication assertion and a PEP/PDP integrated with the SP utilizes the attributes in the attribute assertion to make an access control decision. This, with minor variations, also supports user centric flows using information cards etc.
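A minimal sketch of that front-channel pattern, with hypothetical attribute names and a deliberately trivial PDP: the attributes arrive inside the SSO assertion, and the PEP/PDP integrated with the SP uses them directly.

```python
# Illustrative front-channel ABAC sketch (names are hypothetical): the SP has
# already validated the authentication assertion; the attribute assertion that
# rode along in the same redirect supplies the inputs to the PDP.
def pdp_decide(policy, attributes):
    """Tiny PDP: permit only if every attribute the policy names matches."""
    return all(attributes.get(name) == value for name, value in policy.items())

# Attributes as delivered in the attribute assertion of the SSO redirect,
# i.e. mediated by the subject rather than pulled from the source directly
assertion_attributes = {"clearance": "secret", "affiliation": "agency-a"}

resource_policy = {"clearance": "secret", "affiliation": "agency-a"}
decision = pdp_decide(resource_policy, assertion_attributes)  # PEP enforces this
```

The point of the sketch is the data flow, not the policy language; a real deployment would use a PDP evaluating XACML or similar against the assertion's attribute statement.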



Authorization – Back Channel Attribute Based Access Control

  • Clear separation of security boundaries
  • Clear separation between Authentication and Authorization
  • Resource B needs attributes of Subject A to make access control decision
  • Resource B requires delivery of Subject A attributes directly from authoritative sources

Subject A is authenticated using one of the cross-domain brokered authentication flows

1) Subject A’s access control decision has been externalized to PDP B

2) PDP B pulls attributes directly from authoritative sources and makes an access control decision based on the attributes that have been gathered

  • While Broker A and Attribute Service A are logically separate, physical implementation may combine them.
  • While PDP B is logically separate from Resource B, physical implementation may be as an externalized PEP or internalized code

An example of this flow is a Subject who authenticates in its own domain using an IdP or SP initiated Web Browser SSO or a subject who authenticates using an X.509 based Smart Card to the Resource. Once the subject has been validated, the access control decision is delegated to a PDP which pulls the attributes of the subject directly from authoritative sources using one of the supported Attribute Exposure Flows.



Provided the infrastructure exists, there is nothing stopping you from using a combination of both Front Channel and Back Channel mechanisms for ABAC. For example, you may want to have the option of the Subject mediating privacy related attribute release via the Front Channel and combine that with Enterprise or Community of Interest Type attributes pulled via the Back Channel mechanisms.
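A sketch of such a hybrid, assuming hypothetical attribute names and a stand-in for the back-channel query: the subject releases a privacy-related attribute in the front channel, the PDP pulls an enterprise attribute in the back channel, and the decision uses both.

```python
# Illustrative hybrid ABAC sketch (hypothetical names): front-channel
# attributes arrive with the subject's assertion; back-channel attributes
# are pulled by the PDP directly from an authoritative source.
def backchannel_pull(subject_id):
    # Stand-in for e.g. a SAML attribute query to an Attribute Service
    return {"clearance": "secret"}

def pdp_decide(policy, attributes):
    return all(attributes.get(k) == v for k, v in policy.items())

# Privacy-related attribute the subject chose to release in the front channel
front_channel = {"age_over_18": "true"}

# Merge both channels before evaluating the resource's policy
attributes = {**front_channel, **backchannel_pull("subject-a")}
decision = pdp_decide({"age_over_18": "true", "clearance": "secret"}, attributes)
```

The merge order is a design choice: here back-channel (authoritative) values would overwrite any front-channel value of the same name, which is usually the safer default.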

Tags:: Architecture | Security
10/10/2010 9:15 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, October 3, 2010

Continuing my series of blog posts on the options available in federating identities, which I started with Authentication, I am going to try and map out some options that are available when exposing attributes.

As noted in my earlier post on Authentication, the following caveats apply:

  • This is conceptual in nature
  • Implementation choices, whether they are architectural or technology, may drive the separation or co-location of some of the conceptual entities noted in the pictures
  • Still a work in progress…
Attribute Exposure – Organizational Query

  • Clear separation of security boundaries.
  • One or more authoritative sources of attributes for the Subject exist in the same Trust Domain
  • Trust relationship between Resource B and Attribute Service A set up before-hand and out-of-band

1) Subject A has been Authenticated in Trust Domain B

2) Resource B recognizes Subject A as from outside its domain and utilizes attributes from Attribute Service A



Attribute Exposure – Single Point of Query 1

  • Clear separation of security boundaries.
  • One or more authoritative sources of attributes for the Subject exist in multiple Trust Domains
  • Trust relationship between Resource B and Attribute Aggregator A set up before-hand and out-of-band
  • Attribute Aggregator A has knowledge and trust relationships with attribute sources both inside and outside its trust domain

1) Subject A has been Authenticated in Trust Domain B

2) Resource B recognizes Subject A as from outside its domain and utilizes attributes from Attribute Aggregator A

3-4) Attribute Aggregator A aggregates Subject A attributes from multiple authoritative sources, wherever they may reside



Attribute Exposure – Single Point of Query 2

  • Clear separation of security boundaries
  • One or more authoritative sources of attributes for the Subject exist in multiple Trust Domains
  • Resource B has outsourced attribute gathering to Attribute Aggregator B
  • Attribute Aggregator B has knowledge and trust relationships with multiple attribute sources

1) Subject A has been Authenticated in Trust Domain B

2) Resource B recognizes Subject A as from outside its domain and utilizes attributes from Attribute Aggregator B

3-4) Attribute Aggregator B aggregates Subject A attributes from multiple authoritative sources, wherever they may reside

I am most ambivalent regarding this flow because of the complexity of the moving pieces involved:

  • The multiple trust relationships that need to be managed by the attribute aggregator
  • The attribute aggregator must “know” where to go to get the attributes; but given that the subject is from a separate domain, and the aggregator may not have a close enough relationship with the subject, would it really know where to go to get them?
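That registry-lookup concern can be sketched as follows (hypothetical names; each source is a stand-in for a cross-domain attribute query): the aggregator can only consult the sources someone has registered for it, which is precisely the fragile part.

```python
# Illustrative single-point-of-query sketch: the aggregator fans out to every
# registered authoritative source and merges the results. If no source is
# registered for some attribute of a foreign subject, that attribute is
# silently missing -- the "would it really know where to go?" problem.
def aggregate(subject_id, source_registry):
    """Query each registered source for the subject; merge the attributes."""
    attributes = {}
    for domain, source in source_registry.items():
        attributes.update(source(subject_id))
    return attributes

# Stand-ins for per-domain back-channel queries
registry = {
    "domain-a-hr":  lambda sid: {"affiliation": "agency-a"},
    "domain-a-sec": lambda sid: {"clearance": "secret"},
}
attrs = aggregate("subject-a", registry)
```

Managing `source_registry` across trust domains, and keeping it complete for subjects the aggregator barely knows, is where the operational complexity lives.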




Attribute Exposure – Identity Oracle

  • Clear separation of security boundaries
  • One or more authoritative sources of attributes for the Subject exist in multiple Trust Domains
  • Resource B has engaged the services of an Identity Oracle
  • Identity Oracle has close relationship with multiple Authoritative Attribute Sources

1) Subject A has been Authenticated in Trust Domain B

2) Resource B recognizes Subject A as from outside its domain and asks appropriate question of the Identity Oracle

3-4) Identity Oracle obtains relevant Subject A attributes from multiple authoritative sources and answers the question


I am being very careful with word choices here because this is at the conceptual level and not at the implementation level.  For example, I am particular about using the phrase “utilizes attributes from …” rather than “requests attributes from …” so that the flows can accommodate both “front-channel” attribute passing as well as “back-channel” attribute passing. For example, in the “Organizational Query” flow, the physical implementation could be either a Federation Web SSO option that provides the attributes to the Relying Party/Service Provider as a browser-based SAML Attribute Assertion, or attributes requested by a PDP integrated with the Relying Party/Service Provider via a SOAP request to the Attribute Service.

Comments are welcome and would be very much appreciated.

Tags:: Architecture | Security
10/3/2010 8:20 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, September 19, 2010

In some of the conversations I’ve had recently, there has occasionally been a sense of confusion around the options available in federating identities, the separation of concerns between authentication and authorization, as well as the choices in how attributes can be passed to applications to make access control decisions.

I am in the process of putting together some material to convey the various options available to us in the current state of technology.  I am starting with authentication. Some caveats:

  • This is conceptual in nature
  • Implementation choices, whether they are architectural or technology, may drive the separation or co-location of some of the conceptual entities noted in the pictures
  • Still a work in progress…

First a definition:  A Domain is a realm of administrative autonomy, authority, or control for subjects and objects in a computing environment.  For the purposes of this discussion, a Trust Domain defines the environment in which a single authority is trusted to validate the credentials presented during authentication. (Thanks Russ!)

Authentication – Direct (Single Trust Domain)

1) The Subject attempts to access the Resource and presents a credential

2) The Resource, prior to authenticating the claimed identity presented in the credential, checks the validity of the credential. This could include: (a) Is the credential issued from a source I trust? (b) Has the credential expired? (c) Has the credential been revoked?  Once the validity of the credential is satisfied, the Resource authenticates the Subject by verifying that the Subject can prove association to the asserted identity in the credential.

Once authenticated, the resource should then verify that the identity is authorized to access the requested resource, based on existing security policy.
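The credential checks in step 2 can be sketched as follows; the issuer name, serial numbers, and revocation list are illustrative assumptions and not part of any particular implementation:

```python
# Sketch of step 2: validate the credential (issuer trust, expiry,
# revocation) before authenticating the claimed identity. All of the
# issuer/serial values below are made-up examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class Credential:
    issuer: str
    expires: date
    serial: str
    claimed_identity: str

TRUSTED_ISSUERS = {"Broker-A-CA"}   # (a) sources this resource trusts
REVOKED_SERIALS = {"0042"}          # (c) a stand-in for a CRL/OCSP check

def validate_credential(cred: Credential, today: date) -> bool:
    if cred.issuer not in TRUSTED_ISSUERS:  # (a) issued from a trusted source?
        return False
    if cred.expires < today:                # (b) expired?
        return False
    if cred.serial in REVOKED_SERIALS:      # (c) revoked?
        return False
    return True
```

Only after all three checks pass would the resource move on to verifying the Subject's association with the identity asserted in the credential (e.g. a PIN or private-key proof).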



Authentication – Brokered (Single Trust Domain)

1 and 2) The Subject presents a credential to the Broker. The Broker, prior to authenticating the claimed identity presented in the credential, checks the validity of the credential. This could include: (a) Is the credential issued from a source I trust? (b) Has the credential expired? (c) Has the credential been revoked?  Once the validity of the credential is satisfied, the Broker authenticates the Subject by verifying that the Subject can prove association to the asserted identity in the credential. Once this is done, the Subject receives a token with proof-of-authentication.

3) Subject attempts to access the Resource and presents the token from the Broker

4) The Resource validates the Subject’s token

Once validated, the resource should then verify that the identity is authorized to access the requested resource, based on existing security policy.

Types of Security Tokens

  • SAML Assertion
  • Kerberos ticket
  • Username token
  • X.509 token
  • WAM Session Token
  • Custom

Authentication – Direct (Cross-Domain/Federated)

This beastie does not exist!

Authentication – Brokered I (Cross-Domain/Federated)

  • Clear separation of security boundaries.
  • Resource B only accepts identity information vouched for by Broker B.
  • Dependency between Subject A and Broker B; if Broker B requires X.509 Certificates as a token, Subject A must have the ability to handle X.509 Certificates
  • Trust between Broker A and Broker B is usually set up before-hand and out-of-band.

1) Subject A presents a credential to Broker A. Broker A, prior to authenticating the claimed identity presented in the credential, checks the validity of the credential. This could include: (a) Is the credential issued from a source I trust? (b) Has the credential expired? (c) Has the credential been revoked?  Once the validity of the credential is satisfied, the Broker authenticates the Subject by verifying that the Subject can prove association to the asserted identity in the credential. Once this is done, Subject A receives a token with proof-of-authentication.
2) Subject A presents the token to Broker B; Given that Broker B trusts tokens issued by Broker A, Broker B issues token to Subject A that is valid in Trust Domain B
3) Subject A attempts to access the Resource B and presents the token from the Broker B
4) Resource B validates the Subject A’s token

Once authenticated, the resource should then verify that the identity is authorized to access the requested resource, based on existing security policy.
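The Brokered I token chain can be sketched as follows, with tokens reduced to simple (issuer, subject) pairs purely for illustration; real tokens would be signed SAML assertions, Kerberos tickets, etc.:

```python
# Sketch of the Brokered I cross-domain flow: Broker A authenticates
# Subject A and issues a token; Broker B, which trusts Broker A via a
# pre-established out-of-band agreement, exchanges it for a Domain-B
# token; Resource B accepts only Domain-B tokens.

TRUSTED_BY_BROKER_B = {"broker-a"}        # out-of-band trust setup

def broker_a_issue(subject: str) -> tuple:
    # Step 1: after validating credentials, issue proof-of-authentication
    return ("broker-a", subject)

def broker_b_exchange(token: tuple) -> tuple:
    # Step 2: exchange a trusted foreign token for a Domain-B token
    issuer, subject = token
    if issuer not in TRUSTED_BY_BROKER_B:
        raise PermissionError("token issued by an untrusted broker")
    return ("broker-b", subject)

def resource_b_validate(token: tuple) -> bool:
    # Step 4: Resource B only accepts tokens vouched for by Broker B
    return token[0] == "broker-b"
```

The design point the sketch captures is that Resource B never evaluates Broker A's tokens directly; all cross-domain trust is concentrated in the broker-to-broker relationship.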



Authentication – Brokered II (Cross-Domain/Federated)

  • Clear separation of security boundaries.
  • Resource B accepts identity information from external sources but “outsources” the actual authentication to Broker B.
  • Trust between Broker B and Broker A is mediated by a third party (Bridge) which is set up before-hand and out-of-band.

1) Subject A presents a credential to Broker A. Broker A, prior to authenticating the claimed identity presented in the credential, checks the validity of the credential. This could include: (a) Is the credential issued from a source I trust? (b) Has the credential expired? (c) Has the credential been revoked?  Once the validity of the credential is satisfied, the Broker authenticates the Subject by verifying that the Subject can prove association to the asserted identity in the credential. Once this is done, Subject A receives a token with proof-of-authentication.
--- Variation: Subject A has been issued credentials
2) Subject A attempts to access Resource B and presents the issued credentials (or token from Broker A)
3) Resource B externalizes the validation of Subject A’s credential or token to Broker B
4) Broker B validates credentials or token with the Bridge (Path Validation + Revocation for PKI or other mechanism with a Federation Operator)

Once authenticated, the resource should then verify that the identity is authorized to access the requested resource, based on existing security policy.


As noted above, this is Authentication only. Comments are very welcome and would be appreciated.

UPDATE (10/16/2010): Updated post language based on comments and feedback from Russ Reopell

Tags:: Architecture | Security
9/19/2010 3:16 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, September 12, 2010

My proposal of this session at IIW East was driven by the following context:

  • We are moving into an environment where dynamic, contextual, policy driven mechanisms are needed to make real time access control decisions at the moment of need
  • The inputs to these decisions are attributes/claims which reside in multiple authoritative sources
  • The authoritative-ness/relevance of these attributes is based on the closeness of the relationship that the keeper/data-steward of the source has with the subject. I would highly recommend reading the Burton Group paper (FREE) by Bob Blakley on "A Relationship Layer for the Web . . . and for Enterprises, Too”, which provides very cogent and relevant reasoning as to why the authoritativeness of attributes is driven by the relationship between the subject and the attribute provider
  • There are a set of attributes that the Government maintains throughout their lifecycle, on behalf of citizens, that have significant value in the multiple transactions a citizen conducts. As such, is there a need for these attributes to be provided by the government for use, and is there a market that could build value on top of what the government can offer?

Some of the vocal folks at this session, in no particular order, included (my apologies to folks I may have missed):

  • Dr. Peter Alterman, NIH
  • Ian Glazer, Gartner
  • Gerry Beuchelt, MITRE
  • Nishant Kaushik, Oracle
  • Laura Hunter, Microsoft
  • Pamela Dingle, Ping Identity
  • Mary Ruddy, Meristic
  • Me,  Citizen :-)

We started out the session converging on (an aspect of) an Identity Oracle as something that provides an answer to a question but not an attribute. The classic example of this is someone who wishes to buy alcohol, which is age restricted in the US. The question that can be asked of an Oracle would be "Is this person old enough to buy alcohol?" and the answer that comes back is "Yes/No", with the Oracle handling all of the heavy lifting on the backend regarding state laws that may differ, preservation of Personally Identifiable Information (PII), etc.  Contrast this to an Attribute Provider, to whom you would be asking "What is this person's birthday?" and which releases the PII itself.
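The contrast between the two interfaces can be sketched as follows; the birthday, the state drinking-age table, and the function names are all illustrative assumptions:

```python
# Sketch contrasting an Attribute Provider (releases the PII itself) with
# an Identity Oracle (answers only Yes/No, keeping the birth date and the
# per-state rules behind its interface). All data below is made up.
from datetime import date

_BIRTHDAYS = {"alice": date(1990, 6, 1)}   # PII held behind the interface
_DRINKING_AGE = {"VA": 21, "MD": 21}       # oracle's backend policy table

def attribute_provider_birthday(person: str) -> date:
    # Attribute Provider style: "What is this person's birthday?"
    return _BIRTHDAYS[person]              # releases PII to the caller

def oracle_old_enough_to_buy_alcohol(person: str, state: str, today: date) -> bool:
    # Identity Oracle style: "Is this person old enough to buy alcohol?"
    born = _BIRTHDAYS[person]
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return age >= _DRINKING_AGE[state]     # releases only Yes/No
```

The relying party calling the oracle never learns the birth date, only the answer to the specific question it is entitled to ask.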

It was noted that the Government (Federal/State/Local/Tribal) is authoritative for only a finite number of attributes, such as Passport #, Citizenship, Driver's License, Social Security Number, etc., and that the issue at present is that there does not exist an "Attribute Infrastructure" within the Government. The Federal ICAM Backend Attribute Exchange (BAE) is seen as a mechanism that will move the Government along on this path, but while there is clarity around the technical implementation, there are still outstanding governance issues that need to be resolved.

There was significant discussion about Attribute Quality, Assurance Levels and Authoritativeness. In my own mind, I split them up into Operational Issues and Governance Principles.  On the Operational Issues side, existing experiences with attribute providers have shown the challenges that exist around the quality of data and the service level agreements that need to be worked out and defined as part of a multi-party agreement rather than bi-lateral agreements. On the Governance Principles side, there are potentially two philosophies for how to deal with authoritativeness:

  1. A source is designated as authoritative or not and what needs to be resolved from the perspective of an attribute service is how to show the provenance of that data as coming from the authoritative source
  2. There are multiple sources of the same attribute and there needs to be the equivalent of a Level of Assurance that can be associated with each attribute

At this point, I am very much in camp (1) but, as pointed out at the session, this does NOT preclude the existence of second party attribute services that add value on top of the services provided by the authoritative sources. An example of this is the desire of an organization to do due diligence checks on potential employees. As part of this process, they may find value in contracting the services of a service provider that aggregates attributes from multiple sources (some gov't provided and others not) and provides them in an "Attribute Contract" that satisfies their business need. Contrast this to them having to build the infrastructure, capabilities and business agreements with multiple attribute providers. The second party provider may offer higher availability and a more targeted Attribute Contract, but with the caveat that some of the attributes they provide may be 12-18 hours out-of-date.  Ultimately, it was noted that all decisions are local and that decisions about factors such as authoritativeness and freshness are driven by the policies of the organization.

In a lot of ways, in this discussion we got away from the perspective of the Government as an Identity Oracle and focused on it more as an Attribute Provider. A path forward seemed to be more around encouraging an eco-system that leverages attribute providers (Gov't and others) to offer "Oracle Services", whether from the Cloud or not. As such, the Oracle on the one end has a business relationship with the Government, which is the authoritative source of attributes (because of its close relationship with the citizen), and on the other end has a close contractual relationship with organizations, such as financial service institutions, that leverage its services. This, I think, makes the relationship one removed from what was originally envisioned as an Identity Oracle.  This was something that Nishant brought up after the session in a sidebar with Ian and myself.  I hope that there is further conversation on this topic.

My take away from this session was that there is value and a business need in the Government being an attribute provider, that technical infrastructure is being put into place that could enable this, and that while many issues regarding governance and quality of data remain to be resolved, there is a marketplace and opportunity for Attribute Aggregators/Oracles that would like to participate in this emerging identity eco-system.

Raw notes from the session can be found here courtesy of Ian Glazer.

Tags:: Architecture | Security
9/12/2010 2:00 PM Eastern Daylight Time  |  Comments [1]  |  Disclaimer  |  Permalink   
Thursday, August 12, 2010

There has been a great deal of excitement about the US Federal Government's ICAM initiative, which provides for the development of Trust Frameworks (and providers of same) and has resulted in the emergence of identity providers who can issue credentials to citizens that can be used to gain access to Government websites/applications/relying parties. In all of the discussions surrounding these efforts, the focus has been on leveraging existing OpenID, Information Card or other types of credentials issued by commercial or educational organizations to access Government resources.

But, is that all we want from our Government?

In this blog posting, I am going to consciously side-step the concept of the Government as an Identity Provider. In the United States at least, much more thoughtful people than I have discussed, debated and argued about the feasibility of this and I do not believe that I can add much value here. The general consensus to date seems to be that the value proposition around the concept of a "National Identity Card" has many challenges to overcome before it is seen as something that is viable in the US. Whether this is true or not, I leave to others to ponder.

But what about the US Government vouching for the attributes/claims of a person that they are already managing with our implicit or explicit permission?

My last blog post "The Future of Identity Management is...Now" spoke to the pull-based future of identity management:

  • ...
  • "The input to these decisions are based on information about the subject, information about the resource, environmental/contextual information, and more, that are often expressed as attributes/claims.
  • These attributes/claims can reside in multiple authoritative sources where the authoritative-ness/relevance may be based on the closeness of a relationship that the keeper/data-steward of the source has with the subject."
  • ...

There are certainly attributes/claims for which the US Government has the closest of relationships with its citizens and residents, and for which it as such remains the authoritative source:

  • Citizenship - State Department
  • Address Information - Postal Service
  • Eligibility to Work in the US - Department of Homeland Security
  • Eligibility to Drive - State Government DMVs
  • More...

I may be wrong about which agency is responsible for what, but I hope you see my point. There are some fundamental attributes about a person that, in the US, are managed through their life-cycle by the Government, whether Federal or State.

I firmly believe, as someone who has been involved in demonstrating the feasibility of pull based identity architectures for delivering the right information to the right person at the moment of need using current commercial technologies and standards, that we have reached a point in time where the maturity of approaches and technologies such as the Federal ICAM Backend Attribute Exchange and the Identity Meta-system, combined with the willingness of the Government to engage with the public in the area of identity, makes it time to have a discussion about this topic.

The questions are definitely NOT technical in nature but are more around need and interest, feasibility and value with a heavy infusion of privacy. Some initial questions to start the conversation rolling would be:

  • What are a core set of attributes that can serve as a starting point for discussion?
  • Who would find value in utilizing them? How is it any better than what they have in place right now?
  • What are the privacy implications of specific attributes? How can they be mitigated (e.g. ask "Is this person old enough to buy alcohol?" vs. "What is your birthday/age?")
  • Liability in case of mistakes
  • How would the Government recoup some of the costs? We pay for passport renewals, we pay for driver's license renewals; don't expect this to come for free
  • Much, much more....

I would be curious to find out if there is any interest in this topic and, if so, what your reactions are. Given that the next Internet Identity Workshop is for the first time going to be held on the East Coast (Washington DC) on September 9-10 with a focus on "Open Identity for Open Government", and given its un-conference nature, I was going to propose this as a topic of discussion.

UPDATE: Ian Glazer, Research Director for Identity and Privacy at Gartner has agreed to tag team with me on this topic at IIW in DC. Ian's research and interests sit at the very important intersection of Identity and Privacy, and I think he will bring that much needed perspective to this conversation.

He also thought that the topic should be more correctly termed "Government's role as an Oracle" rather than as an Attribute Provider, and since I agree, that will more than likely end up being the topic.

To see what is meant by an Identity Oracle and what it is NOT, read this and this blog post by Bob Blakley.

Tags:: Architecture | Security
8/12/2010 8:43 AM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Tuesday, August 3, 2010

The Gartner/Burton Group conference has a very high signal to noise ratio and is one that I was fortunate to present at this year. I spoke in my role as the Technical Lead for DHS Science & Technology Directorate's Identity Management Testbed about how we are taking the Federal ICAM Backend Attribute Exchange Interface and Architecture Specification from Profile to Usage.

The biggest buzz in the Identity Management track, where I spent most of my time, was around the “pull” based architecture that Bob Blakley and the rest of the Burton crew have been writing and speaking about for a while as being the future of Identity Management. The key takeaways for me on this topic are:

  • We are moving to an era where dynamic, contextual, policy driven mechanisms are needed to make real time access control decisions at the moment of need.
  • The policy driven nature of the decisions require that the decision making capability be externalized from systems/applications/services and not be embedded within and that policy be treated as a first class citizen.
  • The input to these decisions are based on information about the subject, information about the resource, environmental/contextual information, and more, that are often expressed as attributes/claims.
  • These attributes/claims can reside in multiple authoritative sources where the authoritative-ness/relevance may be based on the closeness of a relationship that the keeper/data-steward of the source has with the subject.
  • The relevant attributes are retrieved (“pulled”) from the variety of sources at the moment when a subject needs to access a system and are not pre-provisioned into the system.
  • Standards! Standards! Standards! All of the moving parts here (finding/correlating attributes, movement of attributes across organizational boundaries, decision control mechanisms etc.) need to use standards-based interfaces and technologies.

Potential implementation technologies proposed include virtual directories as mechanisms that can consolidate and correlate across multiple sources of attributes, standards such as LDAP(S), SAML and SPML as the plumbing standards, and External Authorization Managers (“XACMLoids”) as decision engines.

What was interesting and relevant to me is that the US Federal Government, via the ICAM effort, as well as the Homeland Security, Defense and other communities, have embraced this viewpoint for a while, are putting into place the infrastructure to support it at scale, and have working implementations in use.

In particular, my presentation was about how we are working on an information sharing effort between two organizations who need to collaborate and share information in the event of a natural or man-made disaster, where there is no way we could pre-provision users since we won't know who those users are until they try to access systems. Our end-to-end implementation architecture really reflects pretty much everything noted in the Burton vision of the future. Relevant bits from the abstract:

The Backend Attribute Exchange (BAE) Interface and Architecture Specifications define capabilities that provide for both the real time exchange of user attributes across federated domains using SAML and for the batch exchange of user attributes using SPML.

The DHS Science & Technology (S&T) Directorate, in partnership with the DOD Defense Manpower Data Center (DMDC), profiled SAML v2.0 as part of an iterative proof of concept implementation. The lessons learned and the profiles were submitted to the Federal CIO Council’s Identity, Credentialing and Access Management (ICAM) Sub-Committee and are now part of the Federal Government's ICAM Roadmap as the standardized mechanism for Attribute Exchange across Government Agencies […]

This presentation will provide an overview of the BAE profiling effort, technical details regarding the choices made, vendor implementations, usage scenarios and discuss extensibility points that make this profile relevant to Commercial as well as Federal, State, Local and Tribal Government entities.

In our flow there is a clear separation of concerns between Authentication and Authorization and in the language of my community, the subject that is attempting to access the Relying Party application is an “Unanticipated User” i.e. a subject that is from outside that organization who has NOT been provisioned in the RP Application.

  1. There is an organizational access control policy that is externalized from the application via the Externalized Authorization Manager (EAM) and that is dynamic in nature (“Allow access to user if user is from organization X, has attributes Y and Z and the current environment status is Green”).
  2. The subject is identified as being from outside the organization, is authenticated and an account is created in the system. The subject has no roles, rights or privileges within the system.
  3. The EAM pulls the attributes that are needed from external (to organization) sources to execute the access control policy and based on a permit decision grants access to resources that are allowed by policy.
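The externalized policy in step 1 can be sketched as a simple decision function evaluated by the EAM over the pulled attributes; the organization name, attribute names and values below are hypothetical stand-ins:

```python
# Sketch of an externalized access control decision: "Allow access if user
# is from organization X, has attributes Y and Z, and the current
# environment status is Green". All attribute names/values are made up.
def eam_decide(subject_attrs: dict, environment_status: str) -> str:
    policy_ok = (
        subject_attrs.get("organization") == "Org-X"
        and subject_attrs.get("role") == "emergency-responder"  # attribute Y
        and subject_attrs.get("clearance") == "secret"          # attribute Z
        and environment_status == "Green"                       # env context
    )
    return "Permit" if policy_ok else "Deny"
```

In a real deployment this logic would live in an XACML policy evaluated by the PDP, with the attributes pulled from the external sources at decision time rather than passed in directly.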

All of this, BTW, is taking place using existing standards such as SAML and XACML and technologies such as Virtual Directories, XML Security Gateways, Externalized Access Management solutions etc. This works now using existing technology and standards and gets us away from the often proprietary, connector-driven, provisioning-dependent architectures and moves us to something that works very well in a federated world.

To us this is not the future of Identity Management. This is Now!

Tags:: Architecture | Security
8/3/2010 11:04 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Saturday, March 13, 2010

At a meeting yesterday Judy Spencer, co-chair of the Federal CIO Council ICAMSC, briefed that NIST had recently re-released Special Publication 800-73 [PDF] to account for PIV-I Card issuance.  These would be Smart Cards that can be issued by Non-Federal Issuers and can potentially be trusted by US Government Relying Parties.

The relevant bits are in Section 3.3 of NIST SP 800-73-3 (Quoting below so that I can easily reference them in the future):

3.3    Inclusion of Universally Unique IDentifiers (UUIDs)

As defined in [10], the presence of a Universally Unique IDentifier (UUID) conformant to the specification [11] is required in each identification card issued by Non-Federal Issuers, referred to as  “PIV Interoperable” (PIV-I) or “PIV Compatible” (PIV-C) cards.  The intent of [10] is to enable issuers to issue cards that are technically interoperable with Federal PIV Card readers and applications, and that may be trusted for particular purposes through a decision of the relying Federal Department or Agency.  Because the goal is interoperability of PIV-I and PIV-C with the Federal PIV System, the technical requirements for the inclusion of the UUID document are specified in this document. To include a UUID identifier on a PIV-I, PIV-C, or PIV Card, a credential issuer shall meet the following specifications for all relevant data objects present on an issued identification card.

  1. If the card is a PIV-I or PIV-C card, the FASC-N in the CHUID shall have Agency Code equal to 9999, System Code equal to 9999, and Credential Number equal to 999999, indicating that a UUID is the primary credential identifier.  In this case, the FASC-N shall be omitted from the certificates and CMS-signed data objects. If the card is a PIV Card, the FASC-N in the CHUID shall be populated as described in Section 3.1.2, and the FASC-N shall be included in authentication certificates and CMS-signed data objects as required by FIPS 201.
  2. The value of the GUID data element of the CHUID data object shall be a 16-byte binary representation of a valid UUID[11]. The UUID should be version 1, 4, or 5, as specified in [11], Section 4.1.3.
  3. The same 16-byte binary representation of the UUID value shall be present as the value of an entryUUID attribute, as defined in [12], in any CMS-signed data object that is required to contain a pivFASC-N attribute on a PIV Card, i.e., in the fingerprint template and facial image data objects, if present.
  4. The string representation of the same UUID value shall be present in the PIV Authentication Certificate and the Card Authentication Certificate, if present, in the subjectAltName extension encoded as a URI, as specified by [11], Section 3.

The option specified in this section supports the use of UUIDs by Non-Federal Issuers.  It also allows, but does not require, the use of UUIDs as optional data elements on PIV Cards.  PIV Cards must meet all requirements in FIPS 201 whether or not the UUID identifier option is used; in particular, the FASC-N identifier must be present in all PIV data objects as specified by FIPS 201 and its normative references.  PIV Cards that include UUIDs must include the UUIDs in all data objects described in (2) through (4).
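The two representations of the UUID required above (the 16-byte binary value for the GUID element of the CHUID, and the string form carried as a URI in the certificates' subjectAltName) can be sketched with Python's standard uuid module:

```python
# Sketch of the two UUID representations SP 800-73-3 calls for: the
# 16-byte binary value placed in the CHUID GUID data element, and the
# string (RFC 4122 URN) form placed in the subjectAltName of the PIV
# Authentication and Card Authentication certificates.
import uuid

card_uuid = uuid.uuid4()        # version 4 is one of the permitted versions

guid_bytes = card_uuid.bytes    # 16-byte binary representation for the CHUID
san_uri = card_uuid.urn         # "urn:uuid:..." string form for the certs
```

The same UUID value flows through every data object; the spec's requirement is precisely that the binary and string forms encode one and the same identifier.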

At the site, you can also find a list of Credential Service Providers, cross-certified with the US Federal Bridge CA at Medium Hardware LOA (i.e. meets the requirement that FIPS 140 Level 2 validated cryptographic modules are used for cryptographic operations as well as for the protection of trusted public keys), who have the ability to issue PIV-I Credentials.


Tags:: Architecture | Security
3/13/2010 4:15 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, February 21, 2010

To be conformant to SPML v2 means that the SPML interface (Provisioning Service Provider / PSP) MUST:

  • Support the set of Core operations
    • a discovery operation {listTargets} on the provider
    • basic operations {add, lookup, modify, delete} that apply to objects on a target
  • Support basic operations for every schema entity that a target supports
  • Support modal mechanisms for asynchronous operations

There are additional “Standard” operations described in the OASIS SPML v2 Specification [Zip]. The key thing to keep in mind is that each operation adds a data management burden onto the provider, so the choice of whether or not to implement them should be considered very carefully.
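As an illustration, the mandatory listTargets discovery operation is a small XML request a requester sends to the PSP. The sketch below assumes the SPML v2 core namespace and uses a made-up request ID:

```python
# Sketch of an SPML v2 listTargets request, the discovery operation from
# the Core operation set. Namespace is the SPML v2 core namespace; the
# requestID value is purely illustrative.
import xml.etree.ElementTree as ET

SPML = "urn:oasis:names:tc:SPML:2:0"

request = ET.Element(f"{{{SPML}}}listTargetsRequest",
                     {"requestID": "req-001", "executionMode": "synchronous"})
xml_bytes = ET.tostring(request)
```

The provider's response enumerates the targets it manages and their schemas, which is exactly why (as noted later in this post) listTargets alone is a fairly thin capability-discovery mechanism.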

From the perspective of deployment topologies, the PSP could be deployed separately from the Target or could very well be integrated tightly with the Target e.g. an SPML compliant web service interface on a target system.

One of the frustrating items for me when enquiring about SPML support in products has been the lack of clarity and visibility around exactly what has been implemented. All too often, vendors seem to have cherry picked a chosen set of operations (whether from the Core or from the Standard list) and used that to claim SPML support. I would be very curious to see if anyone can claim full SPML v2 compliance.

A particular use case for SPML that I am currently working on deals with the “batch” movement of attributes from multiple systems to a central repository. The typical flow is as follows:

  • Per organizational policy & relationship to user, attributes are assigned in their home organization and/or business unit (Org A / Org B / …)
  • Org A must move those users and/or their attributes to a central repository (Repository X) on a regular basis
  • Repository X acts as the authoritative source of attributes for users from multiple organizations / business units and can provide those attributes to authenticated and authorized entities in both a real-time request/response mode and a sync-take-offline-use mode.

Some points to keep in mind are:

  • Org A / B / … may have, and all too often do have, their own existing identity and provisioning systems as well as associated governance processes in place.
  • The organizations and the repository may or may not be under the same sphere of control, and as such one cannot mandate the use of the same piece of provisioning software and associated connectors on both ends of the divide.
  • The systems where the organizations store the attributes of their users may not necessarily be directory based systems.
  • The Repository may or may not be a directory based system.
  • Identity / Trust / Security are, as you may imagine, rather important in these types of transactions.

To meet these needs, we are currently profiling SPML to support the Core SPML operations as well as the optional “BATCH” capability.  The “ASYNC” capability is something that we are more than likely going to support as well, as it provides a mechanism for the provider to advertise support for asynchronous operations rather than have a request for an asynch operation fail on a requester with the error “status=’failed’” and “error=’unsupportedExecutionMode’”.

Keep in mind that the end result will satisfy more than just the one use case that I noted above. In fact, it satisfies many other use cases that we have that deal with both LACS and PACS scenarios. In addition, the profile will also bring in the pieces that are noted as out of scope in the SPML standard i.e. the profiling of the security protocols that are used to assure the integrity, confidentiality and trust of these exchanges. Fortunately, we can leverage some of the previous work we have done in this space for that aspect.


2/21/2010 4:44 PM Eastern Standard Time  |  Comments [2]  |  Disclaimer  |  Permalink   
Saturday, February 13, 2010

Mark Diodati at the Burton Group kicked off this conversation in his blog post "SPML Is On Life Support..." Other folks, notably Nishant Kaushik ("SPML Under the Spotlight Again?"), Ingrid Melve ("Provisioning, will SPML emerge?") and Jeff Bohren ("Whither SPML or wither SPML?") bring additional perspectives to this conversation. There is also some chatter in the Twitter-verse around this topic as well.

As someone who has been involved in both the standards process as well as end user implementation, I have a semi-jaded perspective to offer on what it takes for vendors to implement interfaces that are standards based in their tooling/products. First of all, let it be clearly understood that Standards are beautiful things (and there are many of them) but a Standard without vendor tooling support is nothing more than shelf-ware. So in the case of Standards Based Provisioning, in order to get that tooling support, multiple things need to happen:

  • First and foremost, do NOT let a vendor drive your architecture! User organizations need to break out of the "vicious cycle" that exists by first realizing that there are choices beyond the proprietary connectors that are being peddled by vendors, and secondly by stepping up and defining provisioning architectures in a manner that prioritizes open interfaces, minimizes custom connectors and promotes diversity of vendor choice.  Map vendor technology into your architecture and not the other way around, because if you start from what a vendor's product gives you, you will always be limited by that vendor's vision, choices and motivations.
  • Bring your use cases and pain points to the Standards development process and invest the time and effort (Yes, this is often painful and time consuming!) to incorporate your needs into the base standard itself. I am finding that often the Technical Committees in Standards Organizations are proposed and driven by vendors and not end users. But in cases where there is a good balance between end users and vendors, the Standard reflects the needs of real people (The Security Services/SAML TC at OASIS often comes to mind as a good example).
  • Organizations need to incorporate the need for open standards into their product acquisition process. This needs to go beyond "Product X will support SPML" to explicit use cases as to which portions of the standard are important and relevant. Prototype what you need and be prepared to ask tough, detailed questions and ask for conformance tests against a profile of the Standard.
  • Be prepared to actively work with vendors who treat you like an intelligent, strategic partner and are willing to invest their time in understanding your business needs and motivations. These are the folks who see the strategic value and business opportunities in supporting open interfaces and standards, realize they can turn and burn quicker than the competition, and compete on how fast they can innovate and on customer satisfaction versus depending on product lock-in.  They are out there, and it is incumbent upon organizations to drive the conversation with those folks.

Moving on, let me reiterate the comments that I made on Mark's blog posting:

"The concern with exposing LDAP/AD across organizational boundaries is real and may not be resolved at the technology level. Applying an existing cross-cutting security infrastructure to a SOAP binding (to SPML) is a proven and understood mechanism which is more acceptable to risk averse organizations.

I would also add two additional points:

  1. More support for the XSD portion of SPML vs. DSML in vendor tooling. There are a LOT of authoritative sources of information that are simply NOT directories.
  2. There needs to be the analog of SAML metadata in the SPML world (or a profile of SAML metadata that can be used with SPML) to bootstrap the discovery of capabilities. The "listTargets" operation is simply not enough."
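To make that last point concrete, here is a minimal sketch (in Python, purely illustrative; element and attribute names follow my reading of the SPML 2.0 core spec) of an SPML listTargetsRequest. Note how little a requester can express: the operation enumerates targets and their supported schemas, but there is no slot for the endpoint bindings, keys, or contact information that SAML metadata carries.

```python
import xml.etree.ElementTree as ET

SPML_NS = "urn:oasis:names:tc:SPML:2:0"

def build_list_targets_request(request_id):
    """Build a minimal SPML 2.0 listTargetsRequest message.

    The request itself carries almost nothing beyond an identifier,
    which is exactly why richer, SAML-metadata-like capability
    discovery would be valuable on top of SPML."""
    ET.register_namespace("spml", SPML_NS)
    req = ET.Element(f"{{{SPML_NS}}}listTargetsRequest",
                     {"requestID": request_id})
    return ET.tostring(req, encoding="unicode")

print(build_list_targets_request("req-001"))
```

A provisioning service provider would answer this with its list of targets; everything else about the endpoint still has to be exchanged out of band today.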

While I do resonate with the "pull" model interfaces noted by Mark in his posting, I believe that exposing LDAP(S)/AD interfaces, either directly or via Virtual Directories, outside organizational boundaries is a non-starter for many organizations.

At the same time, I believe options exist in the current state of technology for a hybrid approach that incorporates the pull model while still applying a cross-cutting security infrastructure. The architecture that we are currently using combines Virtual/Meta Directory capabilities with an XML Security Gateway to provide policy enforcement (security and more) when exposed to the outside.

I will also reiterate that there needs to be more support for the XSD portion of SPML vs. DSML. A lot of the authoritative sources of user information that I am dealing with are simply not found in directory services but in other sources such as relational databases, custom web services and sometimes proprietary formats in addition to LDAP/AD.

I hope to post some of the use cases for standards based provisioning, as well as the details of some of the profiling that we are doing on SPML to satisfy those use cases, in future blog posts. Looking forward to further conversations around this topic.

2/13/2010 12:26 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Friday, August 14, 2009

I had a great time at Burton Group's Catalyst Conference this year.  I spent my time between the Identity Management, SOA and Cloud sessions, and also had an opportunity to attend the Cloud Security & Identity SIG session.

As the fast-thinking, slow-talking, and always insightful Chris Haddad notes on the Burton APS Blog (Chris... enjoyed the lunch and the conversation), "Existing Cloud Computing's momentum is predominantly focused on hardware optimization (IaaS) or delivery of entire applications (SaaS)".

But the message that I often hear from Cloud vendors is:

  • We want to be an extension of your Enterprise
  • We have deep expertise in certain competencies that are not core to your business, and as such you should let us integrate what we bring to the table into your Enterprise

... and variations on this theme.

But in order to do this, an Enterprise needs to have a deep understanding of its own core competencies, have clearly articulated its capabilities into distinct offerings, and have gone through some sort of a rationalization process for its existing application portfolio. In effect, it must have done a very good job of Service-Orienting itself!

But we are also hearing at the same time that SOA has lost its bright and shiny appeal and that most SOA efforts, with rare exceptions, have not been successful. For the record, success in SOA to me is not about building out a web services infrastructure, but about getting true value and clear and measurable ROI out of the effort.

So to me, it would appear that without an organization getting Service Orientation right, any serious attempt they make on the cloud computing end will end up as nothing more than an attempt at building a castle on quicksand.

The other point that I noted was that while there were discussions around Identity and Security of Cloud offerings (they still need to mature a whole lot more, but the discussion was there), there was little to no discussion around visibility and manageability of cloud offerings.  A point that I brought up in questions and in conversations on this topic was that while people's appetites for risk vary, one of the ways to evaluate and potentially mitigate risk is to provide more real-time visibility into cloud offerings.  If a cloud vendor's offerings are to be tightly integrated into an Enterprise, and I now have a clear dependency on them, I would very much want a clear awareness of how those offerings are behaving.

From a technical perspective, what I was proposing was something very similar in concept to the monitoring (and not management) piece of what WS-Management & WSDM brought to the table on the WS-* front: in effect, a standardized interface that all cloud vendors agree to implement, providing health and monitoring visibility to the organizations that utilize their services. In short, I do not want an after-the-fact status report e-mailed to me or pulled up on a web site; I want real-time visibility into your services that my NOC can monitor. There was a response from some vendors that they have this interface internally for their own monitoring. My response back to them is to expose it to your customers, and work within the cloud community to standardize it such that the same interface exists as I move from vendor to vendor.
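As a thought experiment, the kind of record such a standardized monitoring interface might return could look like the following sketch. Every field name here is hypothetical (no such standard existed at the time of writing); the point is only that a NOC could poll a uniform, machine-readable endpoint rather than read a vendor's after-the-fact status page.

```python
import json
from datetime import datetime, timezone

def service_health_snapshot(service_name, status, latency_ms):
    """Return a hypothetical, vendor-neutral health record of the kind
    a standardized cloud monitoring interface could expose for NOC
    polling. All field names are illustrative, not from any real spec."""
    return {
        "service": service_name,
        "status": status,              # e.g. "up", "degraded", "down"
        "avg_latency_ms": latency_ms,  # rolling average over some window
        "observed_at": datetime.now(timezone.utc).isoformat(),
    }

print(json.dumps(service_health_snapshot("queue-service", "up", 42.5)))
```

If every vendor emitted the same shape, the same NOC dashboard would work unchanged as you move from vendor to vendor, which was exactly the ask.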

8/14/2009 9:59 AM Eastern Daylight Time  |  Comments [1]  |  Disclaimer  |  Permalink   
Saturday, June 20, 2009

As part of the BAE profiling and reference implementation, we have a full test & validation suite.  Our desire has always been to make the barrier to entry for anyone using the test suites as low as possible. As such, we focused on creating our test suites using open source tooling, so that an implementer could import the test suite project into their open source testing tool, point it at their BAE implementation, run it, and get immediate feedback on whether or not their implementation conforms to the profile.

To that end, we have been using the popular and free soapUI testing tool. Unfortunately, we are running into some limitations in the tool's support for SAML 2.0. It would appear that the current soapUI implementation uses OpenSAML 1.1 and not the current OpenSAML 2.0, which supports SAML v2. In particular, this means that the following functionality relating to the testing of SAML AttributeRequest/Response is not supported:

  • Ability to digitally sign and validate attribute requests and responses using the enveloped signature method
  • Ability to utilize the <saml:EncryptedID> as a means of carrying the encrypted name identifier
  • Ability to decrypt the <saml:EncryptedAssertion> element sent by the Attribute Authority which contains the encrypted contents of an assertion

This has required us to go through some gyrations in how we implement the test suites, which makes the user experience not as smooth as we would like.
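One of those gyrations is simply detecting which SAML 2.0 encrypted constructs are present in a message before deciding how to handle it. The sketch below (illustrative only, standard-library XML parsing; a real harness would hand these elements to an XML Encryption library) shows the kind of pre-check a test suite can run when the tool itself cannot decrypt:

```python
import xml.etree.ElementTree as ET

SAML2_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def saml2_encrypted_elements(response_xml):
    """Scan a SAML response for the SAML 2.0 constructs our tooling
    must handle: <saml:EncryptedID> and <saml:EncryptedAssertion>.
    Returns the local names of the encrypted elements found, so a
    harness can flag messages needing out-of-band decryption."""
    root = ET.fromstring(response_xml)
    found = []
    for tag in ("EncryptedID", "EncryptedAssertion"):
        if root.findall(f".//{{{SAML2_NS}}}{tag}"):
            found.append(tag)
    return found

sample = (
    '<Response xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
    '<saml:EncryptedAssertion/></Response>'
)
print(saml2_encrypted_elements(sample))  # ['EncryptedAssertion']
```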

Ideally we would love to continue using soapUI going forward, but we are also on the lookout for other open source tooling that we could utilize for our testing. Suggestions and recommendations from folks who have experienced this issue and have found a resolution would be very much appreciated.

Tags:: Architecture | Security
6/20/2009 8:40 PM Eastern Daylight Time  |  Comments [2]  |  Disclaimer  |  Permalink   
Saturday, June 6, 2009

FIPS 201 defines a US Government-wide interoperable identification credential for controlling physical access to federal facilities and logical access to federal information systems.  The FIPS 201 credential, known as the Personal Identity Verification (PIV) Card, supports PIV Cardholder authentication using information securely stored on the PIV Card. Some PIV Cardholder information is available on-card through the PIV Card's external physical topology (i.e., card surface) and internal data storage (e.g., magnetic stripe, integrated circuit chip).

Other PIV Cardholder information is available off-card. Examples of off-card information, say in the First Responder & Emergency Response domain, include certifications presented by a doctor or EMT that verify their claims and allow physical and/or logical access to resources.

Accordingly, the federal government requires a standard mechanism for Relying Parties to obtain PIV Cardholder information (User Attributes) that is available off-card directly from the authoritative source (Attribute Authority). The authoritative source is the PIV Card Issuing Agency, which is the agency that issued the PIV Card to the PIV Cardholder.  The exchange of these User Attributes between backend systems is known as "Backend Attribute Exchange" (BAE). The architectural vision for the BAE can be found at (Direct link to "Backend Attribute Exchange Architecture and Interface Specification" - PDF).
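Since the profile builds on the SAML 2.0 AttributeQuery, a bare-bones sketch of such a query body may help ground the discussion. The element names below come from SAML 2.0; the subject identifier and attribute name are hypothetical, and a real BAE message adds ID/Version/IssueInstant, an Issuer, signatures, and an encrypted name identifier per the profile.

```python
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_attribute_query(subject_id, attr_name):
    """Minimal SAML 2.0 AttributeQuery body (sketch only): a Relying
    Party asks an Attribute Authority for one named attribute about
    one subject, rather than pulling a whole directory entry."""
    q = ET.Element(f"{{{SAMLP}}}AttributeQuery")
    subj = ET.SubElement(q, f"{{{SAML}}}Subject")
    nid = ET.SubElement(subj, f"{{{SAML}}}NameID")
    nid.text = subject_id
    ET.SubElement(q, f"{{{SAML}}}Attribute", {"Name": attr_name})
    return ET.tostring(q, encoding="unicode")

# e.g. a relying party asking the issuing agency for a certification claim
print(build_attribute_query("urn:uuid:1234", "urn:example:emt-certification"))
```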

I, and members of my team, have been part of a joint DHS and DOD team that has been working on a proof-of-concept implementation of the BAE in order to validate the approach, gain valuable implementation experience, and provide feedback to the relevant governance organizations within the US Federal Government. The results of our work are three-fold:

  1. A SAML2 Profile of the BAE, with both normative and informative sections, that provides concrete implementation guidance, lessons learned, and recommendations for folks seeking to support this profile
  2. Reference implementations stood up within the T&E environments of both DHS and DOD for interoperability testing
  3. Test suites that can be used by implementers to verify compliance with the profile

I am happy to report that the profile is currently at v1.0 (DRAFT) status, under external review, and that we are scheduled to give a briefing on the work to a sub-committee of the Federal CIO Council later this month. In addition, we have our reference implementations up and running and are putting the finishing touches on the Test Suites.

As someone who has participated and is participating in industry standards efforts, I am fully aware that one of the critical items for a standard to become successful is incorporation of the standard into vendor tooling. Some of the choices that we made, beyond satisfying the needed functionality, were aimed at making it as easy as possible to build in profile support by:

  • Not reinventing the wheel; leveraging the conventions and standards established by some of the fine work that has been done to date by the OASIS Security Services (SAML) TC on Attribute Query Profiles
  • Keeping the deltas as small as possible between the BAE Profile and existing profiles such as the X.509 Attribute Sharing Profile (XASP)
  • Providing LOTS of informative guidance
  • Striking a balance between making the profile generic enough to be widely used and deployable, while providing enough information in the message flow for implementers to get full value

The last item was something that we found to be critical and sometimes contentious to balance. But we would not be where we are right now had we not been informed by our actual proof-of-concept implementations. A pure paper effort would have left too many holes to patch.

We have also made an active effort to reach out to vendors, especially in the federation, entitlement management and XML security arenas, and have been gratified by their response in committing to support this profile in their tooling (In some cases, folks already have beta support baked in!). We are fully expecting to highlight and point out those folks during our out-brief later this month. If you are a vendor, want to find out what it takes to support this profile, and are interested in receiving a copy of the v1.0 DRAFT, please feel free to ping me at anil dot john at jhuapl dot edu.

This has been a pretty extensive, exciting and detailed effort, and we are very grateful for the senior-level support from both organizations.  Beyond that, it has been a blast working with some very smart people from both DHS and DOD to make this real.

Tags:: Architecture | Security
6/6/2009 2:59 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Saturday, September 13, 2008

Digital ID World 2008 is the first IdM conference that I've gone to as part of a team, and given the variety of breakout sessions we decided early on to use the divide and conquer approach based on our areas of interest and expertise.

The following are some highlights from some (not all) of the sessions that I attended and found to be interesting. As with a lot of conferences, there were some sessions that were pretty much disguised vendor pitches, which I am not even going to bother mentioning.

Keynote - Identity Assurance: A Backbone For The Identity Marketplace
by Peter Alterman - GSA, Andrew Nash - PayPal, Frank Villavicencio - Citigroup

In some ways this was a rehash of the panel on the same topic that was moderated by Mark Diodati at Burton Catalyst, but with the addition of Peter Alterman of the GSA, who tends to add a certain amount of ...ah... flair to the conversation :-)

The intent of the Liberty Identity Assurance Framework (IAF) is to develop a framework that leverages the existing work done by EAP, tScheme, US e-Auth etc. to generate an identity assurance standard that is technology agnostic but provides a consistent way of defining identity credential policy and the associated process and policy rule set.  The IAF consists of four parts: (1) Assurance Levels, (2) Assessment Criteria, (3) Accreditation and Certification Model, and (4) Business Rules. You can find out more about it in the IAF Section of the Liberty Alliance Web Site.

What interested me about the entire conversation was the leveraging of OMB M-04-04 and NIST 800-63 to define the assurance criteria, combined with the drive to create a "Liberty Alliance IAF Assurance Token" (if you will) that will be certified to mean the same thing across federations. Mr. Alterman also noted, and I hope that I interpreted this correctly, that the intent from the GSA side would be to not re-invent the wheel but to adopt the IAF framework going forward. He spoke of current inter-federation work he is involved in between NIH and the InCommon Federation that is leveraging this.

During the Q&A session, I brought up the fact that this work is directly focused on AuthN, but in general access to resources is granted based on a variety of factors, only one of which is the strength and assurance of the authentication token. The response was that the Liberty work is deliberately focusing on AuthN and considers AuthZ to be out of scope.

Keynote Presentation: State Of The Industry
by Jamie Lewis - Burton Group

Enterprise IdM is the set of business processes, and a supporting infrastructure, that provides identity-based access control to systems and resources in accordance with established policies.

  • Business trends are driving integration across processes and folks are being asked to do more with less.
  • SaaS is gaining momentum
  • Many failures in IdM projects caused by a lack of doing homework and a belief in the silver bullet product etc.
  • People manage risk, not products.
  • IdM is a means, not an end; it is about enabling capabilities.
  • The Identity Big Bang is around new ways of working, collaborating and communicating
  • Make every project an installment on the Architecture and scope the goals to around 3 years.
  • Always think about data linking and cleansing

That was the first half of the keynote, but the second half was something I found to be very fascinating and is based on work that Burton has been proposing around the idea of a "Relationship Layer for the Web"

  • AuthN and AuthZ are necessary but not sufficient
  • Centrism of any kind does NOT work
  • Lessons from social science on trust, reciprocity, reputation etc.
  • The future of identity is relationships
  • Difference between close and distant relationships; Able to make many observations in a close relationship, so able to get good identity information. Not so for distant relationships
  • A good relationship provides value to all parties. And it is not just about rights but also obligations
  • Values like privacy etc. require awareness of relationship context
  • Systems fail if they are not "relationship-aware"
  • Difference between Custodial, Contextual and Transactional identities.
    -- Custodial Identity is directly maintained by an org and a person has a direct relationship with the org.
    -- Contextual identity is something you get from another party but there are rules associated with how that identity can be used.
    -- Transactional identity is just the limited amount of info that an RP (?) gets to complete a transaction, e.g. the ability to buy alcohol requires a person to be over 18 (?), but in a transactional relationship you would simply ask "Is this person old enough to buy alcohol?" and the answer would come back as "Yes/No". Compare this to asking "What is this person's age or birthday?", which releases a lot more info.
  • The last type of identity in effect requires the existence of what Burton Calls an "Identity Oracle" (See Bob Blakley's blog entries) that has a primary and trusted relationship with a user as well as with relying party and can stand behind (from a legal and liability perspective) the transactional identity statements that it makes.
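The transactional-identity idea above reduces to a simple predicate: the Identity Oracle holds the sensitive attribute and releases only a yes/no answer it can stand behind. A minimal sketch (the age threshold and function name are illustrative, not from the talk):

```python
from datetime import date

# Illustrative threshold; the talk's example hedged on the actual age.
LEGAL_AGE = 21

def old_enough_to_buy_alcohol(birth_date, today):
    """Answer the transactional question ("is this person old enough?")
    without releasing the underlying attribute (the birth date itself).
    An Identity Oracle would return only this boolean, backed by a
    contractual/liability relationship with the relying party."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    return age >= LEGAL_AGE

print(old_enough_to_buy_alcohol(date(1990, 6, 1), date(2008, 9, 13)))  # False
```

The relying party learns one bit instead of a birthday, which is the whole privacy argument for transactional identity.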

I found this entire topic absolutely fascinating as this is so very relevant to a lot of the work that I do around information sharing across organizations that may or may not trust each other for a variety of (sometimes very valid) reasons. Will be actively tracking this area on an ongoing basis.

The Plot To Kill Identity
by Pamela Dingle - Nulli Secundus

I really enjoyed this session by Pamela on the disconnect that currently exists between the needs of the users, what is being asked of the application vendors and the lack of a common vocabulary to express our needs such that there is a change in the same old way of doing business.

  • Need for an effort to be consistent all the way at the RFP/RFI time
  • Need a common vocabulary when requesting capability from vendors
  • Start with "Provide" and "Rely" support, i.e. the ability to choose whether a product relies on external identity services or provides its own.
  • Pamela also had a great starting set of RFI-type questions one can use. I am hoping that she will post them on her blog.

One of the questions I brought up during the Q&A session was that if I bought in to the Kool-Aid of what she discussed during the presentation (and I do), what would it take to scale the conversation to a larger audience? Bob Blakley, who was also in the audience, chimed in and noted that if Pamela wrote up a white-paper on the topic, he would help her get it published and widely distributed as well.

I would also be very interested in expanding the scope of the sample RFI questions to be grouped by product/project category (and released under an open license; Creative Commons?) so that folks like me can use them in our RFPs/RFIs as well.

There were more sessions that I attended that were interesting such as the Concordia Workshop on "Bootstrapping Identity Protocols: A Look At Integrating OpenID, ID-WSF, WS-Trust And SAML", "Using An Identity Capable Platform To Enhance Cardspace Interactions" and more..

All in all, the hallway conversations and the connections made proved to be as valuable as (or even more valuable than) the sessions themselves. I know that I found and made connections with multiple folks who work in my community and am very much looking forward to future collaborations with them and others.

Tags:: Architecture | Security
9/13/2008 4:43 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Saturday, June 2, 2007

For those of you in the Baltimore/Washington Area, this may be of interest.

Jeff Barr, the Web Services Evangelist for Amazon Web Services, is someone I know and invited out to Johns Hopkins University - Applied Physics Laboratory (JHU/APL) to give a presentation on Amazon's experience in building out and managing their infrastructure. He was gracious enough to accept and will be giving the presentation as part of the APL Colloquium.  Here are the particulars:

Building a 'Web-Scale Computing' Architecture
Wednesday June 6, 2007, 2:00 - 3:00 PM
Parsons Auditorium, JHU/APL

Jeff Barr will provide the blueprint for 'Web-Scale Computing' - enabling businesses to use Amazon Web Services to build an elastic architecture that can quickly respond to demand. Jeff’s presentation will focus on Amazon Simple Storage Service (Amazon S3), Amazon’s Simple Queue Service, and Amazon Elastic Compute Cloud (Amazon EC2) and will include real-world examples of how these services are being used singly and in combination. Amazon.com spent 12 years and over $1 billion developing a world-class technology and content platform that powers Amazon web sites for millions of customers every day. Today, Amazon Web Services exposes this technology, through 10 open APIs, allowing developers to build applications leveraging the same robust, scalable, and reliable technology that powers Amazon's business.

The APL Colloquium began in 1947. Held weekly, it is one of the longest standing technical and scientific lecture series in the Washington/Baltimore area. The goal of the Colloquium has been to bring to the Laboratory scientific scholars, technical innovators, industry leaders, government sponsors, and policy makers to inform, educate, and enlighten Laboratory staff on what is currently exciting, relevant, and of value to the work of APL.

You are more than welcome to attend as the Colloquia are open to the public. Visitor Guide/Directions can be found on the APL Colloquium web site. And if you found out about this event from this blog entry, please don't forget to stop by and say hello :-)

Tags:: Architecture
6/2/2007 10:49 AM Eastern Daylight Time  |  Comments [1]  |  Disclaimer  |  Permalink   
Sunday, April 15, 2007

"The hard problems in distributed computing are not the problems of how to get things on and off the wire. The hard problems in distributed computing concern dealing with partial failure and the lack of a central resource manager. The hard problems in distributed computing concern insuring adequate performance and dealing with problems of concurrency. The hard problems have to do with differences in memory access paradigms between local and distributed entities. People attempting to write distributed applications quickly discover that they are spending all of their efforts in these areas and not on the communications protocol programming interface."
- A Note on Distributed Computing by Samuel C. Kendall, Jim Waldo, Ann Wollrath and Geoff Wyant

A very good read, and still relevant after 13 years! Thanks for the pointer, Steve.

Tags:: Architecture
4/15/2007 9:08 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, December 3, 2006

Be the Software!

J.D. has an entry on how, when working on some type of an R&D project, you can shorten the cycle and get more bang for the buck when testing user experience models. The key advice:

... experimented with two techniques:

  1. Build modular slideware for visual walkthroughs of task-based features.
  2. Be the software.

This radically improved customer verification of the user experience and kept our dev team building out the right experience.

Mocking up in slides is nothing new.  The trick was making it efficient and effective:

  1. We prioritized scenarios that were the most risk for user experience.
  2. We created modular slide decks.  Each deck focused on exactly one scenario-based task (and scenarios were outcome based).  Modular slide decks are easier to build, review and update.  Our average deck was around six slides.
  3. Each slide in a deck was a single step in the task from the user's perspective.
  4. Each slide had a visual mock up of what the user would see
  5. To paint some of the bigger stories, we did larger wrapper decks, but only after getting the more fine-grained scenarios right.  Our house was made of stone instead of straw.  In practice, I see a lot of beautiful end-to-end scenarios decks that are too big, too fragile and too make believe.

I've seen a couple of examples of this, but my issue with them was exactly what he called out in (5) i.e. "..beautiful end-to-end scenarios decks that are too big, too fragile and too make believe".  Good advice that is very useful. Check out the full entry.

Tags:: Architecture
12/3/2006 12:55 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Saturday, October 28, 2006

Performance in a SOA, especially a SOA implemented using Web Services, is very important, but folks all too often do not have a common definition of what performance is. In addition, in a majority of cases, performance is not treated as something that should be engineered into a solution from the ground up.

One of the first things that I do when folks start this particular conversation is to point them to some work that has been done by J.D. Meier and his team over at Microsoft as part of their Perf & Scale work. In particular I point them over to the following:

I find the above work relevant, and highly recommended reading, whether or not you are in the .NET/Microsoft, Java/J2EE, OSS or the Fluffy-Bunny camp.

Tags:: Architecture
10/28/2006 3:20 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Thursday, October 5, 2006

One of the things I like about the Patterns & Practices Team at Microsoft, and especially J.D. Meier, is that they really take customer feedback into account. The last time I was at Microsoft, I raised the issue that some of the guidance they provide was too high-level and not broken up into actionable material.

J.D. and his crew have released a new version of the Guidance Explorer that takes this feedback into account. To paraphrase J.D.:  "Guidance Explorer lets you browse the online guidance store (caches locally), and you can create your own views of the guidance (or edit the guidance, create your own using our templates, or make your own templates). If you don't like what we did, the source is in CodePlex so you can shape it to your own needs."

Here are some links that talk in more detail about it:

Very nice work!

Tags:: Architecture
10/5/2006 9:34 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Saturday, June 25, 2005

Two books that I am currently reading are “Expert .NET Delivery using NAnt and CruiseControl.NET” from Apress and “Ship It! A practical guide to successful software projects”  from the Pragmatic Programmer books series.

I am enjoying both immensely.  I’ve already used the suggestions and recommendations in “Expert .NET Delivery” to improve and fine tune my NAnt scripts and will be moving on to CruiseControl.NET next.  “Ship It!” is in the style of the other Pragmatic books and is an easy and focused read.  I finished it in two days and have already gained a wealth of insight that I can apply immediately.  Highly recommended if you want to streamline your software development life!

Tags:: Architecture
6/25/2005 10:07 PM Eastern Daylight Time  |  Comments [2]  |  Disclaimer  |  Permalink   
Sunday, June 12, 2005

Ever since I read the Pragmatic Programmer series of books, I have been a fan of automation. So the build process is one that I have tried to automate to the extent possible. The tool of choice for me in this case has been NAnt. At a high level, my build process consists of the following:

  • Clean up the existing directory structure
  • Prepare the directory structure
  • Get the source from my Source Control Provider
  • Build the solution
  • Run Unit Tests
  • and more...
Since the NAnt configuration file has to be manually coded, one of the challenges I was facing was to make sure that all of the details and dependencies of a multi-project Visual Studio solution were taken into account when I did the build and compile of the solution. In the past I've done the hand coding, or used Slingshot. But recently I've been using the <solution> task in NAnt and really like it.
In short, this particular NAnt task reads a VS.NET solution file, figures out all of the various project dependencies and does the build. Very nice.
Here is an example:

<target name="build" description="Build the solution" depends="init">
    <solution configuration="${solution.config}">
        <projects>
            <include name="${code.dir}\FirstProject\FirstProject.csproj"/>
            <include name="${code.dir}\AnotherProject\AnotherProject.csproj"/>
        </projects>
        <webmap><map url="http://localhost/AnotherProject/Another.csproj" path="${code.dir}\AnotherProject\Another.csproj"/></webmap>
    </solution>
</target>
Tags:: Architecture
6/12/2005 4:39 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Friday, October 8, 2004

I was fortunate enough to spend the last three days at the Patterns & Practices Summit which was held at the Microsoft Technology Center in Reston, VA. 

In a word, Awesome!

We had a great lineup of speakers such as:

On top of all of those there were also various keynotes, the most memorable of which was by Sandy Khaund on where the PAG is going.  I would also be remiss if I did not mention that the man who helped to coordinate this from the local side was none other than our own Developer Community Champion, Geoff Snowman. Excellent job all around.
Beyond the pure technical knowledge that was imparted, it was also a chance to connect in person with people who I had, in some cases, "met" only online. The other great thing was the ability to leverage their knowledge. Chris Kinsman helped me solve a configuration issue that I had been having with Log4Net and Jim Newkirk was a great source of information on some things I am currently looking at regarding Unit Testing, Daily Builds and more.  Tom Hollander as ever was patient in taking some of the "feedback" I have regarding some of the deployment scenarios for the Enterprise Library :-)
All in all, a great, great event and I have to give big kudos to both Sandy and Keith for putting this together!
Tags:: Architecture
10/8/2004 7:08 PM Eastern Daylight Time  |  Comments [2]  |  Disclaimer  |  Permalink   
Wednesday, July 7, 2004

Building on the application patterns presented in Enterprise Solution Patterns Using Microsoft .NET, this guide applies patterns to solve integration problems within the enterprise.

The design concepts in this guide include implementations on the Microsoft platform that use BizTalk Server 2004, Host Integration Server 2004, ASP.NET, Visual Studio, Visio 2003 and the .NET Framework.

The scenario is an online bill payment application in the banking industry. To meet the needs of this scenario, the team used a pattern-based approach to build and validate a baseline architecture. Because a well-designed architecture must be traceable to the needs of the business, the guide also includes a set of artifacts that trace from high-level business processes down to code.

Online @
Integration Patterns

Should be available for download as PDF soon.

UPDATE: PDF is now available

Tags:: Architecture
7/7/2004 10:41 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Tuesday, July 6, 2004

patterns & practices Live: Integration Patterns - Level 200
July 8, 2004, 11:00 AM - 12:30 PM Pacific Time
Gregor Hohpe, Senior Architect, ThoughtWorks, Inc

Today's business applications rarely live in isolation. Users and customers expect instant access to data and functions that may be spread across multiple independent systems. Therefore, these disparate systems have to be integrated to allow a coordinated flow of data and functionality across the enterprise. Despite advances in EAI and Web Services tools, creating robust integration solutions is not without pitfalls. For example, the asynchronous nature of most message-based integration solutions is different from the synchronous world of application development and requires architects and developers to adopt new design, development and testing strategies. This webcast shows how design patterns can help developers build successful integration solutions. The patterns have been harvested from years of actual integration projects using messaging, Web Services and EAI tools.
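To make the asynchronous point concrete, here is a toy request/reply exchange over message queues. Python is used as language-neutral pseudocode, and all names are purely illustrative (not anything from the webcast or any real EAI tool):

```python
from queue import Queue

# Request and reply travel on separate channels; the correlation id ties a
# reply back to the request that produced it. This decoupling is what makes
# the interaction asynchronous: the sender does not block on the receiver.
requests, replies = Queue(), Queue()

def send_request(correlation_id, body):
    # The caller fires the message and moves on; no synchronous return value.
    requests.put({"id": correlation_id, "body": body})

def service_loop():
    # The receiving system processes messages whenever it gets to them.
    while not requests.empty():
        msg = requests.get()
        replies.put({"id": msg["id"], "body": msg["body"].upper()})

send_request(42, "ping")
service_loop()
reply = replies.get()  # correlate by id to match reply to request
```

The design point is that neither side calls the other directly; testing such systems therefore means asserting on message contents, not on return values.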


Tags:: Architecture
7/6/2004 9:57 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Friday, June 25, 2004
Monday, June 14, 2004

Per David Hill:

The Smart Client Architecture Guide is now live on MSDN

The performance chapter is being completed as we speak and should be posted in a week or so. We decided to put the rest of the guide out there since it has generated a lot of interest. Feedback welcome!


Tags:: Architecture
6/14/2004 1:55 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, June 13, 2004

Implement Presentation Workflow with the User Interface Process Application Block – Level 400
June 15, 2004, 1:00PM-2:30PM Pacific Time
Brian Noyes, Principal Software Architect, IDesign, Inc

The User Interface Process (UIP) Application Block provides a rich framework for developing stateful, process-oriented user interfaces for either Web or Windows applications. It allows you to decouple the views of your application using an implementation of the Model-View-Controller pattern provided by the application block. Ideal for applications such as shopping carts, registration, questionnaires, online quizzes, and other single-flow or multi-path information-gathering applications, the UIP is a complex framework that is nonetheless easy to use. This webcast will step through the architecture and capabilities of the UIP Application Block and demonstrate when and how to employ it for both Web and Windows applications.

patterns & practices Live: Testing Blocks - Level 200
June 17, 2004, 11:00 AM - 12:30 PM Pacific Time
Larry Brader, Test Lead, Microsoft Corporation

Application Blocks have been gaining momentum as their value to developers is realized, but development is only half the equation in shipping a product; the other half is testing. In this webcast we will drill down and examine how to test blocks.

[Editor's note:  Looks like someone is taking a shortcut in the description here.. I assume that the webcast is on how the Application Blocks are tested. Sandy recently blogged [1] about the new guidance that the PAG folks are putting together on this and the related GotDotNet workspace..]

patterns & practices Live: Test Driven Development – Level 200
Ah, the on demand version of this past webcast by Jim Newkirk is now available! [2]



Tags:: Architecture
6/13/2004 4:31 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Monday, June 7, 2004

patterns & practices Live: Enterprise Software Architects - Level 200
June 10, 2004, 11:00 AM - 12:30 PM Pacific Time
Craig Utley, Partner, Enterprise Software Architects

This webcast will focus on the patterns work going on at Microsoft designed to help developers reap the benefits of pattern-based development. It will give you a sneak peek at some third-party training resources that will help your development teams adopt these proven techniques.


Tags:: Architecture
6/7/2004 12:00 AM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Wednesday, May 19, 2004

Ah.. It is finally up! New site features include:

  • Redesigned site navigation, allowing you to locate patterns & practices titles by viewpoint, guidance type, or audience, and to find any title alphabetically in the all-releases list. 
  • Ability to search exclusively within the patterns & practices site.
  • Enhanced community page including a new GotDotNet workspaces page with individual links to each of the patterns & practices workspaces.
  • New Case Studies page for highlighting customers that are succeeding with patterns & practices.
  • New Events page showing upcoming events where patterns & practices will be participating, upcoming webcasts, as well as all archived webcasts.
And of course you can FINALLY and EASILY find all of the .NET Application Blocks via a direct link from the home page. Direct link to them @

The main site can be found @
Check it out!
Tags:: Architecture
5/19/2004 9:12 PM Eastern Daylight Time  |  Comments [4]  |  Disclaimer  |  Permalink   
Saturday, May 15, 2004

Moving to SOA: Practical approaches in Healthcare Level 200
May 18, 2004, 11:00 AM - 12:30 PM Pacific Time
Tim Gruver, Architect Evangelist, Microsoft Corporation

This webcast will focus on practical approaches for migrating to an SOA, while considering the unique challenges of the healthcare industry and government standards.

patterns & practices Live: User Interface Process Block Version 2 Level 200
May 20, 2004, 11:00 AM - 12:30 PM Pacific Time
Scott Densmore, SDE, Microsoft Corporation

The User Interface Process (UIP) Application Block, version 2, provides an extensible framework to simplify the process of separating business logic code from the user interface. This webcast will examine using the block to write complex user interface navigation and workflow processes that can be reused in multiple scenarios and extended as your application evolves. UIP Version 2 provides support for both Web Forms and Smart Clients.

[Now Playing: Yaara Yaara - Hum Tum]

Tags:: Architecture
5/15/2004 1:16 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Thursday, May 13, 2004

This MSDN TV episode is related to the newest release from the patterns & practices team. The topics discussed are memory management, COM Interop and the Dispose pattern.

Check it out @

BTW, you can find the online version and a PDF download @

[Now Playing: The Medley - Mujhse Dosti Karoge]

Tags:: Architecture
5/13/2004 3:44 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Thursday, May 6, 2004

Integration Patterns from the PAG introduces patterns in the context of the Global Bank integration scenario. This patterns catalog is organized to help you locate the right combination of patterns to apply when solving your integration problem. In addition, the guide introduces a visual model that describes a language of patterns and their relationships.

Note: This preview release is an early look at Integration Patterns to obtain your feedback on the content. This release includes only the first four chapters and the 10 patterns that the chapters discuss. Chapter 5 through Chapter 9 and the remaining patterns will be released within one to two months.

[Now Playing: The Medley - Mujhse Dosti Karoge]

Tags:: Architecture
5/6/2004 9:29 AM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Wednesday, May 5, 2004

Improving .NET Application Performance and Scalability from the PAG  is now available as a PDF download.

Go get it NOW @

UPDATE: Online version can be found @ 

[Now Playing: Laila Laila - Samay]

Tags:: Architecture
5/5/2004 12:21 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Monday, May 3, 2004

Architecting and Building Smart Client Solutions: The Future of Application Development Level 200
May 04, 2004, 11:00 AM - 12:30 PM Pacific Time
Tim Huckaby, CEO, InterKnowlogy

In Spring of 2003, Tim Huckaby was tasked by Microsoft Norway to develop and deliver a keynote-level "Architecting and Building Smart Client Applications" presentation for the Visual Studio .NET 2003 launch in Oslo. In performing the research to put together the content and demos necessary for a great presentation, Tim discovered multiple instances of inconsistent messaging in smart client application development that still exist today. Even today, Microsoft has multiple conflicting definitions of what a smart client application is, and there are still some very distinct and differing "siloed" opinions of smart client applications within the Microsoft Product Groups. Developers won't want to miss this webcast's demonstrations, which will help demystify these inconsistencies and narrow the definition of a smart client application.

Patterns & Practices Live: .Net Enterprise Solution Patterns Level 200
May 06, 2004, 11:00 AM - 12:30 PM Pacific Time
Robert C. Martin, President, Object Mentor Inc

This webcast presents an overview of the .Net Enterprise Solution Patterns. The concept of patterns will be introduced, and a selected group of patterns will be discussed in depth.


Tags:: Architecture
5/3/2004 12:19 AM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Friday, April 30, 2004

Issue 3 is out @

"... this newsletter will keep you informed of all that's new on the MSDN Architecture Center as well as upcoming events. We'll continue to bring you features and profiles from Microsoft architecture community, plus a new feature added this issue called "Contemplating Architecture." This feature offers opinions and perspectives from Microsoft architects and members of the Microsoft Architecture Advisory board."

[Now Playing: Ruk Ja O Dil Deewane - Dilwale Dulhania Le Jayenge]

Tags:: Architecture
4/30/2004 10:05 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Wednesday, April 28, 2004

Patterns & Practices Guide on "Improving .NET Application Performance and Scalability" is now LIVE!

This guide provides end-to-end guidance for managing performance and scalability throughout your application life cycle to reduce risk and lower total cost of ownership. It provides a framework that organizes performance into a handful of prioritized categories where your choices heavily impact performance and scalability success. The logical units of the framework help integrate performance throughout your application life cycle. Information is segmented by roles, including architects, developers, testers, and administrators, to make it more relevant and actionable. This guide provides processes and actionable steps for modeling performance, measuring, testing, and tuning your applications. Expert guidance is also provided for improving the performance of managed code, ASP.NET, Enterprise Services, Web services, Remoting, ADO.NET, XML, and SQL Server.

Check it out @

Congrats to J.D. Meier, Srinath Vasireddy, Ashish Babbar, and Alex Mackman as well as all of the Microsoft and external reviewers who contributed to this guide!

Do I even need to mention that this amazing tome is put out by the MS PAG? *

* Promote the PAG week continues .... :-)

[Now Playing: Humko Humise Chura Lo - Mohabbatein]

Tags:: Architecture
4/28/2004 11:08 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, April 25, 2004

[NOTE: For my non-US readers, Tylenol is a very popular medicine that is taken for headache and fever relief in the US]

I am an unabashed fan of the PAG!

For those who do not know the acronym, the PAG is the Platform Architecture Guidance Group at Microsoft. They are the folks who put out the Patterns & Practices series of books as well as the .NET Application Blocks, among other things. In a sentence, these are the folks who provide you with the tools and information that demonstrate and document the best practices for implementing the current shipping technology.

Best Practices for Current, Shipping Technology. This is an important distinction.

In a lot of ways the Product Teams at Microsoft live a Rev ahead. They are already working on the next generation (and the one after) of the current technology. When speaking with them the focus often is on what is coming and not on what is currently here. While that is very cool and exciting, it does not address the working concerns of the current technology implementers.

Pop open the latest issue of MSDN Magazine or any of the other .NET trade rags. What do you find these days? Coverage of Visual Studio .NET 2005! Generics in C# 2.0! The magic and the wonder that is Longhorn! Yet these technologies are still at least a year out. Are you deploying PRODUCTION apps on this technology? Is any Enterprise (other than ones that Microsoft directly supports via its Early Adopter Programs) implementing this technology NOW? The answer is NO! The pain points and the headaches that Enterprises are experiencing are with the current shipping 1.0 or 1.1 .NET Technologies… Heck, many folks are only now thinking of moving to .NET (more on this later…). They have little to no interest in alpha/beta technologies. They have issues that need to be addressed now.

This is where the PAG comes in. They produce the prescription that solves the headaches that Enterprises have RIGHT NOW! They produce the best practices and architecture guidelines that showcase Microsoft technology as being seriously Enterprise ready. Here is a sampling:

  • Application Architecture for .NET
  • Building Secure ASP.NET Applications
  • Enterprise Solution Patterns for .NET
  • Improving Web Application Security
  • Microsoft Exchange 2000 Server Operations Guide
  • .NET Application Blocks
  • Microsoft SQL Server 2000 High Availability Series
  • Shadowfax SOA Reference Application
  • and more…. @

I personally don't think that they get the credit or visibility they truly deserve. The reason, of course, is that they are not out in front talking about and playing with cool tools and sexy technologies. They are the people who provide the basic blocking and tackling that allows the Quarterback to be a shining star. They do what they do so OTHERS can get the work done. And in that goal, they are immensely successful.

I've been meaning to write about this for some time, but it came up front and center for me very recently. I am currently in the midst of an Architecture consulting gig with a firm that is moving to .NET. I've done this sort of thing before (the first time about 2 years ago), when I was the Architect/Technical Lead tasked with implementing .NET for the Fortune 500 Enterprise I was then working at. At that time, a lot of the practices that were implemented were a direct result of my personal knowledge of .NET from working with it from the early beta phases, and of knowing the right people to ping at Microsoft for advice on particular issues I needed help with. What is different now is the breadth and depth of material I can tap into from the PAG, which makes my life and my client's so much easier. I don't think a day has gone by when we have not reviewed some best practice or implemented something that came out of the PAG. The resources the PAG has provided have given my client the comfort factor that we are doing the right things with .NET Technologies.

So Kudos and Thank You to Sandy, Shaun, JD, Tom, Ron, Ed, David and the many more folks at PAG.

If you take away one thing from this entry, it is this: when you run into issues or need guidance on current technologies, do go over to the PAG site @ and browse their offerings. I would not be surprised if you come away with answers, or pointers to answers, that heal YOUR pain!

[Now Playing: Ek Pal Ka Jeena - Kaho Naa Pyar Hai]

Tags:: Architecture
4/25/2004 11:13 PM Eastern Daylight Time  |  Comments [4]  |  Disclaimer  |  Permalink   

[Changed my mind just this once, about Longhorn stuff. Only because it is from my favorite people @ the PAG]

The Developer Guide to Migration and Interoperability in "Longhorn" is a patterns & practices "Emerging Practice" that provides developers with a roadmap for how to start preparing for Longhorn today. It addresses these issues from two main vantage points. First, from an architectural perspective, it looks at considerations and decisions that are optimal for establishing an infrastructure for Longhorn applications. Second, from a development perspective, it delves into low-level coding recommendations for when and how to interoperate and/or migrate existing code. These recommendations include new best practices around managed/unmanaged code integration that are relevant to all mixed mode development. The recommendations also include deep discussion on the presentation layer, in terms of migration and interoperability, of Win32, ActiveX, and Windows Forms with the new presentation capabilities in Longhorn. 

Check it out @

[Now Playing: Tujhe Yaad Na Meri Aayee - Kuch Kuch Hota Hai]

Tags:: Architecture
4/25/2004 10:54 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, April 18, 2004

Architecting and Building Smart Client Solutions: The Future of Application Development - Level 200 [1]
April 22, 2004, 9:00 AM - 10:30 AM Pacific Time
Tim Huckaby, CEO, InterKnowlogy

In the Spring of 2003, Tim Huckaby was tasked by Microsoft® Norway to develop and deliver a keynote-level "Architecting and Building Smart Client Applications" presentation for the Microsoft® Visual Studio® .NET 2003 launch in Oslo. In performing the research to put together the content and demos necessary for a great presentation, Tim discovered multiple instances of inconsistent messaging in smart client application development that still exist today. Even today, Microsoft has multiple conflicting definitions of what a smart client application is, and there are still some very distinct and differing "siloed" opinions of smart client applications within the Microsoft Product Groups. Developers won't want to miss this webcast's demonstrations, which will help demystify these inconsistencies and narrow the definition of what a smart client application is.

Patterns & Practices Live: P&P Update - Level 200 [2]
April 22, 2004, 11:00 AM - 12:30 PM Pacific Time
Sandy Khaund, Group Product Manager, Microsoft Corporation

This webcast will be a patterns & practices Spring 2004 Update. This is a follow-up to the October 2003 webcast about patterns & practices, the library of guidance and code to help build sound solutions for the .NET Framework. In this webcast, we will provide an overview of the upcoming deliverables provided by patterns & practices (Shadowfax, Performance & Scalability, Integration Patterns) and give you a preview of other activities that the team will be pursuing in the months ahead.


[Now Playing: O Haseena Zulfon Wali - Dil Vil Pyar Vyar]

Tags:: Architecture
4/18/2004 11:46 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   

I am in the process of architecting a thick client distributed application that also has application integration business requirements. Looking through the various Distributed Systems Patterns in the Patterns & Practices "Enterprise Solution Patterns using Microsoft .NET v 2.0", the Data Transfer Object (DTO) Pattern maps very well into the path that we have chosen (WinForms <--> WS <--> Biz/Data).

While I initially looked at implementing the DTO using a typed DataSet, I am concerned about the potential performance hit (instantiation, filling, serialization, deserialization) of using a DataSet. Also, in this particular use case, the interaction is with a single table. The recommended alternative for better performance in this case is to use a DataReader with strongly typed objects, the implementation of which is supposedly documented in "Implementing DTO in .NET with Serialized Objects". 

But while this is referenced multiple times as a Related Pattern, I can't seem to find it in the book. Is this missing or is it in some other corner of the book that I have not perused as of yet?
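In the meantime, the general shape of the serialized-object DTO approach is easy to sketch: walk the reader row by row, populate plain strongly typed objects with no data-access logic in them, and serialize those across the service boundary. A rough, language-neutral illustration in Python (the names and the fake reader are hypothetical, not from the book):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CustomerDTO:
    # A plain, serializable object: only fields, no data-access logic.
    customer_id: int
    name: str
    balance: float

def load_customers(reader):
    # 'reader' stands in for a forward-only data reader over a single-table
    # query; each row becomes one lightweight DTO, avoiding the overhead of
    # instantiating and filling a full DataSet.
    return [CustomerDTO(*row) for row in reader]

# Hypothetical rows, standing in for the DataReader's result set.
rows = [(1, "Ada", 120.50), (2, "Grace", 75.00)]
dtos = load_customers(rows)

# The DTOs cross the WinForms <--> WS boundary as plain serialized data.
wire_format = json.dumps([asdict(d) for d in dtos])
```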

[Now Playing: Jaage Jaage - Mere Yaar Ki Shaadi Hai]

Tags:: Architecture
4/18/2004 8:33 PM Eastern Daylight Time  |  Comments [1]  |  Disclaimer  |  Permalink   
Thursday, April 15, 2004

Keith Pleas just blogged that the Microsoft Architecture Center has an RSS feed. Subscribed!  You can find it @

BTW, Keith is one of the main people on the GAPP (Guidance About Patterns & Practices) Team, a team of third-party subject matter experts that work closely with the PAG (Prescriptive Architecture Guidance) team to create material that promotes and enhances PAG’s Patterns & Practices titles. He is also the guy putting together the Patterns & Practices Summit.

You can find out more about the GAPP in the following On Demand Webcast @

More info on the Patterns & Practices Summit @

Keith's blog can be found @

[Now Playing: Pehle Kabhi Na Mera Haal - Baghban]

Tags:: Architecture
4/15/2004 10:41 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Saturday, April 10, 2004

Design and Architecture for .NET Applications - Level 300
April 13, 2004, 11:00 AM - 12:30 PM Pacific Time
Rockford Lhotka, Technology Evangelist and CEO, Magenic Technologies

There are many ways to architect a .NET application. Based on Magenic's architecture experience with .NET at numerous clients, learn which ways work best for Web and Microsoft Windows development. Discover when to use object-oriented designs, and when to use data-centric designs. Find out when to use and not to use Enterprise Services, Remoting, Web services, and other key .NET technologies.

Patterns & Practices Live: Powered by LogicLibrary Logidex - Level 200

April 15, 2004, 11:00 AM - 12:30 PM Pacific Time
Brent Carlson, Vice President of Technology and Co-founder, LogicLibrary

Patterns offer proven solutions to recurring application architecture, design, and implementation problems within a particular context. The Microsoft® Platform Architectural Guidance (PAG) Group has developed a number of patterns and best practices for use by architects and developers as they design and build enterprise solutions. The MSDN Logidex .NET Library, powered by LogicLibrary Logidex, provides fast, easy access to the PAG patterns and practices on MSDN. This webcast will use the .NET PetShop reference application to introduce participants to the text and model-based search capabilities of Logidex and to the content of the Logidex .NET Library, showing how the various components of this application are related to PAG-provided patterns and application blocks and to core .NET Framework capabilities.

[Now Playing: Saanwali Si Ek Ladki - Mujhse Dosti Karoge]

Tags:: Architecture
4/10/2004 3:40 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Friday, April 9, 2004

The Microsoft Patterns & Practices Group has released Version 2.0 of the User Interface Process Application Block.


The User Interface Process Block V2 or UIP V2 is the next version of one of the most popular application blocks. This block is a reusable code component that builds on the capabilities of the Microsoft .NET Framework to help you separate your business logic code from the user interface. The UIP Application Block is based on the model-view-controller (MVC) pattern. You can use the block to write complex user interface navigation and workflow processes that can be reused in multiple scenarios and extended as your application evolves.

The following features are in the first version of UIP and continue to be part of UIP version 2.

  • Web session resume
  • Web session transfer
  • Reuse of code between application types
  • Development of discrete tasks
  • Storage of state in state persistence providers

The following features are new to UIP version 2:

  • Expanded navigation management
  • Additional state persistence providers
  • Layout managers
  • Enable back-button support
  • Usability enhancements
  • Support for Smart Client Applications, including state persistence using isolated storage
  • New views supported: hosted controls, wizards, and floating windows
  • A number of fixes and enhancements to V1
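For readers new to the pattern the block is built on, here is a minimal Model-View-Controller sketch, with Python as neutral pseudocode and made-up names (not the UIP's actual API), showing the separation the block enforces: the view only renders, while the controller owns navigation and state:

```python
class Model:
    def __init__(self):
        # Navigation/session state; in UIP this would live in a
        # state persistence provider.
        self.state = {}

class View:
    # The view only renders; it never decides where to go next.
    def render(self, model):
        return f"step={model.state.get('step', 'start')}"

class Controller:
    # The controller owns the workflow: it mutates state and re-renders.
    def __init__(self, model, view):
        self.model, self.view = model, view

    def navigate(self, step):
        self.model.state["step"] = step
        return self.view.render(self.model)

controller = Controller(Model(), View())
```

Because the view is passive, the same navigation logic can drive a Web Forms view or a Smart Client view, which is exactly the reuse the block advertises.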
I have updated my complete list of Application Blocks @
Tags:: Architecture
4/9/2004 7:10 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Tuesday, April 6, 2004

Per Srinath, the Microsoft Patterns & Practices Group has released three chapters (pre-alpha) of the Smart Client Architecture Guide [1]

The chapters are:

  1. Introduction
  2. Offline
  3. Multithreading
This is an opportunity to review and provide input into this guide. Take advantage of it.
Tags:: Architecture
4/6/2004 10:39 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Monday, March 22, 2004

The Webcast will be held on April 8, 2004, from 11:00 AM to 12:30 PM Pacific Time (GMT-8, US & Canada). It will be led by Jim Newkirk, Development Lead at Microsoft Corporation and co-author of the book Test Driven Development in .NET.

Here is the official blurb:

In Kent Beck's book Test-Driven Development: By Example, he defines Test-Driven Development (TDD) as driving software development with automated tests. He goes further by stating that TDD is governed by two simple rules: write new code only if an automated test has failed, and eliminate duplication. The implications of these two simple rules can bring about a profound change in the way that software is written. Most of the literature to date has bundled TDD along with Extreme Programming (XP). However, the benefits of using TDD are not limited to XP and can be realized in any programming methodology. This webcast will provide an introduction to TDD, demonstrating how it works and what benefits it provides when used with Microsoft® .NET. The examples shown will use Visual C#® and NUnit.
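The red/green rhythm those two rules imply fits in a few lines. A toy illustration, with Python standing in for the C#/NUnit examples the webcast will actually use (`total` is a made-up function, not from the webcast):

```python
# Step 1 (red): the test exists before the code it exercises; running it
# now would fail because total() is not yet written.
def test_total():
    assert total([1, 2, 3]) == 6
    assert total([]) == 0

# Step 2 (green): write only enough code to make the failing test pass.
def total(items):
    return sum(items)

# Step 3 (refactor): with the test as a safety net, eliminate duplication.
test_total()  # passes
```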

This is a Patterns & Practices Group Level 400 Webcast. The link to register can be found @

Tags:: Architecture
3/22/2004 7:02 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Thursday, March 11, 2004

Maxim Karpov has a weblog @ in which he discusses .NET Design Patterns, Best Practices and More. Subscribed!

He also has a recent article in which he implements a wrapper class that uses the Abstract Factory Pattern to implement various Crypto Functions [1]
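The idea behind such a wrapper is easy to sketch: callers ask a factory for an abstract hasher and never name a concrete algorithm, so implementations can be swapped without touching client code. A rough Python illustration with hypothetical names (not Maxim's actual classes):

```python
import hashlib
from abc import ABC, abstractmethod

class Hasher(ABC):
    # Clients code against this interface, never a concrete algorithm.
    @abstractmethod
    def digest(self, data: bytes) -> str: ...

class Sha256Hasher(Hasher):
    def digest(self, data):
        return hashlib.sha256(data).hexdigest()

class Md5Hasher(Hasher):
    def digest(self, data):
        return hashlib.md5(data).hexdigest()

def hasher_factory(name: str) -> Hasher:
    # The factory hides which concrete implementation gets constructed.
    return {"sha256": Sha256Hasher, "md5": Md5Hasher}[name]()

h = hasher_factory("sha256")  # caller never mentions Sha256Hasher
```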


Tags:: Architecture
3/11/2004 9:49 AM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Tuesday, March 2, 2004

Sandy Khaund, of the MS PAG, is asking for input into the current and future direction of the .NET Application Blocks on his weblog.  Here is a chance to make your views known [1].


[Now Playing: Sharara - Mere Yaar Ki Shaadi Hai]

Tags:: Architecture
3/2/2004 11:10 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   

The Offline Application Block, which is intended to serve as an architectural model for developers who want to add offline capabilities to their smart client applications, is now available from the MS PAG [1].

The block demonstrates how to:

  • Detect the presence or absence of network connectivity.
  • Cache the required data so that the application can continue to function even when the network connection is not available.
  • Synchronize the client application state and/or data with the server when the network connection becomes available.
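Those three responsibilities can be modeled in a few lines. A toy sketch in Python; the names are illustrative and bear no relation to the block's actual API:

```python
class OfflineClient:
    # Toy model of the three responsibilities listed above.
    def __init__(self, is_online):
        self.is_online = is_online   # stands in for connectivity detection
        self.cache = {}              # locally cached data for offline use
        self.pending = []            # changes queued while disconnected

    def save(self, key, value, server):
        self.cache[key] = value                # always usable locally
        if self.is_online():
            server[key] = value                # connected: write through
        else:
            self.pending.append((key, value))  # offline: queue for later

    def synchronize(self, server):
        # Replay queued changes once connectivity returns.
        while self.pending:
            key, value = self.pending.pop(0)
            server[key] = value

server, online = {}, {"up": False}
client = OfflineClient(lambda: online["up"])
client.save("doc", "draft", server)  # offline: cached and queued only
online["up"] = True                  # network connection comes back
client.synchronize(server)           # queued change reaches the server
```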

I have added the Offline Block to my complete list of .NET Application Blocks [2]


[Now Playing: Mere Khwabon Main - Dilwale Dulhania Le Jayenge]

Tags:: Architecture
3/2/2004 10:55 AM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, February 22, 2004

The Microsoft Architecture Update is a newsletter for business, applications, infrastructure, and technology architects with essential information on current publications, events, and discussions. It is published every other month via email.

The current issue (February 13, 2004) [1] has the following topics:

One interesting tidbit from the newsletter is a link to the Microsoft EMEA Architects Journal [2]. According to the blurb "... It will be a platform for thought leadership on a wide range of subjects on enterprise application architecture, design, and development. Authors will discuss various business and "soft" concerns that play a key role in enterprise systems development. It will provide a unique source of information previously unavailable through any other Microsoft offering."
Very interesting and relevant content.
Note to Editor:  Proofread! E.g., the link to the online version from the e-mail version went to a domain called msdnprod. I am sure that works from within MS, but it does not from elsewhere :-)  Hopefully these are simple growing pains that are more than offset by the quality content.
Check it out and subscribe Online [1]

[Now Playing: Pyar Aaya - Plan]

Tags:: Architecture
2/22/2004 7:57 AM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Friday, February 13, 2004


Join top experts discussing a wide range of key architectural topics in live, interactive webcasts. Air your questions in the live Q&A sessions.
[MSDN Just Published]
And if you cannot catch it live, make sure you check out the archive as well.
Tags:: Architecture
2/13/2004 1:02 AM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Thursday, February 12, 2004

I did my first webcast the week before last. My colleague Brenton Webster, who is also a solution architect in the .NET Enterprise Architecture Team, and I co-presented a session on Smart Client Architecture.

Now, Smart Client Architecture is a big topic and we didn’t have time to cover all architectural issues, but we did cover some of the most frequently asked questions – such as business value, design patterns, deployment and how to go offline. The session is not very technical and was intended more as an overview to the topic but I hope you find it useful…
[Microsoft WebBlogs]

DevDays has a Smart Client Track in which the topics drill down into a lot more technical detail, especially with regard to security.  But high-level overviews are a good start.

[Now Playing: O Haseena Zulfon Wali (I) - Dil Vil Pyar Vyar]

Tags:: Architecture
2/12/2004 10:46 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Saturday, January 24, 2004

.NET Application Blocks are distinct pieces of code, created by the Microsoft Patterns & Practices team (PAG), that demonstrate best practices for accomplishing a specific task using .NET. They are ready-made code that you can use or extend to make your life a WHOLE lot simpler.

I like them as they are extensively tested and reviewed (which hopefully minimizes security issues).

I noted earlier that the PAG had released a new App Block.

Here is a complete list of the App Blocks [1] and where you can download them. I will keep the list up to date as new ones are released.


Tags:: Architecture
1/24/2004 11:38 AM Eastern Standard Time  |  Comments [1]  |  Disclaimer  |  Permalink