My blog has moved and can now be found at http://blog.aniljohn.com

No action is needed on your part if you are already subscribed to this blog via e-mail or its syndication feed.

Sunday, June 19, 2011

The desire to externalize authorization and to drive access control via standardized policy has been one of the major contributors to the success of XACML. Typically, this has been focused almost exclusively on Logical Access Control Systems (LACS). But what if you could define and manage your access control policies for both LACS and Physical Access Control Systems (PACS) from a single pane of glass?

Over the past several years, in my role as the Technical Lead for DHS S&T's IdM Testbed, I've been working with companies that participate in the DHS Science & Technology Directorate's Small Business Innovation Research (SBIR) Program. One of the more interesting projects I've provided technical advice and guidance to is with an SBIR awardee (Queralt, Inc.) that has developed a PACS Policy Enforcement Point (PEP) conforming to the XACML 2.0 and 3.0 standards.

As we moved out on this, we fully realized that the perspectives of the physical security folks are often different from those of the IT folks who typically run LACS. As such, it is important to make sure their concerns are addressed up front. Those concerns include:

  • The access control policies for the PACS system must remain under the control of the physical security officers
  • The PACS system must continue to operate even if this additional functionality is disabled for some reason
  • This is additional functionality built on top of existing capabilities and must integrate easily with existing infrastructure

As an aside, when we speak of Attribute Based Access Control (ABAC), the input to a decision includes identity attributes, authority attributes, actions to be performed, and environmental attributes. One of those environmental attributes could be location information. The key with location information is that, in order for it to be relevant, it must come from a trusted infrastructure; that is, I can easily trust location information from a turnstile that is owned and managed by my organization, but I would have a harder time trusting location information from a computer or mobile device on a wireless network, which can more easily be spoofed. This capability allows for the incorporation of location information from a trusted infrastructure, as in the sketch below.
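To make that concrete, here is a minimal sketch, in PowerShell, of the kind of XACML 2.0 decision request a PACS PEP might construct when someone badges through a turnstile. Only the XACML context namespace and the standard subject/resource/action attribute identifiers come from the spec; the urn:example attribute IDs, the door naming scheme, the FASC-N value, and the location string are all invented for illustration.

```powershell
# Hypothetical XACML 2.0 request from a PACS PEP; a real PEP would send this
# to its PDP over whatever transport the deployment uses (SOAP is common).
$xacmlRequest = @"
<Request xmlns="urn:oasis:names:tc:xacml:2.0:context:schema:os">
  <Subject>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <!-- e.g. the FASC-N read from a PIV card at the reader (placeholder value) -->
      <AttributeValue>9999999999999999</AttributeValue>
    </Attribute>
  </Subject>
  <Resource>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>door:hq:north-lobby:turnstile-3</AttributeValue>
    </Attribute>
  </Resource>
  <Action>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>enter</AttributeValue>
    </Attribute>
  </Action>
  <Environment>
    <!-- Location attribute sourced from organization-owned (trusted) infrastructure -->
    <Attribute AttributeId="urn:example:environment:reader-location"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>bldg-1:floor-1:north-lobby</AttributeValue>
    </Attribute>
  </Environment>
</Request>
"@
```

The door and the reader's location ride in the same request structure that any LACS resource would use, which is what makes managing LACS and PACS policy from a single pane of glass plausible.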

As always, a completely standards-based interface between the PACS PEP and a PDP is critical to the success and adoption of this type of technology. Queralt is currently finishing up testing against the multiple XACML PDPs we have made available to them from our Testbed. So far everything looks good.

We have also connected them with multiple leading XACML PDP vendors, who are very interested in technology that will help them expand their reach into the PACS realm. Queralt, which focuses on location-based technologies and RFID, already has excellent relationships with multiple PACS vendors as well. All in all, to paraphrase a physical security officer who received a briefing on this effort last week, this is a game changer that many folks have been looking for and really need.

Just as a disclaimer, other than my involvement noted above, I personally have no vested interest in Queralt as a company. I do think that this is very cool tech and it is something that will add greater value to policy driven access control decisioning capabilities.  As such, if you are a PDP or PACS vendor and would like to be connected to the folks at Queralt, please do drop me a line and I would be glad to make that happen.

Tags:: ABAC | PACS | PEP | PIV | PIV-I | XACML
6/19/2011 4:38 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Saturday, June 18, 2011

What is the Federal ICAM Backend Attribute Exchange (BAE) v2?

The BAE is a standards-based architecture and interface specification for securely obtaining attributes of subjects (e.g. PIV and PIV-I card holders, or federation members with a unique identifier) from authoritative sources, in order to make access control decisions and/or to do provisioning.

While the original BAE v1 specification was a theoretical whiteboard exercise, the v2 specification incorporates the hands-on protocol profiling lessons learned from an initial proof-of-concept implementation, as well as a follow-on end-to-end pilot implementation of a pull based access control architecture. As such the "BAE documentation set" consists of:

  • BAE v2 Overview
  • Federal ICAM Governance for BAE v2
  • SAML 2.0 Identifier and Protocol Profiles for BAE v2
  • SAML 2.0 Metadata Profile for BAE v2
  • SPML 2.0 Read-Only Profile for BAE v2

The BAE architecture and interface specification defines a mechanism for implementing a pure attribute provider and is not in the business of authenticating an end-user. Take a look at my previous entry on FICAM Support for Identity Federation Flows to see how the BAE v2 architecture fits in with the larger Authentication, Attribute Exchange and Authorization mechanisms.

To keep this focus on Attribute Provider functionality, I have started using the term BAE-AP (BAE Compliant Attribute Provider) to refer to an attribute service that implements the BAE protocol profiles.

As you can see in the documentation breakdown above, an implementation of a BAE-AP supports both the real-time, on-demand querying of attributes of a single person using SAML 2.0 and a "batch" read-only mechanism to retrieve attributes of multiple people using SPML. The latter capability is important to satisfy many of the occasionally-connected and dynamic provisioning use cases that exist within the community. A sketch of the SAML query shape follows.
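For a flavor of the real-time side, here is a minimal sketch of a SAML 2.0 AttributeQuery with the general shape the BAE profiles call for. The message layout is standard SAML 2.0 protocol; the issuer URL, the FASC-N NameID Format URI, the attribute name, and the ID and timestamp values are placeholders I invented, not values taken from the BAE documentation set.

```powershell
# Hypothetical SAML 2.0 AttributeQuery a relying party might send to a BAE-AP
# (over the SOAP binding) to ask for one attribute of one PIV card holder.
$attributeQuery = @"
<samlp:AttributeQuery xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                      xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                      ID="_placeholder-id" Version="2.0"
                      IssueInstant="2011-06-18T00:00:00Z">
  <saml:Issuer>https://rp.agency.example.gov</saml:Issuer>
  <saml:Subject>
    <!-- The "key" is a profiled Subject Name Identifier, here a PIV FASC-N -->
    <saml:NameID Format="urn:example:nameid-format:fasc-n">9999999999999999</saml:NameID>
  </saml:Subject>
  <saml:Attribute Name="urn:example:attribute:organizational-affiliation"/>
</samlp:AttributeQuery>
"@
```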

The SAML 2.0 Profiles are fully baked while the SPML Profile is currently being developed as part of a Pilot.

There were some specific choices made in developing the BAE v2:

  • The most important was to make sure that there were no dependencies between the Governance mechanisms and the implementation of the technical profiles. The Governance document illustrates, as an example, how Federal ICAM will implement the BAE environment. But organizations outside the Federal Government, or Agencies and Departments that wish to implement a BAE-AP internally, are free to utilize their own Governance mechanisms.
  • During the development of the SAML profiles and the subsequent implementations, my team actively reached out to multiple forward-leaning vendors in this space and built a business case with each of them as to why they should support the BAE-AP profiles within their product set. We also stood up a reference implementation that is being used for interoperability and conformance testing. I am happy to note that products from Layer 7, Vordel and Intel currently have built-in support for standing up a BAE-AP SAML Attribute Service, and that External Authorization Management (PDP) vendors such as BiTKOO and others have built in the capability to query a BAE-AP SAML end-point directly from their PDP.
  • Last, but not least, for the SAML profiles we consciously separated the profiling of the Identifiers from the profiling of the Protocol, which allows anyone to snap in additional identifiers as needed without impacting or changing the protocol profile. In generic terms, that means that while the current SAML profile explicitly profiles the usage of a Subject DN from an X.509 Certificate, a FASC-N from a PIV Authentication Certificate, or a UUID from a PIV-I Certificate to query a BAE-AP, you are free to extend what "key/Subject Name Identifier" you can use to query a BAE-AP. For example, our reference implementation currently supports, in addition to the above, e-mail address and JID (Jabber ID) as identifiers that can be used to query for attributes. We simply advertise, within the metadata, the Subject Name Identifiers that are supported by our implementation of the BAE-AP (a hedged sketch of such a metadata fragment follows this list). The expectation is that the Governance mechanism for a particular community will define the minimal set of Subject Name Identifiers everyone must support, but individual BAE-APs will be free to go beyond the minimal set given their particular use cases.
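By way of illustration, the metadata advertisement in the last bullet could look something like the fragment below. The element layout follows standard SAML 2.0 metadata for an attribute authority; the endpoint location and the FASC-N/JID format URIs are invented placeholders.

```powershell
# Hypothetical BAE-AP metadata fragment advertising its supported
# Subject Name Identifiers as NameIDFormat entries.
$baeApMetadata = @"
<md:AttributeAuthorityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
    protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <md:AttributeService Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP"
                       Location="https://bae-ap.example.gov/attribute-service"/>
  <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName</md:NameIDFormat>
  <md:NameIDFormat>urn:example:nameid-format:fasc-n</md:NameIDFormat>
  <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
  <md:NameIDFormat>urn:example:nameid-format:jid</md:NameIDFormat>
</md:AttributeAuthorityDescriptor>
"@
```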

The Federal ICAM Architecture Working Group is in the process of reviewing and incorporating the comments from multiple parties, and once approved by the ICAMSC, the BAE v2 architecture and specification will become the US Federal Government's standard way to exchange attributes if using the "back-channel".

Tags:: ABAC | BAE | ICAM | SAML
6/18/2011 1:03 AM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, June 12, 2011

In the many conversations that took place in the sidebars, asides and hallways of the NSTIC Governance workshop this past Thursday and Friday, I found one, which I am calling the "Canvas Theory of Levels of Assurance (LOA)", to be particularly interesting. It goes something like this:

The current definition of Identity LOA, as defined by OMB and NIST [1], is too rigid/inflexible/yesterday/not today/[insert your preferred word here]. A model that is more [insert your opposing word choice here] is to treat a credential as a blank canvas. Over time, as the credential is used in transactions, the image of the credential holder becomes more and more clear on the canvas. Based on this visibility, the LOA of the credential can increase as more becomes known about the credential holder and their behavior. Alternatively, it can move down if the behavior or details about them are not in sync. As such, LOA is something that should be dynamic, flexible and capable of real-time changes.

As a first step, it is important to be very clear about what LOA means. Paraphrasing OMB M-04-04 [2], [an] assurance level describes the [Relying Party's] degree of certainty that the user has presented an identifier (a credential in this context) that refers to his or her identity. In this context, assurance is defined as (1) the degree of confidence in the vetting process used to establish the identity of the individual to whom the credential was issued, and (2) the degree of confidence that the individual who uses the credential is the individual to whom the credential was issued. What is important to note here is that the Relying Party's degree of certainty is dependent on both the process used to establish the identity of the person before the credential is issued to them, and the confidence that the credential is indeed being used by the person to whom it has been issued.

Secondly, if the end result is the subject being granted (or denied) access to information stored at a web site or the ability to invoke a service to perform some actions on their behalf, the implementation of the vision above results in the following:

  • The "canvas attributes" (for lack of a better word) are not used as part of the access control decision but is instead used to "tune" the LOA level up or down
  • The access control decision is then made primarily based on the new "tuned" LOA level
  • The "tuned" LOA level has no connection to the vetting process and is simply dependent of the consistency and "knowledge-over-time" behavior of the credential
  • Potentially frustrating experience for the subject because the relying party, since it has little or no confidence in the asserted identity's validity, may not be able to give the subject access to the information up front
  • Even more critically important, the risk of identification of the subject now resides solely with the relying party

Whenever something like this is proposed, it is always worthwhile to look at who benefits from such a model. This is a model in which the IdP has no responsibility to put in place a vetting process to establish the identity of the subject, and has no liability when it comes to the potential mis-identification of the subject. Needless to say, the entities that I see this model appealing to are large consumer IdPs who do not want to disturb their existing identity proofing processes (or lack thereof) that they have with their customers.

This approach ultimately does not move the ball forward towards an identity eco-system that allows one to conduct high value and/or privacy sensitive medical, financial and government transactions.

What I would instead propose is the "Canvas Theory of Access Control":

Given that we are moving to an era where dynamic, contextual, policy driven mechanisms are needed to make real time access control decisions at the moment of need, the policy driven nature of the decisions require that the decision making capability be externalized from systems/applications/services. In this environment, we need to treat the level of access control as a blank canvas. Over time, as a credential is used in transactions, the image of the credential holder becomes more and more clear on the canvas. And based on this visibility, combined with many other factors, the level of access can increase.

LOA should be just one of the factors that go into the decision-making process; it is not a "tunable" component. What becomes "tunable" is the level of access that is granted to the subject based on information about the subject (e.g. LOA), information about the resource, environmental/contextual information, and more, often expressed as attributes/claims. The contextual information here could indeed be the "canvas attributes" that evolve over time and are fed into the access control decision-making process. This potentially allows a subject with a LOA 1 credential, combined with compensating controls such as an externalized authorization system and a risk analytics engine that takes subject/resource/environmental/contextual/canvas attributes as decision input, to be granted access to more and more content on a LOA 3 web site over time. But if the subject had a LOA 2 credential to start out with, they may get immediate access to all content on the web site, given that the combination of a LOA 2 credential plus other factors raises the confidence level in the subject. A deliberately simplified sketch of this "tunable access" idea follows.
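To show the shape of the idea (and nothing more), here is a deliberately simplistic sketch in which LOA is one weighted input among several, and the level of access, not the LOA, is what gets tuned. The weights, thresholds, and access labels are invented for the example; a real deployment would use a proper policy engine and risk analytics.

```powershell
# Illustrative only: the level of access is what gets "tuned" over time,
# while the credential's LOA stays fixed at its issued value.
function Get-AccessLevel {
    param(
        [int]$CredentialLoa,        # 1..4, fixed at credential issuance
        [double]$CanvasConfidence,  # 0.0..1.0, grows as consistent behavior accrues
        [double]$EnvironmentRisk    # 0.0..1.0, e.g. from a risk analytics engine
    )
    $score = ($CredentialLoa * 0.2) + ($CanvasConfidence * 0.5) - ($EnvironmentRisk * 0.2)
    if     ($score -ge 0.70) { 'all-content' }
    elseif ($score -ge 0.50) { 'expanded-content' }
    else                     { 'public-content-only' }
}

# A LOA 1 credential with a strong behavioral history earns expanded access...
Get-AccessLevel -CredentialLoa 1 -CanvasConfidence 0.9 -EnvironmentRisk 0.1   # expanded-content
# ...while a higher-LOA credential needs far less history to reach full access.
Get-AccessLevel -CredentialLoa 3 -CanvasConfidence 0.3 -EnvironmentRisk 0.1   # all-content
```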

This approach leverages the common and accepted understanding of what LOA is, enables usage of existing infrastructure technologies, and properly apportions risk across identity providers and relying parties.

[1] See FICAM Trust Framework Provider Adoption Process (TFPAP). Appendix A for a readable table of the requirements to issue a LOA 1-4 credential
[2] http://www.whitehouse.gov/sites/default/files/omb/memoranda/fy04/m04-04.pdf

Tags:: ABAC | ICAM | LOA | NSTIC
6/12/2011 1:33 AM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, March 13, 2011

In many conversations I have had with folks who potentially have a need for the services of an Identity Oracle, especially as to how it could help with assurances of identity, there is a two-part reaction that I have found to be a very interesting indicator of what we need to focus on as a community to make this real and viable.

The first part of the reaction is typically about the “many security holes” in the concept and the “changes to existing business processes” that are needed to leverage the capability. The second part of the reaction comes a bit later, when we get into discussing identity proofing and bring up the example of US Government PIV cards (Smart Cards issued to US Government employees and contractors) and non-federally issued PIV-I cards, both of which have a transparent, publicly documented, and consistent identity proofing process. The same set of folks turn out to be notably more comfortable with changing their business processes to accept the PIV/PIV-I card as a proxy for identity proofing that has been done by someone else.

What that combination of reactions confirmed for me is that the issue is not about technology/security holes (since the Identity Oracle is a business and NOT a technology) or about changing business practices (since the second reaction requires that change as well), but about the level of comfort and confidence one can place in the relationships between the Identity Oracle and the entities that need to interact with it. I prefer not to use the word “Trust” in this context because the definition is ambiguous at best (see Gunnar Peterson’s “Lets Stop ‘Building Naïveté In’ - Wishing You a Less Trustful 2011” blog post), but would instead like to focus on the contractual aspects of what can be articulated, measured and enforced, as both Gunnar in his blog and Scott David in my earlier “Identity Oracles – A Business and Law Perspective” blog post noted.

This tension between the technical and the business also came up in the reactions (@independentid, @NishantK, @IDinTheCloud, @mgd) to my original post on Identity Oracles, so I would like to explicitly address it in this post.

How does the traditional “pure tech” Identity and/or Attribute Provider operate and what if any are the constraints placed upon it?

From a technical interaction perspective, you have:

  1. Person presents to the Relying Party some token that binds them to a unique identifier
  2. The Relying Party uses that unique identifier to call out to the Identity/Attribute Provider to retrieve attributes of the Person
  3. The Identity/Attribute Provider interacts with Authoritative Sources of information about the Person and returns the requested information to the Relying Party

Now let us look at this from a non-technical interaction perspective:

  • A contractual relationship exists between the Authoritative Sources and the Identity/Attribute Provider
  • A contractual relationship exists between the Identity/Attribute Provider and the Relying Party
  • A contractual relationship exists between the Person and the Relying Party
  • NO contractual relationship exists between the Person and the Identity/Attribute Provider

Privacy Implications

  • The Relying Party typically click-wraps its privacy and information-release terms in its interactions with the Person
  • The identity/attribute provider, as a business entity which needs to make money, is dependent on Relying Parties for its revenue stream
  • The identity/attribute provider, as the entity in the middle, has visibility into the transactions that are conducted by the Person and has significant financial pressure on it to monetize that information by selling it to third parties (or even to the Relying Party). For more information on this extremely sophisticated and lucrative market in private information, please read the recent series of investigative articles from the Wall Street Journal.
  • Given the lack of a contractual relationship between the Person and the Identity/Attribute Provider, the Person has no visibility into, and little to no control over, how this transactional information, which can be used to build a very detailed profile of the person, is used.

How does an Identity Oracle operate and what if any are the constraints placed upon it?

From a technical interaction perspective, you have:

  1. Person establishes a relationship with the Identity Oracle, which verifies their identity and potentially other information about them via its relationship to Authoritative Sources. The Identity Oracle provides the person with token(s) that allow the person to vouch for their relationship with the Identity Oracle in different contexts (Potentially everything from a Smart Card when you need very high assurances of identity to some token that asserts something about the person without revealing who they are)
  2. When the Person needs to conduct a transaction with the Relying Party, they present the appropriate token, which establishes their relationship to the Identity Oracle
  3. The Relying Party asks the Identity Oracle “Am I allowed to offer service X to the Person with a token Y from You under condition Z?”. The Identity Oracle answers “Yes” or “No” (a minimal sketch of this interface follows)
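Here is a minimal sketch of that yes/no surface, assuming a hypothetical Test-OracleDecision function; no real product or API is implied.

```powershell
function Test-OracleDecision {
    param(
        [string]$Service,    # X: the service the Relying Party wants to offer
        [string]$Token,      # Y: the token the Person presented
        [string]$Condition   # Z: the policy condition to evaluate
    )
    # Stub: a real Identity Oracle would resolve $Token against its enrollment
    # records, evaluate $Condition against authoritative-source data it never
    # discloses, and return only the boolean answer.
    $enrolledTokens = @{ 'token-Y' = $true }
    return [bool]$enrolledTokens[$Token]
}

# "Am I allowed to offer service X to the Person with token Y under condition Z?"
Test-OracleDecision -Service 'service-X' -Token 'token-Y' `
                    -Condition 'subject is of legal age in their home jurisdiction'
```

The design point is that no attributes about the Person cross the wire; only the answer does.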

Now let us look at this from a non-technical interaction perspective:

  • A contractual relationship exists between the Authoritative Sources and the Identity Oracle
  • A contractual relationship exists between the Identity Oracle and the Relying Party
  • A contractual relationship exists between the Person and the Relying Party
  • A contractual relationship exists between the Person and Identity Oracle

Privacy Implications

  • The Relying Party typically click-wraps its privacy and information release in its interactions with the Person but in many cases does not need to collect Privacy Sensitive information from the Person
  • The Relying Party can potentially outsource some functions as well as transfer liability for incorrect responses to the Identity Oracle
  • The Identity Oracle, as a business entity which needs to make money, has multiple revenue streams including the Relying Party as well as the Person, not to mention value added services it can offer to the Person
  • The Identity Oracle, as the entity in the middle, has visibility into the transactions that are conducted by the Person BUT is constrained by its contractual relationship with the Person to protect both the transactional information it has visibility into, as well as provide only meta-data about the private information it knows about the Person to Relying Parties

Some of the critical points that bear emphasizing with the Identity Oracle concept are:

  • Privacy protection of both PII information as well as transactional information with visibility and control by the Person
  • Allocation of responsibility and liability across Relying Parties, Identity Oracles and Persons.
  • Ability to conduct transactions ranging from those that require very high assurances of identity to those that are completely anonymous
  • Ability to conduct transactions across multiple modalities including in-person, internet/web, mobile devices and more
  • Ability to leverage existing technologies such as SAML, XACML, Smart Cards, OTPs and more

I hope that this blog post has been helpful in articulating the differences between a traditional identity/attribute provider and the identity oracle, and provides a case for the community to focus more on defining and articulating the contractual and business process aspects of the relationships of the parties involved, while simultaneously working on the supporting technology.


Tags:: Architecture | Security
3/13/2011 2:11 PM Eastern Daylight Time  |  Comments [1]  |  Disclaimer  |  Permalink   
Wednesday, March 02, 2011

Reminder:  The Identity Oracle idea is NOT mine, but I have become convinced that it, or something like it, needs to exist in a healthy Identity Eco-System.  The concept is something that was originally proposed by Bob Blakley and expanded upon by him and others at Gartner/Burton Group.  I am simply trying to gather the information that exists in a variety of places into one cohesive narrative, and adding my own perspective to move the conversation forward on this topic.

One of the aspects of the Identity Oracle is that it is not a technology but a business that proposes to address the relationship between Subjects, Relying Parties and Authoritative Sources of Information via mechanisms such as Contract Law. I am not a lawyer and I do not play one on TV. So when I had questions about the viability of the Identity Oracle from a Law and Business perspective, I pinged Scott David at K&L Gates. Scott and I have ended up at a lot of the same identity focused events in recent months and I have really enjoyed conversing with him about the intersection of Identity, Privacy and Law.  As someone who is passionate about those topics, and works in the domain, he brings a critical insight to this discussion.

My request to Scott was to read my previous blog entry on Identity Oracles and answer if the concept was “… feasible or is it a Utopian vision that is a bridge too far?”  The short version of the answer that I got was:

“I agree with much of the strategy of what you suggest in the blog, but I have some comments on tactics”

But because the long version of his answer is so very thought provoking, I am posting it here, with his permission. I do take some liberties below by commenting on Scott’s words and providing external links to some of his references.

Here is Scott, in his own words:

Anil – The following are my personal comments to your blog entry. They do not reflect the views of my firm (K&L Gates LLP) or any of its clients.

I guess I would say you are "getting warmer," but there are some underlying assumptions on the legal side in the path that you outline that will likely prevent achieving internet scale through the path described.

With some changes in assumptions and design and deployment tactics, however, the market-oriented system that you contemplate can, I think, be built to accommodate the needs of global data/identity systems.

If we treat law as a technology (just as "language" is a "technology") in need of standardization, and look at law from a systems, information science, thermodynamics, AND economic incentives perspective, the following additional points quickly suggest themselves as requiring accommodation in internet scale systems.

1) You are right-on with emphasis on contract law. Massively interoperable systems require Rules standardization (not just technical standardization) on a broad scale. The most system-relevant rules (the only ones on which system users can rely) will be those that are enforceable. Those are called legal duties. They arise two ways: by legislation (regulation or other government action) or contract. There is no single international legal jurisdiction (see Peace of Westphalia - 1648), so legislation and regulation alone cannot drive standardization. The international law is the law of contracts (minimum coverage of treaties aside).

Standardized, enforceable, international contracts involving remote parties dealing in valuable intangibles/data are entered into literally every second . . .that activity takes place in the current financial markets. Existing financial and other market structures offer a great deal of insight into the likely functioning of future data/information/identity services markets. Lots to discuss here.

There is another reason to rely on contract law. Due to the limited reach of US and other sovereign nation legal jurisdiction in this context, neither the US, nor any other country, can "force" adoption of internet scale data/identity rules.

There is a solid advantage for the US (and other jurisdictions that have reliable legal/political systems), however, and it is the same one that permits U.S. financial markets to maintain ascendancy in the world markets (despite recent deflections). It is the strong "system support value" derived from the US tradition of deference to the "rule of law." To the extent that the US and other similar jurisdictions are able to "attach" their ideas (manifested in their local data/identity-system-supporting laws) of how to structure data/identity systems to the broad and deep "trust" that is placed in their respective legal/political systems worldwide, it will enhance the appeal of those systems, and the efficacy and authority of persons and institutions that are responsible for such systems.

It is for this reason, incidentally, that OIX processes were organized based on a variety of US and international trusted, developed "market" models (in a variety of self-regulatory settings), and why they focus on reliable, predictable, transparent processes, etc. Systems that offer the best solutions will enjoy the broadest adoption. Reliability and predictability are currently at a premium due to system fragmentation and so are highly desirable at present. In fact, the data/identity system harm "trifecta," i.e., "privacy," "security," and "liability," can all be seen as merely symptoms of lack of reliability and predictability, due to a lack of standardized legal structure at the core of nascent data/identity markets. Core enforceable legal structure yields reliability, predictability and a form of "trust."

I had never given much thought to this but once Scott articulated this point, the focus on Contract Law which can be international in scope vs Legislation which is local makes sense. There are also familiar elements here regarding the concept of “Comparability” vs. “Compliance” (where the former model is preferred) that Dr. Peter Alterman from NIH has often spoken of in regards to Identity Trust Frameworks.

2) You are correct that it is not a technology issue. I introduced the alliterative concept of "Tools and Rules" early on as a rhetorical device to put laws on par with technology in the discussion (which still takes place mainly among technologists). As a former large software company attorney once said "in the world of software, the contract is the product." He did not intend to diminish the efforts of software programmers, just to call out that providing a customer with a copy of a software product without a license that limits duplication would undermine the business plan (since without the contract, that person could make 1 million copies). Similarly, in the future markets for data/identity services, the contract is the product. This is key (see below).

As a technologist it is sometimes hard for me to admit that the truly challenging problems in the Identity and Trust domain are not technical in nature but in the domain of Policy. To paraphrase the remarks of someone I work with from a recent discussion “We need to get policy right so that we can work the technical issues”.

3) Your discussion is based on a property paradigm. There is much to discuss here. The property paradigm does not scale without first establishing some ground rules.

First, the concept of private property was adopted by the Constitution's framers, who were familiar with the work of Blackstone (who believed that without property laws, every man must act as a "thief"). Those laws work very well where the asset is "rivalrous," i.e., it can only be possessed/controlled by one person. This works for all physical assets. For intangible assets, rivalrousness requires a legal regime (e.g., copyright, patent, etc.) to create the ability to exclude, since there is no asset physicality to "possess" as against all other claimants to the same asset. The analysis is then: what legal regime will work to support the interactions and transactions in the particular intangible assets involved here (be it identified as "data," "information," "identity" etc.). Data is non-rivalrous (see discussion in 5 below).

I believe that this is a "resource management" type situation (like managing riparian, aquifer, fisheries, grazing or other similar rights) that lends itself to that type of legal regime, rather than a traditional "property" regime. In this alternative, the "property" interest held by a party is an "intangible contract right," rather than a direct interest in physical property. That contract right entitles the party to be the beneficiary of one or more duties of other people to perform actions relating to data in a way that benefits the rights holder. For instance, a "relying party" receives greater benefit (and an IDP is more burdened) at LOA 3 than at LOA 2. The "value" of the contract right is measured by the value to the party benefited by the duty.

The resource management structure emphasizes mutual performance promises among stakeholders, rather than underlying property interests. Briefly, consider a river with three types of user groups (40 agricultural (irrigation) users upstream, 2 power plants midstream (cooling), and a city of 100,000 residential water users downstream (consumption and washing, etc.)). Each relies on different qualities of the water (irrigation is for supporting plant metabolism (stomata turgidity, hydrogen source for manufacturing complex carbohydrates in photosynthesis, etc.), power plants use water for its thermal capacity, and residents use it for supporting human metabolism (consumption) and as a fairly "universal solvent" (for washing, etc.)). When there is plenty of water in the river, there is no conflict and each user can use it freely without restriction. When there is too little water, or conflicting usage patterns, there can be conflicting interests. In that situation, it is not property interests, per se, that are applied to resolve the conflicts, but rather mutually agreed upon duties documented in standard agreements that bind all parties to act in ways consistent with the interests of other parties.

Like water, data is a resource that has many different user groups (among them data subjects, relying parties and identity providers), with needs sometimes in conflict. Notably, because data is not a physical resource, the "scarcity" is not due to physical limitation of the resource, but rather is due to the exertion of the rights of other parties to restrict usage (which is indistinguishable legally from a physical restriction).

The property paradigm can be employed for certain forms of intellectual property, such as copyrights, but those systems were not designed to accommodate large "many to many" data transfers. Arrangements such as BMI/ASCAP (which organize music licensing for public radio play, etc.) are needed to help those systems achieve scale.

In any event, there is also a question of ownership where "data" is generated by an interaction (which is most (or all?) of the time). Who "owns" data about my interactions with my friends, me or them? If both parties "own" it, then it is more of a rights regime than a "property" regime as that term is generally understood. Who owns data about my purchase transactions at the supermarket, me or the store? It takes two to tango. We will be able to attribute ownership of data about interactions and relationships to one or the other party (in a non-arbitrary fashion) only when we can also answer the question "who owns a marriage?", i.e., never. You quote Bob Blakley who speaks about "your" information. I take that to be a casual reference to the class of information about someone, rather than an assertion of a right of exclusive possession or control. If it is the latter, it seems inconsistent with the indications that the database will be an "asset" of the Identity Oracle. That separation could be accomplished through a rights regime.

There is also the linguistics-based problem of "non-count nouns." Certain nouns do not have objects associated with them directly. Gold and water are good examples. I don't say "I have a gold." or "I have a water." In order to describe an object, it needs a "container/object convention" ("a gold necklace" or "a glass of water.") Data is a non-count noun. When it is put in a "container" (i.e., when it is observed in a context), it becomes "information." It makes no sense for me to point to a snowbank and say "there is my snowball in that snowbank." Instead, I can pick up a handful of snow (separate it out from the snowbank) and then make that declaration. Similarly, in the era of behavioral advertising, massive data collection and processing, it makes little sense to say, "there is my personal information in that data bank" (unless the data is already correlated in a file in a cohesive way, or is an "inventory control" type number such as an SSN). It takes the act of observation to place data in the information "container."

As a result, it will take more to allow parties to exert any type of "property" interests in data (even those property interests under a contract "rights regime."). First, you need to make a data "snowball" (i.e., observe it into the status of "information") from the mass of data.

The paradigm of resource allocation allows DATA to flow, while permitting rules to measure (and restrict or charge for, etc.) information. When we talk, I will share with you the concept of when limitations, measurement, valuation, monetization might be applied. Briefly, when the data is "observed" by a party, I call it a "recognition" event. That observation will always be in a context (of the observer) and be for that observer's subjective purposes. At the point of observation, data is "elevated" to information (the "Heisenberg synapses" in your brain may be firing at this notion). It is at that point that it is the "difference that makes a difference" (to quote Bateson). The first reference to "difference" is the fact that data is carried by a "state change" in a medium. The second reference to "difference" in the Bateson quote is the fact that the data matters to the observer (it has value either monetarily or otherwise). Anyway, this data/information distinction I think lends itself to a system that can allow data to "flow" but can offer appropriate "measurement" at the point of "use", i.e., observation, that can form the basis of legal structures to value, monetize, limit, restrict, protect, etc. the information that the data contains.

This works well with context-based limitation. Ask me about the example using data held by my banker under Gramm Leach Bliley.

The resource allocation and “non-count nouns” concepts are very interesting to me and are something I need to digest, think about and explore a lot more.

4) Bilateral, individually negotiated agreements won't scale. Standard form agreements are used in every market (financial, stock, commodities, electrical grid) where remote parties desire to render the behavior of other participants more reliable and predictable. Even the standardized legal rules of the Uniform Commercial Code (passed in all 50 states) offer standard provisions as a baseline "virtual interoperable utility" for various sub-elements of larger commercial markets (the UCC provides standard terms associated with sales of goods, commercial paper, negotiable instruments, etc. that have established standard legal duties in the commercial sector since the 1940s. . .and establish broad legal duty interoperability that makes information in the commercial sector "flow").

Standard form agreements permit remote parties without direct contractual privity to be assured about each other's performance of legal duties. This reduces "risk" in the environment of the organism (either individual or entity), since it makes the behavior of other parties more reliable and predictable. This saves costs (since parties don't have to anticipate as many external variables in planning), and so has value to parties. The concept of contract "consideration" is the measure of the value to a party for receiving promises of particular future behavior (legal duties) from another party.

The creation of a "risk-reduction territory" through the assignment of standardized legal duties to broad groups of participants is called a "market" in the commercial sector, it is called a "community" in the social sector, and it is called a "governance structure" in the political sector. Those duties can be established by contract or by legislation/regulation. In the present case (as noted above) contract is the likely route to the establishment of duties. Since all three sectors are using a shared resource, i.e., data, improvement of the reliability, predictability and interoperability in any one of the three sectors will yield benefits for participants in all three sectors. An example of this relationship among user groups is evidenced by the willingness of the government authorities to rely on the commercial sector for development of data/identity Tools and Rules.

Standard form agreements enable the creation of either mediated markets (such as those mediated by banks (match capital accumulation to those with borrowing needs), or brokers (match buy and sell orders), etc.), or unmediated markets (such as the use of standard form mortgages or car loan documents to enable the securitization (reselling) of receivables in those markets).

5) Centralized operation and enforcement won't scale. Steven Wright, the comedian, says that he has "the largest seashell collection in the world, he keeps it on beaches around the earth." This is amusing because it stretches the "ownership" concept beyond our normal understanding. Data is seashells. It will be impossible (or at least commercially unreasonable) to try to vacuum all (or even a large portion of) data into a single entity (whether commercial or governmental).

In fact, on page 90 of Luciano Floridi's book "Information - A very short introduction." (Oxford Press) (strongly recommended), the author notes that information has three main properties that differentiate it from other ordinary goods. Information is "non-rivalrous" (we can both own the same information, but not the same loaf of bread), "non-excludable" (because information is easily disclosed and sharable, it takes energy to protect it - how much energy?. . .see wikileaks issues), and "zero marginal cost" (cost of reproduction is negligible). Of these, the non-excludability characteristic suggests that a distributed "neighborhood watch" type system (more akin to the decentralization we observe in the innate and learned immune systems of animals), offers a path to enforcement that is probably more sound economically, politically, mathematically and thermodynamically than to attempt to centralize operation, control and enforcement. That is not to say that the "control reflex" won't be evidenced by existing commercial and governmental institutions. . .it will; it is simply to suggest that each such entity would be well advised to have "Plan B" at the ready.

This does not mean that data (even as "seashells") cannot be accessed centrally; it can due to the gross interoperability of scaled systems based on standardization of tools and rules. The key is "access rights" that will be based on enforceable, consensus-based agreement (and complementary technology standards). This analysis will naturally expand to topics such as ECPA reform, future 4th amendment jurisprudence and a host of related areas, where group and individual needs are also balanced (but in the political, rather than the commercial user group setting). The analysis of those civil rights/security-related issues will benefit from using a similar analysis to that relied upon for configuration of commercial systems, since both will involve the management of a single "data river" resource, and since the requirements imposed on private persons to cooperate with and assist valid governmental investigations will be applied with respect to the use of such commercial systems.

In this context it is critical to separate out the system harms caused by bad actors (that cause intentional harm), and negligent actors (that cause harm without intention). Intentional actors will not be directly discouraged by the formality of structured access rights, which they will likely violate with impunity just as they do now. The presence of structured, common rules provides an indirect defense against intentional actors, however, since it gives the system "1000 eyes." In other words, since much intentional unauthorized access is caused by fooling people through "social engineering" (in the online context) and "pretexting" (in the telco context), those paths to unauthorized access will be curtailed by a more standardized system that is more familiar to users (who are less likely to be fooled). Security can be built right into the rights, incentives and penalties regime (remind me to tell you about the way they handled the "orange rockfish" problem in one of the pacific fisheries). Again, there is much to discuss here as well.

Also, your business emphasis seems exactly right. Due to the energy requirements to maintain security and system integrity (resist entropy?), the system can only scale if there are incentives and penalties built into the system. Those incentives and penalties need to be administered in a way so that they are distributed throughout the system. The standardized contract model anticipates that. Ultimately, the adoption ("Opt in") curve will be derived from whether or not participation is sufficiently economically compelling for business (in their roles as IDPs, RPs and data subjects), and offers similarly compelling benefits to individuals (in similar roles). This returns the analysis to the "resource management" model.

6) As noted above, there are different user groups that use the same data resources. These include those groups in the gross categories of commercial, social and governmental users. Thus, for example, when I post to a social network a personal comment, that social network may "observe" that posting for commercial purposes. That can be conceived of as a "user group conflict" (depending on the parties’ respective expectations and “rights”) to be resolved by resort to common terms. The good news is that because all user groups are working with a common resource (data), improvement of the structuring for any one user group will have benefits for the other users of the resource as well.

In short, I agree with much of the strategy of what you suggest in the blog, but I have some comments on tactics.

There is a lot of information here, and a number of concepts. While much of it is something that I can map to my domain (lack of scalability of bi-lateral agreements and central enforcement, and more), there are others that I have not had to deal with before, so I am slowly working my way thru them. In either case, I wanted to expose this to the larger community so that it can become part of the conversation that needs to happen on this topic. I, for one, am really looking forward to further conversations with Scott on this topic!

Tags:: Architecture | Security
3/2/2011 10:43 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, February 27, 2011

The concept of the Identity Oracle is something that I have been giving a lot of thought to recently. It has been driven by a combination of factors including current projects, maturity of both policy conversations and technology, as well as a desire to move the art of the possible forward at the intersection of identity and privacy.  My intention is to use this blog post to provide pointers to past conversations on this topic in the community, and to use that as a foundation for furthering the conversation.

When it comes to information about people (who they are, what they are allowed to do, what digital breadcrumbs they leave during their daily travels etc.), there exists in the eco-system both sources of information as well as entities that would find value in utilizing this information for a variety of purposes.  What will be critical to the success of the identity eco-system is to define, as a starting point, the qualities and behavior of the "entity-that-needs-to-exist-in-the-middle" between these authoritative sources of information and consumers of such information.  I believe the Identity Oracle to be a critical piece of that entity.

So, what is an Identity Oracle?

Bob Blakley, currently the Gartner Research VP for Identity and Privacy, coined the phrase "Identity Oracle", and provided a definition in a Burton Catalyst 2006 presentation:

  • An organization which derives all of its profit from collection & use of your private information…
  • And therefore treats your information as an asset…
  • And therefore protects your information by answering questions (i.e. providing meta-identity information) based on your information without disclosing your information…
  • Thus keeping both the Relying Party and you happy, while making money.

That is as succinct a definition as I've seen in the many conversations on this topic since that time, and since I have no desire to re-invent the wheel, this is as good a starting point as any.

The key point to note here is that this is NOT technology but a business, and as such, if there is any hope for this to work, this business needs a viable business model, i.e. something that makes it money. As Bob notes, some of the questions that need to be answered by the current eco-system denizens such as Identity Providers, Attribute Providers and Relying Parties include:

  • Paying for the Identity Provider server and the service it provides.
  • Convincing Relying Parties that they should rely on information provided by a third party (the Identity Provider) rather than maintaining identity attribute information themselves.
  • Assigning liability when a Relying Party asserts that a claimed identity attribute is incorrect.
  • Assigning liability when a subject claims that the wrong identity attribute claim was released to a Relying Party.
  • Making subjects whole when a security failure “leaks” subject identity attributes directly from the Identity Provider.
  • Assigning liability and making subjects whole when a security failure “leaks” subject identity attributes from a Relying Party.

I will add the following to the above list:

  • Making subjects whole when the Identity/Attribute Provider's desire to monetize its visibility into the transactional information across multiple Relying Parties overrides its responsibility to protect the subject's personal information.

As always, whenever something like this is proposed, there is a tendency for technologists to try and map it to technology implementations; in this case, technologies such as Security Token Services, Claims Transformers and Agents, Minimal Disclosure Tokens and Verified Claims. And in the "What the Identity Oracle Isn't" blog post, Bob provides a clear example of why such a technology-focused view is incomplete at best by walking through an example of an Identity Oracle based transaction:

A human – let’s call him “Bob” – signs up for an account with the Identity Oracle.  The Identity Oracle collects some personal information about Bob, and signs a legally binding contract with Bob describing how it will use and disclose the information, and how it will protect the information against uses and disclosures which are not allowed by the contract.  The contract prescribes a set of penalties – if Bob’s information is used in any way which is not allowed by the contract, the Identity Oracle PAYS Bob a penalty: cash money.

When Bob wants to get a service from some giant, impersonal corporation (say “GiCorp”) whose business depends in some way on Bob’s identity, Bob refers GiCorp to the Identity Oracle; GiCorp then goes to the Identity Oracle and asks a question.  The question is NOT a request for Bob’s personal information in any form whatsoever (for example, the question is NOT “What is Bob’s birthdate”). And the Identity Oracle’s response is NOT a “minimal disclosure token” (that is, a token containing Bob’s personal information, but only as much personal information as is absolutely necessary for GiCorp to make a decision about whether to extend the service to Bob – for example a token saying “Bob is over 18”).

Instead, GiCorp’s request looks like this:
“I am allowed to extend service to Bob only if he is above the legal age for this service in the jurisdiction in which he lives.  Am I allowed to extend service to Bob?”

And the Identity Oracle’s response looks like this:
“Yes.”

The Identity Oracle, in normal operation, acts as a trusted agent for the user and does not disclose any personal information whatsoever; it just answers questions based on GiCorp’s stated policies (that is, it distributes only metadata about its users – not the underlying data).

The Identity Oracle charges GiCorp and other relying-party customers money for its services.  The asset on the basis of which the Identity Oracle is able to charge money is its database of personal information.  Because personal information is its only business asset, the Identity Oracle guards personal information very carefully.

Because disclosing personal information to relying-party customers like GiCorp would be giving away its only asset for free, it strongly resists disclosing personal information to its relying-party customers.  In the rare cases in which relying parties need to receive actual personal data (not just metadata) to do their jobs, the Identity Oracle requires its relying-party customers to sign a legally binding contract stating what they are and are not allowed to do with the information.  This contract contains indemnity clauses – if GiCorp signs the contract and then misuses or improperly discloses the personal information it receives from the Identity Oracle about Bob, the contract requires GiCorp to pay a large amount of cash money to the Identity Oracle, which then turns around and reimburses Bob for his loss.

This system provides Bob with much stronger protection than he receives under national privacy laws, which generally do not provide monetary damages for breaches of privacy.  Contract law, however, can provide any penalty the parties (the Identity Oracle and its relying party customers like GiCorp) agree on.  In order to obtain good liability terms for Bob, the Identity Oracle needs to have a valuable asset, to which GiCorp strongly desires access.  This asset is the big database of personal data, belonging to the Identity Oracle, which enables GiCorp to do its business. And allows the Identity Oracle to charge for its services.

The Identity Oracle provides valuable services (privacy protection and transaction enablement) to Bob, but it also provides valuable services to GiCorp and other relying-party customers.  These services are liability limitation (because GiCorp no longer has to be exposed to private data which creates regulatory liability and protection costs for GiCorp) and transaction enablement (because GiCorp can now rely on the Identity Oracle as a trusted agent when making decisions about what services to extend to whom, and it may be able to get the Identity Oracle to assume liability for transactions which fail because the Oracle gave bad advice).

The important take-aways for me from the above are (1) The contextual and privacy preserving nature of the question being asked and answered, (2) the allocation and assumption of liability, as well as the (3) redress mechanisms that rely on contract law rather than privacy legislation.

This approach, I believe, addresses some of the issues that are raised by Aaron Titus in his “NSTIC at a Crossroads” blog post and his concepts around “retail” and “wholesale” privacy in what he refers to as the current Notice and Consent legal regime in the United States.

Currently, one of the things that I am thinking over and having conversations with others about is whether the Fair Information Practice Principles (FIPPs) [Transparency, Individual Participation, Purpose Specification, Data Minimization, Use Limitation, Data Quality and Integrity, Security, Accountability and Auditing], found in Appendix C of the June 2010 DRAFT release of the National Strategy for Trusted Identities in Cyberspace (NSTIC), can be adopted as the core operating principles of an Identity Oracle, and, as noted in the example above, whether those operating principles could be enforced via Contract Law to the benefit of the Identity Eco-System as a whole.


Tags:: Architecture | Security
2/27/2011 6:22 PM Eastern Standard Time  |  Comments [1]  |  Disclaimer  |  Permalink   
Sunday, December 12, 2010

I am doing a bit of research into what it would take to deploy Sharepoint 2010 as a DMZ-facing portal that accepts Federated Credentials.  Here are some materials I’ve come across that may help others who may be doing the same:

From MS PDC10 Presentation “How Microsoft Sharepoint 2010 was built with Windows Identity Foundation”:

Classic Authentication

  • NT Token (Windows Identity) >>> SPUser

Claims-based Authentication

  • NT Token (Windows Identity)
  • ASP.NET Forms Based Authentication (SQL, LDAP, Custom …)
  • SAML 1.1++

All of the above are normalized to a SAML Token (Claims Based Identity) >>> SPUser

More details regarding the above can be found at the MS Technet page on Authentication methods supported in SP2010 Foundation.

Windows Identity Foundation (WIF), which is the RP piece integrated with Sharepoint 2010 (SP2010), does NOT support the SAML protocol. It only supports the WS-Federation Passive profile with SAML tokens for Web SSO.

The alternative, getting SP2010 to work with a SAML2 IdP, requires the deployment and usage of ADFS 2:

  • Configure ADFS 2 as a SAML2 SP that accepts attributes/claims from an external SAML2 IdP
    • Define the SAML2 IdP as a SAML2 Claims Provider within ADFS 2
    • Exchange federation metadata between SAML2 IdP and ADFS 2 SP
  • Configure the WIF based application (i.e. SP2010 application) as a RP which points to ADFS 2.0 as the Sharepoint-STS (SP-STS) to which the web apps externalize Authentication

Of course, this implies that you need to deploy another server in the DMZ that is hosting the ADFS 2 bits.
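
For those who want to see what that looks like outside the wizards, here is a minimal sketch of the ADFS 2 side using the ADFS 2.0 PowerShell snap-in. The names, identifier, endpoint and metadata path are illustrative assumptions, not a prescriptive configuration:

  # Load the ADFS 2.0 administration snap-in
  Add-PSSnapin Microsoft.Adfs.PowerShell

  # Register the external SAML2 IdP as a claims provider within ADFS 2,
  # using metadata exchanged out of band (hypothetical file path)
  Add-ADFSClaimsProviderTrust -Name "External SAML2 IdP" `
      -MetadataFile "C:\metadata\external-idp-metadata.xml"

  # Register the WIF-based SP2010 application as an RP of ADFS 2;
  # the identifier (realm) and WS-Fed endpoint are illustrative
  Add-ADFSRelyingPartyTrust -Name "SP2010 Portal" `
      -Identifier "urn:sharepoint:portal" `
      -WSFedEndpoint "https://portal.example.gov/_trust/"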

In order to configure SP2010 Authentication to work with SAML Tokens:

  1. Export the token-signing certificate from the IP-STS. This certificate is known as the ImportTrustCertificate. Copy the certificate to a server computer in the SharePoint Server 2010 farm.
  2. Define the claim that will be used as the unique identifier of the user. This is known as the identity claim. Many examples of this process use the user e-mail name as the user identifier. Coordinate with the administrator of the IP-STS to determine the correct identifier because only the owner of the IP-STS knows which value in the token will always be unique per user. Identifying the unique identifier for the user is part of the claims-mapping process. Claims mappings are created by using Windows PowerShell (steps 1 through 4 are sketched after this list).
  3. Define additional claims mappings. Define which additional claims from the incoming token will be used by the SharePoint Server 2010 farm. User roles are an example of a claim that can be used to permission resources in the SharePoint Server 2010 farm. All claims from an incoming token that do not have a mapping will be discarded.
  4. Create a new authentication provider by using Windows PowerShell to import the token-signing certificate. This process creates the SPTrustedIdentityTokenIssuer. During this process, you specify the identity claim and additional claims that you have mapped. You must also create and specify a realm that is associated with the first SharePoint Web applications that you are configuring for SAML token-based authentication. After the SPTrustedIdentityTokenIssuer is created, you can create and add more realms for additional SharePoint Web applications. This is how you configure multiple Web applications to use the same SPTrustedIdentityTokenIssuer.
  5. For each realm that is added to the SPTrustedIdentityTokenIssuer, you must create an RP-STS entry on the IP-STS. This can be done before the SharePoint Web application is created. Regardless, you must plan the URL before you create the Web applications.
  6. Create a new SharePoint Web application and configure it to use the newly created authentication provider. The authentication provider will appear as an option in Central Administration when claims mode is selected for the Web application.
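
To make steps 1 through 4 concrete, here is a minimal sketch run from the SharePoint 2010 Management Shell. The certificate path, claim selections, realm and sign-in URL are illustrative assumptions for a notional IP-STS, not values to copy as-is:

  # Step 1: import the token-signing certificate exported from the IP-STS
  # (the path and all names below are hypothetical)
  $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\ip-sts-signing.cer")
  New-SPTrustedRootAuthority -Name "IP-STS Token Signing" -Certificate $cert

  # Steps 2 and 3: the identity claim plus any additional claims mappings;
  # incoming claims without a mapping are discarded by the farm
  $email = New-SPClaimTypeMapping `
      -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" `
      -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
  $role = New-SPClaimTypeMapping `
      -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" `
      -IncomingClaimTypeDisplayName "Role" -SameAsIncoming

  # Step 4: create the authentication provider (the SPTrustedIdentityTokenIssuer),
  # associating a realm with the first web application
  $ap = New-SPTrustedIdentityTokenIssuer -Name "External IP-STS" `
      -Description "SAML token-based provider" `
      -Realm "urn:sharepoint:portal" `
      -ImportTrustCertificate $cert `
      -ClaimsMappings $email,$role `
      -SignInUrl "https://sts.example.gov/adfs/ls/" `
      -IdentifierClaim $email.InputClaimType

  # Additional web applications reuse the same token issuer by adding realms
  $ap.ProviderRealms.Add((New-Object System.Uri("https://teams.example.gov")), "urn:sharepoint:teams")
  $ap.Update()

The last two lines show how, per step 4, additional web applications are configured to use the same SPTrustedIdentityTokenIssuer by adding realms to it.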

You can configure multiple SAML token-based authentication providers. However, you can only use a token-signing certificate once in a farm. All providers that are configured will appear as options in Central Administration. Claims from different trusted STS environments will not conflict.

The SP2010 Authentication Flow then becomes:

  1. User attempts to access a SharePoint web application
  2. User is redirected to the SharePoint STS, which will:
    - Validate the AuthN token (if the user has already been authenticated by the IdP)
    - Augment claims, if need be
  3. The SP-Token is POSTed to the SharePoint web application
  4. Claims are extracted and an IClaimsPrincipal is constructed

I still have a list of outstanding questions I am working thru, some of which are:

  • Regarding the built-in SP-STS for SP2010:
    • What "front-end" protocols are supported by this SP-STS? (WS-Fed Passive Profile only?)
    • Is there any MS "magic sauce" added to this SP-STS that "extends" the standards to make it work with SP2010?
    • Can the built-in SP-STS do direct Authentication of X.509 credentials?
    • Can the built-in SP-STS do just-in-time provisioning of users to SP2010? Is it needed?
  • When using ADFS 2 with SP2010, does ADFS 2 replace the built-in SP-STS or does it work in conjunction with the SP-STS? i.e. if using ADFS 2, can the built-in SP-STS be disabled?
    • Can ADFS 2 do direct Authentication of X.509 credentials?
    • Can ADFS 2 do just-in-time provisioning of users to SP2010? Is it needed?
  • Does this SP-STS need to be ADFS 2.0, or can it be any STS that can do SAML2-to-WS-Fed token transformation on the RP side?
  • If it can be any STS, how do I register a non-Microsoft STS with SP2010? i.e. how do I register it as a "SPTrustedIdentityTokenIssuer"?
  • Where can I find the metadata on the SP2010 side that can be exported to bootstrap the registration of an SP2010 RP app with an external IdP?

Part of the issue I am working thru is the difference in terminology between Microsoft and …everyone else… :-) used to describe the same identity infrastructure components. Walking thru some of the ADFS 2.0 Step-by-Step and How To Guides, especially the ones that show interop configurations with Ping Identity PingFederate and Shibboleth 2, does help, but not as much as I had hoped.  The primary limitation of the guides is that they walk thru the wizard-driven UI configuration without explaining why things are being done, the underlying protocols that are supported, or the implementation choices that are made.


Tags:: Architecture | Security
12/12/2010 3:57 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Tuesday, December 07, 2010

Inputs to access control decisions include information about the subject, information about the resource, environmental/contextual information, and more, often expressed as attributes/claims. But how do you determine what those attributes/claims should be, especially as they relate to information about the subject?

The typical way that I have seen folks handle this is a bottom-up approach: get a whole bunch of the folks who manage and maintain directory services together, lock them in a room, and throw away the key until they can come to some type of agreement on a common set of attributes everyone can live with, based on their knowledge of relying-party applications. This is often not …ah… optimal.

The other approach is to start at the organizational policy level and identify a concrete set of attributes that can fully support the enterprise’s policies. My team was tasked with looking at the latter approach on behalf of the DHS Science and Technology Directorate. The driving force behind it was coming up with a conceptual model that remains relevant not just within an Enterprise but also across Enterprises, i.e. in a Federation.

A couple of my team members, Tom Smith and Maria Vachino, led the effort, which resulted in a formal peer-reviewed paper that they presented at the 2010 IEEE International Conference on Homeland Security [PPTX] last month. The paper is titled “Modeling the Federal User Identity, Credential, and Access Management (ICAM) decision space to facilitate secure information sharing” and can be found on IEEE Xplore.

Abstract:

Providing the right information to the right person at the right time is critical, especially for emergency response and law enforcement operations. Accomplishing this across sovereign organizations while keeping resources secure is a formidable task. What is needed is an access control solution that can break down information silos by securely enabling information sharing with non-provisioned users in a dynamic environment.

Multiple government agencies, including the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) are currently developing Attribute-Based Access Control (ABAC) solutions to do just that. ABAC supports cross-organizational information sharing by facilitating policy-based resource access control. The critical components of an ABAC solution are the governing organizational policies, attribute syntax and semantics, and authoritative sources. The policies define the business objectives and the authoritative sources provide critical attribute attestation, but syntactic and semantic agreement between the information exchange endpoints is the linchpin of attribute sharing. The Organization for the Advancement of Structured Information Standards (OASIS) Security Assertion Markup Language (SAML) standard provides federation partners with a viable attribute sharing syntax, but establishing semantic agreement is an impediment to ABAC efforts. This critical issue can be successfully addressed with conceptual modeling. S&T is sponsoring the following research and development effort to provide a concept model of the User Identity, Credential, and Access Management decision space for secure information sharing.

The paper itself describes the conceptual model, but we have taken the work from the conceptual stage through the development of a logical model, which was then physically implemented using a Virtual Directory that acts as the backend for an Enterprise’s Authoritative Attribute Service.
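
As a small illustration of that last step, here is a sketch of how a relying service might query such a virtual-directory-backed attribute service over LDAP from PowerShell. The host, base DN, filter and attribute names are hypothetical stand-ins, not our actual implementation:

  # Query a notional virtual-directory-backed attribute service over LDAP;
  # host, port, base DN, filter and attribute names are all hypothetical
  $root = New-Object System.DirectoryServices.DirectoryEntry("LDAP://vds.example.gov:389/ou=people,o=agency")
  $searcher = New-Object System.DirectoryServices.DirectorySearcher($root)
  $searcher.Filter = "(uid=jdoe)"
  [void]$searcher.PropertiesToLoad.Add("clearance")
  [void]$searcher.PropertiesToLoad.Add("citizenship")
  $result = $searcher.FindOne()
  if ($result -ne $null) {
      "clearance   : " + $result.Properties["clearance"][0]
      "citizenship : " + $result.Properties["citizenship"][0]
  }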


Tags:: Architecture | Security
12/7/2010 9:28 PM Eastern Standard Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Friday, October 22, 2010

Information Sharing and Cybersecurity are hot button topics in the Government right now, and Identity, Credentialing and Access Management (ICAM) is a core component of both areas. As such, I thought it would be interesting to take a look at how the US Federal Government’s ICAM efforts around identity federation map into the Authentication, Attribute Exposure and Authorization flows that I have blogged about previously.

[As I have noted before, the entries in my blog are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer, except where explicitly stated. As such, what I am about to say is simply my informed opinion and may or may not be what the FICAM Gov't folks intend or believe]

When I think of the components of Identity Federation, I tend to bucket them into the 3 P’s: Protocol, Payload and Policy:

  1. Protocol
    What are the technical means, agreed to by all parties in a federation, by which information is exchanged? This typically involves decisions about interoperability profiles that relate to HTTP, SOAP, SAML, WS-Federation, OpenID, Information Cards etc. In the past I’ve also referred to this as the “Plumbing”. ICAM calls these “Identity Schemes”.

    Federal ICAM Support for Authentication Flows

    Federal ICAM Support for Attribute Exposure Flows

    Federal ICAM Support for Authorization Flows


  2. Payload
    What is carried on the wire? This typically involves attribute contracts that specify how a subject is identified, the additional attributes needed in order to make access control decisions, etc.

    Federal ICAM Support
    ICAM remains agnostic to the payload and leaves it up to the organizations and communities of interest that are utilizing the ICAM profiles to define their attribute contracts.

    In Appendix A of the ICAM Backend Attribute Exchange* (BAE) [PDF] there was an attempt to define the semantics of a Federal Government-wide attribute contract, but none of the attributes are required. Currently a Data Attribute Tiger Team, stood up under the ICAMSC Federation Interoperability Working Group, is working to define multiple attribute contracts that can potentially be used as part of an Attribute Exposure mechanism.
  3. Policy
    The governance processes that are put into place to manage and operate a federation, as well as to adjudicate issues that may come up. In the past I’ve referred to this as “Governance” but now think that Policy is more appropriate.

    Federal ICAM Support
    • Which protocols are supported by ICAM is governed by the FICAM Identity Scheme Adoption Process [PDF]. Currently supported protocols include OpenID, IMI and SAML 2.0.
    • FICAM, thru its Open Identity Initiative, has put into place a layer of abstraction regarding the certification and accreditation of the non-Government Identity Providers (IdPs) allowed to issue credentials that can be used to access Government resources. This layer is known as a Trust Framework Provider, and Trust Framework Providers are responsible for assessing non-Government IdPs. The process by which an organization becomes a Trust Framework Provider is the Trust Framework Provider Adoption Process [PDF]. Currently approved Trust Framework Providers include OIX and Kantara.

* The ICAM Backend Attribute Exchange (BAE) v1.0 [PDF] document that I am linking to here is rather out of date. The architecture components of this document are still valid, but the technical profile pieces have been OBE (Overcome By Events). The ICAMSC Architecture Working Group is currently working on v2 of this document, incorporating the lessons learned from multiple pilots between Government Agencies/Departments as well as implementation experience from COTS vendors such as Layer 7, Vordel and BiTKOO, who have implemented BAE support in their products. Ping me directly if you need further info.

Tags:: Architecture | Security
10/22/2010 2:27 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink   
Sunday, October 10, 2010

After the blog posts on Authentication and Attribute Exposure options in the federation of identities, this post is going to focus on putting it all together for authorization.  The caveats noted in the earlier posts apply here as well.

Authorization – Front Channel Attribute Based Access Control

  • Clear separation of security boundaries
  • Clear separation between Authentication and Authorization
  • Resource B needs attributes of Subject A to make access control decision
  • Resource B accepts Subject A mediating the delivery of attributes from authoritative sources to Resource B

1) Subject A’s attributes are gathered as part of the cross-domain brokered authentication flows

2) Subject A’s attributes are presented as part of one of the cross-domain brokered authentication flows

3) PDP B makes an access control decision based on attributes that have been gathered and presented

  • While Broker A and Attribute Service A are logically separate, physical implementation may combine them.
  • While PDP B is logically separate from Resource B, the physical implementation may be as an externalized PEP or internalized code

An example of this is an IdP or SP initiated Web Browser SSO in which the subject authenticates to an IdP in its own domain and is redirected to the SP. The redirect session contains both an authentication assertion and an attribute assertion. The SP validates the authentication assertion and a PEP/PDP integrated with the SP utilizes the attributes in the attribute assertion to make an access control decision. This, with minor variations, also supports user centric flows using information cards etc.
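
To make the front-channel case concrete, here is a small sketch that decodes a captured SAMLResponse (from the HTTP POST binding) in PowerShell and confirms that it carries both statement types. The capture file path is hypothetical:

  # Decode a captured SAMLResponse (base64, from the HTTP POST binding) and
  # verify that it carries both statement types; the capture path is hypothetical
  $b64 = [System.IO.File]::ReadAllText("C:\temp\SAMLResponse.b64")
  $xml = [xml][System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($b64))

  $ns = New-Object System.Xml.XmlNamespaceManager($xml.NameTable)
  $ns.AddNamespace("saml", "urn:oasis:names:tc:SAML:2.0:assertion")

  # One AuthnStatement = proof of authentication at the IdP in Domain A ...
  $xml.SelectNodes("//saml:AuthnStatement", $ns).Count
  # ... plus the attributes the PEP/PDP at Resource B uses for its decision
  $xml.SelectNodes("//saml:Attribute", $ns) | ForEach-Object { $_.GetAttribute("Name") }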

[Diagrams: Front Channel ABAC authorization flows]

Authorization – Back Channel Attribute Based Access Control

  • Clear separation of security boundaries
  • Clear separation between Authentication and Authorization
  • Resource B needs attributes of Subject A to make access control decision
  • Resource B requires delivery of Subject A’s attributes directly from authoritative sources

Subject A is authenticated using one of the cross-domain brokered authentication flows

1) The access control decision for Subject A’s request has been externalized to PDP B

2) PDP B pulls attributes directly from authoritative sources and makes an access control decision based on the attributes that have been gathered

  • While Broker A and Attribute Service A are logically separate, physical implementation may combine them.
  • While PDP B is logically separate from Resource B, the physical implementation may be as an externalized PEP or internalized code

An example of this flow is a Subject who authenticates in its own domain using an IdP or SP initiated Web Browser SSO or a subject who authenticates using an X.509 based Smart Card to the Resource. Once the subject has been validated, the access control decision is delegated to a PDP which pulls the attributes of the subject directly from authoritative sources using one of the supported Attribute Exposure Flows.
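
As a rough illustration of the back-channel pull, here is a minimal sketch that builds a bare-bones SAML 2.0 AttributeQuery and posts it over SOAP from PowerShell. The endpoint, issuer, NameID and attribute name are hypothetical, and the message signing and mutual TLS that a real deployment (e.g. a BAE-style exchange) would require are omitted for brevity:

  # Build a bare-bones SAML 2.0 AttributeQuery and post it over SOAP.
  # Endpoint, issuer, NameID and attribute name are hypothetical; the XML
  # signature and mutual TLS a real deployment requires are omitted.
  $endpoint = "https://attributes.example.gov/aa/soap"
  $template = '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Body>
      <samlp:AttributeQuery xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
          xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
          ID="{ID}" Version="2.0" IssueInstant="{NOW}">
        <saml:Issuer>urn:example:pdp-b</saml:Issuer>
        <saml:Subject>
          <saml:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent">subject-a</saml:NameID>
        </saml:Subject>
        <saml:Attribute Name="urn:example:attribute:clearance"/>
      </samlp:AttributeQuery>
    </soap:Body>
  </soap:Envelope>'
  $query = $template.Replace("{ID}", "_" + [System.Guid]::NewGuid().ToString("N"))
  $query = $query.Replace("{NOW}", (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ"))

  $wc = New-Object System.Net.WebClient
  $wc.Headers.Add("Content-Type", "text/xml; charset=utf-8")
  $wc.Headers.Add("SOAPAction", '""')
  $response = [xml]$wc.UploadString($endpoint, $query)
  # The returned SOAP body carries a SAML Response whose AttributeStatement
  # feeds PDP B's policy evaluation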

[Diagrams: Back Channel ABAC authorization flows]

Provided the infrastructure exists, there is nothing stopping you from using a combination of both Front Channel and Back Channel mechanisms for ABAC. For example, you may want to give the Subject the option of mediating privacy-related attribute release via the Front Channel, and combine that with enterprise or community-of-interest attributes pulled via the Back Channel mechanisms.


Tags:: Architecture | Security
10/10/2010 9:15 PM Eastern Daylight Time  |  Comments [0]  |  Disclaimer  |  Permalink