My blog has moved and can now be found at http://blog.aniljohn.com
No action is needed on your part if you are already subscribed to this blog via e-mail or its syndication feed.
Sunday, March 13, 2011
In many conversations I have had with folks who potentially have a need for the services of an Identity Oracle, especially as to how it could help with assurances of identity, there is a two-part reaction that I have found to be a very interesting indicator of what we as a community need to focus on to make this concept real and viable.
The first part of the reaction is typically about the “many security holes” in the concept and the “changes to existing business processes” needed to leverage the capability. The second part comes a bit later, when we get into discussing identity proofing and bring up the example of US Government PIV cards (Smart Cards issued to US Government employees and contractors) or non-federally issued PIV-I cards, both of which have a transparent, publicly documented, and consistent identity proofing process. At that point, the same set of folks express a notable level of comfort in potentially changing their business processes to accept the PIV/PIV-I card as a proxy for identity proofing that has been done by someone else.
What that combination of reactions confirmed for me is that the issue is not about technology/security holes (since the Identity Oracle is a business and NOT a technology) or about changing business practices (since the second reaction requires that change as well), but about the level of comfort and confidence one can place in the relationships between the Identity Oracle and the entities that need to interact with it. I prefer not to use the word “Trust” in this context because its definition is ambiguous at best (see Gunnar Peterson’s “Lets Stop ‘Building Naïveté In’ - Wishing You a Less Trustful 2011” blog post); instead I would like to focus on the contractual aspects of what can be articulated, measured and enforced, as both Gunnar in his blog and Scott David in my earlier “Identity Oracles – A Business and Law Perspective” blog post noted.
This tension between the technical and the business also came up in the reactions (@independentid, @NishantK, @IDinTheCloud, @mgd) to my original post on Identity Oracles, so I would like to address it explicitly in this post.
How does the traditional “pure tech” Identity and/or Attribute Provider operate and what if any are the constraints placed upon it?
From a technical interaction perspective, you have:
- Person presents to the Relying Party some token that binds them to a unique identifier
- Relying Party uses that unique identifier to call out to the Identity/Attribute Provider to retrieve attributes of the Person
- The Identity/Attribute Provider interacts with Authoritative Sources of information about the Person and returns the requested information to the Relying Party
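To make that flow concrete, here is a minimal, purely illustrative PowerShell sketch; the function, endpoint-less stub and attribute names are my own placeholders and not any particular product's API. The point is simply that the raw attribute values end up in the Relying Party's hands.

```powershell
# Hypothetical sketch only -- the function and attribute names are
# illustrative placeholders, not any particular product's API.
function Get-SubjectAttributes {
    param([string]$UniqueIdentifier)
    # The Identity/Attribute Provider resolves the identifier against its
    # Authoritative Sources and hands the raw attribute values back to the
    # Relying Party.
    @{
        GivenName   = "Jane"
        Surname     = "Doe"
        BirthDate   = "1980-01-01"
        HomeAddress = "123 Main St"
    }
}

# The Relying Party now holds the Person's actual data and is responsible
# for protecting it.
$attributes = Get-SubjectAttributes -UniqueIdentifier "urn:uuid:1234"
```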
Now let us look at this from a non-technical interaction perspective:
- A contractual relationship exists between the Authoritative Sources and the Identity/Attribute Provider
- A contractual relationship exists between the Identity/Attribute Provider and the Relying Party
- A contractual relationship exists between the Person and the Relying Party
- NO contractual relationship exists between the Person and Identity/Attribute Provider
Privacy Implications
- The Relying Party typically click-wraps its privacy and information release in its interactions with the Person
- The identity/attribute provider, as a business entity which needs to make money, is dependent on Relying Parties for its revenue stream
- The identity/attribute provider, as the entity in the middle, has visibility into the transactions that are conducted by the Person and has significant financial pressure on it to monetize that information by selling it to third parties (or even to the Relying Party). For more information on this extremely sophisticated and lucrative market in private information, please read the recent series of investigative articles from the Wall Street Journal.
- Given the lack of a contractual relationship between the Person and the Identity/Attribute Provider, the Person has little to no visibility into, or control over, how this transactional information, which can be used to build a very detailed profile of the person, is used.
How does an Identity Oracle operate and what if any are the constraints placed upon it?
From a technical interaction perspective, you have:
- Person establishes a relationship with the Identity Oracle, which verifies their identity and potentially other information about them via its relationship to Authoritative Sources. The Identity Oracle provides the person with token(s) that allow the person to vouch for their relationship with the Identity Oracle in different contexts (Potentially everything from a Smart Card when you need very high assurances of identity to some token that asserts something about the person without revealing who they are)
- When the Person needs to conduct a transaction with the Relying Party, they present the appropriate token, which establishes their relationship to the Identity Oracle
- The Relying Party asks the Identity Oracle “Am I allowed to offer service X to the Person with token Y from you under condition Z?”. The Identity Oracle answers “Yes” or “No”
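As a rough sketch of how different that interaction pattern is (hypothetical function and parameter names; the Identity Oracle is a business construct, not a product, so this is purely illustrative), the Relying Party's question and the Oracle's answer reduce to a boolean:

```powershell
# Hypothetical sketch only -- function and parameter names are illustrative.
function Ask-IdentityOracle {
    param(
        [string]$Service,    # X: the service the Relying Party wants to offer
        [string]$Token,      # Y: the token the Person presented
        [string]$Condition   # Z: the condition that must hold (e.g. legal age in the Person's jurisdiction)
    )
    # The Oracle evaluates the question against the information it holds about
    # the Person, but returns only a yes/no answer -- never the underlying data.
    return $true   # or $false
}

# "Am I allowed to offer service X to the Person with token Y under condition Z?"
$allowed = Ask-IdentityOracle -Service "AgeRestrictedService" `
                              -Token "opaque-token-handle" `
                              -Condition "LegalAgeInSubjectJurisdiction"
```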
Now let us look at this from a non-technical interaction perspective:
- A contractual relationship exists between the Authoritative Sources and the Identity Oracle
- A contractual relationship exists between the Identity Oracle and the Relying Party
- A contractual relationship exists between the Person and the Relying Party
- A contractual relationship exists between the Person and Identity Oracle
Privacy Implications
- The Relying Party typically click-wraps its privacy and information release in its interactions with the Person but in many cases does not need to collect Privacy Sensitive information from the Person
- The Relying Party can potentially outsource some functions as well as transfer liability for incorrect responses to the Identity Oracle
- The Identity Oracle, as a business entity which needs to make money, has multiple revenue streams including the Relying Party as well as the Person, not to mention value added services it can offer to the Person
- The Identity Oracle, as the entity in the middle, has visibility into the transactions that are conducted by the Person BUT is constrained by its contractual relationship with the Person to protect both the transactional information it has visibility into, as well as provide only meta-data about the private information it knows about the Person to Relying Parties
Some of the critical points that bear emphasizing with the Identity Oracle concept are:
- Privacy protection of both PII and transactional information, with visibility and control by the Person
- Allocation of responsibility and liability across Relying Parties, Identity Oracles and Persons.
- Ability to conduct transactions ranging from those that require very high assurances of identity to those that are completely anonymous
- Ability to conduct transactions across multiple modalities including in-person, internet/web, mobile devices and more
- Ability to leverage existing technologies such as SAML, XACML, Smart Cards, OTPs and more
I hope that this blog post has been helpful in articulating the differences between a traditional identity/attribute provider and the identity oracle, and provides a case for the community to focus more on defining and articulating the contractual and business process aspects of the relationships of the parties involved, while simultaneously working on the supporting technology.
Wednesday, March 2, 2011
Reminder: The Identity Oracle idea is NOT mine, but I have become convinced that it, or something like it, needs to exist in a healthy Identity Eco-System. The concept is something that was originally proposed by Bob Blakley and expanded upon by him and others at Gartner/Burton Group. I am simply trying to gather the information that exists in a variety of places into one cohesive narrative, and adding my own perspective to move the conversation forward on this topic.
One of the aspects of the Identity Oracle is that it is not a technology but a business that proposes to address the relationship between Subjects, Relying Parties and Authoritative Sources of Information via mechanisms such as Contract Law. I am not a lawyer and I do not play one on TV. So when I had questions about the viability of the Identity Oracle from a Law and Business perspective, I pinged Scott David at K&L Gates. Scott and I have ended up at a lot of the same identity focused events in recent months and I have really enjoyed conversing with him about the intersection of Identity, Privacy and Law. As someone who is passionate about those topics, and works in the domain, he brings a critical insight to this discussion.
My request to Scott was to read my previous blog entry on Identity Oracles and answer if the concept was “… feasible or is it a Utopian vision that is a bridge too far?” The short version of the answer that I got was:
“I agree with much of the strategy of what you suggest in the blog, but I have some comments on tactics”
But because the long version of his answer is so very thought provoking, I am posting it here, with his permission. I do take some liberties below by commenting on Scott’s words and providing external links to some of his references.
Here is Scott, in his own words:
Anil – The following are my personal comments to your blog entry. They do not reflect the views of my firm (K&L Gates LLP) or any of its clients.
I guess I would say you are "getting warmer," but there are some underlying assumptions on the legal side in the path that you outline that will likely prevent achieving internet scale through the path described.
With some changes in assumptions and design and deployment tactics, however, the market-oriented system that you contemplate can, I think, be built to accommodate the needs of global data/identity systems.
If we treat law as a technology (just as "language" is a "technology") in need of standardization, and look at law from a systems, information science, thermodynamics, AND economic incentives perspective, the following additional points quickly suggest themselves as requiring accommodation in internet scale systems.
1) You are right-on with emphasis on contract law. Massively interoperable systems require Rules standardization (not just technical standardization) on a broad scale. The most system-relevant rules (the only ones on which system users can rely) will be those that are enforceable. Those are called legal duties. They arise in two ways: by legislation (regulation or other government action) or by contract. There is no single international legal jurisdiction (see Peace of Westphalia - 1648), so legislation and regulation alone cannot drive standardization. The international law is the law of contracts (minimum coverage of treaties aside).
Standardized, enforceable, international contracts involving remote parties dealing in valuable intangibles/data are entered into literally every second . . .that activity takes place in the current financial markets. Existing financial and other market structures offer a great deal of insight into the likely functioning of future data/information/identity services markets. Lots to discuss here.
There is another reason to rely on contract law. Due to the limited reach of US and other sovereign nation legal jurisdiction in this context, neither the US, nor any other country, can "force" adoption of internet scale data/identity rules.
There is a solid advantage for the US (and other jurisdictions that have reliable legal/political systems), however, and it is the same one that permits U.S. financial markets to maintain ascendancy in the world markets (despite recent deflections). It is the strong "system support value" derived from the US tradition of deference to the "rule of law." To the extent that the US and other similar jurisdictions are able to "attach" their ideas (manifested in their local data/identity-system-supporting laws) of how to structure data/identity systems to the broad and deep "trust" that is placed in their respective legal/political systems worldwide, it will enhance the appeal of those systems, and the efficacy and authority of persons and institutions that are responsible for such systems.
It is for this reason, incidentally, that OIX processes were organized based on a variety of US and international trusted, developed "market" models (in a variety of self-regulatory settings), and why they focus on reliable, predictable, transparent processes, etc. Systems that offer the best solutions will enjoy the broadest adoption. Reliability and predictability are currently at a premium due to system fragmentation and so are highly desirable at present. In fact, the data/identity system harm "trifecta," i.e., "privacy," "security," and "liability," can all be seen as merely symptoms of lack of reliability and predictability, due to a lack of standardized legal structure at the core of nascent data/identity markets. Core enforceable legal structure yields reliability, predictability and a form of "trust."
I had never given much thought to this, but once Scott articulated the point, the focus on Contract Law, which can be international in scope, vs. Legislation, which is local, makes sense. There are also familiar elements here regarding the concept of “Comparability” vs. “Compliance” (where the former model is preferred) that Dr. Peter Alterman from NIH has often spoken of in regard to Identity Trust Frameworks.
2) You are correct that it is not a technology issue. I introduced the alliterative concept of "Tools and Rules" early on as a rhetorical device to put laws on par with technology in the discussion (which still takes place mainly among technologists). As a former large software company attorney once said "in the world of software, the contract is the product." He did not intend to diminish the efforts of software programmers, just to call out that providing a customer with a copy of a software product without a license that limits duplication would undermine the business plan (since without the contract, that person could make 1 million copies). Similarly, in the future markets for data/identity services, the contract is the product. This is key (see below).
As a technologist it is sometimes hard for me to admit that the truly challenging problems in the Identity and Trust domain are not technical in nature but in the domain of Policy. To paraphrase the remarks of someone I work with from a recent discussion “We need to get policy right so that we can work the technical issues”.
3) Your discussion is based on a property paradigm. There is much to discuss here. The property paradigm does not scale without first establishing some ground rules.
First, the concept of private property was adopted by the Constitution's framers who were familiar with the work of Gladstone (who believed that without property laws, every man must act as a "thief"). Those laws work very well where the asset is "rivalrous," i.e., it can only be possessed/ controlled by one person. This works for all physical assets. For intangible assets, rivalrousness requires a legal regime (e.g., copyright, patent, etc. to create the ability to exclude, since there is no asset physicality to "possess" as against all other claimants to the same asset). The analysis is then, what legal regime will work to support the interactions and transactions in the particular intangible assets involved here (be it identified as "data," "information," "identity" etc.). Data is non-rivalrous (see discussion in 5 below).
I believe that this is a "resource management" type situation (like managing riparian, aquifer, fisheries, grazing or other similar rights) that lends itself to that type of legal regime, rather than a traditional "property" regime. In this alternative, the "property" interest held by a party is an "intangible contract right," rather than a direct interest in physical property. That contract right entitles the party to be the beneficiary of one or more duties of other people to perform actions relating to data in a way that benefits the rights holder. For instance, a "relying party" receives greater benefit (and an IDP is more burdened) at LOA 3 than at LOA 2. The "value" of the contract right is measured by the value to the party benefited by the duty.
The resource management structure emphasizes mutual performance promises among stakeholders, rather than underlying property interests. Briefly, consider a river with three types of user groups (40 agricultural (irrigation) users upstream, 2 power plants midstream (cooling), and a city of 100,000 residential water users downstream (consumption and washing, etc.)). Each relies on different qualities of the water (irrigation is for supporting plant metabolism (stomata turgidity, hydrogen source for manufacturing complex carbohydrates in photosynthesis, etc.), power plants use water for its thermal capacity, and residents use it for supporting human metabolism (consumption) and as a fairly "universal solvent" (for washing, etc.)). When there is plenty of water in the river, there is no conflict and each user can use it freely without restriction. When there is too little water, or conflicting usage patterns, there can be conflicting interests. In that situation, it is not property interests, per se, that are applied to resolve the conflicts, but rather mutually agreed upon duties documented in standard agreements that bind all parties to act in ways consistent with the interests of other parties.
Like water, data is a resource that has many different user groups (among them data subjects, relying parties and identity providers), with needs sometimes in conflict. Notably, because data is not a physical resource, the "scarcity" is not due to physical limitation of the resource, but rather is due to the exertion of the rights of other parties to restrict usage (which is indistinguishable legally from a physical restriction).
The property paradigm can be employed for certain forms of intellectual property, such as copyrights, but those systems were not designed to accommodate large "many to many" data transfers. Arrangements such as BMI/ASCAP (which organize music licensing for public radio play, etc.) are needed to help those systems achieve scale.
In any event, there is also a question of ownership where "data" is generated by an interaction (which is most (or all?) of the time). Who "owns" data about my interactions with my friends, me or them? If both parties "own" it, then it is more of a rights regime than a "property" regime as that term is generally understood. Who owns data about my purchase transactions at the supermarket, me or the store? It takes two to tango. We will be able to attribute ownership of data about interactions and relationships to one or the other party (in a non-arbitrary fashion) only when we can also answer the question "who owns a marriage?", i.e., never. You quote Bob Blakley who speaks about "your" information. I take that to be a casual reference to the class of information about someone, rather than an assertion of a right of exclusive possession or control. If it is the latter, it seems inconsistent with the indications that the database will be an "asset" of the Identity Oracle. That separation could be accomplished through a rights regime.
There is also the linguistics-based problem of "non-count nouns." Certain nouns do not have objects associated with them directly. Gold and water are good examples. I don't say "I have a gold" or "I have a water." In order to describe an object, it needs a "container/object convention" ("a gold necklace" or "a glass of water"). Data is a non-count noun. When it is put in a "container" (i.e., when it is observed in a context), it becomes "information." It makes no sense for me to point to a snowbank and say "there is my snowball in that snowbank." Instead, I can pick up a handful of snow (separate it out from the snowbank) and then make that declaration. Similarly, in the era of behavioral advertising, massive data collection and processing, it makes little sense to say, "there is my personal information in that data bank" (unless the data is already correlated in a file in a cohesive way, or is an "inventory control" type number such as an SSN). It takes the act of observation to place data in the information "container."
As a result, it will take more to allow parties to exert any type of "property" interests in data (even those property interests under a contract "rights regime."). First, you need to make a data "snowball" (i.e., observe it into the status of "information") from the mass of data.
The paradigm of resource allocation allows DATA to flow, while permitting rules to measure (and restrict or charge for, etc.) information. When we talk, I will share with you the concept of when limitations, measurement, valuation, and monetization might be applied. Briefly, when the data is "observed" by a party, I call it a "recognition" event. That observation will always be in a context (of the observer) and be for that observer's subjective purposes. At the point of observation, data is "elevated" to information (the "Heisenberg synapses" in your brain may be firing at this notion). It is at that point that it is the "difference that makes a difference" (to quote Bateson). The first reference to "difference" is the fact that data is carried by a "state change" in a medium. The second reference to "difference" in the Bateson quote is the fact that the data matters to the observer (it has value either monetarily or otherwise). Anyway, this data/information distinction I think lends itself to a system that can allow data to "flow" but can offer appropriate "measurement" at the point of "use", i.e., observation, that can form the basis of legal structures to value, monetize, limit, restrict, protect, etc. the information that the data contains.
This works well with context-based limitation. Ask me about the example using data held by my banker under Gramm Leach Bliley.
The resource allocation and “non-count nouns” concepts are very interesting to me and are something I need to digest, think about and explore a lot more.
4) Bilateral, individually negotiated agreements won't scale. Standard form agreements are used in every market (financial, stock, commodities, electrical grid) where remote parties desire to render the behavior of other participants more reliable and predictable. Even the standardized legal rules of the Uniform Commercial Code (passed in all 50 states) offer standard provisions as a baseline "virtual interoperable utility" for various sub-elements of larger commercial markets (the UCC provides standard terms associated with sales of goods, commercial paper, negotiable instruments, etc. that have established standard legal duties in the commercial sector since the 1940s. . .and establish broad legal duty interoperability that makes information in the commercial sector "flow").
Standard form agreements permit remote parties without direct contractual privity to be assured about each other's performance of legal duties. This reduces "risk" in the environment of the organism (either individual or entity), since it makes the behavior of other parties more reliable and predictable. This saves costs (since parties don't have to anticipate as many external variables in planning), and so has value to parties. The concept of contract "consideration" is the measure of the value to a party for receiving promises of particular future behavior (legal duties) from another party.
The creation of a "risk-reduction territory" through the assignment of standardized legal duties to broad groups of participants is called a "market" in the commercial sector, it is called a "community" in the social sector, and it is called a "governance structure" in the political sector. Those duties can be established by contract or by legislation/regulation. In the present case (as noted above) contract is the likely route to the establishment of duties. Since all three sectors are using a shared resource, i.e., data, improvement of the reliability, predictability and interoperability in any one of the three sectors will yield benefits for participants in all three sectors. An example of this relationship among user groups is evidenced by the willingness of the government authorities to rely on the commercial sector for development of data/identity Tools and Rules.
Standard form agreements enable the creation of either mediated markets (such as those mediated by banks (match capital accumulation to those with borrowing needs), or brokers (match buy and sell orders), etc.), or unmediated markets (such as the use of standard form mortgages or car loan documents to enable the securitization (reselling) of receivables in those markets).
5) Centralized operation and enforcement won't scale. Steven Wright, the comedian, says that he has "the largest seashell collection in the world, he keeps it on beaches around the earth." This is amusing because it stretches the "ownership" concept beyond our normal understanding. Data is seashells. It will be impossible (or at least commercially unreasonable) to try to vacuum all (or even a large portion of) data into a single entity (whether commercial or governmental).
In fact, on page 90 of Luciano Floridi's book "Information - A very short introduction." (Oxford Press) (strongly recommended), the author notes that information has three main properties that differentiate it from other ordinary goods. Information is "non-rivalrous" (we can both own the same information, but not the same loaf of bread), "non-excludable" (because information is easily disclosed and sharable, it takes energy to protect it - how much energy?. . .see wikileaks issues), and "zero marginal cost" (cost of reproduction is negligible). Of these, the non-excludability characteristic suggests that a distributed "neighborhood watch" type system (more akin to the decentralization we observe in the innate and learned immune systems of animals), offers a path to enforcement that is probably more sound economically, politically, mathematically and thermodynamically than to attempt to centralize operation, control and enforcement. That is not to say that the "control reflex" won't be evidenced by existing commercial and governmental institutions. . .it will; it is simply to suggest that each such entity would be well advised to have "Plan B" at the ready.
This does not mean that data (even as "seashells") cannot be accessed centrally; it can due to the gross interoperability of scaled systems based on standardization of tools and rules. The key is "access rights" that will be based on enforceable, consensus-based agreement (and complementary technology standards). This analysis will naturally expand to topics such as ECPA reform, future 4th amendment jurisprudence and a host of related areas, where group and individual needs are also balanced (but in the political, rather than the commercial user group setting). The analysis of those civil rights/security-related issues will benefit from using a similar analysis to that relied upon for configuration of commercial systems, since both will involve the management of a single "data river" resource, and since the requirements imposed on private persons to cooperate with and assist valid governmental investigations will be applied with respect to the use of such commercial systems.
In this context it is critical to separate out the system harms caused by bad actors (that cause intentional harm), and negligent actors (that cause harm without intention). Intentional actors will not be directly discouraged by the formality of structured access rights, which they will likely violate with impunity just as they do now. The presence of structured, common rules provides an indirect defense against intentional actors, however, since it gives the system "1000 eyes." In other words, since much intentional unauthorized access is caused by fooling people through "social engineering " (in online context) and "pretexting" (in telco context), those paths to unauthorized access will be curtailed by a more standardized system that is more familiar to users (who are less likely to be fooled). Security can be built right into the rights, incentives and penalties regime (remind me to tell you about the way they handled the "orange rockfish" problem in one of the pacific fisheries). Again, there is much to discuss here as well.
Also, your business emphasis seems exactly right. Due to the energy requirements to maintain security and system integrity (resist entropy?), the system can only scale if there are incentives and penalties built into the system. Those incentives and penalties need to be administered in a way so that they are distributed throughout the system. The standardized contract model anticipates that. Ultimately, the adoption ("Opt in") curve will be derived from whether or not participation is sufficiently economically compelling for business (in their roles as IDPs, RPs and data subjects), and offers similarly compelling benefits to individuals (in similar roles). This returns the analysis to the "resource management" model.
6) As noted above, there are different user groups that use the same data resources. These include those groups in the gross categories of commercial, social and governmental users. Thus, for example, when I post to a social network a personal comment, that social network may "observe" that posting for commercial purposes. That can be conceived of as a "user group conflict" (depending on the parties’ respective expectations and “rights”) to be resolved by resort to common terms. The good news is that because all user groups are working with a common resource (data), improvement of the structuring for any one user group will have benefits for the other users of the resource as well.
In short, I agree with much of the strategy of what you suggest in the blog, but I have some comments on tactics.
There is a lot of information and many concepts here, and while much of it is something that I can map to my domain (the lack of scalability of bilateral agreements and central enforcement, and more), there are others that I have not had to deal with before, so I am slowly working my way thru them. In either case, I wanted to expose this to the larger community so that it can become part of the conversation that needs to happen on this topic. I, for one, am really looking forward to further conversations with Scott on this topic!
Sunday, February 27, 2011
The concept of the Identity Oracle is something that I have been giving a lot of thought to recently. It has been driven by a combination of factors including current projects, maturity of both policy conversations and technology, as well as a desire to move the art of the possible forward at the intersection of identity and privacy. My intention is to use this blog post to provide pointers to past conversations on this topic in the community, and to use that as a foundation for furthering the conversation.
When it comes to information about people (who they are, what they are allowed to do, what digital breadcrumbs they leave during their daily travels etc.), there exists in the eco-system both sources of information as well as entities that would find value in utilizing this information for a variety of purposes. What will be critical to the success of the identity eco-system is to define, as a starting point, the qualities and behavior of the "entity-that-needs-to-exist-in-the-middle" between these authoritative sources of information and consumers of such information. I believe the Identity Oracle to be a critical piece of that entity.
So, what is an Identity Oracle?
Bob Blakley, currently the Gartner Research VP for Identity and Privacy, coined the phrase "Identity Oracle", and provided a definition in a Burton Catalyst 2006 presentation:
- An organization which derives all of its profit from collection & use of your private information…
- And therefore treats your information as an asset…
- And therefore protects your information by answering questions (i.e. providing meta-identity information) based on your information without disclosing your information…
- Thus keeping both the Relying Party and you happy, while making money.
That is as succinct a definition as I've seen in the many conversations on this topic since that time, and since I have no desire to re-invent the wheel, this is as good a starting point as any.
The key point to note here is that this is NOT technology but a business, and as such, if there is any hope for this to work, this business needs a viable business model, i.e. something that makes it money. As Bob notes, some of the questions that need to be answered by the current eco-system denizens such as Identity Providers, Attribute Providers and Relying Parties include:
- Paying for the Identity Provider server and the service it provides.
- Convincing Relying Parties that they should rely on information provided by a third party (the Identity Provider) rather than maintaining identity attribute information themselves.
- Assigning liability when a Relying Party asserts that a claimed identity attribute is incorrect.
- Assigning liability when a subject claims that the wrong identity attribute claim was released to a Relying Party.
- Making subjects whole when a security failure “leaks” subject identity attributes directly from the Identity Provider.
- Assigning liability and making subjects whole when a security failure “leaks” subject identity attributes from a Relying Party.
I will add the following to the above list:
- Making subjects whole when the Identity/Attribute Provider's desire to monetize its visibility into the transactional information across multiple Relying Parties overrides its responsibility to protect the subject's personal information.
As always, whenever something like this is proposed there is a tendency for technologists to try and map it to technology implementations: in this case, technologies such as Security Token Services, Claims Transformers and Agents, Minimal Disclosure Tokens and Verified Claims. And in the "What the Identity Oracle Isn't" blog post, Bob provides a clear example of why such a technology-focused view is incomplete at best by walking through an example of an Identity Oracle based transaction:
A human – let’s call him “Bob” – signs up for an account with the Identity Oracle. The Identity Oracle collects some personal information about Bob, and signs a legally binding contract with Bob describing how it will use and disclose the information, and how it will protect the information against uses and disclosures which are not allowed by the contract. The contract prescribes a set of penalties – if Bob’s information is used in any way which is not allowed by the contract, the Identity Oracle PAYS Bob a penalty: cash money.
When Bob wants to get a service from some giant, impersonal corporation (say “GiCorp”) whose business depends in some way on Bob’s identity, Bob refers GiCorp to the Identity Oracle; GiCorp then goes to the Identity Oracle and asks a question. The question is NOT a request for Bob’s personal information in any form whatsoever (for example, the question is NOT “What is Bob’s birthdate”). And the Identity Oracle’s response is NOT a “minimal disclosure token” (that is, a token containing Bob’s personal information, but only as much personal information as is absolutely necessary for GiCorp to make a decision about whether to extend the service to Bob – for example a token saying “Bob is over 18”).
Instead, GiCorp’s request looks like this:
“I am allowed to extend service to Bob only if he is above the legal age for this service in the jurisdiction in which he lives. Am I allowed to extend service to Bob?”
And the Identity Oracle’s response looks like this:
“Yes.”
The Identity Oracle, in normal operation, acts as a trusted agent for the user and does not disclose any personal information whatsoever; it just answers questions based on GiCorp’s stated policies (that is, it distributes only metadata about its users – not the underlying data).
The Identity Oracle charges GiCorp and other relying-party customers money for its services. The asset on the basis of which the Identity Oracle is able to charge money is its database of personal information. Because personal information is its only business asset, the Identity Oracle guards personal information very carefully.
Because disclosing personal information to relying-party customers like GiCorp would be giving away its only asset for free, it strongly resists disclosing personal information to its relying-party customers. In the rare cases in which relying parties need to receive actual personal data (not just metadata) to do their jobs, the Identity Oracle requires its relying-party customers to sign a legally binding contract stating what they are and are not allowed to do with the information. This contract contains indemnity clauses – if GiCorp signs the contract and then misuses or improperly discloses the personal information it receives from the Identity Oracle about Bob, the contract requires GiCorp to pay a large amount of cash money to the Identity Oracle, which then turns around and reimburses Bob for his loss.
This system provides Bob with much stronger protection than he receives under national privacy laws, which generally do not provide monetary damages for breaches of privacy. Contract law, however, can provide any penalty the parties (the Identity Oracle and its relying party customers like GiCorp) agree on. In order to obtain good liability terms for Bob, the Identity Oracle needs to have a valuable asset, to which GiCorp strongly desires access. This asset is the big database of personal data, belonging to the Identity Oracle, which enables GiCorp to do its business. And allows the Identity Oracle to charge for its services.
The Identity Oracle provides valuable services (privacy protection and transaction enablement) to Bob, but it also provides valuable services to GiCorp and other relying-party customers. These services are liability limitation (because GiCorp no longer has to be exposed to private data which creates regulatory liability and protection costs for GiCorp) and transaction enablement (because GiCorp can now rely on the Identity Oracle as a trusted agent when making decisions about what services to extend to whom, and it may be able to get the Identity Oracle to assume liability for transactions which fail because the Oracle gave bad advice).
The important take-aways for me from the above are (1) The contextual and privacy preserving nature of the question being asked and answered, (2) the allocation and assumption of liability, as well as the (3) redress mechanisms that rely on contract law rather than privacy legislation.
This approach, I believe, addresses some of the issues that are raised by Aaron Titus in his “NSTIC at a Crossroads” blog post and his concepts around “retail” and “wholesale” privacy in what he refers to as the current Notice and Consent legal regime in the United States.
Currently, one of the things that I am thinking over and having conversations with others about is whether it makes sense for the Fair Information Practice Principles (FIPPs) [Transparency, Individual Participation, Purpose Specification, Data Minimization, Use Limitation, Data Quality and Integrity, Security, Accountability and Auditing], found in Appendix C of the June 2010 DRAFT release of the National Strategy for Trusted Identities in Cyberspace (NSTIC), to be adopted as the core operating principles of an Identity Oracle, and, as noted in the example above, whether these operating principles could be enforced via Contract Law to the benefit of the Identity Eco-System as a whole.
Sunday, December 12, 2010
I am doing a bit of research into what it would take to deploy Sharepoint 2010 as a DMZ facing portal that accepts Federated Credentials. Here are some materials I’ve come across that may help others who may be doing the same:
From MS PDC10 Presentation “How Microsoft Sharepoint 2010 was built with Windows Identity Foundation”:
Classic Authentication:
- NT Token (Windows Identity)
>>> SPUser

Claims-based Authentication:
- NT Token (Windows Identity)
- ASP.NET Forms Based Authentication (SQL, LDAP, Custom …)
- SAML 1.1+
>>> SAML Token (Claims Based Identity)
>>> SPUser
More details regarding the above can be found at the MS Technet page on Authentication methods supported in SP2010 Foundation.
Windows Identity Foundation (WIF), which is the RP piece integrated with Sharepoint 2010 (SP2010), does NOT support the SAML protocol. It only supports the WS-Federation Passive profile with SAML tokens for Web SSO.
The alternative, to get SP2010 to work with a SAML2 IdP, requires the deployment and usage of ADFS 2:
- Configure ADFS 2 as a SAML2 SP that accepts attributes/claims from an external SAML2 IdP
- Define the SAML2 IdP as a SAML2 Claims Provider within ADFS 2
- Exchange federation metadata between SAML2 IdP and ADFS 2 SP
- Configure the WIF based application (i.e. the SP2010 application) as an RP which points to ADFS 2.0 as the Sharepoint-STS (SP-STS) to which the web apps externalize Authentication
Of course, this implies that you need to deploy another server in the DMZ that is hosting the ADFS 2 bits.
In order to configure SP2010 Authentication to work with SAML Tokens:
- Export the token-signing certificate from the IP-STS. This certificate is known as the ImportTrustCertificate. Copy the certificate to a server computer in the SharePoint Server 2010 farm.
- Define the claim that will be used as the unique identifier of the user. This is known as the identity claim. Many examples of this process use the user e-mail name as the user identifier. Coordinate with the administrator of the IP-STS to determine the correct identifier because only the owner of the IP-STS knows which value in the token will always be unique per user. Identifying the unique identifier for the user is part of the claims-mapping process. Claims mappings are created by using Windows PowerShell.
- Define additional claims mappings. Define which additional claims from the incoming token will be used by the SharePoint Server 2010 farm. User roles are an example of a claim that can be used to permission resources in the SharePoint Server 2010 farm. All claims from an incoming token that do not have a mapping will be discarded.
- Create a new authentication provider by using Windows PowerShell to import the token-signing certificate. This process creates the SPTrustedIdentityTokenIssuer. During this process, you specify the identity claim and additional claims that you have mapped. You must also create and specify a realm that is associated with the first SharePoint Web applications that you are configuring for SAML token-based authentication. After the SPTrustedIdentityTokenIssuer is created, you can create and add more realms for additional SharePoint Web applications. This is how you configure multiple Web applications to use the same SPTrustedIdentityTokenIssuer.
- For each realm that is added to the SPTrustedIdentityTokenIssuer, you must create an RP-STS entry on the IP-STS. This can be done before the SharePoint Web application is created. Regardless, you must plan the URL before you create the Web applications.
- Create a new SharePoint Web application and configure it to use the newly created authentication provider. The authentication provider will appear as an option in Central Administration when claims mode is selected for the Web application.
You can configure multiple SAML token-based authentication providers. However, you can only use a token-signing certificate once in a farm. All providers that are configured will appear as options in Central Administration. Claims from different trusted STS environments will not conflict.
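For the PowerShell portion of the steps above, a minimal sketch of creating the claims mappings and the SPTrustedIdentityTokenIssuer looks roughly like the following; the certificate path, the claim types chosen, the realm, the issuer name and the sign-in URL are all placeholders for your own environment.

```powershell
# Sketch of the PowerShell steps described above. The certificate path, claim
# types, realm, issuer name and sign-in URL are environment-specific placeholders.
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\ip-sts-token-signing.cer")
New-SPTrustedRootAuthority -Name "IP-STS Token Signing" -Certificate $cert

# Claims mappings: e-mail address as the identity claim, plus a role claim
$emailClaim = New-SPClaimTypeMapping `
    -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" `
    -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
$roleClaim = New-SPClaimTypeMapping `
    -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" `
    -IncomingClaimTypeDisplayName "Role" -SameAsIncoming

# The authentication provider (SPTrustedIdentityTokenIssuer), specifying the
# realm of the first web application and the IP-STS sign-in endpoint
New-SPTrustedIdentityTokenIssuer -Name "FederatedIdP" -Description "External IP-STS" `
    -Realm "urn:sharepoint:webapp1" `
    -ImportTrustCertificate $cert `
    -ClaimsMappings $emailClaim,$roleClaim `
    -SignInUrl "https://sts.example.org/FederationPassive/" `
    -IdentifierClaim $emailClaim.InputClaimType
```

Once the SPTrustedIdentityTokenIssuer exists, it shows up as an authentication provider choice when a web application is created in claims mode in Central Administration, as described above.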
The SP2010 Authentication Flow then becomes:
- User attempts to access Sharepoint web application
- User redirected to Sharepoint STS
- Validate AuthN token (if the user has already been authenticated with the IdP)
- Augment claims, if need be
- Post token {SP-Token} to the Sharepoint web application
- Extract Claims and construct IClaimsPrincipal
I still have a list of outstanding questions I am working thru, some of which are:
- What "front-end" protocols are supported by this SP-STS? (WS-Fed Passive Profile only?)
- Is there any MS "magic sauce" added to this SP-STS that "extends" the standards to make it work with SP2010?
- Can the built-in SP-STS do direct Authentication of X.509 credentials?
- Can the built-in SP-STS do just in time provisioning of users to SP2010? Is it needed?
- When using ADFS 2 with SP2010, does ADFS 2 replace the built-in SP-STS or does it work in conjunction with the SP-STS? i.e. if using ADFS 2, can the built-in SP-STS be disabled?
- Can ADFS 2 do direct Authentication of X.509 credentials?
- Can ADFS 2 do just in time provisioning of users to SP2010? Is it needed?
- Does this SP-STS need to be ADFS 2.0 or can it be any STS that can do SAML2 to WS-Fed token transformation on the RP side?
- If it can be any STS, how do I register a non-Microsoft STS w/ SP2010? i.e. How do I register it as a "SPTrustedIdentityTokenIssuer"
- Where can I find the metadata on the SP2010 side that can be exported to bootstrap the registration of a SP2010 RP App with an external IdP?
Part of the issue I am working thru is the difference in terminology between Microsoft and …everyone else… that is used to describe the same identity infrastructure components. Walking thru some of the ADFS 2.0 Step-by-Step and How To Guides, especially the ones that show interop configurations with Ping Identity Pingfederate and Shibboleth 2, does help, but not as much as I had hoped. The primary limitation of the guides is that they walk thru the wizard-driven UI configuration without explaining why things are being done or providing explanations of the underlying protocols that are supported and the implementation choices that are made.
Tuesday, December 7, 2010
Inputs to access control decisions are based on information about the subject, information about the resource, environmental/contextual information, and more, often expressed as attributes/claims. But how do you determine what those attributes/claims should be, especially as it relates to information about the subject?
The typical way that I have seen folks handle this is a bottom-up approach: get a whole bunch of folks who manage and maintain directory services, lock them in a room, and throw away the key until they can come to some type of agreement on a common set of attributes everyone can live with, based on their knowledge of relying party applications. This is often not …ah… optimal.
The other approach is to start at the organizational policy level and identify a concrete set of attributes that can fully support the enterprise’s policies. My team was tasked with looking at the latter approach on behalf of the DHS Science and Technology Directorate. The driving force behind it was coming up with a conceptual model that remains relevant not just within an Enterprise but also across them i.e. in a Federation.
A couple of my team members, Tom Smith and Maria Vachino, led the effort, which resulted in a formal peer-reviewed paper that they presented at the 2010 IEEE International Conference on Homeland Security [PPTX] last month. The actual paper is titled “Modeling the Federal User Identity, Credential, and Access Management (ICAM) decision space to facilitate secure information sharing” and can be found on IEEExplore.
Abstract:
Providing the right information to the right person at the right time is critical, especially for emergency response and law enforcement operations. Accomplishing this across sovereign organizations while keeping resources secure is a formidable task. What is needed is an access control solution that can break down information silos by securely enabling information sharing with non-provisioned users in a dynamic environment.
Multiple government agencies, including the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) are currently developing Attribute-Based Access Control (ABAC) solutions to do just that. ABAC supports cross-organizational information sharing by facilitating policy-based resource access control. The critical components of an ABAC solution are the governing organizational policies, attribute syntax and semantics, and authoritative sources. The policies define the business objectives and the authoritative sources provide critical attribute attestation, but syntactic and semantic agreement between the information exchange endpoints is the linchpin of attribute sharing. The Organization for the Advancement of Structured Information Standards (OASIS) Security Assertion Markup Language (SAML) standard provides federation partners with a viable attribute sharing syntax, but establishing semantic agreement is an impediment to ABAC efforts. This critical issue can be successfully addressed with conceptual modeling. S&T is sponsoring the following research and development effort to provide a concept model of the User Identity, Credential, and Access Management decision space for secure information sharing.
The paper itself describes the conceptual model, but we have taken the work from the conceptual stage to the development of a logical model, which was then physically implemented using a Virtual Directory which acts as the backend for an Enterprise’s Authoritative Attribute Service.
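As a purely illustrative sketch of what the physical implementation implies (the host, base DN, filter and attribute names below are made-up placeholders, not the actual deployment), a consumer such as a policy decision point could pull a subject's attributes from the virtual-directory-backed attribute service over LDAP:

```powershell
# Illustrative only: query a virtual-directory-backed authoritative attribute
# service over LDAP for a subject's attributes. The host, base DN, filter and
# attribute names are placeholders.
$root = New-Object System.DirectoryServices.DirectoryEntry("LDAP://vds.example.gov/ou=people,dc=example,dc=gov")
$searcher = New-Object System.DirectoryServices.DirectorySearcher($root)
$searcher.Filter = "(uid=jdoe)"
[void]$searcher.PropertiesToLoad.Add("employeeType")
[void]$searcher.PropertiesToLoad.Add("clearance")

# Retrieve the single matching entry and read back the attributes of interest
$result = $searcher.FindOne()
$result.Properties["clearance"]
```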
Friday, October 22, 2010
Information Sharing and Cybersecurity are hot button topics in the Government right now and Identity, Credentialing and Access Management are a core component of both those areas. As such, I thought it would be interesting to take a look at how the US Federal Government’s Identity, Credentialing and Access Management (ICAM) efforts around identity federation map into the Authentication, Attribute Exposure and Authorization flows that I have blogged about previously.
[As I have noted before, the entries in my blog are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer, except where explicitly stated. As such, what I am about to say is simply my informed opinion and may or may not be what the FICAM Gov't folks intend or believe]
When I think of the components of Identity Federation, I tend to bucket them into the 3 P’s; Protocol, Payload and Policy:
- Protocol
What are the technical means agreed to by all parties in a federation by which information is exchanged? This will typically involve decisions regarding choices and interoperability profiles that relate to HTTP, SOAP, SAML, WS-Federation, OpenID, Information Cards etc. In the past I’ve also referred to this as the “Plumbing”. ICAM calls these “Identity Schemes”.
Federal ICAM Support for Authentication Flows
Federal ICAM Support for Attribute Exposure Flows
Federal ICAM Support for Authorization Flows
- Payload
What is carried on the wire? This typically involves attribute contracts that define how a subject may be defined, the additional attributes needed in order to make access control decisions etc.
Federal ICAM Support
ICAM remains agnostic to the payload and leaves it up to the organizations and communities of interest that are utilizing the ICAM profiles to define their attribute contracts.
In Appendix A of the ICAM Backend Attribute Exchange* (BAE) [PDF] there was an attempt to define the semantics of a Federal Government-wide Attribute Contract, but none of the attributes are required. Currently there is a Data Attribute Tiger Team that has been stood up under the ICAMSC Federation Interoperability Working Group, which is working to define multiple attribute contracts that can potentially be used as part of an Attribute Exposure mechanism.
- Policy
The governance processes that are put into place to manage and operate a federation as well as adjudicate issues that may come up. In the past I’ve referred to this as “Governance” but thought that Policy may be much more appropriate.
Federal ICAM Support
- Which protocols are supported by ICAM is governed by the FICAM Identity Scheme Adoption Process [PDF]. Currently supported protocols include OpenID, IMI and SAML 2.0.
- FICAM, thru its Open Identity Initiative, has put into place a layer of abstraction regarding the certification and accreditation of non-Government Identity Providers allowed to issue credentials that can be utilized to access Government resources. This layer is known as a Trust Framework Provider. The Trust Framework Providers are responsible for assessing non-Government Identity Providers (IdPs). The process by which an organization becomes a Trust Framework Provider is known as the Trust Framework Provider Adoption Process [PDF]. Currently supported Trust Framework Providers include OIX and Kantara.
* The ICAM Backend Attribute Exchange (BAE) v1.0 [PDF] document that I am linking to here is rather out of date. The Architecture components of this document are still valid, but the technical profile pieces have been OBE (Overcome By Events) and are significantly out of date. The ICAMSC Architecture Working Group is currently working on v2 of this document, incorporating the lessons learned from multiple pilots between Government Agencies/Departments as well as implementation experience from COTS vendors such as Layer 7, Vordel and BiTKOO who have implemented BAE support in their products. Ping me directly if you need further info.
Sunday, October 10, 2010
After the blog posts on Authentication and Attribute Exposure options in the federation of identities, this post is going to focus on putting it all together for authorization. The caveats noted in the earlier posts apply here as well.
Authorization – Front Channel Attribute Based Access Control
- Clear separation of security boundaries
- Clear separation between Authentication and Authorization
- Resource B needs attributes of Subject A to make access control decision
- Resource B accepts Subject A mediating the delivery of attributes from authoritative sources to Resource B
1) Subject A's attributes are gathered as part of one of the cross-domain brokered authentication flows
2) Subject A's attributes are presented as part of one of the cross-domain brokered authentication flows
3) PDP B makes an access control decision based on the attributes that have been gathered and presented
- While Broker A and Attribute Service A are logically separate, a physical implementation may combine them.
- While PDP B is logically separate from Resource B, the implementation may be as an externalized PEP or internalized code.
An example of this is an IdP- or SP-initiated Web Browser SSO in which the subject authenticates to an IdP in its own domain and is redirected to the SP. The redirect session contains both an authentication assertion and an attribute assertion. The SP validates the authentication assertion, and a PEP/PDP integrated with the SP utilizes the attributes in the attribute assertion to make an access control decision. This, with minor variations, also supports user-centric flows using Information Cards, etc.
Authorization – Back Channel Attribute Based Access Control
- Clear separation of security boundaries
- Clear separation between Authentication and Authorization
- Resource B needs attributes of Subject A to make access control decision
- Resource B requires delivery of Subject A's attributes directly from authoritative sources
Subject A is authenticated using one of the cross-domain brokered authentication flows
1) The access control decision for Subject A has been externalized to PDP B
2) PDP B pulls attributes directly from authoritative sources and makes an access control decision based on the attributes that have been gathered
- While Broker A and Attribute Service A are logically separate, a physical implementation may combine them.
- While PDP B is logically separate from Resource B, the implementation may be as an externalized PEP or internalized code.
An example of this flow is a Subject who authenticates in its own domain using an IdP or SP initiated Web Browser SSO or a subject who authenticates using an X.509 based Smart Card to the Resource. Once the subject has been validated, the access control decision is delegated to a PDP which pulls the attributes of the subject directly from authoritative sources using one of the supported Attribute Exposure Flows. |
Provided the infrastructure exists, there is nothing stopping you from using a combination of both Front Channel and Back Channel mechanisms for ABAC. For example, you may want to have the option of the Subject mediating privacy related attribute release via the Front Channel and combine that with Enterprise or Community of Interest Type attributes pulled via the Back Channel mechanisms.
Sunday, October 3, 2010
Continuing my series of blog posts on the options available in federating identities, which I started with Authentication, I am going to try and map out some options that are available when exposing attributes.
As noted in my earlier post on Authentication, the following caveats apply:
- This is conceptual in nature
- Implementation choices, whether they are architectural or technology, may drive the separation or co-location of some of the conceptual entities noted in the pictures
- Still a work in progress…
Attribute Exposure – Organizational Query
- Clear separation of security boundaries.
- One or more authoritative sources of attributes for the Subject exist in the same Trust Domain
- Trust relationship between Resource B and Attribute Service A set up before-hand and out-of-band
1) Subject A has been authenticated in Trust Domain B
2) Resource B recognizes Subject A as being from outside its domain and utilizes attributes from Attribute Service A
Attribute Exposure – Single Point of Query 1
- Clear separation of security boundaries.
- One or more authoritative sources of attributes for the Subject exist in multiple Trust Domains
- Trust relationship between Resource B and Attribute Aggregator A set up before-hand and out-of-band
- Attribute Aggregator A has knowledge and trust relationships with attribute sources both inside and outside its trust domain
1) Subject A has been authenticated in Trust Domain B
2) Resource B recognizes Subject A as being from outside its domain and utilizes attributes from Attribute Aggregator A
3-4) Attribute Aggregator A aggregates Subject A’s attributes from multiple authoritative sources, wherever they may reside
Attribute Exposure – Single Point of Query 2
- Clear separation of security boundaries
- One or more authoritative sources of attributes for the Subject exist in multiple Trust Domains
- Resource B has outsourced attribute gathering to Attribute Aggregator B
- Attribute Aggregator B has knowledge and trust relationships with multiple attribute sources
1) Subject A has been authenticated in Trust Domain B
2) Resource B recognizes Subject A as being from outside its domain and utilizes attributes from Attribute Aggregator B
3-4) Attribute Aggregator B aggregates Subject A’s attributes from multiple authoritative sources, wherever they may reside
I am most ambivalent about this flow because of the complexity of the moving pieces involved:
- The multiple trust relationships that need to be managed by the attribute aggregator
- The attribute aggregator must “know” where to go to get the attributes, but given that the subject is from a separate domain and the aggregator may not have a close enough relationship with the subject, would it really know where to find them?
Attribute Exposure – Identity Oracle
- Clear separation of security boundaries
- One or more authoritative sources of attributes for the Subject exist in multiple Trust Domains
- Resource B has engaged the services of an Identity Oracle
- Identity Oracle has close relationship with multiple Authoritative Attribute Sources
1) Subject A has been authenticated in Trust Domain B
2) Resource B recognizes Subject A as being from outside its domain and asks the appropriate question of the Identity Oracle
3-4) The Identity Oracle obtains the relevant Subject A attributes from multiple authoritative sources and answers the question
I am being very careful about word choices here because this is at the conceptual level and not at the implementation level. For example, I am particular about using the phrase “utilizes attributes from …” rather than “requests attributes from …” so that the flows can accommodate both “front-channel” and “back-channel” attribute passing. In the “Organizational Query” flow, for instance, the physical implementation could be either a federation Web SSO option that delivers the attributes to the Relying Party/Service Provider as a browser-based SAML attribute assertion, or a PDP integrated with the Relying Party/Service Provider that requests the attributes via a SOAP call to the Attribute Service.
Comments are welcome and would be very much appreciated.
Sunday, September 19, 2010
In some of the conversations I’ve had recently, there has occasionally been a sense of confusion around the options available in federating identities, the separation of concerns between authentication and authorization, as well as the choices in how attributes can be passed to applications to make access control decisions.
I am in the process of putting together some material to convey the various options available to us in the current state of technology. I am starting with authentication. Some caveats:
- This is conceptual in nature
- Implementation choices, whether they are architectural or technology, may drive the separation or co-location of some of the conceptual entities noted in the pictures
- Still a work in progress…
First a definition: A Domain is a realm of administrative autonomy, authority, or control for subjects and objects in a computing environment. For the purposes of this discussion, a Trust Domain defines the environment in which a single authority is trusted to validate the credentials presented during authentication. (Thanks Russ!)
Authentication – Direct (Single Trust Domain)
1) The Subject attempts to access the Resource and presents a credential
2) The Resource, prior to authenticating the claimed identity presented in the credential, checks the validity of the credential. This could include: (a) Is the credential issued from a source I trust? (b) Has the credential expired? (c) Has the credential been revoked?
Once the validity of the credential is satisfied, the Resource authenticates the Subject by verifying that the Subject can prove association to the asserted identity in the credential.
Once authenticated, the Resource should then verify that the identity has authorized access to the requested resource, based on existing security policy.
Authentication – Brokered (Single Trust Domain)
1 and 2) The Subject presents a credential to the Broker. The Broker, prior to authenticating the claimed identity presented in the credential, checks the validity of the credential. This could include: (a) Is the credential issued from a source I trust? (b) Has the credential expired? (c) Has the credential been revoked? Once the validity of the credential is satisfied, the Broker authenticates the Subject by verifying that the Subject can prove association to the asserted identity in the credential. Once this is done, the Subject receives a token with proof-of-authentication.
3) The Subject attempts to access the Resource and presents the token from the Broker
4) The Resource validates the Subject’s token
Once validated, the Resource should then verify that the identity has authorized access to the requested resource, based on existing security policy.
Types of Security Tokens:
- SAML Assertion
- Kerberos ticket
- Username token
- X.509 token
- WAM Session Token
- Custom
Authentication – Direct (Cross-Domain/Federated)
This beastie does not exist!
Authentication – Brokered I (Cross-Domain/Federated)
- Clear separation of security boundaries.
- Resource B only accepts identity information vouched for by Broker B.
- Dependency between Subject A and Broker B; If Broker B requires X.509 Certificates as a token, Subject A must have the ability to handle X.509 Certificates
- Trust between Broker A and Broker B is usually set up before-hand and out-of-band.
1) Subject A presents a credential to Broker A. Broker A, prior to authenticating the claimed identity presented in the credential, checks the validity of the credential. This could include: (a) Is the credential issued from a source I trust? (b) Has the credential expired? (c) Has the credential been revoked? Once the validity of the credential is satisfied, the Broker authenticates the Subject by verifying that the Subject can prove association to the asserted identity in the credential. Once this is done, Subject A receives a token with proof-of-authentication
2) Subject A presents the token to Broker B; given that Broker B trusts tokens issued by Broker A, Broker B issues a token to Subject A that is valid in Trust Domain B
3) Subject A attempts to access Resource B and presents the token from Broker B
4) Resource B validates Subject A’s token
Once authenticated, the Resource should then verify that the identity has authorized access to the requested resource, based on existing security policy.
Authentication – Brokered II (Cross-Domain/Federated)
- Clear separation of security boundaries.
- Resource B accepts identity information from external sources but “outsources” the actual authentication to Broker B.
- Trust between Broker B and Broker A is mediated by a third party (Bridge) which is set up before-hand and out-of-band.
1) Subject A presents a credential to Broker A. Broker A, prior to authenticating the claimed identity presented in the credential, checks the validity of the credential. This could include: (a) Is the credential issued from a source I trust? (b) Has the credential expired? (c) Has the credential been revoked? Once the validity of the credential is satisfied, the Broker authenticates the Subject by verifying that the Subject can prove association to the asserted identity in the credential. Once this is done, Subject A receives a token with proof-of-authentication --- Variation: Subject A has been issued credentials
2) Subject A attempts to access Resource B and presents the issued credentials (or the token from Broker A)
3) Resource B externalizes the validation of Subject A’s credential or token to Broker B
4) Broker B validates the credentials or token with the Bridge (path validation + revocation checking for PKI, or another mechanism with a Federation Operator)
Once authenticated, the Resource should then verify that the identity has authorized access to the requested resource, based on existing security policy.
As noted above, this is Authentication only. Comments are very welcome and would be appreciated.
UPDATE (10/16/2010): Updated post language based on comments and feedback from Russ Reopell
Sunday, September 12, 2010
My proposal of this session at IIW East was driven by the following context:
- We are moving into an environment where dynamic, contextual, policy driven mechanisms are needed to make real time access control decisions at the moment of need
- The input to these decisions are based on attributes/claims which reside in multiple authoritative sources
- The authoritative-ness/relevance of these attributes is based on the closeness of the relationship that the keeper/data-steward of the source has with the subject. I would highly recommend reading the Burton Group paper (FREE) by Bob Blakley on "A Relationship Layer for the Web . . . and for Enterprises, Too”, which provides very cogent and relevant reasoning as to why the authoritativeness of attributes is driven by the relationship between the subject and the attribute provider
- There are a set of attributes that the Government maintains throughout their lifecycle, on behalf of citizens, that have significant value in multiple transactions a citizen conducts. As such, is there a need for these attributes to be provided by the government for use, and is there a market that could build value on top of what the government can offer?
Some of the vocal folks at this session, in no particular order, included (my apologies to folks I may have missed):
- Dr. Peter Alterman, NIH
- Ian Glazer, Gartner
- Gerry Beuchelt, MITRE
- Nishant Kaushik, Oracle
- Laura Hunter, Microsoft
- Pamela Dingle, Ping Identity
- Mary Ruddy, Meristic
- Me, Citizen
We started out the session converging on (an aspect of) an Identity Oracle as something that provides an answer to a question but not an attribute. The classic example of this is someone who wishes to buy alcohol which is age restricted in the US. The question that can be asked of an Oracle would be "Is this person old enough to buy alcohol?" and the answer that comes back is "Yes/No" with the Oracle handling all of the heavy lifting on the backend regarding state laws that may differ, preservation of Personally Identifiable Information (PII) etc. Contrast this to an Attribute Provider to whom you would be asking "What is this person's Birthday?" and which releases PII info.
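As a purely illustrative sketch (the attribute names below are hypothetical and not drawn from any standard schema), the difference between the two interactions might look like this on the wire:

    <!-- Identity Oracle style answer: a claim is returned, not the underlying PII
         (attribute name is hypothetical, for illustration only) -->
    <saml:Attribute xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                    Name="urn:example:claims:oldEnoughToPurchaseAlcohol">
      <saml:AttributeValue>true</saml:AttributeValue>
    </saml:Attribute>

    <!-- Attribute Provider style answer: the PII itself is released -->
    <saml:Attribute xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                    Name="urn:example:attributes:dateOfBirth">
      <saml:AttributeValue>1975-06-15</saml:AttributeValue>
    </saml:Attribute>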
It was noted that the Government (Federal/State/Local/Tribal) is authoritative for only a finite number of attributes such as Passport #, Citizenship, Driver's License, Social Security Number, etc., and that the issue at present is that an "Attribute Infrastructure" does not exist within the Government. The Federal ICAM Backend Attribute Exchange (BAE) is seen as a mechanism that will move the Government along on this path, but while there is clarity around the technical implementation, there are still outstanding governance issues that need to be resolved.
There was significant discussion about Attribute Quality, Assurance Levels and Authoritativeness. In my own mind, I split them up into Operational Issues and Governance Principles. In the Operational Issues arena, existing experiences with attribute providers have shown the challenges that exist around the quality of data and the service level agreements that need to be worked out and defined as part of a multi-party agreement rather than bi-lateral agreements. On the Governance Principles side, there are potentially two philosophies for how to deal with authoritativeness:
- A source is designated as authoritative or not and what needs to be resolved from the perspective of an attribute service is how to show the provenance of that data as coming from the authoritative source
- There are multiple sources of the same attribute and there needs to be the equivalent of a Level of Assurance that can be associated with each attribute
At this point, I am very much in camp (1), but as pointed out at the session, this does NOT preclude the existence of second party attribute services that add value on top of the services provided by the authoritative sources. An example of this is the desire of an organization to do due diligence checks on potential employees. As part of this process, they may find value in contracting the services of a service provider that aggregates attributes from multiple sources (some gov't provided and others not) and delivers them in an "Attribute Contract" that satisfies their business need. Contrast this to having to build the infrastructure, capabilities and business agreements with multiple attribute providers themselves. The second party provider may offer higher availability and a more targeted Attribute Contract, but with the caveat that some of the attributes that they provide may be 12-18 hours out-of-date, etc. Ultimately, it was noted that all decisions are local and that decisions about factors such as authoritativeness and freshness are driven by the policies of the organization.
In a lot of ways, this discussion moved away from the perspective of the Government as an Identity Oracle and focused on it more as an Attribute Provider. A path forward seemed to be more around encouraging an eco-system that leverages attribute providers (Gov't and others) to offer "Oracle Services", whether from the Cloud or not. As such, the Oracle on one end has a business relationship with the Government, which is the authoritative source of attributes (because of its close relationship with the citizen), and on the other end has a close contractual relationship with organizations, such as financial service institutions, that leverage its services. This, I think, makes the relationship one removed from what was originally envisioned by the term Identity Oracle. This was something that Nishant brought up after the session in a sidebar with Ian and myself. I hope that there is further conversation on this topic.
My take-away from this session was that there is value and a business need in the Government being an attribute provider, that technical infrastructure is being put into place that could enable this, and that while many issues regarding governance and quality of data still remain to be resolved, there is a marketplace and opportunity for Attribute Aggregators/Oracles that would like to participate in this emerging identity eco-system.
Raw notes from the session can be found here courtesy of Ian Glazer.
Thursday, August 12, 2010
There has been a great deal of excitement about the US Federal Government's ICAM initiative that provides for the development of Trust Frameworks, and providers of same, that has resulted in the emergence of identity providers who can issue credentials to citizens that can be used to gain access to Government websites/applications/relying parties. In all of the discussions surrounding these efforts, the focus has been on leveraging existing OpenID, Information Card or other types of credentials issued by commercial or educational organizations to access Government resources.
But, is that all we want from our Government?
In this blog posting, I am going to consciously side-step the concept of the Government as an Identity Provider. In the United States at least, much more thoughtful people than I have discussed, debated and argued about the feasibility of this and I do not believe that I can add much value here. The general consensus to date seems to be that the value proposition around the concept of a "National Identity Card" has many challenges to overcome before it is seen as something that is viable in the US. Whether this is true or not, I leave to others to ponder.
But what about the US Government vouching for the attributes/claims of a person that they are already managing with our implicit or explicit permission?
My last blog post "The Future of Identity Management is...Now" spoke to the pull-based future of identity management:
- ...
- "The input to these decisions are based on information about the subject, information about the resource, environmental/contextual information, and more, that are often expressed as attributes/claims.
- These attributes/claims can reside in multiple authoritative sources where the authoritative-ness/relevance may be based on the closeness of a relationship that the keeper/data-steward of the source has with the subject."
- ...
There are certainly attributes/claims for which the US Government has the closest of relationships with its citizens and residents and as such remains the authoritative source:
- Citizenship - State Department
- Address Information - Postal Service
- Eligibility to Work in the US - Department of Homeland Security
- Eligibility to Drive - State Government DMVs
- More...
I may be wrong about which agency is responsible for what, but I hope you see my point. There are some fundamental attributes about a person that, in the US, are managed through their life-cycle by the Government, whether Federal or State.
I firmly believe, as someone who has been involved in demonstrating the feasibility of pull based identity architectures for delivering the right information to the right person at the moment of need using current commercial technologies and standards, that we have reached a point where the maturity of approaches and technologies such as the Federal ICAM Backend Attribute Exchange or the Identity Meta-system, combined with the willingness of the Government to engage with the public in the area of identity, makes it time to have a discussion about this topic.
The questions are definitely NOT technical in nature but are more around need and interest, feasibility and value with a heavy infusion of privacy. Some initial questions to start the conversation rolling would be:
- What are a core set of attributes that can serve as a starting point for discussion?
- Who would find value in utilizing them? How is it any better than what they have in place right now?
- What are the privacy implications of specific attributes? How can they be mitigated (e.g. ask if this person is old enough to buy alcohol vs. what is your birthday/age)?
- Liability in case of mistakes
- How would the Government recoup some of the costs? We pay for passport renewals, we pay for driver's license renewals; don't expect this to come for free
- Much, much more....
I would be curious to find out if there is any interest in this topic and, if so, what your reactions are. If there is interest, and given that the next Internet Identity Workshop is for the first time going to be held on the East Coast (Washington DC) on September 9-10 with a focus on "Open Identity for Open Government", and given its un-conference nature, I was going to propose this as a topic of discussion.
UPDATE: Ian Glazer, Research Director for Identity and Privacy at Gartner has agreed to tag team with me on this topic at IIW in DC. Ian's research and interests sit at the very important intersection of Identity and Privacy, and I think he will bring that much needed perspective to this conversation.
He also thought that the topic should be more correctly termed "Government's role as an Oracle" rather than as an Attribute Provider, and since I agree, that will more than likely end up being the topic.
To see what is meant by an Identity Oracle and what it is NOT, read this and this blog post by Bob Blakley.
Tuesday, August 3, 2010
The Gartner/Burton Group conference has a very high signal to noise ratio and is one that I was fortunate to present at this year. I spoke in my role as the Technical Lead for DHS Science & Technology Directorate's Identity Management Testbed about how we are taking the Federal ICAM Backend Attribute Exchange Interface and Architecture Specification from Profile to Usage.
The biggest buzz in the Identity Management track, where I spent most of my time, was around the “pull” based architecture that Bob Blakley and the rest of the Burton crew have been writing and speaking about for a while as the future of Identity Management. The key take-aways for me on this topic are:
- We are moving to an era where dynamic, contextual, policy driven mechanisms are needed to make real time access control decisions at the moment of need.
- The policy driven nature of the decisions requires that the decision making capability be externalized from systems/applications/services and not be embedded within them, and that policy be treated as a first class citizen.
- The input to these decisions are based on information about the subject, information about the resource, environmental/contextual information, and more, that are often expressed as attributes/claims.
- These attributes/claims can reside in multiple authoritative sources where the authoritative-ness/relevance may be based on the closeness of a relationship that the keeper/data-steward of the source has with the subject.
- The relevant attributes are retrieved (“pulled”) from the variety of sources at the moment when a subject needs to access a system and are not pre-provisioned into the system.
- Standards! Standards! Standards! All of the moving parts here (finding/correlating attributes, movement of attributes across organizational boundaries, decision control mechanisms etc.) need to use standards based interfaces and technologies.
Potential implementation technologies proposed include virtual directories as mechanisms that can consolidate and correlate across multiple sources of attributes, standards such as LDAP(S), SAML and SPML as the plumbing, and External Authorization Managers (“XACMLoids”) as decision engines.
What was interesting and relevant to me is that the US Federal Government, via the ICAM effort, as well as the Homeland Security, Defense and other communities have embraced this viewpoint for a while, are putting into place the infrastructure to support it at scale, and have working implementations in use.
In particular, my presentation was about how we are working on an information sharing effort between two organizations who need to collaborate and share information in the event of a natural or man-made disaster, where there is no way we could pre-provision users since we won’t know who those users are until they try to access systems. Our end-to-end implementation architecture reflects pretty much everything noted in the Burton vision of the future. Relevant bits from the abstract:
The Backend Attribute Exchange (BAE) Interface and Architecture Specifications define capabilities that provide for both the real time exchange of user attributes across federated domains using SAML and for the batch exchange of user attributes using SPML.
The DHS Science & Technology (S&T) Directorate, in partnership with the DOD Defense Manpower Data Center (DMDC), profiled SAML v2.0 as part of an iterative proof of concept implementation. The lessons learned and the profiles were submitted to the Federal CIO Council’s Identity, Credentialing and Access Management (ICAM) Sub-Committee and are now part of the Federal Government's ICAM Roadmap as the standardized mechanism for Attribute Exchange across Government Agencies […]
This presentation will provide an overview of the BAE profiling effort, technical details regarding the choices made, vendor implementations, usage scenarios and discuss extensibility points that make this profile relevant to Commercial as well as Federal, State, Local and Tribal Government entities.
In our flow there is a clear separation of concerns between Authentication and Authorization and in the language of my community, the subject that is attempting to access the Relying Party application is an “Unanticipated User” i.e. a subject that is from outside that organization who has NOT been provisioned in the RP Application.
- There is an organizational access control policy that is externalized from the application via the Externalized Authorization Manager (EAM) and that is dynamic in nature (“Allow access to user if user is from organization X, has attributes Y and Z and the current environment status is Green”).
- The subject is identified as being from outside the organization, is authenticated and an account is created in the system. The subject has no roles, rights or privileges within the system.
- The EAM pulls the attributes that are needed from external (to organization) sources to execute the access control policy and based on a permit decision grants access to resources that are allowed by policy.
All of this, BTW, is taking place using existing standards such as SAML and XACML and technologies such as Virtual Directories, XML Security Gateways, Externalized Access Management solutions etc. This works now using existing technology and standards and gets us away from the often proprietary, connector-driven, provisioning-dependent architectures and moves us to something that works very well in a federated world.
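As a hedged sketch of what the EAM's decision request might look like, the XACML 2.0 request context below expresses the kind of policy inputs described above. The attribute identifiers and values with an "urn:example" prefix are hypothetical; only the resource-id and action-id identifiers are standard XACML ones.

    <!-- Hypothetical XACML 2.0 request context for the policy described above;
         urn:example attribute IDs and all values are illustrative only. -->
    <Request xmlns="urn:oasis:names:tc:xacml:2.0:context:schema:os">
      <Subject>
        <Attribute AttributeId="urn:example:subject:organization"
                   DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>Organization-X</AttributeValue>
        </Attribute>
        <Attribute AttributeId="urn:example:subject:role"
                   DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>emergency-responder</AttributeValue>
        </Attribute>
      </Subject>
      <Resource>
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
                   DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>https://rp.example/incident-reports</AttributeValue>
        </Attribute>
      </Resource>
      <Action>
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
                   DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>read</AttributeValue>
        </Attribute>
      </Action>
      <Environment>
        <Attribute AttributeId="urn:example:environment:status"
                   DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>Green</AttributeValue>
        </Attribute>
      </Environment>
    </Request>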
To us this is not the future of Identity Management. This is Now!
Saturday, April 17, 2010
I had the opportunity earlier in the week to attend the 9th Symposium on Identity and Trust on the Internet (IDtrust 2010) which was held at NIST.
Given that a lot of the work that I am currently doing is centered around externalized, policy driven Authorization using Attribute Based Access Control (ABAC) and the profiling and deployment of Enterprise Attribute Services, I found a paper [PDF] and presentation [PDF] given by Ivonne Thomas from the Hasso-Plattner-Institut for IT-Systems Engineering to be very interesting.
As an aside, one of the best explanations on conveying what ABAC is all about, particularly to business owners, was given by a colleague who works for the DOD in this particular domain (Thanks Ken B).
“Consider if you will, the following two situations.
You are standing in line at the Grocery store and a little old lady in a walker comes up to you and demands your driver’s license and proof-of-insurance! You will be making a particular decision at that time. Now, consider if the same question was asked of you with red and blue lights blinking behind you and someone with a badge and a gun is knocking on your windshield asking for the same information.
We make these types of decisions all the time in our lives based on a real time evaluation of who is asking the question, what they want access to, and the context in which the question is being asked. ABAC is how we could do the same thing in the electronic world. Making a real-time access control decision based on attributes of the subject, the attributes of the resource and the attributes of the environment/context.”
I love this explanation and have shamelessly stolen and used it to great effect in multiple situations.
Coming back to the paper: given that attributes are used to make these critical access control decisions, how does one judge the “trust-worthiness” and/or “authoritative-ness” of each attribute that is used to make the decision? How could one convey these qualities to a Relying Party so that it can make a nuanced access control decision?
On the authentication front, we have an existing body of work that can be leveraged such as the OMB E-Authentication Guidance M-04-04 [PDF] which defines the four Levels of Assurance (LOA) for the US Federal Government and the attendant NIST SP 800-63 [PDF] that defines the technologies that can be used to meet the requirements of M-04-04. In particular, you have the ability to use SAML Authentication Context to convey the LOA statements in conformance with an identity assurance framework.
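As a quick illustration of the authentication side, a SAML 2.0 authentication statement can carry the LOA as an authentication context class reference. The class reference URI below is a placeholder of my own and not an official ICAM or NIST value.

    <!-- Hypothetical authentication context conveying a Level of Assurance;
         the class reference URI is a placeholder, not an official value. -->
    <saml:AuthnStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                         AuthnInstant="2010-04-17T12:00:00Z">
      <saml:AuthnContext>
        <saml:AuthnContextClassRef>urn:example:assurance:loa2</saml:AuthnContextClassRef>
      </saml:AuthnContext>
    </saml:AuthnStatement>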
The paper, which I think has a misleading title, uses the Authentication Context approach as an example and defines an extension to the SAML 2.0 schema for what the authors term an “Attribute Context”, which can be applied to each attribute value. The authors define the parts as:
- Attribute Context: This data element holds the attribute context, which comprises all additional information beyond the attribute value itself. This element is the upper container for all identity metadata.
- Attribute Data Source: This data element indicates the source from which the attribute value was originally received and is part of the Attribute Context. This can be, for example, another identity provider, an authority such as a certificate authority, or the user himself who entered the data.
- Verification Context: This data element holds the verification context, which comprises all information related to the verification of an identity attribute value. The Verification Context is one specific context within the Attribute Context.
- Verification Status: This data element indicates the verification status of an identity attribute value, which should be one of “verified”, “not verified” or “unknown”. The verification status is part of the verification context.
- Verification Context Declaration: The verification context declaration holds the verification process details. Such a detail could, for example, be the method that was used for verifying the correctness of the attribute. Further extensions are possible and should be added here. The verification context declaration, together with the verification status, makes up the verification context.
I know of many folks who are working on the policy side of this question of how to judge the “authoritative-ness” of an attribute, under topics such as “Attribute Assurance”, “Attribute Practice Statements”, “Authority Services” etc. But I have often thought about how one would go about conveying these types of assertions using current technology. This approach seems to provide an elegant way of doing just that:
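Since the figure from the original post is not reproduced here, the snippet below is my own minimal sketch of what such an extended attribute statement might look like. The ac: namespace and the placement of the extension elements follow the authors' descriptions rather than a published schema, so treat it as illustrative only.

    <!-- Illustrative sketch only: the ac: extension elements follow the authors'
         described structure, not a published schema; all values are made up. -->
    <saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                             xmlns:ac="urn:example:attribute-context">
      <saml:Attribute Name="dateOfBirth">
        <saml:AttributeValue>1975-06-15</saml:AttributeValue>
        <ac:AttributeContext>
          <ac:AttributeDataSource>urn:example:idp:dmv</ac:AttributeDataSource>
          <ac:VerificationContext>
            <ac:VerificationStatus>verified</ac:VerificationStatus>
            <ac:VerificationContextDeclaration>urn:example:verification:in-person-document-check</ac:VerificationContextDeclaration>
          </ac:VerificationContext>
        </ac:AttributeContext>
      </saml:Attribute>
    </saml:AttributeStatement>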
As you can see in the above example, the extensions proposed by the authors integrate nicely into a standard saml:AttributeStatement and convey the metadata about individual attributes to a Relying Party that can make a more nuanced access control decision.
I think this is a great beginning and would love to see the authors submit this to the OASIS Security Services (SAML) TC so that it can become part and parcel of the SAML 2.0 specification. I would also love to see a Profile come out of the OASIS SSTC that would define a consistent set of Verification Context Declarations. In particular I believe that the concept of referencing “Governing Agreements” as defined in the current “SAML 2.0 Identity Assurance Profile, Version 1.0” (which is in public review) has applicability to this work as well.
Saturday, March 13, 2010
At a meeting yesterday, Judy Spencer, co-chair of the Federal CIO Council ICAMSC, briefed that NIST had recently re-released Special Publication 800-73 [PDF] to account for PIV-I Card issuance. These would be Smart Cards that can be issued by Non-Federal Issuers and can potentially be trusted by US Government Relying Parties.
The relevant bits are in Section 3.3 of NIST SP 800-73-3 (Quoting below so that I can easily reference them in the future):
3.3 Inclusion of Universally Unique IDentifiers (UUIDs)
As defined in [10], the presence of a Universally Unique IDentifier (UUID) conformant to the specification [11] is required in each identification card issued by Non-Federal Issuers, referred to as “PIV Interoperable” (PIV-I) or “PIV Compatible” (PIV-C) cards. The intent of [10] is to enable issuers to issue cards that are technically interoperable with Federal PIV Card readers and applications, and that may be trusted for particular purposes through a decision of the relying Federal Department or Agency. Because the goal is interoperability of PIV-I and PIV-C with the Federal PIV System, the technical requirements for the inclusion of the UUID document are specified in this document. To include a UUID identifier on a PIV-I, PIV-C, or PIV Card, a credential issuer shall meet the following specifications for all relevant data objects present on an issued identification card.
- If the card is a PIV-I or PIV-C card, the FASC-N in the CHUID shall have Agency Code equal to 9999, System Code equal to 9999, and Credential Number equal to 999999, indicating that a UUID is the primary credential identifier. In this case, the FASC-N shall be omitted from the certificates and CMS-signed data objects. If the card is a PIV Card, the FASC-N in the CHUID shall be populated as described in Section 3.1.2, and the FASC-N shall be included in authentication certificates and CMS-signed data objects as required by FIPS 201.
- The value of the GUID data element of the CHUID data object shall be a 16-byte binary representation of a valid UUID[11]. The UUID should be version 1, 4, or 5, as specified in [11], Section 4.1.3.
- The same 16-byte binary representation of the UUID value shall be present as the value of an entryUUID attribute, as defined in [12], in any CMS-signed data object that is required to contain a pivFASC-N attribute on a PIV Card, i.e., in the fingerprint template and facial image data objects, if present.
- The string representation of the same UUID value shall be present in the PIV Authentication Certificate and the Card Authentication Certificate, if present, in the subjectAltName extension encoded as a URI, as specified by [11], Section 3.
The option specified in this section supports the use of UUIDs by Non-Federal Issuers. It also allows, but does not require, the use of UUIDs as optional data elements on PIV Cards. PIV Cards must meet all requirements in FIPS 201 whether or not the UUID identifier option is used; in particular, the FASC-N identifier must be present in all PIV data objects as specified by FIPS 201 and its normative references. PIV Cards that include UUIDs must include the UUIDs in all data objects described in (2) through (4).
At the IDManagement.gov site, you can also find a list of Credential Service Providers, cross-certified with the US Federal Bridge CA at Medium Hardware LOA (i.e. Meets the requirement that FIPS 140 Level 2 validated cryptographic modules are used for cryptographic operations as well as for the protection of trusted public keys), who have the ability to issue PIV-I Credentials.
Sunday, February 21, 2010
To be conformant to SPML v2 means that the SPML interface (Provisioning Service Provider / PSP) MUST:
- Support the set of Core operations
- a discovery operation {listTargets} on the provider
- basic operations {add, lookup, modify, delete} that apply to objects on a target
- Supports basic operations for every schema entity that a target supports
- Supports modal mechanisms for asynchronous operations
There are additional “Standard” operations described in the OASIS SPML v2 Specification [Zip]. The clear thing to keep in mind is that each operation adds a data management burden onto the provider, so the choice of whether or not to implement them should be considered very carefully.
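For orientation, here is a rough sketch of what a core SPML v2 add operation looks like on the wire. The target identifier and the payload schema are hypothetical, and element and attribute names should be checked against the OASIS SPML v2 schema; a real target advertises its supported schema via the listTargets operation.

    <!-- Hypothetical SPML v2 addRequest; the target ID and the payload schema
         are illustrative only and would normally be discovered via listTargets. -->
    <spml:addRequest xmlns:spml="urn:oasis:names:tc:SPML:2:0"
                     requestID="req-001" executionMode="synchronous"
                     targetID="central-repository">
      <spml:data>
        <user xmlns="urn:example:users">
          <uid>jdoe</uid>
          <organization>Org-A</organization>
        </user>
      </spml:data>
    </spml:addRequest>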
From the perspective of deployment topologies, the PSP could be deployed separately from the Target or could very well be integrated tightly with the Target e.g. an SPML compliant web service interface on a target system.
One of the frustrating items for me when enquiring about SPML support in products has been the lack of clarity and visibility around exactly what has been implemented. All too often, vendors seem to have cherry-picked a subset of operations (whether from the Core or from the Standard list) and used that to claim SPML support. I would be very curious to see if anyone can claim full SPML v2 compliance.
A particular use case for SPML that I am currently working on has to deal with the “batch” movement of attributes from multiple systems to a central repository. The typical flow is as follows:
- Per organizational policy & relationship to user, attributes are assigned in their home organization and/or business unit (Org A / Org B / …)
- Org A must move those users and/or their attributes to a central repository (Repository X) on a regular basis
- Repository X acts as the authoritative source of attributes of users from multiple organizations / business units and can provide those attributes to authenticated and authorized entities in both real-time request/response and synch-take-offline-use modes.
Some points to keep in mind are:
- Org A / B / … may have, and all too often do have, their own existing identity and provisioning systems as well as associated governance processes in place.
- The organizations and the repository may or may not be under the same sphere of control, so the use of the same piece of provisioning software and associated connectors on both ends of the divide cannot be mandated.
- The systems where the organizations store the attributes of their users may not necessarily be directory based systems.
- The Repository may or may not be a directory based system.
- Identity / Trust / Security are, as you may imagine, rather important in these types of transactions.
To meet these needs, we are currently profiling SPML to support the Core SPML operations as well as the optional “BATCH” capability. The “ASYNC” capability is something that we are more than likely going to support as well, as it provides a mechanism for the provider to advertise support for asynchronous operations rather than have a request for an asynchronous operation fail on a requester with an error “status=’failed’” and “error=’unsupportedExecutionMode’”.
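A rough sketch of how the BATCH capability might be exercised for the bulk movement described above is shown below. The namespaces, attribute names and payload are my own illustration and should be checked against the OASIS SPML v2 batch capability schema.

    <!-- Illustrative SPML v2 batch request wrapping two core add operations;
         namespaces, attributes and payload schema are assumptions, not normative. -->
    <batch:batchRequest xmlns:batch="urn:oasis:names:tc:SPML:2:0:batch"
                        xmlns:spml="urn:oasis:names:tc:SPML:2:0"
                        requestID="batch-001" executionMode="asynchronous"
                        processing="sequential" onError="resume">
      <spml:addRequest targetID="central-repository">
        <spml:data>
          <user xmlns="urn:example:users"><uid>jdoe</uid><organization>Org-A</organization></user>
        </spml:data>
      </spml:addRequest>
      <spml:addRequest targetID="central-repository">
        <spml:data>
          <user xmlns="urn:example:users"><uid>asmith</uid><organization>Org-A</organization></user>
        </spml:data>
      </spml:addRequest>
    </batch:batchRequest>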
Keep in mind that the end result will satisfy more than just the one use case that I noted above. In fact, it satisfies many other use cases that we have that deal with both LACS and PACS scenarios. In addition, the profile will also bring in the pieces that are noted as out of scope in the SPML standard, i.e. the profiling of the security protocols that are used to assure the integrity, confidentiality and trust of these exchanges. Fortunately, we can leverage some of the previous work we have done in this space for that aspect.
Saturday, February 13, 2010
Mark Diodati at the Burton Group kicked off this conversation in his blog post "SPML Is On Life Support..." Other folks, notably Nishant Kaushik ("SPML Under the Spotlight Again?"), Ingrid Melve ("Provisioning, will SPML emerge?") and Jeff Bohren ("Whither SPML or wither SPML?"), bring additional perspectives to this conversation. There is also some chatter in the Twitter-verse around this topic.
As someone who has been involved in both the standards process as well as end user implementation, I have a semi-jaded perspective to offer on what it takes for vendors to implement interfaces that are standards based in their tooling/products. First of all, let it be clearly understood that Standards are beautiful things (and there are many of them) but a Standard without vendor tooling support is nothing more than shelf-ware. So in the case of Standards Based Provisioning, in order to get that tooling support, multiple things need to happen:
- First and foremost, do NOT let a vendor drive your architecture! User organizations need to break out of the "vicious cycle" that exists by first realizing that there are choices beyond the proprietary connectors that are being peddled by vendors, and secondly by stepping up and defining provisioning architectures in a manner that prioritizes open interfaces, minimizes custom connectors and promotes diversity of vendor choice. Map vendor technology into your architecture and not the other way around, because if you start from what a vendor's product gives you, you will always be limited by that vendor's vision, choices and motivations.
- Bring your use cases and pain points to the Standards development process and invest the time and effort (Yes, this is often painful and time consuming!) to incorporate your needs into the base standard itself. I am finding that often the Technical Committees in Standards Organizations are proposed and driven by vendors and not end users. But in cases where there is a good balance between end users and vendors, the Standard reflects the needs of real people (The Security Services/SAML TC at OASIS often comes to mind as a good example).
- Organizations need to incorporate the need for open standards into their product acquisition process. This needs to go beyond "Product X will support SPML" to explicit use cases as to which portions of the standard are important and relevant. Prototype what you need and be prepared to ask tough, detailed questions and ask for conformance tests against a profile of the Standard.
- Be prepared to actively work with vendors who treat you like an intelligent, strategic partner and are willing to invest their time in understanding your business needs and motivations. These are the folks who see the strategic value and business opportunities in supporting open interfaces and standards, realize they can turn and burn quicker than the competition, and compete on how fast they can innovate and on customer satisfaction versus depending on product lock-in. They are out there, and it is incumbent upon organizations to drive the conversation with those folks.
Moving on, let me reiterate the comments that I made on Mark's blog posting:
"The concern with exposing LDAP/AD across organizational boundaries is real and may not be resolved at the technology level. Applying an existing cross-cutting security infrastructure to a SOAP binding (to SPML) is a proven and understood mechanism which is more acceptable to risk averse organizations.
I would also add two additional points:
- More support for the XSD portion of SPML vs. DSML in vendor tooling. There are a LOT of authoritative sources of information that are simply NOT directories.
- There needs to be the analog of SAML metadata in the SPML world (or a profile of SAML metadata that can be used with SPML) to bootstrap the discovery of capabilities. The "listTargets" operation is simply not enough."
While I do resonate with the "pull" model interfaces noted by Mark in his posting, I do believe that exposing LDAP(S)/AD interfaces, either directly or via Virtual Directories, outside organizational boundaries is a non-starter for many organizations.
At the same time, I believe there exist options in the current state of technology to provide a hybrid approach that can incorporate both the pull model and the application of cross-cutting security infrastructure. The architecture that we are currently using incorporates a combination of Virtual/Meta Directory capabilities and an XML Security Gateway to provide policy enforcement (security and more) when exposed to the outside.
I will also reiterate that there needs to be more support for the XSD portion of SPML vs. DSML. A lot of the authoritative sources of user information that I am dealing with are simply not found in directory services but in other sources such as relational databases, custom web services and sometimes proprietary formats in addition to LDAP/AD.
I hope to post some of the use cases for standards based provisioning, as well as the details of some of the profiling that we are doing on SPML to satisfy those use cases, in future blog posts. Looking forward to further conversations around this topic.
Thursday, January 14, 2010
I am currently conducting some testing involving SAML Attribute Authorities. We are currently using an OpenSSL based CA to issue the digital signature and encryption certificates used to provide message level security for all entities involved. In particular, this allows us to encrypt the response to an attribute query using the public key of the requester such that only the requester is able to decrypt the response using their corresponding private key.
Today I got a call from one of the vendors who have been testing against us, noting that they were having issues decrypting the response. They did some troubleshooting on their end and noted that the stateOrProvinceName part of the certificate's Subject DN that they were seeing on their end was NOT matching what was coming across in my response.
Specifically, on their end the stateOrProvinceName part of the certificate DN looked like "S=MD", while what I was returning was "ST=MD".
In the course of troubleshooting the problem on my end, I checked out the openssl.cfg file for the CA. It was set up correctly.
I then looked at the public certificate that was generated (pem format). It too had State in the format that I was expecting (ST=MD).
I also pulled up the certificate in a Java tool called Portecle, which allows you to work with keystores etc. Same result.
The interesting thing about this particular vendor is that their product is built on top of Windows, .NET and WCF. So their usage of digital signatures and encryption/decryption functionality leverages Microsoft technologies and the .NET platform. So I changed the pem extension on the public certificate to cer and opened it on my Windows desktop.
And there it was... "S=MD"
From what I understand, per RFC 2256 which is Normative for LDAP, the correct field name for "STATE" is st which "... contains the full name of a state or province (stateOrProvinceName)".
If I can extrapolate what I am seeing here, it would appear that both the UI in Windows and the programmatic access to this information is via the same API, which is telling them S=MD.
I did a bit of searching on the web, and this issue seems to have come up in some cases regarding Windows 2003 Certificate Services, so I don't think I am alone here. I also believe that I have done everything right in how the key-pair itself is generated.
But I am at a bit of a standstill as I am not sure what guidance to provide the vendor. Some questions that I have are:
- Is this a known problem, or is this being caused by some oversight/mistake on my part (would not be the first time)?
- Is there any particular function/API call that someone working in .NET/WCF could use such that they are working with the actual Subject Name?
I am hoping that people out there have encountered this issue, identified where the problem is, and come up with some sort of a solution. Any pointers would be deeply appreciated.
Friday, August 14, 2009
I had a great time at Burton Group's Catalyst Conference this year. I spent my time between the Identity Management, SOA and Cloud sessions, and also had an opportunity to attend the Cloud Security & Identity SIG session.
As the fast-thinking, slow talking, and always insightful Chris Haddad notes on the Burton APS Blog (Chris... enjoyed the lunch and the conversation) "Existing Cloud Computing's momentum is predominantly focused on hardware optimization (IaaS) or delivery of entire applications (SaaS)".
But the message that I often hear from Cloud vendors is:
- We want to be an extension of your Enterprise
- We have deep expertise in certain competencies that are not core to your business, and as such you should let us integrate what we bring to the table into your Enterprise
... and variations on this theme.
But in order to do this, an Enterprise needs to have a deep understanding of its own core competencies, have clearly articulated its capabilities into distinct offerings, and gone through some sort of a rationalization process for its existing application portfolio. In effect, it needs to have done a very good job of Service Orient-ing itself!
But we are also hearing at the same time that SOA has lost its bright and shiny appeal and that most SOA efforts, with rare exceptions, have not been successful. For the record, success in SOA to me is not about building out a web services infrastructure, but about getting true value and clear and measurable ROI out of the effort.
So to me, it would appear that without an organization getting Service Orientation right, any serious attempt they make on the cloud computing end will end up as nothing more than an attempt at building a castle on quicksand.
The other point that I noted was that while there were discussions around Identity and Security of Cloud offerings (they still need to mature a whole lot more, but the discussion was at least there), there was little to no discussion around visibility and manageability of cloud offerings. A point that I brought up in questions and in conversations on this topic was that, while people's appetite for risk varies, one of the ways to evaluate and potentially mitigate risk is to provide more real time visibility into cloud offerings. If a cloud vendor's offerings are to be tightly integrated into an Enterprise, and I now have a clear dependency on them, I would very much want to have a clear awareness of how the cloud offerings were behaving.
From a technical perspective, what I was proposing was something very similar in concept to the monitoring (and not management) piece of what WS-Management & WSDM brought to the table on the WS-* front. In effect, a standardized interface that all cloud vendors agree to implement that provides health and monitoring visibility to the organizations that utilize their services. In short, I do not want to get an after-the-fact report on your status sent to me by e-mail or pulled up on a web site; I want real time visibility into your services that my NOC can monitor. There was a response from some vendors that they have this interface internally for their own monitoring. My response back to them is to expose it to your customers, and work within the cloud community to standardize it such that the same interface exists as I move from vendor to vendor.
Saturday, June 20, 2009
As part of the BAE profiling and reference implementation, we have a full test & validation suite. Our desire has always been to make the barrier to entry for anyone using the test suites to be the minimum it needs to be. As such we focused on creating our test suites using open source tooling so that we could provide a test suite project that an implementer could import into their open source testing tool, point it at their BAE implementation, run it, and get immediate feedback on whether or not their implementation was conformant to the profile.
To that end, we have been using the popular and free soapUI testing tool. Unfortunately, we are running into some limitations in the tool's support for SAML 2.0. It would appear that the current soapUI implementation is using OpenSAML 1.1 and not the current OpenSAML 2.0, which supports SAML v2. In particular, this means that the following functionality related to the testing of SAML AttributeRequest/Response is not supported:
- Ability to digitally sign and validate attribute requests and responses using the enveloped signature method
- Ability to utilize the <saml:EncryptedID> as a means of carrying the encrypted name identifier
- Ability to decrypt the <saml:EncryptedAssertion> element sent by the Attribute Authority, which contains the encrypted contents of an assertion (a structural sketch of where these elements sit in the exchange follows this list)
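For readers less familiar with these elements, the sketch below shows roughly where <saml:EncryptedID> and <saml:EncryptedAssertion> appear in a query/response pair. The endpoints are hypothetical and the xenc content is abbreviated placeholder data, not a working example.

    <!-- Structural sketch only: hypothetical endpoints, placeholder ciphertext. -->
    <samlp:AttributeQuery xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                          xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                          xmlns:xenc="http://www.w3.org/2001/04/xmlenc#"
                          ID="_q1" Version="2.0" IssueInstant="2009-06-20T12:00:00Z">
      <saml:Issuer>https://relying-party.example</saml:Issuer>
      <saml:Subject>
        <!-- Encrypted name identifier in place of a cleartext NameID -->
        <saml:EncryptedID>
          <xenc:EncryptedData>
            <xenc:CipherData><xenc:CipherValue>base64-ciphertext-goes-here</xenc:CipherValue></xenc:CipherData>
          </xenc:EncryptedData>
        </saml:EncryptedID>
      </saml:Subject>
    </samlp:AttributeQuery>

    <samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                    xmlns:xenc="http://www.w3.org/2001/04/xmlenc#"
                    ID="_r1" Version="2.0" InResponseTo="_q1"
                    IssueInstant="2009-06-20T12:00:01Z">
      <saml:Issuer>https://attribute-authority.example</saml:Issuer>
      <samlp:Status>
        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
      </samlp:Status>
      <!-- Assertion contents encrypted for the requester -->
      <saml:EncryptedAssertion>
        <xenc:EncryptedData>
          <xenc:CipherData><xenc:CipherValue>base64-ciphertext-goes-here</xenc:CipherValue></xenc:CipherData>
        </xenc:EncryptedData>
      </saml:EncryptedAssertion>
    </samlp:Response>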
This has required us to go through some gyrations in how we are implementing the test suites, which is making the user experience not as smooth as we would like.
Ideally we would love to continue using soapUI going forward, but we are also on the lookout for other open source tooling that we could utilize for our testing. Suggestions and recommendations from folks who have experienced this issue and have found a resolution would be very much appreciated.
Saturday, June 6, 2009
FIPS 201 defines a US Government-wide interoperable identification credential for controlling physical access to federal facilities and logical access to federal information systems. The FIPS 201 credential, known as the Personal Identity Verification (PIV) Card, supports PIV Cardholder authentication using information securely stored on the PIV Card. Some PIV Cardholder information is available on-card through PIV Card external physical topology (i.e., card surface) and PIV Card internal data storage (e.g. Magnetic stripe, integrated circuit chip).
Other PIV Cardholder information is available off-card. Examples of off-card information, say in the First Responder & Emergency Response domain, could be certifications that could be presented by a Doctor or EMT that could verify their claims and allow physical and/or logical access to resources.
Accordingly, the federal government requires a standard mechanism for Relying Parties to obtain PIV Cardholder information (User Attributes), which are available off-card, directly from the authoritative source (Attribute Authority). The authoritative source is the PIV Card Issuing Agency, which is the agency that issued the PIV Card to the PIV Cardholder. The exchange of these User Attributes between backend systems is known as “Backend Attribute Exchange” (BAE). The architectural vision for the BAE can be found at IDManagement.gov (Direct link to "Backend Attribute Exchange Architecture and Interface Specification" - PDF).
I, and members of my team, have been part of a joint DHS and DOD team that have been working on a proof of concept implementation of the BAE in order to validate the approach, gain valuable implementation experience, and to provide feedback to the relevant governance organizations within the US Federal Government. The results of our work are three-fold:
- A SAML2 Profile of the BAE, with both normative and informative sections, that provides concrete implementation guidance, lessons learned, as well as recommendations for folks seeking to support this profile
- Reference implementations stood up within the T&E environments of both DHS and DOD for interoperability testing
- Test suites that can be used by implementers to verify compliance with the profile
I am happy to report that the profile is currently at v1.0 (DRAFT) status, under external review, and that we are scheduled to give a briefing on the work to a sub-committee of the Federal CIO Council later this month. In addition, we have our reference implementations up and running and are putting the finishing touches on the Test Suites.
As someone who has participated and is participating in industry standards efforts, I am fully aware that one of the critical items for a standard to become successful is the incorporation of the standard into vendor tooling. Some of the choices that we made, beyond satisfying the needed functionality, were intended to make it as easy as possible to build in profile support by:
- Not reinventing the wheel; Leverage the conventions and standards established by some of the fine work that has been done to date by the OASIS Security Services (SAML) TC on Attribute Query Profiles
- Keep the delta's as small as possible between the BAE Profile and existing profiles such as the X.509 Attribute Sharing Profile (XASP)
- Provide LOTS of informative guidance
- Striking a balance between making sure that the profile was generic enough to be widely used and deployable, but provided enough information in the message flow for implementers to get full value.
The last item was something that we found to be critical and sometimes contentious to balance. But, we would not be where we are right now, had we not been informed by our actual proof-of-concept implementations. A pure paper effort would have left too many holes to patch.
We have also made an active effort to reach out to vendors, especially in the federation, entitlement management and XML security arenas, and have been gratified by their response in committing to support this profile in their tooling (In some cases, folks already have beta support baked in!). We are fully expecting to highlight and point out those folks during our out-brief later this month. If you are a vendor, want to find out what it takes to support this profile, and are interested in receiving a copy of the v1.0 DRAFT, please feel free to ping me at anil dot john at jhuapl dot edu.
This has been a pretty extensive, exciting and detailed effort and we are very grateful for the senior level support from both Organizations for this effort. Beyond that, it has been a blast working with some very smart people from both DHS and DOD to make this real.
Saturday, December 13, 2008
In another context, I was recently asked:
Since you first posted your article about interoperability, did you find out "Who among you actually implement this interoperable interface specification in your current shipping product?"
Thought I'd share the relevant bits of the answer that I gave:
"... before I answer your question, please know that the focus for my environment was/is on building out a policy driven infrastructure for a web services environment. And the motivations for going down that path include:
- Consistent enforcement of policy (and in this particular case, access control policy) across multiple services
- Minimize the number of interception points in the message path
- Take the burden/complexity/headache of security policy out of the hands of the development teams who are deploying the services and move it into the infrastructure
As such, what I was particularly interested in having happen is for the XML Security Gateways in my environment to act as a XACML PEP to a remote XACML PDP. So to answer your question, you need to look at both the PEP and the PDP side:
On the PEP side, the answer is “It depends” on how flexible the product is. *Most* of the gateways provide you some mechanism for making an external “call-out” as part of the decision making process. i.e. An incoming request comes in to the PEP; the PEP intercepts it, does some basic threat and malicious content scanning, authenticates the user/entity, then formulates an AuthZ request, sends it out in an “external call-out” to a PDP, and acts on the decision when it is returned. The ease with which you can do this, and the ability to customize that call-out, depends on the particular product. You basically have, on one extreme, the need to engage the consulting services of the vendor to customize that call, and on the other, the ability to do it yourself using nothing more than message templates. So in short, you can bend the metal to make the PEPs generate a XACMLAuthzDecisionQuery, and I am aware that at least a couple of the vendors in this space have it on the roadmap to be a native XACML PEP, but I am unsure of exactly what they mean by that term.
On the PDP side, what I will say is that silence as an answer to the question is an answer in itself...
Pretty much all of the PDP vendors have some sort of a web service interface to their “Authorization Service”. To date my experience working with multiple products (both on the PEP and the PDP side) has been that you simply cannot point a PEP to PDPs implemented by multiple vendors and expect it to work without custom “franken-code” on either/both the PEP and PDP ends (Even though this was exactly the point of the Catalyst Demo that I noted in my blog entry).
These days my response to the vendor claim of “Oh, Sure we do that!” is a request for a pointer to the WSDL and the XSDs of the Authorization/Entitlement Service in their current shipping product, to prove that they indeed do it. For some reason, the conversation seems to just die out at that point … <shrug>"
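To make the PEP "external call-out" flow described above concrete, here is a rough sketch of the pipeline in Python. It is not tied to any particular gateway product; the PDP endpoint URL and the helper functions are hypothetical placeholders, and the message formats are deliberately simplified.
import urllib.request

PDP_ENDPOINT = "https://pdp.example.org/authz"  # hypothetical PDP service URL

def passes_content_scan(message: bytes) -> bool:
    # Placeholder for threat/malicious-content scanning (schema validation, size limits, etc.)
    return b"<" in message

def authenticate(message: bytes):
    # Placeholder: a real gateway would validate a WS-Security token, client cert, etc.
    return "cn=example-user"

def build_authz_decision_query(subject: str, message: bytes) -> bytes:
    # Placeholder: in a real deployment this would be a SAML-wrapped XACMLAuthzDecisionQuery
    return f"<AuthzRequest subject='{subject}'/>".encode()

def call_pdp(authz_query: bytes) -> str:
    # The external call-out: POST the authorization query to the remote PDP and read the decision
    req = urllib.request.Request(PDP_ENDPOINT, data=authz_query,
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req) as resp:
        return "Permit" if b"Permit" in resp.read() else "Deny"

def handle_incoming_request(message: bytes) -> str:
    # The PEP pipeline: scan, authenticate, ask the PDP, enforce the decision
    if not passes_content_scan(message):
        return "rejected: content scan"
    subject = authenticate(message)
    if subject is None:
        return "rejected: authentication"
    decision = call_pdp(build_authz_decision_query(subject, message))
    return "forwarded to service" if decision == "Permit" else "rejected: denied by PDP"
Invoking handle_incoming_request() obviously requires a reachable PDP; the point is the shape of the flow, not the wire format.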
As always, if my understanding is incomplete or incorrect, please feel free to leave a comment on this blog entry.
Sunday, September 28, 2008
What is the current state of interoperability between XACML PEPs and PDPs from different vendors? I am currently looking to see if there is a consistent implementation of PDP interfaces among the multiple Fine Grained Authorization/Entitlement vendors such that I can point a XACML PEP from one vendor to the XACML PDP of multiple vendors and not have to do custom integration to make it work.
Back in February 2007, Burton Group issued a challenge to the industry to demonstrate interoperability of XACML. Some of the questions they asked were "Can enterprises really mix and match policy administration points (PAPs), policy decision points (PDPs), and policy enforcement points (PEPs) from different vendors? Is the XACML RBAC Profile practical? Or will we find that different interpretations of the specification yield less than satisfactory levels of interoperability?"
The industry responded, via the OASIS XACML TC, in June 2007 by having the first XACML Interoperability Demo at the Burton Group Catalyst conference. There were two particular use cases in this demo, which required interoperability between vendor implementations of PEPs, PDPs and PAPs:
- Authorization Decision Request/Response
- Policy Exchange
I am particularly interested in the first scenario and in looking at the interop scenario document, it would appear that some specific choices were made in order to make this work:
- Implementation of the XACML Interface of the PDP as a SOAP Interface which accepts a XACMLAuthzDecisionQuery and returns a XACMLAuthzDecisionStatement which are contained in the SOAP body.
- Use of the SAML 2.0 Profile for XACML 2.0 which defines a Request/Response mechanism for carrying xacml-context:Request and xacml-context:Response elements.
In effect, what you ended up with in order to make this work is the implementation of a standardized SOAP interface that adhered to the following Request/Response (Taken from the interop scenario document):
[Sample SOAP SAML XACML request and response wrappers from the interop scenario document; the original samples are not reproduced here.]
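In place of the missing samples, here is a rough reconstruction (Python, standard library only) of the general shape of the request wrapper: a SOAP envelope whose body carries a XACMLAuthzDecisionQuery containing an xacml-context Request. The XACML-SAML protocol namespace below is my recollection of the SAML 2.0 profile of XACML 2.0 and should be verified against that profile's schema; the issuer, subject, resource and action values are placeholders.
import uuid
from datetime import datetime, timezone
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
XACML_SAMLP = "urn:oasis:xacml:2.0:saml:protocol:schema:os"   # verify against the profile schema
XACML_CTX = "urn:oasis:names:tc:xacml:2.0:context:schema:os"

def add_attr(parent, attribute_id, value):
    # Helper: one xacml-context Attribute with a single AttributeValue
    attr = ET.SubElement(parent, f"{{{XACML_CTX}}}Attribute", {
        "AttributeId": attribute_id,
        "DataType": "http://www.w3.org/2001/XMLSchema#string",
    })
    ET.SubElement(attr, f"{{{XACML_CTX}}}AttributeValue").text = value

def build_request_wrapper(issuer, subject_id, resource, action):
    envelope = ET.Element(f"{{{SOAP}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP}}}Body")
    # The query element defined by the SAML 2.0 profile of XACML 2.0, carried in the SOAP body
    query = ET.SubElement(body, f"{{{XACML_SAMLP}}}XACMLAuthzDecisionQuery", {
        "ID": "_" + uuid.uuid4().hex,
        "Version": "2.0",
        "IssueInstant": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    })
    ET.SubElement(query, f"{{{SAML}}}Issuer").text = issuer
    # The embedded xacml-context Request: who wants to do what to which resource
    request = ET.SubElement(query, f"{{{XACML_CTX}}}Request")
    subj = ET.SubElement(request, f"{{{XACML_CTX}}}Subject")
    add_attr(subj, "urn:oasis:names:tc:xacml:1.0:subject:subject-id", subject_id)
    res = ET.SubElement(request, f"{{{XACML_CTX}}}Resource")
    add_attr(res, "urn:oasis:names:tc:xacml:1.0:resource:resource-id", resource)
    act = ET.SubElement(request, f"{{{XACML_CTX}}}Action")
    add_attr(act, "urn:oasis:names:tc:xacml:1.0:action:action-id", action)
    return ET.tostring(envelope)

print(build_request_wrapper("https://pep.example.org", "alice@example.org",
                            "https://service.example.org/orders", "read").decode())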
I attended this event and remember coming away impressed at the results while simultaneously amused at some of the coding heroics of the vendors who, if I remember correctly, in some cases had very short time frames to work with.
It has now been more than a year since this interop event and at this point I have a very simple question for the vendors in this space "Who among you actually implement this interoperable interface specification in your current shipping product?"
What I see to date (and I am more than happy to be corrected on this point) is that while many vendors claim conformance and implementation of the XACML 2.0 standard, their PDP interfaces are still proprietary and unique. Oh, don't get me wrong, these interfaces may be implemented using web services etc. BUT each web service implementation is unique and special to that vendor and does not follow any consistent interface specification, and as such it is an integration exercise that is left up to implementers who have PEPs from multiple vendors, e.g. XML Security Gateways or software PEPs from multiple vendors which need to talk to a XACML PDP.
Sunday, September 21, 2008
In the physical world, when an attacker is preparing to assassinate someone or bomb a target, the first thing that they will do is determine how best to set up that attack. The phrase used to describe this initial set-up phase is 'pre-operational surveillance'.
Unfortunately, the default configuration of most web services allows a potential attacker to do the digital equivalent of pre-operational surveillance very easily. In the digital world, these types of threats are often classified under the category of 'Information Disclosure Threats'. There are two in particular (there are more) that I would like to call attention to:
- SOAP Fault Error Messages
- WSDL Scanning/Foot-Printing/Enumeration
1. SOAP Fault Error Messages
All too often, detailed fault messages can provide information about the web service or the back-end resources used by that web service. In fact, one of the favorite tactics of attackers is to try to deliberately cause an exception or fault in a web service in the hope that sensitive information such as connection strings, stack traces and other information may end up in the SOAP fault. Mark O'Neill has a recent blog entry 'SOAP Faults - Too much information' in which he points to a vulnerability assessment that his company did of a bank, which provided information that enabled an attacker to understand the infrastructure the bank was running and presumably allowed them to further tailor the attack.
The typical mitigation for this type of information disclosure is the implementation of the 'Exception Shielding Pattern' as noted in the Patterns & Practices Book 'Web Service Security' [Free PDF Version] which can be used to "Return only those exceptions to the client that have been sanitized or exceptions that are safe by design. Exceptions that are safe by design do not contain sensitive information in the exception message, and they do not contain a detailed stack trace, either of which might reveal sensitive information about the Web service's inner workings." (FULL DISCLOSURE: I was an external, unpaid, technical reviewer of this book).
You can either implement this pattern in software or use a hardware device like a XML Security Gateway to implement this pattern. Mark utilized a Vordel Security GW, but this is something that can be implemented by all devices in this category. I have direct experience with Layer 7 as well as Cisco/Reactivity Gateways and happen to know that they support this functionality and I don't doubt that IBM/DataPower and others in this space support it as well.
Note that this does not imply that the errors that happen are not caught or addressed, but simply that they are not propagated to an end-user.
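A minimal sketch of the exception shielding idea, independent of any particular SOAP stack: catch everything at the service boundary, log the details server-side, and return only a sanitized, generic fault to the caller. The fault format and handler below are illustrative, not taken from any real product or WSDL.
import logging
import uuid

logger = logging.getLogger("service")

GENERIC_FAULT = ("<soap:Fault><faultcode>soap:Server</faultcode>"
                 "<faultstring>An internal error occurred. Reference: {ref}</faultstring>"
                 "</soap:Fault>")

def shielded(handler):
    # Decorator applying the Exception Shielding pattern at the service boundary
    def wrapper(request):
        try:
            return handler(request)
        except Exception:
            ref = uuid.uuid4().hex  # correlation id so the ops team can find the real error
            # Full details (stack trace, connection strings, etc.) stay in the server-side log...
            logger.exception("request %s failed", ref)
            # ...while the caller only ever sees a sanitized, generic fault
            return GENERIC_FAULT.format(ref=ref)
    return wrapper

@shielded
def process_order(request):
    # Simulated failure whose message would leak sensitive details if left unshielded
    raise RuntimeError("DB connect failed: server=db01;user=sa;password=...")

print(process_order("<order/>"))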
2. WSDL Scanning/Foot-Printing/Enumeration
Appendix A of 'NIST 800-95: Guide to Secure Web Services' provides a listing of common attacks against web services, and you will note that there are many references to the information that can be found in a WSDL that can lend itself to a variety of attacks including Reconnaissance Attacks, WSDL Scanning, Schema Poisoning and more.
And in the 'Security Concepts, Challenges, and Design Considerations for Web Services Integration' article at the "Build Security In" web site sponsored by the DHS National Cyber Security Division, it notes that "An attacker may footprint a system’s data types and operations based on information stored in WSDL, since the WSDL may be published without a high degree of security. For example, in a world-readable registry, the method’s interface is exposed. WSDL is the interface to the web services. WSDL contains the message exchange pattern, types, values, methods, and parameters that are available to the service requester. An attacker may use this information to gain knowledge about the system and to craft attacks against the service directly and the system in general."
The type of information found in a WSDL, which can be obtained simply by appending ?WSDL to the end of a service endpoint URL, can be an extremely useful source of info for an attacker seeking to exploit a weakness in a service, and as such it should either not be provided at all or the automatic generation should simply be turned off.
There are multiple ways of mitigating this type of attack, which include turning off the automatic ?WSDL generation at the SOAP stack application level or configuring the intermediary that is protecting the service end-point. For example, most XML Security Gateways by default turn off the ability to query the ?WSDL on a service end-point.
I consider this to be a very good default.
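As an illustration of the intermediary-side mitigation, here is a small sketch of the kind of check a gateway or a simple reverse-proxy filter could apply to suppress ?WSDL requests. The paths and behavior are made up for the example and are not taken from any particular product.
from urllib.parse import urlparse

def allow_request(method: str, url: str) -> bool:
    # Reject attempts to pull service metadata (e.g. http://host/Service?WSDL or ?wsdl)
    parsed = urlparse(url)
    if parsed.query.strip().lower() == "wsdl":
        return False
    # Ordinary service invocations (POSTed SOAP messages) pass through
    return method.upper() == "POST"

assert allow_request("POST", "https://host.example.org/OrderService") is True
assert allow_request("GET", "https://host.example.org/OrderService?WSDL") is False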
When this option is implemented, there are often a variety of questions that come up that I would like to take a quick moment to address.
Q. If you turn off the automatic WSDL generation capabilities (i.e. ?WSDL) how are developers supposed to implement a client that invokes the web service?
There are two ways. (1) Publish the WSDL and the associated XML Schema and Policy files in an Enterprise Registry/Repository that has the appropriate Access Control Mechanisms on it so that a developer can obtain a copy of the WSDL/Schema/Policy documents at design time. (2) Provide the WSDL/Schema/Policy files out of band (e.g. Zip file, at a protected web site) to the developer.
Oh yes, there is always the run-time binding question that comes up here as well. What I will say is that run-time binding does not mean "run time proxy generation + dynamic UI code generation + glue code"; it simply means that the client side proxy and the associated UI and glue code are generated at design time, while the end-point that the client points to may be dynamically looked up from a UDDI compliant Registry. I've done this before and this does not require any run-time lookup of a web service's WSDL.
There is an additional benefit to this method as well. Have you ever gone through the process of defining a WSDL and Schema using best practices for web services interoperability, implemented a service using that WSDL and Schema, and then looked at the auto-generated WSDL? You may be surprised to find that the automatically generated WSDL is, in a majority of cases, not as clean or easy to follow, and in some cases may indeed be wrong. The best practice for developing interoperable web services recommends following a contract-first approach. This requires that the "contract", i.e. the WSDL and the Schema, be something that is developed with a great deal of care given to interoperability. Since the automatic generation of WSDL is platform-specific, there is always the possibility of some platform-specific artifacts ending up in the contract documents, which is not what you intended to happen.
Q. What about those existing/legacy services that do a run time lookup? Won't those break?
The questions that need to be asked at this point are: why are these services doing a run time lookup, is there value being added by this capability in this client, and are there alternatives that will enable the client to provide the same functionality without compromising security?
As an example take the case of a BEA Weblogic client. If you look at the documentation that BEA provides on building a Dynamic client you will note that they provide two different approaches, one that uses a dynamic WSDL lookup and another that does not. The interesting thing about this is that the approach that uses the WSDL makes a run-time lookup of a Web Service's WSDL, which will end up breaking if the ?WSDL functionality is turned off. But the alternative approach of building a dynamic client provides the same functionality without the run-time WSDL lookup.
From what I can see, there is no functional difference between the two approaches, and given that one of the things that you want to do when developing web services, or any software for that matter, is to minimize the number of external dependencies, I would choose the second option of NOT doing a run-time WSDL lookup in this particular case. What is regrettable in this case is that it appears that the default configuration in BEA's tooling is to use the run-time WSDL option (or so I have been informed), which leads to issues when folks who choose the default options in their tools develop the clients.
Mitigating these information disclosure threats requires both developers and operational support folks to understand their shared responsibility for security. Developers need to understand that security should be part of the software development lifecycle and is not something that is bolted on at the end or 'thrown over the wall' for someone else to take care of. Operational folks need to understand that a layered defense in depth strategy is needed and that secure coding practices on the part of developers are an essential component of any operational environment. In particular, the mentality of "Firewalls and SSL will save us all" needs to change for all parties concerned.
Saturday, September 13, 2008
Digital ID World 2008 is the first IdM conference that I've gone to as part of a team, and given the variety of breakout sessions we decided early on to use the divide and conquer approach based on our areas of interest and expertise.
The following are some highlights of some (not all) of the sessions that I attended and found to be interesting. As with a lot of conferences, there were some sessions that were pretty much disguised vendor pitches, which I am not even going to bother mentioning.
Keynote - Identity Assurance: A Backbone For The Identity Marketplace
by Peter Alterman - GSA, Andrew Nash - PayPal, Frank Villavicencio - Citigroup
In some ways this was a rehash of the panel on the same topic that was moderated by Mark Diodati at Burton Catalyst, but with the addition of Peter Alterman of the GSA, who tends to add a certain amount of ...ah... flair to the conversation.
The intent of the Liberty Identity Assurance Framework (IAF) is to develop a framework that leverages the existing work that has been done by EAP, tScheme, US e-Auth etc. to generate an identity assurance standard that is technology agnostic but provides a consistent way of defining identity credential policy and the process and policy rule set etc. The IAF consists of four parts: (1) Assurance Levels (2) Assessment Criteria (3) Accreditation and Certification Model and (4) Business Rules. You can find out more about it on the IAF Section of the Liberty Alliance Web Site.
What interested me about the entire conversation was the leveraging of OMB M-04-04 and NIST 800-63 to define the assurance criteria but the drive to make a "Liberty Alliance IAF Assurance Token" (if you will) that will be certified to mean the same thing across federations. Mr. Alterman also noted, and I hope that I interpreted this correctly, that the intent from the GSA side would be to not re-invent the wheel but to adopt this IAF framework going forward. He spoke of current inter-federation work he is involved in between NIH and the InCommon Federation that is leveraging this.
During the Q&A session, I brought up the fact that this work is directly focused on AuthN but in general, access to resources is granted based on a variety of factors, only one of which is the strength and assurance of the authentication token. The response is that the Liberty work is deliberately focusing on the AuthN and considers AuthZ to be out-of-scope for their work.
Keynote Presentation: State Of The Industry
by Jamie Lewis - Burton Group
Enterprise IdM is the set of business processes, and a supporting infrastructure, that provides identity-based access control to systems and resources in accordance with established policies.
- Business trends are driving integration across processes and folks are being asked to do more with less.
- SaaS is gaining momentum
Many failures in IdM projects caused by a lack of doing homework and a belief in the silver bullet product etc.
- People manage risk, not products.
- IdM is a means and not an end; It is about enabling capabilities and not an end in itself.
- The Identity Big Bang is around new ways of working, collaborating and communicating
- Make every project an installment on the Architecture and scope the goals to around 3 years.
- Always think about data linking and cleansing
That was the first half of the keynote, but the second half was something I found to be very fascinating and is based on work that Burton has been proposing around the idea of a "Relationship Layer for the Web"
- AuthN and AuthZ are necessary but not sufficient
- Centrism of any kind does NOT work
- Lessons from social science on trust, reciprocity, reputation etc.
- The future of identity is relationships
- Difference between close and distant relationships; Able to make many observations in a close relationship, so able to get good identity information. Not so for distant relationships
- A good relationship provides value to all parties. And it is not just about rights but also obligations
- Values like privacy etc. require awareness of relationship context
- Systems fail if they are not "relationship-aware"
- Difference between Custodial, Contextual and Transactional identities.
-- Custodial Identity is directly maintained by an org and a person has a direct relationship with the org.
-- Contextual identity is something you get from another party but there are rules associated with how that identity can be used.
-- Transactional identity is just the limited amount of info that an RP (?) gets to complete a transaction, e.g. the ability to buy alcohol requires a person to be over 18 (?), but in a transactional relationship you would simply ask the question "Is this person old enough to buy alcohol?" and the answer would come back as "Yes/No". Compare this to the question "What is this person's age or birthday?", which releases a lot more info.
- The last type of identity in effect requires the existence of what Burton calls an "Identity Oracle" (see Bob Blakley's blog entries) that has a primary and trusted relationship with a user as well as with the relying party and can stand behind (from a legal and liability perspective) the transactional identity statements that it makes.
I found this entire topic absolutely fascinating as this is so very relevant to a lot of the work that I do around information sharing across organizations that may or may not trust each other for a variety of (sometimes very valid) reasons. Will be actively tracking this area on an ongoing basis.
The Plot To Kill Identity
by Pamela Dingle - Nulli Secundus
I really enjoyed this session by Pamela on the disconnect that currently exists between the needs of the users, what is being asked of the application vendors and the lack of a common vocabulary to express our needs such that there is a change in the same old way of doing business.
- Need for an effort to be consistent all the way at the RFP/RFI time
- Need a common vocabulary when requesting capability from vendors
- Start with: Provide and Rely support i.e. the ability to choose whether or not a product relies on external identity services or provides its own.
- Pamela also had a great starting set of RFI type questions one can use.. I am hoping that she will post them on her blog.
One of the questions I brought up during the Q&A session was that if I bought in to the Kool-Aid of what she discussed during the presentation (and I do), what would it take to scale the conversation to a larger audience? Bob Blakley, who was also in the audience, chimed in and noted that if Pamela wrote up a white-paper on the topic, he would help her get it published and widely distributed as well.
I would also be very interested in expanding the scope of the sample RFI questions to be grouped by product/project category (and released under an open licence; Creative Commons?) so that folks like me can use them in our RFP/RFIs as well.
There were more sessions that I attended that were interesting such as the Concordia Workshop on "Bootstrapping Identity Protocols: A Look At Integrating OpenID, ID-WSF, WS-Trust And SAML", "Using An Identity Capable Platform To Enhance Cardspace Interactions" and more..
All in all, the hall-way conversations and the connections made turned out to be as valuable as (or even more so than) the sessions themselves. I know that I found and made connections with multiple folks who work in my community and am very much looking forward to future collaborations with them and others.
Sunday, September 7, 2008
I am off next week to Anaheim, with the rest of my team, to attend Digital ID World 2008. Very much looking forward to the event given its packed agenda as well as some already scheduled side-bar meetings.
This looks like it is going to be another one of my usual business trips that combines visiting some of the nicest/most-scenic cities on the North American continent with spending all the time indoors in window-less conference sessions, which in turn leaves you with absolutely no time for any sight-seeing.
Thursday, April 24, 2008
Federating identities across information and security domains is not just a technical problem, and anyone who tells/sells you that it is, is not operating in a frame of reality that is conducive to success!
Please note that, for me, an implementation of an Identity Federation architecture takes into account both Authentication and Authorization as well as a host of other areas. As such I've always found it amusing to be informed (usually by a vendor) that this is a straight forward problem and that once I deploy [Insert technology / tool / product / magic pixie dust of choice here], we will have you "federating in no time". Ha!
We have been wrestling with this and at one of our working meetings recently, one of my team-mates came up with the following representation to describe the challenges of reaching agreement on what information needs to flow across federation boundaries, and what needs to be in place to accomplish it. Based on the same principle as the Boy Scout's triangle (heat, oxygen, fuel), you take away one side, and the entire Attribute Triangle (or as we call it, "Tom's Triangle", in honor of our team-mate who came up with it) collapses.
When you look at it, it seems so obvious and simplistic, but we have found value in thinking about it in this manner. Organizational Policy determines the rules of the road. Those rules in turn are reflected in the choices of attributes and the agreements around their semantics. At the same time, you need to be assured that the agreed upon attributes are not things that you come up with out of the blue but are instead drawn from trusted and authoritative sources in the Enterprise.
Sunday, April 13, 2008
GSA's USA Services/Intergovernmental Solutions sponsors monthly workshops around topics such as emergency preparedness, environmental monitoring, healthcare and law enforcement.
The upcoming "Exploring Identity Management: Global Landscape and Implications for Stakeholder Engagement Around the National Response Framework" session is focused on the implications of the "National Response Framework [PDF]" to Identity Management.
National Response Framework (NRF) is a guide to how the Nation conducts all-hazards response. It is built upon scalable, flexible, and adaptable coordinating structures to align key roles and responsibilities across the Nation, linking all levels of government, nongovernmental organizations, and the private sector. It is intended to capture specific authorities and best practices for managing incidents that range from the serious but purely local, to large-scale terrorist attacks or catastrophic natural disasters.
I had the opportunity to speak with both Susan Turnbull at the GSA as well as Dr. Duane Caneva, Director of Medical Preparedness at the White House Homeland Security Council, who are putting this event together, and came away impressed with their obvious passion in addressing this critical issue.
Basically, this is all about the technical, social and organizational infrastructure that needs to be in place to respond to a Katrina-like or Tsunami-like event. Identity Management is seen as an enabler in bringing the right people, the right resources and the right information together to help make a difference in responding to a crisis of this magnitude.
I also came away with an action item to discuss with this community how some of the work that I am currently involved with could help out in this particular domain. The agenda looks pretty interesting and builds upon past events such as the IDTrust 2008 etc. Looking forward to this!
Friday, April 4, 2008
Just picked up the current issue of IEEE Security & Privacy Magazine and it is full of Identity Management Goodness!
Looking forward to this read!
Sunday, March 30, 2008
Some time ago, I was having a conversation with some folks about the usage of SAML Authentication Assertions for Web Browser Single Sign-On (SSO) versus Digital Certificates. The folks that I was having this conversation with support one of the larger PKI deployments in the US, and their response to my comment about the lack of support for SAML for Web Browser SSO in that particular vertical was the following question:
"Provided the experience to the user is the same, why does it matter?"
I didn't have a very good answer at that point in time but it is something that I've been mulling over since that time. The issue has come up again in separate conversations, including this one by Patrick Harding of Ping Identity and this posting by James McGovern. This blog posting is an attempt to articulate some of the points on both sides of this debate.
SAML 2.0 and Web Browser SSO
The Web Browser SSO Profile in SAML 2.0 supports both an Identity Provider (IdP) initiated and Service Provider (SP) initiated SSO message flows. As described in the SAML documentation ".. the most common scenario for starting a web SSO exchange is the SP-initiated web SSO model which begins with the user choosing a browser bookmark or clicking a link that takes them directly to an SP application resource they need to access. However, since the user is not logged in at the SP, before it allows access to the resource, the SP sends the user to an IdP to authenticate. The IdP builds an assertion representing the user's authentication at the IdP and then sends the user back to the SP with the assertion. The SP processes the assertion and determines whether to grant the user access to the resource.
In an IdP-initiated scenario, the user is visiting an IdP where they are already authenticated and they click on a link to a partner SP. The IdP builds an assertion representing the user's authentication state at the IdP and sends the user's browser over to the SP's assertion consumer service, which processes the assertion and creates a local security context for the user at the SP."
Some points to keep in mind regarding these two flows:
- The user's credentials are maintained at their IdP, which means that the SP must trust the IdP to assert information about its users. The establishment of this "organizational trust" is typically done out of band.
- The IdP can support multiple authentication mechanisms of varying strengths including user-id/password, software certificates and smart-cards based on a PKI, biometrics etc.
- The type and the strength of the authentication used by the user can be conveyed in a SAML authentication context which can be used in (or referred to from) a SAML Authentication Assertion. In fact an SP can include an authentication context in a request to an IdP to request that the user be authenticated using a specific set of authentication requirements, such as multi-factor authentication (see the sketch after this list).
- Do not conflate authentication with authorization! Although the user has been authenticated, the SP more than likely needs a LOT more information about the user (than what was provided in the Authentication Assertion) in order to make an access control decision. This typically requires the usage of SAML attribute statements and/or SAML authorization decision statements. And in any reasonably complex environment that wants to remain standards based, this more than likely involves the usage of XACML for defining access control criteria.
- SAML supports mechanisms to support the integrity and confidentiality of the assertions themselves including SSL mutual authentication, XML Signature etc. and does so across both the HTTP and SOAP bindings.
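As an example of the authentication context point above, here is a sketch (Python, standard library only) of an SP-initiated AuthnRequest that asks the IdP for a minimum strength of authentication via RequestedAuthnContext. The SP entity ID is a placeholder, and the particular authentication context class an IdP honors is deployment-specific.
import uuid
from datetime import datetime, timezone
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_request(sp_entity_id, authn_context_class, comparison="minimum"):
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": "_" + uuid.uuid4().hex,
        "Version": "2.0",
        "IssueInstant": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    })
    # The SP making the request
    ET.SubElement(req, f"{{{SAML}}}Issuer").text = sp_entity_id
    # Ask the IdP for at least this strength of authentication
    rac = ET.SubElement(req, f"{{{SAMLP}}}RequestedAuthnContext", {"Comparison": comparison})
    ET.SubElement(rac, f"{{{SAML}}}AuthnContextClassRef").text = authn_context_class
    return ET.tostring(req)

# e.g. request smart-card/PKI based authentication rather than a plain password
print(build_authn_request(
    "https://sp.example.org",
    "urn:oasis:names:tc:SAML:2.0:ac:classes:SmartcardPKI").decode())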
PKI and Web Browser SSO
This is pretty straight forward from the usage perspective. It begins with a user choosing a bookmark or clicking a link that takes them directly to a Relying Party (RP) i.e. application resource they need to access. The user is prompted to present a digital certificate as the authentication mechanism. The user's certificate, whether it is in the form of a soft certificate or coming from a smart card, is used to authenticate the user.
Some points to keep in mind:
- In a PKI environment, when a Certification Authority issues a certificate, it is making a statement to a RP that a particular public key is bound to a specific entity (i.e. the subject of the certificate).
- The degree to which a RP trusts a CA is based on the RP's understanding of the CA's user identification and credential issuance practices, operating policies, security controls etc.
- Depending on the identity issuance requirements of a CA, a digital certificate is usually considered a higher assurance authentication mechanism than something like a user-id and password.
- Each RP has to put into place the technical infrastructure needed to make it PKI-aware, i.e. the ability to use digital certificates as an authentication mechanism.
- Each RP has to put into place the mechanisms for both validation and revocation operations. This is especially challenging when you have CAs that are cross-certified, and both CAs and clients need to support certificate path processing.
- The user experience, in browsing from one PKI protected resource to another may not be seamless.
- The authorization aspect is indeed separate from the authentication. Information needed to make an access control decision may not be present in the information provided by a digital certificate.
What strikes me when I look at these two options is that the question posed at the start of this entry may not be the right one to ask. The question one should be asking instead is "Who do you trust?"
The fundamental precept of a PKI environment is that everyone must buy into trusting the CA. I would bet that a lot of the “entrenched PKI communities” have expended a significant amount of resources in standing up not just the technical infrastructure but also the credential proofing and issuance processes for their domain. As such, they implicitly trust a certificate vouched for by the CA. The downside to this is that every single RP must be PKI-enabled, which is non-trivial.
SAML is not a trust mechanism but more of a mechanism for a particular domain to make assertions about its users. As such, what is needed in the federated world for this to work is for a relying domain to trust the asserting domain. The relying domain would have to have confidence in the credential proofing and issuance process of the asserting domain. The advantages here would be that SAML-enabling an SP is a more straight forward process and there is significant out-of-the-box support for SAML in vendor tooling.
In each case, I consider authorization to be separate and distinct from the authentication.
But that organizational trust... Ah! Is it not remarkable that the truly hard problems, whether one is discussing Identity Management or Service Orientation, really do not have to do with technology but with people, culture and behavior?
Wednesday, March 19, 2008
Congratulations to the Shibboleth Team on the release of Shibboleth 2.0. This version provides support for SAML 2.0 as well as integration with most major identity stores, including Microsoft Active Directory, Kerberos, LDAP-compliant directory services, and JDBC-compliant databases.
For those not familiar with this fine piece of open source software, Shibboleth is a "... standards-based, open source middleware software which provides Web Single Sign-On (SSO) across or within organizational boundaries. It allows sites to make informed authorization decisions for individual access of protected online resources in a privacy-preserving manner."
Saturday, March 8, 2008
One of the things I have been doing a bit of work on has been Attribute Authorities in the SAML 2.0 sense i.e. a SAML entity that produces assertions in response to identity attribute queries from an entity acting as an attribute requester. In particular, my interest lies in controlling the release of attributes based on policies that could be externalized.
My interest in finding a hopefully standardized way of doing this was sparked by the article "Using XACML for Privacy Control in SAML-Based Identity Federations" by Wolfgang Hommel, which I found on the XACML section of the OASIS Cover Pages. The article describes the use of XACML to control the release of attributes and an implementation of this using an earlier release of Shibboleth.
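The core of what I am after can be expressed very simply. The sketch below is not XACML, just an illustration of the decision an externalized attribute release policy has to make for each requesting relying party; the entity IDs, attribute names and policy table are made up for the example.
# Externalized release policy: which attributes may be released to which relying party
RELEASE_POLICY = {
    "https://sp-alpha.example.org": {"givenName", "surname", "eduPersonAffiliation"},
    "https://sp-beta.example.org": {"eduPersonAffiliation"},
}

def filter_attributes(requesting_sp: str, attributes: dict) -> dict:
    # Default-deny: only attributes explicitly permitted for this SP are released
    allowed = RELEASE_POLICY.get(requesting_sp, set())
    return {name: value for name, value in attributes.items() if name in allowed}

user_attributes = {"givenName": "Jane", "surname": "Doe",
                   "eduPersonAffiliation": "staff", "dateOfBirth": "1970-01-01"}
print(filter_attributes("https://sp-beta.example.org", user_attributes))
# -> {'eduPersonAffiliation': 'staff'}; dateOfBirth is never released to either SP
The interesting part, of course, is expressing that policy table in a standard, portable form (which is exactly where XACML could come in) rather than in code.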
At the IDTrust 2008 Symposium, one of the sessions that I enjoyed was the panel session on "Federations Today and Tomorrow" hosted by Ken Klingenstein of Internet2 and Patrick Harding of Ping Identity. When Ken spoke a bit about the release of Shibboleth 2.0, which supports SAML 2.0, this brought me back full circle.
In an earlier conversation I had had on this topic with some colleagues who are working with release candidate versions of Shibboleth 2.0, I was curious to find out if the "Attribute Filter" capability in Shibboleth had made it into the SAML 2.0 standard given that Shibboleth 2.0 is the convergence of Shibboleth, SAML 1.X and the Liberty IDFF. Unfortunately, I was informed that it had not. The implementation is specific to the Shibboleth functionality and does not seem to exist as part of the SAML 2.0 specification.
So what I asked the panel was: given that both SAML 2.0 and XACML 2.0 come out of OASIS, is there any work on integrating the two standards to enable this type of functionality? There is definitely a use case for this in a lot of the communities that I am familiar with. The answer I got back was that, while this is not outside the realm of possibility, it is not something that someone is working on. In a hallway conversation I had with some other folks after the session, someone mentioned that this type of functionality may be built into one of the Oracle products, but again the implementation was proprietary to that vendor.
Sunday, January 27, 2008
I recently got an e-mail asking about a blog entry I had made back in 2005 regarding Identity Federation, SAML and WS-Federation, and if I had attained any measure of clarity regarding their usage since that time. Since this is something I have been spending a bit of time on recently, it seemed like the perfect opportunity to talk about this.
When I wrote the original blog post, I phrased it as a competitive situation. I have since moved on from that position and consider them to be complementary approaches, each with strengths in certain areas. An Enterprise that is looking at an IdM implementation should be seeking to leverage the strengths that each camp brings to the table.
As of November, 2007, SAML 2.0 is an OASIS standard and can be considered a combination of the features of SAML 1.X, Liberty Alliance Identity Federation Framework and Shibboleth.
The other camp one should be looking at in the area of Identity Federation is the OASIS Web Services Secure Exchange (WS-SX) family of standards which include WS-Trust, WS-SecureConversation and WS-SecurityPolicy.
The area of contention between the two camps all too often arises when it comes to the domain of browser-based Single Sign-On. This has historically been the playground of SAML but now the new kid on the block is WS-Federation which describes how to use WS-Trust for browser-based Single Sign On scenarios. WS-Federation is currently going through the standardization process at OASIS.
My perspective on both starts with the basic fact that SAML assertions are universal and can be used independently of the SAML protocol. I also very much like the capability that is provided by the WS-Trust based Security Token Service (STS) which provides the ability to translate token formats.
SAML has a great deal of traction in the browser based federation arena while WS-SX targets securing web services. Given that SAML assertions are supported by a wide variety of IdM and SOA infrastructure products such as WAM products, WSM products, XML Security Gateways and more, my approach to dealing with interoperability concerns in this area (until the vendor camps work this out) will be to use products and technologies that bridge the gap by supporting both camps.
Saturday, November 10, 2007
Note to self for use as a reference...
SAML assertions have no dependencies on and can be used independently of the SAML Protocol. SAML 2.0 defines three types of assertion statements:
- Authentication:- The assertion subject was authenticated by a particular means at a particular time.
- Authorization Decision:- A request to allow the assertion subject to access the specified resource has been granted or denied.
- Attribute:- The assertion subject is associated with the supplied attributes.
<Issuer> (Required):- The SAML authority that is making the claim(s) in the assertion.
<Signature> (Optional):- An XML Signature that protects the integrity of and authenticates the issuer of the assertion.
<Subject> (Optional):- The subject of the statement(s) in the assertion.
<Conditions> (Optional):- Conditions that MUST be evaluated when assessing the validity of and/or when using the assertion.
<Advice> (Optional):- Additional information related to the assertion that assists processing in certain situations but which MAY be ignored by applications that do not understand the advice or do not wish to make use of it.
Zero or more of the following statement elements:
- <Statement>
- <AuthnStatement>:- An authentication statement.
- <AuthzDecisionStatement>:- An authorization decision statement.
- <AttributeStatement>:- An attribute statement.
An assertion with no statements MUST contain a <Subject> element. Such an assertion identifies a principal in a manner which can be referenced or confirmed using SAML methods, but asserts no further information associated with that principal.
Otherwise <Subject>, if present, identifies the subject of all of the statements in the assertion. If <Subject> is omitted, then the statements in the assertion apply to a subject or subjects identified in an application- or profile-specific manner. SAML itself defines no such statements, and an assertion without a subject has no defined meaning in this specification.
<Version> (Required):- Version of the assertion. "2.0" for SAML 2.0.
<ID> (Required):- The identifier for this assertion.
<IssueInstant> (Required):- The time instant in UTC.
SAML 2.0 Core Spec [PDF], OASIS Security Services (SAML) TC
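To tie the note together, here is a minimal sketch (Python, standard library only) that assembles an assertion with the required ID, Version and IssueInstant attributes, an Issuer, a Subject and a single AttributeStatement. The issuer, subject and attribute values are placeholders, and a real assertion would normally also carry Conditions and a Signature.
import uuid
from datetime import datetime, timezone
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_assertion(issuer, subject_id, attributes):
    assertion = ET.Element(f"{{{SAML}}}Assertion", {
        "ID": "_" + uuid.uuid4().hex,   # ID (Required)
        "Version": "2.0",               # Version (Required): "2.0" for SAML 2.0
        "IssueInstant": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),  # (Required)
    })
    # The SAML authority making the claims
    ET.SubElement(assertion, f"{{{SAML}}}Issuer").text = issuer
    # The subject of the statements in the assertion
    subject = ET.SubElement(assertion, f"{{{SAML}}}Subject")
    ET.SubElement(subject, f"{{{SAML}}}NameID").text = subject_id
    # One attribute statement: the subject is associated with the supplied attributes
    statement = ET.SubElement(assertion, f"{{{SAML}}}AttributeStatement")
    for name, value in attributes.items():
        attr = ET.SubElement(statement, f"{{{SAML}}}Attribute", {"Name": name})
        ET.SubElement(attr, f"{{{SAML}}}AttributeValue").text = value
    return ET.tostring(assertion)

print(build_assertion("https://idp.example.org", "jane.doe@example.org",
                      {"mail": "jane.doe@example.org"}).decode())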
Thursday, November 1, 2007
Cisco announced today that it is planning on acquiring Securent, which is considered a leading vendor in the area of Entitlement/Fine Grained Authorization with deep support for XACML.
My initial thought after hearing about this was that it would be a rather interesting and complementary match to their earlier acquisition of Reactivity, the XML Security Gateway vendor. Would it not be sweet if the Reactivity (now the ACE XML) Security Gateway could act as a Policy Enforcement Point for the Securent-supplied Policy Decision Point and, even better, could now natively understand XACML?
Then I read this blog entry by Phil Schacter of the Burton Group and learned that the acquisition is actually driven by Cisco's Collaboration Software Group, which is looking to "... enhance its ability to provide some shared access control infrastructure for its growing portfolio of collaboration and unified communication offerings..". As noted in Phil's blog entry this acquisition is NOT driven by Cisco's Security Technology Group which I presume has ownership of the NAC, VPN, network authentication product lines (which I assume is where the ACE XML Gateway product line lives). As he notes, it will be hard for Cisco to not focus exclusively on the business priorities of the Collaboration Software Group.
I personally would consider that to be a bad thing from the perspective of both the current Securent customers as well as folks from Enterprises that are interested in Entitlement Management/Fine Grained Authorization. Will be watching to see how this plays out.
Sunday, October 28, 2007
There is a new blog posting by James McGovern, "How Industry Analysts weaken Enterprise Security", that seems to take industry analysts to task for not asking enterprise application vendors if they "...implement this security specification or any security specification..." in their product. The example specification that is used is XACML.
It is an interesting question but seems to be designed more to get a rise out of people rather than addressing the ground truth, which is that the responsibility lies not with the Analysts but with the Architects and Engineers who are evaluating potential products for their Enterprise.
So, to my mind, the more appropriate questions would be:
- Are the customers of the various analyst firms being provided the appropriate and independent information such that they can ask the right questions of the vendors? Which is the role of the Analysts.
- Are the Enterprises actually holding the vendors accountable by NOT spending money with vendors that do not implement open standards? Which is the role of the Business and the Enterprise Architect.
Tuesday, August 14, 2007
Interesting report [1] published by the Information Assurance Technology Analysis Center (IATAC) and the Data and Analysis Center for Software (DACS) on the current state of Software Security Assurance.
"The [report] provides an overview of the current state of the environment in which software must operate and surveys current and emerging activities and organizations involved in promoting various aspects of software security assurance. The report also describes the variety of techniques and technologies in use in government, industry, and academia for specifying, acquiring, producing, assessing, and deploying software that can, with a justifiable degree of confidence, be said to be secure. The report also presents observations about noteworthy trends in software security assurance as a discipline."
[1] http://iac.dtic.mil/iatac/download/security.pdf
Sunday, July 15, 2007
Message replay is a very real attack vector for web services. The description of the defense against it is pretty straight forward. To quote the "Message Replay Detection Pattern":
Message replay detection requires that individual messages can be uniquely identified. [...] Cache an identifier for incoming messages, [...] identify and reject messages that match an entry in the replay detection cache.
You have a couple of choices in how you can leverage the relevant web services standards to implement this.
- Given that a WS-Security XML Signature must include a <ds:SignatureValue> element, you can use that as a unique identifier for the messages. The <ds:SignatureValue> is computed from the hash values of the message parts that are being signed (and you can make sure that the parts include both the message body and the WS-Security timestamp, i.e. <wsu:Timestamp>).
- You can use another unique identifier, e.g. the WS-Addressing <wsa:MessageID>, which is a URI value that uniquely identifies the message that carries it. Combine it with the WS-Security timestamp, i.e. <wsu:Timestamp>, and you are good to go here as well.
In both cases, what you need in order for this to work is a configurable replay cache. And that is also relatively straight-forward to implement, especially if you are doing this in software.
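For the software case, here is a minimal sketch of such a replay detection cache: remember each message identifier (e.g. the <ds:SignatureValue>, or the <wsa:MessageID> plus timestamp combination) for a configurable freshness window and reject anything already seen. This is illustrative only and deliberately ignores the clustering concern discussed next.
import time

class ReplayCache:
    """Reject messages whose identifier has already been seen within the freshness window."""

    def __init__(self, window_seconds: int = 300):
        self.window = window_seconds
        self._seen = {}   # message identifier -> time first seen

    def check_and_store(self, message_id: str) -> bool:
        now = time.time()
        # Evict identifiers older than the window; anything outside it should fail the
        # timestamp freshness check anyway and need not be cached
        self._seen = {mid: ts for mid, ts in self._seen.items() if now - ts < self.window}
        if message_id in self._seen:
            return False          # replay detected
        self._seen[message_id] = now
        return True               # first time we have seen this message

cache = ReplayCache(window_seconds=300)
assert cache.check_and_store("ds:SignatureValue=MC0CFF...") is True
assert cache.check_and_store("ds:SignatureValue=MC0CFF...") is False   # replayed message rejected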
The proverbial devil and the details come into play when you want to abstract this type of capability into the infrastructure. I for one am very interested in NOT doing this in software (Perf, consistent implementation across SOAP stacks etc.), so a natural place that I would seek to implement this type of capability would be in my perimeter PEP which typically, at least in my environment, is implemented using a XML Security Gateway.
When it comes to deployment, you do not want a single point of failure in your infrastructure, so what you typically do for deploying perimeter PEP's (XML Security Gateways) is to deploy a cluster (at least two) with a (clustered) load balancer in front of the Gateways. Now, if you are going completely stateless what typically happens in this configuration is that requests are dynamically spread across your cluster of XML Gateways. And that is where the details come into play. If each Gateway implements its own replay cache, and requests are spread out across multiple Gateways, how can you truly be assured that you can detect a replay attack?
The way that you would handle this, if you were implementing this in software, would be to have a shared replay cache (typically a shared database) across multiple load balanced app servers (Note that there are perf implications with this). But this really does not work with XML Gateways. These are hardened, FIPS compliant devices that typically cannot be tied into a database back-end. So, what possible solutions could you have for this:
- Configure the load balancer for Active-Passive rather than an Active-Active configuration. i.e. Instead of distributing traffic across the Gateways equally, send all traffic to one Gateway and only if that one fails should the traffic be sent to another cluster member.
- Actually modify the Gateway configuration to be able to connect to a database back-end.
- Others?
At present, I am thinking that option (1) is probably the most realistic one. I am not aware of any XML Security Gateway vendors who have implemented anything like option (2). But I am most certainly curious as to how folks are doing this in their environment, so if you are doing this, I would appreciate any info you can share on your implementation details and what you find to be the trade-offs.
Sunday, February 25, 2007
Dominick Baier, now with thinktecture (Congrats Christian!), has a good article in the current issue of MSDN Magazine on the usage of certificates in .NET 2.0. The article covers:
- The Windows Certificate Store
- Certificate classes in .NET
- Validation, SSL, Web services, and code signing
- Signing and encrypting data
It is a good read for those who need to work with certs on the .NET Platform. Check it out!
Thursday, December 28, 2006
Right after posting my last blog entry on Threats to Message Exchanges in a SOA, I came across a blog entry by Gunnar Peterson of Cigital that points to a paper that he co-authored with Howard Lipson at CERT on "Security Concepts, Challenges, and Design Considerations for Web Services Integration" in which they describe "... best practices for development staff who want to actually build security services into the software they are developing. The paper is really two papers in one - the first part is on web services and their impact on security concepts, the second part deals with message level security (WS-Security, WS-Trust, WS-SecureConversation) to enable end to end security model for an integrated system, and the last part is on design considerations for security in Web Services."
I have not had a chance to peruse this in detail, but this definitely looks like a must read document!
Monday, November 20, 2006
A new bit of security goodness from the MS ACE Security Services Team. From the docs:
"Cross-site scripting (XSS) attacks exploit vulnerabilities in Web-based applications that fail to properly validate and/or encode input that is embedded in response data. Malicious users can then inject client-side script into response data causing the unsuspecting user's browser to execute the script code. The script code will appear to have originated from a trusted-site and may be able to bypass browser protection mechanisms such as security zones.
These attacks are platform and browser independent, and can allow malicious users to perform malicious actions such as gaining unauthorized access to client data like cookies or hijacking sessions entirely.
Simple steps that developers can take to prevent XSS attacks in their ASP.NET applications include (see How To: Prevent Cross-Site Scripting in ASP.NET in the patterns & practices series for more detail):
- Validating and constraining input
- Encoding output
For defense in depth, developers may wish to use the Microsoft Anti-Cross Site Scripting Library to encode output. This library differs from most encoding libraries in that it uses the "principle of inclusions" technique to provide protection against XSS attacks. This approach works by first defining a valid or allowable set of characters, and encodes anything outside this set (invalid characters or potential attacks). The principle of inclusions approach provides a high degree of protection against XSS attacks and is suitable for Web applications with high security requirements."
Check it out @ http://msdn2.microsoft.com/en-us/security/aa973814.aspx
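To illustrate the "principle of inclusions" approach described above (this is not the implementation of the Microsoft library, just the idea), here is a tiny sketch: define the set of characters considered safe and encode everything else before it is written into HTML output.
import string

SAFE_CHARS = set(string.ascii_letters + string.digits + " ")  # the allowable set; everything else gets encoded

def encode_for_html(value: str) -> str:
    # Encode any character outside the whitelist as an HTML numeric character reference
    return "".join(ch if ch in SAFE_CHARS else f"&#{ord(ch)};" for ch in value)

# A script injection attempt comes out inert
print(encode_for_html("<script>alert('xss')</script>"))
# -> &#60;script&#62;alert&#40;&#39;xss&#39;&#41;&#60;&#47;script&#62;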
Wednesday, August 30, 2006
This came across on one of the lists that I am on. One of the best books on Security, “Security Engineering” by Ross Anderson is now available for free on the web!
Here is the chapter index:
1. What is Security Engineering?
2. Protocols
3. Passwords
4. Access Control
5. Cryptography
6. Distributed Systems
7. Multilevel Security
8. Multilateral Security
9. Banking and Bookkeeping
10. Monitoring Systems
11. Nuclear Command and Control
12. Security Printing and Seals
13. Biometrics
14. Physical Tamper Resistance
15. Emission Security
16. Electronic and Information Warfare
17. Telecom System Security
18. Network Attack and Defense
19. Protecting E-Commerce Systems
20. Copyright and Privacy Protection
21. E-Policy
22. Management Issues
23. System Evaluation and Assurance
24. Conclusions
25. Bibliography
Check it out!
Saturday, August 26, 2006
I am trying to troubleshoot an installation of L2TP VPN on my SBS2K3 network. I had set this up to work at one point, but that was before I upgraded to ISA 2004 and the expiry of my laptop computer cert.
The issue is around the inability to request a computer certificate for a client computer. I keep getting the error “The certificate request failed. The RPC server is unavailable.” whenever I attempt to request a Computer Cert for the laptop. I am also seeing a “DCOM was unable to communicate with the computer XYZ using any of the configured protocols” in my System Event Viewer. Did some web searching and came across this entry which seemed to be an exact duplicate of my problem. But the solution does not seem to work for me. Just for the record, I am logged into the domain with domain admin creds, disabled the Wi-Fi connectivity, and am connected using a hard line to the internal network. Has anyone come across this problem and resolved it?
Is there a documented method for setting up L2TP VPNs on a SBS2K3 network? I was using the excellent instructions in the “Windows Small Business Server 2003 Administrator’s Companion” by Charlie Russel, Sharon Crawford and Jason Gerend, but the relevant section of the book became outdated when ISA 2004 was released for SBS2K3.
Saturday, April 22, 2006
Someone on one of the OASIS lists asked for a “SAML Elevator Pitch”. Eve Maler [Sun] pointed to her “SAML in a technical nutshell” [PDF] slide deck. Good read!
SAML in a technical nutshell:
- XML-based framework for marshaling security and identity information and exchanging it across domain boundaries
- Wraps existing security technologies rather than inventing new ones
- Its profiles offer interop for a variety of use cases, but you can extend and profile it further
- At SAML's core: assertions about subjects
- Assertions contain statements: authentication, attribute, entitlement, or roll-your-own
Key use cases covered by SAML out-of-the-box:
- Single sign-on
- Using standard browsers
- Using enhanced HTTP clients (such as hand-held devices) that know how to interact with IdPs but are not SOAP-aware
- Identity federation
- Using a well-known name or attribute
- For anonymous users by means of attributes
- Using a privacy-preserving pseudonym
- Attribute services
- Getting attributes that can be interpreted according to several common attribute/directory technologies
- Single logout
Sunday, March 26, 2006
Have you ever wanted to pay a friend for lunch or settle a coffee bill? And wanted to do it while you were out and about directly from your Mobile phone? Then check out the new PayPal mobile service.
Security is an important consideration for something like this and they do a good job of making sure that Identity and Authentication remain distinct. For a good read, check out Steve Riley’s article on TechNet on this topic.
To reprise some elements of the above article, Identity is the answer to the question “Who are you?” that you present to the system that you wish to access. The interesting thing about Identity is that it is a claim that you make about yourself using something public like an ATM card, a User ID or, in this particular case, your cell phone and the corresponding cell phone number. Authentication is the answer to the question “Can you prove you are you?”. Common mechanisms for doing this are passwords, PINs etc. In short, this is a secret known to you and the system (the system either knows it or can verify that the secret is authentic). In this particular case, when you send the request to make a payment from your cell phone, the PayPal IVR system calls you back on your mobile phone number and you have to prove that you are indeed you by putting in a shared secret that only you and PayPal know about. Your possession of the secret verifies that you are you and the payment proceeds. Very nice!
[Now playing: The Mummers' Dance - Book of Secrets]
Sunday, December 18, 2005
J.D. Meier is the Program Manager on the patterns & practices team who is focused on helping customers incorporate security and performance into their life-cycle. In his own words “Who wants an insecure app that scales ... or a "secure" app that won't?”
He has a great deal of information to share and now that he has a weblog, he has a place to share it as well. Check out his latest entries:
I especially love his entry on Security Approaches that Don’t work.
Saturday, November 19, 2005
“Federation refers to the establishment of some or all of business agreements, cryptographic trust, and user identifiers or attributes across security and policy domains to enable more seamless cross-domain business interactions. As web services promise to enable integration between business partners through loose coupling at the application and messaging layer, federation does so at the identity management layer - insulating each domain from the details of the others' authentication and authorization infrastructure.” — SAML Executive Overview [PDF]
This is a big deal to any distributed enterprise that needs to manage Identity and provide Single Sign On. Security Assertion Markup Language (SAML) 1.1, which is an OASIS Standard, has been an accepted mechanism for accomplishing this. SAML is extensively leveraged within the enterprise that I work in, so this is of particular interest to me. SAML 2.0 is the next generation of this technology that is going through the OASIS standardization process and is backed by folks like the Liberty Alliance among others. I recently read in an Infoworld article that Microsoft will not be supporting SAML 2.0, but will instead back the WS-Federation protocols. WS-Federation is an effort that is being backed by companies such as IBM, Microsoft, BEA Systems, RSA Security, and VeriSign.
I am unsure of what this means as of yet, so I need to do some further research into both efforts. Here are some links to various sources of information on both efforts so that we can understand, hopefully, what the technical approach each effort is taking, and the impact if one chooses one approach versus the other.
Wednesday, October 12, 2005
J.D. Meier, the guy who was responsible for driving PAG’s books on “Improving Web Application Security” and “Perf & Scale” has some great blog postings that cover various aspects of Security Engineering. These are must read items if you wish to bake in security as part of your Software Development Life-cycle.
In particular check out:
This material is so very relevant to some work that I am currently doing and J.D. has been kind enough to allow me to review some of this material and reference it. [I am not a big believer in re-inventing the wheel].
Heck, just subscribe to the man’s blog! He puts out some very, very good info!
Saturday, October 8, 2005
I am currently in the process of reviewing some material for the PAG on Security Engineering. In short, how to bake security practices into the development life-cycle. Keith has a blog entry about this as well, so I won’t repeat it here. What I will say is that this is very approachable and readable material that is targeted at the developers who work in the trenches. I am very much looking forward to this. A good jumping off point for the Security Goodness that the PAG is putting out is http://msdn.com/securityguidance
During the MVP Summit, the developer security MVPs such as myself, had a chance to spend some time with Michael Howard, the author of “Writing Secure Code”. He is currently working on a book on the Secure Development Life-cycle which I am very much looking forward to reading as well.
Tuesday, September 6, 2005
Foundstone has released a white-paper based on a bug that they discovered that “…..describes the limitations of the FormsAuthentication.SignOut method and provides more information about how to ease cookie replay attacks when a forms authentication cookie may have been obtained by a malicious user. The paper introduces methods that web developers can employ to reduce cookie replay attacks in the ASP.NET applications. Some of these methods include:
- Use SSL by configuring the Web application in Microsoft Internet Information Services. This ensures the forms authentication feature will never issue a cookie over a non-SSL connection.
- Enforce TTL and use absolute expiration instead of sliding expiration.
- Use HttpOnly cookies to ensure that cookies cannot be accessed through client script, reducing the chances of replay attacks.
- Use the membership class in ASP.NET 2.0 only in order to protect forms authentication cookies from being used maliciously by storing user information in the MembershipUser object.”
Check it out here [PDF]
According to them, in response to this bug, Microsoft now has a KB article that details the limitations of the FormsAuthentication.SignOut Method
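As a rough sketch of what a couple of these mitigations can look like in code (an absolute, non-sliding ticket expiration plus an HttpOnly, SSL-only cookie), consider the following; the 20-minute lifetime is illustrative and HttpCookie.HttpOnly requires ASP.NET 2.0.

using System;
using System.Web;
using System.Web.Security;

// Rough sketch: issue a forms authentication ticket with a fixed expiration
// (no sliding renewal here) in a cookie that is HttpOnly and only sent over SSL.
public class AuthCookieHelper
{
    public static void IssueAuthCookie(HttpResponse response, string userName)
    {
        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
            1,                               // version
            userName,
            DateTime.Now,                    // issued
            DateTime.Now.AddMinutes(20),     // absolute expiration (illustrative TTL)
            false,                           // not persistent
            String.Empty);                   // user data

        HttpCookie cookie = new HttpCookie(
            FormsAuthentication.FormsCookieName,
            FormsAuthentication.Encrypt(ticket));

        cookie.HttpOnly = true;   // not readable from client script (ASP.NET 2.0)
        cookie.Secure = true;     // only transmitted over SSL

        response.Cookies.Add(cookie);
    }
}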
Thursday, August 25, 2005
Security Engineering Index
http://msdn.microsoft.com/SecurityEngineering
This page provides an index to available and emerging guidance for patterns & practices Security Engineering. To build secure applications, security engineering activities must be an integral part of your software development practices. patterns & practices Security Engineering builds on, refines, and extends core life cycle activities to create security-specific activities. You can adopt these activities incrementally as you see fit. These security activities are integrated in MSF Agile, available with Visual Studio Team System. This provides tools, guidance, and workflow to help make security a seamless part of your development experience.
Security Guidance Index
http://msdn.microsoft.com/SecurityGuidance
This page provides an index of patterns & practices Security Guidance for applications. The resources include guides and books available on MSDN together with modular content of various types including scenarios and solutions, guidelines, explained, checklists, and How Tos.
Threat Modeling
http://msdn.microsoft.com/ThreatModeling
This guidance presents the patterns & practices approach to creating threat models for Web applications. Threat modeling is an engineering technique you can use to help you identify threats, attacks, vulnerabilities, and countermeasures that could affect your application. You can use threat modeling to shape your application's design, meet your company's security objectives, and reduce risk.
Security How To Index
http://msdn.microsoft.com/library/en-us/dnpag2/html/SecurityHowTosIndex.asp
This page provides an index of patterns & practices Security How Tos organized using multiple views by categories. The "A Through Z" view at the bottom lists each How To in alphabetical order.
The security engineering index happens to be among my favorite security resources and the Security How-To Index is awesome!
Sunday, July 17, 2005
I spent some time yesterday upgrading my SBS2003 home network to SP1. The document written by the SBS MVP’s on “How to Install Service Pack 1 for SBS 2003” was very helpful in this regard. Thank You!
I currently have a networked Tivo in my home network which I moved to a land line some time ago, primarily so that I could beef up the security on my wireless network. The Tivo firmware STILL does not support anything more than WEP and I most definitely was not comfortable with the “security” of WEP. The key point with having the Tivo on the home network is that, if you want it to use the network to connect to the Tivo service, you need to set it up as a SecureNAT client.
In ISA Server 2000, in addition to setting it up as a SecureNAT client, I had to open the out-bound TCP ports 1026, 4006 and 8080 for the Tivo to connect to the service. The great thing in ISA 2004 was that I could get rid of all of those extra items that I needed to set up.
In ISA Server 2004:
- Internet Access Firewall Policy: Tivo => External Network
External network is predefined in ISA and I added the Tivo as a Computer Network Object
- Protocol == All Out-bound Traffic
- Condition == All Users
All Users group includes both authenticated and unauthenticated users. The SecureNAT client is an unauthenticated user.
- Go into the HTTP Protocol and disable the “Web Proxy” Application Filter.
The above limits unauthenticated users to the Tivo box which is now on a closed land line network which I physically control. All other machines in the network require authentication and utilize the ISA 2004 Firewall client.
Sunday, July 10, 2005
Microsoft, earlier this month, released v2.0 of MBSA. Here is the official blurb:
“In response to direct customer need for a streamlined method of identifying common security misconfigurations, Microsoft has developed the Microsoft Baseline Security Analyzer (MBSA). Version 2.0 of MBSA includes a graphical and command line interface that can perform local or remote scans of Windows systems. MBSA runs on Windows Server 2003, Windows 2000, and Windows XP systems and will scan for common security misconfigurations in the following products: Windows 2000, Windows XP, Windows Server 2003, Internet Information Server (IIS) 5.0, and 6.0, SQL Server 7.0 and 2000, Internet Explorer (IE) 5.01 and later, and Office 2000, 2002 and 2003. MBSA also scans for missing security updates, update rollups and service packs published to Microsoft Update.”
Definitely a tool that should be in your Security Toolbox. Check it out….
Monday, June 27, 2005
One of the basic tenets of secure coding is that ALL input is EVIL and should be validated and sanitized before being allowed into the application. This is also definitely an area where a lot of mistakes can be made.
The PAG folks have written a set of modular How-To's that tackle the finer points of injection attacks and show how to implement effective input validation in your ASP.NET applications. The guidance covers both .NET 1.1 and 2.0.
Check them out:
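To give a flavor of the kind of countermeasures this guidance walks through, here is a minimal sketch that constrains the input to a known-good pattern and then uses a parameterized command so the value is treated strictly as data; the connection string, table and column names are made up for this example.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Text.RegularExpressions;

// Illustrative only: allow-list validation followed by a parameterized query.
public class OrderData
{
    public static DataSet GetOrdersForCustomer(string customerId)
    {
        // Constrain input: exactly five letters or digits (Northwind-style id)
        if (customerId == null || !Regex.IsMatch(customerId, @"^[A-Za-z0-9]{5}$"))
            throw new ArgumentException("Invalid customer id.");

        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Northwind;Integrated Security=SSPI"))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT OrderId, OrderDate FROM Orders WHERE CustomerId = @CustomerId",
                conn);
            cmd.Parameters.Add("@CustomerId", SqlDbType.NChar, 5).Value = customerId;

            // The adapter opens and closes the connection for us.
            SqlDataAdapter adapter = new SqlDataAdapter(cmd);
            DataSet results = new DataSet();
            adapter.Fill(results);
            return results;
        }
    }
}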
Wednesday, June 8, 2005
Ha! Who cares about TechEd!
More interesting things are happening as part of the patterns and practices Security wiki coming out party!
Ward Cunningham, Yes.. THAT Ward Cunningham, walked through with the PAG Security folks on a scenario and a solution for an application and documented the process on the wiki. Check it out!
Monday, June 6, 2005
If you are a software developer and you are interested in making sure that your application is robust and secure, this is a MUST see & utilize resource!
The PAG ( patterns & practices ) folks have put online a resource that provides a view into their present and future deliverables around security engineering to application scenarios. The additional benefit is that the content is provided as a wiki so that the community can annotate, elaborate and contribute.
The security wiki is brought to you by the same folks who brought you "Improving Web Application Security" and "Building Secure ASP.NET Applications" which are both great resources in their own right.
In their own words "This is where we think out loud. Here you’ll find emerging practices, guidance for application scenarios, security engineering, threat modeling, technical guidance and more. We’re looking for your experience, input and feedback to make this a useful resource for application security."
I've had the pleasure of working with the PAG folks on this effort.. I hope that you will also take this opportunity to contribute to making this security wiki a living, working resource that will improve the state of software security.
Check it out @ http://Channel9.Msdn.Com/Security
The topics discussed include everything from ApplicationSecurityMethodology to WebServerSecurity. The products and technologies cover everything from NETFrameworkSecurityHub to ASPNET2SecurityHub. Some of the resources that are provided include SecurityChecklists (These are awesome, BTW!) to information about the SecurityBlocks.
Highly recommended!
Wednesday, May 25, 2005
Gary McGraw of Cigital, co-author of "Exploiting Software" and "Building Secure Software", has been writing a series of articles on secure coding issues in the IEEE Security & Privacy magazine. As a service to the community, he has provided free PDF downloads of the articles at his web site. Be sure to check them out.
Tuesday, May 17, 2005
The Patterns & Practices folks have released an updated Security Guidance regarding Threat Modeling for Web Applications.
The threat modeling process as defined here is context-relevant (i.e. the threat model for a Web App is going to be different from the one for a WinForms application) as well as iterative. The process consists of the following steps:
- Step 1: Identify security objectives. Clear objectives help you to focus the threat modeling activity and determine how much effort to spend on subsequent steps.
- Step 2: Create an application overview. Itemizing your application's important characteristics and actors helps you to identify relevant threats during step 4.
- Step 3: Decompose your application. A detailed understanding of the mechanics of your application makes it easier for you to uncover more relevant and more detailed threats.
- Step 4: Identify threats. Use details from steps 2 and 3 to identify threats relevant to your application scenario and context.
- Step 5: Identify vulnerabilities. Review the layers of your application to identify weaknesses related to your threats. Use vulnerability categories to help you focus on those areas where mistakes are most often made.
Beyond the above there are also Templates that can quickly get you started, a web application security frame that uses categories to organize security vulnerabilities, as well as Tool integration with the Visual Studio Team System.
In short, this is a great piece of work by the same folks who brought you "Improving Web Application Security", "Perf & Scale" and more (Way to go J.D.!)
I was fortunate enough to have the opportunity to contribute to this work as well as act as an external reviewer. Because of that experience, I believe that this particular work will make Threat Modeling much more approachable and understandable to the people who really need to utilize Threat Modeling; The developers in the trenches.
Wednesday, May 11, 2005
I am the Vice-Chair of the IEEE Computer Society (Baltimore Chapter). Our next technical meeting will be held on Thursday, May 19, 2005 (6:30 p.m - 8 p.m) at the Historical Electronics Museum. Please note that both IEEE Members AND Non-Members are welcome to attend!
If you are interested in Information Security, please join us.
TOPIC
"Application of Operations Research techniques in Infosec" by Dr. Julie J.C.H. Ryan.
ABSTRACT
There is a great deal of opportunity in using the techniques and methods in operations research to investigate the operational aspects of information security. At GWU, we are embarked on a long-term research effort to systematically investigate the application of OR methods to operational information security to see which provide the most promise for more in-depth study. So far we have investigated the use of the Cox model, most commonly used in medical research, and the approach of expert judgment elicitation. This talk will provide a brief overview of the work performed to date and the results achieved through these efforts.
ABOUT THE GUEST SPEAKER
Julie J.C.H. Ryan received her D.Sc. from The George Washington University (GWU) in Engineering Management and Systems Engineering. She holds an M.L.S. in Interdisciplinary Studies from Eastern Michigan University and a B.S. from the United States Air Force Academy. She is currently an Assistant Professor at GWU. Her research interests include information security, knowledge management, international relations, and information warfare. She worked for 18 years as an information security specialist, systems engineer, intelligence data analyst, and policy consultant prior to her academic career. She is the co-author of "Defending Your Digital Assets Against Hackers, Crackers, Spies, and Thieves" (2000, McGraw-Hill).
LOCATION
Historical Electronics Museum
1745 West Nursery Road
Linthicum, Maryland
Directions @ http://www.hem-usa.org/contact.html
Friday, December 10, 2004
Came across an interesting comment on one of the lists that I am on.
It would appear that Michael Howard and David LeBlanc, the authors of Writing Secure Code, are working on a new book with John Viega (Building Secure Software) and David Wheeler which is scheduled to hit the shelves in about 6 months. According to LeBlanc, they specifically chose this set of authors to provide really good cross-platform coverage.
Looks like a must have book!
Thursday, November 25, 2004
Jerry Bryant [MS] has an excellent post with links to Security resources that are provided by Microsoft. I am copying this here so that I do not have to go looking for them later:
Tools
- Microsoft Baseline Security Analyzer (MBSA)
  Use this tool to identify common security misconfigurations and missing security updates. MBSA runs on the Windows Server™ 2003, Windows® 2000, and Windows XP operating systems and will scan for vulnerabilities in multiple products and technologies, including Microsoft Internet Information Services (IIS) and SQL Server™.
- Software Update Services (SUS) / Windows Update Services (WUS)
  Quickly and reliably deploy the latest security updates and service packs with Software Update Services. This new site now has the latest info on WUS.
- Windows Update
  Scans your computer and provides a selection of updates tailored for your operating system, software, and hardware.
- Microsoft Office Product Updates
  Scans and updates Microsoft Office products.
- IIS Web Server Lockdown Wizard
  Reduces the attack surface of Internet Information Services (IIS) and includes URLScan to provide multiple layers of protection against attackers.
- UrlScan Security Tool
  Helps prevent potentially harmful HTTP requests from reaching IIS Web servers.
Removal Tools:
Other Tools:
Updating
Isolation and Resiliency
Engineering Excellence
Guidance and Training
- Security Guidance Centers on Microsoft.com (Worldwide / US)
  Prescriptive guidance to help provide defense-in-depth security.
- E-Learning Security Training
  E-Learning self-paced clinics - 4 Developer and 8 ITPro modules. Now available in French, German, Spanish and Japanese.
- XP SP2 - Security Guidance Kit CD (now shipping in US and Canada)
  CD-ROM with tools, templates, and how-to guides.
- Microsoft IT Security Showcase
  An insider view into Microsoft's process of deploying and managing its own enterprise solutions.
- Security Newsletter
  Register for our free monthly e-mail newsletter that's packed with security news, guidance, updates, and community resources to help you protect your network.
- Security Program Guide: Events and Training Information
  Events, webcasts and training available for both IT Professionals and Developers.
- US Security Summit Keynote and Training Content
- Security Notifications via e-mail
  Sign up today to get e-mail alerts when an important security bulletin or virus alert has been released.
- Security Update RSS Feed
- Security Bulletin Search Page
  Search on product, technology or KB article.
- Security Bulletin Webcast
  Join Microsoft experts on the day after bulletin announcements to get the latest information and have the opportunity to ask questions.
- How to Tell If a Microsoft Security-Related Message Is Genuine
- Writing Secure Code, 2nd edition
  Best practices for writing secure code and stopping malicious hackers.
- Building and Configuring More Secure Web Sites
  Best practices used at OpenHack.
- Recent Security Guidance Center additions:
  Windows XP Guide, includes SP2
  New Security Risk Management Guide
  Windows NT 4.0 and Windows 98 Threat Mitigation Guide
  Microsoft Identity and Access Management Series
  Antivirus Defense-in-Depth
  Securing Wireless LANs with PEAP and Passwords
- Small Business Guidance
  Guidance specifically for the smaller business.
- Configuring Windows XP 802.11 Wireless Networks for the Home / Small Business
- Consumer Information:
  http://www.microsoft.com/security/protect
  http://www.microsoft.com/athome/security/default.mspx
- Newsletter for home users
- Security bulletin notifications for home users
Friday, November 19, 2004
Michael Howard discusses how you can run as an administrator and access Internet data safely by dropping unnecessary administrative privileges when using any tool to access the Internet.
He has created an application called DropMyRights to help users who must run as an administrator run applications in a much-safer context—that of a non-administrator. It does this by taking the current user's token, removing various privileges and SIDs from the token, and then using that token to start another process, such as Internet Explorer or Outlook. This tool works just as well with Mozilla's Firefox, Eudora, or Lotus Notes e-mail.
Check out the article...
Sunday, November 14, 2004
Like most computer savvy folks these days, the amount of digital "stuff" in my house is growing rather rapidly. That includes:
- MP3 music files that I've ripped from my CDs
- Photos from my digital camera
- Videos that I've taken
- Documents and Papers
- Source Code stored in my CM system
- Virtual Machine Images
- and more...
Needless to say I have multiple computers in the house that are connected via both wired and wireless networks. Currently I am running a Windows 2000 domain in the house, as my server class machine, which is a bit old, is not one I have upgraded to Windows 2003. All my Windows 2003 machines are Virtual Machines.

Recently, I've bitten the bullet and am in the process of standing up a server class machine that can run Windows 2003 at home. My requirements are that:
- I need a redundant and reliable file storage for my network. A lot of the content that I have on the network is simply things I cannot afford to lose.
- I want to lock down my wireless network.
- ASP.NET Development environment.
- I am seriously getting into collaboration via Windows SharePoint Services. So I am looking to make sure that I have an environment that I can play a bit with it.. A personal goal, at least for the home, is to have a shared calendar for the family.
(1) Starting out with the basics, I picked up a Dell server on sale. The only thing I upgraded was to bump up the memory and add a second network card to it. Redundant and reliable for me means that the storage in my machine needs to be configured either as a RAID 1 or RAID 5. For various reasons, I chose RAID 1. So, I also picked up a HighPoint RocketRaid IDE controller and two 200GB hard disks.
I am also picking up an external USB hard disk to which I intend to back up my RAID array on a weekly basis. I will be keeping this at work; a poor man's version of off-site backup. This way, at most I am not losing more than a week of data if something untoward happens to my entire home system.
(2) I love my Tivo but when it comes to security, it has some issues. My Tivo is set up with the Home Media Option such that I can play all of my MP3s, which are stored on my W2K server, via my Home Theater system. In addition, I can display all of my photos, again stored on my W2K box, on my TV. The Tivo is connected to my home network via a USB Wireless adapter and goes out over the network for program updates etc.
The issue I have is that the highest level of encryption Tivo supports is 128-bit WEP. It does not support WPA at all! This has limited my ability to upgrade the security of my wireless network. So, I've gotten irritated enough that I am pulling wires to my Tivo to convert it from wireless to a hard line. Once this is done, my plan is to implement 802.1X authentication with certificates and lock down the network. Now, if you ask me if I REALLY need to do this, the answer would be, probably not.. But I can, so I will.

(3) (4) Now this is the interesting part, I could install Windows 2003 with WSS and get *some* of the functionality that I want (ASP.NET/Collaboration). But why bother? There is a solution out there that will give me all of the components that I am looking for (Windows 2003, WSS, Exchange, SQL2K) supposedly integrated rather well and designed to run on a single box. Windows Small Business Server 2003.
From what I've seen of and heard about this product, it seems to be ideal for what I am looking for within the house. I am thinking that if I install SUS on top of the standard SBS 2003 install, I would also get the ability to update and patch the machines on my network as well.
The only decision I have not made as of yet, is where to put the SBS server on the network. I am currently connected to the Internet via a cable modem, which in turn is coming into a Wireless router with hard line ports. The router has NAT capabilities and has a built in simplistic firewall that has done the job for me so far. But SBS 2003 premium comes with ISA server and I have 2 NICs in the box, so I could hook it up to be Internet facing. Or I could simply hook up the SBS machine to the internal network behind the Router. I'll have to think a bit more about it..
One resource that I am finding extremely helpful is "Windows Small Business Server 2003 Administrator's Companion" by Charlie Russel, Sharon Crawford and Jason Gerend.
Monday, October 25, 2004
One of my fellow CMAP User Group Members, Scott McMaster, recently posted a question on our listserve:
"Like most people, I imagine, I've always considered Windows Authentication for intranet-only scenarios. However, from what little relevant discussion I've been able to find on the subject, it appears that using Windows Authentication to access domain-hosted ASP.NET applications over the Internet using IE5+ is a valid approach as long as IIS is properly configured (i.e. no anonymous access, no basic auth). IE and IIS do NTLM/Kerberos without sending passwords around, and the world is nice and safe."
Just to level-set here, this is the web server and browser configuration:
- Website is set for only Integrated Windows Authentication
- Stand alone client machine on the Internet (Not logged into domain)
- Browser is IE 5+
Now, I am a bit... ah.. paranoid when it comes to things like this. Given the fact that if you are on the Internet and are not connected to a domain, you get a login prompt, I went with the assumption that if the login prompt came up and you had to enter your domain credentials, then they were sent as clear text. Well, Scott was persistent and was backed up by our local DCC, Geoff Snowman, who also chimed in that it was valid to use Integrated Windows Authentication on the Internet.
By this time, I was well and truly engaged. In communicating privately with Scott, the resources that we were finding in our searches were simply not that clear on this point ... at least to me
So, following my traditional method of when in doubt, ask the experts, I asked the question regarding this scenario on a list that I am on and got a definitive answer from Ken Schaefer, who just so happens to be an IIS MVP.
The short answer: Scott's research proved to be right, my assumptions were wrong, and the world is a safer place.
The long answer is as follows (The answers are pretty much a direct quote from Ken. My stuff in bold):
Integrated Windows Authentication covers two authentication mechanisms - Kerberos and NTLM. Neither authentication mechanism allows for plain-text credentials (well, not of the password anyway).
In general:
- Whether the site is in the Intranet security zone determines whether IE attempts to automatically authenticate when prompted by the server.
- Whether the site is in the Internet security zone determines whether IE attempts to use Kerberos authentication (Kerberos authentication requires the client machine to be able to contact the KDC to get TGTs etc, and generally this isn't possible in an Internet setting, so IE uses NTLM instead).
- Whether your user is logged on to the domain or not, on their workstation, is irrelevant to determining the authentication mechanism used, or how IE sends credentials to the server.
If the site is placed into the local Intranet security zone -and- Internet Explorer is still in its default configuration (if you go to Tools -> Internet Options -> Security -> Custom settings for Intranet zone, there is an option "automatic logon only in Intranet zone"), then Internet Explorer will attempt to log you on using your current logged on credentials when the web server sends back its 401 response (IE will attempt an anonymous request first no matter what the configuration, then the server will send back a 401, then IE will attempt to auto-logon). If the credentials IE sends automatically are not accepted by the server (the server sends back another 401), then IE will prompt you to supply alternate credentials.
The important thing to note here is that if the browser is IE, the domain credentials that I enter are NOT sent in cleartext but instead use either NTLM or Kerberos depending on the configuration above.
Neither NTLM nor Kerberos authentication uses plain text to pass the password. NTLM authentication uses the NTLM hashing algorithm to generate a hash of the password. This is sent across the wire by the client and is compared to the hash of the password stored by the web server (for local accounts) or by the DC (for domain accounts). If the hash matches, then the user is authenticated. (The process is actually a little more complex, otherwise anyone could just sniff a hash and use that). If you want the gory details, check out: http://davenport.sourceforge.net/ntlm.html (about 40% of the way down the page is a section titled "The NTLM v2 Response" which describes how the hash is constructed when using NTLM v2). Kerberos authentication uses Kerberos tickets.
Excellent! It is a good day when you learn something new. It is a great day when what you have learned can improve your security. Thanks Guys!
Sunday, October 24, 2004
Mark Burnett, who is the author of "Hacking the Code", has a couple of great articles posted to the OWASP site.
Both are must read articles!
Thursday, October 14, 2004
There has been much talk about what is considered a secure password. So it was a true pleasure for me to recently read a fascinating study on this topic that provided some hard numbers to back up the claims. The study was published in the current issue of IEEE Security and Privacy and is titled "Password Memorability and Security: Empirical Results" by Jeff Yan, Alan Blackwell, Ross Anderson and Alasdair Grant.
First some background. Per the article, "Human memory for sequences is temporally limited, with a short term capacity of around seven, plus or minus two items. In addition, when humans do remember a sequence of items, those items must be familiar chunks such as words or familiar symbols. Finally, human memory thrives on redundancy - we're much better at remembering information we can encode in multiple ways."
So what these folks did was have three separate test groups:
- The control group were asked to choose a seven-character password with at least one nonletter
- Second group chose passwords by closing their eyes and pointing randomly to a grid of numbers and letters
- The third group was instructed to choose passwords based on mnemonic phrases and given examples of how to go about doing so
Then the testers ran the following types of attacks against the passwords:
- Dictionary attacks: Simply use different dictionary files to crack the passwords
- Permutation of words and numbers: For each word from a dictionary file, permute with 0, 1, 2 and 3 digits and also use common number substitutions such as 1 for an I and 5 for S etc.
- User information attacks: Exploit user data that is collected from password files such as userid, full name etc
- They also tried brute force attacks (Try all possible combination of keys) against passwords 6 characters long.
Pick up and read the article itself for the details and the numbers, but the conclusions are interesting. The permuted dictionary attack was the most successful and the brute force attack successfully cracked all six-character passwords.
They also confirmed the two folk beliefs that "... users have difficulty remembering random passwords and that passwords based on mnemonic phrases are harder to guess than naively selected passwords." They have also debunked the folk beliefs that "... random passwords are better than passwords based on mnemonic phrases. Each appeared to be as strong as the other" and that "... passwords based on mnemonic phrases are harder to remember than naively selected passwords. In fact, each type is as easy to remember as the other".
Some of the key take-aways were:
- "... security can be significantly improved by educating users to select mnemonic passwords
- Size of the password matters
- Entropy per character matters, so instruct users to choose passwords containing numbers and special characters as well as letters."
So what does this mean for me? Well from now on, my password selection page is going to have the following (Some of the content is adapted from the directions that were given to the mnemonic group in the test):
Choosing a good password is critical to maintaining the security of this system. To construct a good password, create a simple sentence of 8 to 9 words and choose letters from the words to make up a password. You might take the initial or final letters; you should put some letters in upper case to make the password harder to guess; and at least one number and special character should be inserted as well. An example is the phrase "It's 12 noon and I am hungry" which can be used to create the password "I's12n&Iah". All passwords will be checked to make sure that the following complexity requirements are met:
- Must be at least 9 characters
- Must contain at least one lower case letter, one upper case letter, one digit and one special character
- Valid special characters are - @#'$%^&+=
The key point here is not to just to show them the 3 above bullet items but to provide explicit guidance on how a password should be chosen to meet the outlined complexity criteria.
Oh yes, as a bonus here is a regex that will enforce the above complexity requirement:
^.*(?=.{9,})(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[@#'$%^&+=]).*$
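And here is a minimal snippet showing the same pattern being enforced from server-side C# (the pattern could just as easily be dropped into an ASP.NET RegularExpressionValidator):

using System.Text.RegularExpressions;

// Server-side check using the complexity regex above.
public class PasswordPolicy
{
    private const string ComplexityPattern =
        @"^.*(?=.{9,})(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[@#'$%^&+=]).*$";

    public static bool IsComplexEnough(string candidate)
    {
        // The lookaheads enforce: length >= 9, at least one digit, one lower case
        // letter, one upper case letter, and one of the allowed special characters.
        return candidate != null && Regex.IsMatch(candidate, ComplexityPattern);
    }
}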
Saturday, October 9, 2004
It has been interesting to me to see the recent ASP.NET vulnerability play out. One of the main factors that came into focus for me was that most developers do not seem to consider the principle of Defense in Depth when it comes to writing privileged code.
Since an example speaks much louder than lectures, let's take the following case:
My web site is as follows:
\webroot
- web.config
\Camelot
- ProtectedPage.aspx
My web.config has the following:
<location path="Camelot">
  <system.web>
    <authorization>
      <allow roles="KnightsRoundTable" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>
As you can see above, the web application is configured such that the only users who have access to the protected directory "Camelot" are members of the group "KnightsRoundTable".
The problem is that most people leave it at that.. That is NOT Enough!
At this point you are basically exposed when your authorization module is somehow bypassed. So let us take a look at some things you can do to apply the principle of Defense in Depth to ProtectedPage.aspx.
1) Restrict which users can call your code
One of the easiest ways to do this is to annotate your classes and methods with declarative principal permission demands to control which users can call your classes and class members. In the above example, I would do the following:
[System.Security.Permissions.PrincipalPermission
(System.Security.Permissions.SecurityAction.Demand,Role=@"KnightsRoundTable")]
public class ProtectedPage: System.Web.UI.Page
{
}
If anyone who is not in the KnightsRoundTable group tries to call this page, they will get the following error:
Security Exception
Exception Details: System.Security.SecurityException: Request for principal permission failed.
And if you've done the right thing and set up a Default Redirect page for errors, they will not get a stack trace and will be redirected to a generic error page.
2) Protect against spoofed post backs.
void Page_Init (Object sender, EventArgs e)
{
if (User.Identity.IsAuthenticated)
ViewStateUserKey = User.Identity.Name;
}
What this does is key the view state to an individual using a unique value of your choice. This option, which is only available in ASP.NET 1.1, is the Page.ViewStateUserKey. This needs to be applied in Page_Init because the key has to be provided to ASP.NET before view state is loaded.
3) Redirect if user is not authenticated
The third thing that I do in a protected page is simply to make sure that a user is authenticated before they are allowed to view any content.
private void Page_Load(object sender, System.EventArgs e)
{
if (!User.Identity.IsAuthenticated)
Response.Redirect("~/GoodBye.aspx",true);
}
The key point here is that I am not simply depending on just one thing here to protect this page but a layered defense. Hopefully if one thing fails, the others will protect the page.
Oh, did I mention that I also extensively instrument my applications such that when someone does try to access a protected page, I log that activity and if the content of that page is sensitive enough, I may also send real time notification of attempted break-ins to an admin?
Paranoid? Perhaps. But I also sleep a whole lot more soundly 
Thursday, October 7, 2004
In response to the vulnerability in ASP.NET forms authentication that was posted to NTBugtraq, Microsoft has released an HTTP Module and associated installer that "... protects all ASP.NET applications on a Web server against canonicalization problems that are currently known to Microsoft.."
Find more info about it and install NOW!
Thursday, September 23, 2004
There has been some discussion of late about passwords vs. pass phrases and how long a password should be. I won't add to the mix except to say that I am a believer when it comes to complex passwords. Heck, my 4 year old is required to use a userid and password to log into his session on his computer 
I've recently been working on some things that require me to make sure that the passwords that are used are sufficiently complex. Here is what I am using right now:
- Must be at least 10 characters
- Must contain at least one lower case letter, one upper case letter, one digit and one special character
- Valid special characters are - @#$%^&+=
The regex that I am using to enforce this is:
^.*(?=.{10,})(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%^&+=]).*$
As you can see in the regex, the list of special characters is configurable...
Robert has a great post in which he talks about some of the attacks that can be mounted against web sites. In particular he gives good info on hidden field tampering, SQL injection and cross-site scripting.
I just wanted to add a couple of notes to his most excellent comments on protecting the viewstate. By default, view state transmitted to the client includes a salted hash. But you can also use the <machineKey> element to specify the encryption keys, validation keys and the particular algorithm used to protect both the forms authentication cookies as well as the page level view state.
<machineKey validationKey="AutoGenerate,IsolateApps"
decryptionKey="AutoGenerate,IsolateApps" validation="SHA1" />
The IsolateApps setting is new to .NET 1.1 and tells ASP.NET to automatically generate the encryption keys and make them unique for each app. The validation attribute specifies the algorithm used for checking the integrity of the page-level viewstate.
The caveat is that in a web farm scenario you would have to explicitly generate the keys to keep them the same across the web farm nodes. When you do so, make sure you use a cryptographically strong key. I would highly suggest using Keith Brown's "GenerateMachineKey" utility which can be found on pluralsight's tools page.
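If you would rather generate the key material yourself rather than use a utility, a minimal sketch looks like the following; the 64-byte length shown is the size commonly used for a SHA1 validationKey (a 3DES decryptionKey is typically 24 bytes), so adjust for your configuration.

using System;
using System.Security.Cryptography;
using System.Text;

// Minimal sketch: generate a cryptographically strong hex string suitable for
// pasting into the <machineKey> validationKey or decryptionKey attributes.
public class MachineKeyGenerator
{
    public static string CreateKey(int byteLength)
    {
        byte[] buffer = new byte[byteLength];
        RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
        rng.GetBytes(buffer);   // cryptographically strong random bytes

        StringBuilder hex = new StringBuilder(byteLength * 2);
        foreach (byte b in buffer)
            hex.AppendFormat("{0:X2}", b);

        return hex.ToString();
    }
}

// Example usage: Console.WriteLine(MachineKeyGenerator.CreateKey(64));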
The other thing that can be done is to key the view state to an individual using a unique value of your choice. This option, which is again only available in ASP.NET 1.1, is the Page.ViewStateUserKey. This needs to be applied in Page_Init because the key has to be provided to ASP.NET before view state is loaded. Here is an example:
void Page_Init (Object sender, EventArgs e)
{
if (User.Identity.IsAuthenticated)
ViewStateUserKey = User.Identity.Name;
}
Make sure you check out Robert's blog entry...
Sunday, September 19, 2004
I recently saw a BizTalk demo that utilized the WSE adapter to authenticate against a web service using an X.509 certificate. From what I saw this was purely a machine to machine authentication.
The question I have is "Is it possible to dynamically pass my credentials, i.e. X.509 cert, into a BizTalk orchestration such that the authentication against an external web service is done using MY credentials?"
I *think* what I am looking for (I am not a BizTalk guru so may be getting my terminology mixed up) is for an Orchestration to run under my security context so that everything that is done as part of that orchestration is done using my credentials... Is it possible? Scott? Anyone?
Wednesday, September 15, 2004
In the latest issue of Crypto-Gram, Bruce Schneier provides a "Cryptanalysis of MD5 and SHA" which looks at the weakness in the MD5 and SHA functions that were announced at the CRYPTO Conference recently. Some highlights:
"... Today, the most popular hash function is SHA-1, with MD5 still being used in older applications. "
".. To a user of cryptographic systems -- as I assume most readers are -- this news is important, but not particularly worrisome. MD5 and SHA aren't suddenly insecure."
"It's time for us all to migrate away from SHA-1."
"Luckily, there are alternatives. The National Institute of Standards and Technology already has standards for longer -- and harder to break -- hash functions: SHA-224, SHA-256, SHA-384, and SHA-512. They're already government standards, and can already be used."
.NET provides out of the box support for the MD5, SHA-1, SHA-256, SHA-384 and SHA-512 hashing algorithms.
A major use of hash functions in a web based application is to store a password as a hash, or even better as a salted hash. A frequently used helper function that is used by many to implement this functionality is the very appropriately named HashPasswordForStoringInConfigFile method of FormsAuthentication. Presently, the only hash algorithms that are supported by this method are MD5 and SHA-1. I REALLY would like to see this support extended to SHA-256.
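In the meantime, rolling your own is straightforward with the framework's built-in SHA-256 support. Here is a minimal sketch of a salted SHA-256 password hash; the 16-byte salt and Base64 encoding are illustrative choices, not anything prescribed by ASP.NET.

using System;
using System.Security.Cryptography;
using System.Text;

// Minimal sketch: store hash(salt + password) using SHA-256, keeping the salt
// alongside the hash so the password can be verified later.
public class PasswordHasher
{
    public static string CreateSaltedHash(string password, out string salt)
    {
        byte[] saltBytes = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(saltBytes);   // random salt
        salt = Convert.ToBase64String(saltBytes);

        return HashWithSalt(password, saltBytes);
    }

    public static bool Verify(string password, string salt, string expectedHash)
    {
        byte[] saltBytes = Convert.FromBase64String(salt);
        return HashWithSalt(password, saltBytes) == expectedHash;
    }

    private static string HashWithSalt(string password, byte[] saltBytes)
    {
        byte[] passwordBytes = Encoding.UTF8.GetBytes(password);

        byte[] input = new byte[saltBytes.Length + passwordBytes.Length];
        Buffer.BlockCopy(saltBytes, 0, input, 0, saltBytes.Length);
        Buffer.BlockCopy(passwordBytes, 0, input, saltBytes.Length, passwordBytes.Length);

        SHA256 sha = new SHA256Managed();
        return Convert.ToBase64String(sha.ComputeHash(input));
    }
}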
Tuesday, September 14, 2004
One of the things I am researching a bit these days is how best to go about incorporating logging into a web based application. I am looking at it from both an application troubleshooting perspective as well as a security perspective.
Obviously web servers log activity in the W3C format. But those are access logs which tend to get very detailed and large. And it really does not give you a good picture of what is going on in the application. I've been doing a fair bit of reading in preparation for this and have come across some good information. Hopefully it will spark some conversations from the developers, IA (Information Assurance) folks and other like minded people on what they would like to see in an application log.
A source that addressed this question directly was the AppSec FAQ at OWASP:
- Do I need to have logging in my application even if I've W3C logs?
Yes, it's important that your application maintains "application level" logs even when W3C logging is used. As W3C logs contain records for every http request, it is difficult (and, at times impossible) to extract a higher level meaning from these logs. For instance, the W3C logs are cumbersome to identify a specific session of user and the activities that the user performed. It's better that the application keeps a trail of important activities, rather than decode it from W3C logs.
- What should I log from within my application?
Keep an audit trail of activity that you might want to review while troubleshooting or conducting forensic analysis. Please note that it is inadvisable to keep sensitive business information itself in these logs, as administrators have access to these logs for troubleshooting. Activities commonly kept track of are:
- Login and logout of users
- Critical transactions (e.g.. fund transfer across accounts)
- Failed login attempts
- Account lockouts
- Violation of policies
The data that is logged for each of these activities usually include:
- User ID
- Time stamp
- Source IP
- Error codes, if any
- Priority
Mike Gunderloy, in his book "Coder to Developer" (excellent book! Go buy it now!), also devotes an entire chapter to logging application activity. As compared to OWASP, he looks at it more from an application troubleshooting perspective and advises that as a rule, you should "Log the information that you use when debugging". In particular he recommends looking at logging the following items:
- Error messages and information, including a stack trace of the error
- The state of internal data structures at key points in the application
- User actions, such as button clicks and menu item selections
- The time the significant actions were performed
- Important information about the environment, such as variable settings and available memory
- Audit and log access across application tiers
- Consider identity flow
- Log key events
- Secure log files
- Back up and analyze log files regularly
- Log IP Addresses
- If you are creating a lot of files, you should have your own log files and not use the OS Application Log
- Logs should go into a directory that is user configurable and it's best to create a new log file every day
- Consider creating multiple log files, one for routine events another for extraordinary events
- Application logs should be writable only by the administrator and the user the service runs under
- When code fails for security reasons, log the data in a place that only the administrator has access to
One surprise in my research was that this topic was almost completely (beyond a couple of sentences) ignored in "Code Complete 2".
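To make the shape of an application-level audit entry concrete, here is a minimal sketch along the lines of the fields listed above; the tab-delimited format, daily file naming and directory handling are illustrative only, and in practice the log location should be configurable and locked down as noted.

using System;
using System.IO;

// Minimal sketch of an application-level audit logger (one file per day).
public class AuditLogger
{
    private readonly string logDirectory;

    public AuditLogger(string logDirectory)
    {
        this.logDirectory = logDirectory;
    }

    public void Log(string userId, string sourceIp, string eventName, string errorCode)
    {
        string file = Path.Combine(logDirectory,
            "app-" + DateTime.Now.ToString("yyyyMMdd") + ".log");

        // timestamp, user, source IP, event, error code
        string record = String.Format("{0:u}\t{1}\t{2}\t{3}\t{4}",
            DateTime.Now, userId, sourceIp, eventName, errorCode);

        using (StreamWriter writer = File.AppendText(file))
        {
            writer.WriteLine(record);
        }
    }
}

// Example: new AuditLogger(@"D:\Logs").Log("jdoe", "192.0.2.10", "LoginFailed", "401");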
Comments?
Thursday, September 9, 2004
"Hacme Bank™ is designed to teach application developers, programmers, architects and security professionals how to create secure software. Hacme Bank simulates a "real-world" online banking application, which was built with a number of known and common vulnerabilities such as SQL injection and cross-site scripting. This allows users to attempt real exploits against a web application and thus learn the specifics of the issue and how best to fix it. Foundstone uses this application extensively in our Ultimate Web Hacking and Building Secure Software training classes. "
The application is written in ASP.NET (C#) and they have a "User and Solutions Guide" that walks you through the lessons. Very cool! You can find the link to download the software and the guide on Foundstone's Strategic Secure Software Page.
Monday, September 6, 2004
A link to this info (Thanks Susan!) came across on one of the lists that I am on:
USB "thumb drives" drive some security folks crazy because they're so small physically and so big storage-wise; what's to keep people from popping a USB drive into a USB slot, copying corporate data and walking out the door? For the USB-paranoid, SP2 includes an ability to let users read data from a USB drive, but not write data to that drive. It's a simple Registry change. First, create a whole new key: HKLM\System\CurrentControlSet\Control \ StorageDevicePolicies. Then create a REG_DWORD entry in it called WriteProtect. Set it to 1 and you'll be able to read from USB drives but not write to them.
Cool!
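For what it's worth, the same tweak can be applied from code instead of regedit; here is a minimal C# sketch using the registry API (it needs administrative rights, and the key and value names come straight from the description above).

using Microsoft.Win32;

// Minimal sketch: enable the Windows XP SP2 USB write-protect policy.
public class UsbWriteProtect
{
    public static void Enable()
    {
        // CreateSubKey opens the key if it already exists.
        RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"SYSTEM\CurrentControlSet\Control\StorageDevicePolicies");
        key.SetValue("WriteProtect", 1);   // 1 = read-only; set to 0 to allow writes again
        key.Close();
    }
}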
Wednesday, September 1, 2004
From /. :
Cryptography Research has issued a Q&A that explains the security implications of the hash function collision attacks recently announced at CRYPTO 2004. Apparently the consequences can be catastrophic for certain kinds of code signing and digital signatures, but MD5 sums for checking binaries are (mostly) OK. While the speculation that SHA-1 is about to fail seems to be overblown, updating the many legacy systems and protocols that rely on MD5 is going to be a massive undertaking.
Per Mike Shaw:
There is now an updated version of the Microsoft Baseline Security Analyzer (MBSA). MBSA is a tool that can be used to validate the configuration and patch status of computers on your network. It is a BASELINE tool, i.e. it gives you a place to start with your security configuration.
You can get more details and download from: http://www.microsoft.com/technet/security/tools/mbsahome.mspx
The Windows XP Security Guide provides recommendations for deploying Windows XP in three distinct environments. The first and most common of these is an enterprise environment that consists of Windows XP running in a Windows 2000 or Microsoft Windows Server(tm) 2003 domain. The second consists of Windows XP in a high security environment in which security risk mitigation can be implemented at the highest possible level. Finally, guidance is offered for deploying Windows XP in a stand-alone or unmanaged environment. Information is also provided about the numerous new security options that are available in Windows XP Service Pack 2 (SP2).
Tuesday, August 31, 2004
It looks like OWASP, one of my favorite web application security projects has updated their portal (Umm... Guys, where is the RSS feed? - at least for the news items on the front page?) AND even better, is starting local chapters!
I am chagrined to note that I missed the initial meeting of my relatively local chapter (Washington D.C) :-( The DC chapter leader is Jeff Williams of Aspect Security and hopefully I can meet him and the rest of the like minded folks at the next chapter meeting. The DC chapter meetings are scheduled to meet the last Wednesday of every month.
Per Michael Howard:
Authentication and Access Control Diagnostics 1.0 (more commonly known as AuthDiag) is a tool released by Microsoft aimed at helping IT professionals and developers more effectively find the source of authentication and authorization failures.
These users have often seen behavior from Internet Information Services (IIS) that seems inappropriate or random when users authenticate to the IIS server. The complex world of authentication types and the various levels of security permissions necessary to allow a user to access the server causes many hours of labor for those tasked with troubleshooting these problems.
AuthDiag 1.0 is a robust tool that offers an efficient method for troubleshooting authentication on IIS 5.x and 6.0. It will analyze metabase configuration and system-wide policies, warn users of possible points of failure, and guide them to resolving the problem. AuthDiag 1.0 also includes a monitoring tool called AuthMon designed to capture a snapshot of the problem while it occurs in real-time. AuthMon is robust and specially designed for IIS servers, removing any information not pertinent to the authentication or authorization process.
Download @
The primary focus of Microsoft .NET Framework 1.1 Service Pack 1 (SP1) is improved security. In addition, the service pack includes roll-ups of all reported customer issues found after the release of the Microsoft .NET Framework 1.1. Of particular note, SP1 provides better support for consuming WSDL documents, Data Execution prevention and protection from security issues such as buffer overruns. SP1 also provides support for Windows XP Service Pack 2 to provide a safer, more reliable experience for customers using Windows XP.
Tuesday, August 24, 2004
Ken on the SC-L Listserve asked for suggestions on ".... first steps that developers might consider, even in the absence of top-level embracing of a more secure development methodology" and Hans Westphal [MS] responded with the following list of excellent resources. I am putting this down for my own benefit!
Subscribe to Security lists:
Sc-l@securecoding.org, NTBUGTRAQ@LISTSERV.NTBUGTRAQ.COM
Self Education through books:
and Webcasts:
MSDN Webcast: Secure Mobile Data Using the Microsoft .NET Compact Framework and SQL CE 2.0 - Level 300
Wednesday, September 01, 2004 - 11:00 AM-12:30 PM Pacific Time
Rob Tiffany, President, Hood Canal Mobility
Would you like to be certain that data on a mobile device is secure? Without needing any knowledge of cryptography, you can build an application that lets users check-in and check-out their sensitive files. This webcast focuses on building an encrypted, password-protected storage vault for files residing on Pocket PCs.
http://www.placeware.com/cc/mseventsbmo/join?id=1032257382&role=attend&pw=webcast
MSDN Webcast: Essentials of Application Security (Part 1) - Secure Communications - Level: 200
Friday, September 3, 2004 - 9:00 AM-10:00 AM Pacific Time
Ron Cundiff, MSDN Developer Community Champion, Microsoft Corporation
This webcast is the first of a 3-part series about the importance of Application Security and its best practices and guidelines. This part specifically addresses Secure Communications in the context of secure
application development. After an overview of the costs of inadequate security and the benefits of developing secure applications, this presentation concentrates on secure communications as part of a larger
security solution, examining specific techniques such as using certificates in the Secure Sockets Layer (SSL). The webcast includes two demonstrations: Buffer Overruns and SSL Server Certificates.
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032257602&Culture=en-US
MSDN Webcast: Essentials of Application Security (Part 2) - Authentication - Level: 300
Tuesday, September 7, 2004 - 9:00 AM-10:00 AM Pacific Time
Ron Cundiff, MSDN Developer Community Champion, Microsoft Corporation
This webcast is the second of a 3-part series about the importance of Application Security and its best practices and guidelines. This part specifically addresses Authentication in the context of secure application development. After an overview of the costs of inadequate security and the benefits of developing secure applications, we concentrate on Authentication as part of a larger security solution, examining specific Authentication techniques and best practices in IIS. The webcast includes two demonstrations: Buffer Overruns and IIS Authentication Techniques.
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032257885&Culture=en-US
MSDN Webcast: "Ask The Developer Security Experts" Series: Windows XP Service Pack 2: A Developer Overview - Level: 200
Tuesday, September 7, 2004 - 11:00 AM-12:00 PM Pacific Time
Tony Goodhew, Product Manager, Microsoft
This webcast series brings together some of the sharpest security-focused Microsoft developers to provide expert answers to your security questions. Beginning with a brief overview of Windows® XP Service Pack 2 (SP2), we will focus the discussion on what these changes mean for you as a developer and how these changes will affect your various development tools. This presentation will be followed by an extensive Q&A period where you can "Ask the Experts" your in-depth questions about Windows XP SP2. Do you have a question you want to submit to the experts before the webcast? Send your security questions about Windows XP SP2 to our panel of experts ahead of time at devxcast@microsoft.com.
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032257887&Culture=en-US
MSDN Webcast: A Hackers View of Your Web Applications Part 1: Procedures for Code Security - Level: 300
Tuesday, September 7, 2004 - 1:00 PM-2:00 PM Pacific Time
Dennis Hurst, Senior Consulting Engineer, SPI Dynamics
With the threat of cyber attacks, today's Web environment has made application security an essential element in the application development lifecycle. The first part of this two-part series will define what Web application security is, why it is needed, and how it differs from other categories of Internet security. Additionally, we will examine appropriate procedures and technologies essential to the security of Web application code. Through a review of recent Web application breaches, we will expose the prolific methods hackers use to execute break-ins via the Web. By taking an in-depth look at how Web-based applications work and the techniques hackers use to exploit them, you will be better equipped to protect your confidential information.
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032257889&Culture=en-US
MSDN Webcast: Essentials of Application Security (Part 3) - Authorization - Level: 300
Friday, September 10, 2004 - 9:00 AM-10:00 AM Pacific Time
Ron Cundiff, MSDN Developer Community Champion, Microsoft Corporation
This webcast is the third of a 3-part series about the importance of Application Security and its best practices and guidelines. This part specifically addresses Authorization in the context of secure application development. After an overview of the costs of inadequate security and the benefits of developing secure applications, we concentrate on Authorization as part of a larger security solution, examining Trusted Subsystem Model Authorization techniques and best practices. The webcast includes two demonstrations: Buffer Overruns and Trusted Subsystem Model Authorization Techniques.
http://msevents.microsoft.com/cui/eventdetail.aspx?EventID=1032257892&Culture=en-US
MSDN Webcast: A Hackers View of Your Web Applications Part 2: Web Hacking - Attack Scenarios and Examples - Level: 300
Monday, September 13, 2004 - 1:00 PM-2:00 PM Pacific Time
Dennis Hurst, Senior Consulting Engineer, SPI Dynamics
By taking advantage of the public access to a company and using it to subvert your applications, hackers can gain easy access into your company's sensitive backend data. Firewalls and IDS will not stop such attacks because hackers using the Web application layer are not seen as intruders. In the 2nd part of this two-part series, learn how to defend against attacks at the Web application layer with examples covering recent hacking methods such as: SQL Injection, Cross Site Scripting, Parameter Manipulation, Session Hijacking, and LDAP Injection.
http://msevents.microsoft.com/cui/eventdetail.aspx?EventID=1032257907&Culture=en-US
MSDN Webcast: Overview of XP SP2 for Developers - Level: 200
Tuesday, September 14, 2004 - 9:00 AM-10:30 AM Pacific Time
Tony Goodhew, Product Manager, Microsoft
Review the changes that Windows XP Service Pack 2 delivers and what they mean for you. Windows XP SP2 is designed to deliver a number of safety technologies in the Internet Connection Firewall, Web Browsing experience, Email/IM and Application Memory Protection. Each of these areas has direct impact on developers and this session covers the major items and what you need to know. Learn how these changes will affect your various development tools.
http://msevents.microsoft.com/cui/eventdetail.aspx?EventID=1032257920&Culture=en-US
MSDN Webcast: Implementing Application Security Using the .NET Framework Part 1 - Level: 300
Wednesday, September 14, 2004 - 9:00 AM-10:00 AM Pacific Time
Rob Jackson, Developer Community Champion, Microsoft Corporation
This is part 1 of a 3-part series for experienced developers. In this series, you will learn how to implement additional security features to secure applications that are built on the .NET Framework. You will learn how security features are integrated into the .NET Framework. You will learn how to use both code access security and role-based security to limit vulnerabilities. You will also learn how to use the cryptographic provider support in the .NET Framework to encrypt and sign data. Additionally, you will learn how to secure Web applications and Web services that are built by using ASP.NET. Finally, you will learn a few tips for writing secure code with the .NET Framework. Parts 2 and 3 of the series will be presented on 9/21 and 9/28, respectively.
http://msevents.microsoft.com/cui/eventdetail.aspx?EventID=1032257965&Culture=en-US
MSDN Webcast: Writing Secure Code - Threat Defense Part 1 - Level: 200
Friday, September 17, 2004 - 9:00 AM-10:00 AM Pacific Time
David Deatherage
This is part 1 of a 3-part series for experienced developers. In this series, you will learn established best practices for applying security principles throughout the development process. You will learn effective strategies for defending against common security threats such as buffer overruns, cross-site scripting, SQL injection, and denial of service attacks. Parts 2 and 3 of the series will be presented on 9/24 and 10/1, respectively.
http://msevents.microsoft.com/cui/eventdetail.aspx?EventID=1032258007&Culture=en-US
MSDN Webcast: Implementing Application Security Using the .NET Framework Part 2 - Level: 300
Tuesday, September 21, 2004 - 9:00 AM-10:00 AM Pacific Time
Ron Cundiff, MSDN Developer Community Champion, Microsoft Corporation
This is part 2 of a 3-part series for experienced developers. In this series, you will learn how to implement additional security features to secure applications that are built on the .NET Framework. You will learn how security features are integrated into the .NET Framework. You will learn how to use both code access security and role-based security to limit vulnerabilities. You will also learn how to use the cryptographic provider support in the .NET Framework to encrypt and sign data. Additionally, you will learn how to secure Web applications and Web services that are built by using ASP.NET. Finally, you will learn a few tips for writing secure code with the .NET Framework. Part 3 of the series will be presented on 9/28.
http://msevents.microsoft.com/cui/eventdetail.aspx?EventID=1032258017&Culture=en-US
MSDN Webcast: "Ask The Developer Security Experts" Series: Using WSE to Secure your Web Services with WS-Security - Level: 200
Thursday, September 23, 2004 - 11:00 AM-12:00 PM Pacific Time
Maarten Van De Bospoort, Consultant, Microsoft Corporation
This webcast series brings together some of the sharpest security-focused Microsoft developers to provide expert answers to your questions about securing your Web services. We will begin this webcast with a brief discussion of the advantages of using WS-Security over traditional wire level security on the protocol level, including an explanation of how WS-Security is built upon XML security and how the new Web Services Enhancements (WSE) make this easy to implement. After this overview, this session will continue with an extensive Q&A period where you can "Ask the Experts" your in-depth questions about securing your web services with WS-Security and WSE. Do you have a question you want to submit to the experts before the webcast? Send your questions about securing Web services to our panel of experts ahead of time to devxcast@microsoft.com.
http://msevents.microsoft.com/cui/eventdetail.aspx?EventID=1032258027&Culture=en-US
MSDN Webcast: Writing Secure Code - Threat Defense Part 2 - Level: 200
Friday, September 24, 2004 - 9:00 AM-10:00 AM Pacific Time
Ron Cundiff, MSDN Developer Community Champion, Microsoft Corporation
This is part 2 of a 3-part series for experienced developers. In this series, you will learn established best practices for applying security principles throughout the development process. You will learn effective strategies for defending against common security threats such as buffer overruns, cross-site scripting, SQL injection, and denial of service attacks. Part 3 of the series will be presented on 10/1.
http://msevents.microsoft.com/cui/eventdetail.aspx?EventID=1032258029&Culture=en-US
MSDN Webcast: Implementing Application Security Using the .NET Framework Part 3 - Level: 300
Tuesday, September 28, 2004 - 9:00 AM-10:00 AM Pacific Time
Rob Jackson, Microsoft Corporation
This is part 3 of a 3-part series for experienced developers. In this series, you will learn how to implement additional security features to secure applications that are built on the .NET Framework. You will learn how security features are integrated into the .NET Framework. You will learn how to use both code access security and role-based security to limit vulnerabilities. You will also learn how to use the cryptographic provider support in the .NET Framework to encrypt and sign data. Additionally, you will learn how to secure Web applications and Web services that are built by using ASP.NET. Finally, you will learn a few tips for writing secure code with the .NET Framework.
http://msevents.microsoft.com/cui/eventdetail.aspx?EventID=1032258031&Culture=en-US
MSDN Webcast: Windows XP Service Pack 2 Change Walkthrough - Level: 300
Tuesday, September 28, 2004 - 11:00 AM-12:30 PM Pacific Time
Tony Goodhew, Product Manager, Microsoft
This session is a detailed walkthrough of the changes to Windows XP with Service Pack 2. It will cover the 4 major areas of change - Networking, Web Browsing, Email/IM and Hardware. In each of these sections the change and its implication will be discussed.
http://msevents.microsoft.com/cui/eventdetail.aspx?EventID=1032258033&Culture=en-US
Thursday, July 8, 2004
Friday, July 2, 2004
Tuesday, June 29, 2004
Per fes:
We've posted an updated Threat Modeling Tool at MSDN that addresses a few bugs.
Thanks to BobB for his assistance. Basically, this addresses several unhandled exceptions that resulted in the tool crashing at some rather inconvenient times. (Okay, not that there are convenient times for a tool to crash.)
Some notes on the tool:
- We released it mostly because it is a useful way of organizing the data collected during threat modeling. Since it is not formally supported externally, I (and a few other contributors) fix bugs and add features also informally. So I'm hoping not to get a barrage of bug reports, but I will do my best to find time to address serious issues.
- Note that it works best (DFD-wise) if you have Visio 11 installed. Visio 11 has a drawing control that you can embed in other applications (which is exactly what the TM tool does). This is a much easier way of integrating DFDs into the threat model.
- If you want to print from the tool, the best way to do it is to use the Preview button. This applies the default XSLT (configurable in Tools->Config) to the threat model and displays it in an IE control. You can right-click in this control and select print to print directly.
- The threat model document, if you haven't taken a look, is XML. (Visio diagrams are stored in BASE64 blobs, though, and not in their XML format.) So, you can customize the report format if you like playing with XSLTs. The XSLTs that come with it are fairly basic, but show some ways of presenting the document.
- The sample document for the tool is in the tool's install directory, and is for “Fabrikam Phone 1.0.” This is basically the same as one of the samples in the threat modeling book (http://www.microsoft.com/MSPress/books/6892.asp). Note that the DFDs are in Visio, so you won't see them if you don't have it installed. The sample is intended to show threat modeling concepts without being specific to any software type or technology.
Not sure, but I think the above post is from Frank Swiderski, who happens to be the primary creator of the tool and the author of the Threat Modeling book. If it is him, his blog can be found @
http://blogs.msdn.com/fes/
Wednesday, June 23, 2004
Sunday, June 20, 2004
Maxim is back with another article on CAS.
".....in my previous article on Code Access Security I barely scratched the surface, so in this installment I want to continue my endeavor with CAS and point out 3 different approaches regarding its use; (1) Configure Policy, (2) Sandbox pattern and (3) Install into a Global Cache Assembly, to successfully execute an assembly in the partially trusted execution environment."
Needless to say, Check it out @
http://ipattern.com/simpleblog/PermLink.aspx?entryId=48
Aaron Margosis (MCS Federal) has started a series of articles on the trials and tribulations of running as non-admin. Good read. Check them out:
I'll also point to my own article on Developing as Non-Admin using VS.NET 2003 [1]
Monday, June 14, 2004
Essentials of Security (Part 1) - Security and Defense - Level 200
June 14, 2004, 9:00AM-10:00AM Pacific Time (GMT-7, US & Canada)
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032253084&Culture=en-US
How does a security plan affect the commerce of the business it is supposed to protect? How can you be sure your security plan implements the right kind of security for each type of vulnerability? This webcast presents a defense-in-depth model that can help provide protection for each layer of an infrastructure. The discussion also includes strategies for security response, common attack scenarios, and best practices. During this webcast we will walk through two demonstrations: Internet Connection Firewall and Protecting IIS 5.0.
.NET Framework Security (Part 2) - Code Access and Role-Based Security - Level 300
June 14, 2004, 1:00PM-1:45PM Pacific Time (GMT-7, US & Canada)
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032253116&Culture=en-US
This webcast is the second of a 3-part series about the importance of Application Security and its best practices and guidelines. This part specifically addresses Authentication in the context of secure application development. After an overview of the costs of inadequate security and the benefits of developing secure applications, we concentrate on Authentication as part of a larger security solution, examining specific Authentication techniques and best practices in IIS. The webcast includes two demonstrations: Buffer Overruns and IIS Authentication Techniques.
Essentials of Application Security (Part 3) - Authorization - Level 300
June 16, 2004, 9:00AM-9:45AM Pacific Time (GMT-7, US & Canada)
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032253123&Culture=en-US
This webcast is the third of a 3-part series about the importance of Application Security and its best practices and guidelines. This part specifically addresses Authorization in the context of secure application development. After an overview of the costs of inadequate security and the benefits of developing secure applications, we concentrate on Authorization as part of a larger security solution, examining Trusted Subsystem Model Authorization techniques and best practices. The webcast includes two demonstrations: Buffer Overruns and Trusted Subsystem Model Authorization Techniques.
Writing Secure Code - Threat Defense - Level 300
June 18, 2004, 9:00AM-10:30AM Pacific Time (GMT-7, US & Canada)
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032253126&Culture=en-US
In this session for experienced developers, you will build upon existing knowledge of secure coding best practices to learn about analyzing, mitigating and modeling threats. The session will discuss established threat modeling methodologies and tools and show how they can be applied with other best practices to minimize vulnerabilities and limit damage from attacks.
Thursday, May 27, 2004
Dana posted a list of resources that describe different aspects of Threat Modeling. Posting it for future reference.
Wednesday, May 26, 2004
The Securing Wireless LANs with PEAP and Passwords solution guide is designed to help small- and medium-sized organizations protect their wireless local area networks (LANs). This prescriptive guidance will assist you in planning, deploying, testing, and managing a wireless LAN security infrastructure using Microsoft Windows XP, Windows Server 2003, and Pocket PC 2003. The guide is a companion to the earlier solution guide Securing Wireless LANs – a Certificate Services Solution. However, this updated guide uses passwords to authenticate users and computers to the LAN instead of digital certificates.
The solution uses industry standards such as 802.1X to ensure broad interoperability. Windows XP Wireless Auto Configuration and the Microsoft Active Directory directory service help to minimize the complexity of installing and managing the solution—many of the more complex operations are automated in scripts that are provided with the guide. You can also install the solution entirely on existing servers in your environment to keep costs low.
Download @
http://www.microsoft.com/downloads/details.aspx?familyid=60c5d0a1-9820-480e-aa38-63485eca8b9b&displaylang=en
The May/June issue of the IEEE Security & Privacy magazine is out and that means another issue of the "Building Security In" column, which is edited by Gary McGraw. This month's column actually has been released free to the web, so go check it out...
Misuse and Abuse Cases: Getting Past the Positive
Paco Hope, Gary McGraw, and Annie I. Antón
http://www.computer.org/security/v2n3/bsi.htm
Software development is all about making software do something: when software vendors sell their products, they talk about what the particular products do to make customers' lives easier, such as improving business processes or something similarly positive. Following this trend, most systems for designing software also tend to describe positive features.
Ken on the SC-L list posted a pointer to an excellent article on the principle of least privilege. The article is by David Wheeler and can be found at:
http://www-106.ibm.com/developerworks/linux/library/l-sppriv.html?ca=dgr-lnxw04Privileges
The examples in the article are *nix/Linux focused but the concepts are relevant whatever OS you are running.
A section that struck a chord with me is:
"One of the most important ways to secure programs, in spite of these bugs, is to minimize privileges. A privilege is simply permission to do something that not everyone is allowed to do. On a UNIX-like system, having the privileges of the "root" user, of another user, or being a member of a group are some of the most common kinds of privileges. Some systems let you give privileges to read or write a specific file. But no matter what, to minimize privileges:
- Give a privilege to only the parts of the program needing it
- Grant only the specific privileges that part absolutely requires
- Limit the time those privileges are active or can be activated to the absolute minimum
These are really goals, not hard absolutes. Your infrastructure (such as your operating system or virtual machine) may not make this easy to do precisely, or the effort to do it precisely may be so complicated that you'll introduce more bugs trying to do it precisely. But the closer you get to these goals, the less likely it will be that bugs will cause a security problem. Even if a bug causes a security problem, the problems it causes are likely to be less severe. And if you can ensure that only a tiny part of the program has special privileges, you can spend a lot of extra time making sure that one part resists attacks."
Another interesting bit that the article made a reference to is the history of the SELinux implementation by the NSA.
"The NSA found that most operating systems' security mechanisms, including Windows and most UNIX and Linux systems, only implement "discretionary access control" (DAC) mechanisms. DAC mechanisms determine what a program can do based only on the identity of the user running the program and ownership of objects like files. The NSA considered this to be a serious problem, because by itself DAC is a poor defense against vulnerable or malicious programs. Instead, NSA has long wanted operating systems to also support "mandatory access control" (MAC) mechanisms.
MAC mechanisms make it possible for a system administrator to define a system-wide security policy, which could limit what programs can do based on other factors like the role of the user, the trustworthiness and expected use of the program, and the kind of data the program will use. A trivial example is that with MAC, users can't easily turn "Secret" into "Unclassified" data. However, MAC can actually do much more than that.
... So, NSA hit upon an idea that seems obvious in retrospect: take an open source operating system that's not a toy, and implement their security ideas to show that (1) it can work and (2) exactly how it can work (by revealing the source code for all). They picked the market-leading open source kernel (Linux) and implemented their ideas in it as "security-enhanced Linux" (SELinux)."
Hmm.. I wonder what the NSA's take would be on the Code Access Security capabilities of the .NET Framework, as CAS appears to implement many of the goals that they were looking for.
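As a back-of-the-envelope sketch of how those least-privilege goals map onto CAS (the ReportWriter class and the C:\Reports path are made up for illustration), PermitOnly lets you restrict a region of code to just the one permission it actually needs:

using System.IO;
using System.Security;
using System.Security.Permissions;

class ReportWriter
{
    // Hypothetical example: this method only ever needs to write under C:\Reports.
    static void WriteReport(string text)
    {
        // Restrict this stretch of code to write access under one directory,
        // regardless of what the rest of the application has been granted.
        FileIOPermission writeOnly =
            new FileIOPermission(FileIOPermissionAccess.Write, @"C:\Reports\");
        writeOnly.PermitOnly();
        try
        {
            using (StreamWriter writer = new StreamWriter(@"C:\Reports\daily.txt"))
            {
                writer.WriteLine(text);
            }
        }
        finally
        {
            // Lift the restriction once the privileged work is done.
            CodeAccessPermission.RevertPermitOnly();
        }
    }
}

As with the goals in the article, this is approximate rather than absolute: any demand for an unrelated permission type made inside the restricted region will also fail, so the region has to be kept small.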
BTW, David Wheeler's article is part of a series of articles called the "Secure Programmer" on the Linux Technical library section of the IBM developerWorks site. The home page of the series can be found here.
Just wish the site would implement either an RSS feed or a newsletter subscription....
Tuesday, May 25, 2004
Threat modeling allows you to systematically identify and rate the threats that are most likely to affect your system. By identifying and rating threats based on a solid understanding of the architecture and implementation of your application, you can address threats with appropriate countermeasures in a logical order, starting with the threats that present the greatest risk.
The Threat Modeling tool was built by Frank Swiderski, a Microsoft Security Software Engineer, who is also the author of an upcoming book on Threat Modeling.
The Threat Modeling Tool allows users to create threat model documents for applications. It organizes relevant data points, such as entry points, assets, trust levels, data flow diagrams, threats, threat trees, and vulnerabilities into an easy-to-use tree-based view. The tool saves the document as XML, and will export to HTML and MHT using the included XSLTs, or a custom transform supplied by the user.
http://www.microsoft.com/downloads/details.aspx?FamilyID=62830f95-0e61-4f87-88a6-e7c663444ac1&displaylang=en
[Now Playing: Chalte Chalte (1) - Mohabbatein]
Friday, May 21, 2004
From Michael Howard:
The Microsoft Solutions for Security (MSS) team has released The Antivirus Defense-in-Depth Guide on the Web @
http://go.microsoft.com/fwlink/?LinkId=28734
- A high-level overview of different malware types such as viruses and worms, their characteristics and replication techniques, and the payloads or actions malware use to attack computers.
- How to use defense-in-depth planning for both the clients and servers in your organization, including patch management, firewall protection, general security measures, and related tools to reduce the risk of infection.
- A comprehensive step-by-step methodology to quickly and effectively respond to malware outbreaks or infections and recover from them. The guidance in this area is based largely on Microsoft internal operations and the experience Microsoft has gained from assisting customers.
Casey Chesnut implemented the functionality that is present in the desktop .NET System.Security.Cryptography namespace on the .NET Compact Framework as part of his http://www.brains-N-brawn.com/spCrypt article. What he did was wrap the unmanaged CryptoAPI in managed code and then make sure that there was cross-platform compatibility between his implementation and the full framework on the desktop.
Very Cool!
Now his code has been incorporated into the NETCF Smart Device Framework v1.1, which is a third party framework that " ..enriches and extends the .NET Compact Framework by providing a rich set of classes and controls not available in the .NET Compact Framework."
His code can be found under the OpenNETCF.Security.Cryptography Namespace. Check out the functionality provided @ http://www.opennetcf.org/library/
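For context, here is a rough sketch of the kind of round trip the desktop System.Security.Cryptography classes already give you (the plaintext string and the choice of TripleDES are arbitrary for the example); the point of the cross-platform work is that a Pocket PC implementation has to be able to produce and consume the same ciphertext:

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

class CryptoRoundTrip
{
    static void Main()
    {
        byte[] plaintext = Encoding.UTF8.GetBytes("sensitive data");

        // Key and IV are auto-generated here; in practice they would need to be
        // shared (securely) with the device that has to decrypt the data.
        TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider();

        byte[] ciphertext;
        using (MemoryStream buffer = new MemoryStream())
        {
            using (CryptoStream encryptor =
                new CryptoStream(buffer, tdes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                encryptor.Write(plaintext, 0, plaintext.Length);
                encryptor.FlushFinalBlock();
            }
            ciphertext = buffer.ToArray();
        }

        Console.WriteLine(Convert.ToBase64String(ciphertext));
    }
}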
All relational databases are susceptible to SQL injection attacks. The following SQL Server Magazine article teaches you four important steps to protecting your Web applications from SQL injection attacks.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsqlmag04/html/InjectionProtection.asp
The above came across on the MSDN feed. I thought I would add a bit to it.
SQL Injection attacks occur when applications use input to construct dynamic SQL statements to access the database. One of the often-quoted defenses against this type of attack is to use stored procedures. That is a good start. Just remember that SQL injection attacks can also occur if your code uses sprocs that accept strings which contain unfiltered user input. This attack gets exponentially worse if the application is using an over-privileged account to connect to the database.
You prevent SQL Injection using the following tactics:
- Constrain the input by validating it for type, length, format and range. Remember, ALL INPUT IS EVIL, until proven otherwise!
- Use type safe SQL parameters. The Parameters collection in ADO.NET provides type checking and length validation. If you use the Parameters collection, input is treated as a literal value and SQL Server does not treat it as executable code. Another point is that the Parameters collection can be used to enforce type and length checks so that values outside of the range trigger exceptions. You can use the Parameters collection with both sprocs as well as dynamic SQL (a short sketch follows this list).
- Use filter routines that sanitize the input by adding escape characters to characters that have special meaning to SQL. An example would be adding an escape character to the single apostrophe character. Keep in mind that these types of filter routines can be bypassed by an attacker that uses ASCII hex characters, so they should be used as just another part of your defense in depth strategy.
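To make the second tactic above concrete, here is a minimal sketch using the SqlClient Parameters collection; the Northwind Orders table, the column length, and the connection string are assumptions for the example, and the same approach works with a sproc by setting CommandType.StoredProcedure:

using System;
using System.Data;
using System.Data.SqlClient;

class OrderLookup
{
    static DataSet GetOrdersForCustomer(string customerId)
    {
        // Constrain the input first: customer IDs are assumed to be at most 10 characters.
        if (customerId == null || customerId.Length > 10)
        {
            throw new ArgumentException("Invalid customer ID.");
        }

        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Northwind;Integrated Security=SSPI"))
        {
            // The parameter is always treated as a literal value, never as executable
            // SQL, even if it contains characters like ' or --.
            SqlCommand cmd = new SqlCommand(
                "SELECT OrderID, OrderDate FROM Orders WHERE CustomerID = @CustomerID", conn);
            cmd.Parameters.Add("@CustomerID", SqlDbType.NVarChar, 10).Value = customerId;

            SqlDataAdapter adapter = new SqlDataAdapter(cmd);
            DataSet results = new DataSet();
            adapter.Fill(results);   // Fill opens and closes the connection for us
            return results;
        }
    }
}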
Wednesday, May 19, 2004
What is the Security Guidance Kit?
The Security Guidance Kit is a collection of how-to information, software tools, and detailed prescriptive guidance within a small "viewer" application. The materials within the Kit are all designed to help you implement security measures in your environment. The topics covered include patch management, anti-virus measures, securing remote access, and blocking unsafe email attachments.
What is in the Security Guidance Kit?
The Security Guidance Kit contains documentation files along with a viewer application to allow you to navigate the documentation. It also contains free software tools from Microsoft, which you can optionally install or copy to other computers in your network.
Who is the Security Guidance Kit for?
The Security Guidance Kit is for the information technology implementer in any small, medium, or large organization. The Kit is not intended for use by the home PC user or by the application developer. Home users should continue to consult www.microsoft.com/protect for security guidance and information. Developers should consult the Security Guidance Center (www.microsoft.com/security/guidance) or MSDN (msdn.microsoft.com).
Download @
http://www.microsoft.com/downloads/details.aspx?familyid=c3260bd0-2ebb-4496-ad07-7e9d55d0ef1f
Tuesday, May 18, 2004
Per Brian Redmond:
"The Microsoft Solutions for Security team is proud to announce the release and availability of the Microsoft Identity and Access Management Series on the Web. This is an excellent set of papers, scrips, and solution details related to Microsoft Identity Management technologies. Each paper is based on field experience that deals with real-world problems, and the solutions offered are technically validated. Microsoft engineering teams, architects, consultants, support engineers, partners, and customers contributed to, reviewed, and approved each paper."
Part I – The Foundation for Identity and Access Management
Part II – Identity Life-Cycle Management
Part III – Access Management and Single Sign On
Check it out @
http://www.microsoft.com/technet/security/topics/identity/idmanage/default.mspx
Saturday, May 15, 2004
MSDN Webcast: Writing Secure Code - Threat Defense - Level 200
http://go.microsoft.com/fwlink/?LinkId=27559
May 20, 2004, 1:00 PM - 2:30 PM Pacific Time
Joel Semeniuk, VP of Software Development, ImagiNET Resources Corp.
In this session for experienced developers, you will build upon existing knowledge of secure coding best practices to learn about analyzing, mitigating and modeling threats. The session will discuss established threat modeling methodologies and tools and show how they can be applied with other best practices to minimize vulnerabilities and limit damage from attacks.
[Now Playing: Tumse Milke Dilka Jo Haal - Main Hoon Na]
Tuesday, May 11, 2004
Date: Thursday, May 13, 2004
Time: 9:00AM-10:30AM Pacific Time (GMT-8, US & Canada)
Description: In this webcast for experienced developers, you will learn established best practices for applying security principles throughout the development process. We will discuss common security threats faced by application developers, such as buffer overruns, cross-site scripting and denial of service attacks, and you will learn effective strategies to defend against those threats.
Register for the level 300 Webcast @
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032250060&Culture=en-US
[Now Playing: Kuch To Hua Hai - Kal Ho Naa Ho]
Friday, May 7, 2004
Thursday, May 6, 2004
SecurityFocus has an article that discusses common attacks and vulnerabilities in e-commerce shopping cart systems, with reference to SecurityFocus vulnerability reports where relevant.
Among the ones mentioned are:
- SQL Injection
- Price Manipulation
- Buffer Overflows
- Cross-Site Scripting
- Remote Command Execution
- Weak authentication and authorization
According to the article "Countermeasures should also include strict input validation routines, a 3-tier modular architecture, use of open-source cryptographic standards, and other secure coding practices."
Nice to see that these were some of the specific things that were addressed as part of DevDays during my Threats and Countermeasures presentation.
[Now Playing: Meri Makhna Meri Soniye - Baghban]
This guide is designed to provide you with essential information about how to harden your Microsoft® Exchange Server 2003 environment. In addition to practical, hands-on configuration recommendations, this guide includes strategies for combating spam, viruses, and other external threats to your Exchange 2003 messaging system. While most server administrators can benefit from reading this guide, it is designed to produce maximum benefits for administrators responsible for Exchange messaging, both at the mailbox and architect levels.
[Now Playing: Mann Ki Lagan - Paap]
Wednesday, May 5, 2004
Dana points to 4 Security slide decks from the MSDN Security Seminars.. Check them out!
[Now Playing: Kabhi Khushi Kabhie Gham - Kabhi Khushi Kabhie Gham]
Tuesday, May 4, 2004
I presented today to CMAP (http://www.cmap-online.org) on writing Secure ASP.NET based applications. I wanted to take a moment to thank everyone who came out and asked questions. Hopefully I provided some takeaways that you can utilize every day.
As I mentioned, there are four reference books that I believe should be on EVERY .NET developer's bookshelf:
Oh, did I mention that EVERY SINGLE ONE of the books above is put out by the Microsoft PAG and focuses on Current, Shipping Technology?
UPDATE: One of the topics that came up in conversation was how best to find out about local .NET events. For that I would highly suggest keeping tabs on Geoff Snowman's weblog. Geoff is the Microsoft Developer Community Champion for the Mid-Atlantic, all around great guy, and the “Bringer of Gifts“ to local user groups 
[Now Playing: Kal Ho Naa Ho - Kal Ho Naa Ho]
Monday, May 3, 2004
One of the basic tenets of Secure Coding is that "All input is Evil" until it has been validated to be otherwise.
Both in my DevDays presentation as well as in the "Improving Web Application Security" book, one of the Defense in Depth countermeasures when it comes to input validation is to set the correct character encoding in your web application. The recommendation in both is to use the "ISO-8859-1" encoding.
You do this because:
- "To successfully restrict what data is valid for your Web pages, it is important to limit the ways in which the input data can be represented. This prevents malicious users from using canonicalization and multi-byte escape sequences to trick your input validation routines."
- Using "safe" character encodings mitigates the possibility of using Unicode and multibyte encodings to disguise harmful characters. For example, an attacker might compromise URL authorizations incorporating the user name "Bob" by employing a legitimate account named "Bxb," where "x" is an oddball encoding of the letter "o"
"ISO-8859-1" is called the Latin-1 encoding and should work fine for any western European language including English. So safety in this case is enforced by limiting the character set to what is possible in a western European language. But try to display a language like Russian or Hebrew or Hindi and you'll get a bunch of un-displayable characters on the screen.
Of course you can use the UTF-8 character set, but then you will expand the ways in which input data can be represented, which in turn leads to the possibility of canonicalization attacks.
So the takeaway from this is not necessarily to use "ISO-8859-1" at all times for all languages, as much as it is to limit the character encodings that are used in your web app such that you can realistically validate the info that comes in. So for example, if you are running a Hebrew language website, an option may be to use an encoding that limits the input to only what is possible in that language.
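As a rough sketch of what that looks like in an ASP.NET 1.1 page (the FeedbackPage class, the NameBox control, and the whitelist pattern are all made up for illustration; in practice you would normally pin the encodings site-wide via the globalization element in web.config rather than in code):

using System;
using System.Text;
using System.Text.RegularExpressions;
using System.Web;
using System.Web.UI.WebControls;

public class FeedbackPage : System.Web.UI.Page
{
    protected TextBox NameBox;   // assumed to be declared on the .aspx page

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // Limit how request/response data can be represented. (Normally this is
        // configured once in web.config's globalization element so that it applies
        // before any form data is parsed.)
        Request.ContentEncoding = Encoding.GetEncoding("ISO-8859-1");
        Response.ContentEncoding = Encoding.GetEncoding("ISO-8859-1");

        if (IsPostBack)
        {
            // Whitelist validation: letters, spaces, hyphens, and apostrophes only.
            if (!Regex.IsMatch(NameBox.Text, @"^[a-zA-Z '\-]{1,64}$"))
            {
                throw new HttpException(400, "Invalid input.");
            }
        }
    }
}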
BTW, to find out more about Unicode and Character Sets, I would point you to the following article by Joel Spolsky, which is probably the most lucid explanation of the topic that I've come across.
“The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)“
http://www.joelonsoftware.com/articles/Unicode.html
[Now Playing: Tujhe Yaad Na Meri Aayee - Kuch Kuch Hota Hai]
Tuesday, April 27, 2004
On a list serve that I am on, a question was posed by a developer who is working with a hosting company that runs their ASP.NET webservers under a partial trust scenario. The developer uses File I/O and Reflection within the control code for some functions. The hosting company's recommendation to the developer was to ask that all of the controls be marked with the AllowPartiallyTrustedCallersAttribute (APTCA) attribute. The questions that were posed had to do with Code Access Security (CAS) under .NET 1.0 and 1.1 as well exactly what AllowPartiallyTrustedCallersAttribute (APTCA) does.
It was an interesting question and may be something that pops up more and more in the future, so I thought I would share the answer that I gave. Comments and corrections are very welcome.
ASP.NET Web Applications under .NET 1.0 run with FullTrust privileges (i.e. they are in no way constrained by CAS). The concept of the “trust“ element simply did NOT exist in 1.0. Only .NET 1.1 will allow you to run ASP.NET Web Applications under a partial trust scenario.
The basic concept behind AllowPartiallyTrustedCallersAttribute (APTCA) is that:
- An assembly that has a strong name cannot, by default, be called by a partial trust assembly (i.e. an assembly that has not been granted Full Trust privileges). So by default, an ASP.NET Web Application that is running in a partial trust scenario cannot call a strongly named assembly.
- The only way a partial trust assembly/ASP.NET Web App running under partial trust can call an assembly with a strong name is if the strong named assembly is marked with the APTCA.
BTW, Web Applications built on .NET 1.0 always run with full trust because the types in System.Web 1.0 demand full trust callers. In .NET 1.1, the System.Web, System.Web.Services, System.XML and some others are marked with APTCA, so they can actually be called from a partially trusted ASP.NET Application.
So one of the basic approaches to running an application under partial trust is:
- The ASP.NET Policy files grant full trust to any assembly that is located in the GAC.
- Put your code that accesses the privileged resource in a wrapper assembly that is then strongly named so that it can be installed in the GAC.
- The wrapper assembly, because it is in the GAC and has full trust, can now call the privileged resource.
- Mark the wrapper assembly with APTCA
- Because the wrapper has been marked with APTCA, the ASP.NET application which is running under partial trust can actually call it.
- In short, the Web App cannot call the privileged resource directly, but it can call the wrapper, which in turn can call the privileged resource.
Of course the thing to note is the gaping security hole in the above sequence. What is stopping every Tom, Dick and Jane from calling the wrapper assembly? This is where Demands and Assertions come in.
So what exactly is a Demand?
If you have an assembly that calls a class from the .NET Framework that in turn accesses a privileged resource, that .NET class will "demand" the following: "Do you have the permission to access me?" This is appropriately enough called "issuing a permission demand".
At this point, what I think of as the Enforcer comes into the picture. The Enforcer is the .NET runtime. Now the Enforcer does not just check the assembly that actually called the .NET class; it checks everything up the call stack! That is, it demands the permissions of not just your assembly, but of the assembly/code that called your assembly as well. This is known as "Walking the Stack", whereby the runtime examines the permissions of each caller in the stack and if ANY of those callers do not have the required permission, a SecurityException is thrown.
BTW, there is a variation on this called a Link Demand. In this case the Enforcer does not do a full stack walk, but simply checks the permissions of the immediate caller. There are security implications to this as you need to be particularly aware of possible luring attacks.
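In declarative form, the difference looks something like this; the AuditLog class and the unrestricted file permission are hypothetical, purely to show the two SecurityAction values side by side:

using System.Security.Permissions;

public class AuditLog
{
    // A full demand: every caller in the call stack must have unrestricted
    // FileIOPermission, or a SecurityException is thrown.
    [FileIOPermission(SecurityAction.Demand, Unrestricted = true)]
    public static void WriteEntry(string message)
    {
        // ... write to the log file ...
    }

    // A link demand: only the immediate caller is checked (at JIT time), which
    // is faster but leaves you open to luring attacks if that caller is careless
    // about who it exposes this method to.
    [FileIOPermission(SecurityAction.LinkDemand, Unrestricted = true)]
    public static void WriteEntryFast(string message)
    {
        // ... write to the log file ...
    }
}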
Assert, Deny and PermitOnly methods of the Code Access permissions modify the behavior of the above mentioned Stack Walk.
Assert is of particular importance here. When you call Assert on a code access permission in your assembly, you stop the stack walk from moving any further up the call chain. In essence what you are doing is vouching for the trustworthiness of ANY of your code's callers. This needs to be used with a great deal of caution.
So what you should be doing in the above scenario is to demand an alternate permission, so that the runtime can authorize the calling code, prior to calling Assert.
And that in essence is the Sandboxing pattern for developing partial trust web apps.
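Pulling those pieces together, a minimal sketch of such a wrapper might look like the following. The class name, the file-reading scenario, and the choice of AspNetHostingPermission as the alternate demand are assumptions for illustration; the right permission to demand depends on what the host's trust level actually grants to the calling web app:

using System.IO;
using System.Security;
using System.Security.Permissions;
using System.Web;

// Strong-named, installed in the GAC (so it gets full trust under the default
// ASP.NET policy), and marked with APTCA so partial-trust callers are allowed in.
[assembly: AllowPartiallyTrustedCallers]

namespace SandboxDemo
{
    public sealed class FileIoWrapper
    {
        // Reads a file on behalf of a partial-trust web application.
        // (path is expected to be a full path)
        public static string ReadAllText(string path)
        {
            // 1. Authorize the caller with an alternate demand it can satisfy
            //    (ASP.NET Medium trust, for example, grants AspNetHostingPermission).
            new AspNetHostingPermission(AspNetHostingPermissionLevel.Medium).Demand();

            // 2. Assert FileIOPermission so the stack walk stops here instead of
            //    failing when it reaches the partial-trust web application.
            new FileIOPermission(FileIOPermissionAccess.Read, path).Assert();
            try
            {
                using (StreamReader reader = new StreamReader(path))
                {
                    return reader.ReadToEnd();
                }
            }
            finally
            {
                CodeAccessPermission.RevertAssert();
            }
        }
    }
}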
Regarding the request from the hosting company: strongly naming the assembly, marking it with APTCA, and putting it in the \bin really does not do anything. Because the assembly would NOT be fully trusted, it would not be granted FileIOPermission or ReflectionPermission, the first of which guards privileged code and the second of which guards a privileged operation.
Now if the control assembly was strongly named, marked with APTCA, and installed in the GAC, it would under the ASP.NET policy be running with Full Trust and would have such permissions. But I am unsure if the hosting company would allow the installation of GAC'd assemblies.
Another option of course is customizing the policy to grant the required permission to the particular web app. But since this was a hosting scenario, I disregarded it as I would not be surprised if the hosting company had pretty strict restrictions as far as changes to default policies were concerned.
[Now Playing: Ek Kunwara Phir Gaya Mara - Masti]
Monday, April 26, 2004
I'll be doing a reprise of my DevDays 2004 presentation on "Defenses and Countermeasures" for the Columbia, MD ASP.NET Professionals User Group on Tuesday, May 4, 2004.
Here is the Official Blurb:
Date: 5/4/2004 6:30pm-9:00pm
Topic: Defenses and Countermeasures - Secure Your ASP.NET Applications from Hackers
Location: 8850 Stanford Blvd, Suite 4000, Columbia, MD 20723
Description: Secure Your ASP.NET Applications from Hackers
This session presents countermeasures to defend against threats. Topics include input validation; best practices when working with Microsoft SQL Server™, including the use of parameterized commands, stored procedures, accounts with limited privileges, Microsoft Windows authentication versus SQL Server logins, and secure storage of connection strings; HTML-encoding of user input; vulnerabilities specific to ASP.NET forms authentication and forms authentication cookies; use of encrypted view state rather than hidden fields to maintain state between requests; storage of password hashes rather than passwords for added security; and more.
Please stop by and say hello if you are in the area. I won't be under the strict time constraints that I was under for the DevDays presentation, so my hope is that it will be a more interactive session. BTW, I've presented to these guys before, so I KNOW interaction won't be an issue 
[Now Playing: Chale Chalo - Lagaan]
Sunday, April 25, 2004
Saturday, April 24, 2004
Keith "Mr. Security" Brown kicks off a series of security articles on Longhorn. In this piece,[1] Keith digs into the current plans for making Longhorn a much safer place for applications, only really giving administrator privileges to applications with a special need for them, regardless of the privileges of the user running the applications. This is a core piece of taking back our computers from the world's hackers and Keith lays out the basics nicely.
[Marquee de Sells: Chris's insight outlet]
This is my first and probably my last link to Longhorn until it is shipped as a beta, as I am interested in solving today's security and architecture issues.. But it is a glimpse at where Microsoft is going with Security in the future OS and it is written by Keith Brown.. So..
[1] http://msdn.microsoft.com/longhorn/default.aspx?pull=/library/en-us/dnlong/html/leastprivlh.asp
[Now Playing: O Rey Chori - Lagaan]
Wednesday, April 21, 2004
Sunday, April 18, 2004
From the latest TechNet Flash:
This book [1] discusses how, when using S/MIME, encryption protects the contents of e-mail messages and digital signatures verify the identity of a purported sender of an e-mail message. The book also provides guidance on how to implement S/MIME with Microsoft Exchange Server 2003 and it directs you to other resources where those are necessary.
[1] http://www.microsoft.com/technet/prodtechnol/exchange/2003/library/exmessec.mspx
[Now Playing: Yeh Ladka Hai Allah - Kabhi Khushi Kabhie Gham]
Thursday, April 15, 2004
Wednesday, April 14, 2004
Gary McGraw, author of "Exploiting Software" posted the following recently to the SC-L mailing list:
The April 2004 issue of Information Security Magazine contains an article I wrote on the future of software. It is adapted (pretty heavily) from Exploiting Software and identifies seven trends that will help you understand how software is evolving and how this evolution will impact security. You can view the article at http://infosecuritymag.techtarget.com/ss/0,295796,sid6_iss366_art684,00.html
At 12:00PM EST on Thursday of this week (4/15), I will conduct a live webcast titled "Best Practices for Software Security." According to the hype, I will offer "exclusive training tips for software developers, architects and testers as well as providing security strategies managers can use to make the smartest technology choices." Righto. Anyway, you can pre-register for the webcast at http://www.searchsecurity.com/secure_code. If you miss the live event, you'll still be able to view the webcast on demand by visiting http://searchsecurity.techtarget.com/webcasts/0,295024,sid14,00.html
Tuesday, April 13, 2004
Saturday, April 10, 2004
Microsoft Security E-Learning: Our free Microsoft® Security E-Learning Clinics follow the same content outline as our Security Webcasts, but deliver that information via a learner-centered format that offers unique user benefits.
[BufferOverrun]
Brian points to the FREE Security E-Learning Clinics from Microsoft. This content is platform-centric rather than developer-centric.
Clinic 2801: Microsoft® Security Guidance Training I. This online clinic provides students with introductory knowledge and skills essential for the design and implementation of a secure computing environment. It also provides students with prescriptive guidance on security update management and best practices for implementing security on Microsoft Windows® server and client computers.
Clinic 2802: Microsoft® Security Guidance Training II. This online clinic builds on existing knowledge of server and client security and provides students with the knowledge and skills to apply best practices to implement perimeter and network defenses and enhance security for applications and Microsoft Windows Server System™ components. It also provides students with prescriptive guidance to enhance security for Microsoft Windows® server and client computers and practical strategies for implementing security best practices across an environment.
[Now Playing: O Rey Chori - Lagaan]
Michael Howard - When does threat modeling come into play?
http://channel9.msdn.com/ShowPost.aspx?PostID=946
Michael Howard, program manager on Microsoft's security team, discusses how the Internet Explorer team used threat modeling to reduce the attack surface of its software.
Michael Howard - What if we had an unattackable system?
http://channel9.msdn.com/ShowPost.aspx?PostID=168
What if Michael Howard's job became obsolete? After all, he's the top security official at Microsoft. What would the bad guys do if the system itself became unattackable?
Michael Howard - What isn't being taught well enough in college?
http://channel9.msdn.com/ShowPost.aspx?PostID=169
Michael Howard, Microsoft's top security official, notes that many college graduates need to get remedial security training.
[BufferOverrun]
Cool! Great information from the co-author of "Writing Secure Code"!
[Now Playing: Meri Makhna Meri Soniye - Baghban]
Saturday, April 3, 2004
For those who have been following the thread on how to set up SSH Tunneling, Beau has written up an excellent step by step article [1] on how to configure this on the Windows platform. Highly recommended!
[1] http://bmonday.com/articles/653.aspx
[Now Playing: Sharara - Mere Yaar Ki Shaadi Hai]
Friday, April 2, 2004
Microsoft Knowledge Base Article - 823659 [1]
This article describes incompatibilities that may occur on client computers that are running Microsoft Windows 95, Microsoft Windows 98, Microsoft Windows NT 4.0, Microsoft Windows 2000, Microsoft Windows XP Professional, or Microsoft Windows Server 2003 when you modify specific security settings and user rights assignments in Windows NT 4.0 domains, in Windows 2000 domains, and in Windows Server 2003 domains. By configuring these settings and assignments in local policies and in group policies, you can help tighten the security on domain controllers and on member computers. The downside of increased security is the introduction of incompatibilities with clients, with services, and with programs.
This article contains examples of clients, of programs, and of operations that are affected by specific security settings or user rights assignments. However, the examples are not authoritative for all Microsoft operating systems, for all third-party operating systems, or for all program versions that are affected. Not all security settings and user rights assignments are included in this article.
This KB article came across one of the list serves that I am on. Very useful information written in a very accessible manner. Check it out.
[1] http://support.microsoft.com/default.aspx?scid=kb;en-us;823659
Thursday, April 1, 2004
Gary fired off a message to SC-L pointing out that the National Cyber Security Partnership released a set of reports about the problems with software security today. Included was a report [1] that he co-authored with Mike and a few others on the process of producing secure software.
The principal recommendations in this report are in three categories:
- Principal Short-term Recommendations
  - Adopt software development processes that can measurably reduce software specification, design, and implementation defects.
  - Producers should adopt practices for producing secure software.
  - Determine the effectiveness of available practices in measurably reducing software security vulnerabilities, and adopt the ones that work.
  - The Department of Homeland Security should support USCERT, IT-ISAC, or other entities to work with software producers to determine the effectiveness of practices that reduce software security vulnerabilities.
- Principal Mid-term Recommendations
  - Establish a security verification and validation program to evaluate candidate software processes and practices for effectiveness in producing secure software.
  - Industry and the DHS should establish measurable annual security goals for the principal components of the US cyber infrastructure and track progress.
- Principal Long-Term Recommendations
  - Certify those processes demonstrated to be effective for producing secure software.
  - Broaden the research into and the teaching of secure software processes and practices.
Monday, March 29, 2004
I recently came across this utility that allows you to securely wipe hard disks.
"Darik's Boot and Nuke ("DBAN") [1] is a self-contained boot floppy that securely wipes the hard disks of most computers. DBAN will automatically and completely delete the contents of any hard disk that it can detect, which makes it an appropriate utility for bulk or emergency data destruction."
It is part of the National Nuclear Security Administration suite of security tools and is available in both Floppy and CDR/CDRW versions.
[1] http://dban.sourceforge.net/
Friday, March 26, 2004
I got some comments and questions regarding my set up, so I figured that I would address them here.
A friend of mine asked why I was using SSH for TS/Remote Desktop connections when Terminal Services uses encryption natively for its connections.
I did a bit of research on this and here is what I found:
- The protocol that is used by the Remote Desktop Client is the Microsoft Remote Desktop Protocol (RDP). Windows 2000 supports RDP 5.0 and XP supports RDP 5.1.
- RDP uses RSA's RC4 cipher. The encryption strength that is used is determined by the server, and can be up to 128-bits. It will also connect using a 40-bit or 56-bit key if that is what the server is using.
- W2K Terminal Services has 3 levels of encryption (Low, Medium and High) which is configured on the server side. Low encryption specifies that only data you send from the client to the server should be encrypted. If you select Medium encryption, Terminal Services encrypts the data sent in both directions. If your client is a Win2K computer, Terminal Services uses a 56-bit key for Low and Medium encryption. If you connect with any other client, Terminal Services uses a shorter 40-bit key. If you select High encryption, Terminal Services encrypts data sent in both directions—like Medium, except that High encryption uses a much stronger 128-bit key.
- From what I understand, XP defaults to a Medium security model (56-bit both ways).
So what does all that mean? Simply that Remote Desktop does provide encryption as noted above and if that is all you need, go for it. I am a bit concerned with the 56 bit key default for XP. I have not explored if it is possible to bump it up to 128 bits when you have XP on both ends.
But in my case, an important criterion that I was looking for was secure, direct access to my source control tree, which is on my internal network.
I was familiar with using SSH on my Linux and FreeBSD boxes primarily as a replacement for Telnet. The things I like about it are:
- It encrypts all traffic to effectively eliminate eavesdropping, connection hijacking etc.
- It provides strong authentication. I like having the option of using PubKey authentication. I am one of those people who actually LIKE two-factor authentication.
- You can redirect TCP/IP ports through the encrypted tunnel. For example, I mentioned that I access my source control provider (Sourcegear Vault) via its Web Services API, which is available on port 80 of my W2K box on my internal network. To access my source tree, I simply point the Vault client to localhost:8080. That traffic is automatically redirected via my encrypted SSH connection over the internet to port 80 of my Vault Server. (Port 8080 because my local dev machine is running its own web server on port 80.)
I could just as easily redirect IMAP or POP traffic here as well. Also do note that I mentioned that I am using my client machine over WiFi connections, so over-the-air snooping and eavesdropping is a possible concern, which is addressed by having all important traffic go out over an SSH tunnel.
- SSH is built on the premise of never trusting the network. That satisfies my paranoid side.

Another option that I was not aware of, but was pointed out to me by Dana, is the OpenVPN package. It looks very interesting and is something I will take a look at to see if it meets my needs.
UPDATE: My friend who likes Terminal Server sent me a couple of links on Terminal Services on XP and Win2k3.
I am addicted to book stores and can spend an inordinate amount of time in one.
Combine that with the fact that I recently got a Tablet PC with built in Wi-Fi AND that pretty much all of the Borders bookstores and Starbucks coffee shops in my area are now T-Mobile Hotspots and I am in the position of a truck rolling downhill and picking up speed. Combine all of the above with the fact that I recently got a free offer from T-Mobile for 2000 free hours and the truck now has NO brakes!!!
While I am a fan of connectivity at any time from anywhere, I am also the paranoid type. Especially when it comes to WiFi. WEP is just a door made of tissue paper, so I had some requirements that needed to be satisfied if I was going to be able to work from any of these locations.
The relevant pieces of my configuration were:
- Broadband cable provider who does not assign fixed IP's. The DHCP leases are pretty long, but I did not want to worry about them.
- Consumer grade router as the externally facing device on my network.
- Windows 2000 Server - Running IIS, .NET 1.1, and Sourcegear Vault
- Windows XP Pro - Dev Machine
- Windows XP Tablet - Which would be the client that would connect from outside.
I needed the following:
- Secure Access via Terminal Services to both the W2K and XP boxes
- Secure access to my source code which is stored in Sourcegear Vault on the W2K server
- I was NOT going to spend any extra money!
Took me a couple of days to put everything together but I do believe I am on the right track.
First thing was to use ZoneEdit.com's Dynamic IP capability to assign a domain name to my rotating external IP address. That way I did not have to worry about remembering an IP address and it changing on me.
Second, I chose SSH as the method of establishing a VPN connection from my client machine to my internal network. The only exposed port on my internal network is the SSH port. That port is forwarded to my SSH Server. I have chosen to use Public Key Authentication combined with a pass phrase as my authentication mechanism for SSH. I believe this is more secure than the password or the host based authentication mechanisms that SSH provides.
Once the SSH Connection is established, I tunnel Terminal Server as well as port 80 traffic via that encrypted connection. I am tunneling Port 80 traffic as my Sourcegear vault exposes a web services API. So sitting anywhere I have network access to the Internet, I can with a reasonable degree of confidence connect into my home network and get access both to my source control tree and my home machines.
The tools used were all free and widely available:
- OpenSSH for Win32
- Putty for Win32
- Lots of trial and error
One of these days, I'll do a write up on the gotchas and the configuration issues that I went through. But for right now, in a truly amazing change of pace, the weather today is just about gorgeous! So am going to go and enjoy it!
Wednesday, March 24, 2004
Courtesy of the SBS Diva (Susan Bradley) 
--CIS BENCHMARK (v1.1.3) FOR WINDOWS XP PROFESSIONAL--
The Benchmark contains four levels of technical control settings for Windows XP Professional, enabling users to choose the consensus security configurations most appropriate for their particular environments.
The four levels are:
LEGACY: Designed for XP systems that need to operate with older systems such as Windows NT, or in environments where older third party applications are required. The settings will not affect the function or performance of the OS, or the applications running on it.
ENTERPRISE STANDALONE: Designed for XP Professional systems operating in a managed environment where interoperability with legacy systems is not required. It assumes that all operating systems within the enterprise are Windows 2000 or later, therefore able to use all possible security features available within those systems. In such environments, these Enterprise settings are not likely to affect the function or performance of the OS. However, one should carefully consider the possible impact to software applications when applying these recommended XP technical controls.
ENTERPRISE LAPTOP: Nearly identical to the Enterprise Standalone settings, but with modifications appropriate for mobile users whose systems must operate both on and away from the corporate network. In environments where all systems are Windows 2000 or later, these Enterprise settings are not likely to affect the function or performance of the OS. However, one should carefully consider the possible impact to software applications when applying these recommended XP Professional technical controls.
HIGH: Designed for XP Professional systems where security and integrity are the highest priority, even at the expense of functionality, performance, and interoperability. Therefore, each setting should be considered carefully and only applied by an experienced administrator who has a thorough understanding of the potential impact of each setting or action in a particular environment.
The XP Professional Benchmark was developed via consensus among CIS members, with participation by Microsoft. The names assigned to the four security levels are consistent with the names assigned to security configuration guidance distributed by Microsoft.
The Center for Internet Security (CIS) is a non-profit enterprise whose mission is to help organizations reduce the risk of business and e-commerce disruptions resulting from inadequate technical security controls.
Download and more Info @ http://www.cisecurity.org/
Keith Brown has a new MSDN Magazine Security Brief that discusses the implications of fully trusted code. [1]
His conclusion is thought provoking to say the least - "The goal of this column was to demonstrate that many of the security features of the CLR can only be enforced in a partial-trust environment. While the notion of full trust might seem obvious to some, I've reviewed plenty of designs that make assumptions about CLR security that simply don't fly in a full trust scenario. If you compare the CLR's built-in security to Windows built-in security, running with full trust is akin to running as SYSTEM. Fully trusted code can get around all of the CLR's built-in security features. That's why it's called fully trusted—it must be trusted to do the right thing. SYSTEM can get around any security constraint in Windows, which is why code running as SYSTEM must be trusted."
[1] http://msdn.microsoft.com/msdnmag/issues/04/04/SecurityBriefs/
Organizations require a network operating system (NOS) that provides secure network access to network data by authorized users and that rejects access by unauthorized users. For a Microsoft® Windows® Server 2003 NOS, the Active Directory® directory service provides many key components for authenticating users and for generating authorization data that controls access to network resources. A breach in Active Directory security can result in the loss of access to network resources by legitimate clients or in the inappropriate disclosure of potentially sensitive information. Such information disclosure affects data that is stored on network resources or in Active Directory. To avoid these situations, organizations need more extensive information and support to ensure enhanced security for their NOS environments.
This guide addresses this need for organizations that have new, as well as existing, Active Directory deployments. This guide contains recommendations for protecting domain controllers against known threats, establishing administrative policies and practices to maintain network security, and protecting DNS servers from unauthorized updates. It also provides guidelines for maintaining Active Directory security boundaries and securing Active Directory administration. This guide also includes procedures for enacting these recommendations.
http://www.microsoft.com/downloads/details.aspx?familyid=4e734065-3f18-488a-be1e-f03390ec5f91&displaylang=en
Wednesday, March 17, 2004
The Shared Web Hosting Deployment guide provides information and guidance for service providers and resellers to deploy Windows Server™ 2003 and SQL Server™ 2000 in a shared Web hosting environment. The guide does not assume extensive prior administration experience with Microsoft Windows® and SQL Server. It provides practical, procedure-based guidance on configuration, deployment, and troubleshooting. In addition to written guidance, the Shared Web Hosting Deployment guide includes a set of sample provisioning scripts intended to be used as a starting point for your own scripts. [1]
In addition, make sure you check out Chapter 20 [Hosting Multiple Web Applications] of the fabulous "Improving Web Application Security" Guide from the PAG. That chapter "... shows you how to secure ASP.NET applications in hosting scenarios, where multiple applications are hosted by the same Web server. Hosting scenarios for Windows 2000 and Windows Server 2003 are covered. The chapter explains a number of techniques to provide application isolation. This includes the use of multiple identities, IIS6 application pools on Windows Server 2003 and using .NET code access security to constrain applications and enforce isolation." [2]
[1] http://www.microsoft.com/serviceproviders/microsoftsolutions/sharedhostingguide.asp
[2] http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnnetsec/html/THCMCh20.asp
[Now Playing: Mehndi Laga Ke Rakhna - Dilwale Dulhania Le Jaayenge]
This topic came up recently on one of the security lists that I participate on and I wanted to pass on links to some excellent resources that were posted by various people.
Wireless Network Security - Best Practices
(by Harry Waldron, who keeps the list updated @ http://forums.mcafeehelp.com/viewtopic.php?t=18810)
- Best Practices for Wireless Security
http://www.computerworld.com/securitytopics/security/story/0,10801,86951,00.html
- Microsoft Recommended - Best Practices for Wireless Security
http://microsoft.com/downloads/details.aspx?FamilyId=CDB639B3-010B-47E7-B234-A27CDA291DAD&displaylang=en
- PC Magazine - Ironclad Security for Wireless LANs
http://www.pcmag.com/article2/0,1759,1276349,00.asp
- PC Magazine - WPA Security step-by-step
http://www.pcmag.com/article2/0,1759,1277020,00.asp
- Limit mobile risks--a security checklist
http://zdnet.com.com/2100%2D1107%2D5108423.html
- Tips on locking down your WLAN
http://www.computerworld.com/securitytopics/security/story/0,10801,87705,00.html
- GOOGLE SEARCH: wireless+security+best+practices
http://www.google.com/search?&q=wireless+security+best+practices
- Top 10 basic wireless security practices
http://www.computerworld.com/securitytopics/security/story/0,10801,87324,00.html
Other links that were posted include:
[Now Playing: Lok Boliyan - Bhangra Beatz]
Monday, March 15, 2004
Bruce Schneier's Crypto-Gram newsletter is always a great read and it is now available via an RSS feed @ http://www.schneier.com/crypto-gram-rss.xml
The current issue's lead story is the Microsoft source code leak.
Sunday, March 14, 2004
Dan Sellers has a blog entry based upon a Canadian DevChat delivered on March 15, 2004, in which he covers the security fundamentals used by IIS and ASP.NET (Part 1 of 3).
The topics discussed are:
- ASP.Net Security Fundamentals
- IIS Security
- IIS Authentication Mechanisms
- IIS 5 and ASP.NET
- IIS 6 and ASP.NET
- ASP.NET Worker Process Identity
- Securing Process Credentials
- ASP.NET Authentication and Authorization
- Impersonation
- Security Principal Objects
- Using IPrincipal and IIdentity
Good one! Looking forward to the next two.
[Now Playing: Pretty Women - Kal Ho Naa Ho]
Saturday, March 13, 2004
As many of you know, the number one priority at Microsoft these days is security. One of the ways we are implementing this priority is by focusing all the developer training sessions we do in the next couple of months on security. At Dev Days last week, the ASP.NET track was entirely focused on security. We have another day's worth of security training material that folks from our seminar team will be delivering over the next couple of months. (None of this is a repeat of the Dev Days content, although it covers many of the same topics.)
We are using several vehicles to deliver the same content, so let me try to lay out the vehicles and the dates. There are four sessions:
1. Essentials of Application Security
2. Writing Secure Code – Best Practice
3. Writing Secure Code – Threat Defense
4. Implementing Application Security by using the .NET Framework
Details of all four sessions are here.
There are three vehicles for delivering the developer content: Security Summits, MSDN Security Briefings, and MSDN Seminars.
In my area, there are two security summits: Washington, DC on April 8th and Philly on May 4th. Security Summits have three tracks. Two of the tracks are focused on IT professionals. You can read the details here. The two security summits also have a developer track. This will present all four of the seminars listed above.
We also have six MSDN Security Briefings in my geography: Roanoke, March 16th, Charlottesville, March 18th, Pittsburgh, March 25th, Richmond, May 18th, Norfolk, May 20th, and Allentown, June 1st. Each of these briefings will cover sessions one and two: Essentials of Application Security and Writing Secure Code – Best Practice. Many of these sessions will be taught by local author and all-around .NET expert Andrew Duthie.
Finally, we have six MSDN Seminars here: Fairfax, March 23rd, Philadelphia, March 25th, State College, April 13th, Pittsburgh, April 15th, Norfolk, April 27th, and Richmond, April 29th. Each of these MSDN Seminars will cover sessions three and four: Writing Secure Code – Threat Defense and Implementing Application Security by using the .NET Framework. (The sessions last week in Allentown and Linthicum were also the same content.)
Assuming my feet are functional, I will teach four of these seminars and the sessions in Norfolk and Richmond will be taught by my colleague Paul Murphy. I’m off to Detroit on Thursday of this week to try out the material in Motown.
So if you want to see all four of these application development security sessions, you can either go to one of the two security summits, or you can go to both an MSDN Security Briefing and an MSDN Seminar.
My team members who have already delivered the MSDN Seminar content tell me it’s getting rave reviews. People have literally run out of the seminar to call the office after they learned about vulnerabilities in their sites that they weren’t aware of. Security is different from other kinds of developer training. For anything else that developers do, you can keep making progress until you don’t know how to do something, and then you check the documentation. When it comes to security, the stuff you don’t know can get your company into trouble and get you fired. Unless you take some training, you literally don’t know what you don’t know.
For info on the IT professional sessions, check here and here.
Thanks Geoff! Great info indeed, especially for those out in the Mid-Atlantic area.

Thursday, March 11, 2004
Microsoft has released a slide deck [1] and white paper [2] on their Application Security Assurance Program.
Microsoft founded the Application Security Assurance Program (ASAP) to inventory, assess, and—when necessary—ensure resolution of security vulnerability issues found in line-of-business applications. Topics include the program's criteria for assessing applications, the participants in the review process, the requirements for a secure application environment, lessons learned while evaluating applications at Microsoft, and best practices for enhancing the security of applications in development.
Wednesday, March 10, 2004
You are talking about secure coding as an educational issue. Education is great, but it may be exceptionally difficult to make sure that every coder is a "good" coder. In fact, even "good" coders may have to cut corners due to schedules and deadlines. On a small scale this means that it might be possible to write secure software, but as you scale it up it seems likely that inevitably serious security flaws will surface.
I agree that every coder is not going to be a great coder simply by virtue of training and education. What I am talking about, at a minimal level, is raising the bar of their knowledge regarding how software systems are vulnerable. I've interviewed enough developers to know that in a majority of cases, knowledge of XSS attacks, SQL Injection attacks, and other vulnerabilities is not part of their vocabulary. What I am aiming for, to start with, is bringing such an awareness to every developer, such that it is as much a part of their development "life" as knowing a programming language. We cannot hope to propose solutions or mitigations to problems until developers are aware that there is indeed a problem.
Given such an education and training, it is possible to write hack-resilient applications. For example, the Microsoft Reference Application for OpenHack, which was demonstrated at DevDays 2004, is an application that withstood 80,000+ hack attempts in the OpenHack competition without failing. So it IS possible to write secure code given the right training and knowledge.
I agree completely with your comments about cutting corners due to schedules and deadlines. But is that a technical problem or a business issue? Provided that the developer is capable of doing the right thing (because he has had the training and education), the fact that he is not doing so due to other non-technical constraints means to me that he and his management have considered the trade-offs involved in taking such a risk and are comfortable with the consequences. It would be like being told to wear a seat belt in a car in case of an accident. In fact, some states in the United States have specific laws in place to force this. But if you still choose to drive without the seat belt, get into an accident, and go through the windshield, the responsibility is yours!
I agree with your point regarding threat models, but again, this is another place where the problem may be the software. Threat models evolve. If you read Ross Anderson's excellent book Security Engineering you'll see that almost all security systems have broken over time largely due to changing environments and hence changed threat models.
But software itself (in the current model) doesn't evolve to meet the changing threat model. Not automatically. Not without intervention.
Now this is an interesting point and I have to agree. All too often I have seen such a fire-and-forget mentality from development teams and their management. All too often, applications are written and deployed, and no time, resources, or budget is allocated to regular application upgrades or testing. This is probably more relevant for custom apps developed and deployed by corporate IT rather than software product shops. The latter at least understand that the product is their bread and butter and have an incentive to upgrade and fix bugs.
As to the issue of changing threat models, I've been giving it some thought and have come to focus on the management-of-change aspect. You see, I've recently been exploring Test Driven Development (TDD). One of the primary tenets of this methodology is to write tests first and then write code against them such that the tests pass. As part of this process, I can run an automated batch of tests that exercises the functionality of the application. If even a single test fails, you do not proceed. At its most simplistic level, you are testing every externally exposed interface of an application to make sure that it behaves as expected, and you are doing it in an automated fashion.
Currently, the tests as proposed test the functionality of the application. What if we combined Threat Modeling with TDD? What if you created security-focused test suites that are driven off the Threat Models? The advantage I see in this case is that once the initial tests are set up, they could be run in a very automated fashion to check for vulnerabilities. In addition, as threats change, additional tests can be added to the test suite and run against the existing software. The advantage to the developer would be instant feedback on the vulnerabilities in the code.
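To make that idea concrete, here is a minimal sketch of what a threat-model-driven test might look like. It is written with Python's unittest for brevity (the same shape works with NUnit on .NET), and validate_search_input() is a hypothetical validator I made up for the example; each test case is traceable back to a specific threat.

import unittest

def validate_search_input(value: str) -> str:
    # Hypothetical stand-in validator: reject anything that is not letters,
    # digits, or spaces. A real validator would be driven by the application's
    # actual input rules and its threat model.
    if not all(ch.isalnum() or ch.isspace() for ch in value):
        raise ValueError("input failed validation")
    return value

class ThreatModelDrivenTests(unittest.TestCase):
    def test_script_injection_is_rejected(self):
        # Threat: cross-site scripting via the search field.
        with self.assertRaises(ValueError):
            validate_search_input("<script>alert(document.cookie)</script>")

    def test_sql_injection_is_rejected(self):
        # Threat: SQL injection via the search field.
        with self.assertRaises(ValueError):
            validate_search_input("' OR 1=1 --")

    def test_normal_input_is_accepted(self):
        # Legitimate input must still pass, or someone will disable the control.
        self.assertEqual(validate_search_input("asp net security"), "asp net security")

if __name__ == "__main__":
    unittest.main()

As new threats are identified, new cases get added to the suite and run with every build, which is exactly the instant feedback loop described above.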
As you noted, software does not evolve without intervention. And even in this case, you would have to update the test suite and update the software to make the test pass and to fix the vulnerability. BUT, at least in this case you would be aware that there is indeed a vulnerability that can be exploited. Which hopefully is the first step in the process of fixing the vulnerability.
I am going to take a look at this in a lot more detail as I bring myself up to speed on TDD and see if this line of inquiry has any merit. Until the paradigm shift in computer science that you mentioned happens, we have to work within the limitations of the current technology and hope that training and education (both for development teams as well as their management), combined with the appropriate tools have an impact on the quality of the code.
Tuesday, March 9, 2004
As I've remarked before, Crypto support in the .NET BCL is pretty extensive. As such, please do NOT roll your own Crypto! This is one of those things where, unless you know what you are doing, you can seriously shoot yourself in the foot. (A small illustration follows the links below.)
.NET Crypto FAQ from GotDotNet [1]
In addition, here are some pointers to some .NET Crypto libraries that I personally use. I use them because they are well tested, written by people who actually do this for a living and have a lot more knowledge of the topic than I do (... and I do not want to shoot myself in the foot!).
How To: Create an Encryption Library [2]
[MS PAG] This How To shows you how to create a managed class library to provide encryption functionality for applications. It allows an application to choose the encryption algorithm. Supported algorithms include DES, Triple DES, RC2, and Rijndael.
How To: Create a DPAPI Library [3]
[MS PAG] This How To shows you how to create a managed class library that exposes DPAPI functionality to applications that want to encrypt data, for example, database connection strings and account credentials.
Encrypting and decrypting data [4]
[Ivan Medvedev, former CLR Security guy, and now a member of the Secure Windows Initiative Red Team]
[1] http://www.gotdotnet.com/team/clr/cryptofaq.htm
[2] http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnnetsec/html/SecNetHT10.asp
[3] http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnnetsec/html/SecNetHT07.asp
[4] http://www.dotnetthis.com/Articles/Crypto.htm
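The same "use a vetted library" advice applies outside the .NET world as well. As a purely illustrative sketch (not a substitute for the PAG libraries above), here is what the equivalent looks like with the Python cryptography package's Fernet recipe, which picks the algorithm and handles authentication for you:

from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# Generate a key once and store it somewhere protected. Key management is the
# hard part, which is exactly why DPAPI-style OS key storage is attractive.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"Server=db;Database=app;User Id=svc;Password=do-not-hardcode")
print(f.decrypt(token))  # round-trips back to the original bytes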
[Now Playing: Meri Makhna Meri Soniye - Baghban]
Monday, March 8, 2004
Security
- Computer Crime and Security - Level 200
March 09, 9:00 A.M.
- Protecting Your System from SQL Injection Attacks - Level 200
March 09, 1:00 P.M.
- Protect Your IP with Code Access Security - Level 200
March 10, 9:00 A.M.
- Efficient Software Testing - Level 200
March 10, 1:00 P.M.
- Application Hacking Techniques and How to Stop Them - Level 200
March 12, 9:00 A.M.
- How to Perform a Security Review - Level 200
March 12, 11:00 A.M.
- .NET Code Access Security - Level 200
March 16, 1:00 P.M.
- Creating a Single Sign-On Enterprise Security Portal - Level 200
March 17, 9:00 A.M.
- Implementing Authentication and Authorization with AD/AM and AzMan - Level 200
March 18, 1:00 P.M.
- Essentials of Application Security - Level 300
March 19, 9:00 A.M.
- Implement Custom Authentication and Authorization in .NET
March 19, 11:00 A.M.
- .NET Distributed Security - Level 300
March 19, 1:00 P.M.
Architecture
Sunday, March 7, 2004
As part of my "Defenses and Countermeasures" presentation at the D.C. DevDays, one of the things that I talked about was how to properly secure connection strings.
The options ranged from encrypting the connection strings and storing the key in an ACL'd registry key to using DPAPI to encrypt the connection string. The recommendation was to use DPAPI. As noted during the discussion, the primary advantage of DPAPI is that it offloads the task of key management to the operating system.
But there is a catch if your deployment environment is a web farm.
You see, DPAPI is keyed to a machine, which means that a DPAPI-encrypted string created on one machine will NOT work on another machine. What this means from a deployment perspective is that using DPAPI to encrypt connection strings breaks XCOPY deployment. You simply cannot replicate the key that has been created on one node across all nodes of a web farm. Instead, what you will have to do is create the encrypted connection string on each node of the web farm at deployment time.
Security is all about trade-offs. In this case, the benefit of greater security, combined with the offloading of key management to the OS, comes at the cost of administrative overhead at deployment.
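To make the machine scoping concrete, here is a minimal sketch of the DPAPI calls using the pywin32 win32crypt wrapper (my assumption for illustration; the PAG How To referenced earlier wraps the same Win32 calls from managed code). Because the flag ties the blob to the local machine, this has to be run on each node of the farm at deployment time:

import win32crypt  # pywin32 wrapper around the Win32 Data Protection API (DPAPI)

CRYPTPROTECT_LOCAL_MACHINE = 0x04  # value from wincrypt.h; scopes the blob to this machine

conn_str = b"Server=db;Database=app;User Id=svc;Password=do-not-hardcode"

# Encrypt using the machine store. The resulting blob will NOT decrypt on any other node.
blob = win32crypt.CryptProtectData(conn_str, "connection string", None, None, None,
                                   CRYPTPROTECT_LOCAL_MACHINE)

# Later, on the same machine, recover the plaintext.
description, plaintext = win32crypt.CryptUnprotectData(blob, None, None, None, 0)
print(description, plaintext)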
The first Integrated Development Environments (IDEs) for software development appeared in the early 1990s. These IDEs combined a code editor, compiler and debugger into a unified framework. Later developments allowed third-party tools to be integrated with the IDE through the use of open APIs. In 2002 the Meta Group extended the terminology to Integrated Life-cycle Development Environments (ILDEs). An ILDE includes functionality such as integrated requirements management, change request management, knowledge sharing, task management, version control, testing and bi-directional asset linking. Borland has begun utilizing the term ILDE as well as Application Lifecycle Management (ALM), while IBM Rational incorporates similar concepts within its Rational Unified Process (RUP).
ILDE, ALM, or RUP can deliver benefits to the application development process when used appropriately. However, one concern I have is the lack of priority that both Borland and Rational have given to deliver secure coding tools within their development environments. My impression is that none of these major vendors have yet got security. Recently Microsoft has begun disclosing details on its Whidbey Visual Studio, due for release later this year. It will deliver some assistance for developers including "support constraining and validating your design against the Web Services Enhancements (WSE), IIS Security, SQL Security, and ASP.NET Security." Again Microsoft's committed stance towards improved security has moved it from the position of security laggard to security innovator. Tools from Microsoft Research such as Prefast, Prefix, FxCop, SDV, Aegis and ESP are examples of first phase security related tools that all tool vendors should be providing.
The challenge is for all tool vendors: Borland, IBM Rational, Sun, the Eclipse platform, and Microsoft to dramatically improve the security components within their life-cycle application development frameworks. Certainly tools are not the total solution. Ongoing training of coding staff is also critical. However, programmers are always under pressure to deliver functional code, and smart security tools are required to help ensure that it is secure functional code.
Resources
Development Tools
Borland ALM: http://www.borland.com/alm/
IBM Rational RUP: http://www-136.ibm.com/...
Microsoft Developer Tools Roadmap 2004-2005: http://msdn.microsoft.com/...
Microsoft Research tools (ppt file): http://research.microsoft.com/...
Microsoft Whidbey FAQ: http://msdn.microsoft.com/...
Background
Microsoft security - don't underestimate its secure future: http://davidcartwright.com/...
[SECURITY++ David Cartwright's weblog]
+1 - Completely agree with David's sentiments here.
One additional item I should have added in my earlier post is exactly what David is asking for: tool support for Secure Coding.
I run my Windows XP Pro machine as a non-Administrator.
I am not an Administrator, I am not a Power User, but a lowly User on my machine. I have been doing this for almost two years now. I am productive on my machine when I am carrying out my regular tasks and I have had minimal issues with developing/debugging .NET Applications that range from Winforms, to Web Apps to Mobile Apps.
I believe in the Principle of Least Privilege.
That belief has protected me from Trojans such as Back Orifice and viruses such as ILoveYou, which attempt to write to the Windows System directory and to certain registry keys. As a lowly user, I do not have the authority to write to these protected areas, so I have weathered them. Needless to say, I keep my anti-virus protection up to date as well, as you can never be too careful.
In developing as a non-administrator, I have encountered errors that I would not have encountered if I was running with admin privileges. I believe that this in turn has made the systems and software that I build much more secure.
Please, join me and become a lowly user, and experience the freedom it gives you.
BTW, this little bit of info sharing was prompted by a recent memory. My friend Andrew, who has been doing this for a while as well, mentioned running as a non-Admin during his DevDays presentation. I also saw a couple of recent references to the section of Keith Brown's book that mention this as well.
If you are interested in the reasons for doing this and need information on how to accomplish this, please take a look at the following reference material.
Saturday, March 6, 2004
Randy has an article on Secure Software that is an interesting read. The main point of his message can be found in the following paragraph:
"Maybe we are writing code as securely as we can. Maybe the flaw is so fundamental to the way that computer science and modern programming have evolved that it is essentially unsolvable using the current framework. Will making software companies liable make their code more secure if software programming as a science is inherently “brittle?” Will enforcing good coding habits ensure that software is more secure?"
I do not believe that we are writing code as securely as we can. At its most basic level, the problem is one of education, or lack thereof. Developers simply are NOT taught the proper way to write secure code to defend against possible attacks. It is a very rare college curriculum that addresses the issues involved in defensive coding. Oh, they address software development processes such as Agile Development or RUP, and things like OO and debugging, but not Secure Coding. That mentality pervades the professional software development lifecycle as well, where getting the product out is more important than making the product secure.
As to the question of making software companies liable, my response would be that even if we do, it does not address the issue of custom applications and web sites that are developed and deployed by companies. And that issue can only be addressed by educating the developers who create the software in Secure Coding practices and educating their management on the value of having robust, secure software products.
As to good coding habits... what constitutes good coding habits? The book "Code Complete" by McConnell is considered a classic on the subject. It places no emphasis on security as part of the software development lifecycle.
There is also the gunslinger mentality of a lot of coders to deal with, who place a premium on fast code vs. secure code, i.e., emphasizing performance over security. You see, Secure Coding is not sexy. But talking about how you wring that last erg of speed out of an application? Now, that is sexy! The basic premise of trade-offs is simply not something that is discussed all that often.
I saw this demonstrated very vividly in the Blogging world just recently. When the Microsoft Patterns and Practices group (PAG) released "Improving Web Application Security: Threats and Countermeasures", which is an excellent book on building Secure Software on Microsoft's .NET platform, the response was at most, "Yawn, Ho Hum".
But when the PAG released the beta version of "Improving .NET Application Performance and Scalability", the amount of noise produced would have woken up a dead man!
I found that state of affairs... sad.
So what can be done?
- I will quote Michael Howard of "Writing Secure Code" on this one:
"We need more education regarding secure design, secure coding, and more thorough testing. A good, well-rounded, three-semester course on systems security would cover general security concepts and threat analysis in the first semester, understanding and applying threat mitigation techniques in the second, and practicing designing and building a real system in the third. The student would learn that systems should be built not only to serve the business or customer but also to serve the business or customer securely. The course should prove the student with balanced doses of security theory and security technology"
- Techniques such as Threat Modeling need to be integrated directly into the software development lifecycle. I will quote Michael again - "You cannot build a secure system until you understand your threats"
Security MUST NOT be an afterthought. It must pervade every phase of the development lifecycle. The testing process must include security testing in addition to unit and functional testing.
- Sample code, feature demos, and sample applications that are produced by software companies MUST be reviewed for security best practices before being made available to the public. It is an unfortunate but true fact that a lot of the code that is provided in such a fashion is often "re-used" using cut and paste.
In short, WALK the TALK! Talking about code security and then providing insecure samples simply propagates the "Do as I say, not as I do" mindset, which results in people eventually tuning you out.
We live in a world of connected systems. The age of "irrational exuberance" is over. Billions of dollars worth of business is conducted online on a daily basis. Software is at the heart of that process. At the same time, the day when we can expect a zero-day exploit draws closer and closer. The process of software development cannot stay as is if software is to survive.
(From Bugtraq): Here's the announcement of a new mailing list devoted to discussions about application security research.
"... So, the new list will be all about how to protect and break software, whether it's a vulnerability or a packer.. "
Wonder how it will be different from the Secure Coding Mailing List @ http://www.securecoding.org/list/. There are some really smart people on that list who are seriously into Secure Coding. One of the creators and moderators of the list is Ken van Wyk, who wrote the "Secure Coding: Principles and Practices" book.
I am not complaining, mind you. I figure that the more discussion there is on this topic, the better educated everyone will be.
Friday, March 5, 2004
The following information was recently posted to the SC-L Mailing list by Chris Wysopal of Vulnwatch in response to a question. Interesting and relevant information.
@stake published its first application security metrics report in April 2002. It is an analysis of 45 "e-business" applications that @stake assessed for its clients. Most are web applications.
The Security of Applications: Not All Are Created Equal
http://www.atstake.com/research/reports/acrobat/atstake_app_unequal.pdf
@stake found that 70% of the defects analyzed were design flaws that could have been found using threat modeling and secure design reviews before the implementation stage of development.
62% of the apps allowed access controls to be bypassed
27% had no prevention of brute force attacks against passwords
71% had poor input validation
@stake lists the top 10 categories of application defects found. The list predates the OWASP Top 10 by eleven months and is largely the same. The data includes the percentage of applications affected and is ranked, so it is not anecdotal.
There is a follow-up to the first application defect study, done 15 months later in July 2003. This was done to see if application security is improving.
The Security of Applications, Reloaded
http://www.atstake.com/research/reports/acrobat/atstake_app_reloaded.pdf
The results found that security is improving overall but that there is a widening gap between the security quality of the top quartile of applications and the bottom quartile.
There is another article that 3 @stake authors wrote for IEEE Security and Privacy Magazine which contains elements from both reports.
Information Security: Why the Future Belongs to the Quants
http://www.atstake.com/research/reports/acrobat/ieee_quant.pdf
Jeff Schoolcraft made a comment on Andrew's weblog entry that:
"I must say I am somewhat disappointed that there were no examples on how cookie or session hijacking happens. It was professed to be a terrible thing, which it [hijacking] probably is, however, I would have loved to see an example of this. SQL Injection any shmuck can type ' OR 1=1 -- but what skillset, level of effort does it take to hijack a session?"
To answer your questions:
- Cookie and Session hijacking were indeed demonstrated in the 2nd session, when Dwayne used a Cross-Site Scripting attack to post the cookie information and the Session ID info from a search site onto another site. And I, in my demos, showed you possible counters to this attack. What was NOT demonstrated was a cookie replay attack, which I believe was left out in the interest of time more than anything else.
Note that the “session hijacking” I refer to here is getting access to the ASP.NET Session ID and doing nasty, evil things with it. I am not referring to TCP session hijacking, which refers to an attack used by a cracker to take over a TCP session between two machines. The defense against that type of attack is more on the Admin side than the Dev side.
- As to SQL Injection, it does not take much to type in the stuff for SQL Injection, but Dwayne in his session demonstrated more than that. He showed how, using SQL Injection, you can actually gather enough information on the database schema to craft a SQL statement that allows you to taint a database. And in my session, I showed possible counters to this attack as well. (A small sketch of those counters follows below.)
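For readers who want to see those counters in code rather than on slides, here is a minimal, platform-neutral sketch in Python (the ASP.NET equivalents are Server.HtmlEncode and parameterized ADO.NET commands; the table and data below are made up for illustration):

import html
import sqlite3

# Counter to XSS: encode untrusted data before writing it into a page.
user_comment = "<script>document.location='http://evil/'+document.cookie</script>"
safe_markup = html.escape(user_comment)  # renders as harmless text instead of executing

# Counter to SQL injection: parameterize queries instead of concatenating strings.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "s3cret"))

user_name = "' OR 1=1 --"  # the classic injection attempt
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_name,)).fetchall()

print(safe_markup)
print(rows)  # [] -- the payload is treated as a literal value, not as SQL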
In any developer event, there is going to be a cross-section of skill levels and knowledge represented. As both Andrew and I noted, our informal conversations after the presentations seemed to indicate that the majority of the people who came out did seem to get a great deal of value out of the sessions. At the same time, there will definitely be people there who already "get" it.
I am actually gratified to hear that Jeff did not find anything new here. That is a GOOD thing, which means that more and more developers are integrating Security into their every day lives and Microsoft's message about building in Secure Coding practices into the development lifecycle is getting through!
The 4 sessions are the real meat of the day. As to the keynotes and the vendor demos, I really can't speak to them, except to say that depending on your level of interest in the topic presented they may or may not have been interesting for you. For example, someone I spoke with was very interested in the information that was presented about SQL Server reporting services.
Please do keep the feedback coming and if you did not get a chance to do it on the eval form, make sure you contact the DevDays folks @
Thursday, March 4, 2004
This was FUN!
I am finally home and just want to crash at this point as it has been a long day. But I wanted to write this up while the emotional impact of the day is still with me.
First of all, a big “Thank You“ to all of the folks who came out.
The Washington D.C. DevDays was held at the Ronald Reagan Convention Center and the prep work started last night. We got an email from our local DE Justin about the practice and logistics session between 5 and 7 p.m. last evening. Justin is a great guy who talks with a rather unique accent and has pretty much been the primary coordinator of the entire D.C. event. Just don't ask him if he is from Australia.
So, last evening, armed with MapPoint directions and a false sense of confidence, I made the decision to drive into D.C. Needless to say, I hit D.C. rush hour, missed a turn, and ended up cruising the highways and byways of D.C. for about an hour and a half before I actually found the place. I have to admit, though, D.C. does have some very scenic areas, especially around the Washington Monument. I think I found them all!
NOTE TO SELF: Do not EVER drive into D.C. again. D.C. has excellent public transportation. Take the Metro and save yourself the pain. A fellow user group member forwarded me the link to the D.C. RideGuide ( http://rideguide.wmata.com/index.html ) which is excellent.
After going through the security inspection at the building in order to get into the parking garage, I went up to the Atrium to meet all of the other guys who were doing the presentations ( http://msdn.microsoft.com/events/devdays/agenda/washingtondc/default.aspx ). I also finally met Dwayne Taylor, who turned out to be pretty funny and extremely knowledgeable. We then went to get miked by the AV guys. After speaking into the mike a couple of times, I was informed by them that I was just a very loud person and that I would be specifically fitted with a "low gain" mike so that I would not blow out the eardrums of the audience.
I verified that all of my demos worked and the presentations were set up and ready to go. At the end of the session, Andrew, who is a lot more familiar with D.C. than I am, was kind enough to lead me out of the maze that is D.C. Thank you!
Since I try very hard not to repeat my mistakes, I made sure that I took the commuter train into D.C. today. Needless to say, I made it into D.C. without any problems!
Then it was onto the presentations!
Andrew, who gave the "ASP.NET Web Application Security Fundamentals Overview", did a bang up job. He is a veteran of talks like this and I have seen him present before, both in a small group setting such as a user group and to a large audience like at ASP.NET Connections.
Next up was Dwayne, who did the "scary, evil cracker demo" presentation (Threats and Threat Modeling - Understanding Web Application Threats and Vulnerabilities). Dwayne really enjoyed his work!
Hyped up on an Apple and a bottle of water for lunch, I was up next for "Defenses and Countermeasures - Secure Your ASP.NET Applications from Hackers" ... 
After me came Vishwas who closed out the track by doing a walk through of the Open Hack Reference Application. Vishwas is a very knowledgeable guy, who also happens to be a Microsoft RD. Needless to say, he did a great job.
Just walking around afterwards and getting comments and questions from people, my general impression was that people were extremely interested in the topic and got a lot out of the sessions. We also had an excellent turn out and I think some people who were expecting a lot of marketing fluff were very pleased with the hardcore and actionable content that they got.
Tuesday, March 2, 2004
Just a quick reminder that the Washington D.C. DevDays event will be held on Thursday, March 4.
I will be presenting the "Defenses and Countermeasures" session on the Web Security Track. Please stop by and say "Hello".
Others who will be presenting on the Web Track include G. Andrew Duthie and Vishwas Lele, both of whom I can personally attest as being very knowledgeable about .NET and Security (.. and able to convey that knowledge to the audience). The other person on the web track is Dwayne Taylor, who I have not personally met, but have heard good things about. This should be a fun event!
The full speaker list for D.C can be found @
http://msdn.microsoft.com/events/devdays/agenda/washingtondc/default.aspx
[Now Playing: Sona Sona Soniye - Jaal]
Microsoft Executive Circle Webcast: Monthly Update from the Microsoft VP for Security [1]
March 16, 2004, 8:30 A.M.-9:30 A.M. Pacific Time, U.S. and Canada (GMT-8)
Join Mike Nash, the Microsoft senior executive in charge of security, as he provides the latest details on Microsoft security enhancements and offers tips and insights into strategies for customers.
Security Summit 2004 [2]
Learn how to better protect your infrastructure and applications from security threats, and get free tools. Register for the free Microsoft Security Summit coming to 20 U.S. cities, April-June, 2004.
Hands-on Labs: Security Training-Applying Microsoft Security Guidance [3]
Register for free Hands-on-Lab training. This one-day training enables students to apply information and guidance that can help in implementing and managing security in a Microsoft Windows based network.
TechNet Security Webcast: Essentials of Security - Level 200 [4]
Wednesday, March 03, 2004, Time: 9:00AM-11:00AM Pacific Time (GMT-8, US & Canada)
In this session you will gain knowledge and skills essential for the design and implementation of a secure computing environment. The session will cover important security concepts and discuss the need for establishing a process for security within an organization. You will learn how to identify system criticalities, understand and assess system vulnerabilities and apply best practices to improve the security of your infrastructure.
TechNet Security Webcast: Implementing Security Patch Management - Level 200 [5]
Friday, March 05, 2004, 9:00AM-11:00AM Pacific Time (GMT-8, US & Canada)
In this session you will learn how to apply security best practices and use available tools and technologies to implement a patch management process and strategy within your organization. The session will discuss the patch management lifecycle and demonstrate how tools such as Microsoft Baseline Security Analyzer and Software Update Services can be used to quickly and effectively respond to published security bulletins and establish patch compliance across your infrastructure.
[1] http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032245101&Culture=en-US
[2] http://www.microsoft.com/seminar/securitysummit/default.mspx
[3] http://www.msftsecuritytraining.com/
[4] http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032244972&Culture=en-US
[5] http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032244980&Culture=en-US
[Now Playing: Chand Sitare - Kaho Naa Pyar Hai]
Anil John posted some observations about cross-site scripting attacks and the mitigations offered by ASP.NET 1.1:
ASP.NET 1.1 provides auto-protection from scripting attacks
Did you know that ASP.NET v1.1 automatically checks for possible scripting attacks when users enter info into your forms? I didn't! I learned it in my prep for my DevDays session.
Matt Lyons did an XSS demo explaining some of this at the 2003 PDC Security Symposium. His demo is in the middle session: SECSYM2 - Security Symposium: Putting Security Theory Into Practice: Processes and Policies. Check it out here. You need to navigate through the Symposia heading.

My friend from Vermont, Julie Lerman [1], posted the original info. I and others just chimed in with additional observations.
Also, be sure to check out Shanku Niyogi's insight into the feature [2]. Shanku "... leads the program management team responsible for design of both ASP.NET and Microsoft's web development toolset in Visual Studio .NET." As such, he has a behind-the-scenes take on the feature.
[Now Playing: O Haseena Zulfon Wali - Dil Vil Pyar Vyar]
After checking out the Microsoft Guidance Center for Developers and IT Pro's [1], make sure you pre-order the Microsoft Security Guidance Kit CD [2], "... with tools, templates, roadmaps and how-to guides in addition to our prescriptive security guidance. The kit is designed to help you implement measures like automating security patch installation and blocking unsafe email attachments to help your organization stay protected."
[1] http://www.microsoft.com/security/guidance/default.mspx
[2] http://www.microsoft.com/security/guidance/order/default.mspx
[Now Playing: Dola Re Dola - Devdas]
Sunday, February 29, 2004
Did you know that ASP.NET v1.1 automatically checks for possible scripting attacks when users enter info into your forms? I didn't! I learned it in my prep for my DevDays session.
So this (screenshot from the original post omitted), with Errors="Off" in web.config, results in this (click-to-enlarge screenshot from the original post omitted).
This protection is on by default. It is controlled in a few places. See this article on Microsoft's ASP.NET site for more details.
It is a great feature, but there are some gotchas and caveats to watch out for:
- Some people who upgrade their app from 1.0 to 1.1 find themselves caught by this. In a frenzied panic (all too often because someone is breathing down their backs), they immediately go into web.config and disable the Request Validation feature. NOT a good thing!
The key thing to keep in mind is that if you choose to disable this option, make sure that you have some sort of input validation in your code. Remember, ALL input is EVIL! (Until it has been verified otherwise.) A minimal validation sketch follows after this list.
- Vendor applications that you need to use, and don't have access to the source for, which disable the Request Validation feature. I remember reading the installation directions for an app (written by a global consulting firm) that stated, "If you are running .NET 1.1, make sure you turn OFF Request Validation in your Web.config"! Since I did not have access to the source, I could not verify that they were actually doing any input validation. My confidence in the vendor was not helped by the fact that, when talking with their developers, they could not tell me whether they supported out-of-process session storage or not. It was not just that they couldn't tell me; they did not understand the difference between in-process and out-of-process state storage options. (Which I needed to know, as the app was going to be deployed on a web farm.) Oh yes, we won't discuss the Web SSO vendor who has a problem with this as well, and who still has not resolved the issue.

Not much you can do in this case, except see if you can actually talk to someone at the vendor end who understands your question and gives you a reasonable answer or a fix.
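As promised above, here is a minimal sketch of the whitelist style of input validation I have in mind, in Python for brevity (in ASP.NET you would typically use a RegularExpressionValidator or Regex.IsMatch; the pattern here is only an example constraint, not a universal rule):

import re

# Whitelist validation: describe what GOOD input looks like and reject everything else.
# Example constraint only: letters, digits, and spaces, 1 to 50 characters long.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9 ]{1,50}$")

def is_valid_name(value: str) -> bool:
    return bool(NAME_PATTERN.match(value))

print(is_valid_name("Anil John"))                      # True
print(is_valid_name("<script>alert('xss')</script>"))  # False -- rejected, not sanitized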
[Now Playing: Udja Kale Kawan - Gadar]
This came out some time ago and I had noted it on my old weblog, but I wanted to bookmark it here on my new weblog... So here it is again:
The digital security consulting firm @stake announced on June 3, 2003, the results of an independent Security Analysis of the .NET Framework and the J2EE Framework represented by IBM WebSphere running on both Unix and Linux environments.
They are pretty up front in saying that while the analysis was funded by Microsoft, it was performed with no assistance from any of the vendors involved.
The results of the analysis are:
- Both platforms provide infrastructure and effective tools for creating and deploying secure applications
- The .NET Framework 1.1 running on Windows Server 2003 scored slightly better with respect to conformance to security best practices
- The Microsoft solution scored even higher with respect to the ease with which developers and administrators can implement secure solutions
More information here:
[Now Playing: Mere Yaar Ki Shaadi Hai - Mere Yaar Ki Shaadi Hai]
Found this link from Steve Schofield over at AdminBlogs on Windows Server 2003 Feature Packs.
The current list consists of:
- Active Directory Application Mode: Active Directory® Application Mode (ADAM) is now available for download. Organizations, independent software vendors, and developers who want to integrate their applications with a directory service now have an additional capability within Active Directory that provides numerous benefits.
- Automated Deployment Services: With Automated Deployment Services (ADS), Microsoft is extending the Windows platform to enable faster, more flexible server deployment. Find out how ADS can help you streamline this process.
- DSML Services for Windows: DSML Services for Windows (DSFW) allows Active Directory access using SOAP over HTTP based on the OASIS DSML v2 specification.
- Group Policy Management Console: The Microsoft Group Policy Management Console (GPMC) is a new tool that unifies management of Group Policy across the enterprise. The GPMC consists of a new MMC snap-in and a set of programmable interfaces for managing Group Policy.
- Identity Integration Feature Pack: The Identity Integration Feature Pack for Microsoft Windows Server Active Directory manages identities and coordinates user details across Active Directory, Active Directory Application Mode (ADAM), Microsoft Exchange 2000 Server, and Exchange Server 2003 implementations.
- Services For NetWare 5.02 SP2: Services For NetWare SP2 provides a cumulative set of updates and services that have been offered since the release of Services For NetWare 5.01 SP1.
- Shadow Copy Client: For computers running a version of Windows earlier than Windows Server 2003, you can download the Shadow Copy Client to take advantage of the intelligent file storage capabilities of the Shadow Copies of Shared Folders feature.
- Software Update Services: Microsoft Software Update Services (SUS) enables administrators to deploy critical updates to Windows 2000-based, Windows XP, and Windows Server 2003 computers.
- Windows Rights Management Services: Microsoft Windows Rights Management Services (RMS) for Windows Server 2003 is information protection technology that works with RMS-enabled applications to help safeguard digital information from unauthorized use—both online and offline, inside and outside of the firewall.
- Windows SharePoint Services: Windows® SharePoint™ Services is now available to help teams start creating Web sites for their information sharing and document collaboration needs.
- Windows System Resource Manager: Windows System Resource Manager (WSRM) is available for use with Windows Server 2003, Enterprise Edition and Datacenter Edition. WSRM provides resource management and enables the allocation of resources among multiple applications based on business priorities.
[Now Playing: Ghanan Ghanan Ghir Ghir Badra - Lagaan]
Friday, February 27, 2004
We all have heard the mantra about "Use a Strong Password"!
So, what exactly is a strong password, or, correspondingly, what is a weak password? I had written this up some time ago after reading "Writing Secure Code, 2nd Edition". I am in the process of preparing for my DevDays presentation next week and thought I would reference this little blurb that I had written on the topic. Scary!
You can find it @
http://cyberforge.com/weblog/aniltj/articles/253.aspx
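If you just want the rough shape of the usual heuristics (length plus character variety) in code, here is a minimal sketch; the thresholds are arbitrary choices of mine for illustration, not the guidance from the blurb above:

import string

def looks_strong(password: str) -> bool:
    # Arbitrary illustrative thresholds: at least 8 characters and at least
    # three of the four character classes (upper, lower, digit, symbol).
    classes = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= 8 and sum(classes) >= 3

print(looks_strong("password"))     # False
print(looks_strong("T0ugh-Pass!"))  # True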
Thursday, February 26, 2004
Fyodor announced today, on the BugTraq mailing list, the immediate availability of NMap v3.50.
As most of you know, NMap is the de facto standard in the security realm for vulnerability scanning and host fingerprinting. It also would have ranked #1 on Fyodor's 75 Top Security Tools list had he not disqualified his own utility from being voted on and appearing on the list.
The changelog for NMap can be viewed here.
Thanks Fyodor, and everyone who contributes to the NMap project, for such a fantastic piece of software.
Edit: NMap was the utility used in the movie Matrix Reloaded to scan the power station network for vulnerabilities. It's often used in that exact way here in the real world...
[bmonday(dot)com]
It would appear that Secure Coding is entering the Geek Mainstream. The topic of today's "Screen Savers" show on TechTV is "How to Break Code".
Here is the official blurb:
How to Break Code
How does software break? Should people be taught how to write viruses? Gary McGraw and Greg Hoglund, co-authors of "Exploiting Software: How to Break Code," talk about coding issues.
Wednesday, February 25, 2004
In Single Sign-on Enterprise Security for Web Applications Paul shows how to create a solution that enables multiple intranet applications to share a single sign-on for security. It's an excellent solution to a common customer request. This is the article that goes along with the recent webcast he gave just the other day (which is now available on-demand).
[Kent Sharkey's blog]
Interesting article, but a bit... hmmm.
I guess if you define "Enterprise" as consisting of ONLY Microsoft technologies, this is a possible solution. The problem is that a true Enterprise is often a mix of varied technologies. Most large organizations that are classified as Enterprises consist of a mix of technologies, from mainframes to *nix to portal solutions that may be running on other platforms. The solution proposed does not address such a mix, and I do not believe that Microsoft has an out-of-the-box solution that addresses the issue.
In addition, why would I want to run through all these gyrations? If I am in an environment that has standardized on Microsoft technologies (AD for a directory store, WinTel servers for web servers), why would I not simply use Windows Auth instead of Forms Authentication?
And if I want to mix Windows and Forms Authentication, why not add Paul Wilson's technique for mixing Forms and Windows auth [1] into the mix?
I must be missing something here...
[1] http://msdn.microsoft.com/asp.net/archive/default.aspx?pull=/library/en-us/dnaspp/html/mixedsecurity.asp
[Now Playing: Zinda Rehti Hain Mohabbatein - Mohabbatein]
Here's the link [1] to Foundstone's free security tools for Assessment, Forensics, Intrusion Detection, Scanning and Stress Testing.
[joatBlog]
Cool!
The list of tools is pretty extensive:
Assessment Utilities:
- Fpipe™ v2.1
Forensic Tools:
- Pasco v1.0
- Galleta v1.0
- Rifiuti v1.0
- NTLast™ v3.0
- Forensic Toolkit™ v2.0
- ShoWin™ v2.0
- BinText™ v3.0
- PatchIt™ v2.0
- Vision™ v1.0
Intrusion Detection Tools:
- IPv4Trace v1.0
- Carbonite™ v1.0
- FileWatch™ v1.0
- Attacker™ v3.0
- Fport™ v2.0
Scanning Tools:
- SuperScan™ v4.0
- MydoomScanner (check for MyDoom worm) v1.0
- MessengerScan v1.05
- SQLScan v1.0
- BOPing™ v2.0
- ScanLine™ v1.01
- Trout™ v2.0
- DDosPing™ v2.0
- SNScan™ v1.05
- CIScan v1.0
- RPCScan v2.03
Stress Testing Tools:
- FSMax™ v2.0
- Blast™ v2.0
- UDPFlood™ v2.0
[1] http://www.foundstone.com/resources/freetools.htm
[Now Playing: Missing You - Simply the Best]
Tuesday, February 24, 2004
Sunday, February 22, 2004
Came across this article by Gary McGraw of Cigital [1] on the SC-L mailing list on the distinction between Application Security and Software Security.
In the article, Software Security is defined as "... engineering software so that it continues to function correctly under malicious attack". Application Security in turn is defined as "... the protection of software after it's already built."
A very interesting read.
[1] http://www.cigital.com/papers/download/software-security-gem.pdf
[Now Playing: Snow of the Sahara - Metamorphosis]
I've been out for about a week due to illness, and am finally catching up on my e-mail and noticed that "SecureCoder" got mentioned by Ken van Wyk on the SC-L List.
Thank You!
In the same context, I would be negligent if I did not mention that Ken is the co-author of the O'Reilly book "Secure Coding: Principles & Practices" [1] and one of the people who is out there preaching the gospel of Security. 
In addition, he is one of the founders and moderators of one of the best sources of information on Secure Coding, the SC-L listserv. I've mentioned this before [2], but it is worth mentioning again. If you are interested in writing secure code, whatever your platform of choice, you owe it to yourself to be on this list.
Information on the list, including how to subscribe to it, can be found @ http://www.securecoding.org/list/
[1] http://www.securecoding.org/
[2] http://cyberforge.com/weblog/aniltj/archive/2003/12/01/197.aspx
[Now Playing: Mr. Wah - Best Of Chris Botti]
Keith Brown from Developmentor is one of my favorite authors on the topic of Security. As I have noted before, he is writing a book on .NET Security and is putting the book online for review (See the Security Resources Section for the link to the book).
He currently has a chapter on "How to Store Secrets on a Machine" available online [1]. Excellent reading, as this is one of the most frequently asked questions. As part of his explanation, he has provided a .NET class that wraps the calls to DPAPI in Win32. Check it out.
Another relevant article is "Safeguard Database Connection Strings and Other Sensitive Settings in Your Code" [2] that appeared in MSDN magazine some time back.
[1] http://www.develop.com/kbrown/book/html/howto_storesecrets.html
[2] http://msdn.microsoft.com/msdnmag/issues/03/11/ProtectYourData/default.aspx
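Keith's class wraps the Win32 CryptProtectData/CryptUnprotectData calls via P/Invoke. In later versions of the framework the same DPAPI functionality is exposed in managed code through the ProtectedData class; here is a rough sketch of that route (my code, not Keith's, just to show the shape of it):
using System;
using System.Security.Cryptography; // ProtectedData (available in later framework versions)
using System.Text;
public class SecretStore
{
    // Encrypt a secret with DPAPI under the current user's profile.
    public static byte[] Protect(string secret)
    {
        byte[] plaintext = Encoding.UTF8.GetBytes(secret);
        return ProtectedData.Protect(plaintext, null, DataProtectionScope.CurrentUser);
    }
    // Decrypt it again; only code running as the same user can do so.
    public static string Unprotect(byte[] ciphertext)
    {
        byte[] plaintext = ProtectedData.Unprotect(ciphertext, null, DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(plaintext);
    }
}
The nice thing about DPAPI is that you never handle a key yourself; the OS derives it from the user's (or machine's) credentials.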
[Now Playing: Dulhe Raja - Hum Kisise Kum Nahin]
Friday, February 13, 2004
This guide provides a set of security recommendations for building a secure Active Directory environment that can be applied to both new and existing Active Directory implementations. The scripts and procedures provided are designed to simplify the implementation of these recommendations.
[Microsoft Download Center]
Thursday, February 12, 2004
With the recent flood of issues surrounding MyDoom, it is good to see Microsoft take some time to educate developers on secure coding principles and practices.
Microsoft has announced a special week of webcasts (Feb 16-20) addressing the most important and newly emerging security issues surrounding developers. Topics range from corporate security reviews and computer crime to a host of webcasts aimed specifically at developers. These webcasts are designed to help developers write applications that are resistant to security attacks. Webcasts will address a broad range of issues facing developers today: specific coding techniques to make applications inherently more secure, SQL Server considerations, authentication and authorization, Enterprise Security Portals, and protecting your intellectual property with Code Access Security. Tune in as top industry experts walk you through key security concepts that will help your organization -- and the code you write -- rise to the security challenges we all face today.
Here are the webcasts I will be tuning into:
If you want to check out the full list of available webcasts, you can check it out here.
[Dana Epp's ramblings at the Sanctuary]
Date: Friday, February 27, 2004
Time: 11:00AM-12:30PM Pacific Time (GMT-8, US & Canada)
Description: This Webcast will cover the Authorization and Profile Application block, which is a set of reusable code components that you can use to customize the behavior of an application for individual users. This guide describes the design and features of the Authorization and Profile Application block and demonstrates how you can use the block in your applications.
Register @
http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032243066&Culture=en-US
[Now Playing: Tere Ishq Mein Nachenge - Raja Hindustani]
Monday, February 2, 2004
Saw this while browsing the OWASP site:
"... FAQ answers some of the questions that developers have about Web Application Security. This FAQ is not specific to a particular platform or language. It addresses the common threats to web applications and are applicable to any platform."
Check it out @ http://www.owasp.org/documentation/appsecfaq
[Now Playing: Meri Makhna Meri Soniye - Baghban]
This came through some time ago on the WebAppSec listserve:
".... the Open Web Application Security Project (OWASP) released its updated list of the 10 most critical web application security problems, marking the second year for this report. OWASP created this list to help organizations understand and improve the security of their web applications and web services.
The Top 10 list is organized around particular categories of vulnerabilities that frequently occur in Web applications. This year's revision includes a new category for web application denial of service vulnerabilities that have
become increasingly prevalent in systems over the last year. Also, the list now aligns with the current draft web security definitions that will be incorporated in the soon-to-be-released OASIS WAS XML standard. Many minor
improvements were made as well.
Recent application DOS attacks have locked users out of accounts, exhausted an application's database connections, and consumed all of an application's processing power. Exploiting these vulnerabilities, an attacker can target
specific users or block all access to an application at will. The attacks do not require any special tools or expertise to launch, and have become a major risk for most web applications."
Download the standard from the OWASP Web site at
http://www.owasp.org/documentation/topten
[Now Playing: Jaane Dil Mein - Mujhse Dosti Karoge]
Ryan Dunn and I have been having a dialog on the ASP.NET Forums about my recent article on Mixing Forms and Windows Security in ASP.NET. He has another technique that attempts to do something similar here on GotDotNet and he very much disagrees that my solution is sufficient. Basically, my solution only demos how to combine Forms and Windows Authentication to automatically capture an Intranet user's name. His method instead combines Forms and Windows Authorization by creating a WindowsPrincipal that roles can be checked against. I apologize if someone thinks I've misled them since my article did not go all the way and illustrate the combined Authorization also, so I'm attaching the small amount of code, based on Ryan's work, that will create the WindowsPrincipal and complete the example.
Please also note that Ryan's technique is not at all sufficient, and is thus very misleading, since it does not actually do any real Windows Authentication! It makes an assumption that all users within a certain IP address range are valid users -- which is not at all true if your network allows visitors to plug into the network and access the Intranet. This means that visitors will automatically get access to your applications that use this technique and don't then check for an additional role. It will also actually prevent such visitors from ever logging in with the alternative custom login form to prove they have the roles in the custom scenario you worked hard to create. So, here's the necessary code, not very “clean” since it's just a quick example, to complete my technique by creating a real WindowsPrincipal for Windows users:
Change the entirety of WinLogin.aspx's Page_Load method to:
// Get the impersonation token for the Windows-authenticated user from the HttpWorkerRequest.
IServiceProvider service = (IServiceProvider) this.Context;
HttpWorkerRequest request = (HttpWorkerRequest) service.GetService(typeof(HttpWorkerRequest));
// Stash the token in a cookie so Application_AuthenticateRequest can rebuild the WindowsIdentity later.
this.Response.Cookies.Add(new HttpCookie("UserToken", request.GetUserToken().ToString()));
// Then issue the normal Forms Authentication ticket for the Windows user name.
string userName = this.Request.ServerVariables["LOGON_USER"];
FormsAuthentication.RedirectFromLoginPage(userName, false);
Then add the following to the Global.asax's Application_AuthenticateRequest:
else if (this.Request.Cookies["UserToken"] != null) {
    // Rebuild the WindowsIdentity from the token captured in WinLogin.aspx and replace
    // the generic principal with a real WindowsPrincipal so role checks resolve
    // against Windows groups.
    string token = this.Request.Cookies["UserToken"].Value;
    IntPtr userToken = new IntPtr(int.Parse(token));
    WindowsIdentity identity = new WindowsIdentity(userToken,
        "NTLM", WindowsAccountType.Normal, true);
    HttpContext.Current.User = new WindowsPrincipal(identity);
}
You can make this “cleaner” (i.e. more secure) by including the userToken into the FormsAuthentication cookie's UserData so that it gets encrypted, instead of being a separate cookie as I've done here.
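For reference, here is a rough sketch of that cleaner variation (my own code against the standard FormsAuthenticationTicket API, not Ryan's or Paul's). In WinLogin.aspx, put the token into the ticket's UserData, which gets encrypted along with the rest of the ticket:
// 'request' is the same HttpWorkerRequest obtained above.
string userName = this.Request.ServerVariables["LOGON_USER"];
string token = request.GetUserToken().ToString();
FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
    1, userName, DateTime.Now, DateTime.Now.AddMinutes(30), false, token);
this.Response.Cookies.Add(new HttpCookie(
    FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(ticket)));
this.Response.Redirect(FormsAuthentication.GetRedirectUrl(userName, false));
Then, in Application_AuthenticateRequest, recover the token from the decrypted ticket instead of a separate cookie:
HttpCookie authCookie = this.Request.Cookies[FormsAuthentication.FormsCookieName];
if (authCookie != null) {
    FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);
    IntPtr userToken = new IntPtr(int.Parse(ticket.UserData));
    WindowsIdentity identity = new WindowsIdentity(userToken, "NTLM", WindowsAccountType.Normal, true);
    HttpContext.Current.User = new WindowsPrincipal(identity);
}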
Follow-up on Paul's MSDN article... I'm putting it up mostly for self-reference, as I am sure I will be looking for this in the near future.
[Now Playing: Chunari Chunari - Monsoon Wedding]
IEEE Security & Privacy has an article in which the author examines ".... a handful of the more than 3,000 unique vulnerabilities and 115,000 security incidents reported in 2003 (according to CERT Coordination Center’s report for quarters one through three) and do my best to predict information security woes for 2004."
Interesting read @
http://www.computer.org/security/v2n1/j1att.htm
[Now Playing: Humko Humise Chura Lo - Mohabbatein]
Examine a solution that combines Forms and Windows Authentication, enabling ASP.NET security for both internal and external users.
[MSDN: ASP.NET]
This is a question that is often asked in the various online forums. Paul Wilson tackles this at:
[Now Playing: Laila Laila - Samay]
Looks like there is a new downloadable resource [1] on Threat Modeling available on Microsoft Downloads. "In this session, see how to design and build more secure systems by evaluating threats and selecting technologies to counter those threats."
Another great resource that goes into Threat Modeling is Michael Howard's "Writing Secure Code" (2nd Edition). The second session on the Web Track of the DevDays presentations is on the topic of Threat Modeling as well.
[1] http://www.microsoft.com/downloads/details.aspx?familyid=ebd24aad-a39a-4978-81e0-99fbfb72a7bd&displaylang=en
[Now Playing: Raat Kali Ek Khwab - Dil Vil Pyar Vyar]
Sunday, February 1, 2004
Describes the proper way to configure a server to securely run the ASP.NET worker process as the system account
[Code Project Latest Article Briefs]
There is NOTHING Secure about running the ASP.NET worker process as "SYSTEM".
Keith Brown has the best quote on this in his online book - "SYSTEM is like root on Unix. It's all powerful, and is considered to be part of the trusted computing base (TCB). You should configure as little code (preferably none) to run under this logon, as compromise of this logon immediately compromises the entire machine (when you're part of the TCB, you're trusted to enforce security policy, as opposed to being subject to it!)"
As those of us who have been around since the early betas will remember, ASP.NET actually ran this way in the Beta 1 stage. The ASP.NET team recognized the vulnerability and created the low-privilege ASPNET account specifically for this purpose.
What the author is proposing can easily be circumvented by calling the "RevertToSelf" function, which terminates the impersonation of the client and, in this case, leaves the code running as SYSTEM!
Doing this defeats the principle of least privilege and increases the damage that can be done by an attacker who is able to execute code using the Web application's process security context!
This is explicitly stated as a NO-NO in "Improving Web Application Security".
The correct solution in this case is for this person to have a discussion with the administrators of his AD domain about properly configuring the relevant Group Policies.
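If you are not sure which account your ASP.NET code is actually running under, a quick throwaway check (my snippet, not from the article) will tell you:
// Drop this into a test page's Page_Load. On a properly configured ASP.NET 1.1 box
// this should show MACHINE\ASPNET (or another low-privilege account), never NT AUTHORITY\SYSTEM.
// Note: with <identity impersonate="true" /> you will see the impersonated caller instead.
Response.Write(System.Security.Principal.WindowsIdentity.GetCurrent().Name);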
NOTE: This article also showed up on the ASPAlliance site, but the Editor added a note regarding the caveats. Not as strongly worded as I would like, but I'll take it.
[Now Playing: Mujhko Huyi Na Kabar (Le Gayi) - Dil To Pagal Hai]
Thursday, January 29, 2004
DevDays 2004 is coming to a city near you. More information about DevDays and the Agenda can be found at the official DevDays site.
There are two tracks, the Smart Client Track and the Web Development Track. The Web Development Track will explore Web security basics and the methodologies for determining at-risk aspects of Web applications and how to defend them. We'll walk through the Microsoft security best-practices reference application for OpenHack throughout the track and see how you can put those same best practices to work for you.
I’ll be speaking on the Web Development Track at the Washington, D.C. DevDays:
Washington, D.C.—Thursday, March 4, 2004
International Trade Center and Ronald Reagan Bldg.
1300 Pennsylvania Ave NW
Washington, D.C. 20004
So register, and come by and say “Hello”.
Saturday, January 24, 2004
The second part of the Java vs. .NET Security series is online at O'Reilly's OnJava.com. This one deals with "...issues of cryptography support and the mechanisms of communication protection on those platforms."
Part 1 of the series explored configuration and code containment and can be found at:
[Now Playing: Lok Boliyan - Bhangra Beatz]
Recent post to the [SC-L] List:
FYI, Stephen Kost of Integrigy Corporation has published a paper called, "An Introduction To SQL Injection Attacks For Oracle Developers". The full 24 page paper (in PDF format) is freely available at:
http://www.net-security.org/dl/articles/IntegrigyIntrotoSQLInjectionAttacks.pdf
On first glance, it appears to me to be a pretty worthwhile read, FWIW. Although it is aimed at Oracle developers and much of the paper is indeed Oracle-specific, pretty much anyone writing multi-tier SQL database software could find useful information in it.
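The paper's examples are Oracle-specific, but the core defense it pushes, bind variables/parameterized queries instead of string concatenation, looks much the same on any platform. A minimal ADO.NET sketch (mine, not from the paper; assumes an open 'connection' and a 'userName' value taken from user input; System.Data and System.Data.SqlClient namespaces):
// Vulnerable: user input concatenated straight into the SQL text.
// string sql = "SELECT * FROM Users WHERE UserName = '" + userName + "'";
// Safer: let the provider bind the value as a parameter.
SqlCommand cmd = new SqlCommand(
    "SELECT UserId, UserName FROM Users WHERE UserName = @userName", connection);
cmd.Parameters.Add("@userName", SqlDbType.VarChar, 50).Value = userName;
SqlDataReader reader = cmd.ExecuteReader();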
[Now Playing: Pyar Aaya - Plan]
Wednesday, January 21, 2004
A new Security related Application Block has been released by the Patterns & Practices folk:
The Authorization and Profile Application Block provides you with an infrastructure for role-based authorization and access to profile information. The block allows you to:
- Authorize a user of an application or system.
- Use multiple authorization storage providers.
- Plug in business rules for action validation.
- Map multiple identities to a single user.
- Access profile information that can be stored in multiple profile stores.
Download @
http://www.microsoft.com/downloads/details.aspx?familyid=ba983ad5-e74f-4be9-b146-9d2d2c6f8e81&displaylang=en
[Now Playing: Sajna Ve Sajna - Chameli]
Monday, December 15, 2003
From the current issue of the CRYPTO-GRAM by Bruce Schneier:
The Doghouse: Amit Yoran
Here's a question: if you don't think it's possible to improve the security of computer code, what are you doing in the computer security industry?
"Amit Yoran, the new head of the Department of Homeland Security's national cybersecurity division, said the administration is assessing the impact of various regulatory proposals. One of them calls for companies to report, through the Securities and Exchange Commission, their preparedness for attacks on their computer networks. Mr. Yoran, formerly a vice president of Symantec Corp., said the department is considering other measures, though it leans toward private-sector approaches.
"'For example, should we hold software vendors accountable for the security of their code or for flaws in their code?' Mr. Yoran asked in an interview. 'In concept, that may make sense. But in practice, do they have the capability, the tools to produce more secure code?'"
The sheer idiocy of this quote amazes me. Does he really think that writing secure code is too hard for companies to manage? Does he really think that companies are doing absolutely the best they possibly can?
I can handle blatant pandering to industry, but this is just too stupid to ignore.
The article:
<http://online.wsj.com/article/0,,SB107040249488089600,00.html>
<http://news.com.com/2008-7355-5112350.html>
I like a man who calls it like it is 
Thursday, December 4, 2003
Wednesday, December 3, 2003
Q. How can I limit access to a .NET assembly that I've created? I've got two separate assemblies A and B. A references B and uses a number of instance methods on B, but I don't want any other assembly to be able to access B. In effect, I'd like to make B "private" to my application. Is there any way to achieve that with .NET?
A. There are probably a number of ways to achieve this, but the simplest involves signing your calling assembly A with a unique public / private key pair (use sn -k to achieve this). Once it's signed, you can use the StrongNameIdentityPermission attribute on the callee assembly B to demand that any callers are signed with a matching public key. If any other assembly tries to call B that isn't signed with the same key, a SecurityException will be thrown.
For more information on the StrongNameIdentityPermission attribute, see the appropriate topic in the MSDN Library. There's also a good walkthrough here.
[Tim Sneath's Blog]
Also check out the following Chapters from "Improving Web Application Security: Threats and Countermeasures"
Chapter 8 – Code Access Security in Practice
http://msdn.microsoft.com/library/en-us/dnnetsec/html/THCMCh08.asp
Chapter 9 – Using Code Access Security with ASP .NET
http://msdn.microsoft.com/library/en-us/dnnetsec/html/THCMCh09.asp
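To make the mechanics concrete, here is a rough sketch (mine, not from Tim's post) of what the demand on the callee assembly B looks like. The PublicKey value is the full hex public key blob for A's key pair, which you can display with "sn -Tp" (truncated here for readability):
using System.Security.Permissions;
// Only assemblies signed with the matching key (i.e. assembly A) can link to this class.
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
    PublicKey = "00240000048000009400000006020000...")]   // truncated placeholder key
public class InternalOnly
{
    public void DoWork()
    {
        // ...
    }
}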
[Now Playing: Dulhe Raja - Hum Kisise Kum Nahin]
Tuesday, December 2, 2003
Two new security newsletters from Microsoft are coming soon. You can sign up for the IT/Dev or Consumer versions at the Microsoft Subscription Center:
Microsoft Security Newsletter
This monthly newsletter is the authoritative information source for understanding the Microsoft security strategy and priorities. Written for IT professionals, developers, and business managers, it provides links to the latest security bulletins, FAQs, prescriptive guidance, community resources, events, and more.
Microsoft Security Newsletter for Home Users
This bimonthly newsletter offers easy-to-follow security tips, FAQs, expert advice, and other resources that help you enjoy a private and secure computing experience.
If you subscribe to the TechNet or the MSDN newsletters currently, you'll probably receive one copy of the technical newsletter as a special edition. This newsletter has some great content for anybody who's interested in security, so I would encourage anyone interested in that topic to sign up.
[Brian Johnson]
Just Do It! @
https://profile.microsoft.com/RegSysSubscriptionCnt/SubCntDefault.aspx?LCID=1033&SIC=1
[Now Playing: Soni Soni - Mohabbatein]
Monday, December 1, 2003
TSS refers to this article which talks about the security comparison between Java and .NET.
[WebLogs @ ASP.NET]
Part 1 of the article from O'Reilly's OnJava Section covers "Security Configuration and Code Containment" and can be found @
http://www.onjava.com/pub/a/onjava/2003/11/26/javavsdotnet.html
The conclusion of the article seems to be that regarding the Security Configuration and Code Containment aspects "Java offers a lot of advantages with its configurability. When it comes to code containment, both platforms have pretty strong offerings, with .NET having slightly more choices and being more straightforward to use."
[Now Playing: How Could I - Marc Anthony]
The Programming Security and Inventory Visibility in Order Systems Book shows you how to implement a secure order system by integrating Microsoft Windows Server System technologies.
It features Microsoft BizTalk Server 2002, Microsoft Visual Studio .NET, Web Services Enhancements (WSE) for Microsoft .NET, ASP.NET, business-to-business (B2B) Web service security, and principles of the real-time enterprise (RTE). It discusses how Ford Motor Company uses technologies to build and maintain a secure and reliable order system to feed its just-in-time (JIT) supply chains, and provides sample applications that illustrate these technologies. This book:
- Demonstrates the integration of Windows Server System products.
- Explains common-sense security practices.
- Shows hard-to-get information about security and large-scale deployments.
- Explains and demonstrates inventory management and inventory replenishment.
- Showcases important product features such as disparate data format handling, business process implementation, and Web services.
[Now Playing: Hunter - No Angel]
I would like to announce the availability of a new and free resource to the software security community, the SC-L email discussion forum. The moderated forum is open to the public. The group's purpose is, "to further the state of the practice of developing secure software, by providing a free and open, objectively moderated, forum for the discussion of issues related to secure coding practices throughout a software development lifecycle process (including architecture, requirements and specifications, design, implementation, deployment, and operations)." (The complete text of the group's charter, including its acceptable and unacceptable usage policies, can be found at http://www.securecoding.org/list/charter.php.)
To subscribe to the list, simply connect to http://www.securecoding.org/list and follow the directions on the form. Submissions should be sent (by subscribers only) to sc-l@securecoding.org.
Cheers,
Ken van Wyk
Moderator, SC-L mailing list
This came across on NTBugTraq today. Ken van Wyk is one of the co-authors of O'Reilly's "Secure Coding: Principles & Practices"
[Now Playing: Power of Love - The Best Of Jennifer Rush]
When you hear that security is one of the missing pieces of Web services, you’re probably listening to a discussion about complex SOAs that demand newfangled security protocols yet to be submitted to any standards organization. Today, most Web services connections, even those that cross firewalls, mirror the Web: a client and a server interacting more or less in real time, with security controlled by the server.
[InfoWorld: Web Services]
The primary thrust seems to be that ".. such simple interactions typically rely on usernames and passwords for authentication and SSL for message encryption and integrity. More complex requirements, such as authorization and nonrepudiation, can be coded within the service applications themselves".
And according to the article, "things start getting more complicated when at least one of three conditions are true:
- The architecture includes intermediaries (that is, messages must be carried across multiple hops);
- messages are stored and must be secured beyond the time during which they’re transmitted; or
- more than one party wants control over some aspect of the security (for example, the usernames and passwords defined by the client must be used to access the server, which has an independent concept of authentication).
Interesting read....
[Now Playing: Home Sweet Home ['91 Remix] - Decade of Decadence]
Tuesday, November 25, 2003
TechNet Security Webcast Week is December 1-5, 2003. Join Microsoft security experts for a series of webcasts covering patch management, secure network access, Windows Rights Management Services, and more: http://www.microsoft.com/technet/security/webcasts/default.asp
On December 9, 2003, Microsoft releases its monthly security bulletins. On December 10, Microsoft security professionals will present a live webcast to discuss the bulletins' technical details and steps you can take to protect your environment. Register at: http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032240095&Culture=en-US
Each month, after a security patch release, Microsoft's Product Support Services core team will conduct a technical chat to explain the patch and vulnerability and help users understand the impact of the patch in their environments. The next chat will be December 12, 2003. To register, go to: http://communities2.microsoft.com/home/chatroom.aspx?siteid=34000015
The Microsoft Security Newsletter is scheduled to debut in December with news, guidance, and resources related to Microsoft's security strategy and priorities. TechNet Flash subscribers will receive the premier edition and information about how to register to continue receiving the newsletter.
[Now Playing: Tujhe Yaad Na Meri Aayee - Kuch Kuch Hota Hai]
Monday, November 24, 2003
Wouldn't it be nice if there was a free graphical tool for viewing secure XML documents, inserting security tokens, posting them over SSL to a protected Web Service, and seeing the results? Well, now there is - it's the Vordel SOAPbox. It's a tool which we developed internally for testing secure Web Services, and now we've released it to the wider community.
The SOAPbox supports WS-Security and SSL, as well as SAML, and underlying specifications such as XML Signature.
[Mark O'Neill's Radio Weblog]
Tool can be found @
http://www.vordel.com/soapbox/
[Now Playing: Mr. Wah - Best Of Chris Botti]
ASP.NET 1.1 added the ValidateRequest attribute to protect your site from cross-site scripting. What do you do, however, if your Web site is still running ASP.NET 1.0? This article shows how you can add similar functionality to your ASP.NET 1.0 Web sites.
Article @
http://msdn.microsoft.com/library/en-us/dnaspp/html/ScriptingProtection.asp
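The article covers a fuller request-validation approach for 1.0; whichever route you take, the baseline mitigation is still to encode anything you echo back to the browser. A trivial sketch (mine; Label1 is just a placeholder control):
// Never write raw request input back into the page; HTML-encode it first.
string name = Request.QueryString["name"];
Label1.Text = HttpUtility.HtmlEncode(name);   // "<script>" becomes "&lt;script&gt;"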
[Now Playing: Worlds Outside - Best Of Chris Botti]
Sunday, November 23, 2003
CACert.org is a public Certificate Authority (CA). For non-admin types, this is a self-proclaimed issuer of free SSL certificates.
Is it worth anything? Like a lot of other things on the Internet, the answer is "it depends". It depends on how well people trust the site and use it. Note: You don't have to use Verisign, you can issue your own certificates. Verisign's strength is that, by way of government sponsorship, the majority of users "trust" it as a CA.
Update: For those that are interested in rolling your own, check out the "OpenSSL Certificate Cookbook".
[joatBlog]
The majority of users trust Verisign as a CA because they don't get any scary certificate warnings when they browse to a site that is protected using an SSL cert issued by Verisign. This has more to do with the fact that the major browser vendors include Verisign as a trusted CA by default.
UPDATE: Just saw Dana's post about this topic. Lot more info and definitely worth a read.
[Now Playing: Chunari Chunari - Monsoon Wedding]
Friday, November 21, 2003
Initiative is associated with Microsoft's security campaign.
"Computer Associates International Inc. (CA) will give away its consumer antivirus and firewall software product with a year's subscription to virus signature updates, it said Tuesday.
The eTrust EZ Armor product carries a retail price of $49.95 but will be available as a free download from the CA Web site through June 30 next year, the Islandia, New York, company said in a statement released at the Comdex tradeshow in Las Vegas."
[InfoWorld: Security]
[Now Playing: Kabhi Khushi Kabhie Gham - Kabhi Khushi Kabhie Gham]
Microsoft is investigating a potential security issue with Exchange Server 2003, which would be the first since the e-mail server was launched last month.
The potential flaw lies in the Outlook Web Access (OWA) component of Exchange Server 2003. A network administrator at a Nashville, Tennessee, provider of investment performance reporting tools found that users logging in to OWA could be logged in to another user's mailbox at random and have full access privileges.
[Info World Security]
[Now Playing: Kabhi Khushi Kabhie Gham - Kabhi Khushi Kabhie Gham]
Add an Extra Layer of Security with SQL Firewalls
Add a SQL content firewall to your conventional network security as part of a defense in depth strategy.
http://go.microsoft.com/?linkid=323526
[Now Playing: Wild Child - A Day Without Rain]
Thursday, November 20, 2003
.... I already used hashing method, what is called one-way encryption.
I didn't know that you have also a two-way method, AES (Advanced Encryption Standard) based on a 256 bit key.
To say the least, surely secure enough!
James's article includes also a C# implementation. I think using it for the case I store user passwords, and I need an admin to be able to retrieve and decrypt a lost password.
[Paschal L]
The built-in crypto capabilities of the .NET framework are pretty extensive. It contains the ability to do both Symmetric (DES, RC2, Rijndael, TripleDES) and Asymmetric Encryption (DSA, RSA) as well as Hashing (MD5, SHA1, SHA256, SHA384, SHA512).
As far as storing passwords in a database goes: DON'T! One of the basic tenets of security is that if you don't need to keep a secret, don't! Passwords are a great example of where this should be followed. Hash the password or, even better, store a salted hash of it.
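A bare-bones sketch of the salted-hash approach, using the framework's hashing classes (my code, just to illustrate the idea):
using System;
using System.Security.Cryptography;
using System.Text;
public class PasswordHasher
{
    // Returns "base64(salt):base64(hash)"; store this instead of the password.
    public static string HashPassword(string password)
    {
        // A random per-user salt defeats precomputed dictionary attacks.
        byte[] salt = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(salt);
        byte[] pwd = Encoding.UTF8.GetBytes(password);
        byte[] salted = new byte[salt.Length + pwd.Length];
        Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
        Buffer.BlockCopy(pwd, 0, salted, salt.Length, pwd.Length);
        byte[] hash = new SHA256Managed().ComputeHash(salted);
        return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
    }
}
At login time you split the stored value, re-hash the submitted password with the stored salt, and compare the results; the plaintext password never needs to be stored anywhere.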
The byproduct of this of course is, how do you go about doing password resets?
A couple of ways I can think of: use password hints that the user provides when the account is set up and must answer when the password needs to be changed, or send out a temporary password to a known and verified e-mail address on file, with an explicit and short time window during which the password change can be made.
Of course, for highly secure apps, the cleanest approach would be to provide a phone number where a human actually verifies the identity of the user and does the temporary password reset.
[Now Playing: Mitwa - Lagaan]
This white paper from Microsoft describes how to configure secure wireless access using IEEE 802.1X authentication using Protected Extensible Authentication Protocol-Microsoft Challenge Handshake Authentication Protocol version 2 (PEAP-MS-CHAP v2) and Extensible Authentication Protocol-Transport Layer Security (EAP-TLS) in a test lab using a wireless access point (AP) and four computers. Of the four computers, one is a wireless client, one is a domain controller, certification authority (CA), and Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS) server, one is a Web and file server, and one is an Internet Authentication Service (IAS) server that is acting as a Remote Authentication Dial-in User Service (RADIUS) server.
http://www.microsoft.com/downloads/details.aspx?familyid=0f7fa9a2-e113-415b-b2a9-b6a3d64c48f5&displaylang=en
[Now Playing: Bole Chudiyan - Kabhi Khushi Kabhie Gham]
Erik Olson discusses the configuration options in ASP.NET that control process and thread identity.
[Microsoft Download Center]
Erik is a PM on the ASP.NET Team and is their go-to guy for Security related stuff.
[Now Playing: Saanwali Si Ek Ladki - Mujhse Dosti Karoge]
Tuesday, November 18, 2003
This just came across one of the lists that I'm on.
SOAP Web Services Attack - Part 1 Introduction and Simple injection
http://www.spidynamics.com/whitepapers/SOAP_Web_Security.pdf
One of the key take-aways here seems to be that input validation is just as important, or even more so, in web services deployments as in web app deployments, since the attack in question uses a variation of the SQL injection attack. Very interesting and scary reading.
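The mitigations are the same ones you would apply to a web page: constrain every parameter to what you expect before it gets anywhere near a query, and use parameterized SQL underneath. A small sketch (mine, not from the paper; the service and method names are made up):
using System;
using System.Text.RegularExpressions;
using System.Web.Services;
public class CustomerService : WebService
{
    [WebMethod]
    public string GetCustomer(string customerId)
    {
        // Constrain the input to the expected shape before using it anywhere.
        if (!Regex.IsMatch(customerId, @"^[A-Za-z0-9]{1,10}$"))
            throw new ArgumentException("Invalid customer id.");
        // ... then fetch the record with a parameterized query, never string concatenation ...
        return "customer record for " + customerId;
    }
}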
[Now Playing: Dulhe Raja - Hum Kisise Kum Nahin]