
Wednesday, March 10, 2004
Randy has a good response in the comments to my earlier post [1].  I've been mulling over for the last couple of days how to respond to him.  My thoughts are not as fleshed out as I would like, but hopefully the conversation we are having will help to clarify them.
 You are talking about secure coding as an educational issue.  Education is great, but it may be exceptionally difficult to make sure that every coder is a "good" coder.  In fact, even "good" coders may have to cut corners due to schedules and deadlines.  On a small scale this means that it might be possible to write secure software, but as you scale it up it seems likely that inevitably serious security flaws will surface.

I agree that not every coder is going to be a great coder simply by virtue of training and education. What I am talking about, at a minimum, is raising the bar on their knowledge of how software systems are attacked. I've interviewed enough developers to know that in the majority of cases, knowledge of XSS attacks, SQL injection attacks and other vulnerabilities is not part of their vocabulary.  What I am aiming for, to start with, is bringing such an awareness to every developer, so that it is as much a part of their development "life" as knowing a programming language.  We cannot hope to propose solutions or mitigations to problems until developers are aware that there is indeed a problem.
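To make the awareness point concrete, here is a minimal sketch (my own illustration, using Python and an in-memory SQLite table rather than any particular application) of why SQL injection needs to be part of every developer's vocabulary:

```python
import sqlite3

# A tiny stand-in database: one user, not an admin.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # a classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query,
# turning the WHERE clause into one that matches every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFER: a parameterized query treats the whole payload as plain data,
# so it is compared literally against the name column.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable))  # 1 -- the injection matched the row anyway
print(len(safe))        # 0 -- no user is literally named the payload
```

The fix is a one-line change, but a developer who has never heard the term "SQL injection" will not know to make it.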
Given such education and training, it is possible to write hack-resilient applications. For example, the Microsoft Reference Application for OpenHack, which was demonstrated at DevDays 2004, withstood more than 80,000 hack attempts in the OpenHack competition without failing. So it IS possible to write secure software given the right training and knowledge.
I agree completely with your comments about cutting corners due to schedules and deadlines.  But is that a technical problem or a business issue?  Provided that the developer is capable of doing the right thing (because he has had the training and education), the fact that he is not doing so due to other, non-technical constraints means to me that he and his management have considered the trade-offs involved in taking such a risk and are comfortable with the consequences.   It would be like being told to use a seat belt in a car in case of an accident. In fact, some states in the United States have specific laws in place to enforce this. But if you still choose to drive without the seat belt, get into an accident and go through the windshield, the responsibility is yours!
 I agree with your point regarding threat models, but again, this is another place where the problem may be the software.  Threat models evolve.  If you read Ross Anderson's excellent book Security Engineering you'll see that almost all security systems have broken over time largely due to changing environments and hence changed threat models.

  But software itself (in the current model) doesn't evolve to meet the changing threat model.  Not automatically.  Not without intervention.
Now this is an interesting point, and I have to agree. All too often I have seen such a fire-and-forget mentality from development teams and their management. All too often, applications are written and deployed, and no time, resources or budget is allocated to regular application upgrades or testing.  This is probably more relevant for custom apps developed and deployed by corporate IT than for software product shops. The latter at least understand that the product is their bread and butter, and they have an incentive to upgrade and fix bugs.
As to the issue of changing threat models, I've been giving it some thought and have come to focus on the management-of-change aspect.  You see, I've recently been exploring Test Driven Development (TDD). One of the primary tenets of this methodology is to write tests first and then write code against them so that the tests pass.   As part of this process I can run an automated batch of tests that exercises the functionality of the application; if even a single test fails, you do not proceed.  At its most simplistic level, you are testing every externally exposed interface of an application to make sure that it behaves as expected, and you are doing it in an automated fashion.
Currently, the tests as proposed exercise the functionality of the application.  What if we combined Threat Modeling with TDD?  What if you created security-focused test suites driven off the Threat Models?  The advantage I see is that once the initial tests are set up, they can be run in a fully automated fashion to check for vulnerabilities. In addition, as threats change, new tests can be added to the suite and run against the existing software.  The advantage to the developer is that he gets instant feedback on the vulnerabilities in his code.
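A sketch of what such a threat-driven suite might look like (the validator, the payload list, and the threat names here are all hypothetical, chosen only to show the shape of the idea): each entry in the payload list corresponds to a threat in the model, and when the model changes, you append a payload rather than rewrite the suite.

```python
import unittest

# Hypothetical code under test: a validator that should reject anything
# capable of smuggling SQL, script or path fragments through a
# "username" field.
def is_valid_username(value: str) -> bool:
    return value.isidentifier() and len(value) <= 32

# Each payload maps to a threat in the Threat Model; as the model
# evolves, new threats become new entries, and the existing software
# is immediately re-tested against them.
INJECTION_PAYLOADS = [
    "alice' OR '1'='1",           # SQL injection
    "<script>alert(1)</script>",  # cross-site scripting (XSS)
    "../../etc/passwd",           # path traversal
]

class TestThreatModel(unittest.TestCase):
    def test_rejects_known_attack_payloads(self):
        for payload in INJECTION_PAYLOADS:
            self.assertFalse(is_valid_username(payload))

    def test_still_accepts_ordinary_names(self):
        self.assertTrue(is_valid_username("alice"))

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

A failing test here is the "instant feedback" described above: the developer learns about the exploitable input the moment the suite runs, not after deployment.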
As you noted, software does not evolve without intervention.  Even in this case, you would have to update the test suite and then update the software to make the tests pass and fix the vulnerability. BUT, at least you would be aware that there is indeed a vulnerability that can be exploited, which hopefully is the first step in the process of fixing it.
I am going to look at this in a lot more detail as I bring myself up to speed on TDD and see if this line of inquiry has any merit.  Until the paradigm shift in computer science that you mentioned happens, we have to work within the limitations of the current technology and hope that training and education (for development teams as well as their management), combined with the appropriate tools, will have an impact on the quality of the code.
Tags:: Security
3/10/2004 10:35 AM Eastern Standard Time