The following text was originally submitted on 16 June 2011 as a part of an assignment I completed for my Masters degree coursework at Rhodes University. Thanks to Dr. Barry Irwin (@barryirwin) and Yusuf Motara for encouraging me to publish it.
The question posed was:
“The industry is divided between ‘builders’ and ‘breakers’, often with both groups not fully understanding each other’s jobs. If you were responsible for both a development team and an application security team, how would you go about creating an outreach program, to ensure both groups work together to solve the bigger picture of security, or lack of it? Is it a case that developers don’t understand security, hence we are in a situation whereby applications are still being developed in an insecure manner, or is it the security professionals who don’t understand enough about development to understand why it’s not a simple process?”
Builders vs Breakers (Part A)
In the preface to “TeX: The Program”, renowned computer scientist and author Donald E. Knuth wrote:
“I believe that the final bug in TeX was discovered and removed on November 27, 1985. But if, somehow, an error still lurks in the code, I shall gladly pay a finder’s fee of $20.48 to the first person who discovers it. (This is twice the previous amount, and I plan to double it again in a year; you see, I really am confident!)”(1)
In 2002, Knuth stated during a lecture that no new bugs had been discovered since around 1995 [p.3](2), but a quick check on the Internet shows that changes to the TeX code continued past 2003 (3). In October 2008 Knuth established “The Bank of San Serriffe”(4) to administer the bug bounties awarded for issues discovered since 2006, and it is clear that even since then (and up to around April 2011) the TeX code still had not been finalised.
Knuth does have a clause in the TeX changelog stating that the very last change shall be numbered “pi”; from that point forward the project would need to be forked under a name other than TeX(3). TeX was released in 1978 and continues to mature, but perfection seems to be elusive.
This compelling example illustrates that writing perfect code may seem feasible, even to the experts, but it is truly hard. The truth is that as the complexity of our systems increases and the number of components interacting with each other multiplies, it will only get harder.
Security Holes and Bugs
In an email posted to the linux-kernel mailing list, the creator and maintainer of Linux (the kernel), Linus Torvalds, argued strongly that bugs should be treated with the same amount of care as security holes, and that security professionals show an incorrect tendency to glorify bugs that lead to security issues as part of what he called the “security circus”(5).
He maintained that all bugs are important, and that something such as a “spectacular crash due to bad locking”(5) is just as important and “special” as a security flaw. Considered holistically through the familiar “CIA triad” (Confidentiality, Integrity and Availability), his example can be classified as a loss of “Availability” due to a system crash, illustrating that even normal bugs can have a security impact.
The distinction between security holes and bugs comes under further attack from Daniel J. Bernstein(6), who argues in his paper “Some Thoughts on Security After Ten Years of qmail 1.0”(7) that one of the most important rules to follow when writing secure code is to write bug-free code.
When investigating other, more formal methods for secure code development, such as the Microsoft SDL, the Cigital Touchpoints or the BSIMM, one can easily see a continuation of the bug-reduction theme hinted at above. Further investigation shows that much of the effort in these programs goes into reducing bugs and improving the quality of code, leading to more secure code.
The premise of this paper is that, when reaching out to developers and application security professionals, the first step in such a program should be to provide simple tools and strategies familiar to both disciplines, rather than security-specific paradigms and methods, in order to kickstart a secure development program.
Building secure software is often approached differently by developers and application security professionals.
In general it seems that developers come under enormous pressure to produce code on time and within budget, and thus security, as an overall concern, features low on the list of priorities(8). It would be unfair to lay the blame on developers, although apathy on the part of developers, and the lack of security education in computer programming curricula, could be criticised.
Application security professionals often have an outside-in point of view. When evaluating application security, unless the professional has a strong development background, it is hard to appreciate the complexity and time that go into developing mature, and secure, software.
After all, it is simple to illustrate how easily a stone breaks a glass window without knowing anything about the intricate process involved in producing that glass.
Creating Secure Code
Daniel J. Bernstein
Daniel J. Bernstein is the principal developer of qmail (a Mail Transfer Agent) and djbdns (a DNS server)(6). What sets him apart is the widely acknowledged fact that qmail and djbdns have an extremely low rate of security holes compared to other widely used software of the same kind (Sendmail and BIND respectively). He also established a “security guarantee” for qmail in 1997, long before TippingPoint’s Zero Day Initiative and Google’s bug bounty programs [p.2](7)(9).
In 2007 Bernstein released a paper with the title “Some thoughts on security after ten years of qmail 1.0”(7). In it Bernstein sets out to explore his successes and failures related to qmail security and comes up with some practical advice for fellow developers. In essence, Bernstein’s advice boils down to “Meta-engineer processes with lower bug rates” [p.3](7): minimise the number of bugs and thereby minimise the number of security holes.
Bernstein defines a bug as a “software feature violating the user’s requirements” and, by extension, a security hole as a “software feature violating the user’s security requirements”, a definition that echoes Torvalds’ sentiments.
Although Bernstein’s focus is mostly on the software development elements of software security, and not on the overall security architecture of large software systems, his software seems to have withstood the test of time. Through his coding practices he managed to deliver some of the most secure DNS server and MTA software in wide use on the Internet today (estimated at more than 1,000,000 installs in 2007 [p.2](7)).
As part of the initial qmail development, and in further research, Bernstein has investigated the use of environments that are robust and tolerant of failure. To this end he recommends “extreme sandboxing” [p.11](10), an approach employed to some extent by, to name but a few, Adobe Reader(11) (although with mixed results(12)(13)), Google Chrome(14) and Google Chrome’s sandbox for Adobe Flash(15) (again with mixed results(16)).
Bernstein stresses that many modern-day strategies are designed to stop “yesterday’s” attacks, and that these efforts are distractions that bring us no closer to building invulnerable software [p.2](7). He asserts that “Every security hole is therefore a bug.” and then dives into strategies to reduce security holes through bug reduction.
Eliminate Bugs
In his first strategy, Bernstein stresses the need to meta-engineer processes to achieve lower bug rates. One example is the use of coverage tests and good design choices early in the software engineering process [p.5](7).
Eliminate Code
Secondly, design a system to deliver the same functionality through less code. The less code there is, the more effective the processes to “Eliminate Bugs” will be. Examples include identifying and using common functions, handling temporary errors automatically, and re-using operating system provided functions wherever possible.
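A minimal sketch of the “re-use what the platform provides” point (my own example, not Bernstein’s code): creating a unique temporary file by hand re-implements logic the standard library already gets right, adding code, and therefore potential bugs, for no new functionality.

```python
import os
import tempfile

def make_temp_file_by_hand(directory):
    """More code, more edge cases: name collisions, retry loops, permissions."""
    counter = 0
    while True:
        path = os.path.join(directory, f"app-{os.getpid()}-{counter}.tmp")
        try:
            # O_EXCL avoids a race, but we had to know to use it.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
            return fd, path
        except FileExistsError:
            counter += 1

def make_temp_file_reused():
    """Less code: defer to the well-tested library implementation."""
    return tempfile.mkstemp()
```

Both functions work, but the second leaves far less of our own code to audit and debug.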
Eliminate Trusted Code
Finally, Bernstein describes the need to “architect computer systems to place most of the code into untrusted prisons”; a modern-day example is the extensive use of sandboxing in Chrome. It should become impossible for the imprisoned, untrusted code (such as input and parsing functions) to violate the user’s requirements.
He continues to stress that if trusted code can be reduced enough, and if bug elimination efforts pay special attention to the relatively small amount of trusted code that remains, one could potentially hope for bug-free trusted code with no security holes.
Bernstein thus provides a strategy for reducing a program’s security hole rate that is easy to communicate to developers.
In May 2010 Dr. Gary McGraw, Sammy Migues and Dr. Brian Chess released the “Building Security In Maturity Model” (BSIMM)(17). The work is the result of a statistically significant study in which the security programs of 30 top software-producing companies were examined and summarised into a practical model, based on real-world practices, for building a secure software development practice.
The companies in the study that agreed to be identified included large software or hardware manufacturers such as Adobe, EMC, Google, Intel, Intuit, Microsoft, Nokia, QUALCOMM, Symantec, and VMware.
What makes the study interesting is that it summarised real-world results from companies using practices such as the Cigital Touchpoints, OWASP CLASP and the Microsoft SDL, resulting in a model that highlights the common ground of these approaches. The BSIMM thus incorporates many elements of the aforementioned methodologies into an aggregated secure development program.
In the BSIMM, strong emphasis is placed on the need for Executive Leadership, and it is highlighted that self-organised, grassroots security programs seem to have a very bad success rate in the industry [p.6](17).
The authors of the BSIMM also found that none of the 30 companies studied successfully carried out these activities without the existence of a Software Security Group (SSG). The SSG is mainly composed of experienced developers who understand code and who have been brought up to speed with security.
A successful SSG is composed of individuals with architectural experience and a development background. The observed ratio of SSG members to developers was roughly 1 to 100, or about 1% of the developer population [p.7](17).
The goal of the BSIMM is to create a framework for companies to establish activities and evolve a software security program in a reasonable amount of time, and without unnecessary costs.
The BSIMM outlines 109 activities, divided into 12 practices, that an organisation can put in place as part of a Software Security Framework; the 12 practices are in turn grouped into four domains: Governance, Intelligence, SSDL Touchpoints and Deployment. Organisations are encouraged to partake in activities in all 12 of the practices.
The Governance domain includes practices that focus on organising, managing and measuring the program; it also encourages development and training as a key activity [p.11](17). The practices in this domain are “Strategy and Metrics”, “Compliance and Policy” and “Training”(17).
In the Intelligence domain, a central activity observed was organisational knowledge sharing and learning: putting good practices in place, identifying shortcomings, and assisting in planning through activities such as threat modelling. The practices in this domain are “Attack Models”, “Security Features and Design” and “Standards and Requirements”(17).
The authors of the BSIMM describe the SSDL Touchpoints domain as containing “practices associated with analysis and assurance of particular software development artefacts and processes”. They also note that OWASP CLASP, the Microsoft SDL and the Cigital Touchpoints all include these practices. The practices in this domain are “Architecture Analysis”, “Code Review” and “Security Testing”(17). It is also prudent to note that all of these activities should result in a reduced bug count in the code.
In the Deployment domain the BSIMM includes more of the traditional network and software security practices. The BSIMM acknowledges at multiple points that software is never complete, or completely stand-alone, and that the environment into which the software is deployed matters a great deal. Practices include “Penetration Testing”, “Software Environment” and “Configuration and Vulnerability Management”(17). This domain is invaluable as input into the other domains.
It is also interesting to note how many of the system or process vulnerabilities in OWASP CLASP’s use-cases(18), such as password management and password ageing, would be covered within this domain of the BSIMM, whereas Bernstein focuses mostly on software development.
Of particular interest is how this work intersects with Bernstein’s views on code re-use (standardisation), testing and bug reduction (quality control).
When designing an outreach program one has to take into account the effect the program should have on the attitudes and abilities of all the major stakeholders. To produce the required secure software, one needs to create an environment where developers are empowered to make the right decisions and to put the correct coding practices to use.
Application security team members should be used in support of the program, and care should be taken not to foster adversarial, us-vs-them attitudes. The BSIMM provides a great framework for the roles of all stakeholders; the application security team members should not play the part of the police.
As an academic, open source software developer operating independently, Bernstein had the mandate and the opportunity to develop qmail at a pace he set himself, without interference from management or other kinds of oversight. Indeed, Bernstein mentions that qmail grew out of frustration with Sendmail and a promise made to a friend, and that it was developed in the period after his formal lecturing commitments had been met [p.1](7).
However, most secure software development happens within groups or organisations, where executive sponsorship cannot be avoided. Unless there is a solo developer (as with qmail) who can act as judge, jury and executioner, the Executive Leadership that the BSIMM underscores will be needed.
Step 1: Executive Leadership
The first step of the outreach program is to secure Executive Leadership: reaching out to the leaders and decision makers, assuring sponsorship for a secure software development program, and establishing a high-level directive within the organisation for secure code.
The organisation needs to make it clear that it will stand behind its developers and provide the time and space for secure software to be developed. A strong example of this is Microsoft’s decision to delay Windows Vista’s release in order to rewrite a major part of its code securely(19).
Step 2: Easy and Practical
There are advanced threats and seasoned adversaries on the internet today, and chances are that software will already be vulnerable before a software security initiative is undertaken.
During a “ramp-up” phase after Executive Leadership has been achieved, there will, at first, be a vacuum in which the organisation comes to terms with its new objectives and its new processes.
An invaluable second step is to empower the application security professionals to bring “quick win” security measures to developers in the form of well-known development dogma. The steps outlined by Daniel J. Bernstein are not only very familiar to developers, they are also not specific to security but simply good coding practice, and as such are easily digested and incorporated into development practices.
Using the qmail case-study as an example, developers learn that by “reducing bugs, reducing code and reducing trusted code” their software can be made more secure. They need not be interrupted by the “security circus”(5) of endless OWASP Top 10 briefings and application security professionals telling them how they don’t understand security.
Cognisance needs to be given to the current realities, commitments, organisational habits and culture of the organisation. The qmail approach is a stealthy way to start the secure software development practice and raise the profile and importance of security in the existing environment.
The dialogue between the application security team and the developers should start to align their understanding of what “secure code” is; for developers never exposed to secure coding practices, learning to differentiate “what is right” from “what is wrong” becomes the first training priority.
The “OWASP Top 10”(20) is often used as a starting point for developer outreach and is a great guide to some very common risks; it should, however, not be presented solely from a security point of view. An attempt should be made to contextualise the risks from a developer’s point of view, for example by presenting SQL Injection (A1: Injection in the OWASP Top 10) as an input validation problem.
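A short sketch of that reframing (table and column names here are hypothetical): the security finding “SQL injection” becomes the developer concept “never splice untrusted input into code”.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # String splicing: input like "x' OR '1'='1" rewrites the query itself.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Parameter binding: the input stays data, never becomes code.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

Presented this way, the fix is not “security magic” but ordinary input handling discipline that any developer already understands.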
Step 3: BSIMM and early wins
The third part of the outreach program starts with identifying the parts of the BSIMM to which the organisation already adheres. Acknowledging this within the organisation will support the momentum towards a mature secure development program.
For any sizeable organisation or group with more than around 100 developers, one would then go about establishing an SSG. The SSG is tasked with co-ordinating activities around the secure software development program, and it also becomes the clearing house for organisational knowledge.
It is critical that lessons learned and “what went wrong” scenarios are integrated back into the organisation’s secure software development program via the SSG. As mentioned before, the SSG’s size would be around 1% of the developer population. It is not necessary for all developers to be application security specialists; in fact, that kind of blanket approach is discouraged by the BSIMM and made redundant by a general Bernstein-styled approach to development.
The SSG’s gathered knowledge becomes an important source for ongoing outreach and education.
The BSIMM emphasises organisational knowledge sharing and Bernstein highlights “meta-engineering” practices. In this phase, code re-use and code reduction become a central strategy.
“Code-volume minimisation” activities, as well as the adoption of APIs, frameworks and programming languages that reduce the amount of code needed, get integrated into development processes early(7) and make it hard to do the “wrong thing”. For example: creating tools that can take strings and explicitly tag them as “normal” (not containing escape characters), or adjusting APIs to be strict about what they allow, thus reducing error rates and improving the code(10). Another great source of trusted code could be the OWASP ESAPI project.
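A hypothetical sketch of the “tag strings as normal” idea (my example, not code from (10)): the sink API refuses anything that has not passed validation, so forgetting to validate becomes a type error at development time instead of a security hole in production.

```python
class NormalString(str):
    """A string proven to contain no quoting or escape metacharacters."""

def as_normal(s: str) -> NormalString:
    # The single validation choke-point; the character set is illustrative.
    if any(c in s for c in "'\";\\\n\0"):
        raise ValueError(f"string contains escape characters: {s!r}")
    return NormalString(s)

def build_command(filename) -> str:
    # Strict API: only pre-validated strings are accepted at all.
    if not isinstance(filename, NormalString):
        raise TypeError("build_command requires a NormalString")
    return f"cat {filename}"
```

The strictness of the API, not the diligence of each caller, is what keeps unvalidated input out of the dangerous code path.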
Bernstein also stresses the need for an environment that, when its semantics are employed, does exactly what it says. For example: where integer addition “a + b” in some languages could silently result in integer overflow errors, some programming environments implement those statements correctly and only fail in situations such as “out of memory” errors, thus protecting the developer from subtle mistakes [p.5](7).
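Python’s own integers already behave this way (they never silently wrap), but the same “do what it says or fail loudly” idea can be retrofitted onto a fixed-width environment. A hypothetical sketch:

```python
INT32_MAX = 2**31 - 1
INT32_MIN = -(2**31)

def checked_add32(a: int, b: int) -> int:
    """32-bit addition that raises instead of silently wrapping around."""
    result = a + b
    if not INT32_MIN <= result <= INT32_MAX:
        raise OverflowError(f"{a} + {b} does not fit in 32 bits")
    return result
```

With such a helper, “a + b” either means exactly what it says or fails visibly, so an overflow can never quietly corrupt a length check or a buffer size.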
The outreach program would encourage the adoption of reliable tools and practices that incorporate the above ideas.
Step 4: Toward Maturity
The final step sees the outreach program transforming into a continued education program, with the secure development program well established within the organisation. The BSIMM practices are put in place, or another methodology such as OWASP CLASP or the Microsoft SDL could be used as the secure software development program.
Application security professionals take their rightful role in the secure development program as part of the overall process and as a critical part in the initial architecture and design processes. The application security professionals participate, they do not preach or police.
Ultimately, the large majority of developers will never be security professionals, nor be integrated directly into something like an SSG.
If developers are, however, empowered with simple strategies to “code right”, for example less code, fewer bugs, code re-use and input validation (none of these being security specific, but good common sense practice for all developers), then the need for all developers to be security experts diminishes.
Application security professionals, on the other hand, need to understand development to the extent that they can fulfil their roles as outlined in the BSIMM earlier. In that sense it can be said that the problem of insecure code, and of the communication between application security professionals and developers, is not simply that security people “do not understand development”.
The BSIMM illustrates that application security professionals have a definite role to play in the overall program, but as its authors state: “No amount of traditional security knowledge can overcome software cluelessness” [p.6](17).
Ultimately, writing bug-free code is hard, but writing bug-free code, or at the very least reducing the number of bugs, is important for security. The proposed security outreach program uses application security professionals to kickstart the program, delivering simple strategies that move the organisation almost immediately into a bug-reduction mode and allow it to grow into formal programs such as OWASP CLASP, the Microsoft SDL or the BSIMM.
As to the question of whether it is a case of developers not understanding security, or of application security professionals not understanding enough about development, I believe Knuth said it best in “All Questions Answered” (2002):
“Computer science is a tremendous collaboration of people from all over the world adding little bricks to a massive wall. The individual bricks are what make it work, and not the milestones.”(2)
Or to put it in layman’s terms: “no man is an island”.
1. TrueTeX Software: Donald Knuth's Reward Check [Internet]. truetex.com [cited 2011 May 31]. Available from: http://www.truetex.com/knuthchk.htm
2. Knuth D. All Questions Answered [Internet]. Notices of the AMS. 2002 Mar [cited 2011 Jun 2]. Available from: http://www.ams.org/notices/200203/fea-knuth.pdf
3. The TeX82 bug and change log [Internet]. tug.ctan.org [cited 2011 May 31]. Available from: ftp://tug.ctan.org/pub/tex-archive/systems/knuth/dist/errata/tex82.bug
4. Knuth D. The Bank of San Serriffe [Internet]. sunburn.stanford.edu [cited 2011 May 31]. Available from: http://sunburn.stanford.edu/~knuth/boss.html
5. Torvalds L. Re: [stable] Linux 2.6.25.10 [Internet]. article.gmane.org. 2008 Jul 15 [cited 2011 Jun 1]. Available from: http://article.gmane.org/gmane.linux.kernel/706950
6. Daniel J. Bernstein. Wikipedia, the free encyclopedia [Internet]. [cited 2011 Jun 1]. Available from: https://secure.wikimedia.org/wikipedia/en/wiki/Daniel_J._Bernstein#cite_note-4
7. Bernstein DJ. Some thoughts on security after ten years of qmail 1.0 [Internet]. cr.yp.to. 2007 Nov 1 [cited 2011 Apr 18]. Available from: http://cr.yp.to/qmail/qmailsec-20071101.pdf
8. Wilander J. Security People vs Developers [Internet]. appsandsecurity.blogspot.com. 2011 Feb 13 [cited 2011 Jun 2]. Available from: http://appsandsecurity.blogspot.com/2011/02/security-people-vs-developers.html
9. Bernstein DJ. The qmail security guarantee [Internet]. cr.yp.to [cited 2011 Jun 3]. Available from: http://cr.yp.to/qmail/guarantee.html
10. Bernstein DJ. activities-20050107.pdf [Internet]. cr.yp.to. 2005 Jan 7 [cited 2011 Jun 1]. Available from: http://cr.yp.to/cv/activities-20050107.pdf
11. Arkin B. Introducing Adobe Reader Protected Mode [Internet]. Adobe Secure Software Engineering Team (ASSET) Blog. 2010 Jul 20. Available from: http://blogs.adobe.com/asset/2010/07/introducing-adobe-reader-protected-mode.html
12. Constantin L. Critical Security Updates Available for Adobe Reader and Acrobat [Internet]. news.softpedia.com. 2011 Apr 21.
13. Prince B. Adobe Flash Sandbox Bypassed by Security Researcher [Internet]. eWeek.com. 2011 Jan 7. Available from: http://www.eweek.com/c/a/Security/Adobe-Flash-Sandbox-Bypassed-by-Security-Researcher-576573/
14. Sylvain N. A new approach to browser security: the Google Chrome Sandbox [Internet]. blog.chromium.org. 2008 Oct 2 [cited 2011 May 26]. Available from: http://blog.chromium.org/2008/10/new-approach-to-browser-security-google.html
15. Schuh J, Pizano C. Rolling out a sandbox for Adobe Flash Player [Internet]. blog.chromium.org. 2010 Dec 1. Available from: http://blog.chromium.org/2010/12/rolling-out-sandbox-for-adobe-flash.html
16. Constantin L. Google Denies Chrome Sandbox Breach [Internet]. news.softpedia.com. 2011 May 16. Available from: http://news.softpedia.com/news/Google-Denies-Chrome-Sandbox-Breach-200585.shtml
17. McGraw G, Chess B, Migues S. Building Security In Maturity Model [Internet]. 2010 May. Available from: http://bsimm.com/
18. Graham D. Vulnerability Use Cases. OWASP; 2006.
19. McMillan R. Microsoft bets big on Vista security [Internet]. pcworld.idg.com.au. 2006 Jul 25 [cited 2011 Jun 3]. Available from: http://www.pcworld.idg.com.au/article/160878/microsoft_bets_big_vista_security/?fp=2&fpid=1
20. OWASP Top 10 2010 [Internet]. owasp.org [cited 2011 Jun 3]. Available from: https://www.owasp.org/index.php/Top_10_2010-Main