5 (software) security books that every (software) developer should read

I must admit that the title is a little bit catchy; a better title would have been “5 software security books that every developer should be aware of“. Depending on your interest you might want to read these books entirely, or you could settle for just knowing that they exist. There must be tons of software security books on the market, but this is my short list of books that I think every developer interested in software security should be aware of.

Hacking – the art of exploitation This book explains the basics of different hacking techniques, especially the non-web ones: how to find vulnerabilities (and defend against them) like buffer overflows or stack-based buffer overflows, how to write shellcode, some basic concepts of cryptography and cryptography-related attacks like the man-in-the-middle attack on an SSL connection. The author tried to make the text accessible to non-technical people, but some programming experience (ideally C/C++) is required in order to get the best out of this book. You can see my full review of the book here.

Iron-Clad Java: Building secure web applications This book presents the hacking techniques and the countermeasures for web applications; you can see this book as complementary to the previous one: the first one covers the non-web hacking techniques, this one covers (only) web hacking techniques: XSS, CSRF, how to protect data at rest, SQL injection and other types of injection attacks. In order to get the most out of the book, some Java knowledge is required. You can see my full review of the book here.

Software Security – Building security in This book explains how to introduce security into the SDLC: how to introduce abuse cases and security requirements in the requirements phase, and how to introduce risk analysis (also known as threat modeling) in the design and software qualification phases. I really think that every software developer should at least read the first chapter of the book, where the author explains why the old way of securing applications (seeing software applications as “black boxes” that can be protected using firewalls and IDS/IPS) cannot work anymore in today’s software landscape. You can see my full review of the book here: Part 1, Part 2 and Part 3.

The Tangled Web: A Guide to Securing Modern Web Applications This is another technical security book in which you will not see a single line of code (Software Security – Building security in is another one), but it is highly instructive, especially if you are a web developer. The book presents all the “bricks” of today’s Internet: HTTP, WWW, HTML, cookies, scripting languages, how these bricks are implemented in different browsers and especially how the browsers implement the security mechanisms against rogue applications. You can see my full review of the book here.

Threat modeling – designing for security Threat modeling techniques (also known as architectural risk analysis) have been around for some time, but what has changed in recent years is the accessibility of these techniques to software developers. This book is one of the reasons why threat modeling is accessible to developers. The book is very dense but it assumes no prior knowledge of the subject. If you are interested in the threat modeling topic you can check this post: threat modeling for mere mortals.

Book review : The Tangled Web: A Guide to Securing Modern Web Applications

This is a review of the book The Tangled Web: A Guide to Securing Modern Web Applications.

(My) Conclusion

This book does a great job of explaining how the “bricks” of the Internet (HTTP, HTML, WWW, cookies, script languages) work (or don’t) from the security point of view. It also gives a very systematic coverage of browser (in)security, even if some of the information is starting to become outdated. The target audience is web developers interested in the inner workings of browsers who want to write more secure code.

Chapter 1 Security in the world of web applications

The goal of this chapter is to set the scene for the rest of the book. The main ideas revolve around the fact that security is a non-algorithmic problem and that the best ways to tackle security problems are very empirical (learning from mistakes, developing tools to detect and correct problems, and planning to have everything compromised).

Another part of the chapter is dedicated to the history of the web, because for the author it is very important to understand the history behind the well-known “bricks” of the Internet (HTTP, HTML, WWW) in order to understand why they are completely broken from the security point of view. For a long time the evolution of Internet standards was dominated by vendors or stakeholders who did not care much about the long-term prospects of the technology; see the Wikipedia Browser Wars page for a few examples.

Part I Anatomy of the web (Chapters 2 to 8)

The first part of the book is about the building blocks of the web: the HTTP protocol, the HTML language, CSS, the scripting languages (JavaScript, VBScript) and the external browser plug-ins (Flash, Silverlight). For each of these building blocks, the author presents how they are implemented and how they work (or fail to work) in different browsers, what standards are supposed to drive their development and how these standards are very often incomplete or oblivious to security requirements.

In this part of the book the author speaks only briefly about the security features, since the second part of the book is supposed to be focused on security.

Part II Browser security features (Chapter 9 to 15)

The first security feature presented is the SOP (Same-Origin Policy), which is also the most important mechanism for protecting against hostile applications. The SOP behaviour is presented for DOM documents, for XMLHttpRequest and for WebStorage, along with how the security policies for cookies can also impact the SOP.

A less known topic that is treated is SOP inheritance: how the SOP is applied to pseudo-URLs like about:, javascript: and data:. The conclusion is that each browser treats SOP inheritance in a different (and possibly incompatible) way, and it is preferable to create new frames or windows by pointing them to a server-supplied blank page with a definite origin.

Other lesser-known browser features that can affect security are explained in depth: the way browsers recognize the content of a response (a.k.a. content sniffing), navigation to sensitive URI schemes like “javascript:”, “vbscript:”, “file:”, “about:” and “res:”, and the way browsers protect themselves against rogue scripts (in the case of rogue-script protection the author points out the inefficiency of the protections).

The last part is about the different mechanisms browsers use to give special privileges to some specific web sites; the mechanisms explained are the form-based password managers, the hard-coded domain names and the Internet Explorer zone model.

Part III Glimpse of things to come (Chapter 16 to 17)

This part is about the developments done by the industry to enhance the security of browsers.

For the author there are two ways browser security could evolve: extend the existing framework(s), or restrict the existing framework(s) by creating new boundaries on top of the existing browser security model.

For the first alternative, the following solutions are presented: the W3C Cross-Origin Resource Sharing specification, the Microsoft response to CORS called XDomainRequest (which, by the way, was deprecated by Microsoft) and the W3C Uniform Messaging Policy.

For the second alternative the following solutions are presented: the W3C (former Mozilla) Content Security Policy, (WebKit) sandboxed frames and Strict Transport Security.

The last part is about how newly planned APIs and features could impact browser and application security. Briefly explained are “Binary HTTP”, WebSocket (which was not yet a standard when the book was written), JavaScript offline applications and P2P networking.

Chapter 18 Common web vulnerabilities

The last chapter is a nomenclature of different known vulnerabilities grouped by the place where they can happen (server side, client side). For each item a brief definition is given and links are provided to the previous chapters where the item was discussed.

Book review : Practical Anonymity: Hiding in Plain Sight Online

This is a review of the Practical Anonymity: Hiding in Plain Sight Online book.


This is not a technical book about the inner workings of Tor or Tails, and I think a better title would be “How to use Tor and Tails for dummies”. Almost all the information in the book can be found in the official documentation; the only positive point is that all the needed information is gathered in one single place.

Chapter 1. Anonymity and Censorship Circumvention

This first chapter is an introduction to what online anonymity is, why it is important for some people and how it can be achieved using Tor. The chapter also contains the fundamentals of how Tor works, what it can do for online anonymity and some advice about how to use it safely.

Chapter 2. Using the Tor Browser Bundle

The chapter presents the TBB (Tor Browser Bundle) in detail. The TBB is composed of three components: Vidalia, which is the control panel for Tor, the Tor software itself and the Mozilla Firefox browser. Each of these three components is described from the user’s point of view, and each of the possible configuration options is presented in detail.

Chapter 3. Using Tails

Tails is a Linux distribution that includes Tor and other software to provide an operating system that enhances privacy. The Tails network stack has been modified so that all Internet connectivity is routed through the Tor network.

In order to enhance privacy, Tails is delivered with the following packages :

  • Firefox
  • Pidgin
  • GNU Privacy Guard
  • Metadata Anonymization Toolkit
  • Unsafe Web Browser

Detailed instructions are presented on how to create a bootable DVD and a bootable USB stick, and how to run and configure the operating system. The persistent storage feature of Tails is presented in detail so that the reader can understand its benefits and drawbacks.

Chapter 4. Tor Relays, Bridges and Obfsproxy

The chapter is about how Tor adversaries can disrupt the network and how the Tor developers are trying to find new techniques to work around these disruptions.

One way to deny users access to the Tor network is to filter the (nine) Tor directory authorities, the servers that distribute information about active Tor entry points. One way to avoid this restriction is the use of Tor bridge relays. A bridge relay is like any other Tor transit relay; the only difference is that it is not publicly listed, and it is used only for entering the Tor network from places where public Tor relays are blocked. There are different mechanisms to retrieve the list of these bridge relays, like a web page on the Tor website or requests sent by email.

Another way to disrupt the Tor network is to filter the Tor traffic, knowing that Tor protocol packets have a distinguishable signature. One way to avoid this packet filtering is to conceal the Tor packets inside other kinds of packets. The framework that can be used to implement this kind of functionality is called Obfsproxy (obfuscated proxy). Some of the plug-ins that use Obfsproxy: StegoTorus, Dust, SkypeMorph.

Chapter 5. Sharing Tor Resources

This chapter describes how a user can share his bandwidth by becoming a Tor bridge relay, a Tor transit relay or a Tor exit relay. Detailed settings are described for each type of relay, along with the risks incurred by the user.

Chapter 6. Tor Hidden Services

A (Tor) hidden service is a server that can be accessed by other clients within the anonymity network, while the actual location (IP address) of the server remains anonymous. The hidden service protocol is briefly presented, followed by how to set up a hidden service. For setting up a hidden service the main takeaways are:

  • install Tor and the service that you need on a VM.
  • run the VM on a VPS (virtual private server) hosted in a country with privacy-friendly legislation in place.
  • the VM can/should be encrypted, able to be power cycled, and should have no way to know the IP address or domain name of the computer on which it is running.

Chapter 7. Email Security and Anonymity practices.

This last chapter is about email anonymity in general and how the use of Tor can improve it. The main takeaways:

  • choose an email provider that does not require another email address or a mobile phone number.
  • choose an email provider that supports HTTPS.
  • encrypt the content of your emails.
  • ALWAYS register and connect to the mailbox using Tor.



Book review: Iron-Clad Java Building Secure Web Applications

This is a review of the Iron-Clad Java: Building Secure Web Applications book.

(My) Conclusion

I will start with the conclusion because it’s maybe the most important part of this review.

For me this is a must-read book if you want to write more robust (web and non-web) applications in Java; it covers a very large range of topics, from the basics of securing a web application using HTTP/S response headers to handling the encryption of sensitive information in the right way.

Chapter 1: Web Application Security Basics

This chapter is an introduction to web application security and can be split into two different types of items.

The first type of items is what I would call “low-hanging fruit”, or how you could improve the security of your application with very small effort:

  • The use of the HTTP/S POST request method is advised over the use of HTTP/S GET. In the case of POST the parameters are embedded in the request body, so the parameters are not stored in and visible from the browser history, and if used over HTTPS they are opaque to a possible attacker.
  • The use of the HTTP/S response headers:
    • Cache-Control – directive that instructs the browser how it should cache the data.
    • Strict-Transport-Security – response header that forces the browser to always use the HTTPS protocol. This can protect against protocol downgrade attacks and cookie hijacking.
    • X-Frame-Options – response header that can be used to indicate whether or not a browser should be allowed to render a page in a <frame>, <iframe> or <object>. Sites can use this to avoid clickjacking attacks, by ensuring that their content is not embedded into other sites.
    • X-XSS-Protection – response header that can help stop some XSS attacks (this is implemented and recognized only by Microsoft IE products).
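As a quick illustration of these headers (my own sketch; the exact values are common hardened defaults, not necessarily the ones from the book):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative set of the security response headers discussed above;
// in a real application they would be set on the HTTP response,
// e.g. from a servlet filter.
public class SecurityHeaders {

    static Map<String, String> recommended() {
        Map<String, String> headers = new LinkedHashMap<>();
        // Forbid caching of sensitive responses.
        headers.put("Cache-Control", "no-store");
        // Force HTTPS for one year, including subdomains.
        headers.put("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
        // Refuse to be rendered inside a frame (anti-clickjacking).
        headers.put("X-Frame-Options", "DENY");
        // Enable the (legacy, IE-only at the time) reflective XSS filter.
        headers.put("X-XSS-Protection", "1; mode=block");
        return headers;
    }

    public static void main(String[] args) {
        recommended().forEach((name, value) -> System.out.println(name + ": " + value));
    }
}
```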

The second type of items covers more complex topics like input validation and security controls. For these items the authors just scratch the surface, because all of them are treated in more detail in later chapters of the book.

Chapter 2: Authentication and Session Management

This chapter is about how a secure authentication feature should work; the authentication topic includes the login process, session management, password storage and identity federation.

The first part presents the general workflow of login and session management (see next picture), and for every step of the workflow some dos and don’ts are described.

login and session management workflow

The second part of the chapter is about common attacks on authentication, and for each kind of attack a solution to mitigate it is also presented. This part of the chapter is strongly inspired by the OWASP Session Management Cheat Sheet, which is rather normal because one of the authors (Jim Manico) is the project manager of the OWASP Cheat Sheet Series.

If you want a quick view of this chapter you can take a look at the presentation Authentication and Session Management done by Jim.

Even if you are not implementing an authentication framework for your application, you can still find good advice that applies to other web applications, like the use of the secure and HttpOnly attributes for cookies and increasing the session ID length.
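As a small illustration of those cookie attributes (my sketch, not the book’s code), using the JDK’s java.net.HttpCookie; in a servlet container you would set the same flags on javax.servlet.http.Cookie:

```java
import java.net.HttpCookie;

// Sketch: marking a session cookie as secure (HTTPS-only) and HttpOnly
// (not readable from JavaScript), as advised in the chapter.
public class SessionCookie {

    static HttpCookie create(String sessionId) {
        HttpCookie cookie = new HttpCookie("JSESSIONID", sessionId);
        cookie.setSecure(true);    // only ever sent over HTTPS
        cookie.setHttpOnly(true);  // not exposed to document.cookie
        return cookie;
    }

    public static void main(String[] args) {
        HttpCookie c = create("opaque-session-id");
        System.out.println(c.getName() + " secure=" + c.getSecure() + " httpOnly=" + c.isHttpOnly());
    }
}
```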

Chapter 3: Access Control

The chapter is about the advantages and pitfalls of implementing an authorization framework, and it can be split into three parts.

The first part describes the goal of an authorization framework and defines some core terms:

  • subject: the service making the request.
  • subject attributes: the attributes that define the service making the request.
  • group: basic organizational structure.
  • role: a functional abstraction that uniquely describes system collaborators with similar or unique duties.
  • object: the data being operated on.
  • object attributes: the attributes that define the type of object being operated on.
  • access control rules: the decisions that need to be made to determine if a subject is allowed to access an object.
  • policy enforcement point: the place in the code where the access control check is made.
  • policy decision point: the engine that takes the subject, subject attributes, object and object attributes and evaluates them to make an access control decision.
  • policy administration point: the administrative entry point into the access control system.

The second part of the chapter describes some access control (positive) patterns and anti-patterns.

Some of the (positive) access control patterns: have a centralized policy enforcement point and policy decision point (not spread throughout the entire code); make all authorization decisions on the server side only (never trust the client); allow changes in the access control rules to be made dynamically (it should not be necessary to recompile or restart/redeploy the application).

As for the anti-patterns, some of them are simply the opposite of the (positive) patterns: hard-coded policy (the opposite of “changes in the access control rules should be done dynamically”), adding the access control manually to every endpoint (the opposite of having a centralized policy enforcement point and policy decision point).

Other anti-patterns revolve around the idea of never trusting the client: do not use request data to make access control policy decisions, and do not fail open (the access control framework should cope with wrong or missing parameters coming from the client).

The third part of the chapter is about different approaches (two, actually) to implementing an access control framework. The most used approach is RBAC (Role-Based Access Control), implemented by some well-known Java access control frameworks like Apache Shiro and Spring Security. The most important limitation of RBAC is the difficulty of implementing data-specific/contextual access control. The ABAC (Attribute-Based Access Control) paradigm can solve the data-specific/contextual access control problem, but there are no mature frameworks on the market implementing it.
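To make the policy enforcement point / policy decision point vocabulary concrete, here is a toy, hypothetical sketch of mine (the roles and actions are made up; real applications would use Apache Shiro or Spring Security instead):

```java
import java.util.Map;
import java.util.Set;

// Toy role-based policy decision point; roles and permissions are hypothetical.
public class AccessControl {

    // role -> allowed actions: the centrally administered "policy".
    static final Map<String, Set<String>> POLICY = Map.of(
            "admin", Set.of("read", "write", "delete"),
            "user", Set.of("read"));

    // The single place where access decisions are made (policy decision point);
    // callers (policy enforcement points) only ever invoke this method.
    static boolean isAllowed(String role, String action) {
        return POLICY.getOrDefault(role, Set.of()).contains(action);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("user", "delete"));  // false
        System.out.println(isAllowed("admin", "delete")); // true
    }
}
```

Keeping the decision logic in one method is exactly the “centralized policy decision point” pattern; hard-coding `if (role.equals("admin"))` checks at every endpoint would be the anti-pattern.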

Chapter 4: Cross-Site Scripting Defense

This chapter is about the most common vulnerability found across the web and has two parts: the presentation of the different types of cross-site scripting (XSS) and the ways to defend against it.

XSS is a type of attack that consists of injecting untrusted data into the victim’s (web) browser. There are three types of XSS:

  • reflected XSS (non-persistent) – the attacker tampers with the HTTP request to submit malicious JavaScript code. Reflected attacks are delivered to victims via e-mail messages or on some other web site. When a user clicks on a malicious link or submits a specially crafted form, the injected code travels to the vulnerable web site, which reflects the attack back to the user’s browser. The browser then executes the code because it came from a “trusted” server.
  • stored XSS (persistent XSS) – the malicious script is stored on the server hosting the vulnerable web application (usually in the database) and is served later to other users of the web application when they load the vulnerable page. In this case the victim is not required to take any attacker-initiated action.
  • DOM-based XSS – the attack payload is executed as a result of modifying the DOM “environment” in the victim’s browser.

As for the defense techniques, the big picture is that input validation and output encoding should fix (almost) all the problems, but very often various factors need to be considered when choosing the defense technique.

Some projects are presented for output encoding (OWASP Java Encoder Project) and for HTML sanitization (OWASP HTML Sanitizer, OWASP AntiSamy).
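To give an idea of what output encoding means in practice, here is a hand-rolled HTML entity encoder of mine (for illustration only; production code should use a maintained library such as the OWASP Java Encoder instead of writing its own):

```java
// Illustration of HTML output encoding: the five characters that are
// significant in an HTML text context are replaced by entities, so
// untrusted data cannot break out into markup or script.
public class HtmlEncode {

    static String forHtml(String untrusted) {
        StringBuilder sb = new StringBuilder();
        for (char c : untrusted.toCharArray()) {
            switch (c) {
                case '<': sb.append("&lt;"); break;
                case '>': sb.append("&gt;"); break;
                case '&': sb.append("&amp;"); break;
                case '"': sb.append("&quot;"); break;
                case '\'': sb.append("&#39;"); break;
                default: sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(forHtml("<script>alert('xss')</script>"));
    }
}
```

Note that this only covers the HTML text context; attribute, JavaScript, CSS and URL contexts each need their own encoding rules, which is precisely why a dedicated library is recommended.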

Chapter 5: Cross-Site Request Forgery Defense and Clickjacking

Chapter 6: Protecting Sensitive Data

This chapter is articulated around three topics: how to protect (sensitive) data in transit, how to protect (sensitive) data at rest and the generation of secure random numbers.

How to protect the data in transit

The standard way to protect data in transit is the use of the Transport Layer Security (TLS) cryptographic protocol. In the case of web applications all the low-level details are handled by the web server/application server and by the client browser, but if you need a secure communications channel programmatically you can use the Java Secure Socket Extension (JSSE). The authors’ recommendation for the cipher suites is to use the JSSE defaults.
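As a small illustration (my sketch, not the book’s code), sticking to the JSSE defaults programmatically is as simple as asking the default SSLContext for its socket factory:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

// Sketch: obtaining a TLS socket factory with the JSSE default
// configuration (protocols, cipher suites, trust material).
// No network connection is opened here.
public class TlsDefaults {

    static SSLSocketFactory defaultFactory() {
        try {
            // SSLContext.getDefault() returns the platform default context,
            // already initialized with the default truststore.
            return SSLContext.getDefault().getSocketFactory();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // List the cipher suites enabled by default.
        for (String suite : defaultFactory().getDefaultCipherSuites()) {
            System.out.println(suite);
        }
    }
}
```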

Another topic treated by the authors is certificate and key management in Java. The notions of truststore and keystore are very well explained and examples of how to use the keytool tool are provided. Last but not least, examples of how to manipulate truststores and keystores programmatically are also provided.

How to protect data at rest

The goal is to securely store the data, but in a reversible way: the data must be wrapped in protection when it is stored, and the protection must be unwrapped later when it is used.

For this kind of scenario, the authors focus on Keyczar, which is an (open source) framework created by the Google Security Team with the goal of making cryptography easier and safer for developers to use. Developers should not be able to inadvertently expose key material, use weak key lengths or deprecated algorithms, or improperly use cryptographic modes.

Examples are provided about how to use Keyczar for encryption (symmetric and asymmetric) and for signing purposes.
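Keyczar has since been deprecated by Google, but the “safe defaults” idea it promoted can be sketched with the standard javax.crypto API; below is a minimal authenticated-encryption example (AES-GCM) of my own, not Keyczar’s actual API:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;

// Minimal authenticated-encryption sketch (AES-GCM) using only the JDK.
public class AtRest {

    static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Returns iv || ciphertext; the random IV must never be reused with the same key.
    static byte[] encrypt(SecretKey key, byte[] plaintext) {
        try {
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = c.doFinal(plaintext);
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return out;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key,
                   new GCMParameterSpec(128, Arrays.copyOfRange(ivAndCiphertext, 0, 12)));
            return c.doFinal(Arrays.copyOfRange(ivAndCiphertext, 12, ivAndCiphertext.length));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        SecretKey key = newKey();
        byte[] wrapped = encrypt(key, "secret data".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decrypt(key, wrapped), StandardCharsets.UTF_8));
    }
}
```

The point of a framework like Keyczar was precisely to hide choices such as the mode, the IV handling and the tag length behind safe defaults, so developers could not get them wrong.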

Generation of secure random numbers

The last topic of the chapter is the Java support for the generation of secure random numbers, like the optimal way of using java.security.SecureRandom (knowing that the implementation depends on the underlying platform) and the new cryptographic features of Java 8 (enhanced certificate revocation checking, the TLS Server Name Indication extension).
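A typical SecureRandom usage pattern (my sketch, not the book’s code) is generating an unguessable token:

```java
import java.security.SecureRandom;
import java.util.Base64;

// Sketch: using SecureRandom (a CSPRNG) instead of java.util.Random
// for anything security-sensitive, e.g. tokens or session identifiers.
public class Tokens {

    // One shared instance; SecureRandom is thread-safe and self-seeding.
    private static final SecureRandom RNG = new SecureRandom();

    static String newToken(int numBytes) {
        byte[] bytes = new byte[numBytes];
        RNG.nextBytes(bytes);
        // URL-safe encoding so the token can travel in links and cookies.
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(newToken(16)); // 128 bits of randomness
    }
}
```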

Chapter 7: SQL Injection and other injection attacks

This chapter is dedicated to injection attacks; SQL injection is treated in more detail than the other types of injection.

SQL injection

The SQL injection mechanism and the usual defenses are very well explained. What is interesting is that the authors propose solutions to limit the impact of SQL injection when the “classical” solution of query parametrization cannot be applied (in the case of legacy applications, for example): the use of input validation, the use of database permissions and the verification of the number of results.
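When query parametrization is really not an option, the whitelist flavor of input validation can be sketched like this (a hypothetical example of mine; a PreparedStatement remains the first-choice defense):

```java
import java.util.regex.Pattern;

// Sketch of whitelist input validation as a last-resort SQL injection
// mitigation for legacy code; prepared statements are the primary defense.
public class SqlInputValidation {

    // Hypothetical rule: order IDs are 1 to 10 digits, nothing else.
    private static final Pattern ORDER_ID = Pattern.compile("^[0-9]{1,10}$");

    static boolean isValidOrderId(String candidate) {
        return candidate != null && ORDER_ID.matcher(candidate).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidOrderId("12345"));              // true
        System.out.println(isValidOrderId("1 OR 1=1"));           // false
        System.out.println(isValidOrderId("'; DROP TABLE x;--")); // false
    }
}
```

Because nothing outside the whitelist ever reaches the query string, the classic injection payloads are rejected before they can do harm.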

Other types of injections

XML injection, JSON-based injection and command injection are very briefly presented, and the main takeaways are the following:

  • use a safe parser (like JSON.parse) when parsing untrusted JSON.
  • when receiving untrusted XML, an XML schema should be applied to ensure proper XML structure.
  • when an XML query language (XPath) is intermixed with untrusted data, query parametrization or encoding is necessary.
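Related to the XML takeaways above, one JDK-level precaution worth knowing (my addition, not from the book) is to disable DTDs before parsing untrusted XML, which blocks XXE attacks; a minimal sketch:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Sketch: hardening the JDK DOM parser against XXE before feeding it
// untrusted XML (schema validation would come on top of this).
public class SafeXml {

    static Document parse(String untrustedXml) {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            // Refuse DTDs entirely, which also blocks external entities (XXE).
            dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            dbf.setXIncludeAware(false);
            dbf.setExpandEntityReferences(false);
            DocumentBuilder db = dbf.newDocumentBuilder();
            return db.parse(new ByteArrayInputStream(untrustedXml.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            // Malformed XML, or a document that tried to declare a DTD.
            throw new IllegalArgumentException("rejected or malformed XML", e);
        }
    }

    public static void main(String[] args) {
        Document doc = parse("<order><id>42</id></order>");
        System.out.println(doc.getDocumentElement().getTagName());
    }
}
```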

Chapter 8: Safe File Upload and File I/O

The chapter speaks about how to safely treat files coming from external sources and how to protect against attacks like file path injection, null byte injection and quota overload DoS.

The main takeaways are the following: validate the filenames (reject filenames containing dangerous characters like “/” or “\”), set a per-user upload quota, save the files to a non-accessible directory, and create a filename reference map linking the actual file name to a machine-generated name, then use this generated file name.
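The filename validation and reference map ideas can be sketched as follows (simplified and hypothetical; a real implementation would also persist the map, enforce the quota and write to a non-accessible directory):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the takeaways above: reject dangerous filenames, then store
// files under a machine-generated name and keep a reference map
// from the generated name back to the original one.
public class SafeUpload {

    private final Map<String, String> referenceMap = new ConcurrentHashMap<>();

    static boolean isSafeFilename(String name) {
        // Reject path separators, traversal sequences and null bytes.
        return name != null && !name.isEmpty()
                && !name.contains("/") && !name.contains("\\")
                && !name.contains("..") && name.indexOf('\0') < 0;
    }

    // Returns the generated name the file should be stored under.
    String register(String originalName) {
        if (!isSafeFilename(originalName)) {
            throw new IllegalArgumentException("unsafe filename");
        }
        String generated = UUID.randomUUID().toString();
        referenceMap.put(generated, originalName);
        return generated;
    }

    String originalNameOf(String generated) {
        return referenceMap.get(generated);
    }

    public static void main(String[] args) {
        SafeUpload uploads = new SafeUpload();
        String stored = uploads.register("report.pdf");
        System.out.println(stored + " -> " + uploads.originalNameOf(stored));
    }
}
```

Because only the opaque generated name ever touches the filesystem, a hostile original filename can never be used for path injection.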

Chapter 9: Logging, Error Handling and Intrusion Detection


Logging

What should be logged: what happened, who did it, when it happened and what data has been modified; and what should not be logged: sensitive information like session IDs and personal information.

Some logging frameworks for security are presented, like OWASP ESAPI Logging and Logback. If you are interested in more details about security logging you can check the OWASP Logging Cheat Sheet.

Error Handling

On error handling, the main idea is to not leak stack traces to the external world, since they could give an attacker valuable information about your application/infrastructure. It is possible to prevent this by registering, at the application level, static pages for each type of error code or exception type.

Intrusion Detection

The last part of the chapter is about techniques that help monitor, detect and defend against different types of attacks. Besides the “craft it yourself” solutions, the authors also present the OWASP AppSensor application.

Chapter 10: Secure Software Development Lifecycle

The last chapter is about the SSDLC (Secure Software Development Life Cycle) and how security could be included in each step of the development cycle. For me this chapter is not the best one, but if you are interested in this topic I highly recommend the Software Security: Building Security In book (you can read my own review of the book here, here and here).

Book review: Building microservices (part 1)

This is the first part of the review of the Building Microservices book.

Chapter 1: Microservices

This first chapter is a quick introduction to microservices: the definition, the genesis of the concept and the key benefits. The microservices idea emerged from the (new) ways of crafting software today; these new ways imply the use of domain-driven design, continuous delivery, virtualization, infrastructure automation and small autonomous teams.

The author is defining the microservices as “small, autonomous services that work together”.

The key benefits of the microservices are:

  • technology heterogeneity; use the right tool for the right job.
  • resilience; because microservices have quite well-defined service boundaries, failures do not cascade and it’s easy to quickly find and isolate the problem(s).
  • scaling; microservices can be deployed and run independently, so it is possible to choose which microservices need special attention and scale them accordingly.
  • ease of deployment; microservices are independent by nature, so each one can be (re)deployed individually.
  • optimizing for replaceability; due to their autonomous nature, microservices can easily be replaced with new versions.

Chapter 2: The Evolutionary Architect

This chapter is about the role of the architect in the new IT landscape; for the author the qualities of an IT architect are the following: he should have a vision and be able to communicate it very clearly; he should have empathy, so he can understand the impact of his decisions on colleagues and customers; he should be able to collaborate with others in order to define and execute the vision; he should be adaptable, so he can change the vision as the requirements change; he should be autonomous, so he can find the right balance between standardizing and enabling autonomy for the team.

For me this chapter does not fit very well in the book, because all the ideas (from the chapter) could just as well be applied to monolithic systems.

Chapter 3: How to model services

The goal of this chapter is to split services the right way by finding the boundaries between them. In order to find the right service boundaries, one must look at the problem from the model’s point of view.

The author introduces the notion of bounded context, a notion coined by Eric Evans in his Domain-Driven Design book. Any domain consists of multiple bounded contexts, and residing within each are components that do not need to be communicated outside, as well as things that should be shared externally with other bounded contexts. By thinking in terms of the model, it is possible to avoid the tight-coupling pitfall. So each bounded context represents an ideal candidate for a microservice.

This cut along bounded contexts is rather a vertical slice, but in some situations, due to technical boundaries, the cut can be done horizontally.

Chapter 4: Integration

All the ideas of this chapter revolve around three axes: inter-microservice integration, user interface integration with microservices, and COTS (Commercial Off-The-Shelf software) integration with microservices.

For the inter-microservice integration, different communication styles (synchronous versus asynchronous), different ways to manage (complex) business processes (orchestration versus choreography) and technologies (RPC, SOAP, REST) are very carefully explained, with all their advantages and drawbacks. The author tends to prefer the asynchronous-choreographed style using REST, but he emphasizes that there is no ideal solution.

Then some integration issues are tackled: the service versioning problem, and how to (wisely) reuse code between microservices and/or client libraries; no one-size-fits-all solution is proposed, just different options.

For the user interface integration with microservices some interesting ideas are presented, like creating a different backend API if your microservices are used by different UI technologies (a backend API for the mobile application and a different backend API for the web application). Another interesting idea is to have services directly serving up UI components.

The integration of microservices with COTS revolves around the problems that a team should solve in order to integrate with COTS: lack of control (the COTS product could use a different technology stack than your microservices) and difficult customization of the COTS product.

Chapter 5: Splitting the Monolith

The goal of this chapter is to present some ideas about how to split a monolithic application into microservices. The first proposed step is to find portions of the code that can be treated in isolation and worked on without impacting the rest of the codebase (these portions of code are called seams, a word coined by Michael Feathers in Working Effectively with Legacy Code). The seams are the perfect candidates for service boundaries.

In order to easily refactor the code to create the seams, the author recommends the Structure101 application, which is an ADE (Architecture Development Environment); for alternatives to Structure101 you can see Architecture Development Environment: Top 5 Alternatives to Structure101.

The rest of the chapter is about how to find seams in the database and in the code that uses it. The overall idea is that every microservice should have its own independent (DB) schema. Different problems arise when this happens, like foreign key relationships, static data shared in the DB, shared tables and transactional boundaries. Each of these problems is discussed in detail and multiple solutions are presented.

The author recognizes that splitting the monolith is not trivial at all and that one should start very small (for the DB, for example, a first step would be to split the schema and keep the code as it was before; a second step would be to migrate some parts of the monolithic code towards microservices). He also recognizes that sometimes the splitting brings new problems (like the transactional boundaries).

Chapter 6: Deployment

This chapter presents different deployment techniques for microservices. The first topic tackled is how the microservices code should be stored and how the Continuous Integration (CI) process should work; multiple options are discussed: one code repository for all microservices and one CI server; one code repository and one CI server per microservice; build pipelines per operating system or per platform artifact.

A second topic around deployment is the infrastructure on which the microservices are deployed. Multiple deployment scenarios are presented: all microservices deployed on the same host, one microservice per host, virtualized hosts, dockerized hosts. The most important idea in this part is that all the deployment and host creation should be automated; automation is essential for keeping the team(s) productive.



Book review: Software Security: Building Security in – Part III: Software Security Grows Up

This is a review of the third part of the Software Security: Building Security in book. This part is dedicated to how to introduce a software security program in your company; it's a topic I'm much less interested in than the previous ones, so the review will be quite short.

Chapter 10: An Enterprise Software Security Program

The chapter contains some ideas about how to ignite a software security program in a company. The first and most important idea is that the software security practices must have a clear and explicit connection with the business mission; the goal of (any) software is to fulfill the business needs.

In order to adopt an SDL (Secure Development Lifecycle) the author proposes a roadmap of five steps:

  1. Build a plan that is tailored for you. Start from how the software is built today, then plan the building blocks for the future change.
  2. Roll out individual best practice initiatives carefully. Establish champions to take ownership of each initiative.
  3. Train your people. Train the developers and (IT) architects to be aware of security and of the central role that they play in the SDL (Secure Development Lifecycle).
  4. Establish a metrics program. In order to measure the progress, some metrics are needed.
  5. Establish and sustain a continuous improvement capability. Create a situation in which continuous improvement can be sustained by measuring results and refocusing on the weakest aspect of the SDL.

Chapter 11: Knowledge for Software Security

For the author there is a clear difference between knowledge and information: knowledge is information in context, information put to work using processes and procedures. Because knowledge is so important, the author proposes a way to structure the software security knowledge called the "Software Security Unified Knowledge Architecture":

Software Security Unified Knowledge Architecture

The Software Security Unified Knowledge Architecture has seven catalogs in three categories:

  • The prescriptive knowledge category includes three knowledge catalogs: principles, guidelines, and rules. The principles are high-level architectural principles, the rules contain tactical low-level rules, and the guidelines sit between the two.
  • The diagnostic knowledge category includes three knowledge catalogs: attack patterns, exploits, and vulnerabilities. The vulnerabilities catalog contains descriptions of software vulnerabilities, the exploits describe how instances of vulnerabilities are exploited, and the attack patterns describe common sets of exploits in a generalized form that can be applied across systems.
  • The historical knowledge category includes the historical risk catalog.

Another initiative worth mentioning is Build Security In, an initiative of the Department of Homeland Security's National Cyber Security Division.

Book review: Software Security: Building Security in – Part II: Seven Touchpoints for Software Security

This is a review of the second part of the Software Security: Building Security in book.

Chapter 3: Introduction to Software Security Touchpoints

This is an introductory chapter for the second part of the book. A very brief description is given for every security touchpoint.

Each touchpoint is applied to a specific artifact and represents either a destructive or a constructive activity. Based on the author's experience, the ideal order of effectiveness in which the touchpoints should be implemented is the following:

  1. Code Review (Tools). Artifact: code. Constructive activity.
  2. Architectural Risk Analysis. Artifact: design and specification. Constructive activity.
  3. Penetration Testing. Artifact: system in its environment. Destructive activity.
  4. Risk-Based Security Testing. Artifact: system. Mix of destructive and constructive activities.
  5. Abuse Cases. Artifact: requirements and use cases. Predominantly destructive activity.
  6. Security Requirements. Artifact: requirements. Constructive activity.
  7. Security Operations. Artifact: fielded system. Constructive activity.

Another idea worth mentioning is that the author proposes adding the security aspects as soon as possible in the software development cycle, moving left as much as possible (see the next figure, which pictures the applicability of the touchpoints in the development cycle); for example, it is much better to integrate security in the requirements or in the architecture and design phases (using the risk analysis and abuse cases touchpoints) than to wait for the penetration testing to find the problems.

Security touchpoints

Chapter 4: Code Review with a Tool

For the author, code review is essential for finding security problems early in the process. Tools (static analysis tools) can help the user do a better job, but the user should also try to understand the tool's output; it is very important not to simply expect that the tool will find all the security problems with no further analysis.

In the chapter a few tools (commercial or not) are named, like CQual, xg++, BOON, RATS, and Fortify (which has its own paragraph), but the most important part is the list of key characteristics that a good analysis tool should have, and of some characteristics to avoid.

The key characteristics of a static analysis tool:

  • be designed for security
  • support multi-tier architectures
  • be extensible
  • be useful for security analysts and developers
  • support existing development processes

The characteristics of a static analysis tool to avoid:

  • too many false positives
  • spotty integration with the IDE
  • single-minded support for the C language
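As a toy illustration of what a rule-based static analysis tool does (and why false positives matter), here is a minimal checker; the rule list is my own illustrative sketch, far simpler than anything in CQual or Fortify:

```python
# Minimal rule-based "static analysis": flag calls to known-dangerous C
# functions by scanning source lines. Real tools parse the code rather
# than matching text, which is one way they keep false positives down.
RULES = {
    "gets(": "unbounded read, buffer overflow risk",
    "strcpy(": "no length check, buffer overflow risk",
    "system(": "possible command injection",
}

def scan(source: str):
    """Return (line number, function, message) for each rule match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if pattern in line:
                findings.append((lineno, pattern.rstrip("("), message))
    return findings

sample = 'gets(buf);\nprintf("ok");\nsystem(cmd);'
```

Running `scan(sample)` flags lines 1 and 3 but not line 2. Note that even this trivial checker would flag `gets(` inside a comment or a string literal — a tiny example of the false-positive problem the chapter warns about.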

Chapter 5: Architectural Risk Analysis

Around 50% of security problems are the result of design flaws, so performing an architectural risk analysis at the design level is an important part of a solid software security program.

At the beginning of the chapter the author presents very briefly some existing security risk analysis methodologies: STRIDE (Microsoft), OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation), COBIT (Control Objectives for Information and Related Technologies).

Two things are very important for the author: the ARA (architectural risk analysis) must be integrated in and with the Risk Management Framework (presented briefly in Book review: Software Security: Building Security in – Part I: Software Security Fundamentals), and we must have a "forest-level" view of the system.

In the last part of the chapter the author presents the Cigital way of performing architectural risk analysis. The process has three steps:

  1. attack resistance analysis – has the goal of defining how the system should behave against known attacks.
  2. ambiguity analysis – has the goal of discovering new types of attacks or risks, so it relies heavily on the experience of the persons performing the analysis.
  3. weakness analysis – has the goal of understanding and assessing the impact of external software dependencies.
Process diagram for architectural risk analysis
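A common way to make the output of such an analysis actionable is to rank the identified risks by likelihood and impact; the scoring scheme and the sample risks below are my own illustration, not the Cigital process itself:

```python
# Rank identified risks by a simple likelihood x impact score
# (1-5 scales; both the scheme and the sample risks are illustrative).
risks = [
    {"name": "SQL injection in search", "likelihood": 4, "impact": 5},
    {"name": "verbose error pages",     "likelihood": 5, "impact": 2},
    {"name": "weak session timeout",    "likelihood": 2, "impact": 3},
]

def rank(risks):
    """Highest-scoring risks first, so mitigation effort goes there."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"],
                  reverse=True)
```

Ranking like this ties the analysis back into the Risk Management Framework: the business decides how far down the ranked list it is worth mitigating.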

Chapter 6: Software Penetration Testing

The chapter starts by presenting how penetration testing is done today. For the author, penetration tests are misused, serving as a "feel-good exercise in pretend security". The main problem is that the penetration test results cannot guarantee that the system is secure after all the found vulnerabilities have been fixed, yet the findings are treated as a final list of issues to be fixed.

So, for the author, penetration tests are best suited to probing (live-like) configuration problems and other environmental factors that deeply impact software security. Another idea is to use the architectural risk analysis as a driver for the penetration tests (the risk analysis can point to the weaker parts of the system, or suggest some attack angles). Yet another idea is to treat the findings as a representative sample of faults in the system; all the findings should be incorporated back into the development cycle.

Chapter 7: Risk-Based Security Testing

Security testing should start at the feature or component/unit level and (like penetration testing) should use the items from the architectural risk analysis to identify risks. Security testing should also continue at the system level and be directed at properties of the integrated software system. Basically, all the test types that exist today (unit tests, integration tests) should also have a security component and a security mindset applied.

The security testing should involve two approaches:

  • functional security testing: testing security mechanisms to ensure that their functionality is properly implemented (a kind of white-hat philosophy).
  • adversarial security testing: tests that simulate the attacker's approach (a kind of black-hat philosophy).

For the author, penetration testing represents an outside->in type of approach, while security testing represents an inside->out approach focusing on the software product's "guts".
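The two approaches can be illustrated with a toy validation routine; the function and the malicious inputs are hypothetical, chosen only to show the difference in mindset:

```python
import re

def is_valid_username(name: str) -> bool:
    """Toy security mechanism: allow only short alphanumeric usernames."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", name))

# Functional security testing (white-hat): verify the mechanism works
# as specified for normal and boundary inputs.
assert is_valid_username("alice_92")
assert not is_valid_username("ab")              # too short

# Adversarial security testing (black-hat): feed it what an attacker would.
assert not is_valid_username("' OR '1'='1")     # SQL injection attempt
assert not is_valid_username("<script>x</script>")  # XSS attempt
```

The functional tests check that the mechanism does what it should; the adversarial tests check that it holds up when someone actively tries to break it.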

Chapter 8: Abuse Case Development

Abuse case development is done in the requirements phase and is intimately linked to the requirements and use cases. The basic idea is that, just as we define requirements that are supposed to express how the system should behave under correct usage, we should also define how the system should behave if it is abused.

This is the process proposed by the author to build abuse cases:

Diagram for building abuse cases

The abuse cases are created using two sources: the anti-requirements (things that you don't want your software to do) and the attack models (known attacks or attack types that can apply to your system). Once they are done, the abuse cases can be used as an entry point for security testing and especially for the architectural risk analysis.
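The combination of the two sources can be sketched mechanically: cross each anti-requirement with each applicable attack model to get candidate abuse cases. Both lists below are illustrative, not taken from the book:

```python
# Sketch: derive candidate abuse cases by crossing anti-requirements
# (what the software must never do) with known attack models.
anti_requirements = [
    "reveal another user's data",
    "execute attacker-supplied code",
]
attack_models = ["parameter tampering", "stored XSS"]

def abuse_cases(anti_reqs, attacks):
    """One candidate abuse case per (attack, anti-requirement) pair."""
    return [f"Using {attack}, the attacker makes the system {anti_req}."
            for anti_req in anti_reqs for attack in attacks]
```

Each generated candidate then needs human judgment: analysts keep the plausible ones and feed them into security testing and the architectural risk analysis, as the chapter suggests.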

Chapter 9: Software Security Meets Security Operations

The main idea is that the security operations people and the software developers should work together, and each group can (and should) learn from the other.

The security operations people have the security mindset and can apply this mindset and their experience to some of the touchpoints presented previously, mainly abuse cases, security testing, architectural risk analysis, and penetration testing.