A Java implementation of CSRF mitigation using the “double submit cookie” pattern

Goal of this article

The goal of this article is to present an implementation of the “double submit cookie” pattern used to mitigate Cross Site Request Forgery (CSRF) attacks. The proposed implementation is a Java servlet filter plus a few auxiliary classes, and it is (obviously) suitable for projects using Java as the back-end technology.

Definition of CSRF and possible mitigations

In the case of a CSRF attack, the browser is tricked into making unauthorized requests on the victim’s behalf, without the victim’s knowledge. The general attack scenario contains the following steps:

  1. the victim connects to the vulnerable web-site, so that he has a real, authenticated session.
  2. the attacker forces the victim (usually using a spam/phishing email) to navigate to another (evil) web-site containing the CSRF attack.
  3. when the victim’s browser executes the (evil) web-site page, the browser makes a (fraudulent) request to the vulnerable web-site using the victim’s authenticated session. The victim is not aware at all that navigating on the (evil) web-site triggers an action on the vulnerable web-site.

For deeper explanations I strongly recommend reading chapter 5 of the Iron-Clad Java: Building Secure Applications book and/or the OWASP Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet.

Definition of “double submit cookie” pattern

When a user authenticates to a site, the site should generate a (cryptographically strong) pseudo-random value and set it as a cookie on the user’s machine, separate from the session id. The server does not have to save this value in any way, which is why this pattern is sometimes also called the Stateless CSRF Defense.

The site then requires that every transaction request include this random value as a hidden form value (or other request parameter or header). A cross-origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy.

In the case of this mitigation technique, the job of the client is very simple: just retrieve the CSRF cookie from the response and add its value into a special header on all the requests:

Client workflow
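
To make the client workflow concrete, here is a minimal, hypothetical Java sketch using java.net.HttpURLConnection; the cookie and header names are the AngularJS-friendly defaults described later in the article (“XSRF-TOKEN” and “X-XSRF-TOKEN”), and the URLs are placeholders:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;

public class CsrfClientSketch {

    public static void main(String[] args) throws Exception {
        // 1. Initial request: the server sets the CSRF cookie on the response.
        URL loginUrl = new URL("https://example.com/login");
        HttpURLConnection first = (HttpURLConnection) loginUrl.openConnection();
        String token = null;
        List<String> setCookies = first.getHeaderFields().get("Set-Cookie");
        if (setCookies != null) {
            for (String cookie : setCookies) {
                if (cookie.startsWith("XSRF-TOKEN=")) {
                    // Keep only the value, dropping attributes like Path or Secure.
                    token = cookie.split(";", 2)[0].substring("XSRF-TOKEN=".length());
                }
            }
        }

        // 2. State-changing request: send the token back both as a cookie and
        //    as the special header that the server will compare it against.
        URL transferUrl = new URL("https://example.com/transfer");
        HttpURLConnection second = (HttpURLConnection) transferUrl.openConnection();
        second.setRequestMethod("POST");
        second.setRequestProperty("Cookie", "XSRF-TOKEN=" + token);
        second.setRequestProperty("X-XSRF-TOKEN", token);
        System.out.println("Response code: " + second.getResponseCode());
    }
}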

The job of the server is a little more complex: create the CSRF cookie and, for each request asking for a protected resource, check that the CSRF cookie and the CSRF header of the request match:

Server workflow

Note that some JavaScript frameworks like AngularJS implement the client workflow out of the box; see Cross Site Request Forgery (XSRF) Protection.

Java implementation of “double submit cookie” pattern

The proposed implementation is in the form of a (Java) servlet filter and can be found here: GenericCSRFFilter GitHub.

In order to use the filter, you must define it in your web.xml file:

<filter>
   <filter-name>CSRFFilter</filter-name>
   <filter-class>com.github.adriancitu.csrf.GenericCSRFStatelessFilter</filter-class>
</filter>

<filter-mapping>
   <filter-name>CSRFFilter</filter-name>
   <url-pattern>/*</url-pattern>
</filter-mapping>

The filter can have 2 optional initialization parameters: csrfCookieName, representing the name of the cookie that will store the CSRF token, and csrfHeaderName, representing the name of the HTTP header that will also contain the CSRF token.

The default values for these parameters are “XSRF-TOKEN” for csrfCookieName and “X-XSRF-TOKEN” for csrfHeaderName, both of them being the default values that AngularJS expects in order to implement its CSRF protection.
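
For example, here is a sketch of a web.xml configuration overriding the two defaults (the parameter names come from the filter description above; the values are arbitrary):

<filter>
   <filter-name>CSRFFilter</filter-name>
   <filter-class>com.github.adriancitu.csrf.GenericCSRFStatelessFilter</filter-class>
   <init-param>
      <param-name>csrfCookieName</param-name>
      <param-value>MY-CSRF-COOKIE</param-value>
   </init-param>
   <init-param>
      <param-name>csrfHeaderName</param-name>
      <param-value>MY-CSRF-HEADER</param-value>
   </init-param>
</filter>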

By default the filter has the following features:

  • works with AngularJS.
  • the CSRF token will be a random UUID.
  • all the resources that are NOT accessed through a GET request method will be CSRF protected.
  • the CSRF cookie is replaced after each non-GET request.
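
To illustrate the default behavior, a hypothetical HTTP exchange for a protected (non-GET) request could look like this (token values invented):

POST /transfer HTTP/1.1
Host: example.com
Cookie: XSRF-TOKEN=6fa459ea-ee8a-3ca4-894e-db77e160355e
X-XSRF-TOKEN: 6fa459ea-ee8a-3ca4-894e-db77e160355e

HTTP/1.1 200 OK
Set-Cookie: XSRF-TOKEN=16fd2706-8baf-433b-82eb-8c7fada847da

The response carries a new XSRF-TOKEN cookie because, by default, the CSRF cookie is replaced after each non-GET request.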

How it works under the hood

Most of the functionality is in the GenericCSRFStatelessFilter#doFilter method; here is the sequence diagram that explains what happens in this method:

doFilter method sequence diagram

The doFilter method is executed on each HTTP request:

  1. The filter creates an instance of the ExecutionContext class; this class is a simple POJO containing the initial HTTP request, the HTTP response, the CSRF cookies (if more than one cookie with the csrfCookieName is present) and the implementations of the ResourceCheckerHook, TokenBuilderHook and ResponseBuilderHook interfaces (see the next section for the meaning of these classes).
  2. The filter checks the status of the HTTP resource, which can be: MUST_NOT_BE_PROTECTED, MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED or MUST_BE_PROTECTED_AND_COOKIE_ATTACHED (see the ResourceStatus enum), using an instance of ResourceCheckerHook.
  3. If the resource status is ResourceStatus#MUST_NOT_BE_PROTECTED or ResourceStatus#MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED, then the filter creates a CSRF cookie having as value the token generated by an instance of TokenBuilderHook.
  4. If the resource status is ResourceStatus#MUST_BE_PROTECTED_AND_COOKIE_ATTACHED, then the filter computes the CSRFStatus of the resource and uses an instance of ResponseBuilderHook to return the response to the client (a simplified sketch of the whole method follows this list).
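
To make the four steps concrete, here is a simplified, hypothetical sketch of the filter logic. It is not the actual GenericCSRFStatelessFilter code: the ExecutionContext constructor, the buildCsrfCookie and computeCsrfStatus helpers and the ServiceLoader handling are assumptions based on the description in this article.

import java.io.IOException;
import java.util.ServiceLoader;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Types described in the article, assumed to live in the filter's package.
import com.github.adriancitu.csrf.CSRFStatus;
import com.github.adriancitu.csrf.ExecutionContext;
import com.github.adriancitu.csrf.ResourceCheckerHook;
import com.github.adriancitu.csrf.ResourceStatus;
import com.github.adriancitu.csrf.ResponseBuilderHook;
import com.github.adriancitu.csrf.TokenBuilderHook;

public class CsrfFilterSketch implements Filter {

    private ResourceCheckerHook resourceCheckerHook;
    private TokenBuilderHook tokenBuilderHook;
    private ResponseBuilderHook responseBuilderHook;

    @Override
    public void init(FilterConfig filterConfig) {
        // The real filter loads each hook with ServiceLoader and falls back
        // to the default implementation; the fallback is elided here.
        resourceCheckerHook =
                ServiceLoader.load(ResourceCheckerHook.class).iterator().next();
        tokenBuilderHook =
                ServiceLoader.load(TokenBuilderHook.class).iterator().next();
        responseBuilderHook =
                ServiceLoader.load(ResponseBuilderHook.class).iterator().next();
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        HttpServletResponse httpResponse = (HttpServletResponse) response;

        // Step 1: wrap everything into the context POJO (assumed constructor).
        ExecutionContext context = new ExecutionContext(httpRequest,
                httpResponse, resourceCheckerHook, tokenBuilderHook,
                responseBuilderHook);

        // Step 2: ask the hook whether this resource must be protected.
        ResourceStatus status = resourceCheckerHook.checkResourceStatus(context);

        switch (status) {
            case MUST_NOT_BE_PROTECTED:
            case MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED:
                // Step 3: attach a fresh CSRF cookie built from the hook token.
                httpResponse.addCookie(
                        buildCsrfCookie(tokenBuilderHook.buildToken(context)));
                break;
            case MUST_BE_PROTECTED_AND_COOKIE_ATTACHED:
                // Step 4: compare the tokens and let the response hook answer
                // (the default hook throws a SecurityException on mismatch).
                responseBuilderHook.buildResponse(context,
                        computeCsrfStatus(httpRequest));
                break;
        }
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        // The hooks extend Closeable; the real filter closes them here.
    }

    // Hypothetical helper: wrap the generated token into the CSRF cookie.
    private Cookie buildCsrfCookie(String token) {
        return new Cookie("XSRF-TOKEN", token);
    }

    // Hypothetical helper: compare the CSRF cookie token with the header token.
    private CSRFStatus computeCsrfStatus(HttpServletRequest request) {
        Cookie csrfCookie = null;
        if (request.getCookies() != null) {
            for (Cookie cookie : request.getCookies()) {
                if ("XSRF-TOKEN".equals(cookie.getName())) {
                    csrfCookie = cookie;
                }
            }
        }
        String headerToken = request.getHeader("X-XSRF-TOKEN");
        if (csrfCookie == null) {
            return CSRFStatus.COOKIE_NOT_PRESENT;
        }
        if (headerToken == null) {
            return CSRFStatus.HEADER_TOKEN_NOT_PRESENT;
        }
        return csrfCookie.getValue().equals(headerToken)
                ? CSRFStatus.COOKIE_TOKEN_AND_HEADER_TOKEN_MATCH
                : CSRFStatus.COOKIE_TOKEN_AND_HEADER_TOKEN_MISMATCH;
    }
}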

How to extend the default behavior

It is possible to extend or overwrite the default behavior by implementing the hook interfaces. All the hook implementations must be thread-safe.

  1. The ResourceCheckerHook is used to check the status of a requested resource. The default implementation is DefaultResourceCheckerHookImpl and it will return ResourceStatus#MUST_NOT_BE_PROTECTED for any HTTP GET method; for all the other request types, it will return ResourceStatus#MUST_BE_PROTECTED_AND_COOKIE_ATTACHED if any CSRF cookie is present in the request or ResourceStatus#MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED otherwise. The interface signature is the following one:
    public interface ResourceCheckerHook extends Closeable {
        ResourceStatus checkResourceStatus(ExecutionContext executionContext);
    }  
  2. The TokenBuilderHook is used to generate the token that will be used to create the CSRF cookie. The default implementation is DefaultTokenBuilderHookImpl and it uses a call to UUID.randomUUID to generate the token. The interface signature is the following one:
    public interface TokenBuilderHook extends Closeable {
        String buildToken(ExecutionContext executionContext);
    }
  3. The ResponseBuilderHook is used to generate the response to the client depending on the CSRFStatus of the resource. The default implementation is DefaultResponseBuilderHookImpl and it throws a SecurityException if the CSRF status is CSRFStatus#COOKIE_NOT_PRESENT, CSRFStatus#HEADER_TOKEN_NOT_PRESENT or CSRFStatus#COOKIE_TOKEN_AND_HEADER_TOKEN_MISMATCH. If the CSRF status is CSRFStatus#COOKIE_TOKEN_AND_HEADER_TOKEN_MATCH, then the old CSRF cookies are deleted and a new CSRF cookie is created. The interface signature is the following one:
    public interface ResponseBuilderHook extends Closeable {
        ServletResponse buildResponse(ExecutionContext executionContext,
                                      CSRFStatus status);
    }
    

The hooks are instantiated inside the GenericCSRFStatelessFilter#init method using the Java 6 ServiceLoader facility. So if you want to use your own implementation of one of the hooks, you have to create a META-INF/services directory that contains a text file whose name matches the fully qualified name of the hook interface that you want to replace; the content of the file must be the fully qualified name of your implementation class.
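
As an illustration, here is a hypothetical custom TokenBuilderHook that replaces the default UUID token with a longer SecureRandom-based one (the com.example.csrf package and the class name are invented; the hook interfaces are assumed to live in the same package as the filter):

package com.example.csrf;

import java.io.IOException;
import java.math.BigInteger;
import java.security.SecureRandom;

import com.github.adriancitu.csrf.ExecutionContext;
import com.github.adriancitu.csrf.TokenBuilderHook;

public class SecureRandomTokenBuilderHook implements TokenBuilderHook {

    // SecureRandom is thread-safe, which satisfies the thread-safety
    // requirement stated above for all hook implementations.
    private final SecureRandom random = new SecureRandom();

    @Override
    public String buildToken(ExecutionContext executionContext) {
        // 130 random bits rendered in base 32.
        return new BigInteger(130, random).toString(32);
    }

    @Override
    public void close() throws IOException {
        // Nothing to release.
    }
}

To register it, you would create the file META-INF/services/com.github.adriancitu.csrf.TokenBuilderHook containing a single line with the implementation class name, com.example.csrf.SecureRandomTokenBuilderHook.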

Here is the sequence diagram representing the hooks initialization:

Hooks initialization sequence diagram

Book review : Practical Anonymity: Hiding in Plain Sight Online

This is a review of the Practical Anonymity: Hiding in Plain Sight Online book.

Conclusion

This is not a technical book about the inner workings of Tor or Tails, and I think a better title would be “How to use Tor and Tails for dummies”. Almost all the information present in the book can be found in the official documentation; the only positive point is that all the needed information is gathered in one single place.

Chapter 1. Anonymity and Censorship Circumvention

This first chapter is an introduction to what on-line anonymity is, why it is important for some people and how it can be achieved using Tor. The chapter also contains the fundamentals of how Tor works, what it can do for on-line anonymity and some advice about how it can be used safely.

Chapter 2. Using the Tor Browser Bundle

The chapter presents the TBB (Tor Browser Bundle) in detail. The TBB is composed of three components: Vidalia, which is the control panel for Tor; the Tor software itself; and the Mozilla Firefox browser. Each of these three components is described from the user’s point of view, and each of the possible configuration options is presented in detail.

Chapter 3. Using Tails

Tails is a Linux distribution that includes Tor and other software to provide an operating system that enhances privacy. The Tails network stack has been modified so that all Internet connectivity is routed through the Tor network.

In order to enhance privacy, Tails is delivered with the following packages:

  • Firefox
  • Pidgin
  • GNU Privacy Guard
  • Metadata Anonymization Toolkit
  • Unsafe Web Browser

Detailed instructions are presented about how to create a bootable DVD or a bootable USB stick and how to run and configure the operating system. The persistent storage feature of Tails is presented in detail so that the reader can understand its benefits and drawbacks.

Chapter 4. Tor Relays, Bridges and Obfsproxy

The chapter is about how Tor adversaries can disrupt the network and how the Tor developers are trying to find new techniques to work around these disruptions.

One way to deny users access to the Tor network is to filter the (nine) Tor directory authorities, the servers that distribute information about active Tor entry points. One way to avoid this restriction is to use Tor bridge relays. A bridge relay is like any other Tor transit relay; the only difference is that it is not publicly listed, and it is used only for entering the Tor network from places where public Tor relays are blocked. There are different mechanisms to retrieve the list of bridge relays, like a web page on the Tor website or requests by email.

Another way to disrupt the Tor network is to filter Tor traffic, since the Tor protocol packets have a distinguishable signature. One way to avoid this packet filtering is to conceal the Tor packets inside other kinds of packets. The framework that can be used to implement this kind of functionality is called Obfsproxy (obfuscated proxy). Some of the plug-ins that are using Obfsproxy: StegoTorus, Dust, SkypeMorph.

Chapter 5. Sharing Tor Resources

This chapter describes how a user can share his bandwidth by becoming a Tor bridge relay, a Tor transit relay or a Tor exit relay. Detailed settings descriptions are given for each type of relay, together with the risks incurred by the user.

Chapter 6. Tor Hidden Services

A (Tor) hidden service is a server that can be accessed by other clients within the anonymity network, while the actual location (IP address) of the server remains anonymous. The hidden service protocol is briefly presented, followed by how to set up a hidden service. For setting up a hidden service the main takeaways are:

  • install Tor and the service that you need on a VM.
  • run the VM on a VPS (virtual private server) hosted in a country having privacy-friendly legislation in place.
  • the VM can/should be encrypted, can be power cycled, and should have no way to know the IP address or domain name of the computer on which it is running.

Chapter 7. Email Security and Anonymity Practices

This last chapter is about email anonymity in general and how the use of Tor can improve it. The main takeaways:

  • choose an email provider that does not require another email address or a mobile phone.
  • choose an email provider that supports HTTPS.
  • encrypt the content of your emails.
  • ALWAYS register and connect to the email box using Tor.


(My) OWASP Belgium Chapter meeting notes

CloudPiercer: Bypassing Cloud-based Security Providers (by Thomas Vissers, iMinds-DistriNet-KU Leuven)

The goal of the presentation was to show how CBSPs (Cloud-Based Security Providers) protect web applications and how these protections can be circumvented.

The most common attacks on web applications are DDoS attacks.

There are 2 types of DDoS attacks:

  • volumetric attacks – the network bandwidth is exhausted
  • application-level attacks – the servers themselves are targeted

How do CBSPs protect web applications?

CBSPs reroute and filter the customer’s traffic through their cloud (see the following picture).

CBSP traffic rerouting

The secrecy of the origin server’s IP address is crucial because, if discovered, the server can be hit directly and the CBSP protection is useless.

Vulnerabilities, or how the origin server IP can be found

  1. subdomains – administrators can create a specific subdomain, such as origin.example.com, that resolves directly to the origin’s IP address; they need it in order to easily connect to the server for non-HTTP services (SSH, FTP).
  2. DNS records – other DNS records might still reveal your origin, e.g. TXT records, MX records.
  3. SSL certificates – this concerns the HTTPS connection between the CBSP and the origin server. If an attacker is able to scan all IP addresses and retrieve all SSL certificates, he can find the IP addresses of hosts with certificates that are associated with the domain he is trying to expose.
  4. IP history – companies constantly track DNS changes, so historical DNS records may still point to the origin’s old, unprotected IP address.
  5. sensitive files on the (target) web application – error messages, files containing IP information.
  6. outbound connections – force the origin to connect to you.

Defenses / what can I do to protect myself?

  • request a new IP address when activating the CBSP.
  • block all non-CBSP requests with your firewall.
  • choose a CBSP that assigns a dedicated IP address to you.
  • use cloudpiercer.org to scan your website.

If you are interested, you can read Bypassing Cloud-based Security Providers – DistriNet – KU Leuven.

Hackers! Do we shoot or do we hug? (by Edwin van Andel, Zerocopter)

For me, the presentation was a (very) funny plea for ethical hacking.

OWASP Security Knowledge Framework – the missing tutorial

A few months ago (during BeneLux OWASP Days 2016) I saw a presentation of the OWASP Security Knowledge Framework. I found the presentation very interesting, so I decided to dig a little deeper and learn more about the OWASP Security Knowledge Framework, a.k.a. SKF. I found the official documentation a little sparse, so after playing with SKF for a few days I decided to write down what I have learned.

Introduction

SKF is a tool that helps software developers ease the integration of security into the SDLC (Software Development Lifecycle). For the end-user, SKF is a web application which can be accessed after creating an account.

The SKF web site has deployed a demo application that can be accessed here: SKF demo site (the username is admin and the password is test-skf; be aware that the site content is wiped every hour).

Installation

If you want to install SKF in-house, you can follow the installation instructions (instructions are provided for Ubuntu, MacOS and Windows, as well as for Chef and AWS). Another installation option is Docker; in this case you could use the following Docker image that I created (the image is based on the Ubuntu 14.04 64-bit version).

Once the application is running (in the case of the Ubuntu manual installation it will run over HTTPS on port 5443), you have to unlock the default admin account (I will speak later about user and group management in SKF). This procedure is described on the First run page; what is important to note is that on the “first login” page you must use the pincode 1234 and the email [email protected] (these values are hard-coded).

SKF Big picture

The SKF is articulated around 4 security topics:

  • (security) knowledge base.
  • (security) code examples.
  • introduction of the security checklists for applications using the OWASP Application Security Verification Standard (ASVS) project.
  • introduction of the security requirements in the SDLC.

The knowledge base

The SKF knowledge base contains the descriptions of and solutions to over 200 vulnerabilities, attacks and security best practices: API response security headers, Access Control patterns, Command injection and many, many more.

The code examples

SKF also contains a few dozen code examples of best practices for writing secure code; the examples are written in PHP and C#.

Security Checklists

SKF also offers the possibility to create projects and users, the idea being that the users can be part of one or more projects and each project can contain a list of security checklists and security requirements.

The admin user (the user that was unlocked after the installation) can create projects and users. For user creation, a unique pincode is generated and the new user needs to use this pincode the first time he connects to the application.

The security checklists represent a way of testing web application technical security controls and also provide developers with a list of requirements for secure development. SKF uses the OWASP Application Security Verification Standard (ASVS) checklists. SKF shows the checklists in a very user-friendly way, and the checklists can be customized in case your project has special security requirements. It is possible to add as many checklists as wanted, and every checklist can be downloaded as a .docx document.

The OWASP ASVS contains the following verification criteria:

  • Architecture, design and threat modeling.
  • Authentication.
  • Session management.
  • Access control.
  • Malicious input handling.
  • Cryptography at rest.
  • Error handling and logging.
  • Data protection.
  • Communications.
  • HTTP security configuration.
  • Malicious controls.
  • Business logic.
  • Files and resources.
  • Mobile.
  • Web services.
  • Configuration.

Security requirements

For me, the most interesting feature of the SKF is the ability to create and attach to each project multiple items called functions. The basic idea is that the user can choose from a list of security requirements depending on the feature that should be implemented; for example, if an external library will be used to implement a new feature, then you can add the “third party software” function as a security requirement. Each security requirement contains a description and a solution explaining how it should be handled.

The OWASP SKF contains the following security requirements:

  1. third party software
  2. sub-domains
  3. Access controls or Login systems
  4. User registration
  5. Form
  6. Sessions
  7. Password forget functions
  8. Forward or redirect
  9. GET variables or parameters
  10. XML files
  11. File Download
  12. File upload
  13. Regular expressions
  14. Eval type functions
  15. Private user data
  16. System commands
  17. SSI commands
  18. XSLT input and output
  19. HTTP headers
  20. LDAP commands
  21. User-input in HTML output
  22. X-Path
  23. File inclusion
  24. Path or Filename
  25. SQL commands

(My) Conclusion

What I like about OWASP SKF is that it tries to introduce the secure coding practices into the SDLC in an easy and customizable way; ideally you should use all the features of SKF, but it is up to each team/project to choose how SKF could help them produce more secure code.

On the negative side, I would make the following remarks:

  • the application looks unstable; very often I got “Internal server error” on my Docker instance.
  • the code examples for Java have been in the “Coming Soon” status for (at least) the last 8 months; maybe they should be removed so as not to set up unreasonable expectations.
  • I (personally) do not appreciate the ergonomics of the user interface; the actions linked to projects (project creation, results) are mixed with the user management actions (user and group creation) and with other actions that are independent of projects and groups (code examples and knowledge base). As a new user I was very confused and did not understand right away that the 4 SKF features can be used independently.

Book review: Software Security: Building Security in – Part III: Software Security Grows Up

This is a review of the third part of the Software Security: Building Security in book. This part is dedicated to how to introduce a software security program in your company; it is a topic that interests me much less than the previous ones, so the review will be quite short.

Chapter 10: An Enterprise Software Security Program

The chapter contains some ideas about how to ignite a software security program in a company. The first and most important idea is that the software security practices must have a clear and explicit connection with the business mission; the goal of (any) software is to fulfill the business needs.

In order to adopt an SDL (Secure Development Lifecycle), the author proposes a roadmap of five steps:

  1. Build a plan that is tailored for you. Start from how the software is built at present, then plan the building blocks for the future change.
  2. Roll out individual best practice initiatives carefully. Establish champions to take ownership of each initiative.
  3. Train your people. Train the developers and (IT) architects to be aware of security and of the central role that they play in the SDL (Secure Development Lifecycle).
  4. Establish a metrics program. In order to measure the progress, some metrics are needed.
  5. Establish and sustain a continuous improvement capability. Create a situation in which continuous improvement can be sustained by measuring results and refocusing on the weakest aspects of the SDL.

Chapter 11: Knowledge for Software Security

For the author there is a clear difference between knowledge and information; knowledge is information in context, information put to work using processes and procedures. Because knowledge is so important, the author proposes a way to structure the software security knowledge, called the “Software Security Unified Knowledge Architecture”:

Software Security Unified Knowledge Architecture

The Software Security Unified Knowledge Architecture has seven catalogs in three categories:

  • the prescriptive knowledge category includes three knowledge catalogs: principles, guidelines and rules. The principles represent high-level architectural principles, the rules contain tactical low-level rules, and the guidelines sit between the two.
  • the diagnostic knowledge category includes three knowledge catalogs: attack patterns, exploits and vulnerabilities. Vulnerabilities includes descriptions of software vulnerabilities, exploits describe how instances of vulnerabilities are exploited, and attack patterns describe common sets of exploits in a form that can be applied across many systems.
  • the historical knowledge category includes the historical risk catalog.

Another initiative that is worth mentioning is Build Security In, an initiative of the Department of Homeland Security’s National Cyber Security Division.

Book review: Software Security: Building Security in – Part II: Seven Touchpoints for Software Security

This is a review of the second part of the Software Security: Building Security in book.

Chapter 3: Introduction to Software Security Touchpoints

This is an introductory chapter for the second part of the book. A very brief description is given of every security touchpoint.

Each one of the touchpoints is applied to a specific artifact, and each touchpoint represents either a destructive or a constructive activity. Based on the author’s experience, the ideal order, by effectiveness, in which the touchpoints should be implemented is the following one:

  1. Code Review (Tools). Artifact: code. Constructive activity.
  2. Architectural Risk Analysis. Artifact: design and specification. Constructive activity.
  3. Penetration Testing. Artifact: system in its environment. Destructive activity.
  4. Risk-Based Security Testing. Artifact: system. Mix of destructive and constructive activities.
  5. Abuse Cases. Artifact: requirements and use cases. Predominantly destructive activity.
  6. Security Requirements. Artifact: requirements. Constructive activity.
  7. Security Operations. Artifact: fielded system. Constructive activity.

Another idea worth mentioning is that the author proposes to add the security aspects as soon as possible in the software development cycle, moving left as much as possible (see the next figure, which pictures the applicability of the touchpoints in the development cycle); for example, it is much better to integrate security in the requirements or in the architecture and design (using the risk analysis and abuse cases touchpoints) rather than waiting for the penetration testing to find the problems.

Security touchpoints

Chapter 4: Code review with a tool

For the author, code review is essential in finding security problems early in the process. The tools (the static analysis tools) can help the user do a better job, but the user should also try to understand the output of the tool; it is very important not to simply expect that the tool will find all the security problems with no further analysis.

In the chapter a few tools (commercial or not) are named, like CQual, xg++, BOON, RATS and Fortify (which has its own paragraph), but the most important part is the list of key characteristics that a good static analysis tool should have, followed by some characteristics to avoid.

The key characteristics of a static analysis tool:

  • be designed for security
  • support multi-tier architectures
  • be extensible
  • be useful for security analysts and developers
  • support existing development processes

The characteristics of a static analysis tool to avoid:

  • too many false positives
  • spotty integration with the IDE
  • single-minded support for C language

Chapter 5: Architectural Risk Analysis

Around 50% of the security problems are the result of design flaws, so performing an architectural risk analysis at the design level is an important part of a solid software security program.

In the beginning of the chapter the author presents very briefly some existing security risk analysis methodologies: STRIDE (Microsoft), OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation) and COBIT (Control Objectives for Information and Related Technologies).

Two things are very important for the author: the ARA (architectural risk analysis) must be integrated in and with the Risk Management Framework (presented briefly in Book review: Software Security: Building Security in – Part I: Software Security Fundamentals), and we must have a “forest-level” view of the system.

In the last part of the chapter the author presents the Cigital way of performing architectural risk analysis. The process has 3 steps:

  1. attack resistance analysis – its goal is to define how the system should behave against known attacks.
  2. ambiguity analysis – its goal is to discover new types of attacks or risks, so it relies heavily on the experience of the persons performing the analysis.
  3. weakness analysis – its goal is to understand and assess the impact of external software dependencies.
Process diagram for architectural risk analysis

Chapter 6: Software Penetration Testing

The chapter starts by presenting how penetration testing is done today. For the author, penetration tests are misused and serve as a “feel-good exercise in pretend security”. The main problem is that penetration test results cannot guarantee that the system is secure after all the found vulnerabilities have been fixed, and that the findings are treated as a final list of issues to be fixed.

So, for the author, penetration tests are best suited to probing (live-like) configuration problems and other environmental factors that deeply impact software security. Another idea is to use the architectural risk analysis as a driver for the penetration tests (the risk analysis can point to the weaker parts of the system, or can suggest some attack angles). Yet another idea is to treat the findings as a representative sample of the faults in the system and to incorporate all the findings back into the development cycle.

Chapter 7: Risk-Based Security Testing

Security testing should start at the feature or component/unit level and (like penetration testing) should use the items from the architectural risk analysis to identify risks. Security testing should also continue at the system level and should be directed at the properties of the integrated software system. Basically, all the test types that exist today (unit tests, integration tests) should also have a security component and a security mindset applied.

The security testing should involve two approaches:

  • functional security testing: testing the security mechanisms to ensure that their functionality is properly implemented (kind of white hat philosophy).
  • adversarial security testing: tests that are simulating the attacker’s approach (kind of black hat philosophy).

For the author, the penetration tests represent an outside->in type of approach, while the security testing represents an inside->out approach focusing on the software product’s “guts”.

Chapter 8: Abuse Case Development

The abuse case development is done in the requirements phase and it is intimately linked to the requirements and use cases. The basic idea is that, just as we define requirements that express how the system should behave under correct usage, we should also define how the system should behave if it is abused.

This is the process proposed by the author for building abuse cases:

Diagram for building abuse cases

The abuse cases are created using two sources: the anti-requirements (things that you don’t want your software to do) and the attack models, which are known attacks or attack types that can apply to your system. Once they are done, the abuse cases can be used as an entry point for security testing and especially for the architectural risk analysis.

Chapter 9: Software Security Meets Security Operations

The main idea is that the security operations people and the software developers should work together, and each category can (and should) learn from the other.

The security operations people have the security mindset and can use this mindset and their experience in some of the touchpoints presented previously, mainly abuse cases, security testing, architectural risk analysis and penetration testing.

Book review: Software Security: Building Security in – Part I: Software Security Fundamentals

This is a review of the first part of the Software Security: Building Security in book.

Chapter 1: Defining a discipline

This chapter lays out the landscape for the entire book; the author presents his view of today’s challenges in producing secure, hole-free software.
In today’s world, software is everywhere, from microwave ovens to nuclear power stations, so the “old view” of seeing software applications as “black boxes” that can be protected using firewalls and IDS/IPS is not valid anymore.

And just to make the problem even harder, the computing systems and the software applications are more and more interconnected, must be extensible and have more and more complex features.

The author proposes a taxonomy of the security problems that can affect software applications:

  • defect: a defect is a problem that may lie dormant in software only to surface in a fielded system with major consequences.
  • bug: an implementation-level software problem; only fairly simple implementation errors. A large panel of tools is capable of detecting a range of implementation bugs.
  • flaw: a problem at a deeper level; a flaw is something that can be present at the code level but can also be present or absent at the design level. What is very important to remark is that automated technologies to detect design-level flaws do not yet exist, though manual risk analysis can identify flaws.
  • risk: flaws and bugs lead to risk. Risks capture the chance that a flaw or a bug will impact the purpose of the software.

In order to solve the problem of software security, the author proposes a cultural shift based on three pillars: applied risk management, software security touchpoints and knowledge.

Pillar 1 Applied Risk Management

For the author, under the risk management name there are 2 different parts: the application of risk analysis at the architectural level (also known as threat modeling, security design analysis or architectural risk analysis) and the tracking and mitigation of risks as a full life-cycle activity (the author calls this approach the risk management framework – RMF).

Pillar 2 Software Security Touchpoints

Touchpoint is just a fancy word for “best practice”. Today there are best practices for the design and coding of software systems, and as security becomes a property of a software system, best practices should also be used to tackle the security problems. Here are the (seven) touchpoints and where exactly they are applied in the development process:

Security Touchpoints

The idea is to introduce the touchpoints as deeply as possible in the development process. Part 2 of the book is dedicated to the touchpoints.

Pillar 3 Knowledge

For the author, knowledge management and training should play a central role in encapsulating and sharing the security knowledge. The software security knowledge can be organized into seven knowledge catalogs:

  • principles
  • guidelines
  • rules
  • vulnerabilities
  • exploits
  • attack patterns
  • historical risks

How to build the security knowledge is treated in part 3 of the book.

Chapter 2: A risk management framework

This chapter presents in more detail a framework to mitigate the risks as a full lifecycle activity; the author calls this framework the RMF (Risk Management Framework).

The purpose of the RMF is to allow a consistent and repeatable expert-driven approach to risk management, and the main goal is to find, rank, track and understand the software security risks and how these security risks can affect the critical business decisions.

Risk Management Analysis steps

The RMF consists of five steps:

  1. understand the business context. The goal of this step is to describe the business goals in order to understand the types of software risks to care about.
  2. identify the business and technical risks. Business risk identification helps to define and steer the use of particular technological methods for measuring and mitigating software risk. The technical risks should be identified and mapped (through business risk) to business goals.
  3. synthesize and rank the risks. The ranking of the risks should take into account which business goals are the most important, which business goals are immediately threatened and how the technical risks will impact the business.
  4. define a risk mitigation strategy. Once the risks have been identified, the mitigation strategy should take into account cost, implementation time, likelihood of success and the impact.
  5. carry out required fixes and validate that they are correct. This step represents the execution of the risk mitigation strategy; some metrics should be defined to measure the progress against risks and the open risks remaining.

Even if the framework steps are presented sequentially, in practice the steps can overlap and can occur in parallel with standard software development activities. Actually, the RMF can be applied at several different levels: project level, software lifecycle phase level, requirement analysis level, use case analysis level.