I must admit the title is a little catchy; a better one would have been “5 software security books that every developer should be aware of“. Depending on your interests, you might want to read these books in their entirety, or you might simply want to know that they exist. There must be tons of software security books on the market, but this is my short list of the ones that I think every developer interested in software security should be aware of.
Hacking – The Art of Exploitation This book explains the basics of different hacking techniques, especially the non-web ones: how to find (and defend against) vulnerabilities like buffer overflows or stack-based buffer overflows, how to write shellcode, and some basic concepts of cryptography and attacks linked to it, like the man-in-the-middle attack on an SSL connection. The author tried to make the text accessible to non-technical people, but some programming experience (ideally C/C++) is required to get the most out of this book. You can see my full review of the book here.
Iron-Clad Java: Building Secure Web Applications This book presents hacking techniques and countermeasures for web applications; you can see it as complementary to the previous one: the first covers non-web hacking techniques, this one covers (only) web hacking techniques: XSS, CSRF, how to protect data at rest, SQL injection and other types of injection attacks. To get the most out of the book, some Java knowledge is required. You can see my full review of the book here.
Software Security: Building Security In This book explains how to introduce security into the SDLC: how to introduce abuse cases and security requirements in the requirements phase, and how to introduce risk analysis (also known as threat modeling) in the design and software qualification phases. I really think every software developer should at least read the first chapter, where the author explains why the old way of securing applications (seeing them as “black boxes” that can be protected using firewalls and IDS/IPS) can no longer work in today's software landscape. You can see my full review of the book here: Part 1, Part 2 and Part 3.
The Tangled Web: A Guide to Securing Modern Web Applications This is another technical security book in which you will not see a single line of code (Software Security: Building Security In is another one), but it is highly instructive, especially if you are a web developer. The book presents all the “bricks” of today's Internet: HTTP, WWW, HTML, cookies, scripting languages, how these bricks are implemented in different browsers, and especially how browsers implement security mechanisms against rogue applications. You can see my full review of the book here.
Threat Modeling: Designing for Security Threat modeling techniques (also known as architectural risk analysis) have been around for some time, but what has changed in recent years is the accessibility of these techniques to software developers. This book is one of the reasons threat modeling is now accessible to developers. The book is very dense, but it assumes no prior knowledge of the subject. If you are interested in the threat modeling topic you can check this post: threat modeling for mere mortals.
This is not a technical book about the inner workings of Tor or Tails, and I think a better title would be “How to use Tor and Tails for dummies”. Almost all the information in the book can be found in the official documentation; the only positive point is that all the needed information is gathered in a single place.
Chapter 1. Anonymity and Censorship Circumvention
This first chapter is an introduction to what online anonymity is, why it is important for some people, and how it can be achieved using Tor. The chapter also covers the fundamentals of how Tor works, what it can do for online anonymity, and some advice on how to use it safely.
Chapter 2. Using the Tor Browser Bundle
The chapter presents the TBB (Tor Browser Bundle) in detail. The TBB is composed of three components: Vidalia, which is the control panel for Tor; the Tor software itself; and the Mozilla Firefox browser. Each of these three components is described from the user's point of view, and each of the possible configuration options is presented in detail.
Chapter 3. Using Tails
Tails is a Linux distribution that includes Tor and other software to provide an operating system that enhances privacy. The Tails network stack has been modified so that all Internet connectivity is routed through the Tor network.
In order to enhance privacy, Tails is delivered with the following packages:
GNU Privacy Guard
Metadata Anonymization Toolkit
Unsafe Web Browser
Detailed instructions are given on how to create a bootable DVD and a bootable USB stick and how to run and configure the operating system. The persistent storage feature of Tails is presented in detail so that the reader can understand its benefits and drawbacks.
Chapter 4. Tor Relays, Bridges and Obfsproxy
The chapter is about how Tor's adversaries can disrupt the network and how the Tor developers try to find new techniques to work around these disruptions.
One way to block a user's access to the Tor network is to filter the (nine) Tor directory authorities, which are servers that distribute information about active Tor entry points. One way to avoid this restriction is to use Tor bridge relays. A bridge relay is like any other Tor transit relay; the only difference is that it is not publicly listed, and it is used only for entering the Tor network from places where the public Tor relays are blocked. There are different mechanisms to retrieve the list of these bridge relays, such as a web page on the Tor website or requests sent by email.
Another way to disrupt the Tor network is to filter Tor traffic, since Tor protocol packets have a distinctive signature. One way to avoid this packet filtering is to conceal Tor packets inside other kinds of packets. The framework that can be used to implement this kind of functionality is called Obfsproxy (obfuscated proxy). Some of the plug-ins that use Obfsproxy: StegoTorus, Dust, SkypeMorph.
Chapter 5. Sharing Tor Resources
This chapter describes how a user can share his bandwidth by becoming a Tor bridge relay, a Tor transit relay or a Tor exit relay. Detailed settings are described for each type of relay, along with the risks incurred by the user.
Chapter 6. Tor Hidden Services
A (Tor) hidden service is a server that can be accessed by other clients within the anonymity network while the actual location (IP address) of the server remains anonymous. The hidden service protocol is briefly presented, followed by how to set up a hidden service. For setting up a hidden service, the main takeaways are:
install Tor and the service that you need on a VM.
run the VM on a VPS (virtual private server) hosted in a country with privacy-friendly legislation in place.
the VM can/should be encrypted and power-cycled, and it should have no way of knowing the IP address or domain name of the computer on which it is running.
Chapter 7. Email Security and Anonymity Practices
This last chapter is about email anonymity in general and how the use of Tor can improve it. The main takeaways:
choose an email provider that does not require another email address or a mobile phone number.
choose an email provider that supports HTTPS.
encrypt the content of your emails.
ALWAYS register and connect to the mailbox using Tor.
I will start with the conclusion because it’s maybe the most important part of this review.
For me this is a must-read book if you want to write more robust (web and non-web) applications in Java; it covers a very wide range of topics, from the basics of securing a web application using HTTP/S response headers to handling the encryption of sensitive information in the right way.
Chapter 1: Web Application Security Basics
This chapter is an introduction to the security of web applications, and it can be split into two different types of items.
The first type consists of what I would call “low-hanging fruits”, i.e. ways to improve the security of your application with very little effort:
The use of the HTTP/S POST request method is advised over HTTP/S GET. With POST, the parameters are embedded in the request body, so they are not stored in or visible from the browser history, and, if sent over HTTPS, they are opaque to a potential attacker.
The use of the HTTP/S response headers:
Cache-Control – directive that instructs the browser how it should cache data.
X-Frame-Options – response header that can be used to indicate whether or not a browser should be allowed to render a page in a <frame>, <iframe> or <object>. Sites can use this to avoid clickjacking attacks by ensuring that their content is not embedded into other sites.
X-XSS-Protection – response header that can help stop some XSS attacks (this is implemented and recognized only by Microsoft IE products).
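The three headers above can be collected in one place and applied to every response. A minimal sketch of such a helper (class and method names are my own, not from the book; the header values are example choices, not the only valid ones):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SecurityHeaders {

    // Returns security-related response headers that a servlet filter
    // (or any HTTP layer) could copy onto every outgoing response.
    public static Map<String, String> recommended() {
        Map<String, String> headers = new LinkedHashMap<>();
        // Forbid caching of potentially sensitive responses.
        headers.put("Cache-Control", "no-store");
        // Refuse to be rendered inside a frame on another origin.
        headers.put("X-Frame-Options", "DENY");
        // Ask the browser's XSS filter to block reflected attacks.
        headers.put("X-XSS-Protection", "1; mode=block");
        return headers;
    }

    public static void main(String[] args) {
        recommended().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

In a servlet application, a filter would simply copy these entries onto each response with `HttpServletResponse.setHeader`.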
The second type of items covers more complex topics like input validation and security controls. For these items the authors just scratch the surface, because all of them are treated in more detail in later chapters of the book.
Chapter 2: Authentication and Session Management
This chapter is about how a secure authentication feature should work; the authentication topic includes the login process, session management, password storage and identity federation.
The first part presents the general workflow of login and session management (see next picture), and for every step of the workflow some dos and don'ts are described.
The second part of the chapter is about common attacks on authentication, and for each kind of attack a mitigation is also presented. This part of the chapter is strongly inspired by the OWASP Session Management Cheat Sheet, which is not surprising given that one of the authors (Jim Manico) is the project manager of the OWASP Cheat Sheet Series.
Even if you are not implementing an authentication framework for your application, you can still find good advice that applies to other web applications, like the use of the Secure and HttpOnly attributes for cookies and increasing the session ID length.
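As an illustration of the cookie advice, here is a hypothetical helper that builds a Set-Cookie header value with both attributes set (in the Servlet API the equivalent calls are `Cookie.setSecure(true)` and `Cookie.setHttpOnly(true)`):

```java
public class CookieAttributes {

    // Builds the value of a Set-Cookie response header with the
    // Secure and HttpOnly attributes the chapter recommends.
    public static String sessionCookie(String name, String value) {
        // Secure: only ever sent over HTTPS.
        // HttpOnly: not readable from JavaScript, so an XSS payload
        // cannot simply exfiltrate the session ID.
        return name + "=" + value + "; Path=/; Secure; HttpOnly";
    }

    public static void main(String[] args) {
        System.out.println(sessionCookie("JSESSIONID", "abc123"));
    }
}
```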
Chapter 3: Access Control
The chapter is about the advantages and pitfalls of implementing an authorization framework and can be split into three parts.
The first part describes the goal of an authorization framework and defines some core terms:
subject: the service making the request.
subject attributes: the attributes that define the service making the request.
group: basic organizational structure.
role: a functional abstraction that uniquely describes system collaborators with similar or unique duties.
object: the data being operated on.
object attributes: the attributes that define the type of object being operated on.
access control rules: the decisions that need to be made to determine if a subject is allowed to access an object.
policy enforcement point: the place in the code where the access control check is made.
policy decision point: the engine that takes the subject, subject attributes, object and object attributes and evaluates them to make an access control decision.
policy administration point: the administrative entry into the access control system.
The second part of the chapter describes some access control (positive) patterns and anti-patterns.
Some of the (positive) access control patterns: have a centralized policy enforcement point and policy decision point (not spread through the entire code base); take all authorization decisions on the server side only (never trust the client); make changes to the access control rules dynamically (it should not be necessary to recompile or restart/redeploy the application).
As for the anti-patterns, some of them are simply the opposite of the (positive) patterns: hard-coded policy (the opposite of “changes in the access control rules should be done dynamically”), and adding access control manually to every endpoint (the opposite of having a centralized policy enforcement point and policy decision point).
Other anti-patterns revolve around the idea of never trusting the client: using request data to make access control policy decisions, and failing open (the access control framework should cope with wrong or missing parameters coming from the client).
The third part of the chapter is about the different approaches (actually two) to implementing an access control framework. The most used approach is RBAC (Role-Based Access Control), implemented by some well-known Java access control frameworks like Apache Shiro and Spring Security. The most important limitation of RBAC is the difficulty of implementing data-specific/contextual access control. The ABAC (Attribute-Based Access Control) paradigm can solve this, but there are no mature frameworks on the market implementing it.
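The centralized policy decision point idea can be sketched in a few lines. This is a deliberately minimal RBAC example with made-up names, nothing like a full Shiro or Spring Security setup; its only purpose is to show the rules living in one place instead of being scattered across endpoints:

```java
import java.util.Map;
import java.util.Set;

public class PolicyDecisionPoint {

    // Role-to-permission rules live in one central structure, so
    // changing them does not require touching every endpoint.
    private final Map<String, Set<String>> rolePermissions;

    public PolicyDecisionPoint(Map<String, Set<String>> rolePermissions) {
        this.rolePermissions = rolePermissions;
    }

    // The single place where access control decisions are made:
    // unknown roles or actions are denied (fail closed, not open).
    public boolean isAllowed(String role, String action) {
        return rolePermissions.getOrDefault(role, Set.of()).contains(action);
    }

    public static void main(String[] args) {
        PolicyDecisionPoint pdp = new PolicyDecisionPoint(Map.of(
                "admin", Set.of("read", "write", "delete"),
                "viewer", Set.of("read")));
        System.out.println(pdp.isAllowed("viewer", "delete")); // false
    }
}
```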
Chapter 4: Cross-Site Scripting Defense
This chapter is about the most common vulnerability found across the web and has two parts: the presentation of the different types of cross-site scripting (XSS) and the ways to defend against it.
XSS is a type of attack that consists of injecting untrusted data into the victim's (web) browser. There are three types of XSS:
stored XSS (persistent XSS) – the malicious script is stored on the server hosting the vulnerable web application (usually in the database) and is served later to other users of the web application when they load the vulnerable page. In this case the victim does not need to take any attacker-initiated action.
reflected XSS (non-persistent XSS) – the malicious script is not stored; it is reflected back to the victim's browser by the vulnerable application, typically via a crafted link that embeds the payload in a request parameter.
DOM-based XSS – the attack payload is executed as a result of modifying the DOM “environment” in the victim’s browser.
For the defense techniques, the big picture is that input validation and output encoding should fix (almost) all the problems, but very often various factors need to be considered when choosing the defense technique.
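As a toy illustration of output encoding, here is a minimal HTML entity encoder; in a real application you would rely on a vetted library (such as the OWASP Java Encoder) rather than rolling your own:

```java
public class HtmlEncode {

    // Replaces the characters that are significant in HTML with
    // entities, so untrusted data is rendered as text, not markup.
    public static String forHtml(String untrusted) {
        StringBuilder sb = new StringBuilder();
        for (char c : untrusted.toCharArray()) {
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(forHtml("<script>alert(1)</script>"));
    }
}
```

Note that this covers only the HTML body context; attributes, JavaScript and URLs each need their own encoding rules, which is exactly why a dedicated library is the safer choice.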
Chapter 5: Cross-Site Request Forgery Defense and Clickjacking
The chapter dedicated to Cross-Site Request Forgery (CSRF) has the same structure as the previous one (dedicated to XSS): the first part defines CSRF and how it works, and the second part presents mitigation strategies.
CSRF is an attack that can be used to force the victim to trigger unwanted actions on a web application in which they’re currently authenticated.
CSRF and XSS can be related, in the sense that an XSS vulnerability could be used to embed a CSRF attack in the victim's web site, but most importantly an XSS vulnerability can be used to defeat the CSRF defenses: XSS can be used to read (CSRF) tokens from any page, or to access cookies that do not have the HttpOnly flag.
The following mitigation techniques are presented:
synchronizer token pattern – an anti-CSRF token is created and stored in the user session and in a hidden field on subsequent form submits. At every submit the server checks that the token from the session matches the one submitted with the form. Tomcat 6+ implements this pattern; for more information see the CSRF Protection Filter.
double-submit cookie pattern – when a user authenticates to a site, the site should generate a (cryptographically strong) pseudo-random value and set it as a cookie on the user's machine, separate from the session ID. The site does not have to save this value in any way, thus avoiding server-side state. The site then requires that every transaction request include this random value as a hidden form value (or other request parameter). A cross-origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy. The AngularJS framework implements this pattern out of the box; see Cross Site Request Forgery (XSRF) Protection.
challenge/response pattern – the solution consists of asking the user for a value known only to him (re-entering the password, for example) in order to complete the action.
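The synchronizer token pattern can be sketched as follows. Class and method names are illustrative, and the map stands in for the real server-side HTTP session:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

public class CsrfTokens {

    private static final SecureRandom RANDOM = new SecureRandom();
    // Stand-in for the server-side session store.
    private final Map<String, String> sessionTokens = new HashMap<>();

    // Creates a random token, stores it in the session, and returns it
    // so it can be placed in a hidden form field.
    public String issueToken(String sessionId) {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        sessionTokens.put(sessionId, token);
        return token;
    }

    // On submit: the token from the form must match the one in the session.
    public boolean isValid(String sessionId, String submittedToken) {
        String expected = sessionTokens.get(sessionId);
        if (expected == null || submittedToken == null) return false;
        // Constant-time comparison to avoid timing side channels.
        return MessageDigest.isEqual(expected.getBytes(), submittedToken.getBytes());
    }
}
```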
Chapter 6: Protecting Sensitive Data
This chapter is articulated around three topics: how to protect (sensitive) data in transit, how to protect (sensitive) data at rest, and the generation of secure random numbers.
How to protect the data in transit
The standard way to protect data in transit is the use of the cryptographic protocol Transport Layer Security (TLS). In the case of web applications all the low-level details are handled by the web server/application server and by the client browser, but if you need a secure communications channel programmatically you can use the Java Secure Socket Extension (JSSE). The authors' recommendation regarding cipher suites is to use the JSSE defaults.
Another topic treated by the authors is certificate and key management in Java. The notions of truststore and keystore are very well explained, and examples of how to use the keytool tool are provided. Last but not least, examples of how to manipulate truststores and keystores programmatically are also provided.
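As a sketch of what programmatic keystore manipulation looks like with the JDK's `KeyStore` API (my own example, not the book's, and assuming a modern JDK whose PKCS12 keystore type can hold secret keys):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeystoreSketch {

    // Creates an in-memory PKCS12 keystore with one secret key,
    // serializes it, reloads it, and reports whether the key survived
    // the round trip. A real keystore would live in a .p12 file on disk,
    // typically managed with keytool.
    public static boolean roundTrip(char[] password) {
        try {
            KeyStore ks = KeyStore.getInstance("PKCS12");
            ks.load(null, password); // null stream = start empty
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            ks.setEntry("my-key",
                    new KeyStore.SecretKeyEntry(key),
                    new KeyStore.PasswordProtection(password));

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ks.store(out, password);
            KeyStore reloaded = KeyStore.getInstance("PKCS12");
            reloaded.load(new ByteArrayInputStream(out.toByteArray()), password);
            return reloaded.containsAlias("my-key");
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("changeit".toCharArray()));
    }
}
```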
How to protect data at rest
The goal is to store data securely but in a reversible way: the data must be wrapped in protection when it is stored, and the protection must be unwrapped later when the data is used.
For this kind of scenario, the authors focus on Keyczar, an open-source framework created by the Google Security Team whose goal is to make cryptography easier and safer for developers to use. Developers should not be able to inadvertently expose key material, use weak key lengths or deprecated algorithms, or improperly use cryptographic modes.
Examples are provided of how to use Keyczar for encryption (symmetric and asymmetric) and for signing purposes.
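This is not Keyczar itself, but a plain JCA sketch of the wrap/unwrap round trip that a framework like Keyczar automates, with the safe choices (authenticated AES-GCM, fresh random IV per message) made explicit instead of being left to the developer:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class DataAtRest {

    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12]; // 96-bit IV, the recommended size for GCM
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        // Prepend the IV so decryption can recover it.
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(128, blob, 0, 12)); // IV is the first 12 bytes
        return cipher.doFinal(blob, 12, blob.length - 12);
    }

    // Convenience round trip used for demonstration.
    public static String roundTrip(String plaintext) {
        try {
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            byte[] blob = encrypt(key, plaintext.getBytes(StandardCharsets.UTF_8));
            return new String(decrypt(key, blob), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("secret data"));
    }
}
```

What Keyczar adds on top of this, per the book, is key management: versioned key sets, rotation, and defaults that cannot be misconfigured.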
Chapter 7: SQL Injection and other injection attacks
This chapter is dedicated to injection attacks, SQL injection being treated in more detail than the other types of injection.
The SQL injection mechanism and the usual defenses are very well explained. What is interesting is that the authors propose solutions to limit the impact of SQL injection when the “classical” solution of query parametrization cannot be applied (in the case of legacy applications, for example): the use of input validation, the use of database permissions, and the verification of the number of results.
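The input validation fallback can be sketched as a strict whitelist check applied before the value ever reaches the query string (the pattern and the names below are a made-up example for an account identifier, not from the book):

```java
import java.util.regex.Pattern;

public class InputValidation {

    // Example whitelist: an account identifier of 1-16 letters or digits.
    // Anything containing quotes, spaces or SQL metacharacters fails.
    private static final Pattern ACCOUNT_ID = Pattern.compile("^[A-Za-z0-9]{1,16}$");

    public static String validatedAccountId(String untrusted) {
        if (untrusted == null || !ACCOUNT_ID.matcher(untrusted).matches()) {
            throw new IllegalArgumentException("invalid account id");
        }
        return untrusted;
    }

    public static void main(String[] args) {
        System.out.println(validatedAccountId("abc123"));
    }
}
```

This limits the blast radius in legacy code, but whenever possible the primary defense remains a parameterized query (`PreparedStatement` in JDBC).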
Other types of injections
XML injection, JSON-based injection and command injection are very briefly presented, and the main takeaways are the following:
use a safe parser (like JSON.parse) when parsing untrusted JSON.
when receiving untrusted XML, an XML schema should be applied to ensure proper XML structure.
when an XML query language (XPath) is intermixed with untrusted data, query parametrization or encoding is necessary.
Chapter 8: Safe File Upload and File I/O
The chapter is about how to safely handle files coming from external sources and how to protect against attacks like file path injection, null byte injection and quota-overload DoS.
The main takeaways are the following:
validate filenames (reject filenames containing dangerous characters like “/” or “\”).
set a per-user upload quota.
save the files to a non-accessible directory.
create a filename reference map linking the actual file name to a machine-generated name, and use this generated file name.
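The filename indirection takeaway can be sketched as follows (names are illustrative, and a real implementation would also persist the reference map and enforce the quota):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class UploadNames {

    // Maps the machine-generated name used on disk back to the
    // original name supplied by the user.
    private final Map<String, String> generatedToOriginal = new HashMap<>();

    // Reject path separators, traversal sequences and null bytes.
    public static boolean isSafeFilename(String name) {
        return name != null && !name.isEmpty()
                && !name.contains("/") && !name.contains("\\")
                && !name.contains("..") && name.indexOf('\0') < 0;
    }

    // Validates the original name, then returns the generated name
    // under which the file should actually be stored.
    public String register(String originalName) {
        if (!isSafeFilename(originalName)) {
            throw new IllegalArgumentException("unsafe filename");
        }
        String generated = UUID.randomUUID().toString();
        generatedToOriginal.put(generated, originalName);
        return generated; // use this on disk, never the user-supplied name
    }
}
```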
Chapter 9: Logging, Error Handling and Intrusion Detection
What should be logged: what happened, who did it, when it happened and what data was modified. What should not be logged: sensitive information like session IDs and personal information.
Some logging frameworks for security are presented like OWASP ESAPI Logging and Logback. If you are interested in more details about the security logging you can check OWASP Logging Cheat Sheet.
On error handling, the main idea is not to leak to the external world stack traces that could give an attacker valuable information about your application or infrastructure. This can be prevented by registering, at the application level, static pages for each type of error code or exception type.
The last part of the chapter is about techniques to help monitor, detect and defend against different types of attacks. Besides the “craft it yourself” solutions, the authors also present the OWASP AppSensor application.
Chapter 10: Secure Software Development Lifecycle
The last chapter is about the SSDLC (Secure Software Development Life Cycle) and how security can be included in each step of the development cycle. For me this chapter is not the best one, but if you are interested in this topic I highly recommend the Software Security: Building Security In book (you can read my own review of the book here, here and here).
This first chapter is a quick introduction to microservices: the definition, the genesis of the concept and the key benefits. The microservices idea has emerged from the (new) ways of crafting software today; these new ways imply the use of domain-driven design, continuous delivery, virtualization, infrastructure automation and small autonomous teams.
The author is defining the microservices as “small, autonomous services that work together”.
The key benefits of the microservices are:
technology heterogeneity; use the right tool for the right job.
resilience; because microservices have quite well-defined service boundaries, failures do not cascade, and it is easy to quickly find and isolate the problem(s).
scaling; microservices can be deployed and run independently, so it is possible to choose which microservices need special attention and scale them accordingly.
ease of deployment; microservices are independent by nature, so they can be (re)deployed individually.
optimizing for replaceability; due to their autonomous nature, microservices can easily be replaced with new versions.
Chapter 2: The Evolutionary Architect
This chapter is about the role of the architect in the new IT landscape; for the author, the qualities of an IT architect are the following: they should have a vision and be able to communicate it very clearly; they should have empathy, so they can understand the impact of their decisions on colleagues and customers; they should be able to collaborate with others in order to define and execute the vision; they should be adaptable, so they can change the vision as requirements change; and they should be autonomous, so they can find the right balance between standardizing and enabling autonomy for the team.
For me this chapter does not fit very well in the book, because all its ideas could just as well be applied to monolithic systems.
Chapter 3: How to model services
The goal of this chapter is to split services the right way by finding the boundaries between them. In order to find the right service boundaries, one must look at the problem from the model's point of view.
The author introduces the notion of bounded context, a notion coined by Eric Evans in his Domain-Driven Design book. Any domain consists of multiple bounded contexts, and within each reside components that do not need to be communicated outside, as well as things that should be shared externally with other bounded contexts. By thinking in terms of the model, it is possible to avoid the tight-coupling pitfall. So each bounded context represents an ideal candidate for a microservice.
This cut along the bounded context is rather a vertical slice, but in some situations, due to technical boundaries, the cut can be done horizontally.
Chapter 4: Integration
All the ideas of this chapter revolve around three axes: inter-microservice integration, user interface integration with microservices, and COTS (Commercial Off-The-Shelf software) integration with microservices.
For inter-microservice integration, different communication styles (synchronous versus asynchronous), different ways to manage (complex) business processes (orchestration versus choreography) and technologies (RPC, SOAP, REST) are very carefully explained, with all their advantages and drawbacks. The author tends to prefer the asynchronous-choreographic style using REST, but he emphasizes that there is no ideal solution.
Then some integration issues are tackled: the service versioning problem, and how to (wisely) reuse code between microservices and/or client libraries; no one-size-fits-all solution is proposed, just different options.
For user interface integration with microservices, some interesting ideas are presented, like creating different backend APIs if your microservices are used by different UI technologies (a backend API for the mobile application and a different backend API for the web application). Another interesting idea is to have services directly serving up UI components.
The part on integrating microservices with COTS is about the problems a team must solve in order to integrate with COTS: lack of control (the COTS could use a different technology stack than your microservices) and the difficult customization of COTS.
Chapter 5: Splitting the Monolith
The goal of this chapter is to present some ideas about how to split a monolithic application into microservices. The first proposed step is to find portions of the code that can be treated in isolation and worked on without impacting the rest of the codebase (these portions of code are called seams, a term coined by Michael Feathers in Working Effectively with Legacy Code). The seams are the perfect candidates for service boundaries.
The rest of the chapter is about how to find seams in the database and in the code that uses it. The overall idea is that every microservice should have its own independent (DB) schema. Different problems arise when this happens, like the foreign key relationship problem, shared static data stored in the DB, shared tables, and transactional boundaries. Each of these problems is discussed in detail and multiple solutions are presented.
The author recognizes that splitting the monolith is not trivial at all and that one should start very small (for the DB, for example, a first step would be to split the schema and keep the code as it was before; a second step would be to migrate some parts of the monolithic code towards microservices). He also recognizes that sometimes the splitting brings new problems (like the transactional boundaries).
Chapter 6: Deployment
This chapter presents different deployment techniques for microservices. The first topic tackled is how the microservices' code should be stored and how the Continuous Integration (CI) process should work; multiple options are discussed: one code repository for all microservices and one CI server; one code repository and one CI server per microservice; build pipelines per operating system or per platform artifact.
A second topic around deployment is the infrastructure on which the microservices are deployed. Multiple deployment scenarios are presented: all microservices deployed on the same host, one microservice per host, virtualized hosts, dockerized hosts. The most important idea in this part is that all the deployment and host creation should be automated; automation is essential for keeping the team(s) productive.
This is a review of the third part of the Software Security: Building Security In book. This part is dedicated to how to introduce a software security program in your company; it is something I am much less interested in than the previous topics, so the review will be quite short.
Chapter 10: An Enterprise Software Security Program
The chapter contains some ideas about how to ignite a software security program in a company. The first and most important idea is that software security practices must have a clear and explicit connection with the business mission; the goal of (any) software is to fulfill the business needs.
In order to adopt an SDL (Secure Development Lifecycle), the author proposes a roadmap of five steps:
Build a plan that is tailored for you. Start from how software is built today, then plan the building blocks for future change.
Roll out individual best practice initiatives carefully. Establish champions to take ownership of each initiative.
Train your people. Train the developers and (IT) architects to be aware of security and of the central role they play in the SDL (Secure Development Lifecycle).
Establish a metrics program. In order to measure progress, some metrics are needed.
Establish and sustain a continuous improvement capability. Create a situation in which continuous improvement can be sustained by measuring results and refocusing on the weakest aspect of the SDL.
Chapter 11: Knowledge for Software Security
For the author there is a clear difference between knowledge and information: knowledge is information in context, information put to work using processes and procedures. Because knowledge is so important, the author proposes a way to structure software security knowledge called the “Software Security Unified Knowledge Architecture”:
The Software Security Unified Knowledge Architecture has seven catalogs in three categories:
the prescriptive knowledge category includes three knowledge catalogs: principles, guidelines and rules. The principles represent high-level architectural principles, the rules contain tactical low-level rules, and the guidelines sit in the middle of the two.
the diagnostic knowledge category includes three knowledge catalogs: attack patterns, exploits and vulnerabilities. Vulnerabilities includes descriptions of software vulnerabilities, exploits describe how instances of vulnerabilities are exploited, and attack patterns describe common sets of exploits in a form that can be applied across multiple systems.
the historical knowledge category includes the historical risk catalog.
Another initiative worth mentioning is Build Security In, an initiative of the Department of Homeland Security's National Cyber Security Division.
Chapter 3: Introduction to Software Security Touchpoints
This is an introductory chapter for the second part of the book. A very brief description is given for every security touchpoint.
Each touchpoint is applied to a specific artifact, and each touchpoint represents either a destructive or a constructive activity. Based on the author's experience, the touchpoints are ranked by effectiveness, giving the ideal order in which they should be implemented.
Another idea worth mentioning is that the author proposes adding the security aspects as early as possible in the software development cycle, moving left as much as possible (see the next figure, which pictures the applicability of the touchpoints in the development cycle); for example, it is much better to integrate security in the requirements or architecture and design phases (using the risk analysis and abuse cases touchpoints) than to wait for penetration testing to find the problems.
Chapter 4: Code review with a tool
For the author, code review is essential for finding security problems early in the process. Tools (static analysis tools) can help the user do a better job, but the user should also try to understand the output from the tool; it is very important not to simply expect that the tool will find all the security problems with no further analysis.
In the chapter a few tools (commercial or not) are named, like CQual, xg++, BOON, RATS and Fortify (which has its own paragraph), but the most important part is the list of key characteristics that a good static analysis tool should have, along with some characteristics to avoid.
The key characteristics of a static analysis tool:
be designed for security
support multi-tier architectures
be useful for security analysts and developers
support existing development processes
The key characteristics of a static analysis tool to avoid:
too many false positives
spotty integration with the IDE
single-minded support for C language
Chapter 5: Architectural Risk Analysis
Around 50% of security problems are the result of design flaws, so performing an architectural risk analysis at the design level is an important part of a solid software security program.
In the beginning of the chapter the author very briefly presents some existing security risk analysis methodologies: STRIDE (Microsoft), OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation), and COBIT (Control Objectives for Information and Related Technologies).
In the last part of the chapter the author presents the Cigital way of performing architectural risk analysis. The process has three steps:
attack resistance analysis – its goal is to determine how the system behaves against known attacks.
ambiguity analysis – its goal is to discover new types of attacks or risks, so it relies heavily on the experience of the people performing the analysis.
weakness analysis – its goal is to understand and assess the impact of external software dependencies.
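As a toy illustration of the first step, attack resistance analysis can be thought of as checking each component of a system description against a checklist of known attacks. The component names, their properties, and the attack checklist below are all invented for the example.

```python
# Hypothetical system description: component -> properties of its design.
system = {
    "login-form": {"input": "untrusted", "sql": True, "encrypted": False},
    "report-api": {"input": "untrusted", "sql": False, "encrypted": True},
}

# Known-attack checklist: attack name -> predicate that spots exposure.
KNOWN_ATTACKS = {
    "SQL injection": lambda c: c["input"] == "untrusted" and c["sql"],
    "eavesdropping": lambda c: not c["encrypted"],
}

def attack_resistance_analysis(system):
    """List (component, attack) pairs where a known attack may apply."""
    return [(name, attack)
            for name, props in system.items()
            for attack, exposed in KNOWN_ATTACKS.items()
            if exposed(props)]

for component, attack in attack_resistance_analysis(system):
    print(f"{component}: exposed to {attack}")
```

The ambiguity and weakness steps resist this kind of mechanization, which is the chapter's point: they depend on expert judgment about the design, not on a checklist.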
Chapter 6: Software Penetration Testing
The chapter starts by presenting how penetration testing is done today. For the author, penetration tests are misused, serving as a “feel-good exercise in pretend security”. The main problem is that the results of a penetration test cannot guarantee that the system is secure once all the discovered vulnerabilities have been fixed, yet the findings are treated as a final list of issues to fix.
So, for the author, penetration tests are best suited to probing (live-like) configuration problems and other environmental factors that deeply impact software security. Another idea is to use the architectural risk analysis as a driver for the penetration tests (the risk analysis can point to the weaker parts of the system, or suggest some attack angles). Finally, the findings should be treated as a representative sample of the faults in the system, and all of them should be fed back into the development cycle.
Chapter 7: Risk-Based Security Testing
Security testing should start at the feature or component/unit level and (like penetration testing) should use the items from the architectural risk analysis to identify risks. Security testing should then continue at the system level, directed at the properties of the integrated software system. Basically, all the test types that exist today (unit tests, integration tests) should also have a security component and a security mindset applied.
Security testing should involve two approaches:
functional security testing: testing security mechanisms to ensure that their functionality is properly implemented (a kind of white hat philosophy).
adversarial security testing: tests that simulate the attacker’s approach (a kind of black hat philosophy).
For the author, penetration testing represents an outside->in type of approach, while security testing represents an inside->out approach focusing on the software product’s “guts”.
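A minimal sketch of the two approaches, using an invented `is_safe_filename` check as the unit under test: the functional tests verify that the mechanism works for legitimate input, while the adversarial tests probe it with attacker-style input such as path traversal.

```python
import os.path

def is_safe_filename(name: str) -> bool:
    """Unit under test: accept only plain file names, no path tricks."""
    return name != "" and os.path.basename(name) == name and ".." not in name

# Functional security testing: the mechanism accepts legitimate input
# (white hat philosophy -- does the control work as specified?).
assert is_safe_filename("report.pdf")
assert is_safe_filename("photo_2024.png")

# Adversarial security testing: think like an attacker
# (black hat philosophy -- can the control be bypassed?).
assert not is_safe_filename("../../etc/passwd")   # relative path traversal
assert not is_safe_filename("/etc/passwd")        # absolute path
assert not is_safe_filename("")                   # empty input
```

The point of the pairing is that a purely functional suite would pass even if the traversal cases slipped through; the adversarial cases encode the risks identified during the architectural risk analysis.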
Chapter 8: Abuse Case Development
Abuse case development is done in the requirements phase and is intimately linked to the requirements and use cases. The basic idea is that, just as we define requirements that express how the system should behave under correct usage, we should also define how the system should behave when it is abused.
This is the process proposed by the author for building abuse cases.
The abuse cases are created from two sources: anti-requirements (things that you don’t want your software to do) and attack models, which are known attacks or attack types that can apply to your system. Once created, the abuse cases can be used as an entry point for security testing and, especially, for the architectural risk analysis.
The main idea is that security operations people and software developers should work together, and each group can (and should) learn from the other.
The security operations people have the security mindset and can apply this mindset and their experience to some of the touchpoints presented previously, mainly abuse cases, security testing, architectural risk analysis, and penetration testing.
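That derivation can be sketched as crossing each use case with the attack models that apply to it, yielding candidate abuse cases; the use cases, tags, and attack models below are invented for illustration.

```python
# Use cases describe intended behaviour; tags say what each one touches.
use_cases = {
    "user logs in": {"authentication"},
    "user uploads an avatar": {"file-upload"},
}

# Attack models: known attack -> tags of the functionality it targets.
attack_models = {
    "credential brute force": {"authentication"},
    "malicious file upload": {"file-upload"},
    "session hijacking": {"authentication"},
}

def derive_abuse_cases(use_cases, attack_models):
    """Pair every use case with each attack model that shares a tag."""
    return sorted(
        f"{use_case} / {attack}"
        for use_case, tags in use_cases.items()
        for attack, targets in attack_models.items()
        if tags & targets
    )

for abuse_case in derive_abuse_cases(use_cases, attack_models):
    print(abuse_case)
```

Each resulting pair ("user logs in / credential brute force", and so on) is only a starting point; the anti-requirements side of the process still needs human judgment about what the software must never do.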
This chapter lays out the landscape for the entire book; the author presents his view of today’s challenges in building secure, hole-free software.
In today’s world, software is everywhere, from microwave ovens to nuclear power stations, so the “old view” of treating software applications as “black boxes” that can be protected using firewalls and IDS/IPS is no longer valid.
And just to make the problem even harder, computing systems and software applications are more and more interconnected, must be extensible, and have more and more complex features.
The author proposes a taxonomy of the security problems that can affect software applications:
defect: a problem that may lie dormant in software only to surface in a fielded system with major consequences.
bug: an implementation-level software problem; bugs are fairly simple implementation errors, and a large range of tools is capable of detecting them.
flaw: a problem at a deeper level; a flaw may be present at the code level, but it can also be present or absent at the design level. What is very important to note is that automated technologies for detecting design-level flaws do not yet exist, though manual risk analysis can identify flaws.
risk: flaws and bugs lead to risk. Risks capture the chance that a flaw or a bug will impact the purpose of the software.
To solve the software security problem, the author proposes a cultural shift based on three pillars: applied risk management, software security touchpoints, and knowledge.
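To make the bug/flaw distinction concrete, here is a contrived sketch: the first problem is a local, implementation-level bug that a static analysis tool can flag on a single line, while the second is a design-level flaw (the design never says who may delete an account) that no line-by-line scan will reveal. The table and data are invented for the example.

```python
import sqlite3

# In-memory database with two invented users for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id TEXT, name TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [("1", "alice"), ("2", "bob")])

def get_user(user_id: str):
    # BUG (implementation level): the query is built by string
    # concatenation -- a classic SQL injection that a static analysis
    # tool can flag on this very line.
    query = "SELECT name FROM users WHERE id = '" + user_id + "'"
    return db.execute(query).fetchall()

def delete_account(user_id: str):
    # FLAW (design level): the design never states *who* is allowed to
    # delete an account, so there is no authorization check here to
    # scan for -- only a risk analysis of the design can surface this.
    db.execute("DELETE FROM users WHERE id = ?", (user_id,))

print(get_user("1"))              # intended use
print(get_user("x' OR '1'='1"))  # the bug exploited: returns every row
```

Note that `delete_account` is written with a parameterized query, so it contains no bug at all; the risk comes entirely from what the design omits.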
Pillar 1 Applied Risk Management
For the author, risk management covers two different parts: the application of risk analysis at the architectural level (also known as threat modeling, security design analysis, or architectural risk analysis), and the tracking and mitigation of risks as a full life-cycle activity (the author calls this approach the risk management framework – RMF).
Pillar 2 Software Security Touchpoints
Touchpoint it’s just a fancy word for “best practices”. Today there are best practices for design and coding of software system and as the security became a property of a software system, then best practices should also be used to tackle the security problems.Here are the (seven) touch points and where exactly are applied in the development process.
The idea is to introduce as deeply as possible the touch points in the development process. The part 2 of the book is dedicated to the touchpoints.
Pillar 3 Knowledge
For the author, knowledge management and training should play a central role in encapsulating and sharing security knowledge. Software security knowledge can be organized into seven knowledge catalogs:
How to build security knowledge is treated in Part 3 of the book.
Chapter 2: A risk management framework
This chapter presents in more detail a framework for mitigating risks as a full-lifecycle activity; the author calls this framework the RMF (Risk Management Framework).
The purpose of the RMF is to enable a consistent, repeatable, expert-driven approach to risk management; the main goal is to find, rank, track, and understand the software security risks and how these risks can affect critical business decisions.
The RMF consists of five steps:
understand the business context. The goal of this step is to describe the business goals in order to understand which types of software risks to care about.
identify the business and technical risks. Business risk identification helps to define and steer the use of particular technological methods for measuring and mitigating software risk. The technical risks should be identified and mapped (through business risks) to business goals.
synthesize and rank the risks. The ranking of the risks should take into account which business goals are the most important, which business goals are immediately threatened, and how the technical risks will impact the business.
define a risk mitigation strategy. Once the risks have been identified, the mitigation strategy should take into account cost, implementation time, likelihood of success, and impact.
carry out required fixes and validate that they are correct. This step represents the execution of the risk mitigation strategy; metrics should be defined to measure the progress against risks and the open risks remaining.
Even if the framework steps are presented sequentially, in practice they can overlap and can occur in parallel with standard software development activities. The RMF can actually be applied at several different levels: project level, software lifecycle phase level, requirements analysis level, or use case analysis level.
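The synthesize-and-rank step can be sketched as a simple scoring exercise over risks that have already been mapped to business goals; the risks, likelihood estimates, and impact weights below are invented for illustration.

```python
# Invented technical risks, each mapped (through a business risk) to a
# business goal, with a likelihood estimate and a business-impact weight.
risks = [
    {"risk": "SQL injection in billing", "goal": "revenue",
     "likelihood": 0.6, "impact": 9},
    {"risk": "verbose error pages", "goal": "reputation",
     "likelihood": 0.9, "impact": 2},
    {"risk": "weak session tokens", "goal": "customer trust",
     "likelihood": 0.3, "impact": 8},
]

def rank(risks):
    """Order risks by expected business impact (likelihood * impact)."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"],
                  reverse=True)

for r in rank(risks):
    score = r["likelihood"] * r["impact"]
    print(f"{score:.1f}  {r['risk']}  (goal: {r['goal']})")
```

Note how the ranking follows business exposure rather than technical severity alone: the highly likely but low-impact risk ends up last, which is exactly the trade-off the RMF's ranking step is meant to make explicit.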