Book review: Building Microservices (part 1)

This is the first part of the review of the Building Microservices book.

Chapter 1: Microservices

This first chapter is a quick introduction to microservices: the definition, the genesis of the concept and the key benefits. The microservices idea has emerged from the (new) ways of crafting software today; these new ways imply the use of domain-driven design, continuous delivery, virtualization, infrastructure automation and small autonomous teams.

The author defines microservices as “small, autonomous services that work together”.

The key benefits of microservices are:

  • technology heterogeneity; use the right tool for the right job.
  • resilience; because microservices have quite well-defined service boundaries, failures do not cascade and it is easy to quickly find and isolate the problem(s).
  • scaling; microservices can be deployed and run independently, so it is possible to choose which microservices need special attention and scale them accordingly.
  • ease of deployment; microservices are independent by nature, so each one can be (re)deployed individually.
  • optimizing for replaceability; due to their autonomous nature, microservices can easily be replaced with new versions.

Chapter 2: The Evolutionary Architect

This chapter is about the role of the architect in the new IT landscape. For the author, the qualities of an IT architect are the following: they should have a vision and be able to communicate it very clearly; they should have empathy, so they can understand the impact of their decisions on colleagues and customers; they should be able to collaborate with others in order to define and execute the vision; they should be adaptable, so they can change the vision as the requirements change; and they should be autonomous, finding the right balance between standardizing and enabling autonomy for the teams.

For me this chapter does not fit very well in the book, because all the ideas in it could equally be applied to monolithic systems.

Chapter 3: How to model services

The goal of this chapter is to split services the right way by finding the boundaries between them. In order to find the right service boundaries, the problem must be seen from the modeling point of view.

The author introduces the notion of bounded context, a notion coined by Eric Evans in his Domain-Driven Design book. Any domain consists of multiple bounded contexts; within each one reside things that do not need to be communicated outside, as well as things that are shared externally with other bounded contexts. By thinking in terms of models it is possible to avoid the tight coupling pitfall, so each bounded context represents an ideal candidate for a microservice.
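
To make the idea more concrete, here is a minimal Java sketch (my own illustration, with hypothetical names, not taken from the book): a warehouse bounded context keeps its internal model to itself and shares only a small contract with other bounded contexts.

```java
// Hypothetical sketch of a bounded context: the shared contract is the only
// thing other bounded contexts (e.g. finance) are allowed to depend on.
package warehouse;

// Shared (external) model: exposed to other bounded contexts.
public interface WarehouseStock {
    // Quantity currently in stock for a given item.
    int quantityInStock(String itemId);
}

// Internal model: shelf locations, picking lists, etc. are never exposed
// outside, so they can change freely without breaking other contexts.
class ShelfLocation {
    final String aisle;
    final int level;

    ShelfLocation(String aisle, int level) {
        this.aisle = aisle;
        this.level = level;
    }
}
```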

This cut along bounded contexts is rather a vertical slice, but in some situations, due to technical boundaries, the cut can be done horizontally.

Chapter 4: Integration

All the ideas of this chapter are organized around three axes: inter-microservice integration, user interface integration with microservices, and COTS (Commercial Off-The-Shelf software) integration with microservices.

For inter-microservice integration, different communication styles (synchronous versus asynchronous), different ways to manage (complex) business processes (orchestration versus choreography) and technologies (RPC, SOAP, REST) are very carefully explained, with all their advantages and drawbacks. The author tends to prefer an asynchronous, choreography-based approach using REST, but he emphasizes that there is no ideal solution.
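
As an illustration of the orchestration/choreography distinction, here is a minimal Java sketch with hypothetical names (the customer, loyalty and email services and the EventBus are mine, not from the book): the orchestrated version calls its collaborators directly, while the choreographed version only publishes an event that the collaborators subscribe to.

```java
// Hypothetical sketch: orchestration versus choreography for a "customer created" flow.

// Orchestration: the customer service drives the whole process itself.
class OrchestratedCustomerService {
    private final LoyaltyService loyaltyService;
    private final EmailService emailService;

    OrchestratedCustomerService(LoyaltyService loyaltyService, EmailService emailService) {
        this.loyaltyService = loyaltyService;
        this.emailService = emailService;
    }

    void createCustomer(String customerId) {
        // ... persist the customer ...
        loyaltyService.createAccount(customerId);   // direct, synchronous calls
        emailService.sendWelcomeEmail(customerId);
    }
}

// Choreography: the customer service only announces what happened; the loyalty
// and email services subscribe to the event and react on their own.
class ChoreographedCustomerService {
    private final EventBus eventBus;

    ChoreographedCustomerService(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    void createCustomer(String customerId) {
        // ... persist the customer ...
        eventBus.publish(new CustomerCreated(customerId));
    }
}

// Minimal supporting types for the sketch.
interface LoyaltyService { void createAccount(String customerId); }
interface EmailService { void sendWelcomeEmail(String customerId); }
interface EventBus { void publish(Object event); }
class CustomerCreated {
    final String customerId;
    CustomerCreated(String customerId) { this.customerId = customerId; }
}
```

The choreographed version keeps the services more loosely coupled, at the price of making the overall business process harder to see in one place.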

Then some integration issues are tackled, such as the service versioning problem or how to (wisely) reuse code between microservices and/or client libraries; no one-size-fits-all solution is proposed, just different options.

For the user interface integration with microservices some interesting ideas are presented, like the creation of a dedicated backend API for each UI technology that consumes your microservices (one backend API for the mobile application and a different backend API for the web application). Another interesting idea is to have services directly serving up UI components.
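
Here is a rough sketch of what such a dedicated backend for a mobile UI could look like with JAX-RS; the resource and the two downstream clients are hypothetical, not taken from the book.

```java
// Hypothetical "backend for the mobile frontend": a mobile-specific API that
// aggregates several downstream microservices into one lean payload.
import java.util.HashMap;
import java.util.Map;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/mobile/customers")
public class MobileCustomerResource {

    private final CustomerClient customerClient = new CustomerClient();
    private final OrderClient orderClient = new OrderClient();

    @GET
    @Path("/{id}/summary")
    @Produces(MediaType.APPLICATION_JSON)
    public Map<String, String> summary(@PathParam("id") String id) {
        // The web application would get its own, richer backend instead of this one.
        Map<String, String> summary = new HashMap<>();
        summary.put("name", customerClient.name(id));
        summary.put("lastOrderStatus", orderClient.lastOrderStatus(id));
        return summary;
    }
}

// Minimal stand-ins for the downstream microservice clients, assumed for the sketch.
class CustomerClient { String name(String id) { return "..."; } }
class OrderClient { String lastOrderStatus(String id) { return "..."; } }
```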

The COTS integration part is about the problems a team has to solve in order to integrate with COTS: lack of control (the COTS product could use a different technology stack than your microservices) and difficult customization of the COTS product.

Chapter 5: Splitting the Monolith

The goal of this chapter is to present some ideas about how to split a monolithic application into microservices. The first proposed step is to find portions of the code that can be treated in isolation and worked on without impacting the rest of the codebase (these portions of code are called seams, a term coined by Michael Feathers in Working Effectively with Legacy Code). The seams are the perfect candidates for the service boundaries.
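
A minimal Java sketch of what creating a seam can look like (my own illustration, with hypothetical billing names): the monolith's billing logic is hidden behind an interface, so the rest of the codebase stops depending on its internals and the implementation can later be extracted into its own service.

```java
// Hypothetical seam: callers depend only on this interface, not on the
// internals of the billing code.
public interface Billing {
    void charge(String customerId, long amountInCents);
}

// Current implementation, still living inside the monolith.
class InProcessBilling implements Billing {
    @Override
    public void charge(String customerId, long amountInCents) {
        // existing monolithic billing code goes here
    }
}

// Later, a drop-in replacement can call an extracted billing microservice
// over the network without touching any caller of the Billing interface.
```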

In order to easily refactor the code and create the seams, the author recommends the Structure101 application, which is an ADE (Architecture Development Environment); for alternatives to Structure101 you can see Architecture Development Environment: Top 5 Alternatives to Structure101.

The rest of the chapter is about how to find seams in the database and in the code that uses it. The overall idea is that every microservice should have its own independent (DB) schema. Different problems arise when doing this, like foreign key relationships, shared static data stored in the DB, shared tables and transactional boundaries. Each of these problems is discussed in detail and multiple solutions are presented.
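
To illustrate the foreign key problem, here is a small hedged sketch (hypothetical ledger/catalog names, mine, not the book's): once the schemas are split, what used to be a SQL join across a foreign key becomes an explicit call to the service that owns the data.

```java
// Hypothetical sketch: the finance schema keeps only the item id, with no
// DB-level foreign key into the catalog schema.
class LedgerEntry {
    final String itemId;        // plain identifier, not a foreign key
    final long amountInCents;

    LedgerEntry(String itemId, long amountInCents) {
        this.itemId = itemId;
        this.amountInCents = amountInCents;
    }
}

interface CatalogClient {
    // Resolves the item name via the catalog microservice's API (e.g. REST).
    String itemName(String itemId);
}

class LedgerReport {
    private final CatalogClient catalog;

    LedgerReport(CatalogClient catalog) {
        this.catalog = catalog;
    }

    String describe(LedgerEntry entry) {
        // What used to be a SQL join is now an explicit call across the service boundary.
        return catalog.itemName(entry.itemId) + ": " + entry.amountInCents;
    }
}
```

The price of this change is an extra network call and the loss of database-enforced integrity, which is exactly the kind of trade-off the chapter discusses.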

The author recognizes that splitting the monolith is not trivial at all and that it should start very small (for the DB, for example, a first step would be to split the schema while keeping the code as it was before; a second step would be to migrate some parts of the monolithic code towards microservices). He also recognizes that the splitting sometimes brings new problems (like transactional boundaries).

Chapter 6: Deployment

This chapter presents different deployment techniques for microservices. The first topic tackled is how the microservices code should be stored and how the Continuous Integration (CI) process should work; multiple options are discussed: one code repository and one CI server for all microservices; one code repository and one CI build per microservice; build pipelines per operating system or directly per platform-specific artifact.

A second topic around deployment is the infrastructure on which the microservices are deployed. Multiple deployment scenarios are presented: all microservices deployed on the same host, one microservice per host, virtualized hosts, Dockerized hosts. The most important idea in this part is that all the deployment and host creation should be automated; automation is essential for keeping the team(s) productive.


Book review: Software Security: Building Security in – Part III: Software Security Grows Up

This is a review of the third part of the Software Security: Building Security in book. This part is dedicated to how to introduce a software security program in your company; it is something I am much less interested in than the previous topics, so the review will be quite short.

Chapter 10: An Enterprise Software Security Program

The chapter contains some ideas about how to ignite a software security program in a company. The first and most important idea is that the software security practices must have a clear and explicit connection with the business mission; the goal of (any) software is to fulfill the business needs.

In order to adopt an SDL (Secure Development Lifecycle), the author proposes a roadmap of five steps:

  1. Build a plan that is tailored for you. Start from how the software is built at present, then plan the building blocks for future change.
  2. Roll out individual best practice initiatives carefully. Establish champions to take ownership of each initiative.
  3. Train your people. Train the developers and (IT) architects to be aware of security and of the central role they play in the SDL (Secure Development Lifecycle).
  4. Establish a metrics program. In order to measure progress, some metrics are needed.
  5. Establish and sustain a continuous improvement capability. Create a situation in which continuous improvement can be sustained by measuring results and refocusing on the weakest aspect of the SDL.

Chapter 11: Knowledge for Software Security

For the author there is a clear difference between knowledge and information: knowledge is information in context, information put to work using processes and procedures. Because knowledge is so important, the author proposes a way to structure software security knowledge called the “Software Security Unified Knowledge Architecture”:

Software Security Unified Knowledge Architecture (figure)

The Software Security Unified Knowledge Architecture has seven catalogs in three categories:

  • the prescriptive knowledge category includes three knowledge catalogs: principles, guidelines and rules. The principles represent high-level architectural principles, the rules contain tactical low-level rules, and the guidelines sit in the middle between the two.
  • the diagnostic knowledge category includes three knowledge catalogs: attack patterns, exploits and vulnerabilities. Vulnerabilities include descriptions of software vulnerabilities, the exploits describe how instances of vulnerabilities are exploited, and the attack patterns describe common sets of exploits in a form that can be applied across multiple systems.
  • the historical knowledge category includes the historical risk catalog.

Another initiative that is worth mentioning is Build Security In, an initiative of the Department of Homeland Security’s National Cyber Security Division.

Book review: Software Security: Building Security in – Part II: Seven Touchpoints for Software Security

This is a review of the second part of the Software Security: Building Security in book.

Chapter 3: Introduction to Software Security Touchpoints

This is an introductory chapter for the second part of the book. A very brief description is given of every security touchpoint.

Each touchpoint is applied on a specific artifact and represents either a destructive or a constructive activity. Based on the author's experience, the ideal order, in terms of effectiveness, in which the touchpoints should be implemented is the following one:

  1. Code Review (Tools). Artifact: code. Constructive activity.
  2. Architectural Risk Analysis. Artifact: design and specification. Constructive activity.
  3. Penetration Testing. Artifact: system in its environment. Destructive activity.
  4. Risk-Based Security Testing. Artifact: system. Mix of destructive and constructive activities.
  5. Abuse Cases. Artifact: requirements and use cases. Predominantly destructive activity.
  6. Security Requirements. Artifact: requirements. Constructive activity.
  7. Security Operations. Artifact: fielded system. Constructive activity.

Another idea worth mentioning is that the author proposes adding the security aspects as early as possible in the software development cycle, moving left as much as possible (see the next figure, which pictures the applicability of the touchpoints in the development cycle); for example, it is much better to integrate security in the requirements or in the architecture and design (using the risk analysis and abuse cases touchpoints) rather than waiting for the penetration testing to find the problems.

Security touchpoints (figure)

Chapter 4: Code review with a tool

For the author, code review is essential for finding security problems early in the process. The tools (static analysis tools) can help the user do a better job, but the user should also try to understand the output of the tool; it is very important not to simply expect that the tool will find all the security problems with no further analysis.
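
To give an idea of what these tools catch, here is a classic example (mine, not taken from the book) of the kind of implementation-level problem most static analysis tools flag, together with the usual fix.

```java
// Typical finding of a static analysis tool: user input concatenated into a
// SQL query (SQL injection), and the parameterized-query fix.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class UserLookup {

    // Flagged: the userName parameter flows unescaped into the query string.
    ResultSet findUserUnsafe(Connection connection, String userName) throws SQLException {
        Statement statement = connection.createStatement();
        return statement.executeQuery("SELECT * FROM users WHERE name = '" + userName + "'");
    }

    // Usual fix: a parameterized query; the driver handles the escaping.
    ResultSet findUserSafe(Connection connection, String userName) throws SQLException {
        PreparedStatement statement =
                connection.prepareStatement("SELECT * FROM users WHERE name = ?");
        statement.setString(1, userName);
        return statement.executeQuery();
    }
}
```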

In the chapter a few tools (commercial or not) are named, like CQual, xg++, BOON, RATS and Fortify (which has its own paragraph), but the most important part is the list of key characteristics that a good static analysis tool should have, and of the characteristics to avoid.

The key characteristics of a static analysis tool:

  • be designed for security
  • support multi-tier architectures
  • be extensible
  • be useful for security analysts and developers
  • support existing development processes

The characteristics to avoid in a static analysis tool:

  • too many false positives
  • spotty integration with the IDE
  • single-minded support for C language

Chapter 5: Architectural Risk Analysis

Around 50% of security problems are the result of design flaws, so performing an architectural risk analysis at the design level is an important part of a solid software security program.

In the beginning of the chapter the author presents very briefly some existing security risk analysis methodologies: STRIDE (Microsoft), OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation) and COBIT (Control Objectives for Information and Related Technologies).

Two things are very important for the author: the ARA (architectural risk analysis) must be integrated in and with the Risk Management Framework (presented briefly in Book review: Software Security: Building Security in – Part I: Software Security Fundamentals), and we must have a “forest-level” view of the system.

In the last part of the chapter the author presents the Cigital way of performing architectural risk analysis. The process has three steps:

  1. attack resistance analysis – its goal is to define how the system should behave against known attacks.
  2. ambiguity analysis – its goal is to discover new types of attacks or risks, so it relies heavily on the experience of the people performing the analysis.
  3. weakness analysis – its goal is to understand and assess the impact of external software dependencies.
Process diagram for architectural risk analysis (figure)

Chapter 6: Software Penetration Testing

The chapter starts by presenting how penetration testing is done today. For the author, penetration tests are misused, serving as a “feel-good exercise in pretend security”. The main problem is that the penetration test results are treated as a final list of issues to be fixed, and fixing all the found vulnerabilities cannot guarantee that the system is secure.

So, for the author, penetration tests are best suited to probing (live-like) configuration problems and other environmental factors that deeply impact software security. Another idea is to use the architectural risk analysis as a driver for the penetration tests (the risk analysis can point to the weaker part(s) of the system, or suggest some attack angles). Yet another idea is to treat the findings as a representative sample of the faults in the system; all the findings should be incorporated back into the development cycle.

Chapter 7: Risk-Based Security Testing

Security testing should start at the feature or component/unit level and (like penetration testing) should use the items from the architectural risk analysis to identify risks. The security testing should also continue at the system level and be directed at the properties of the integrated software system. Basically, all the test types that exist today (unit tests, integration tests) should also have a security component and a security mindset applied.

The security testing should involve two approaches:

  • functional security testing: testing the security mechanisms to ensure that their functionality is properly implemented (a kind of white hat philosophy).
  • adversarial security testing: tests that simulate the attacker’s approach (a kind of black hat philosophy).

For the author, penetration testing represents an outside->in type of approach, while security testing represents an inside->out approach, focusing on the software product’s “guts”.
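
A small JUnit sketch of the two approaches on an imaginary AccountService (all names are hypothetical, mine rather than the book's): the first test verifies that a security mechanism does what it promises, the second one plays the attacker.

```java
// Hypothetical sketch of functional versus adversarial security tests.
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AccountSecurityTest {

    private final AccountService accounts = new AccountService();

    // Functional security testing: the lockout mechanism does what it promises.
    @Test
    public void accountIsLockedAfterThreeFailedLogins() {
        for (int i = 0; i < 3; i++) {
            accounts.login("alice", "wrong-password");
        }
        assertTrue(accounts.isLocked("alice"));
    }

    // Adversarial security testing: think like an attacker; here, an injection
    // attempt in the user name should not bypass authentication.
    @Test
    public void injectionInUserNameDoesNotBypassAuthentication() {
        assertFalse(accounts.login("' OR '1'='1", "anything"));
    }
}

// Minimal stand-in for the system under test, assumed for the sketch.
class AccountService {
    boolean login(String user, String password) { return false; }
    boolean isLocked(String user) { return true; }
}
```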

Chapter 8: Abuse Case Development

The abuse case development is done in the requirements phase and is intimately linked to the requirements and use cases. The basic idea is that, just as we define requirements that express how the system should behave under correct usage, we should also define how the system should behave when it is abused.

This is the process that is proposed by the author to build abuse cases.

Diagram for building abuse cases (figure)

The abuse cases are created using two sources: the anti-requirements (things that you do not want your software to do) and the attack models, which are known attacks or attack types that can apply to your system. Once they are done, the abuse cases can be used as an entry point for security testing and especially for the architectural risk analysis.

Chapter 9: Software Security Meets Security Operations

The main idea is that the security operations people and the software developers should work together, and each group can (and should) learn from the other.

The security operations people have the security mindset and can use this mindset and their experience in some of the touchpoints presented previously, mainly abuse cases, security testing, architectural risk analysis and penetration testing.

Book review: Software Security: Building Security in – Part I: Software Security Fundamentals

This is a review of the first part of the Software Security: Building Security in book.

Chapter 1: Defining a discipline

This chapter lays out the landscape for the entire book; the author presents his view on today's challenges in building secure, hole-free software.
In today's world software is everywhere, from microwave ovens to nuclear power stations, so the “old view” of seeing software applications as “black boxes” that can be protected using firewalls and IDS/IPS is not valid anymore.

And just to make the problem even harder, computing systems and software applications are more and more interconnected, must be extensible and have more and more complex features.

The author proposes a taxonomy of the security problems that can affect software applications:

  • defect: a defect is a problem that may lie dormant in software only to surface in a fielded system with major consequences.
  • bug: an implementation-level software problem; bugs are fairly simple implementation errors, and a large panel of tools is capable of detecting a range of them.
  • flaw: a problem at a deeper level; a flaw is something that can be present at the code level but can also be present or absent at the design level. What is very important to note is that automated technologies to detect design-level flaws do not yet exist, though a manual risk analysis can identify flaws (see the sketch after this list).
  • risk: flaws and bugs lead to risk. Risks capture the chance that a flaw or a bug will impact the purpose of the software.
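
A small Java illustration of the bug/flaw distinction (my own example, not taken from the book):

```java
// Bug: an implementation-level error a static analysis tool can spot,
// e.g. comparing secrets with == instead of equals().
class TokenChecker {
    boolean isValid(String presented, String expected) {
        return presented == expected;   // bug: reference comparison of strings
    }
}

// Flaw: the code below is "correct", but the design decision to store and
// compare passwords in clear text is a design-level problem that no
// implementation-focused tool will report; only a risk analysis will.
class ClearTextPasswordStore {
    private final java.util.Map<String, String> passwords = new java.util.HashMap<>();

    void register(String user, String password) {
        passwords.put(user, password);  // stored in clear text by design
    }

    boolean authenticate(String user, String password) {
        return password.equals(passwords.get(user));
    }
}
```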

In order to solve the software security problem, the author proposes a cultural shift based on three pillars: applied risk management, software security touchpoints and knowledge.

Pillar 1 Applied Risk Management

For the author, under the risk management name there are two different parts: the application of risk analysis at the architectural level (also known as threat modeling, security design analysis or architectural risk analysis) and the tracking and mitigation of risks as a full life-cycle activity (the author calls this approach the risk management framework – RMF).

Pillar 2 Software Security Touchpoints

Touchpoint is just a fancy word for “best practice”. Today there are best practices for the design and coding of software systems, and as security becomes a property of a software system, best practices should also be used to tackle the security problems. Here are the (seven) touchpoints and where exactly they are applied in the development process.

Security Touchpoints (figure)

The idea is to introduce the touchpoints as deeply as possible into the development process. Part 2 of the book is dedicated to the touchpoints.

Pillar 3 Knowledge

For the author, knowledge management and training should play a central role in encapsulating and sharing the security knowledge. The software security knowledge can be organized into seven knowledge catalogs:

  • principles
  • guidelines
  • rules
  • vulnerabilities
  • exploits
  • attack patterns
  • historical risks

How to build the security knowledge is treated in part 3 of the book.

Chapter 2: A risk management framework

This chapter presents in more detail a framework to mitigate risks as a full-lifecycle activity; the author calls this framework the RMF (Risk Management Framework).

The purpose of the RMF is to allow a consistent and repeatable expert-driven approach to risk management, but the main goal is to find, rank, track and understand the software security risks and how these security risks can affect critical business decisions.

Risk Management Analysis steps (figure)

The RMF consists of five steps:

  1. understand the business context. The goal of this step is to describe the business goals in order to understand the types of software risks to care about.
  2. identify the business and technical risks. Business risk identification helps to define and steer the use of particular technological methods for measuring and mitigating software risk. The technical risks should be identified and mapped (through business risks) to the business goals.
  3. synthesize and rank the risks. The ranking of the risks should take into account which business goals are the most important, which business goals are immediately threatened and how the technical risks will impact the business.
  4. define a risk mitigation strategy. Once the risks have been identified, the mitigation strategy should take into account cost, implementation time, likelihood of success and impact.
  5. carry out required fixes and validate that they are correct. This step represents the execution of the risk mitigation strategy; some metrics should be defined to measure the progress against the risks and the open risks remaining.

Even if the framework steps are presented sequentially, in practice the steps can overlap and can occur in parallel with standard software development activities. Actually, the RMF can be applied at several different levels: project level, software lifecycle phase level, requirement analysis level, use case analysis level.

Book review: Continuous Enterprise Development in Java

This is a very short review of the Continuous Enterprise Development in Java book.

The book can be easily split in two parts.

The first part of the book, from chapter 1 to chapter 4, contains general information about the difficulty of testing JEE applications, about software development cycles, the types of testing and some more technical details about the testing frameworks (JUnit and TestNG), about build tools like Maven and (JBoss) Forge, about version control (only Git deserves a paragraph) and (finally) about Arquillian, which is presented as “an innovative and highly extensible testing platform for the JVM”.

A very nice introduction is given to ShrinkWrap, which is an API to programmatically create deployable JEE archives (jars, wars, ears).
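
To give an idea of the API, here is roughly what building a test archive with ShrinkWrap looks like; GreetingService and GreetingResource are hypothetical classes standing in for the code under test.

```java
// Sketch of a ShrinkWrap micro-deployment: only what the test needs is packaged.
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;

public class Deployments {

    public static WebArchive greetingDeployment() {
        return ShrinkWrap.create(WebArchive.class, "greeting.war")
                // package only the classes the test needs, not the whole application
                .addClasses(GreetingService.class, GreetingResource.class)
                // empty beans.xml so CDI is enabled inside the test archive
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
    }
}

// Hypothetical classes under test, just to make the sketch self-contained.
class GreetingService {
    String greet(String name) { return "Hello " + name; }
}

class GreetingResource { }
```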

An entire chapter (Chapter 3) is dedicated to writing and deploying some business code and the associated Arquillian tests. Almost all the tools used in this chapter are JBoss or Red Hat tools: Forge to build the application, JBoss Application Server to deploy the application, and JBoss Developer Studio to deploy on the (Red Hat) OpenShift cloud service.
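
And this is, roughly, the shape of an Arquillian test that deploys such an archive into a container and gets the bean injected; it reuses the hypothetical GreetingService and Deployments helper from the previous sketch and assumes a container adapter is configured (e.g. via arquillian.xml).

```java
// Sketch of an Arquillian test using the JUnit 4 runner.
import static org.junit.Assert.assertEquals;

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreetingServiceIT {

    // Arquillian deploys this micro-deployment to the target container.
    @Deployment
    public static WebArchive deployment() {
        return Deployments.greetingDeployment();
    }

    // The test runs inside the container, so CDI injection works as in production.
    @Inject
    GreetingService greetingService;

    @Test
    public void greetsByName() {
        assertEquals("Hello Ana", greetingService.greet("Ana"));
    }
}
```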

The second part of the book, from chapter 4 to chapter 12, contains the implementation of the http://geekseek.continuousdev.org/app/root/show application, which is a JEE application. Every chapter treats one aspect of the application: chapter 5 treats the persistence layer, chapter 6 the integration with NoSQL databases, chapter 7 the services layer, chapter 8 the REST services, chapter 9 the security, chapter 10 the user interface and chapter 11 the live deployment.

Every chapter follows the same pattern: it starts with an introduction to the technology that will be used within the chapter, then the use cases and business requirements are presented, then follows the implementation of the requirements and, lastly, the testing of the implementation using Arquillian.

I will conclude my post with a few points about what I like and what I don’t like about this book.

What I like about this book:

  • The author clearly masters the different JEE components; the technology introduction paragraphs of each chapter of the second part of the book are very clear and easy to understand.
  • The author knows the Arquillian framework inside out; all the examples are well explained and the introduction to ShrinkWrap is very well done.
  • Some of the chapters contain very valuable external references, like PicketLink for the security or the Richardson Maturity Model for REST.

What I do not like about this book:

  • Too much marketing of the RedHat/JBoss products; I would have preferred more vendor-agnostic examples of how to use the Arquillian framework.
  • The subtitle of the book is “Testable Solutions with Arquillian”, so it is supposed to focus more on the testing part of the applications. Unfortunately, for me the book does not focus on testing the applications but rather tries to present how to continuously develop (JEE) applications, and integration testing is only one part of this.
  • Nothing is said about the integration of Arquillian with other Java (non-JEE) projects/frameworks like Spring and Guice, and how Arquillian could ease (if it can) the testing of applications using these frameworks.