Book Review: API Security In Action

This is my review of the book API Security in Action.

(My) Conclusion

This book does a very good job of covering the different mechanisms that can be used to build secure (RESTful) APIs. For each security control, the author explains what kind of attacks the respective control is able to mitigate.

The reader should be comfortable with Java and Maven because most of the code examples of the book (and there are a lot) are implemented in Java.

The diagram of all the security mechanisms presented:

Part 1: Foundations

The goal of the first part is to learn the basics of securing an API. The author starts by explaining what an API is from the user's and the developer's point of view, and which security properties any software component (APIs included) should fulfill:

  • Confidentiality – Ensuring information can only be read by its intended audience
  • Integrity – Preventing unauthorized creation, modification, or destruction of information
  • Availability – Ensuring that the legitimate users of an API can access it when they need to and are not prevented from doing so.

Even if these security properties look very theoretical, the author explains how applying specific security controls fulfills them. The following security controls are proposed:

  • Encryption of data in transit and at rest – Encryption prevents data from being read or modified in transit or at rest
  • Authentication – Authentication is the process of verifying whether a user is who they say they are.
  • Authorization/Access Control – Authorization controls who has access to what and what actions they are allowed to perform
  • Audit logging – An audit log is a record of every operation performed using an API. The purpose of an audit log is to ensure accountability
  • Rate limiting – Preserves the availability in the face of malicious or accidental DoS attacks.

These different controls should be applied in a specific order, as shown in the following figure:

Different security controls that could/should be applied for any API

To illustrate each control implementation, an example API called Natter API is used. The Natter API is written in Java 11 using the Spark Java framework. To make the examples as clear as possible to non-Java developers, they are written in a simple style, avoiding too many Java-specific idioms. Maven is used to build the code examples, and an H2 in-memory database is used for data storage.
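To give an idea of the style, here is a minimal sketch of a Spark Java route (a simplified, hypothetical endpoint, not the actual Natter code):

import static spark.Spark.*;

public class Main {
    public static void main(String... args) {
        // a minimal JSON endpoint in the style used by the Natter API examples
        get("/spaces/:spaceId", (request, response) -> {
            response.type("application/json");
            return "{\"spaceId\": " + request.params(":spaceId") + "}";
        });
    }
}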

The same API is also used to present different types of vulnerabilities (SQL Injection, XSS) and also the mitigations.

Part 2: Token-based Authentication

This part presents different techniques and approaches for token-based authentication.

Session cookie authentication

The first authentication technique presented is the “classical” HTTP Basic Authentication. HTTP Basic Authentication has a few drawbacks: there is no obvious way for the user to ask the browser to forget the password, and the dialog box presented by browsers for HTTP Basic Authentication cannot be customized.

But the most important drawback is that the user’s password is sent on every API call, increasing the chance of it accidentally being exposed by a bug in one of those operations. This is not very practical, which is why a better approach is to let the user log in once and then be trusted for a specific period of time. This is basically the definition of token-based authentication:

Token Based authentication

The first presented example of token-based authentication uses HTTP Basic Authentication for the dedicated login endpoint (step number 1 in the previous figure) and session cookies for moving the generated token between the client and the API server.

The author takes the opportunity to explain how session cookies work and what their different attributes are, but especially he presents the attacks that are possible when using session cookies. The session fixation attack and the Cross-Site Request Forgery (CSRF) attack are presented in detail, with different options to avoid or mitigate those attacks.

Tokens without cookies

The usage of session cookies is tightly linked to a specific domain and/or its sub-domains. If you want to make cross-domain requests, the CORS (Cross-Origin Resource Sharing) mechanism can be used. The last part of the chapter treating the usage of session cookies contains detailed explanations of the CORS mechanism.

Using session cookies as a mechanism to store authentication tokens has a few drawbacks, like the difficulty of sharing cookies between distinct domains, or API clients that do not understand web standards (mobile clients, IoT clients).

Another option presented is tokens without cookies. On the client side the tokens are stored using the Web Storage API. On the server side the tokens are stored in a “classical” relational database. For the authentication scheme, Bearer authentication is used (despite the fact that the Bearer authentication scheme was created in the context of the OAuth 2.0 Authorization Framework, it is rather popular in other contexts too).

In this solution, the least secure component is the storage of the authentication tokens in the DB. To mitigate the risk of the tokens being leaked, different hardening solutions are proposed (a small sketch follows the list):

  • store the hash of the tokens in the DB
  • store the HMAC of the tokens in the DB; the (API) client will then send the bearer token and the HMAC of the token
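A minimal sketch of the HMAC hardening idea, using javax.crypto (the book's actual implementation differs in its details):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenHmac {

    // Compute the HMAC tag of a token ID. Only the tag is stored in the DB, so a
    // leaked database does not allow an attacker to forge valid bearer tokens.
    static String hmacOfToken(byte[] secretKey, String tokenId) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        byte[] tag = mac.doFinal(tokenId.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(tag);
    }
}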

This authentication scheme is not vulnerable to session fixation attacks or CSRF attacks (which was the case for the previous scheme), but an XSS vulnerability on the client side that is using the Web Storage API would defeat any kind of mitigation control put in place.

Self-contained tokens and JWTs

The last chapter of this (second) part of the book treats self-contained, or stateless, tokens. Rather than storing the token state in the database as in the previous cases, you can instead encode that state directly into the token ID and send it to the client.

The most used client-side tokens are JSON Web Tokens (JWT). The main features of a JWT are:

  • A standard header format that contains metadata about the JWT, such as which MAC or encryption algorithm was used.
  • A set of standard claims that can be used in the JSON content of the JWT, with defined meanings, such as exp to indicate the expiry time and sub for the subject.
  • A wide range of algorithms for authentication and encryption, as well as digital signatures and public key encryption.

A JWT token can have three parts:

  • Header – indicates the algorithm used to produce the JWT and the key (or an ID of the key) used to authenticate it. Some of the header values:
    • alg: Identifies which algorithm is used to generate the signature
    • kid: Key ID; as the key ID is just a string identifier, it can be safely looked up in a server-side set of keys.
    • jwk: The full key. This is not a safe header to use; trusting the sender to give you the key to verify a message loses all security properties.
    • jku: A URL to retrieve the full key. This is not a safe header to use. The intention of this header is that the recipient can retrieve the key from an HTTPS endpoint, rather than including it directly in the message, to save space.
  • Payload/Claims – pieces of information asserted about a subject. The list of standard claims:
    • iss (issuer): Issuer of the JWT
    • sub (subject): Subject of the JWT (the user)
    • aud (audience): Recipient for which the JWT is intended
    • exp (expiration time): Time after which the JWT expires
    • nbf (not before time): Time before which the JWT must not be accepted for processing
    • iat (issued at time): Time at which the JWT was issued; can be used to determine age of the JWT
    • jti (JWT ID): Unique identifier; can be used to prevent the JWT from being replayed (allows a token to be used only once)
  • Signature – Securely validates the token. The signature is calculated by encoding the header and payload using Base64url Encoding and concatenating the two together with a period separator. That string is then run through the cryptographic algorithm specified in the header.
Example of JWT token
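To make the three parts concrete, here is a minimal sketch that builds an HS256 JWT by hand (illustration only, with a hard-coded key; in practice you would use a library such as Nimbus JOSE+JWT):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtSketch {
    public static void main(String... args) throws Exception {
        String header = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String claims = "{\"sub\":\"alice\",\"exp\":1735689600}";

        // Base64url-encode the header and the payload and join them with a period
        Base64.Encoder encoder = Base64.getUrlEncoder().withoutPadding();
        String input = encoder.encodeToString(header.getBytes(StandardCharsets.UTF_8))
                + "." + encoder.encodeToString(claims.getBytes(StandardCharsets.UTF_8));

        // the third part is the MAC computed over the first two parts
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(
                "a-demo-key-of-at-least-32-bytes!".getBytes(StandardCharsets.UTF_8),
                "HmacSHA256"));
        String jwt = input + "."
                + encoder.encodeToString(mac.doFinal(input.getBytes(StandardCharsets.UTF_8)));
        System.out.println(jwt);
    }
}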

Even if the JWT could be used as a self-contained token by adding the algorithm and the signing key into the header, this is a very bad idea from the security point of view because you should never trust a token signed by an external entity. A better solution is to store the algorithm as metadata associated with a key on the server.

Storing the algorithm and the signing key on the server side also helps to implement token revocation. For example, changing the signing key revokes all the tokens signed with that key. A more selective way to revoke tokens would be to add some token metadata to the DB, like the token creation date, and use this metadata as revocation criteria.

Part 3: Authorization

OAuth2 and OpenID Connect

A way to implement authorization using JWT tokens is scoped tokens. A scoped token limits the operations that can be performed with that token; the set of allowed operations is known as the scope of the token. The scope is typically represented as one or more string labels stored as an attribute of the token. Because there may be more than one scope label associated with a token, they are often referred to collectively as scopes, and the scopes of a token define the scope of access it grants.

Scopes allow a user to delegate part of their authority to a third-party app, restricting how much access they grant. This type of control is called discretionary access control (DAC) because users can delegate some of their permissions to other users.

Another type of control is mandatory access control (MAC); in this case the user permissions are set and enforced by a central authority and cannot be granted by the users themselves.

OAuth2 is a standard for implementing DAC. OAuth uses the following specific terms:

  • The authorization server (AS) authenticates the user and issues tokens to clients.
  • The user is also known as the resource owner (RO), because it’s typically their resources that the third-party app is trying to access.
  • The third-party app or service is known as the client.
  • The API that hosts the user’s resources is known as the resource server (RS).

To access an API using OAuth2, an app must first obtain an access token from the Authorization Server (AS). The app tells the AS what scope of access it requires. The AS verifies that the user consents to this access and issues an access token to the app. The app can then use the access token to access the API on the user’s behalf.
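Once the app has the access token, calling the API on the user's behalf is just a matter of attaching it as a Bearer credential; a hypothetical sketch using the Java 11 HTTP client (the endpoint URL is made up):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OAuthCall {
    public static void main(String... args) throws Exception {
        String accessToken = "..."; // obtained from the AS after user consent
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/spaces"))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}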

One of the advantages of OAuth2 is the ability to centralize authentication of users at the AS, providing a single sign-on (SSO) experience. When the user’s client needs to access an API, it redirects the user to the AS authorization endpoint to get an access token. At this point the AS authenticates the user and asks for consent for the client to be allowed access.

OAuth can provide basic SSO functionality, but its primary focus is on delegated third-party access to APIs rather than user identity or session management. The OpenID Connect (OIDC) suite of standards extends OAuth2 with several features:

  • A standard way to retrieve identity information about a user, such as their name, email address, postal address, and telephone number.
  • A way for the client to request that the user is authenticated even if they have an existing session, and to ask for them to be authenticated in a particular way, such as with two-factor authentication.
  • Extensions for session management and logout, allowing clients to be notified when a user logs out of their session at the AS, enabling the user to log out of all clients at once.

Identity-based access control

In this chapter the author introduces the notions of users, groups, RBAC (Role-Based Access Control) and ABAC (Attribute-Based Access Control). For each type of access control, the author proposes an ad-hoc implementation (no specific framework is used) for the Natter API (the API used throughout the book to present the different security controls).

Capability-based security and macaroons

A capability is an unforgeable reference to an object or resource together with a set of permissions to access that resource. Compared with the more dominant identity-based access control techniques like RBAC and ABAC, capabilities differ in several ways:

  • Access to resources is via unforgeable references to those objects that also grant authority to access that resource. In an identity-based system, anybody can attempt to access a resource, but they might be denied access depending on who they are. In a capability-based system, it is impossible to send a request to a resource if you do not have a capability to access it.
  • Capabilities provide fine-grained access to individual resources.
  • The ability to easily share capabilities can make it harder to determine who has access to which resources via your API.
  • Some capability-based systems do not support revoking capabilities after they have been granted. When revocation is supported, revoking a widely shared capability may deny access to more people than was intended.

The way to use capability-based security in the context of a REST API is via capability URIs. A capability URI (or capability URL) is a URI that both identifies a resource and conveys a set of permissions to access that resource. Typically, a capability URI encodes an unguessable token into some part of the URI structure. To create a capability URI, you can combine a normal URI with a security token.
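A minimal sketch of creating such a URI (the path and parameter names are illustrative):

import java.net.URI;
import java.security.SecureRandom;
import java.util.Base64;

public class CapabilityUri {
    public static void main(String... args) {
        // an unguessable random token is what makes the reference unforgeable
        byte[] raw = new byte[20];
        new SecureRandom().nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        URI capability = URI.create(
                "https://api.example.com/spaces/42/messages?access_token=" + token);
        System.out.println(capability);
    }
}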

The author adds capability URIs to the Natter API, with the token encoded into a query parameter because this is simple to implement. To mitigate the threat of tokens leaking in log files, short-lived tokens are used.

But putting the token representing the capability in the URI path or query parameters is less than ideal because these can leak in audit logs, Referer headers, and through the browser history. These risks are limited when capability URIs are used in an API but can be a real problem when these URIs are directly exposed to users in a web browser client.

One approach to this problem is to put the token in a part of the URI that is not usually sent to the server or included in Referer headers.

Capability URIs can also be mixed with identity for handling authentication and authorization. There are a few ways to communicate identity in a capability-based system:

  • Associate a username and other identity claims with each capability token. The permissions in the token are still what grants access, but the token additionally authenticates identity claims about the user that can be used for audit logging or additional access checks. The major downside of this approach is that sharing a capability URI lets the recipient impersonate you whenever they make calls to the API using that capability.
  • Use a traditional authentication mechanism, such as a session cookie, to identify the user in addition to requiring a capability token. The cookie would no longer be used to authorize API calls but would instead be used to identify the user for audit logging or for additional checks. Because the cookie is no longer used for access control, it is less sensitive and so can be a long-lived persistent cookie, reducing the need for the user to frequently log in.

The last part of the chapter is about macaroons, a technology invented by Google (https://research.google/pubs/pub41892/). Macaroons extend capability-based security by adding more granularity.

A macaroon is a type of cryptographic token that can be used to represent capabilities and other authorization grants. Anybody can append new caveats to a macaroon that restrict how it can be used.

For example, it is possible to restrict a macaroon so that it allows only read access to messages created after a specific date. These appended restrictions are called caveats (see the sketch below).
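Here is a simplified sketch of the idea behind caveats (plain HMAC chaining; real macaroon libraries add encoding and verification details on top of this):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class MacaroonSketch {

    static byte[] hmac(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String... args) throws Exception {
        byte[] rootKey = "server-side-secret-key".getBytes(StandardCharsets.UTF_8);
        // the root signature covers the token identifier
        byte[] sig = hmac(rootKey, "token-id-1234");
        // anybody can append caveats: each caveat chains the previous signature,
        // so restrictions can be added but never removed
        sig = hmac(sig, "method = GET");
        sig = hmac(sig, "time < 2024-01-01T00:00:00Z");
        // only the server, which knows the root key, can replay this chain to verify
    }
}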

Part 4: Microservice APIs in Kubernetes

Microservice APIs in K8S

This chapter is an introduction to the Kubernetes orchestrator. The introduction is very basic, but if you are interested in something more complete then Kubernetes in Action, Second Edition is the best option. The author also deploys on K8s an (H2) database, the Natter API (used as a demo throughout the entire book) and a new API called the Link-Preview service; Minikube is used as the K8s “cluster”.

Having an application with multiple components helps the author show how to secure the communication between these components and how to secure incoming (outside) requests. The presented solution for securing the communication is based on the service mesh idea and on K8s network policies.

A service mesh works by installing lightweight proxies as sidecar containers into every pod in your network. These proxies intercept all network requests coming into the pod (acting as a reverse proxy) and all requests going out of the pod.

Securing service-to-service APIs

The goal of this chapter is to apply the authentication and authorization techniques already presented in previous chapters, but in the context of service-to-service APIs. For authentication, API keys and JWTs are presented. To complement the authentication scheme, mutual TLS authentication is also used.

For authorization, OAuth2 is presented. A more flexible alternative is to create and use service accounts, which act like regular user accounts but are intended for use by services. Service accounts should be protected with strong authentication mechanisms because they often have elevated privileges compared to normal accounts.

The last part of the chapter is about managing service credentials in the context of K8s. Kubernetes includes a simple method for distributing credentials to services, but it is not very secure: the secrets are only Base64 encoded and can be leaked by a cluster administrator.
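Base64 is an encoding, not encryption, so anyone who can read the Secret object can trivially recover the value:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// a Kubernetes Secret value is recoverable from its Base64 form by anyone who can read it
String value = new String(
        Base64.getDecoder().decode("cGFzc3dvcmQxMjM="), StandardCharsets.UTF_8);
// value is now "password123"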

Secret vaults and key management services provide better security, but they need an initial credential to access them. Using secret vaults has the following benefits:

  • The storage of the secrets is encrypted by default, providing better protection of secret data at rest.
  • The secret management service can automatically generate and update secrets regularly (secret rotation).
  • Fine-grained access controls can be applied, ensuring that services only have access to the credentials they need.
  • The access to secrets can be logged, leaving an audit trail.

Part 5: APIs for the Internet of Things

Securing IoT communications

This chapter treats how different IoT devices can communicate securely with an API running on a classical system. Compared with classical computer systems, IoT devices have a few constraints:

  • An IoT device has significantly reduced CPU power, memory, connectivity, or energy availability compared to a server or traditional API client machine.
  • For efficiency, devices often use compact binary formats and low-level networking based on UDP rather than high-level TCP-based protocols such as HTTP and TLS.
  • Some commonly used cryptographic algorithms are difficult to implement securely or efficiently on devices due to hardware constraints or threats from physical attackers.

To cope with these constraints, new protocols have been created based on the existing protocols and standards:

  • Datagram Transport Layer Security (DTLS). DTLS is a version of TLS designed to work with connectionless UDP-based protocols rather than TCP based ones. It provides the same protections as TLS, except that packets may be reordered or replayed without detection.
  • JOSE (JSON Object Signing and Encryption) standards. For IoT applications, JSON is often replaced by more efficient binary encodings that make better use of constrained memory and network bandwidth and that have compact software implementations.
  • COSE (CBOR Object Signing and Encryption) provides encryption and digital signature capabilities for CBOR and is loosely based on JOSE.

When devices need to use public key cryptography, key distribution becomes a complex problem. This problem can be solved by generating random keys during the manufacturing of the IoT device (device-specific keys are derived from a master key and some device-specific information) or through the use of key distribution servers.
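A sketch of the master-key derivation idea (a simplified, HKDF-like construction shown for illustration; a real deployment would use a standardized KDF):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class DeviceKeys {

    // Derive a per-device key from the factory master key and the device identifier;
    // the server can re-derive the same key on demand instead of storing one key per device.
    static byte[] deriveDeviceKey(byte[] masterKey, String deviceId) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(masterKey, "HmacSHA256"));
        return mac.doFinal(("device-key:" + deviceId).getBytes(StandardCharsets.UTF_8));
    }
}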

Securing IoT APIs

The last chapter of the book focuses on how to secure access to APIs in Internet of Things (IoT) environments, meaning APIs provided by the devices or cloud APIs consumed by the devices themselves.

For the authentication part, the IoT devices could be identified using credentials associated with a device profile. These credentials could be an encrypted pre-shared key or a certificate containing a public key for the device.

For the authorization part, the IoT devices could use OAuth2 for IoT, which is a new specification that adapts the OAuth2 framework for constrained environments.

How to write a (Java) Burp Suite Professional extension for Tabnabbing attack

Context and goal

The goal of this ticket is to explain how to create an extension for Burp Suite Professional, taking as an implementation example the “Reverse Tabnabbing” attack.

“Reverse Tabnabbing” is an attack where an (evil) page linked from the (victim) target page is able to rewrite that page, for example by replacing it with a phishing site. The cause of this attack is the capacity of a newly opened page to act on the parent page’s content or location.

For more details about the attack itself you can check the OWASP Reverse Tabnabbing page.

The attack vectors are the HTML links and the JavaScript window.open function, so to mitigate the vulnerability you have to add the attribute rel="noopener noreferrer" to all the HTML links and, for JavaScript, add the values noopener,noreferrer to the windowFeatures parameter of the window.open function. For more details about the mitigation please check the OWASP HTML Security Check.

Basic steps for (any Burp) extension writing

The first step is to create an empty (Java) project and add the Burp Extensibility API to your classpath (the Javadoc of the API can be found here). If you are using Maven, the easiest way is to add this dependency to your pom.xml file:

<dependency>
    <groupId>net.portswigger.burp.extender</groupId>
    <artifactId>burp-extender-api</artifactId>
    <version>LATEST</version>
</dependency>

Then the extension should contain a class called BurpExtender (in a package called burp) that implements the IBurpExtender interface.

The IBurpExtender interface has only a single method (registerExtenderCallbacks) that is invoked by Burp when the extension is loaded.

For more details about basics of extension writing you can read Writing your first Burp Suite extension from the PortSwigger website.

Extend the (Burp) scanner capabilities

In order to find the Tabnabbing vulnerability we must scan/parse the HTML responses (coming from the server), so the extension must extend the Burp scanner capabilities.

The interface that must be implemented is the IScannerCheck interface. The BurpExtender class (from the previous paragraph) must register the custom scanner, so the BurpExtender code will look something like this (where ScannerCheck is the class that implements the IScannerCheck interface):

public class BurpExtender implements IBurpExtender {

    @Override
    public void registerExtenderCallbacks(
            final IBurpExtenderCallbacks iBurpExtenderCallbacks) {

        // set our extension name
        iBurpExtenderCallbacks.setExtensionName("(Reverse) Tabnabbing checks.");

        // register the custom scanner
        iBurpExtenderCallbacks.registerScannerCheck(
                new ScannerCheck(iBurpExtenderCallbacks.getHelpers()));
    }
}

Let’s look closer at the methods offered by the IScannerCheck interface:

  • consolidateDuplicateIssues – this method is called by the Burp engine to decide whether the issues found for the same URL are duplicates.
  • doActiveScan – this method is called by the scanner for each insertion point scanned. In the context of the Tabnabbing extension this method will not be implemented.
  • doPassiveScan – this method is invoked for each request/response pair that is scanned. The extension will implement this method to find the Tabnabbing vulnerability. The complete signature of the method is the following one: List<IScanIssue> doPassiveScan(IHttpRequestResponse baseRequestResponse). The method receives as parameter an IHttpRequestResponse instance which contains all the information about the HTTP request and HTTP response. In the context of the Tabnabbing extension we will need to check the HTTP response (see the sketch after this list).
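As an illustration, a simplified version of the passive check could look like the snippet below (helpers is the IExtensionHelpers instance received in the constructor, and TabnabbingIssue is a hypothetical IScanIssue implementation, not shown; the real extension parses the HTML properly instead of matching raw strings):

@Override
public List<IScanIssue> doPassiveScan(IHttpRequestResponse baseRequestResponse) {
    // skip everything before the HTTP body
    IResponseInfo responseInfo = helpers.analyzeResponse(baseRequestResponse.getResponse());
    String body = new String(baseRequestResponse.getResponse())
            .substring(responseInfo.getBodyOffset());

    // a link opening a new tab without the "noopener" relationship is a candidate finding
    if (body.contains("target=\"_blank\"") && !body.contains("noopener")) {
        return Collections.singletonList(new TabnabbingIssue(baseRequestResponse));
    }
    return null;
}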

Parse the HTTP response and check for Tabnabbing vulnerability

As seen in the previous chapter, the Burp runtime gives access to the HTTP requests and responses. In our case we need to access the HTTP response using the method IHttpRequestResponse#getResponse. This method returns a byte array (byte[]) containing the full HTTP response, including the HTML body.

In order to find the Tabnabbing vulnerability we must parse the HTML contained in the HTTP response. Unfortunately, there is nothing in the API offered by Burp for parsing HTML.

The most efficient solution that I found to parse HTML was to create a few classes and interfaces implementing the observer pattern (see the next class diagram):


The most important elements are shown in the class diagram above.

The following sequence diagram tries to explain how the classes interact in order to find the Tabnabbing vulnerability.

Final words

If you want to download the code or try the extension, you can find all you need in the GitHub repository: tabnabbing-burp-extension.

If you are interested in some metrics about the code, you can check the sonarcloud.io tabnabbing project.


How to programmatically set-up a (HTTP) proxy for a Selenium test

Context

In the context of a (Java) Selenium test, I needed to set up an HTTP proxy at the browser level. What I wanted to achieve was exactly what is shown in the next picture, but programmatically. In this specific case the proxy was the Burp Pro proxy, but the same workflow can be applied to any kind of (HTTP) proxy.

Solution

I know this is not really rocket science, but I didn’t find any clear explanation elsewhere about how to do it. In my code the proxy URL is injected via a (Java) system property called “proxy.url“.

And the code looks like this:

import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;

WebDriver driver;
String proxyUrl = System.getProperty("proxy.url");
if (proxyUrl != null) {
    // route the browser's HTTP traffic through the given host:port
    Proxy proxy = new Proxy();
    proxy.setHttpProxy(proxyUrl);

    FirefoxOptions options = new FirefoxOptions();
    options.setProxy(proxy);

    driver = new FirefoxDriver(options);
} else {
    driver = new FirefoxDriver();
}
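The proxy URL can then be passed to the JVM running the tests, for example with -Dproxy.url=localhost:8080.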

(My) CSSLP Notes – Secure Software Concepts

Note: These notes were strongly inspired by the following book: CSSLP Certification All-in-One.

General Security Concepts

Basics

The security of IT systems can be defined using the following attributes:

  • confidentiality – how the system prevents the disclosure of information.
  • integrity – how the system protects data from unauthorized modification.
  • availability – access to the system by authorized personnel.
  • authentication – process of determining the identity of a user. Three methods can be used to authenticate a user:
    • something you know (ex: password, pin code)
    • something you have (ex: token, card)
    • something you are (ex: biometrics mechanisms)
  • authorization – process of applying access control rules to a user process to determine if a particular user process can access an object.
  • accounting (auditing) – records historical events on a system.
  • non-repudiation – preventing a subject from denying a previous action with an object in a system.

System principles

  • session management – design and implementation of controls to ensure that the communications channels are secured from unauthorized access and disruption of communications.
  • exception management – the process of handling any errors that could appear during the system execution.
  • configuration management – identification and management of the configuration items (initialization parameters, connection strings, paths, keys).

Secure design principles

  • good enough security – there is a trade off between security and other aspects associated with a system. The level of required security must be determined at design time.
  • least privilege – a subject should have only the necessary rights and privileges to perform a specific task.
  • separation of duties – for any given task, more than one individual needs to be involved.
  • defense in depth (layered security) – apply multiple dissimilar security defenses.
  • fail-safe – when a system experiences a failure, it should fail to a safe state; all the attributes associated with the system security (confidentiality, integrity, availability) should be appropriately maintained.
  • economy of mechanism – keep the design of the system simple and less complex; reduce the number of dependencies and/or services that the system needs in order to operate.
  • complete mediation – checking permissions each time a subject requests access to an object.
  • open design – the design is not a secret, the implementation of the safeguard is. (ex: cryptography algorithms are open but the keys used are secret)
  • least common mechanism – minimize the amount of mechanism common to more than one user and depended on by all users. Every shared mechanism (especially one involving shared variables) represents a potential information path between users and must be designed with great care to be sure it does not unintentionally compromise security.
  • psychological acceptability – accessibility to resources should not be inhibited by security mechanisms. If security mechanisms hinder the usability or accessibility of resources, then users may opt to turn off those mechanisms.
  • weakest link – attackers are more likely to attack a weak spot in a software system than to penetrate a heavily fortified component.
  • leverage existing components – component reuse has many advantages, including increased efficiency and security. From the security point of view, component reuse reduces the attack surface.
  • single point of failure – a system design should not be susceptible to a single point of failure.

Security Models

Access Control Models

Access controls define what actions a subject can perform on specific objects.

  • Bell-LaPadula confidentiality model – It is focused on maintaining the confidentiality of objects. Bell-LaPadula operates by observing two rules: the Simple Security Property and the * Security Property.
    • The Simple Security Property states that there is “no read up”: a subject at a specific classification level cannot read an object at a higher classification level.
    • The * Security Property is “no write down”: a subject at a higher classification level cannot write to a lower classification level.
  • Take-Grant – specifies the rights that a subject can transfer to or from another subject or object. The model is based on a representation of the controls in the form of directed graphs, with the vertices being the subjects and the objects. The edges between them represent the rights between the subjects and objects. The representation of rights takes the form of {t (take), g (grant), r (read), w (write)}.
  • Role-based Access Control – users are assigned a set of roles they may perform. The roles are associated with the access permissions necessary to perform the tasks.
  • MAC (Mandatory Access Control) Model – in MAC systems the owner or subject cannot determine whether access is to be granted to another subject; it is the job of the operating system to decide.
  • DAC (Discretionary Access Control) Model – in DAC systems the owner of an object can decide which other subjects may have access to the object and what specific access they may have.

Integrity Models

  • Biba integrity model – (sometimes referred to as Bell-LaPadula upside down) was the first formal integrity model. Biba is the model of choice when integrity protection is vital. The Biba model has two primary rules: the Simple Integrity Axiom and the * Integrity Axiom.
    • The Simple Integrity Axiom is “no read down”: a subject at a specific classification level cannot read data at a lower classification. This protects integrity by preventing bad information from moving up from lower integrity levels.
    • The * Integrity Axiom is “no write up”: a subject at a specific classification level cannot write to data at a higher classification. This protects integrity by preventing bad information from moving up to higher integrity levels.
  • Clark-Wilson – an informal model that protects integrity by requiring subjects to access objects via programs. Because the programs have specific limitations on what they can and cannot do to objects, Clark-Wilson effectively limits the capabilities of the subject. Clark-Wilson uses two primary concepts to ensure that the security policy is enforced: well-formed transactions and separation of duties.

Information Flow Models

Information in a system must be protected when at rest, in transit and in use.

  • The Chinese Wall model – designed to avoid conflicts of interest by prohibiting one person, such as a consultant, from accessing multiple conflict of interest categories (CoIs). The Chinese Wall model requires that CoIs be identified so that once a consultant gains access to one CoI, they cannot read or write to an opposing CoI.


Risk Management

Vocabulary

  • risk – possibility of suffering harm or loss
  • residual risk – risk that remains after a control was added to mitigate the initial risk.
  • total risk – the sum of all risks associated with an asset.
  • asset – resource an organization needs to conduct its business.
  • threat – circumstance or event with the potential to cause harm to an asset.
  • vulnerability – any characteristic of an asset that can be exploited by a threat to cause harm.
  • attack – attempting to use a vulnerability.
  • impact – loss resulting when a threat exploits a vulnerability.
  • mitigate – action taken to reduce the likelihood of a threat.
  • control – measure taken to detect, prevent or mitigate the risk associated with a threat.
  • risk assessment – process of identifying risks and mitigating actions.
  • qualitative risk assessment – subjectively determining the impact of an event that affects assets.
  • quantitative risk assessment – objectively determining the impact of an event that affects assets.
  • single loss expectancy (SLE) – linked to the quantitative risk assessment, it represents the monetary loss or impact of each occurrence of a threat.
    • SLE = asset value * exposure factor
  • exposure factor – linked to the quantitative risk assessment, is a measure of the magnitude of a loss.
  • annualized rate of occurrence (ARO) – linked to the quantitative risk assessment, it is the frequency with which an event is expected to occur on an annualized basis.
    • ARO = number of events / number of years
  • annualized loss expectancy (ALE) – linked to the quantitative risk assessment, it represents how much an event is expected to cost per year (a worked example follows the list).
    • ALE = SLE * ARO
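As a quick worked example (with made-up numbers): for an asset worth $100,000 with an exposure factor of 0.3, SLE = $100,000 * 0.3 = $30,000; if the event is expected to occur twice every four years, ARO = 2 / 4 = 0.5, so ALE = $30,000 * 0.5 = $15,000 per year.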

Types of risks:

  • Business Risks:
    • fraud
    • regulatory
    • treasury management
    • revenue management
    • contract management
  • Technology Risks:
    • security
    • privacy
    • change management

Types of controls

Controls can be classified on types of actions they perform. Three classes of controls exist:

  • administrative
  • technical
  • physical

For each of these classes, there are four types of controls:

  • preventive (deterrent) – used to prevent the vulnerability from being exploited.
  • detective – used to detect the presence of an attack.
  • corrective (recovery) – correct a system after a vulnerability is exploited and an impact has occurred; backups are a common form of corrective controls.
  • compensation – designed to act when a primary set of controls has failed.

Risk management models

General risk management model

The steps contained in a general risk management model:

  1. Asset identification – identify and classify all the assets, systems and processes that need to be protected.
  2. Threat assessment – identify the threats and vulnerabilities associated with each asset.
  3. Impact determination and quantification
  4. Control design and evaluation – determine which controls to put in place to mitigate the risks.
  5. Residual risk management – evaluate residual risks to identify where additional controls are needed.

Risk management model proposed by Software Engineering Institute

SEI model steps :

  1. Identify – enumerate potential risks.
  2. Analyze – convert the risk data gathered into information that can be used to make decisions.
  3. Plan – decide the actions to take to mitigate the risks.
  4. Track – monitor the risks and mitigations plans.
  5. Control – make corrections for deviations from the risk mitigation plan.

Security Policies and Regulations

One of the most difficult aspects of prosecution of computer crimes is attribution. Meeting the burden of proof requirement in criminal proceedings, beyond a reasonable doubt, can be difficult given an attacker can often spoof the source of the crime or can leverage different systems under someone else’s control.

Intellectual property

Intellectual property is protected by the U.S law under one of four classifications:

  • patents – Patents provide a monopoly to the patent holder on the right to use, make, or sell an invention for a period of time in exchange for the patent holder’s making the invention public.
  • trademarks – Trademarks are associated with marketing: the purpose is to allow for the creation of a brand that distinguishes the source of products or services.
  • copyrights – represent a type of intellectual property that protects the form of expression in artistic, musical, or literary works, and is typically denoted by the circle-C symbol. Software is typically covered by copyright as if it were a literary work. Two important limitations on the exclusivity of the copyright holder’s monopoly exist: the doctrines of first sale and fair use. The first sale doctrine allows a legitimate purchaser of copyrighted material to sell it to another person. If the purchasers of a CD later decide that they no longer care to own the CD, the first sale doctrine gives them the legal right to sell the copyrighted material even though they are not the copyright holders.
  • trade secrets – business-proprietary information that is important to an organization’s ability to compete. Software source code or firmware code are examples of computer-related objects that an organization may protect as trade secrets.

Privacy and data protection laws

Privacy and data protection laws are enacted to protect information collected and maintained on individuals from unauthorized disclosure or misuse.

Several important pieces of privacy and data protection legislation include :

  • U.S. Federal Privacy Act of 1974 – protects records and information maintained by U.S. government agencies about U.S. citizens and lawful permanent residents.
  •  U.S. Health Insurance Portability and Accountability Act (HIPAA) of 1996 – seeks to guard protected health information from unauthorized use or disclosure.
  • Payment Card Industry Data Security Standard (PCI-DSS) – the goal is to ensure better protection of card holder data through mandating security policy, security devices, control techniques and monitoring of systems and networks.
  • U.S. Gramm-Leach-Bliley Financial Services Modernization Act (GLBA) – requires financial institutions to protect the confidentiality and integrity of consumer financial information.
  • U.S. Sarbanes-Oxley Act of 2002 (SOX) – the primary goal of SOX is to ensure adequate financial disclosure and financial auditor independence.

Secure Software Architecture – Security Frameworks

  • COBIT (Control Objectives for Information and Related Technology) – assists management in bridging the gap between control requirements, technological issues and business risks.
  • COSO (Committee of Sponsoring Organizations of the Treadway Commission) – COSO has established an Enterprise Risk Management – Integrated Framework against which companies and organizations may assess their control systems.
  • ITIL (Information Technology Infrastructure Library) – describes a set of practices focusing on aligning IT services with business needs.
  • SABSA (Sherwood Applied Business Security Architecture) – framework and methodology for developing risk-driven enterprise information security architecture.
  • CMMI (Capability Maturity Model Integration) – process metric model that rates the process maturity of an organization on a 1 to 5 scale.
  • OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation) – suite of tools, techniques and methods for risk-based information security assessment.


Software Development Methodologies

Secure Development Lifecycle Components

  • software team awareness and education – all team members should have appropriate training. The key element of team awareness and education is to ensure that all the members are properly equipped with the correct knowledge.
  • gates and security requirements – the term gate signifies a condition that one must pass through. To pass a security gate, a review of the appropriate security requirements is conducted.
  • threat modeling – design technique used to communicate information associated with a threat throughout the development team (for more info you can check my other ticket: threat modeling for mere mortals).
  • fuzzing – a test technique where the tester applies a series of inputs to an interface in an automated fashion and examines the output for undesired behaviors.
  • security reviews – process to ensure that the security-related steps are being carried out and not being short-circuited.

Software Development Models

  • waterfall model – is a linear application development model that uses rigid phases; when one phase ends, the next begins.
  • spiral model – repeats steps of a project, starting with modest goals, and expanding outwards in ever wider spirals (called rounds). Each round of the spiral constitutes a project, and each round may follow traditional software development methodology such as Modified Waterfall. A risk analysis is performed each round.
  • prototype model – a working model of software with some limited functionality. Prototyping is used to allow the users to evaluate developer proposals and try them out before implementation.
  • agile model
    • Scrum – uses small teams of developers, called the Scrum Team. They are supported by a Scrum Master, a senior member of the organization who acts like a coach for the team. Finally, the Product Owner is the voice of the business unit.
    • Extreme Programming (XP) – method that uses pairs of programmers who work off a detailed specification.

Microsoft Security Development Lifecycle

SDL is a software development process designed to enable development teams to build more secure software and address security compliance requirements.

SDL is built around the following three elements:

  • (security) by design – security thinking is incorporated as part of the design process.
  • (security) by default – the default configuration of the software is by design as secure as possible.
  • (security) in deployment – security and privacy elements are properly understood and managed through the deployment process.

SDL components:

  • training – security training for all personnel, targeted to their responsibilities associated with the development effort.
  • requirements
    • establishment of the security and privacy requirements for the software.
    • creation of quality gates and bug bars. Defining minimum acceptable levels of security and privacy quality at the start helps a team understand risks associated with security issues, identify and fix security bugs during development, and apply the standards throughout the entire project. Setting a meaningful bug bar involves clearly defining the severity thresholds of security vulnerabilities (for example, no known vulnerabilities in the application with a “critical” or “important” rating at time of release) and never relaxing it once it’s been set.
    • development of security and privacy risk assessment. Examining software design based on costs and regulatory requirements helps a team identify which portions of a project will require threat modeling and security design reviews before release and determine the Privacy Impact Rating of a feature, product, or service.
  • design – establish design requirements, perform attack surface analysis and reduction, and use threat modeling.
  • implementation – application of secure coding practices and the use of static program checkers to find common errors.
  • verification – perform dynamic analysis (tools that monitor application behavior for memory corruption, user privilege issues, and other critical security problems), fuzz testing and conduct attack surface review.
  • release – conduct final security review and create an incident response plan.
  • response – execute incident response plan.

A Java implementation of CSRF mitigation using “double submit cookie” pattern

Goal of this article

The goal of this article is to present an implementation of the “double submit cookie” pattern used to mitigate the Cross Site Request Forgery (CSRF) attacks. The proposed implementation is a Java filter plus a few auxiliary classes and it is (obviously) suitable for projects using the Java language as back-end technology.

Definition of CSRF and possible mitigations

In the case of a CSRF attack, the browser is tricked into making unauthorized requests on the victim’s behalf, without the victim’s knowledge. The general attack scenario contains the following steps:

  1. the victim connects to the vulnerable website, so he has a real, authenticated session.
  2. the hacker forces the victim (usually using a spam/phishing email) to navigate to another (evil) website containing the CSRF attack.
  3. when the victim’s browser executes the (evil) website page, the browser will execute a (fraudulent) request to the vulnerable website using the user’s authenticated session. The user is not aware at all that navigating on the (evil) website triggers an action on the vulnerable website.

For deeper explanations I strongly recommend reading chapter 5 of the Iron-Clad Java: Building Secure Applications book and/or the OWASP Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet.

Definition of “double submit cookie” pattern

When a user authenticates to a site, the site should generate a (cryptographically strong) pseudo-random value and set it as a cookie on the user’s machine, separate from the session ID. The server does not have to save this value in any way; that’s why this pattern is sometimes also called the Stateless CSRF Defense.

The site then requires that every transaction request include this random value as a hidden form value (or other request parameter). A cross origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy.

In the case of this mitigation technique the job of the client is very simple: just retrieve the CSRF cookie from the response and add it as a special header to all subsequent requests:

Client workflow

The job of the server is a little more complex: create the CSRF cookie and, for each request asking for a protected resource, check that the CSRF cookie and the CSRF header of the request match:

Server workflow

Note that some JavaScript frameworks like AngularJS implement the client workflow out of the box; see Cross-Site Request Forgery (XSRF) Protection.
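For illustration, the core of the server-side check boils down to something like this (the names are illustrative; the real logic lives in the filter presented below):

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;

static void checkCsrf(HttpServletRequest request) {
    String headerToken = request.getHeader("X-XSRF-TOKEN");
    String cookieToken = null;
    if (request.getCookies() != null) {
        for (Cookie cookie : request.getCookies()) {
            if ("XSRF-TOKEN".equals(cookie.getName())) {
                cookieToken = cookie.getValue();
            }
        }
    }
    // the same-origin policy prevents an attacker from reading the cookie,
    // so only the legitimate client can echo its value back in the header
    if (cookieToken == null || !cookieToken.equals(headerToken)) {
        throw new SecurityException("CSRF cookie and header token mismatch");
    }
}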

Java implementation of “double submit cookie” pattern

The proposed implementation is in the form of a (Java) servlet filter and can be found here: GenericCSRFFilter GitHub.

In order to use the filter, you must define it in your web.xml file:

<filter>
   <filter-name>CSRFFilter</filter-name>
   <filter-class>com.github.adriancitu.csrf.GenericCSRFStatelessFilter</filter-class>
</filter>

<filter-mapping>
   <filter-name>CSRFFilter</filter-name>
   <url-pattern>/*</url-pattern>
</filter-mapping>


The filter can have 2 optional initialization parameters: csrfCookieName, representing the name of the cookie that will store the CSRF token, and csrfHeaderName, representing the name of the HTTP header that will also contain the CSRF token.

The default values for these parameters are “XSRF-TOKEN” for the csrfCookieName and “X-XSRF-TOKEN” for the csrfHeaderName, both of them being the default values that AngularJS expects in order to implement the CSRF protection.

By default the filter has the following features:

  • works with AngularJS.
  • the CSRF token will be a random UUID.
  • all the resources that are NOT accessed through a GET request method will be CSRF protected.
  • the CSRF cookie is replaced after each non-GET request method.

How it works under the hood

Most of the functionality is in the GenericCSRFStatelessFilter#doFilter method; here is the sequence diagram that explains what happens in this method:

doFilter method sequence diagram

The doFilter method is executed on each HTTP request:

  1. The filter creates an instance of the ExecutionContext class; this class is a simple POJO containing the initial HTTP request, the HTTP response, the CSRF cookies (if more than one cookie with the csrfCookieName is present) and implementations of the ResourceCheckerHook, TokenBuilderHook and ResponseBuilderHook (see the next section for the meaning of these classes).
  2. The filter checks the status of the HTTP resource; the status can be MUST_NOT_BE_PROTECTED, MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED or MUST_BE_PROTECTED_AND_COOKIE_ATTACHED (see the ResourceStatus enum), using an instance of ResourceCheckerHook.
  3. If the resource status is ResourceStatus#MUST_NOT_BE_PROTECTED or ResourceStatus#MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED, then the filter creates a CSRF cookie having as token the token generated by an instance of TokenBuilderHook.
  4. If the resource status is ResourceStatus#MUST_BE_PROTECTED_AND_COOKIE_ATTACHED, then the filter computes the CSRFStatus of the resource and uses an instance of ResponseBuilderHook to return the response to the client.

How to extend the default behavior

It is possible to extend or overwrite the default behavior by implementing the hooks interfaces. All the hooks implementations must be thread safe.

  1. The ResourceCheckerHook is used to check the status of a requested resource. The default implementation is DefaultResourceCheckerHookImpl and it will return ResourceStatus#MUST_NOT_BE_PROTECTED for any HTTP GET method; for all the other request types, it will return ResourceStatus#MUST_BE_PROTECTED_AND_COOKIE_ATTACHED if any CSRF cookie is present in the query or ResourceStatus#MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED otherwise. The interface signature is the following one:
    public interface ResourceCheckerHook extends Closeable {
        ResourceStatus checkResourceStatus(ExecutionContext executionContext);
    }  
  2. The TokenBuilderHook is used to generate the token that will be used to create the CSRF cookie. The default implementation is DefaultTokenBuilderHookImpl and it uses a call to UUID.randomUUID to generate a token. The interface signature is the following one:
    public interface TokenBuilderHook extends Closeable {
        String buildToken(ExecutionContext executionContext);
    }
  3. The ResponseBuilderHook is used to generate the response to the client depending on the CSRFStatus of the resource. The default implementation is DefaultResponseBuilderHookImpl and it throws a SecurityException if the CSRF status is CSRFStatus#COOKIE_NOT_PRESENT, CSRFStatus#HEADER_TOKEN_NOT_PRESENT or CSRFStatus#COOKIE_TOKEN_AND_HEADER_TOKEN_MISMATCH. If the CSRF status is CSRFStatus#COOKIE_TOKEN_AND_HEADER_TOKEN_MATCH, then the old CSRF cookies are deleted and a new CSRF cookie is created. The interface signature is the following one:
    public interface ResponseBuilderHook extends Closeable {
        ServletResponse buildResponse(ExecutionContext executionContext,
                                      CSRFStatus status);
    }
    

The hooks are instantiated inside the GenericCSRFStatelessFilter#init method using the ServiceLoader Java 6 loading facility. So if you want to use your own implementation of one of the hooks, you have to create a META-INF/services directory containing a text file whose name matches the fully qualified interface name of the hook that you want to replace.
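For example, to plug in a custom token builder (com.example.MyTokenBuilderHook being a hypothetical implementation class), you would create a file named META-INF/services/com.github.adriancitu.csrf.TokenBuilderHook containing the single line:

com.example.MyTokenBuilderHook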

Here is the sequence diagram representing the hooks initializations:

init method sequence diagram