The container landscape, seen through the lens of the OCI and CNCF standards

Introduction

Today, the container landscape is rather crowded and Docker is not the predominant player anymore.

The goal of this post is to present the different products, projects, and vendors that are part of the container landscape and to classify them using the existing standards.

For this classification I will use the standards from the Open Container Initiative (OCI) and the Cloud Native Computing Foundation (CNCF).

Open Container Initiative

The goal of the Open Container Initiative (OCI) is to promote a set of common, minimal, open standards and specifications around container technology, more precisely container formats and runtimes. At the time of this writing it offers three standards, detailed below: the Image Specification, the Runtime Specification, and the Distribution Specification.

Cloud Native Computing Foundation

The goal of the Cloud Native Computing Foundation (CNCF) is to drive adoption of cloud native technologies (containers, service meshes, microservices, immutable infrastructure) by fostering and sustaining an ecosystem of open source, vendor-neutral projects. In the specific case of containers we will focus on the Container Runtime Interface (CRI).

If you wonder whether there is any link between the OCI and the CNCF, the answer is that both initiatives operate under the Linux Foundation umbrella, with the OCI focusing only on container formats and runtimes.

OCI Image Specification

The image specification defines the structure of an OCI Image which should contain a manifest, an image index (optional), a set of filesystem layers, and a configuration. The goal of this specification is to enable the creation of interoperable tools for building, transporting, and preparing a container image to run.

In order to see the content of an OCI image, the following commands could be used (for an nginx image in this example):

docker save nginx > nginx.tar
tar -xvf nginx.tar

The content of the tar file will look like:

71388...b14.json
c827b...b2014e92/
c827b...b2014e92/VERSION
c827b...b2014e92/json
c827b...b2014e92/layer.tar
manifest.json
repositories
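
Note that docker save exports Docker’s own archive format, which is close to but not identical to the OCI image layout. To obtain a strict OCI layout (an oci-layout file, an index.json, and a blobs/ directory), a tool like skopeo could be used; a minimal sketch, assuming skopeo is installed:

# copy the image from a registry directly into an OCI layout on disk
skopeo copy docker://docker.io/library/nginx:latest oci:nginx-oci:latest
ls nginx-oci   # blobs  index.json  oci-layout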

On the tooling side, here are a few tools able to generate OCI-compliant images (this list is far from exhaustive):

  • Kaniko – tool to build container images from a Dockerfile, inside a container or Kubernetes cluster. Kaniko doesn’t depend on a Docker daemon and executes each command within a Dockerfile completely in userspace. This enables building container images in environments that can’t easily or securely run a Docker daemon, such as a standard Kubernetes cluster. The tool was created by Google.
  • Jib – tool to build Docker and OCI images for your Java applications without a Docker daemon. It is available as plugins for Maven and Gradle and as a Java library. The tool was created by Google.
  • Buildah – a daemonless and rootless tool to build OCI images from a Dockerfile (see the sketch after this list). It is also capable of generating a pod file from one or more images and of mimicking the execution of a pod.
Tools to build OCI compliant images
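
As an illustration of a daemonless build, here is how a Dockerfile-based build might look with Buildah; a minimal sketch, assuming Buildah is installed, a Dockerfile is present in the current directory, and the image and registry names are hypothetical:

# build an OCI image from the Dockerfile in the current directory
buildah bud --format oci -t my-app:1.0 .
# push it to a registry, no Docker daemon involved
buildah push my-app:1.0 docker://registry.example.com/my-app:1.0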

OCI Runtime Specification

The runtime specification’s goal is to specify the configuration, execution environment, and lifecycle of a container. A container’s configuration is specified in a config.json file for the supported platforms; the specification details the fields that enable the creation of a container.

The execution environment is specified to ensure that applications running inside a container have a consistent environment between runtimes along with common actions defined for the container’s lifecycle.
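
To get a feel for this configuration, runc (presented below) can generate a default config.json for a new container bundle; a minimal sketch, assuming runc is installed:

# create a bundle directory with an (empty) root filesystem
mkdir -p mycontainer/rootfs && cd mycontainer
# generate a default config.json describing process, mounts, capabilities, namespaces
runc spec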

From the implementation point of view we can distinguish three types of runtimes.

Native runtimes use the host kernel to run the containers. The isolation of containers sharing the same kernel can be improved by using “out of the box” security mechanisms like seccomp, AppArmor, or SELinux.

The most used native container runtime projects today are (a few others existed in the past but are now deprecated: railcar, rkt):

  • runC – the de facto standard container runtime, originally developed by Docker and donated to the OCI, released under the Apache 2.0 license.
  • crun – created and maintained by Red Hat; it is part of the Podman/Buildah family of tools.

Sandboxed runtimes: instead of sharing the host kernel, the containerized process runs on a kernel proxy layer, which then interacts with the host kernel on the container’s behalf. Because of this increased isolation, these runtimes have a reduced attack surface and make it less likely that a containerized process can “escape” from the original container.

  • gVisor
    • gVisor provides a virtualized environment in order to sandbox containers. The system interfaces normally implemented by the host kernel are moved into a distinct, per-sandbox application kernel in order to minimize the risk of a container escape exploit.
    • To do this, a component of gVisor called the Sentry intercepts syscalls from the application. The Sentry is heavily sandboxed using seccomp, such that it is unable to access filesystem resources itself. When it needs to make system calls related to file access, it offloads them to an entirely separate process called the Gofer. Even those system calls that are unrelated to filesystem access are not passed through to the host kernel directly but instead are reimplemented within the Sentry. Essentially it’s a guest kernel, operating in user space.
  • Nabla
    • A containerized application can avoid making a Linux system call if it links to a library OS component that implements the system call functionality. Nabla containers use library OS (aka unikernel) techniques, specifically those from the Solo5 project, to avoid system calls and thereby reduce the attack surface. Nabla containers only use 7 system calls; all others are blocked via a Linux seccomp policy.
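
From the user’s perspective, a sandboxed runtime is typically just swapped in at the container-engine level. A minimal sketch, assuming gVisor’s runsc is installed and registered as a Docker runtime:

# run the container under gVisor instead of the default runc
docker run --rm --runtime=runsc nginx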

Virtualized runtimes take a different approach to container isolation; their goal is to run each container inside a lightweight virtual machine, with its own guest kernel, on which the application will run.

  • Kata Containers
    • The idea with Kata Containers is to run containers within a separate virtual machine. This approach gives the ability to run applications from regular OCI format container images, with all the isolation of a virtual machine.
    • Kata uses a proxy between the container runtime and a separate target host where the application code runs. The runtime proxy creates a separate virtual machine using QEMU to run the container on its behalf.
  • Firecracker
    • Firecracker is a virtual machine monitor that creates lightweight “microVMs”, offering the benefits of secure isolation through a hypervisor and no shared kernel, but with startup times around 100 ms.
    • Firecracker’s designers have stripped out functionality that is generally included in a kernel but isn’t required in a container, like device enumeration. The main saving comes from a minimal device model that strips out all but the essential devices.
Tools to run OCI images
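
As with sandboxed runtimes, a virtualized runtime is usually selected through the container engine. A minimal sketch, assuming Kata Containers is installed and registered as a Docker runtime under the name kata-runtime:

# run the container inside a lightweight virtual machine
docker run --rm --runtime=kata-runtime nginx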

OCI Distribution Specification

The Open Container Initiative Distribution Specification (a.k.a. “OCI Distribution Spec”) defines an API protocol to facilitate and standardize the distribution of content. The specification is designed to be agnostic of content types, OCI Image types being currently the most prominent.

The specification is strongly inspired by the Docker Registry HTTP API V2.
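
To give an idea of the protocol, here are two of its core endpoints exercised with curl; the registry host, image name, and digest are hypothetical placeholders:

# fetch the manifest of an image
curl https://registry.example.com/v2/my-app/manifests/1.0 \
     -H 'Accept: application/vnd.oci.image.manifest.v1+json'
# download a layer (blob) by digest
curl https://registry.example.com/v2/my-app/blobs/sha256:<digest>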

The GitHub repository notes four potential use cases:

  • Artifact Verification: Help enable trust between registries.
  • Resumable Push: Improve communication to help reduce re-uploading for transfers interrupted midway.
  • Resumable Pull: Similarly, it could help reduce unneeded downloads on the incoming side.
  • Layer Upload Deduplication: The spec could help container registries avoid duplicating layers.

On the tooling side, most probably any tool/vendor that offers support for the Docker Registry API will also support the new specification.

CNCF Container Runtime Interface (CRI)

In the first versions of Kubernetes (prior to 1.5) Docker was used as the (default and only) container runtime engine; the usage of Docker was hardcoded into the Kubelet (the k8s component installed and running on each cluster node, whose goal is to ensure that the containers described by Pod specifications are running and healthy).

The goal of CRI was to define an API that any container runtime should implement in order to be used by Kubelet.

The most important tools implementing CRI are:

  • containerd is Docker’s high-level container runtime, able to push and pull images, manage storage, and define network capabilities. It is also capable of managing the lifecycle of running containers by passing the corresponding commands to a low-level container runtime like runc.
  • CRI-O is the Red Hat implementation of CRI and has been the default container runtime for OpenShift since version 4. CRI-O has fewer features compared to containerd and delegates image management and storage to components from the “Container Tools” project. As its low-level container runtime it most probably uses crun.
Tools implementing CRI

Most probably my previous list is not exhaustive and other projects implement the CRI specification; for example PouchContainer, which is a container engine open-sourced by Alibaba. If you are interested in knowing how the CRI implementation is done you could read this: Design and Implementation of PouchContainer CRI.
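
A convenient way to talk to any CRI-compatible runtime is crictl, the CLI of the cri-tools project; a minimal sketch, assuming crictl is installed and containerd is listening on its default socket:

# list running containers through the CRI API
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
# pull an image through the CRI API
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull nginx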

Book Review: API Security In Action

This is the review of the API Security in action book.

(My) Conclusion

This book does a very good job of covering the different mechanisms that can be used to build secure (RESTful) APIs. For each security control the author explains what kind of attacks the respective control is able to mitigate.

The reader should be comfortable with Java and Maven because most of the code examples of the book (and there are a lot) are implemented in Java.

The diagram of all the security mechanisms presented:

Part 1: Foundations

The goal of the first part is to learn the basics of securing an API. The author starts by explaining what an API is from the user’s and from the developer’s point of view and what security properties any software component (APIs included) should fulfill:

  • Confidentiality – Ensuring information can only be read by its intended audience
  • Integrity – Preventing unauthorized creation, modification, or destruction of information
  • Availability – Ensuring that the legitimate users of an API can access it when they need to and are not prevented from doing so.

Even if these security properties look very theoretical, the author explains how applying specific security controls fulfills them. The following security controls are proposed:

  • Encryption of data in transit and at rest – Encryption prevents data from being read or modified in transit or at rest
  • Authentication – Authentication is the process of verifying whether a user is who they say they are.
  • Authorization/Access Control – Authorization controls who has access to what and what actions they are allowed to perform
  • Audit logging – An audit log is a record of every operation performed using an API. The purpose of an audit log is to ensure accountability
  • Rate limiting – Preserves the availability in the face of malicious or accidental DoS attacks.

These different controls should be applied in a specific order, as shown in the following figure:

Different security controls that could/should be applied for any API

To illustrate each control implementation, an example API called Natter API is used. The Natter API is written in Java 11 using the Spark Java framework. To make the examples as clear as possible to non-Java developers, they are written in a simple style, avoiding too many Java-specific idioms. Maven is used to build the code examples, and an H2 in-memory database is used for data storage.

The same API is also used to present different types of vulnerabilities (SQL injection, XSS) and their mitigations.

Part 2: Token-based Authentication

This part presents different techniques and approaches for the token-based authentication.

Session cookie authentication

The first authentication technique presented is the “classical” HTTP Basic Authentication. HTTP Basic Authentication has a few drawbacks: there is no obvious way for the user to ask the browser to forget the password, and the dialog box presented by browsers for HTTP Basic authentication cannot be customized.

But the most important drawback is that the user’s password is sent on every API call, increasing the chance of it accidentally being exposed by a bug in one of those operations. This is not very practical; a better approach is for the user to log in once and then be trusted for a specific period of time. This is basically the definition of token-based authentication:

Token Based authentication

The first presented example of token-based authentication uses HTTP Basic Authentication for the dedicated login endpoint (step number 1 in the previous figure) and session cookies for moving the generated token between the client and the API server.
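
Translated into HTTP calls, the flow could look like this; a minimal sketch using curl, where the host, port, credentials, and endpoints are hypothetical (the book’s Natter API follows a similar layout):

# 1. authenticate once with HTTP Basic and receive a session cookie
curl -u demo:password -c cookies.txt -X POST https://localhost:4567/sessions
# 2. subsequent API calls present the cookie instead of the password
curl -b cookies.txt https://localhost:4567/spaces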

The author takes the opportunity to explain how session cookies work and what their different attributes are, and, more importantly, presents the attacks that are possible when using session cookies. The session fixation attack and the Cross-Site Request Forgery (CSRF) attack are presented in detail, with different options to avoid or mitigate them.

Tokens without cookies

The usage of session cookies is tightly linked to a specific domain and/or sub-domains. In case you want to make cross-domain requests, the CORS (Cross-Origin Resource Sharing) mechanism can be used. The last part of the chapter treating the usage of session cookies contains detailed explanations of the CORS mechanism.

Using session cookies as a mechanism to store authentication tokens has a few drawbacks, like the difficulty of sharing cookies between distinct domains or the usage of API clients that do not understand web standards (mobile clients, IoT clients).

Another option that is presented is tokens without cookies. On the client side the tokens are stored using the Web Storage API. On the server side the tokens are stored in a “classical” relational database. For the authentication scheme, Bearer authentication is used (despite being created in the context of the OAuth 2.0 Authorization Framework, the Bearer scheme is rather popular in other contexts as well).

With this solution, the least secure component is the storage of the authentication tokens in the DB. In order to mitigate the risk of the tokens being leaked, different hardening solutions are proposed:

  • store in the DB the hash of the tokens
  • store in the DB the HMAC of the tokens; the (API) client will then send the bearer token and the HMAC of the token
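
A minimal sketch of both hardening options using the openssl CLI; the token and the server-side key are hypothetical values:

TOKEN=$(openssl rand -base64 24)                                 # value handed to the client
printf '%s' "$TOKEN" | openssl dgst -sha256                      # hash stored in the DB
printf '%s' "$TOKEN" | openssl dgst -sha256 -hmac "$SERVER_KEY"  # HMAC variant, keyed with a server-side secret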

This authentication scheme is not vulnerable to session fixation attacks or CSRF attacks (which was the case for the previous scheme), but an XSS vulnerability on the client side that is using the Web Storage API would defeat any mitigation control put in place.

Self-contained tokens and JWTs

The last chapter of this (second) part of the book treats self-contained or stateless tokens. Rather than storing the token state in the database as was done in the previous cases, you can instead encode that state directly into the token ID and send it to the client.

The most used client-side tokens are JSON Web Tokens (JWTs). The main features of a JWT are:

  • A standard header format that contains metadata about the JWT, such as which MAC or encryption algorithm was used.
  • A set of standard claims that can be used in the JSON content of the JWT, with defined meanings, such as exp to indicate the expiry time and sub for the subject.
  • A wide range of algorithms for authentication and encryption, as well as digital signatures and public key encryption.

A JWT can have three parts:

  • Header – indicates how the JWT was produced: the algorithm used, and either the key used to authenticate the JWT or an ID of that key. Some of the header values:
    • alg: Identifies which algorithm is used to generate the signature
    • kid: Key ID; as the key ID is just a string identifier, it can be safely looked up in the server-side set of keys.
    • jwk: The full key. This is not a safe header to use; trusting the sender to give you the key to verify a message loses all security properties.
    • jku: A URL to retrieve the full key. This is not a safe header to use. The intention of this header is that the recipient can retrieve the key from an HTTPS endpoint, rather than including it directly in the message, to save space.
  • Payload/Claims – pieces of information asserted about a subject. The list of standard claims:
    • iss (issuer): Issuer of the JWT
    • sub (subject): Subject of the JWT (the user)
    • aud (audience): Recipient for which the JWT is intended
    • exp (expiration time): Time after which the JWT expires
    • nbf (not before time): Time before which the JWT must not be accepted for processing
    • iat (issued at time): Time at which the JWT was issued; can be used to determine age of the JWT
    • jti (JWT ID): Unique identifier; can be used to prevent the JWT from being replayed (allows a token to be used only once)
  • Signature – securely validates the token. The signature is calculated by encoding the header and payload using Base64url encoding and concatenating the two with a period separator. That string is then run through the cryptographic algorithm specified in the header.
Example of JWT token
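
Because the first two parts are just Base64url-encoded JSON, a JWT can be inspected from the command line; a minimal sketch with a hypothetical token (Base64url padding may need to be restored before base64 -d accepts an arbitrary segment):

JWT='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.<payload>.<signature>'
printf '%s' "$JWT" | cut -d. -f1 | base64 -d   # prints {"alg":"HS256","typ":"JWT"}
printf '%s' "$JWT" | cut -d. -f2 | base64 -d   # would print the JSON claims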

Even if the JWT could be used as a self-contained token by adding the algorithm and the signing key into the header, this is a very bad idea from a security point of view, because you should never trust a token signed by an external entity. A better solution is to store the algorithm as metadata associated with a key on the server.

Storing the algorithm and the signing key on the server side also helps to implement a way to revoke tokens. For example, changing the signing key revokes all the tokens signed with that key. Another way to revoke tokens more selectively would be to add to the DB some token metadata, like the token creation date, and use this metadata as revocation criteria.

Part 3: Authorization

OAuth2 and OpenID Connect

A way to implement authorization using JWT tokens is by using scoped tokens. A scoped token limits the operations that can be performed with it; the set of allowed operations is known as the scope of the token. Typically, the scope is represented as one or more string labels stored as an attribute of the token. Because there may be more than one scope label associated with a token, they are often referred to collectively as scopes; the scopes of a token define the scope of access it grants.

Scopes allow a user to delegate part of their authority to a third-party app, restricting how much access they grant. This type of control is called discretionary access control (DAC) because users can delegate some of their permissions to other users.

Another type of control is mandatory access control (MAC); in this case the user permissions are set and enforced by a central authority and cannot be granted by users themselves.

OAuth2 is a standard for implementing DAC. OAuth2 uses the following specific terms:

  • The authorization server (AS) authenticates the user and issues tokens to clients.
  • The user is also known as the resource owner (RO), because it’s typically their resources that the third-party app is trying to access.
  • The third-party app or service is known as the client.
  • The API that hosts the user’s resources is known as the resource server (RS).

To access an API using OAuth2, an app must first obtain an access token from the Authorization Server (AS). The app tells the AS what scope of access it requires. The AS verifies that the user consents to this access and issues an access token to the app. The app can then use the access token to access the API on the user’s behalf.
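
At the HTTP level, the last step of an authorization code flow is a POST to the AS token endpoint; a minimal sketch with curl, where the AS host, client credentials, code, and redirect URI are hypothetical:

curl -u client-id:client-secret \
     -d 'grant_type=authorization_code&code=<code>&redirect_uri=https://app.example.com/cb' \
     https://as.example.com/oauth2/token
# => {"access_token":"...","token_type":"Bearer","expires_in":3600,"scope":"read_messages"}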

One of the advantages of OAuth2 is the ability to centralize authentication of users at the AS, providing a single sign-on (SSO) experience. When the user’s client needs to access an API, it redirects the user to the AS authorization endpoint to get an access token. At this point the AS authenticates the user and asks for consent for the client to be allowed access.

OAuth can provide basic SSO functionality, but its primary focus is on delegated third-party access to APIs rather than user identity or session management. The OpenID Connect (OIDC) suite of standards extends OAuth2 with several features:

  • A standard way to retrieve identity information about a user, such as their name, email address, postal address, and telephone number.
  • A way for the client to request that the user be authenticated even if they have an existing session, and to ask for them to be authenticated in a particular way, such as with two-factor authentication.
  • Extensions for session management and logout, allowing clients to be notified when a user logs out of their session at the AS, enabling the user to log out of all clients at once.

Identity-based access control

In this chapter the author introduces the notions of users, groups, RBAC (Role-Based Access Control) and ABAC (Attribute-Based Access Control). For each type of access control the author proposes an ad-hoc implementation (no specific framework is used) for the Natter API (the API used throughout the book to present different security controls).

Capability-based security and macaroons

A capability is an unforgeable reference to an object or resource together with a set of permissions to access that resource. Compared with the more dominant identity-based access control techniques like RBAC and ABAC, capabilities have several differences:

  • Access to resources is via unforgeable references to those objects that also grant authority to access that resource. In an identity-based system, anybody can attempt to access a resource, but they might be denied access depending on who they are. In a capability-based system, it is impossible to send a request to a resource if you do not have a capability to access it.
  • Capabilities provide fine-grained access to individual resources.
  • The ability to easily share capabilities can make it harder to determine who has access to which resources via your API.
  • Some capability-based systems do not support revoking capabilities after they have been granted. When revocation is supported, revoking a widely shared capability may deny access to more people than was intended.

The way to use capability-based security in the context of a REST API is via capability URIs. A capability URI (or capability URL) is a URI that both identifies a resource and conveys a set of permissions to access that resource. Typically, a capability URI encodes an unguessable token into some part of the URI structure. To create a capability URI, you can combine a normal URI with a security token.

The author adds capability URIs to the Natter API and implements them with the token encoded in a query parameter, because this is simple to implement. To mitigate any threat from tokens leaking in log files, short-lived tokens are used.

But putting the token representing the capability in the URI path or query parameters is less than ideal because these can leak in audit logs, Referer headers, and through the browser history. These risks are limited when capability URIs are used in an API but can be a real problem when these URIs are directly exposed to users in a web browser client.

One approach to this problem is to put the token in a part of the URI that is not usually sent to the server or included in Referer headers.
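
Concretely, the three placements compare as follows (hypothetical URIs; the fragment variant relies on the fact that browsers never send the fragment to the server):

https://api.example.com/messages/42?access_token=<token>   # query: leaks in logs and Referer headers
https://api.example.com/messages/42/<token>                # path: same leakage problems
https://app.example.com/messages/42#<token>                # fragment: never sent to the server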

Capability URIs can also be mixed with identity for handling authentication and authorization. There are a few ways to communicate identity in a capability-based system:

  • Associate a username and other identity claims with each capability token. The permissions in the token are still what grants access, but the token additionally authenticates identity claims about the user that can be used for audit logging or additional access checks. The major downside of this approach is that sharing a capability URI lets the recipient impersonate you whenever they make calls to the API using that capability.
  • Use a traditional authentication mechanism, such as a session cookie, to identify the user in addition to requiring a capability token. The cookie would no longer be used to authorize API calls but would instead be used to identify the user for audit logging or for additional checks. Because the cookie is no longer used for access control, it is less sensitive and so can be a long-lived persistent cookie, reducing the need for the user to frequently log in.

The last part of the chapter is about macaroons, a technology invented by Google (https://research.google/pubs/pub41892/). Macaroons extend capability-based security by adding more granularity.

A macaroon is a type of cryptographic token that can be used to represent capabilities and other authorization grants. Anybody can append new restrictions, called caveats, to a macaroon; for example, it is possible to add a caveat that allows only read access to messages created after a specific date.

Part 4: Microservice APIs in Kubernetes

Microservice APIs in K8S

This chapter is an introduction to the Kubernetes orchestrator. The introduction is very basic; if you are interested in something more complete then Kubernetes in Action, Second Edition is the best option. The author deploys on K8s an (H2) database, the Natter API (used as a demo throughout the book), and a new API called the link-preview service; Minikube is used as the K8s “cluster”.

Having an application with multiple components helps him show how to secure the communication between these components and how to secure incoming (outside) requests. The presented solution for securing the communication is based on the service mesh idea and on K8s network policies.

A service mesh works by installing lightweight proxies as sidecar containers into every pod in your network. These proxies intercept all network requests coming into the pod (acting as a reverse proxy) and all requests going out of the pod.

Securing service-to-service APIs

The goal of this chapter is to apply the authentication and authorization techniques presented in previous chapters, but in the context of service-to-service APIs. For authentication, API keys and JWTs are presented; to complement the authentication scheme, mutual TLS authentication is also used.

For authorization, OAuth2 is presented. A more flexible alternative is to create and use service accounts, which act like regular user accounts but are intended for use by services. Service accounts should be protected with strong authentication mechanisms because they often have elevated privileges compared to normal accounts.

The last part of the chapter is about managing service credentials in the context of K8s. Kubernetes includes a simple method for distributing credentials to services, but it is not very secure (the secrets are only Base64 encoded and can be leaked by a cluster administrator).

Secret vaults and key management services provide better security but need an initial credential to access them. Using secret vaults has the following benefits:

  • The storage of the secrets is encrypted by default, providing better protection of secret data at rest.
  • The secret management service can automatically generate and update secrets regularly (secret rotation).
  • Fine-grained access controls can be applied, ensuring that services only have access to the credentials they need.
  • The access to secrets can be logged, leaving an audit trail.

Part 5: APIs for the Internet of Things

Securing IoT communications

This chapter treats how different IoT devices can communicate securely with an API running on a classical system. Compared with classical computer systems, IoT devices have a few constraints:

  • An IoT device has significantly reduced CPU power, memory, connectivity, or energy availability compared to a server or traditional API client machine.
  • For efficiency, devices often use compact binary formats and low-level networking based on UDP rather than high-level TCP-based protocols such as HTTP and TLS.
  • Some commonly used cryptographic algorithms are difficult to implement securely or efficiently on devices due to hardware constraints or threats from physical attackers.

In order to cope with these constraints, new protocols have been created based on the existing protocols and standards:

  • Datagram Transport Layer Security (DTLS). DTLS is a version of TLS designed to work with connectionless UDP-based protocols rather than TCP based ones. It provides the same protections as TLS, except that packets may be reordered or replayed without detection.
  • JOSE (JSON Object Signing and Encryption) standards. For IoT applications, JSON is often replaced by more efficient binary encodings that make better use of constrained memory and network bandwidth and that have compact software implementations.
  • COSE (CBOR Object Signing and Encryption) provides encryption and digital signature capabilities for CBOR and is loosely based on JOSE.

When devices need to use public key cryptography, key distribution becomes a complex problem. This problem can be solved by generating random keys during the manufacturing of the IoT device (device-specific keys derived from a master key and some device-specific information) or through the use of key distribution servers.
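
The device-specific derivation can be as simple as an HMAC of the device identifier under the master key; a minimal sketch with openssl, where all values are hypothetical:

MASTER_KEY='factory-master-key'   # kept in the manufacturer's secure storage
DEVICE_ID='device-00042'          # public, printed on the device
printf '%s' "$DEVICE_ID" | openssl dgst -sha256 -hmac "$MASTER_KEY"   # device-specific key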

Securing IoT APIs

The last chapter of the book focuses on how to secure access to APIs in Internet of Things (IoT) environments, meaning APIs provided by the devices or cloud APIs consumed by the devices themselves.

For the authentication part, the IoT devices could be identified using credentials associated with a device profile. These credentials could be an encrypted pre-shared key or a certificate containing a public key for the device.

For the authorization part, the IoT devices could use OAuth2 for IoT, a new specification that adapts the OAuth2 specification for constrained environments.

(My) BruCON 2018 Notes

Here are my quick notes from the BruCON 2018 conference. All the slides of the conference can be found here.

$SignaturesAreDead = “Long Live RESILIENT Signatures” wide ascii nocase (by Daniel Matthew)

Background

Signatures and indicators: what is a good signature? A good signature depends on the context, but the main properties are:

  • More resilient than rigid (resist evasion and normal changes).
  • More methodology-based than specific (capture methods or techniques).
  • More proactive than reactive (identifies new technologies).

Process

  • Define detection
    • what, where, when to find.
  • Assemble a sample set
    • collected sample set.
    • generated sample set.
    • try to enumerate the entire problem set.
  • Test existing detection/s
    • Test existing detection capabilities for any free wins.
    • Adjust priorities of applicable existing detections.
  • Generate data
    • logs.
    • binary metadata.
  • Write detection
    • start broad and tune after.
  • Test and tune

Process Walk-through for binaries

It applies the previous process to binaries. Malware binaries change very often; in this case you can’t rely on anti-virus products.

Process Walk-through for regsvr32.exe

It applies the previous process to regsvr32.exe. It shows that it is rather difficult to detect regsvr32 by its arguments or process name,
because there are multiple possibilities for the parameters, e.g. /s or -s, /u or -u, /us or -us.

Approaches that paid off to detect the execution of regsvr32:

  • Handle obfuscation separately.
  • Handle renamed .exe/.dll separately

Takeaways

  • Know what you are detecting today and HOW you are detecting it.
  • Capture result of hunts as new detections.

All Your Cloud Are Belong To Us – Hunting Compromise in Azure (by Nate Warfield)

Traditional network (old days) vs. cloud network:

  • server exposure was restricted → every VM is exposed to the internet
  • many layers of ACLs + segmentation → VMs are deployed with a predefined firewall
  • dedicated deployment teams → anyone with access can expose bad things
  • well-defined patch cadence → patch management is decentralized

NoSQL problem

NoSQL solutions were never intended for internet exposure,
BUT (naturally) people exposed them to the internet anyway.

Hunting NoSQL Compromise in Azure

Port scans are slow and each NoSQL solution runs on different ports.

The speaker used Shodan:

  • rich metadata for each IP
  • DB names are indexed
  • JSON export allows for automated hunting

The code was added to Shodan in December 2017 but requires Shodan Enterprise API access.

Network Security Group

Network Security Group is the VM firewall.

  • Configurable during deployment (optional)
  • 46% of images expose ports by default
  • 96% expose more than the management ports

Your IaaS security is your responsibility.
PaaS and SaaS are a shared responsibility.

  • Patches handled by Microsoft:
    • SaaS: 100% transparent for you
    • PaaS: requires configuration

Cloud marketplaces are supply chains

  • supply chain attacks are increasingly common.
  • cloud marketplaces are the next targets
  • minimal validation of 3rd-party images
    • 3rd-party IaaS images are old
    • average Azure image age: 140 days
    • average AWS image age: 717 days

2018 year of the cryptominer

  • cryptomining is the new ransomware
  • open S3 buckets are attacked
  • any vulnerable system is a target


Disrupting the Kill Chain (by Vineet Bhatia)

What is this talk about:

  • how to make the adversary’s intrusion cost prohibitive.
  • how to monitor and secure Windows 10 environments.
  • how to recover from an intrusion.

Computer scientists at Lockheed-Martin corporation described a new “intrusion kill chain” framework; see KillChain.

PRE-ATT&CK: Adversarial Tactics, Techniques & Common Knowledge for Left-of-Exploit is a curated knowledge base and model for cyber adversary behavior, reflecting the various phases of an adversary’s lifecycle and the platforms they are known to target.

PRE-ATT&CK consists of 15 tactics and 151 techniques.

ATT&CK: Adversarial Tactics, Techniques, and Common Knowledge for Enterprise is an adversary model and framework for describing the actions an adversary may take to compromise and operate within an enterprise network. The model can be used to better characterize and describe post-compromise adversary behavior.

Summary of the adversary behavior:

  • know when they are coming; use PRE-ATT&CK.
  • see them when they operate on your infrastructure, use ATT&CK.
  • map their activities, use the “kill chain”.

Don’t jump directly to attacker remediation; if an adversary perceives you as hostile (e.g. hacking back), they will react differently.

How to make intrusions cost prohibitive:

  • reduce attack surface area.
  • detect early and remediate swiftly.
  • deceive, disrupt and deteriorate.

The rest of the talk was about Windows 10 security.

Hunting Android Malware: A novel runtime technique for identifying malicious applications (by Christopher Leroy)

Malware is a constant threat to the Android ecosystem. How to protect against malware:

  • you have to look at the APK file(s):
    • statically
    • or in a sandbox
  • looking for:
    • (code) signatures
    • hashes
    • permissions, reputation

What are the shortcomings of the current detection techniques:

  • static analysis is hard and can only reveal a subset of the functionality.
  • bypassing AV products is easy.
  • cannot do forensics in real time.

Idea: look at the application heap, because Android apps make use of objects. The novelty is that the code should be instrumented before execution:

  • objects exist on the heap so they are accessible.
  • trace calls and monitor the behavior.
  • great way to gain insight into applications

The author presented his own framework, called Uitkyk. Uitkyk is a framework that allows you to identify Android malware according to the instantiated objects on the heap for a specific Android process.

The framework also integrates with the Frida framework, which is a “dynamic instrumentation toolkit for developers, reverse-engineers, and security researchers”.

Exploits in Wetware (by Robert Sell)

This was also a talk about social engineering. From my point of view it does not bring new things compared to the talk “Social engineering for penetration testers” from the previous day.

Dissecting Of Non-Malicious Artifacts: One IP At A Time (by Ido Naor and Dani Goland)

The talk was about how one can find very valuable information that is uploaded (accidentally or not) to different public cloud services.


(My) BruCON 2018 Notes (Retro Day)

Here are my quick notes from the BruCON 2018 conference. This first day was called “Retro Day” because it contained the best (as chosen by the attendees) previous talks. All the slides of the conference can be found here.

Advanced WiFi Attacks using Commodity Hardware (by Mathy Vanhoef)

WiFi devices assume that each device behaves fairly, for example sharing the bandwidth with the other devices.

With special hardware it is possible to modify this behavior; it is possible to do:

  • continuous jamming: makes the channel unusable.
  • selective jamming: blocks specific packets.

Implementing selfish behavior using cheap devices

Steps to send a frame:
frame1 + SIFS + AIFSN + backoff + frame2

  • SIFS: represents the time to let the hardware process the frame.
  • Backoff: a random amount of time, used to avoid collisions.

Implementing the selfish behavior (this was done by modifying the firmware):

  • disable Backoff.
  • reduce AIFSN.

Countermeasures to this problem:

  • DOMINO defense system detects selfish devices

What if there are multiple selfish stations?
In theory, in a collision both frames are lost; in reality, due to the “capture effect”, the frame with the best signal and the lowest bit-rate is decoded (similar to FM radio).

Continuous jamming

how it works:

  • instant transmit: disable carrier sense
  • no interruptions: queue infinite packets

This will:
  • leave only the first packet visible in monitor mode
  • silence the other devices

What is the impact in practice?
We can jam any device that uses the 2.4 and 5 GHz bands; not only WiFi devices, but also others like security cameras.

Selective jammer

It decides based on the header whether to jam the frame, so it should:

  • detect and decode the header.
  • abort receiving current frame.
  • inject dummy packet

The hard part is the first step. This is done by monitoring the (RAM) memory written by the radio chip.

Impact of the attacks on higher layers

Breaking WPA2; this is a shorter version of: KRACKing WPA2 in Practice Using Key Reinstallation Attacks.

Hacking driverless vehicles (by Zoz)

Driverless vehicle advantages:

  • energy efficiency
  • time efficiency

Main roadblocks:

  • shared infrastructure (have to share the roads with cars driven by humans)
  • acceptance (safety, robustness)

Classical failures:

  • RQ-3 DarkStar – a self-flying drone; it crashed due to cracks in the runway.
  • Sandstorm – a self-driving car contest entry; in this case a mismatch between the GPS info and the other sensors.

Autonomous vehicle logic structure:

Mission task planners
|
Navigation
|
Collision avoidance
|
Control loops

Sensors used by driverless vehicles:

  • active vs passive sensors
  • common sensors:
    • gps
    • lidar
    • cameras
    • wave radar
    • digital wheel encoders

Sensor attacks

2 kinds:

  • denial
  • spoofing – craft false data

GPS:

  • denial – jamming
  • spoofing – fake GPS satellite signals

LIDAR

  • denial:
    • active overpowering
    • preventing returning signal
  • spoofing
    • can fake road markings invisible to humans
    • can make solid looking objects

Digital compass:

  • extremely difficult to interfere with acoustic attacks.
  • gyroscope vibrates and has a resonance frequency.

Levelling Up Security @ Riot Games (by Mark Hillick)

The talk was structured in 2 parts: what Riot Games did to enhance security in 2015 and what they are doing to enhance security in 2018.

2015

  • introduced the idea of security champions.
  • introduced the RFC (Request For Comments) document = technical design.
    • not an approval process; it’s more about receiving advice.
    • becomes a standard through adoption.
  • introduced a bug bounty program.

2018

  • the security team had doubled in size.
  • the sec-ops team and the red team are working together.
  • put in place an anti-cheating strategy:
    • prevention
    • detection
    • deterrence

Top 8 vulnerabilities:

  • improper authentication.
  • open redirect.
  • information disclosure.
  • business error.

Challenges around secrets:

  • detected an AWS API key in a commit.
  • how to fix it:
    • provide temporary AWS API tokens.
    • remove the usage of long-lived AWS API keys.

Social engineering for penetration testers (by Sharon Conheady)

Definition: efforts to influence popular attitudes and social behavior.

Main takeaway (for 2018): social engineering (a.k.a. SE) is used more and more, but the techniques actually didn’t change too much.

What has changed in the techniques since 2009? Essentially nothing; examples of social engineering throughout history illustrate this.

What has changed since 2009 (when the same talk was first given):

  • the scale of the attacks.
  • sophistication
  • more targeted
  • ethical SE is mostly phishing.

Why social engineering (still) works:

  • people want to help.
  • greed
  • tendency to trust
  • complacency
  • people do not like confrontations.

Stages of an attack

  • target identification
  • reconnaissance
    • passive information gathering
    • physical reconnaissance
    • Google Maps
    • where are the security guards
  • sample scenarios
    • tailgate
  • going in for the attack
    • use your scenario to get in
    • prove you were there
    • have an exit strategy
  • write the report
  • tell the story

The 99c heart surgeon dilemma (by Stefan Friedli)

The presentation was about bad pen test examples and how to make things better.

It starts with examples of bad pen tests:

  • Unclear impact metrics.
  • Accidentally pasting other customer names.
  • Reported false positives.

How to make things better:

  • Avoid security companies offering bad services. How:
    • Ask about procedures, standards.
    • Ask to talk to the testers
    • Check for community participation
    • Look at sample deliverables
  • How to fix Penetration Testing:
    • Involve more people.
    • Have more conversations.
    • Don’t stop at the report

(My) OWASP Belgium Chapter meeting notes

These are my notes from the OWASP Belgium Chapter meeting of September 17th.

Docker Threat Modeling and Top 10 (by Dirk Wetter)

Docker is not really new:

  • FreeBSD Jails – year 2000
  • Solaris Zones/Containers – year 2004

Threat vectors on (Docker) containers:

  1. Application escape
  2. Orchestration tool
  3. Other containers
  4. Platform host; especially after the discovery of vulnerabilities in microprocessors (Spectre, Foreshadow).
  5. Network: not properly secured network.
  6. Integrity and confidentiality of OS images.

Top 10 Docker security

  1. Docker’s insecure default of running code as a privileged user
    • workaround: remap user namespaces; see user_namespaces(7)
  2. Patch management
    • Host
    • Container Orchestration
    • OS Images
  3. Network separation and firewalling
    • use basic DMZ techniques
    • allow only what is needed on the firewall level
    • (for external network connection) do not allow initiating outgoing TCP connections.
  4. Maintain security contexts
    • do not mix Development/Production images
    • do not mix Front-End and Back-End services
    • do not run arbitrary images.
  5. Secrets management
    • where to store keys, certificates, credentials
    • not an easy problem to solve
  6. Resource protection
    • limit memory (--memory=), swap (--memory-swap=), CPU usage (--cpu-*), and the number of processes (--pids-limit); a combined example is shown after this list
    • do not mount external disks if not necessary; if really necessary, mount them read-only (r/o).
  7. Image integrity and origin
  8. Follow the immutable paradigm
    • run the container in read-only mode: docker run --read-only ... or docker run -v /hostdir:/containerdir:ro
  9. Hardening
    • Container
      • docker run --cap-drop option, you can lock down root in a container so that it has limited access within the container.
      • --security-opt=no-new-privileges prevents the uid transition while running a setuid binary meaning that even if the image has dangerous code in it, we can still prevent the user from escalating privileges
    • Host
      • networking – only SSH and NTP
  10. Logging
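
Several of these controls translate directly into docker run flags; a minimal sketch combining a few of them (the image name and the limits are hypothetical):

docker run --read-only --cap-drop=all --security-opt=no-new-privileges \
           --memory=256m --memory-swap=256m --pids-limit 100 my-image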

Securing Containers on the High Seas (by Jack Mannino)

The entire presentation revolves around the 4 phases used to create an application that runs on containers:

  • Design
  • Build
  • Ship
  • Run

Design (secure the design)

  • Understand how the system will be used and abused.
  • Beware of tightly-coupled components.
  • Can solve security issues through patterns that lift security out of the container itself, e.g. the Service Mesh pattern.

Build (secure the build process)

  • Build first level of security controls into containers.
  • Orchestration systems can override these controls and mutate containers through an extra layer of abstraction.
  • Use base images that ship with minimal installed packages and dependencies.
  • Use version tags vs. image:latest; do not use latest!
  • Use images that support security kernel features
  • Limit privileges
    • Often, we only need a subset of capabilities
      • ex: the ping command requires CAP_NET_RAW, so we can run the Docker image like this:

docker run -d --cap-drop=all --cap-add=net_raw my-image

  • Kernel Hardening
    • Seccomp is a Linux kernel feature that allows you to filter dangerous syscalls.
  • MAC (Mandatory Access Control)
    • SELinux and AppArmor allow you to set granular controls on files and network access.
    • Docker leads the way with its default AppArmor profile (see the sketch after this list).
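
A minimal sketch of how both kernel-hardening mechanisms are selected at run time; the profile names and the image are hypothetical:

docker run --security-opt seccomp=./my-seccomp.json \
           --security-opt apparmor=my-apparmor-profile my-image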

Ship

  • Validate the integrity of the container.
    • ex: Docker Content Trust & Notary
    • Consume only trusted content for tagged Docker builds.
  • Validate security pre-conditions.
    • Allow or deny a container’s cluster admission.
    • Centralized interfaces and validation.

Run

  • Containers are managed through orchestration systems.
  • Management API – used to deploy, modify and kill services.
    • Frequently deployed without authentication or access control.
  • Authentication
    • Authenticate subjects (users and service accounts) to the cluster.
    • Avoid sharing service accounts across multiple services.
    • Subjects should only have access to the resources they need.
  • Secrets management
    • Safely inject secrets into containers at runtime.
    • Anti-patterns:
      • Hardcoded.
      • Environment variables.