Book review: Hacking – The Art of Exploitation, 2nd Edition

This is a review of the book Hacking – The Art of Exploitation, 2nd Edition.

Chapter 0x100 Introduction

Very short chapter (two and a half pages) in which the author gives his definition of a hacker: a person who finds unusual solutions to any kind of problem, not only technical ones. The author also expresses very clearly the goal of his book: “The intent of this book is to teach you the true spirit of hacking. We will look at various hacking techniques, from the past to the present, dissecting them to learn how and why they work”.

Chapter 0x200 Programming

The chapter is an introduction to the C programming language and to assembly for Intel x86 processors. The entry level is very low: it starts by explaining the use of pseudo-code and then very gradually introduces many of the constructs of the C language: variables, variable scopes, control structures, structs, functions, pointers (don’t expect a complete introduction to C or any advanced material).

The chapter contains a lot of code examples, very clearly explained using the GDB debugger. Since all the examples run under Linux, the last part of the chapter covers some basics of programming on the Linux operating system, like file permissions, uid, gid and setuid.

Chapter 0x300 Exploitation

This chapter builds on the knowledge learned in the previous one and is dedicated to buffer overflow exploits. Most of the chapter treats stack-based buffer overflows in great detail, using examples of gradually increasing complexity. Overflow vulnerabilities in other memory segments, the heap and the BSS, are also presented.
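
To give an idea of what these examples look like, here is a minimal sketch of my own (not a listing from the book) of the kind of stack-based vulnerable program the chapter dissects:

/* minimal sketch of a stack-based overflow: strcpy() performs no bounds
   checking, so an argument longer than 64 bytes overwrites adjacent stack
   memory (the saved frame pointer and the return address). */
#include <string.h>

int main(int argc, char *argv[]) {
    char buffer[64];

    if (argc > 1)
        strcpy(buffer, argv[1]);   /* overflow happens here */
    return 0;
}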

The last part of the chapter is about format string exploits. Some of these exploits target GNU C compiler specific structures (.dtors and .ctors). In almost all the examples, the author uses GDB to explain the details of the vulnerabilities and of the exploits.
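
For illustration, a format string vulnerability boils down to code like the following (my own minimal sketch, not taken from the book):

/* minimal sketch of a format string bug: the user input is used directly as
   the format argument, so an attacker can read the stack with %x or write to
   memory with %n; the safe call would be printf("%s", argv[1]). */
#include <stdio.h>

int main(int argc, char *argv[]) {
    if (argc > 1)
        printf(argv[1]);   /* vulnerable: the user controls the format string */
    return 0;
}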

One negative remark is that in some of the exploits the author uses shellcode without explaining how it has been crafted (on the other hand, an entire chapter is devoted to shellcode).

Chapter 0x400 Networking

This chapter is dedicated to network hacking and can be split into three parts. The first part is rather theoretical: the OSI model is presented and some of the layers (data-link layer, network layer and transport layer) are explained in more depth.

The second part of the chapter is more practical: different network protocols are presented (ARP, ICMP, IP, TCP); the author explains the structure of the packets/datagrams for each protocol and the communication workflow between hosts. On the programming side, the author gives a very good introduction to sockets in C.

The third part of the chapter is devoted to the hacks and is built on top of the first two parts. For the packet sniffing hacks the author introduces the libpcap library, and for the packet injection hacks (ARP cache poisoning, SYN flooding, TCP RST hijacking) he uses the libnet library. Other networking hacks are also presented, like different port scanning techniques, denial of service and the exploitation of a buffer overflow over the network. In most of the hacks the author crafts his own tools, but sometimes he uses existing ones like nemesis and nmap.
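
To give an idea of the libpcap part, a packet sniffer essentially opens an interface and registers a callback; the sketch below is mine (not one of the book’s tools) and assumes the interface name eth0:

/* minimal libpcap sniffer sketch: open an interface in promiscuous mode and
   print the size of every captured packet. Compile with: gcc sniff.c -lpcap */
#include <stdio.h>
#include <pcap.h>

void packet_handler(u_char *user, const struct pcap_pkthdr *header, const u_char *packet) {
    printf("captured a packet of %u bytes\n", header->len);
}

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("eth0", 4096, 1, 1000, errbuf);  /* "eth0" is an assumption */

    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live failed: %s\n", errbuf);
        return 1;
    }
    pcap_loop(handle, 10, packet_handler, NULL);   /* capture 10 packets, then stop */
    pcap_close(handle);
    return 0;
}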

Chapter 0x500 Shellcode

This chapter is an introduction to shellcode writing. In order to be injected into the target program, the shellcode must be as compact as possible, so the most suitable programming language for this task is assembly.

The chapter starts with an introduction to assembly language on the Linux platform and continues with an example of a “hello world” shellcode. The goal of the “hello world” shellcode is to present different techniques for making the shellcode position-independent in memory.

The rest of the chapter is dedicated to shell-spawning (local) and port-binding (remote) shellcodes. In both cases the same presentation pattern is followed: the author starts with an example of the shellcode in C and then translates and adapts it (using GDB) into assembly language.
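
For reference, the C starting point of a shell-spawning payload is essentially a single execve() call; the sketch below is my own reconstruction of the pattern, not the book’s exact listing:

/* minimal C version of a shell-spawning payload: the assembly shellcode
   presented in the chapter reproduces this execve() system call without
   relying on the C library. */
#include <unistd.h>

int main(void) {
    char *argv[] = { "/bin/sh", NULL };

    execve(argv[0], argv, NULL);
    return 0;
}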

Chapter 0x600 Countermeasures

The chapter is about the countermeasures that an intruder should apply in order to cover his tracks and become as undetectable as possible, but also the countermeasures that a victim should apply in order to reduce or nullify the effects of an attack.

The chapter is organized around exploits of a very simple web server. The proposed exploits are increasingly complex and stealthy: from the “classical” port-binding shellcode that can be easily detected to more advanced camouflage techniques like forking the shellcode in order to keep the target program running, spoofing the logged IP address of the attacker, or reusing an already open socket for the shellcode communication.

In the last part of the chapter some defensive countermeasures are presented, like the non-executable stack and randomized stack space. For each of these hardening countermeasures, some partial workarounds are explained.

Chapter 0x700 Cryptology

The last chapter treats cryptology, a subject very hard to explain to a neophyte. The first part of the chapter covers algorithmic complexity and symmetric and asymmetric encryption algorithms; the author brilliantly demystifies the operation of the RSA algorithm.

On the hacking side, the author presents some attacks linked to cryptography, like the man-in-the-middle attack on an SSL connection (using the mitm-ssh tool and THC Fuzzy Fingerprint) and the cracking of passwords generated by the Linux crypt() function (using dictionary attacks, brute-force attacks and rainbow table attacks).
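
As a rough idea of how the dictionary attack works (a sketch of my own, not the book’s code; the target hash and the word list are made up), each candidate word is hashed with crypt() using the salt of the target hash and compared against it:

/* minimal dictionary attack sketch against a crypt() hash.
   Compile with: gcc crack.c -lcrypt
   The target hash and the word list are made-up examples. */
#define _XOPEN_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *target = "ab01FAX.bQRSU";           /* hypothetical DES crypt() hash */
    const char *words[] = { "password", "letmein", "secret", NULL };
    char salt[3] = { target[0], target[1], '\0' };  /* classic crypt(): the salt is the first 2 chars */

    for (int i = 0; words[i] != NULL; i++) {
        char *hash = crypt(words[i], salt);
        if (hash != NULL && strcmp(hash, target) == 0) {
            printf("password found: %s\n", words[i]);
            return 0;
        }
    }
    printf("password not found in the dictionary\n");
    return 1;
}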

The last part of the chapter is quite outdated today (the book was published in 2008); it is dedicated to 802.11b wireless encryption and to the weaknesses of WEP.

Chapter 0x800 Conclusion

Like the introduction, this chapter is very short; the author repeats that hacking is a state of mind and that hackers are people with an innovative spirit.

(My) Conclusion

The book is a very good introduction to different technical topics of IT security. Even if the author tried to make the text accessible to non-technical people (the chapter about programming starts with an explanation of pseudo-code), some programming experience (ideally in C/C++) is required in order to get the most out of this book.

GDB debugger for dummies FAQ

This post is a small FAQ about the GDB debugger; it is strongly inspired by chapter 2 of the book Hacking: The Art of Exploitation (2nd edition).

How to pass arguments to the debugged program

Use the command run <program_arguments> which will (re)run the program to be debugged.
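
For example (the arguments below are made up):

run input.txt 42

The arguments are passed through the shell, so wildcard expansion and command substitution, for example run $(perl -e 'print "A" x 30'), also work.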

How to add a breakpoint

Use the command break with different parameters:

break <line_number>

break <filename>:<line_number>

break <function>

break <filename>:<function>
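
For example, break main stops the program at the beginning of the main() function, while break hello.c:6 (a made-up file name and line number) stops it at line 6 of hello.c.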

How to set the disassembly syntax

The disassembly syntax flavor can be set by typing:

set disassembly-flavor <flavor> (or, abbreviated, set dis <flavor>)

where <flavor> can be intel or att (the default).

If you want this setting to be applied to all your GDB sessions, create a .gdbinit file in your home directory and add the previous line to it.
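
For example, the following shell command appends the setting to the file (creating it if needed):

echo "set disassembly-flavor intel" >> ~/.gdbinit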

How to disassemble the debugged code

Use the command disassemble (or disass for short) with different parameters:

disass <file_name>:<function> 

disass <function>

disass <start_address>, <end_address>

disass <start_address>, +<length>

Use the /m flag if you want to print mixed source+disassembly code.
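
For example, disass main disassembles the main() function of the debugged program, and disass /m main interleaves the C source lines with the corresponding assembly instructions.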

How to examine the memory content

Use the command x, which is short for examine.

The examine command expects 2 arguments: the location of the memory to examine and how to display that memory content.

x/nfu <address>

  • n is how many memory units to print (defaults to 1).
  • f is the display format. Here are some common format letters:
    • o – display in octal.
    • x – display in hexadecimal.
    • u – display in unsigned, base-10 decimal.
    • t – display in binary.
    • i – display the memory as disassembled assembly language instructions.
    • c – automatically look up a byte on the ASCII table (should be used with the b unit).
    • s – display an entire string of character data.
  • u is the unit. It can be:
    • b – byte.
    • h – half word (2 bytes).
    • w – word (4 bytes) – default.
    • g – giant word (8 bytes).
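
A few usage examples (the fixed address is made up):

x/4xw $esp – print four words in hexadecimal, starting at the address stored in ESP.

x/8xb 0x8049000 – print eight bytes in hexadecimal, starting at address 0x8049000.

x/i $eip – print, as a disassembled instruction, the memory the program is about to execute.

x/s 0x8049000 – print the NUL-terminated string stored at that address.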

How to get information about registers

Use the command info registers <register_name> (or i r <register_name> for short). <register_name> may be any register name valid on the machine.

GDB has four “standard” register names that are available (in expressions) on most machines, whenever they do not conflict with an architecture’s canonical mnemonics for registers. The register names $pc and $sp are used for the program counter register and the stack pointer. $fp is used for a register that contains a pointer to the current stack frame, and $ps is used for a register that contains the processor status.
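
For example, i r eip prints only the instruction pointer, info registers with no argument prints all the (non-floating-point) registers, and the standard names can be used in expressions, e.g. x/i $pc prints the next instruction to be executed.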

How to list the source code

Use the command list (abbreviated l) with different parameters:

list <filename>:<function>

list <filename>:<line_number> 

By default GDB prints 10 code lines; the number of lines to print can be modified using the set listsize <count> command.
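
For example, list main prints the lines around the beginning of the main() function, and set listsize 20 raises the number of printed lines to 20.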

How to inspect the content of the stack

Use the command backtrace (abbreviated bt) with different parameters:

backtrace – print all the stack frames.

backtrace n – print only the innermost n frames.

backtrace -n – print only the outermost n frames.

backtrace full – also print the local variables of each frame.

(My) CISSP Notes – Application development security

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

Programming concepts

Machine code (also called machine language) is software that is executed directly by the CPU. Machine code is CPU-dependent; it is a series of 1s and 0s that translate to instructions that are understood by the CPU.

Source code consists of computer programming language instructions, written as text, that must be translated into machine code before execution by the CPU.

Assembly language is a low-level computer programming language.

Compilers take source code, such as C or Basic, and compile it into machine code.

Interpreted languages differ from compiled languages: interpreted code (such as shell scripts) is compiled on the fly each time the program is run.

Procedural languages (also called procedure-oriented languages) use subroutines, procedures, and functions.

Object-oriented languages attempt to model the real world through the use of objects which combine methods and data.

The different generations of programming languages:

  • Generation 1: machine code.
  • Generation 2: assembly language.
  • Generation 3: high-level programming languages, such as Basic, C or Java.
  • Generation 4: very high-level languages that focus on the desired result rather than on the algorithm, such as SQL or report generators.
  • Generation 5: languages where the programmer states facts and constraints and the system works out the solution (logic programming, used in artificial intelligence).

Application Development Methods

The Waterfall Model is a linear application development model that uses rigid phases; when one phase ends, the next begins.

The waterfall model contains the following steps:

  • System requirements
  • Software Requirements
  • Analysis
  • Program Design
  • Coding
  • Testing
  • Operations

An unmodified waterfall does not allow iteration: going back to previous steps. This places a heavy planning burden on the earlier steps. Also, since each subsequent step cannot begin until the previous step ends, any delays in earlier steps cascade through to the later steps.

The unmodified Waterfall Model does not allow going back. The modified Waterfall Model allows going back at least one step.

The Sashimi Model has highly overlapping steps; it can be thought of as a real-world successor to the Waterfall Model (and is sometimes called the Sashimi Waterfall Model).

Sashimi’s steps are similar to the Waterfall Model’s; the difference is the explicit overlapping.

Agile Software Development evolved as a reaction to rigid software development models such as the Waterfall Model. Agile methods include Scrum and Extreme Programming (XP).

Scrum uses small teams of developers, called the Scrum Team. They are supported by a Scrum Master, a senior member of the organization who acts like a coach for the team. Finally, the Product Owner is the voice of the business unit.

Extreme Programming (XP) is an Agile development method that uses pairs of programmers who work off a detailed specification.

The Spiral Model is a software development model designed to control risk.

The spiral model repeats steps of a project, starting with modest goals, and expanding outwards in ever wider spirals (called rounds). Each round of the spiral constitutes a project, and each round may follow traditional software development methodology such as Modified Waterfall. A risk analysis is performed each round.

The Systems Development Life Cycle (SDLC, also called the Software Development Life Cycle or simply the System Life Cycle) is a system development model.

SDLC focuses on security when used in the context of the exam.

No matter what development model is used, these principles are important in order to ensure that the resulting software is secure:

  • security in the requirements – even before the developers design the software, the organization should determine what security features the software needs.
  • security in the design – the design of the application should include security features, ranging from input checking to strong authentication and audit logs.
  • security in testing – the organization needs to test all the security requirements and design characteristics before declaring the software ready for production use.
  • security in the implementation
  • ongoing security testing – after an application is implemented, security testing should be performed regularly, in order to make sure that no new security defects are introduced into the software.  

Software escrow describes the process of having a third party store an archive of computer software.

Software vulnerability testing

Software testing methods

  • Static testing – tests the code passively; the code is not running. This includes syntax checking and code reviews.
  • Dynamic testing – tests the code while executing it.
  • White box testing – gives the tester access to the program source code.
  • Black box testing – gives the tester no internal details; the application is treated as a black box that receives inputs.

Software testing levels

  • Unit Testing – Low-level tests of software components, such as functions, procedures or objects
  • Installation Testing – Testing software as it is installed and first operated
  • Integration Testing – Testing multiple software components as they are combined into a working system.
  • Regression Testing – Testing software after updates, modifications, or patches.
  • Acceptance Testing – Testing to ensure the software meets the customer’s operational requirements.

Fuzzing (also called fuzz testing) is a type of black box testing that enters random, malformed data as inputs into software programs to determine if they will crash.

Combinatorial software testing is a black-box testing method that seeks to identify and test all unique combinations of software inputs. An example of combinatorial software testing is pairwise testing (also called all pairs testing).

Software Capability Maturity Model

The Software Capability Maturity Model (CMM) is a maturity framework for evaluating and improving the software development process.

The goal of CMM is to develop a methodical framework for creating quality software which allows measurable and repeatable results.

The five levels of CMM:

  1. Initial: The software process is characterized as ad hoc, and occasionally even chaotic.
  2. Repeatable: Basic project management processes are established to track cost, schedule, and functionality.
  3. Defined: The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization.
  4. Managed: Detailed measures of the software process and product quality are collected, analyzed, and used to control the process. Both the software process and products are quantitatively understood and controlled.
  5. Optimizing: Continual process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

Databases

A database is a structured collection of related data.

Types of databases:

  • relational databases – the structure of a relational database is defined by its schema. Records are called rows, and rows are stored in tables. Databases must ensure the integrity of the data. There are three integrity issues that must be addressed beyond the correctness of the data itself: referential integrity (every foreign key in a secondary table matches a primary key in the parent table), semantic integrity (each column value is consistent with its attribute data type) and entity integrity (each tuple has a unique primary key that is not null). Data definition language (DDL) is used to create, modify and delete tables. Data manipulation language (DML) is used to query and update data stored in tables.
  • hierarchical – data in a hierarchical database is arranged in tree structures, with parent records at the top of the database, and a hierarchy of child records in successive layers.
  • object oriented – the objects in an object database include data records, as well as their methods.

Database normalization seeks to make the data in a database table logically concise, organized, and consistent. Normalization removes redundant data, and improves the integrity and availability of the database.

Databases may be highly available, replicated over multiple servers containing multiple copies of data. Database replication mirrors a live database, allowing simultaneous reads and writes to multiple replicated databases. A two-phase commit can be used to ensure integrity.

A shadow database is similar to a replicated database, with one key difference: a shadow database mirrors all changes made to the primary database, but clients do not have access to the shadow.

Knowledge-based systems

Expert systems consist of two main components. The first is a knowledge base that consists of “if/then” statements. These statements contain rules that the expert system uses to make decisions. The second component is an inference engine that follows the tree formed by the knowledge base, and fires a rule when there is a match.
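
As a toy illustration of these two components (a sketch of my own, not something from the study guides; the rules and facts are invented), a forward-chaining inference engine can be boiled down to a loop that fires every rule whose “if” part matches a known fact:

/* minimal expert-system sketch: a knowledge base of if/then rules and a
   naive forward-chaining inference engine that fires a rule whenever its
   condition matches a known fact; each conclusion becomes a new fact. */
#include <stdio.h>
#include <string.h>

struct rule {
    const char *condition;    /* the "if" part   */
    const char *conclusion;   /* the "then" part */
};

static const struct rule knowledge_base[] = {
    { "fever",     "infection" },
    { "infection", "prescribe antibiotics" },
};

#define NB_RULES  (sizeof knowledge_base / sizeof knowledge_base[0])
#define MAX_FACTS 16

int main(void) {
    const char *facts[MAX_FACTS] = { "fever" };   /* the observed event */
    int nb_facts = 1;
    int changed = 1;

    while (changed) {                /* repeat until no new fact can be derived */
        changed = 0;
        for (unsigned r = 0; r < NB_RULES; r++) {
            int cond_known = 0, concl_known = 0;
            for (int f = 0; f < nb_facts; f++) {
                if (strcmp(facts[f], knowledge_base[r].condition) == 0)  cond_known = 1;
                if (strcmp(facts[f], knowledge_base[r].conclusion) == 0) concl_known = 1;
            }
            if (cond_known && !concl_known && nb_facts < MAX_FACTS) {
                printf("rule fired: if \"%s\" then \"%s\"\n",
                       knowledge_base[r].condition, knowledge_base[r].conclusion);
                facts[nb_facts++] = knowledge_base[r].conclusion;
                changed = 1;
            }
        }
    }
    return 0;
}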

Neural networks mimic the biological function of the brain. A neural network accumulates knowledge by observing events; it measures their inputs and outcome. Over time, the neural network becomes proficient at correctly predicting an outcome because it has observed several repetitions of the circumstances and is also told the outcome each time.