(My) CISSP Notes – Bibliography

This is the last post from “(My) CISSP Notes” and contains links to the materials that I used to prepare for the exam.


  • CISSP Study Guide – This was my main source of information.
  • CISSP For Dummies – This was my second source of information; very easy to read and understand, but not sufficient to pass the exam.
  • 11th Hour CISSP: Study Guide – This is a short version of the CISSP Study Guide. I used it for a quick review in the last few days before the exam.
  • CISSP Boxed Set – This boxed set contains a study guide and a book of practice exams. I used the study guide only occasionally, for specific topics, and I found it very complete. At the beginning I tried to use it as my main source of information, but the main problem is the huge quantity of information (1,400 pages).

Practice Exams

Audio Podcasts

  • McGraw-Hill Education CISSP Podcasts – The podcasts cover all the domains and are of quite good (audio) quality. If you don’t have time to listen to everything, you can listen only to the review podcasts, which contain just the most important information.

(My) CISSP Notes – Security Operations

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

Operations Security is concerned with threats to a production operating environment.

So operations security is about people, data, media, hardware, and the threats associated with each of these in a production environment.

Administrative security

One fundamental aspect of operations security is ensuring that controls are in place to inhibit people either inadvertently or intentionally compromising the confidentiality, integrity, or availability of data or the systems and media holding that data. Administrative Security provides the means to control people’s operational access to data.

Administrative personnel controls :

  • least privilege – the principle of least privilege dictates that persons have no more access than is strictly required for the performance of their duties.
  • need to know – only people with a valid need to know certain information in order to perform their job functions should have access to that information. An extension of least privilege in MAC environments is compartmentalization: a method for enforcing need to know that goes beyond mere reliance upon clearance level and requires that the person actually need access to the specific information.
  • separation of duties – prescribes that multiple people are required to complete critical or sensitive transactions. The goal of separation of duties is to ensure that, in order for someone to abuse their access to sensitive data or transactions, they must convince another party to act in concert. Collusion is the term used for two parties conspiring to undermine the security of the transaction.
  • rotation of duties – also known as job rotation or rotation of responsibilities, provides an organization with a means to help mitigate the risk associated with any one individual having too many privileges. Rotation of duties simply requires that critical functions or responsibilities are not continuously performed by the same single person without interruption.
  • mandatory leave – an additional operational control, closely related to rotation of duties, is mandatory leave, also known as forced vacation.
  • non-disclosure agreement (NDA) – a work-related contractual agreement that ensures that, prior to being given access to sensitive information or data, an individual or organization appreciates their legal responsibility to maintain the confidentiality of that information.
  • background checks – also known as background investigations or preemployment screening, these are an additional administrative control commonly employed by many organizations.

Sensitive information/media security

Wherever the data exists, there must be processes that ensure the data is not destroyed or made inaccessible (a breach of availability), disclosed (a breach of confidentiality), or altered (a breach of integrity).

Perhaps the most important step in media security is the process of locating sensitive information, and labeling or marking it as sensitive.

People handling sensitive media should be trusted individuals who have been vetted by the organization.

When storing sensitive information, it is preferable to encrypt the data. Encryption of data at rest greatly reduces the likelihood of the data being disclosed in an unauthorized fashion due to media security issues.

The term data remanence is important to understand when discussing media sanitization and data destruction. Data remanence is data that persists beyond noninvasive means to delete it.

Wiping, also called overwriting or shredding, writes new data over each bit or block of file data.

By introducing an external magnetic field through use of a degausser, the data on magnetic storage media can be made unrecoverable.

Asset management

  • Patch management – One of the most basic, yet still rather difficult, tasks associated with maintaining strong system security configuration is patch management, the process of managing software updates.
  • Vulnerability management – Vulnerability scanning is a way to discover poor configurations and missing patches in an environment. While it might seem obvious, it bears mentioning that vulnerability scanning devices are only capable of discovering the existence of known vulnerabilities. The term vulnerability management is used rather than just vulnerability scanning to emphasize the need for management of the vulnerability information.
  • Change management – In order to maintain consistent and known operational security, a regimented change management or change control process needs to be followed. The purpose of the change control process is to understand, communicate, and document any changes with the primary goal of being able to understand, control, and avoid direct or indirect negative impact that the change might impose.

Continuity of operation

Three basic types of backups exist:

  • full backup – the easiest to understand of the types of backup; it is simply a replica of all allocated data on a hard disk. Because full backups contain all of the allocated data, they are simple from a recovery standpoint in the event of a failure.
  • incremental backup – one alternative to relying exclusively upon full backups is to leverage incremental backups. Incremental backups only archive files that have changed since the last backup of any kind was performed. Since fewer files are backed up, the time to perform the incremental backup is greatly reduced.
  • differential backup – while the incremental backup archives only those files that have changed since the last backup of any kind, the differential method backs up any files that have been changed since the last full backup.

Redundant array of inexpensive disks (RAID)

The goal of a Redundant Array of Inexpensive Disks (RAID) is to help mitigate the risk associated with hard disk failures.

Three terms that are important to understand with respect to RAID are:

  • mirroring – is the most obvious and basic of the fundamental RAID concepts, and is simply used to achieve full data redundancy by writing the same data to multiple hard disks.
  • striping – is a RAID concept that is focused on increasing the read and write performance by spreading data across multiple hard disks. With data being spread amongst multiple disk drives, reads and writes can be performed in parallel across multiple disks rather than serially on one disk.
  • parity – is a means to achieve data redundancy without incurring the same degree of cost as that of mirroring in terms of disk usage and write performance.
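Parity in RAID is computed with XOR: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt by XORing the surviving blocks with the parity. A toy sketch of the idea (real controllers do this per stripe, in hardware):

```python
def parity_block(blocks):
    """Compute the XOR parity of equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def recover_block(surviving_blocks, parity):
    """Rebuild one lost block by XORing the survivors with the parity."""
    return parity_block(list(surviving_blocks) + [parity])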

RAID 0: Striped Set As the title suggests, RAID 0 employs striping to increase the performance of reads and writes. By itself, striping offers no data redundancy, so RAID 0 is a poor choice if recovery of data is the reason for leveraging RAID.

RAID 1: Mirrored Set This level of RAID is perhaps the simplest of all RAID levels to understand. RAID 1 creates/writes an exact duplicate of all data to an additional disk. The write performance is decreased, though the read performance can see an increase.

RAID 2: Hamming Code RAID 2 is not considered commercially viable for hard disks and is not used. This level of RAID would require either 14 or 39 hard disks and a specially designed hardware controller, which makes RAID 2 incredibly cost prohibitive. RAID 2 is not likely to be tested.

RAID 3: Striped Set with Dedicated Parity (byte level) Striping is desirable due to the performance gains associated with spreading data across multiple disks. However, striping alone is not as desirable due to the lack of redundancy. With RAID 3 data, at the byte level, is striped across multiple disks, but an additional disk is leveraged for storage of parity information, which is used for recovery in the event of a failure.

RAID 4: Striped Set with Dedicated Parity (block level) RAID 4 provides the exact same configuration and functionality as that of RAID 3, but stripes data at the block, rather than byte, level.

RAID 5: Striped Set with Distributed Parity One of the most popular RAID configurations is that of RAID 5, Striped Set with Distributed Parity. Again with RAID 5 there is a focus on striping for the performance increase it offers, and RAID 5 leverages a block level striping. Like RAIDs 3 and 4, RAID 5 writes parity information that is used for recovery purposes. However, unlike RAIDs 3 and 4, which require a dedicated disk for parity information, RAID 5 distributes the parity information across multiple disks.

RAID 6: Striped Set with Dual Distributed Parity While RAID 5 accommodates the loss of any one drive in the array, RAID 6 can allow for the failure of two drives and still function. This redundancy is achieved by writing the same parity information to two different disks.

System redundancy

The most common example of this in-built redundancy is systems or devices which have redundant onboard power in the event of a power supply failure. In addition to redundant power, it is also common to find redundant network interface cards (NICs), as well as redundant disk controllers.

Some applications and systems are so critical that they have more stringent uptime requirements than can be met by standby redundant systems, or spare hardware. These systems and applications typically require what is commonly referred to as a high-availability (HA) or failover cluster. A high-availability cluster employs multiple systems that are already installed, configured, and plugged in, such that if a failure causes one of the systems to fail then the other can be seamlessly leveraged to maintain the availability of the service or application being provided.

Incident response management

Incident handling and incident response are the terms most commonly associated with how an organization proceeds to identify, react to, and recover from security incidents.

Computer Security Incident Response Team (CSIRT) is a term used for the group that is tasked with monitoring, identifying, and responding to security incidents.

Phases of incident responses :

  • detection – one of the most important steps in the incident response process. Detection is the phase in which events are analyzed in order to determine whether they might comprise a security incident.
  • containment – the point at which the incident response team attempts to keep further damage from occurring as a result of the incident. This phase is also typically where a binary (bit-by-bit) forensic backup is made of the systems involved in the incident.
  • eradication – involves understanding the cause of the incident so that the system can be reliably cleaned and ultimately restored to operational status later, in the recovery phase.
  • recovery – involves cautiously restoring the system or systems to operational status.
  • reporting – the phase most likely to be neglected in immature incident response programs. This is unfortunate because the reporting phase, if done right, has the greatest potential to effect a positive change in security posture. The goal of the reporting phase is to provide a final report on the incident, which will be delivered to management.

(My) CISSP Notes – Physical Security

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

Physical (Environmental) security protects the Confidentiality, Integrity and Availability of physical assets: people, buildings, systems, and data. The CISSP® exam considers human safety the most critical concern of this domain, one which trumps all other concerns.

Physical security protects against threats such as unauthorized access and disasters, both man-made and natural. Controls used in this domain are primarily physical (such as locks, fences, guards, etc.); administrative controls (such as policy and procedures) and technical (such as biometrics) are also used.

Physical access control

Physical access control consists of the systems and techniques used to restrict access to a security perimeter and provide boundary protection.

Types of Vehicle Gates :

  • class 1 – residential (home use)
  • class 2 – commercial/general access (parking garage)
  • class 3 – industrial/limited access
  • class 4 – restricted access

A traffic bollard is a strong post designed to stop a car.

Lock picking is the art of opening a lock without a key.

The master key opens any lock for a given security zone in a building.

The core key is used to remove the lock core in interchangeable core locks (where the lock core may be easily removed and replaced with another core).

A smart card is physical access control device which is often used for electronic locks, credit card purchases, or dual-factor authentication systems.

A magnetic stripe card contains a magnetic stripe which stores information.

A mantrap is a preventive physical control with two doors. The first door must close and lock before the second door may be opened. Each door typically requires a separate form of authentication to open.

Turnstiles are designed to prevent tailgating by enforcing a “one person per authentication” rule, just as they do in subway systems.

Technical controls

Technical controls include monitoring and surveillance, intrusion detection systems, and alarms.

Closed Circuit Television (CCTV) is a detective device used to aid guards in detecting the presence of intruders in restricted areas. Key issues include depth of field (the area that is in focus) and field of view (the entire area viewed by the camera). More light allows a larger depth of field because a smaller aperture places more of the image in focus. CCTV displays may display a fixed camera view, autoscan (show a given camera for a few seconds before moving to the next), or multiplexing (where multiple camera feeds are fed into one display).

Ultrasonic, microwave, and infrared motion sensors are active sensors, which means they actively send energy.

If you see the term “intrusion” on the exam, be sure to look for the context (human or network-based).

Door hinges should face inward, or be otherwise protected. Externally-facing hinges that are not secured pose a security risk: attackers can remove the hinge pins with a hammer and screwdriver, allowing the door to be opened from the hinge side.

Use of simple glass windows in a secure perimeter requires a compensating control such as window burglar alarms.

Environmental and life safety controls

Environmental controls are designed to provide a safe environment for personnel and equipment. Power, HVAC, and fire safety are considered environmental controls.

The following are common types of electrical faults:

  • Blackout: prolonged loss of power
  • Brownout: prolonged low voltage
  • Fault: short loss of power
  • Surge: prolonged high voltage
  • Spike: temporary high voltage
  • Sag: temporary low voltage

Heat detectors, flame detectors, and smoke detectors provide three methods for detecting fire.

The two primary evacuation roles are safety warden and meeting point leader.

Classes of Fire and Suppression Agents :

  • Class A  – fires are common combustibles such as wood, paper, etc. This type of fire is the most common and should be extinguished with water or soda acid.
  • Class B  – fires are burning alcohol, oil, and other petroleum products such as gasoline. They are extinguished with gas or soda acid. You should never use water to extinguish a class B fire.
  • Class C  – fires are electrical fires which are fed by electricity and may occur in equipment or wiring. Electrical fires are conductive fires, and the extinguishing agent must be non-conductive, such as any type of gas.
  • Class D  – fires are burning metals and are extinguished with dry powder.
  • Class K – fires are kitchen fires, such as burning oil or grease. Wet chemicals are used to extinguish class K fires.

Experts always prefer to prevent a fire rather than extinguish one, and are often generous with their time dedicated to preventive measures.

All fire suppression agents work via four methods (sometimes in combination): reducing the temperature of the fire, reducing the supply of oxygen, reducing the supply of fuel, and interfering with the chemical reaction within fire.

Always consider “hire or ask an expert” as a valid choice for any exam question asking about “the best thing to do.” Do not fall for the engineer’s trap of “I will figure this out on my own.”

Water suppresses fire by lowering the temperature below the kindling point (also called the ignition point). Water is the safest of all suppressive agents, and recommended for extinguishing common combustible fires such as burning paper or wood.

In addition to suppressing fire by lowering temperature, soda acid also has additional suppressive properties beyond plain water: it creates foam which can float on the surface of some liquid fires, starving the oxygen supply.

Extinguishing a fire with dry powder (such as sodium chloride) works by lowering temperature and smothering the fire, starving it of oxygen. Dry powder is primarily used to extinguish metal fires.

Wet chemicals are primarily used to extinguish kitchen fires (type K fires in the U.S.; type F in Europe), but may also be used on common combustible fires (type A).

CO2, oxygen, and nitrogen are what we breathe as air. Fires require oxygen as fuel, so fires may be smothered by removing the oxygen: this is how CO2 fire suppression works. A risk associated with CO2 is it is odorless and colorless, and our bodies will breathe it as air. By the time we begin suffocating due to lack of oxygen, it is often too late.

Halon extinguishes fire via a chemical reaction that consumes energy and lowers the temperature of the fire. Halon has ozone-depleting properties; due to this effect, the 1989 Montreal Protocol (formally called the “Montreal Protocol on Substances That Deplete the Ozone Layer”) banned production and consumption of new halon in developed countries by January 1, 1994.

Recommended replacements for halon include the following systems: • Argon • FE-13 • FM-200

CO2, halon, and halon substitutes such as FM-200 are considered gas-based systems. All gas systems should use a countdown timer (both visible and audible) before gas is released. This is primarily for safety reasons, to allow personnel evacuation before release. A secondary effect is to allow personnel to stop the release in case of false alarm.

Water is usually the recommended fire suppression agent. Water (in the absence of electricity) is the safest suppression agent for people.

Dry pipe systems also have closed sprinkler heads: the difference is the pipes are filled with compressed air. The water is held back by a valve that remains closed as long as sufficient air pressure remains in the pipes. As the dry pipe sprinkler heads open, the air pressure drops in each pipe, allowing the valve to open and send water to that head.

Dry pipes are often used in areas where water may freeze, such as parking garages.

Deluge systems are similar to dry pipes, except the sprinkler heads are open and larger than dry pipe heads. The pipes are empty at normal air pressure; the water is held back by a deluge valve.

(My) CISSP Notes – Cryptography

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

Cryptographic concepts

Cryptology is the science of secure communications. Cryptography creates messages whose meaning is hidden; cryptanalysis is the science of breaking encrypted messages (recovering their meaning).

A cipher is a cryptographic algorithm. A plaintext is an unencrypted message.

Cryptography can provide confidentiality (secrets remain secret) and integrity (data is not altered in an unauthorized manner). Cryptography can also provide authentication (proving an identity claim). Additionally, cryptography can provide nonrepudiation, which is an assurance that a specific user performed a specific transaction and that the transaction did not change.

Diffusion means the order of the plaintext should be “diffused” (or dispersed) in the ciphertext. Confusion means that the relationship between the plaintext and ciphertext should be as confused (or random) as possible.

Cryptographic substitution replaces one character with another; this provides confusion. Permutation (also called transposition) provides diffusion by rearranging the characters of the plaintext, anagram-style.

The work factor describes how long it will take to break a cryptosystem (decrypt a ciphertext without the key).

A monoalphabetic cipher uses one alphabet: a specific letter (like “E”) is substituted for another (like “X”). A polyalphabetic cipher uses multiple alphabets: “E” may be substituted for “X” one round, and then “S” the next round.
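The difference is easy to see with two classical ciphers, sketched here as a toy illustration (uppercase letters only, function names mine): a Caesar shift is monoalphabetic, while a Vigenère cipher is polyalphabetic, so the repeated letter “E” encrypts differently at each position.

```python
def caesar(plaintext, shift):
    """Monoalphabetic: every occurrence of a letter maps the same way."""
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A")) for c in plaintext
    )

def vigenere(plaintext, key):
    """Polyalphabetic: the shift changes each position, so repeated
    plaintext letters can encrypt to different ciphertext letters."""
    shifts = [ord(k) - ord("A") for k in key]
    return "".join(
        chr((ord(c) - ord("A") + shifts[i % len(shifts)]) % 26 + ord("A"))
        for i, c in enumerate(plaintext)
    )
```

For example, `caesar("EEE", 3)` gives `"HHH"` (one alphabet), while `vigenere("EEE", "ABC")` gives `"EFG"` (a different alphabet each round).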

There are three primary types of modern encryption: symmetric, asymmetric, and hashing.

A one-time pad uses identical paired pads of random characters, with a set amount of characters per page.

The one-time pad is the only encryption method that is mathematically proven to be secure, if the following three conditions are met: the characters on the pad are truly random, the pads are kept secure, and no page is ever reused.

The first known use of a one-time pad was the Vernam Cipher, named after Gilbert Vernam, an employee of AT&T Bell Laboratories.
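A modern illustration of the one-time pad XORs the message with a pad of truly random bytes the same length as the message; decryption is the identical XOR. This sketch (function names mine) uses Python’s `secrets` module as the random source:

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """Generate a random pad as long as the message and XOR them.
    The pad must be kept secret and never reused."""
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes):
    """XOR with the same pad recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, pad))
```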

COCOM is the Coordinating Committee for Multilateral Export Controls, which was in effect from 1947 to 1994.

Symmetric encryption

Symmetric encryption uses one key to encrypt and decrypt.

Symmetric encryption may have stream and block modes. Stream mode means each bit is independently encrypted in a “stream.” Block mode ciphers encrypt blocks of data each round.

An initialization vector is used in some symmetric ciphers to ensure that the first encrypted block of data is random. This ensures that identical plaintexts encrypt to different ciphertexts.

Chaining (called feedback in stream modes) seeds the previous encrypted block into the next block to be encrypted.

Symmetric encryption advantages: speed, strength (strength is gained when used with large keys, 128 bits, 256 bits or larger), availability (there are many algorithms available to select and use).

Symmetric encryption disadvantages: key distribution (secure distribution of the keys is absolutely required), scalability (a different key is required for each pair of communicating parties), limited functionality (symmetric systems can’t provide authentication or non-repudiation).

DES (Data Encryption Standard)

DES is a block cipher that uses a 64-bit block size (meaning it encrypts 64 bits each round) and a 56-bit key.

DES can use five different modes to encrypt data. The modes’ primary difference is block versus (emulated) stream, the use of initialization vectors, and whether errors in encryption will propagate to subsequent blocks.

The five modes of DES are:

  • Electronic Code Book (ECB) – is the simplest and weakest form of DES. It uses no initialization vector or chaining. Identical plaintexts with identical keys encrypt to identical ciphertexts.
  • Cipher Block Chaining (CBC) – is a block mode of DES that XORs the previous encrypted block of ciphertext to the next block of plaintext to be encrypted. The first encrypted block is an initialization vector that contains random data. This “chaining” destroys patterns. One limitation of CBC mode is that encryption errors will propagate: an encryption error in one block will cascade through subsequent blocks due to the chaining, destroying their integrity.
  • Cipher Feedback (CFB) – is very similar to CBC; the primary difference is that CFB is a stream mode. Errors will not propagate.
  • Output Feedback (OFB) – is also a stream cipher, very similar to CFB. In this mode, the previous keystream output (rather than ciphertext) is used as feedback for key generation, so errors do not propagate.
  • Counter Mode (CTR) This mode shares the same advantages as OFB (patterns are destroyed and errors do not propagate) with an additional advantage: since the feedback can be as simple as an ascending number, CTR mode encryption can be done in parallel.
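The practical difference between ECB and CBC can be demonstrated with a deliberately insecure stand-in “block cipher” (XOR with the key). The point is the mode behavior, not the cipher: identical plaintext blocks leak through ECB but are destroyed by CBC’s chaining. All names and parameters here are my own illustration.

```python
BLOCK = 8

def toy_encrypt(block, key):
    """Stand-in 'block cipher' (XOR with the key) -- NOT secure."""
    return bytes(b ^ k for b, k in zip(block, key))

def ecb(plaintext, key):
    """ECB: each block encrypted independently; identical plaintext
    blocks yield identical ciphertext blocks."""
    return [toy_encrypt(plaintext[i:i + BLOCK], key)
            for i in range(0, len(plaintext), BLOCK)]

def cbc(plaintext, key, iv):
    """CBC: each plaintext block is XORed with the previous ciphertext
    block (the IV for the first block), destroying repeated patterns."""
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(b ^ p for b, p in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_encrypt(mixed, key)
        out.append(prev)
    return out
```

Encrypting a message made of two identical 8-byte blocks produces two identical ciphertext blocks in ECB mode, but two different ones in CBC mode.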

Triple DES

Triple DES applies single DES encryption three times per block.

Triple DES has held up well after years of cryptanalysis; the primary weakness is that it is slow and complex compared to newer symmetric algorithms such as AES or Twofish.

Triple DES applies DES encryption three times per block. FIPS 46-3 describes an “Encrypt, Decrypt, Encrypt” (EDE) order using three keying options: one, two, or three unique keys (called 1TDES EDE, 2TDES EDE, and 3TDES EDE, respectively).

If you “decrypt” with a different key than the one used to encrypt, you are really encrypting further. Also, EDE with one key allows backwards compatibility with single DES.

2TDES EDE uses key 1 to encrypt, key 2 to “decrypt,” and key 1 to encrypt. This results in 112 bits of key length.

3TDES EDE (three different keys) is the strongest form, with 168 bits of key length.
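The EDE construction can be sketched with a toy “cipher” (byte-wise addition of a key value, clearly not real DES) just to show the keying behavior: with a single key, the middle decryption undoes the first encryption, leaving the equivalent of single encryption, which is the backwards-compatibility property mentioned above.

```python
def toy_enc(data, key):
    """Stand-in for single DES: add the key byte mod 256 (NOT secure)."""
    return bytes((b + key) % 256 for b in data)

def toy_dec(data, key):
    """Inverse of toy_enc."""
    return bytes((b - key) % 256 for b in data)

def ede(data, k1, k2, k3):
    """Encrypt-Decrypt-Encrypt, as in the FIPS 46-3 keying options.
    With k1 == k2 == k3 this collapses to a single encryption."""
    return toy_enc(toy_dec(toy_enc(data, k1), k2), k3)
```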

International Data Encryption Algorithm

The International Data Encryption Algorithm is a symmetric block cipher designed as an international replacement to DES. The IDEA algorithm is patented in many countries. It uses a 128-bit key and 64-bit block size.

Advanced Encryption Standard (AES)

AES was designed to replace DES. Two- and three-key TDES EDE remain a FIPS-approved standard until 2030, to allow transition to AES. Single DES is not a current standard, and not recommended.

AES has four functions:

  • SubBytes – provides confusion by substituting the bytes of the State. The bytes are substituted according to a substitution table (also called an S-Box).
  • ShiftRows – provides diffusion by shifting rows of the State.
  • MixColumns – provides diffusion by “mixing” the columns of the State via finite field mathematics.
  • AddRoundKey – the final function applied in each round. It XORs the State with the subkey. The subkey is derived from the key, and is different for each round.

Blowfish and Twofish are symmetric block ciphers created by teams led by Bruce Schneier.

RC5 and RC6 are symmetric block ciphers by RSA Laboratories.

RC6 was an AES finalist. It is based on RC5, altered to meet the AES requirements.

Asymmetric encryption

Asymmetric encryption uses two keys: if you encrypt with one key, you may decrypt with the other.

The main disadvantage of asymmetric encryption is its lower speed.

The most significant advantages of asymmetric encryption are extended functionality (it can provide both confidentiality and authentication) and scalability (it solves the key-management issues associated with symmetric key systems).

Some mathematical concepts

Asymmetric algorithms use “one-way functions”. An example of a one-way function is factoring a composite number into its primes. Multiplying the prime number 6269 by the prime number 7883 results in the composite number 49,418,527. That “way” is easy to compute. Answering the question “which prime number times which prime number equals 49,418,527?” is much more difficult. That problem is called factoring, and it is the basis of the RSA algorithm.

Factoring a large composite number (one that is thousands of bits long) is so difficult that the composite number can be safely posted publicly (this is the public key).

The primes that are multiplied to create the public key must be kept private (they are the private key).

A logarithm is the opposite of exponentiation. Computing 7 to the 13th power (exponentiation) is easy on a modern calculator: 96,889,010,407. Asking “96,889,010,407 is 7 to what power?” (finding the logarithm) is more difficult. This is the basis of the Diffie-Hellman algorithm.

Key agreement allows two parties to securely agree on a symmetric key via a public channel, such as the Internet, with no prior key exchange.
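Diffie-Hellman key agreement can be sketched in a few lines: each side keeps a private exponent, only g^x mod p crosses the channel, and both sides arrive at the same shared secret. The toy parameters in the test below (p = 23, g = 5) are the classic textbook example; real deployments use standardized groups with primes thousands of bits long.

```python
import secrets

def dh_demo(p, g):
    """One Diffie-Hellman exchange over public parameters (p, g).
    Returns the shared secret as computed independently by each side."""
    a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's private exponent
    A = pow(g, a, p)                   # Alice publishes g^a mod p
    B = pow(g, b, p)                   # Bob publishes g^b mod p
    return pow(B, a, p), pow(A, b, p)  # both equal g^(ab) mod p
```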

Asymmetric and symmetric encryption are typically used together: use an asymmetric algorithm such as RSA to securely send someone an AES (symmetric) key. The symmetric key is called the session key; a new session key may be retransmitted periodically via RSA.

Use the slower and weaker asymmetric system for the one part that symmetric encryption cannot do: securely preshare keys. Once shared, leverage the fast and strong symmetric encryption to encrypt all further traffic.

Hash functions

Hash functions are primarily used to provide integrity: if the hash of a plaintext changes, the plaintext itself has changed.

MD5 creates a 128-bit hash value based on any input length.

MD6 is the newest version of the MD family.

SHA-1 creates a 160-bit hash value.

SHA-2 includes SHA-224, SHA-256, SHA-384, and SHA-512, named after the length of the message digest each creates.

HAVAL uses some of the design principles behind the MD family of hash algorithms, and is faster than MD5.
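The integrity property and the digest lengths above are easy to verify with Python’s `hashlib` (the helper name `digest` is mine; note the output is hex, so a 160-bit SHA-1 digest is 40 hex characters):

```python
import hashlib

def digest(data: bytes, algorithm: str = "sha256") -> str:
    """Return the hex digest of data; any change to the input
    changes the hash, which is how integrity is checked."""
    return hashlib.new(algorithm, data).hexdigest()
```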

Cryptographic attacks

Cryptographic attacks are used by cryptanalysts to recover the plaintext without the key.

  • brute-force attack  – generates the entire keyspace, which is every possible key.
  • known plaintext –  relies on recovering and analyzing a matching plaintext and ciphertext pair: the goal is to derive the key which was used.
  • chosen plaintext and adaptive chosen plaintext – a cryptanalyst chooses the plaintext to be encrypted in a chosen plaintext attack; the goal is to derive the key. Adaptive chosen plaintext begins with a chosen plaintext attack in round 1; the cryptanalyst then “adapts” further rounds of encryption based on the previous round.
  • chosen ciphertext and adaptive chosen ciphertext – chosen ciphertext attacks mirror chosen plaintext attacks: the difference is that the cryptanalyst chooses the ciphertext to be decrypted. This attack is usually launched against asymmetric cryptosystems, where the cryptanalyst may choose public documents to decrypt which are signed (encrypted) with a user’s private key.
  • meet-in-the-middle attack – encrypts on one side, decrypts on the other side, and meets in the middle.
  • known key attack – the term “known key attack” is misleading: if the cryptanalyst knows the key, the attack is over. Known key means the cryptanalyst knows something about the key, to reduce the efforts used to attack it. If the cryptanalyst knows that the key is an uppercase letter and a number only, other characters may be omitted in the attack.
  • differential cryptanalysis  – seeks to find the “difference” between related plaintexts that are encrypted.
  • linear cryptanalysis – is a known plaintext attack where the cryptanalyst finds large amounts of plaintext/ciphertext pairs created with the same key.
  • side-channel attacks – use physical data to break a cryptosystem, such as monitoring CPU cycles or power consumption used while encrypting or decrypting.
  • the birthday attack – is used to create hash collisions. Just as matching your birthday is difficult, finding a specific input with a hash that collides with another input is difficult. However, just like matching any birthday is easier, finding any input that creates a colliding hash with any other input is easier due to the birthday attack.
  • key clustering – occurs when two symmetric keys applied to the same plaintext produce the same ciphertext. This allows two different keys to decrypt the ciphertext.
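The birthday attack’s advantage is easy to demonstrate. The sketch below (a toy, not a real attack on a real hash) truncates MD5 to 24 bits and searches for any two inputs that collide; a collision appears after roughly the square root of the keyspace, a few thousand tries instead of millions:

```python
import hashlib
import itertools

def truncated_hash(data: bytes, bits: int = 24) -> int:
    """Return the first `bits` bits of MD5 as an integer (a toy, weak hash)."""
    digest = hashlib.md5(data).digest()
    return int.from_bytes(digest, "big") >> (128 - bits)

def find_any_collision(bits: int = 24):
    """Birthday attack: hash distinct inputs until ANY two of them collide."""
    seen = {}
    for i in itertools.count():
        msg = str(i).encode()
        h = truncated_hash(msg, bits)
        if h in seen:
            return seen[h], msg, i + 1   # first input, second input, attempts
        seen[h] = msg

m1, m2, attempts = find_any_collision()
# A 24-bit hash has ~16.7 million outputs, yet a collision between any two
# inputs typically appears after only ~sqrt(2^24) ≈ 4096 attempts.
```

Finding an input that collides with one *specific* target would still take on the order of 2^24 attempts; the birthday attack only helps when any-to-any collisions are acceptable.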

(My) CISSP Notes – Legal, regulations, investigations and compliance

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

This chapter will introduce some of the basic concepts that are important to all information security professionals. The actual implementation of laws surrounding intellectual property, privacy, reasonable searches, and breach notification, to name a few, will differ amongst various regions of the world, but the importance of these concepts is still universal.

Major types and classifications of laws

The three major systems of law are:

  • civil – by far the most common of the major legal systems is that of civil law, which is employed by many countries throughout the world. The system of civil law leverages codified laws or statutes to determine what is considered within the bounds of law.
  • common – Common law is the legal system used in the United States, Canada, the United Kingdom, and most former British colonies, amongst others. The primary distinguishing feature of common law is the significant emphasis on particular cases and judicial precedents as determinants of laws.
  • religious – religious law serves as the third of the major legal systems. Religious doctrine or interpretation serves as a source of legal understanding and statutes. However, the extent and degree to which religious texts, practices, or understanding are consulted can vary greatly.

The most significant difference between civil and common law is that, under civil law, judicial precedents and particular case rulings do not carry the weight they do under common law.

Within common law there are various branches of laws:

  • criminal law – pertains to those laws where the victim can be seen as society itself. The goals of criminal law are to deter crime and to punish offenders.
  • civil law – Another term associated with civil law is tort law, which deals with injury, loosely defined, that results from someone violating their responsibility to provide a duty of care. Tort law is the primary component of civil law, and is the most significant source of lawsuits seeking damages. While, in criminal law, society is seen as the victim, in civil law the victim is an individual, group, or organization. Another difference between criminal and civil law is the goal of each: the focus of criminal law is punishment and deterrence, while civil law focuses on compensating the victim. The most common outcome of a successful ruling against a defendant is the payment of financial damages.
  • administrative law – Administrative law or regulatory law is law enacted by government agencies. Some examples of administrative law are FCC regulations, HIPAA Security mandates, FDA regulations, and FAA regulations.

Types of laws relevant to computer crimes

One of the most difficult aspects of prosecution of computer crimes is attribution. Meeting the burden of proof requirement in criminal proceedings, beyond a reasonable doubt, can be difficult given an attacker can often spoof the source of the crime or can leverage different systems under someone else’s control.

Intellectual property

Intellectual property is protected by U.S. law under one of four classifications:

  • patents – Patents provide a monopoly to the patent holder on the right to use, make, or sell an invention for a period of time in exchange for the patent holder’s making the invention public.
  • trademarks – Trademarks are associated with marketing: the purpose is to allow for the creation of a brand that distinguishes the source of products or services.
  • copyrights – represents a type of intellectual property that protects the form of expression in artistic, musical, or literary works, and is typically denoted by the circle C symbol (©). Software is typically covered by copyright as if it were a literary work. Two important limitations on the exclusivity of the copyright holder’s monopoly exist: the doctrines of first sale and fair use. The first sale doctrine allows a legitimate purchaser of copyrighted material to sell it to another person. If the purchasers of a CD later decide that they no longer care to own the CD, the first sale doctrine gives them the legal right to sell the copyrighted material even though they are not the copyright holders.
  • trade secrets – business-proprietary information that is important to an organization’s ability to compete. Software source code or firmware code are examples of computer-related objects that an organization may protect as trade secrets.

Privacy and data protection laws

Privacy and data protection laws are enacted to protect information collected and maintained on individuals from unauthorized disclosure or misuse.

Several important pieces of privacy and data protection legislation include:

Associated with personal data privacy concerns are the recent development of breach notification laws. The push for mandatory notification of persons whose personal data has been, or is likely to have been, compromised started with state laws.

Legal liability is another important legal concept for information security professionals and their employers. Society has grown quite litigious over the years, and the question of whether an organization is legally liable for specific actions or inactions can prove costly.

Two important terms to understand are due care and due diligence, which have become common standards that are used in determining corporate liability in courts of law.

The standard of due care, or a duty of care, provides a framework that helps to define a minimum standard of protection that business stakeholders must attempt to achieve.

Due care discussions often reference the Prudent Man Rule, and require that the organization engage in business practices that a prudent, right thinking, person would consider to be appropriate.

A concept closely related to due care is due diligence. While due care intends to set a minimum necessary standard of care to be employed by an organization, due diligence requires that an organization continually scrutinize their own practices to ensure that they are always meeting or exceeding the requirements for protection of assets and stakeholders. Due diligence is the management of due care: it follows a formal process.

Computer crime and information security laws

Legal aspects of investigations

Digital forensics

Digital forensics provides a formal approach to dealing with investigations and evidence with special consideration of the legal aspects of this process.

The main distinction between forensics and incident response is that forensics is evidence-centric and typically more closely associated with crimes, while incident response is more dedicated to identifying, containing, and recovering from security incidents.

The forensic process must preserve the “crime scene” and the evidence in order to prevent unintentionally violating the integrity of either the data or the data’s environment. A primary goal of forensics is to prevent unintentional modification of the system.

Anti-forensics makes forensic investigation difficult or impossible.

The general phases of the forensic process are: the identification of potential evidence; the acquisition of that evidence; analysis of the evidence; and finally production of a report.

While forensics investigators traditionally removed power from a system, the typical approach now is to gather volatile data. Acquiring volatile data is called live forensics, as opposed to the post-mortem forensics associated with acquiring a binary disk image from a powered down system.


Evidence is one of the most important legal concepts for information security professionals to understand.

Evidence should be relevant, authentic, accurate, complete, and convincing. Evidence gathering should emphasize these criteria.

  • Real (or physical) evidence  – consists of tangible or physical objects.
  • Direct evidence  – is testimony provided by a witness regarding what the witness actually experienced with her five senses.
  • Circumstantial evidence – is evidence which serves to establish the circumstances related to particular points or even other evidence. In order to strengthen a particular fact or element of a case there might be a need for corroborative evidence. This type of evidence provides additional support for a fact that might have been called into question.
  • Hearsay evidence – constitutes second-hand evidence. As opposed to direct evidence, which someone has witnessed with her five senses, hearsay evidence involves indirect information. Business and computer generated records are generally considered hearsay evidence, but case law and updates to the Federal Rules of Evidence have established exceptions to the general rule of business records and computer generated data and logs being hearsay.

Courts prefer the best evidence possible. Original documents are preferred over copies: conclusive tangible objects are preferred over oral testimony. Recall that the five desirable criteria for evidence suggest that, where possible, evidence should be: relevant, authentic, accurate, complete, and convincing.

Secondary evidence is a class of evidence common in cases involving computers. Secondary evidence consists of copies of original documents and oral descriptions.

Computer-generated logs and documents might also constitute secondary rather than best evidence.

Evidence must be reliable. It is common during forensic and incident response investigations to analyze digital media. It is critical to maintain the integrity of the data during the course of its acquisition and analysis. Checksums can ensure that no data changes occurred as a result of the acquisition and analysis.
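The typical integrity check can be sketched in Python: hash the original evidence and the working copy and compare the digests (the file names and contents here are made up for illustration):

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Hash a file in chunks, so large disk images need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    # Stand-in for acquired evidence (hypothetical file names).
    original = os.path.join(d, "evidence.img")
    with open(original, "wb") as f:
        f.write(b"raw disk image bytes")

    copy = os.path.join(d, "working-copy.img")
    shutil.copyfile(original, copy)

    # Matching digests demonstrate the copy was not altered in acquisition.
    match = sha256_of_file(original) == sha256_of_file(copy)
```

In practice the digest of the original media is recorded at acquisition time and re-verified whenever the data is handled, alongside the chain of custody documentation.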

In addition to the use of integrity hashing algorithms and checksums, another means to help express the reliability of evidence is by maintaining chain of custody documentation. Chain of custody requires that once evidence is acquired, full documentation regarding who, what, and when and where evidence was handled is maintained.

Entrapment is when law enforcement, or an agent of law enforcement, persuades someone to commit a crime when the person otherwise had no intention to commit a crime. Entrapment can serve as a legal defense in a court of law, and, therefore, should be avoided if prosecution is a goal.

A closely related concept is enticement. Enticement could still involve agents of law enforcement making the conditions for commission of a crime favorable, but the difference is that the person is determined to have already broken a law or is intent on doing so.

Professional ethics

Ethics help to describe what you should do in a given situation based on a set of principles or values.

ISC2 Code of ethics contains 4 mandatory canons:

  1. protect society, the commonwealth and the infrastructure.
  2. act honorably, honestly, justly, responsibly and legally.
  3. provide diligent and competent service to principals.
  4. advance and protect the profession.

(My) CISSP Notes – Application development security

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

Programming concepts

Machine code (also called machine language) is software that is executed directly by the CPU. Machine code is CPU-dependent; it is a series of 1s and 0s that translate to instructions that are understood by the CPU.

Source code is computer programming language instructions which are written in text that must be translated into machine code before execution by the CPU.

Assembly language is a low-level computer programming language.

Compilers take source code, such as C or Basic, and compile it into machine code.

Interpreted languages differ from compiled languages: interpreted code (such as shell scripts) is translated into machine instructions on the fly each time the program is run.

Procedural languages (also called procedure-oriented languages) use subroutines, procedures, and functions.

Object-oriented languages attempt to model the real world through the use of objects which combine methods and data.

The different generations of languages:

Application Development Methods

The Waterfall Model is a linear application development model that uses rigid phases; when one phase ends, the next begins.

The waterfall model contains the following steps:

  • System requirements
  • Software Requirements
  • Analysis
  • Program Design
  • Coding
  • Testing
  • Operations

An unmodified waterfall does not allow iteration: going back to previous steps. This places a heavy planning burden on the earlier steps. Also, since each subsequent step cannot begin until the previous step ends, any delays in earlier steps cascade through to the later steps.

The unmodified Waterfall Model does not allow going back. The modified Waterfall Model allows going back at least one step.

The Sashimi Model has highly overlapping steps; it can be thought of as a real-world successor to the Waterfall Model (and is sometimes called the Sashimi Waterfall Model).

Sashimi’s steps are similar to the Waterfall Model’s; the difference is the explicit overlapping,

Agile Software Development evolved as a reaction to rigid software development models such as the Waterfall Model. Agile methods include Scrum and Extreme Programming (XP).

Scrum uses small teams of developers, called the Scrum Team. They are supported by a Scrum Master, a senior member of the organization who acts like a coach for the team. Finally, the Product Owner is the voice of the business unit.

Extreme Programming (XP) is an Agile development method that uses pairs of programmers who work off a detailed specification.

The Spiral Model is a software development model designed to control risk.

The spiral model repeats steps of a project, starting with modest goals, and expanding outwards in ever wider spirals (called rounds). Each round of the spiral constitutes a project, and each round may follow traditional software development methodology such as Modified Waterfall. A risk analysis is performed each round.

The Systems Development Life Cycle (SDLC, also called the Software Development Life Cycle or simply the System Life Cycle) is a system development model.

SDLC focuses on security when used in context of the exam.

No matter what development model is used, these principles are important in order to ensure that the resulting software is secure:

  • security in the requirements – even before the developers design the software, the organization should determine what security features the software needs.
  • security in the design – the design of the application should include security features, ranging from input validation and strong authentication to audit logging.
  • security in testing – the organization needs to test all the security requirements and design characteristics before declaring the software ready for production use.
  • security in the implementation
  • ongoing security testing – after an application is implemented, security testing should be performed regularly, in order to make sure that no new security defects are introduced into the software.  

Software escrow describes the process of having a third party store an archive of computer software.

Software vulnerabilities testing

Software testing methods

  • Static testing – tests the code passively; the code is not running. This includes syntax checking and code reviews.
  • Dynamic testing – tests the code while executing it.
  • White box testing – gives the tester access to program source code.
  • Black box testing – gives the tester no internal details, the application is treated as a black box that receives inputs.

Software testing levels

  • Unit Testing – Low-level tests of software components, such as functions, procedures or objects
  • Installation Testing – Testing software as it is installed and first operated
  • Integration Testing – Testing multiple software components as they are combined into a working system.
  • Regression Testing – Testing software after updates, modifications, or patches
  • Acceptance Testing – Testing to ensure the software meets the customer’s operational requirements

Fuzzing (also called fuzz testing) is a type of black box testing that enters random, malformed data as inputs into software programs to determine if they will crash.
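A minimal fuzzer can be sketched in a few lines of Python; the parser and its bug below are invented for illustration, and the "crash" is any uncaught exception:

```python
import random

def parse_record(data: bytes) -> str:
    """A toy parser with a hidden bug: it never checks for empty input."""
    if data[0] == 0x4D:        # IndexError when data is empty!
        return "magic record"
    return "other"

def fuzz(target, runs: int = 1000, seed: int = 1):
    """Feed short, random byte strings to `target` and record any crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, repr(exc)))
    return crashes

crashes = fuzz(parse_record)   # the empty input reliably crashes the parser
```

Real fuzzers (protocol fuzzers, file-format fuzzers) apply the same loop at scale, often mutating valid samples rather than generating purely random bytes.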

Combinatorial software testing is a black-box testing method that seeks to identify and test all unique combinations of software inputs. An example of combinatorial software testing is pairwise testing (also called all pairs testing).
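A small sketch of the pairwise idea (the parameter names and values are invented): for three parameters of two values each, four well-chosen test cases cover every unique pair of values that the eight exhaustive combinations cover:

```python
from itertools import combinations, product

params = {
    "os":      ["linux", "windows"],
    "browser": ["firefox", "chrome"],
    "locale":  ["en", "fr"],
}

# Exhaustive testing: every combination of every parameter value.
exhaustive = list(product(*params.values()))   # 2 * 2 * 2 = 8 test cases

def pairs_covered(tests, names):
    """All (parameter, value) pairs exercised together by a test suite."""
    covered = set()
    for test in tests:
        assignment = dict(zip(names, test))
        for a, b in combinations(names, 2):
            covered.add(((a, assignment[a]), (b, assignment[b])))
    return covered

names = list(params)
all_pairs = pairs_covered(exhaustive, names)   # 12 unique pairs

# A hand-built 4-case suite that still covers every unique pair:
pairwise_suite = [
    ("linux",   "firefox", "en"),
    ("linux",   "chrome",  "fr"),
    ("windows", "firefox", "fr"),
    ("windows", "chrome",  "en"),
]
```

With more parameters and values the savings grow dramatically, which is why pairwise testing is attractive when exhaustive combination testing is infeasible.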

Software Capability Maturity Model

The Software Capability Maturity Model (CMM) is a maturity framework for evaluating and improving the software development process.

The goal of CMM is to develop a methodical framework for creating quality software which allows measurable and repeatable results.

The five levels of CMM :

  1. Initial: The software process is characterized as ad hoc, and occasionally even chaotic.
  2. Repeatable: Basic project management processes are established to track cost, schedule, and functionality.
  3. Defined: The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization.
  4. Managed: Detailed measures of the software process and product quality are collected, analyzed, and used to control the process. Both the software process and products are quantitatively understood and controlled.
  5. Optimizing: Continual process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.


A database is a structured collection of related data.

Types of databases :

  • relational databases – the structure of a relational database is defined by its schema. Records are called rows, and rows are stored in tables. Databases must ensure the integrity of the data. There are three integrity issues that must be addressed beyond the correctness of the data itself: referential integrity (every foreign key in a secondary table matches a primary key in the parent table), semantic integrity (each column value is consistent with the attribute data type) and entity integrity (each tuple has a unique primary key that is not null). Data definition language (DDL) is used to create, modify and delete tables. Data manipulation language (DML) is used to query and update data stored in tables.
  • hierarchical – data in a hierarchical database is arranged in tree structures, with parent records at the top of the database, and a hierarchy of child records in successive layers.
  • object oriented – the objects in an object database include data records, as well as their methods.
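Referential integrity and the DDL/DML distinction can be demonstrated with Python’s built-in sqlite3 module (the table names are hypothetical; note that SQLite only enforces foreign keys when the pragma is enabled):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")   # SQLite checks FKs only when asked

# DDL: create the tables; the foreign key expresses referential integrity.
db.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
db.execute("""CREATE TABLE employee (
                  id INTEGER PRIMARY KEY,
                  name TEXT NOT NULL,
                  dept_id INTEGER NOT NULL REFERENCES dept(id))""")

# DML: insert and query rows.
db.execute("INSERT INTO dept VALUES (1, 'Security')")
db.execute("INSERT INTO employee VALUES (1, 'Alice', 1)")   # dept 1 exists: OK

rejected = False
try:
    db.execute("INSERT INTO employee VALUES (2, 'Bob', 99)")  # no dept 99
except sqlite3.IntegrityError:
    rejected = True        # the orphan row violates referential integrity
```

The database itself, not the application, refuses the row whose foreign key has no matching primary key in the parent table.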

Database normalization seeks to make the data in a database table logically concise, organized, and consistent. Normalization removes redundant data, and improves the integrity and availability of the database.

Databases may be highly available, replicated over multiple servers containing multiple copies of data. Database replication mirrors a live database, allowing simultaneous reads and writes to multiple replicated databases. A two-phase commit can be used to ensure integrity.

A shadow database is similar to a replicated database with one key difference: a shadow database mirrors all changes made to the primary database, but clients do not have access to the shadow.

Knowledge-based systems

Expert systems consist of two main components. The first is a knowledge base that consists of “if/then” statements. These statements contain rules that the expert system uses to make decisions. The second component is an inference engine that follows the tree formed by the knowledge base, and fires a rule when there is a match.
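A toy forward-chaining inference engine shows how the two components fit together; the rules below are invented examples:

```python
# Knowledge base: "if all of these facts hold, then conclude this fact".
rules = [
    ({"fever", "cough"},        "respiratory_infection"),
    ({"respiratory_infection"}, "recommend_rest"),
]

def infer(facts):
    """Inference engine: fire matching rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # the rule "fires"
                changed = True
    return facts

result = infer({"fever", "cough"})
# The first rule fires, and its conclusion lets the second rule fire in turn.
```

Real expert systems add conflict resolution, certainty factors and explanation facilities, but the match-and-fire loop is the same.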

Neural networks mimic the biological function of the brain. A neural network accumulates knowledge by observing events; it measures their inputs and outcome. Over time, the neural network becomes proficient at correctly predicting an outcome because it has observed several repetitions of the circumstances and is also told the outcome each time.

(My) CISSP Notes – Telecommunications and network security (II)

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

Network Layer protocols and concepts

Routing protocols

Routing protocols are defined at the network layer and specify how routers communicate with one another on a WAN. The goals of routing protocols are to automatically learn a network topology, and learn the best routes between all network points. Routing protocols are classified as static or dynamic.

A static routing protocol requires an administrator to create and update routes manually on the router. A dynamic routing protocol can discover routes and determine the best route to a given destination at any given time.

Metrics are used to determine the “best” route across a network. The simplest metric is hop count.

Distance vector routing protocols use simple metrics such as hop count, and are prone to routing loops, where packets loop between two routers.

  • RIP (Routing Information Protocol) is a distance vector routing protocol that uses hop count as its metric. RIP does not have a full view of a network: it can only “see” directly connected routers. Convergence means that all routers on a network agree on the state of routing. A network that has had no recent outages is normally “converged”: all routers see all routes as available. When a circuit goes down, the routers closest to the outage know right away; routers that are further away do not, and the network lacks convergence until the change propagates. RIP is used by the UNIX routed command, and is the only routing protocol universally supported by UNIX. RIP is quite limited: each router has a partial view of the network, each sends updates every 30 seconds regardless of change, and convergence is slow.
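Hop count as a metric can be illustrated with a breadth-first search over a hypothetical topology. Real distance vector protocols compute routes distributedly by exchanging tables with neighbors, but the quantity being minimized is the same:

```python
from collections import deque

# Adjacency list for a small, made-up network of routers.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def best_route(src, dst):
    """Breadth-first search finds the path with the fewest hops,
    the metric a distance vector protocol like RIP optimizes."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path, len(path) - 1    # route, hop count
        for neighbor in links[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None, float("inf")

route, hops = best_route("A", "E")   # e.g. A -> B -> D -> E: 3 hops
```

Note that hop count ignores link speed: a three-hop path over gigabit links loses to a two-hop path over a slow serial line, which is exactly the weakness link state metrics address.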

Link state routing protocols factor in additional metrics for determining the best route, including bandwidth.

  • OSPF (Open Shortest Path First) is an open link state routing protocol. OSPF routers learn the entire network topology for their “area” (the portion of the network they maintain routes for, usually the entire network for small networks). OSPF is considered an Interior Gateway Protocol (IGP) because it performs routing within a single autonomous system. An autonomous system (AS) is a group of IP addresses under the control of a single Internet entity.
  • BGP (Border Gateway Protocol) is the routing protocol used on the Internet. BGP is considered an Exterior Gateway Protocol (EGP) because it performs routing between separate autonomous systems.

Routed protocols

Routed protocols are network layer protocols that address packets with routing information, which allows those packets to be transported across networks by using routing protocols.

IP (Internet Protocol) – IPv4 is Internet Protocol version 4, commonly called “IP.” It is the fundamental protocol of the Internet, designed in the 1970s to support packet-switched networking for the United States Defense Advanced Research Projects Agency (DARPA).

IP is a simple protocol, designed to carry data across networks. IP is connectionless and unreliable: it provides “best effort” delivery of packets. If connections or reliability are required, they must be provided by a higher level protocol carried by IP, such as TCP. IPv4 uses 32-bit source and destination addresses.

If a packet exceeds the Maximum Transmission Unit (MTU) of a network, it may be fragmented by a router along the path. An MTU is the maximum PDU size on a network. Fragmentation breaks a large packet into multiple smaller packets.
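The size arithmetic of fragmentation can be sketched as follows (real IPv4 fragmentation also aligns fragment offsets to 8-byte units and sets more-fragments flags, which this sketch omits):

```python
def fragment(payload: bytes, mtu: int, header: int = 20):
    """Split a payload into fragments that each fit the MTU once the
    20-byte IPv4 header is re-added to every fragment."""
    max_data = mtu - header        # room left for data after the header
    return [payload[i:i + max_data] for i in range(0, len(payload), max_data)]

# A 3000-byte payload on a 1500-byte MTU link: 1480 + 1480 + 40 bytes.
frags = fragment(b"x" * 3000, mtu=1500)
```

Fragmentation is costly (every fragment repeats the header, and losing one fragment loses the whole packet), so hosts typically perform path MTU discovery to avoid it.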

The original IPv4 networks were “classful”, divided into classes according to the leading bits of the address:

Class                 Leading bits   Network/host bits   Networks            Addresses per network   Address range
Class A               0              8 / 24              128 (2^7)           16,777,216 (2^24) –
Class B               10             16 / 16             16,384 (2^14)       65,536 (2^16)–
Class C               110            24 / 8              2,097,152 (2^21)    256 (2^8)–
Class D (multicast)   1110           not defined         not defined         not defined   –
Class E (reserved)    1111           not defined         not defined         not defined   –
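Because the class is determined by the leading bits, it can be read straight off the first octet; a small sketch:

```python
def ipv4_class(address: str) -> str:
    """Classify an address under the historical classful scheme by
    examining the value (i.e. the leading bits) of the first octet."""
    first = int(address.split(".")[0])
    if first < 128:
        return "A"      # leading bit  0    (0-127)
    if first < 192:
        return "B"      # leading bits 10   (128-191)
    if first < 224:
        return "C"      # leading bits 110  (192-223)
    if first < 240:
        return "D"      # leading bits 1110 (224-239, multicast)
    return "E"          # leading bits 1111 (240-255, reserved)
```

Classful addressing was replaced by CIDR precisely because these fixed boundaries wasted address space, but exam questions still reference the class ranges.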

IPv6 is the successor to IPv4, featuring far larger address space (128-bit addresses compared to IPv4’s 32 bits), simpler routing, and simpler address assignment. IPv6 hosts can statelessly autoconfigure a unique IPv6 address, omitting the need for static addressing or DHCP. IPv6 stateless autoconfiguration takes the host’s MAC address and uses it to configure the IPv6 address.

Stateless autoconfiguration removes the requirement for DHCP (Dynamic Host Configuration Protocol), but DHCP may be used with IPv6: this is called “stateful autoconfiguration,” part of DHCPv6.
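The MAC-based derivation used by stateless autoconfiguration is the modified EUI-64 scheme: insert ff:fe between the two halves of the MAC address and flip the universal/local bit. A sketch (the MAC address is a made-up example):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier that IPv6
    stateless autoconfiguration builds from a 48-bit MAC address."""
    octets = [int(o, 16) for o in mac.split(":")]
    octets[0] ^= 0x02                        # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    groups = [f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

iface = eui64_interface_id("00:1a:2b:3c:4d:5e")
# The host's link-local address would then be fe80:: + this interface ID.
```

Because the interface ID embeds the MAC address, it is also a privacy concern, which is why privacy extensions that randomize the identifier were later introduced.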

IPv6’s much larger address space also makes NAT (Network Address Translation) unnecessary, but various IPv6 NAT schemes have been proposed, mainly to allow easier transition from IPv4 to IPv6.

Hosts may also access IPv6 networks via IPv4; this is called tunneling. Another IPv6 address worth noting is the loopback address: ::1. This is equivalent to the IPv4 loopback address


An IPv6-enabled system will automatically configure a link-local address (beginning with fe80:…) without the need for any other IPv6-enabled infrastructure. That host can communicate with other link-local addresses on the same LAN. This is true even if the administrators are unaware that IPv6 is now flowing on their network.

Network Address Translation (NAT) is used to translate IP addresses. It is frequently used to translate RFC1918 addresses as they pass from intranets to the Internet.

Three types of NAT are static NAT, pool NAT (also known as dynamic NAT), and Port Address Translation (PAT, also known as NAT overloading). Static NAT makes a one-to-one translation between addresses, mapping each private address to a fixed public address. Pool NAT reserves a number of public IP addresses in a pool; addresses can be assigned from the pool as needed, and then returned. Finally, PAT typically makes a many-to-one translation from multiple private addresses to one public IP address, such as all 192.168.1.* addresses sharing a single public address. PAT is a common solution for homes and small offices: multiple internal devices such as laptops, desktops and mobile devices share one public IP address.
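The PAT translation table can be sketched as follows (the public IP is from the documentation range, and the port-assignment scheme is a made-up simplification of what a real NAT device does):

```python
import itertools

class PortAddressTranslator:
    """Sketch of PAT: many private (address, port) pairs share one public
    IP, distinguished by the public source port the router assigns."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = itertools.count(40000)   # assumed ephemeral range
        self.table = {}     # (private_ip, private_port) -> public_port

    def outbound(self, private_ip: str, private_port: int):
        """Translate an outgoing connection to (public IP, public port)."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = next(self.next_port)
        return self.public_ip, self.table[key]

    def inbound(self, public_port: int):
        """Reverse lookup: route a reply back to the internal host."""
        for key, port in self.table.items():
            if port == public_port:
                return key
        return None    # unsolicited inbound traffic has no mapping

pat = PortAddressTranslator("203.0.113.7")
a = pat.outbound("192.168.1.10", 51000)
b = pat.outbound("192.168.1.11", 51000)   # same private port, distinct mapping
```

The reverse-lookup step is also why PAT acts as a crude firewall: inbound packets with no existing table entry simply have nowhere to go.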

Other network layer protocols

  • ICMP (Internet Control Message Protocol) – reports errors and other information back to the source regarding the processing of transmitted IP packets.
  • SKIP (Simple Key Management for Internet Protocols) – is a key management protocol used to share encryption keys.

Network equipment

Routers are Layer 3 devices that route traffic from one LAN to another. IP-based routers make routing decisions based on the source and destination IP addresses. For simple routing needs, static routes may suffice. Static routes are fixed routing entries. Most SOHO (Small Office/Home Office) routers have a static “default route” that sends all external traffic to one router (typically controlled by the ISP).

Static routes work fine for simple networks with limited or no redundancy, like SOHO networks. More complex networks with many routers and multiple possible paths between networks have more complicated routing needs.

Transport Layer protocols and concepts

  • TCP (Transmission Control Protocol) – TCP uses a three-way handshake to establish a reliable connection. The connection is full duplex, and both sides synchronize (SYN) and acknowledge (ACK) each other. The exchange of these flags is performed in three steps: SYN, SYN-ACK, ACK. TCP connects from a source port to a destination port. The TCP port field is 16 bits, allowing port numbers from 0 to 65535. There are two types of ports: reserved and ephemeral. A reserved port is 1023 or lower; ephemeral ports are 1024–65535. TCP is connection-oriented (it establishes and manages a direct virtual connection to the remote device), reliable (it guarantees delivery by acknowledging received packets) and slow (because of the additional overhead associated with the initial handshake).
  • UDP (User Datagram Protocol) – UDP has no handshake, session or reliability. UDP header fields include source port, destination port, packet length (header and data), and a simple (and optional) checksum. If used, the checksum provides limited integrity to the UDP header and data. Unlike TCP, data usually is transferred immediately, in the first UDP packet. UDP operates at Layer 4. So, UDP is connectionless (it does not pre-establish a communication circuit with the remote host), best-effort (it does not guarantee delivery) and fast (there is no overhead associated with circuit establishment).
  • SPX (Sequenced Packet Exchange) – the protocol used to guarantee data delivery in older Novell NetWare networks.
  • SSL/TLS (Secure Sockets Layer/Transport Layer Security) – provides session-based encryption and authentication for secure communication between clients and servers on the Internet.
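The three-way handshake can be sketched as a tiny state machine. The intermediate state name below is a simplification invented for the sketch; real TCP defines more states (LISTEN, SYN-RECEIVED, and so on):

```python
# Valid transitions of the simplified client-side handshake.
HANDSHAKE = {
    ("CLOSED",      "SYN"):     "SYN-SENT",     # client sends SYN
    ("SYN-SENT",    "SYN-ACK"): "ACK-PENDING",  # server answers SYN-ACK
    ("ACK-PENDING", "ACK"):     "ESTABLISHED",  # client completes with ACK
}

def run_handshake(segments):
    """Walk the state machine; any out-of-order segment aborts the setup."""
    state = "CLOSED"
    for flags in segments:
        state = HANDSHAKE.get((state, flags), "RESET")
    return state

ok = run_handshake(["SYN", "SYN-ACK", "ACK"])
bad = run_handshake(["SYN", "ACK"])   # skipping SYN-ACK fails the handshake
```

The strict ordering is also what SYN-flood attacks abuse: an attacker sends SYNs but never the final ACK, leaving the server holding half-open connections.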

Session Layer protocols and concepts

The session layer is responsible for establishing, coordinating and terminating communication sessions.

Some examples of Session Layer protocols include:

  • Telnet – provides terminal emulation over the network; Telnet provides no confidentiality and has limited integrity.
  • SSH (Secure Shell) – was designed as a secure replacement for Telnet.
  • SIP (Session Initiation Protocol) – protocol for establishing, managing and terminating real-time communications.

Network Security

Network security is implemented with various technologies, including firewalls, intrusion detection systems (IDSs), intrusion prevention systems (IPSs) and virtual private networks (VPNs).


Firewalls filter traffic between networks. Three basic classifications of firewalls have been established:

  • packet-filtering – permits or denies traffic based solely on the TCP, UDP, ICMP and IP headers of individual packets. This information is compared with predefined rules that have been configured in access control lists (ACLs) to determine whether a packet should be permitted or denied. A packet filter is a simple and fast firewall. It has no concept of “state”: each filtering decision must be made on the basis of a single packet. Stateful firewalls have a state table that allows the firewall to compare current packets to previous ones. Stateful firewalls are slower than packet filters, but are far more secure.
  • circuit-level gateways – control access by maintaining state information about established connections. When a permitted connection is established between two hosts, a tunnel (or virtual circuit) is created for the session, allowing packets to flow freely between the two hosts.
  • application-level – these firewalls operate up to Layer 7. Unlike packet filters and stateful firewalls, which make decisions based on Layers 3 and 4 only, application-layer proxies can make filtering decisions based on application-layer data, such as HTTP traffic, in addition to Layers 3 and 4. Application-layer proxies must understand the protocol that is proxied, so dedicated proxies are often required for each protocol: an FTP proxy for FTP traffic, an HTTP proxy for Web traffic, etc.
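The first-match ACL logic of a packet filter can be sketched as follows (the rules are invented examples; note the complete absence of connection state):

```python
# An ordered ACL. Each rule: (action, protocol, destination port);
# "*" matches anything. The last rule is the default deny.
ACL = [
    ("permit", "tcp", 80),      # allow web traffic
    ("permit", "tcp", 443),     # allow HTTPS
    ("permit", "udp", 53),      # allow DNS
    ("deny",   "*",   "*"),     # deny everything else
]

def filter_packet(protocol: str, dst_port: int) -> str:
    """First matching rule wins; every packet is judged in isolation."""
    for action, proto, port in ACL:
        if proto in ("*", protocol) and port in ("*", dst_port):
            return action
    return "deny"

decision = filter_packet("tcp", 23)   # telnet hits the default deny rule
```

A stateful firewall would additionally consult a state table, for example permitting an inbound packet only because it belongs to a connection an internal host initiated.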

Firewall design has evolved over the years, from simple and flat designs such as dual-homed host and screened host, to layered designs such as the screened subnet.

This evolution has incorporated network defense in depth, leading to the use of DMZ networks.

A bastion host is any host placed on the Internet which is not protected by another device (such as a firewall). Bastion hosts must protect themselves, and be hardened to withstand attack.

A dual-homed host has two network interfaces: one connected to a trusted network, and the other connected to an untrusted network, such as the Internet.

A DMZ is a Demilitarized Zone network; the name comes from the real-world military DMZ. A DMZ is designed with the assumption that any DMZ host may be compromised; for this reason, network servers that receive traffic from untrusted networks such as the Internet should be placed on DMZ networks.


An Intrusion Detection System (IDS) is a detective device designed to detect malicious (including policy-violating) actions. An Intrusion Prevention System (IPS) is a preventive device designed to prevent malicious actions. There are two basic types of IDSs and IPSs: network-based and host-based.

IDSs are classified in many different ways, including active (IPS) versus passive (IDS), network-based versus host-based, and knowledge-based versus behavior-based.

There are four types of IDS events: true positive, true negative, false positive, and false negative.
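The four event types follow directly from two yes/no questions: was the traffic actually malicious, and did the IDS alert? A small sketch (illustrative only) makes the taxonomy explicit:

```python
# The four IDS event outcomes, determined by whether traffic was actually
# malicious and whether the IDS raised an alert.

def classify_event(malicious: bool, alerted: bool) -> str:
    if malicious and alerted:
        return "true positive"    # attack occurred, IDS alerted
    if not malicious and not alerted:
        return "true negative"    # normal traffic, no alert
    if not malicious and alerted:
        return "false positive"   # normal traffic wrongly flagged
    return "false negative"       # attack missed - the worst outcome

print(classify_event(True, False))  # false negative
```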

A Network-based Intrusion Detection System (NIDS) detects malicious traffic on a network. NIDS usually require promiscuous network access in order to analyze all traffic, including all unicast traffic. NIDS are passive devices that do not interfere with the traffic they monitor.

The difference between a NIDS and a NIPS is that the NIPS alters the flow of network traffic.

Host-based Intrusion Detection Systems (HIDS) and Host-based Intrusion Prevention Systems (HIPS) are host-based cousins to NIDS and NIPS.

Knowledge based and behavior-based IDS

A Pattern Matching IDS works by comparing events to static signatures. Pattern Matching works well for detecting known attacks, but usually does poorly against new attacks. A Protocol Behavior IDS models the way protocols should work, often by analyzing RFCs. An Anomaly Detection IDS works by establishing a baseline of normal traffic. The Anomaly Detection IDS then ignores that traffic, reporting on traffic that fails to meet the baseline.

Unlike Pattern Matching, Anomaly Detection can detect new attacks. The challenge is establishing a baseline of “normal”: this is often straightforward on small predictable networks, but can be quite difficult (if not impossible) on large complex networks.
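The baseline idea can be sketched with a toy statistical model. The training data, the packets-per-second metric, and the 3-standard-deviation threshold are all illustrative assumptions, not part of any real IDS product:

```python
# Toy anomaly-detection sketch: establish a baseline from "normal" traffic,
# then flag samples that deviate too far from it. The threshold of 3
# standard deviations is an illustrative choice, not a standard.
from statistics import mean, stdev

baseline = [100, 110, 95, 105, 102, 98, 107]  # e.g. packets/sec during training
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(sample: float, k: float = 3.0) -> bool:
    """Report traffic that fails to match the learned baseline."""
    return abs(sample - mu) > k * sigma

print(is_anomalous(103))  # False - consistent with the baseline
print(is_anomalous(500))  # True  - possible new attack
```

The hard part in practice is not the arithmetic but the premise: on a large, complex network there may be no stable "normal" to learn.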

VPNs (Virtual Private Networks)

Virtual Private Networks (VPNs) secure data sent via insecure networks such as the Internet. Common VPN protocol standards include:

  • PPTP (Point-to-Point Tunneling Protocol) – protocol developed by Microsoft for tunneling PPP via IP
  • L2F (Layer 2 Forwarding Protocol) – protocol developed by Cisco that offers functionality similar to PPTP
  • L2TP (Layer 2 Tunneling Protocol) – combines PPTP and L2F (Layer 2 Forwarding, designed to tunnel PPP). L2TP focuses on authentication and does not provide confidentiality: it is frequently used with IPSec to provide encryption.
  • IPSec – IPv4 has no built-in confidentiality; higher-layer protocols such as TLS are used to provide security. To address this lack of security at Layer 3, IPSec (Internet Protocol Security) was designed to provide confidentiality, integrity, and authentication via encryption for IPv6. IPSec has been ported to IPv4. IPSec is a suite of protocols; the major two are Encapsulating Security Payload (ESP) and Authentication Header (AH). IPSec has three architectures: host-to-gateway, gateway-to-gateway, and host-to-host. Host-to-gateway mode (also called client mode) is used to connect one system running IPSec client software to an IPSec gateway. Gateway-to-gateway (also called point-to-point) connects two IPSec gateways, which form an IPSec connection that acts as a shared routable network connection, like a T1. Finally, host-to-host mode connects two systems (such as file servers) to each other via IPSec. IPSec can be used in tunnel mode or transport mode. Tunnel mode provides confidentiality (ESP) and/or authentication (AH) to the entire original packet, including the original IP headers. New IP headers are added (with the source and destination addresses of the IPSec gateways). Transport mode protects the IP data (layers 4-7) only, leaving the original IP headers unprotected.
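The tunnel-mode versus transport-mode distinction comes down to what gets encapsulated. The sketch below is purely illustrative, not real IPSec: the `encrypt` stand-in is a toy byte transform, not cryptography, and the "headers" are simplified byte strings.

```python
# Illustrative sketch (NOT real IPSec) of what tunnel vs transport mode
# protect. Tunnel mode encapsulates the entire original packet, including
# its IP header, behind a new outer IP header carrying the gateways'
# addresses; transport mode protects only the IP payload (layers 4-7).

def encrypt(data: bytes) -> bytes:
    """Stand-in for ESP encryption - a toy XOR transform, NOT cryptography."""
    return bytes(b ^ 0x5A for b in data)

def esp_tunnel_mode(orig_ip_header: bytes, payload: bytes,
                    gw_src: bytes, gw_dst: bytes) -> bytes:
    protected = encrypt(orig_ip_header + payload)  # whole original packet
    return gw_src + gw_dst + protected             # new outer IP header

def esp_transport_mode(orig_ip_header: bytes, payload: bytes) -> bytes:
    return orig_ip_header + encrypt(payload)       # original header in clear
```

In transport mode the original IP header stays visible on the wire, which is why tunnel mode is the usual choice for gateway-to-gateway VPNs that must hide internal addressing.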

Wireless LAN Security

Wireless Local Area Networks (WLANs) transmit information via electromagnetic waves (such as radio) or light. The most common form of wireless data networking is the 802.11 wireless standard, and the first 802.11 standard with reasonable security is 802.11i.

Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS) are two methods for sending traffic via a radio band. Some bands, like the 2.4-GHz ISM band, can be quite polluted with interference: Bluetooth, some cordless phones, some 802.11 wireless, baby monitors, and even microwaves can broadcast or interfere with this band. Both DSSS and FHSS are designed to maximize throughput while minimizing the effects of interference.

802.11 wireless NICs can operate in four modes: managed, master, ad hoc, and monitor mode.

  • managed mode – 802.11 wireless clients connect to an access point in managed mode (also called client mode). Once connected, clients communicate with the access point only; they cannot directly communicate with other clients.
  • master mode  – (also called infrastructure mode) is the mode used by wireless access points. A wireless card in master mode can only communicate with connected clients in managed mode.
  • ad hoc mode  – is a peer-to-peer mode with no central access point. A computer connected to the Internet via a wired NIC may advertise an ad hoc WLAN to allow Internet sharing.
  • monitor mode – is a read-only mode used for sniffing WLANs. Wireless sniffing tools like Kismet or Wellenreiter use monitor mode to read all 802.11 wireless frames.

802.11 WLANs use a Service Set Identifier (SSID), which acts as a network name. Wireless clients must know the SSID before joining that WLAN, so the SSID is a configuration parameter.

Another common 802.11 wireless security precaution is restricting client access by filtering wireless MAC addresses, allowing only trusted clients. This provides limited security: MAC addresses are exposed in plaintext on 802.11 WLANs, so trusted MACs can be sniffed, and an attacker may reconfigure a nontrusted device with a trusted MAC address in software.
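The weakness is easy to see once the check is written out: the access point can only compare the *claimed* source MAC against its allowlist. The sketch below is illustrative (the MAC values are made up), not a real access-point implementation:

```python
# Sketch of MAC filtering: admission depends only on the claimed source
# MAC. Since MACs travel in plaintext on 802.11 WLANs, an attacker who
# sniffs a trusted MAC and spoofs it in software passes the same check.
ALLOWED_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}  # illustrative values

def admit(client_mac: str) -> bool:
    return client_mac.lower() in ALLOWED_MACS

print(admit("AA:BB:CC:DD:EE:01"))  # True - also True for a spoofed copy
print(admit("ff:ff:ff:ff:ff:ff"))  # False
```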

WEP is the Wired Equivalent Privacy protocol, an early attempt (first ratified in 1999) to provide 802.11 wireless security. WEP has proven to be critically weak: new attacks can break any WEP key in minutes.

802.11i is the first 802.11 wireless security standard that provides reasonable security. 802.11i describes a Robust Security Network (RSN), which allows pluggable authentication modules. RSN is also known as WPA2 (Wi-Fi Protected Access 2), a full implementation of 802.11i. By default, WPA2 uses AES encryption to provide confidentiality, and CCMP (Counter Mode CBC MAC Protocol) to create a Message Integrity Check (MIC), which provides integrity.

The less secure WPA (without the “2”) was designed for access points that lack the power to implement the full 802.11i standard, providing a better security alternative to WEP. WPA uses RC4 for confidentiality and TKIP for integrity.

Bluetooth, described by IEEE standard 802.15, is a Personal Area Network (PAN) wireless technology, operating in the same 2.4 GHz frequency as many types of 802.11 wireless.

The Wireless Application Protocol (WAP) was designed to provide secure Web services to handheld wireless devices such as smart phones. WAP content is written in WML (Wireless Markup Language), which is based on XML; HDML (Handheld Device Markup Language) was an earlier markup language for handheld devices.

Radio Frequency Identification (RFID) is a technology used to create wirelessly readable tags for animals or objects. There are three types of RFID tags: active, semi-passive, and passive. Active and semi-passive RFID tags have a battery; an active tag broadcasts a signal, while semi-passive RFID tags rely on an RFID reader’s signal for power.