(My) CSSLP Notes – Secure Software Design

Note: These notes were strongly inspired by the following books: CSSLP Certification All-in-One and Official (ISC)2 Guide to the CSSLP CBK, Second Edition.

Design Process

Attack Surface Evaluation

A software application’s attack surface is the measure of its exposure to being exploited by a threat agent, i.e., the weaknesses in its entry and exit points that a malicious attacker can exploit to his or her advantage.
The attack surface evaluation attempts to enumerate the list of features that
an attacker will try to exploit.

Threat Modeling

Threat modeling is the process used to identify and document all the threats to a system.

The threat modeling process has the following phases:

  1. model the system for which you want to find the threats.
  2. find the threats.
    1. STRIDE model.
    2. attack trees – An attack tree is a hierarchical tree-like structure, which has either an attacker’s objective (e.g., gain administrative level privilege, determine application makeup and configuration, bypass authentication mechanisms, etc.) or a type of attack
      (e.g., buffer overflow, cross site scripting, etc.) at its root node.
  3. address each threat found in the previous step. Once identified, each threat must be evaluated according to the risk attached to it. There are several ways to quantitatively or qualitatively determine the risk ranking for a threat, ranging from the simple, non-scientific Delphi heuristic methodology to more statistically sound risk ranking using the probability of impact and the business impact.
  4. document and validate.

More details about threat modeling can be found here: Threat Modeling for mere mortals and (My) OWASP BeNeLux Days 2016 Notes – Training Day.

Design Considerations

This part is linked to the Secure Software Concepts section and describes how those security concepts can be applied to obtain a secure application.

  • confidentiality – use cryptographic and masking techniques
  • integrity – use hashing (hash functions), referential integrity design (uses primary keys and related foreign keys in the database to assure data integrity), resource locking (when two concurrent operations are not allowed on the same object, say a record in the database, because one of the operations locks that record against any changes until it completes its operation), and code signing; a minimal hashing sketch follows this list.
  • availability – replication, fail-over and scalability techniques can be used to design the software for availability.
  • authentication – use multi-factor authentication and single sign-on (SSO). Rely on already existing mechanisms if possible (like the ones offered by the operating system).
  • authorization – rely on already existing mechanisms if possible.
  • accounting (audit) – determine what elements should be logged and under what circumstances.
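
As a minimal sketch of the hashing idea mentioned in the integrity bullet above (the data values and the SHA-256 choice are illustrative, not from the source books), a stored digest is recomputed and compared on retrieval:

import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# Digest recorded when the data was written.
original = b"quarterly-report-v1"
stored_digest = sha256_of(original)

# Later, re-hash what was read back and compare to detect tampering.
retrieved = b"quarterly-report-v1"
if sha256_of(retrieved) == stored_digest:
    print("integrity check passed")
else:
    print("integrity check failed - data was modified")
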
Some of the common, insecure design issues observed in software are the
following:
  • improper implementation of least privilege
  • software fails insecurely
  • authentication mechanisms are easily bypassed
  • security through obscurity
  • improper error handling
  • weak input validation

Architecting the system with secure design principles:

  • good enough security – care should be taken to ensure that the security elements are commensurate with the actual risk associated with the potential vulnerability; do not over-engineer.
  • least privilege – use accounts with non-administrative abilities.
    Modular programming is a software design technique in which the entire program is broken down into smaller sub-units or modules. Each module is discrete with unitary functionality and is therefore said to be cohesive, meaning each module is designed to perform one and only one logical operation.
  • separation of duties – the programmer should not be allowed to review his own code nor should a programmer have access to deploy code to the production environment.
  • defense in depth
    • use of input validation along with prepared statements or stored procedures, disallowing dynamic query construction using user input, to defend against injection attacks (a minimal sketch follows this list).
    • disallowing active scripting in conjunction with output encoding
      and input- or request-validation to defend against Cross-Site
      Scripting (XSS).
  • fail safe
    • the user is denied access by default and the account is locked out after the maximum number (clipping level) of access attempts is tried.
    • errors and exceptions are explicitly handled and the error messages are non-verbose in nature.
    • the software is not designed to ignore errors and resume the next operation.
  • economy of mechanism – trade-off that happens between the
    usability of the software and the security features that need to be designed and built in.
    • Unnecessary functionality or unneeded security mechanisms should be avoided.
    • Strive for simplicity.
    • Strive for operational ease of use.
  • complete mediation
  • open design – the inverse of the open design principle is security through obscurity, which means that the software employs protection mechanisms whose strength is dependent on the obscurity of the design.
  • least common mechanism – mechanisms common to more than one user or process are designed not to be shared. Design should compartmentalize or isolate the code (functions) by user roles, since this increases the security of the software by limiting the exposure.
  • psychological acceptance – security principle that states that security mechanisms should be designed to maximize usage, adoption, and automatic application. The security protection mechanisms:
    • are easy to use,
    • do not affect accessibility.
    • are transparent to the user.
  • weakest link – when designing software, careful attention must be
    given so that there are no exploitable components.
  • leverage existing components – reusing tested and proven, existing libraries and common components has good security benefits.
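
To make the defense-in-depth injection bullet above concrete, here is a minimal sketch contrasting dynamic query construction with a prepared (parameterized) statement. It uses Python's built-in sqlite3 module; the table and column names are illustrative only:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# Vulnerable: user input is concatenated into the SQL text,
# so crafted input can change the query's logic.
dynamic = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(dynamic).fetchall())   # returns the row despite the bogus name

# Safer: the parameterized statement treats the input purely as data.
prepared = "SELECT role FROM users WHERE name = ?"
print(conn.execute(prepared, (user_input,)).fetchall())  # returns []
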

Securing commonly used architectures

  • mainframe architecture
  • distributed architecture
    • client/server
    • p2p
  • service oriented architecture
    • An ESB (enterprise service bus) is a software architectural pattern that facilitates communication between mutually interacting software applications.
    • web-services
      • SOAP
      • REST
  • rich internet applications (RIA)

Service models:

  • Infrastructure as a Service (IaaS) – infrastructural components such as networking equipment, storage, servers and virtual machines are provided as services and managed by the cloud service provider.
  • Platform as a Service (PaaS) – in addition to infrastructural components, platform components such as operating systems, middleware and runtime are also provided as services and managed by the cloud service provider.
  • Software as a Service (SaaS) – in addition to infrastructural and platform components, data hosting and software applications are provided as services and managed by the cloud service provider.

Digital Rights Management

The expression of rights is made possible by a formal language, known as a Rights Expression Language (REL). Some examples of RELs include the following:
  • Open Digital Rights Language (ODRL) – a generalized, open standard under development that expresses rights using XML.
  • eXtensible rights Markup Language (XrML) – another generalized REL that is more abstract than ODRL. XrML is more of a meta-language that can be used for developing other RELs.
  • Publishing Requirements for Industry Standard Metadata (PRISM) – unlike ODRL and XrML, PRISM can be used to express rights specific to a task and is used for syndication of print media content such as newspapers and magazines.

Trusted computing:

  • Trusted Platform Module (TPM) – the specification used in personal computers and other systems to ensure protection against disclosure of sensitive or private information, as well as the name of the chip that implements that specification.
  • Trusted Computing Base (TCB) – the set of all hardware, firmware and software components that are critical to its security.

(My) CSSLP Notes – Secure Software Concepts

Note: These notes were strongly inspired by the following book: CSSLP Certification All-in-One.

General Security Concepts

Basics

The security of IT systems can be defined using the following attributes:

  • confidentiality – how the system prevents the disclosure of information.
  • integrity – how the system protects data from unauthorized modification.
  • availability – access to the system by authorized personnel.
  • authentication – process of determining the identity of a user. Three methods can be used to authenticate a user:
    • something you know (ex: password, pin code)
    • something you have (ex: token, card)
    • something you are (ex: biometrics mechanisms)
  • authorization – process of applying access control rules to a user process to determine if a particular user process can access an object.
  • accounting (auditing) – records historical events on a system.
  • non-repudiation – preventing a subject from denying a previous action with an object in a system.

System principles

  • session management – design and implementation of controls to ensure that the communications channels are secured from unauthorized access and disruption of communications.
  • exception management – the process of handling any errors that could appear during the system execution.
  • configuration management – identification and management of the configuration items (initialization parameters, connection strings, paths, keys).

Secure design principles

  • good enough security – there is a trade off between security and other aspects associated with a system. The level of required security must be determined at design time.
  • least privilege – a subject should have only the necessary rights and privileges to perform a specific task.
  • separation of duties – for any given task, more than one individual needs to be involved.
  • defense in depth (layered security) – apply multiple dissimilar security defenses.
  • fail-safe – when a system experiences a failure, it should fail to a safe state; all the attributes associated with the system security (confidentiality, integrity, availability) should be appropriately maintained.
  • economy of mechanism – keep the design of the system simple and less complex; reduce the number of dependencies and/or services that the system needs in order to operate.
  • complete mediation – checking permission each time subject requests access to objects.
  • open design – design is not a secret, implementation of safeguard is. (ex: cryptography algorithms are open but the keys used are secret)
  • least common mechanism – minimize the amount of mechanism common to more than one user and depended on by all users. Every shared mechanism (especially one involving shared variables) represents a potential information path between users and must be designed with great care to be sure it does not unintentionally compromise security.
  • psychological acceptability – accessibility to resources should not be inhibited by security mechanisms. If security mechanisms hinder the usability or accessibility of resources, then users may opt to turn off those mechanisms.
  • weakest link – attackers are more likely to attack a weak spot in a software system than to penetrate a heavily fortified component.
  • leverage existing components – component reuse has many advantages, including increased efficiency and security. From a security point of view, component reuse reduces the attack surface.
  • single point of failure – a system design should not be susceptible to a single point of failure.

Security Models

Access Control Models

Access controls define what actions a subject can perform on specific objects.

  • Bell-LaPadula confidentiality model – it is focused on maintaining the confidentiality of objects. Bell-LaPadula operates by observing two rules: the Simple Security Property and the * Security Property (a toy sketch follows this list).
    • The Simple Security Property states that there is “no read up”: a subject at a specific classification level cannot read an object at a higher classification level.
    • The * Security Property is “no write down”: a subject at a higher classification level cannot write to a lower classification level.
  • Take-Grant – Take-Grant systems specify the rights that a subject can transfer to or from another subject or object. The model is based on a representation of the controls in the form of directed graphs, with the vertices being the subjects and the objects. The edges between them represent the rights between the subjects and objects. The representation of rights takes the form of {t (take), g (grant), r (read), w (write)}.
  • Role-Based Access Control – users are assigned a set of roles they may perform. The roles are associated with the access permissions necessary to perform the tasks.
  • MAC (Mandatory Access Control) Model – in MAC systems the owner or subject cannot determine whether access is to be granted to another subject; it is the job of the operating system to decide.
  • DAC (Discretionary Access Control) Model – in DAC systems, the owner of an object can decide which other subjects may have access to the object and what specific access they may have.
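
As an illustration of the Bell-LaPadula rules listed above (this toy sketch is mine, not from the source books), a simple check of “no read up / no write down” with numeric clearance levels:

# Toy Bell-LaPadula check: higher number = higher classification.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    # Simple Security Property: no read up.
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance: str, object_label: str) -> bool:
    # * (star) Security Property: no write down.
    return LEVELS[subject_clearance] <= LEVELS[object_label]

print(can_read("secret", "top secret"))   # False - read up is denied
print(can_write("top secret", "secret"))  # False - write down is denied
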

Integrity Models

  • Biba integrity model – (sometimes referred to as Bell-LaPadula upside down) was the first formal integrity model. Biba is the model of choice when integrity protection is vital. The Biba model has two primary rules: the Simple Integrity Axiom and the * Integrity Axiom.
    • The Simple Integrity Axiom is “no read down”: a subject at a specific classification level cannot read data at a lower classification. This protects integrity by preventing bad information from moving up from lower integrity levels.
    • The * Integrity Axiom is “no write up”: a subject at a specific classification level cannot write to data at a higher classification. This protects integrity by preventing bad information from moving up to higher integrity levels.
  • Clark-Wilson – an informal model that protects integrity by requiring subjects to access objects via programs. Because the programs have specific limitations on what they can and cannot do to objects, Clark-Wilson effectively limits the capabilities of the subject. Clark-Wilson uses two primary concepts to ensure that security policy is enforced: well-formed transactions and separation of duties.

Information Flow Models

Information in a system must be protected when at rest, in transit and in use.

  • The Chinese Wall model – designed to avoid conflicts of interest by prohibiting one person, such as a consultant, from accessing multiple conflict of interest categories (CoIs). The Chinese Wall model requires that CoIs be identified so that once a consultant gains access to one CoI, they cannot read or write to an opposing CoI.

 

Risk Management

Vocabulary

  • risk – possibility of suffering harm or loss
  • residual risk – risk that remains after a control was added to mitigate the initial risk.
  • total risk – the sum of all risks associated with an asset.
  • asset – resource an organization needs to conduct its business.
  • threat – circumstance or event with the potential to cause harm to an asset.
  • vulnerability – any characteristic of an asset that can be exploited by a threat to cause harm.
  • attack – attempting to use a vulnerability.
  • impact – loss resulting when a threat exploits a vulnerability.
  • mitigate – action taken to reduce the likelihood of a threat.
  • control – measure taken to detect, prevent or mitigate the risk associated with a threat.
  • risk assessment – process of identifying risks and mitigating actions.
  • qualitative risk assessment – subjectively determining the impact of an event that affects assets.
  • quantitative risk assessment – objectively determining the impact of an event that affects assets.
  • single loss expectation (SLE) – linked to the quantitative risk assessment, it represents the monetary loss or impact of each occurrence of a threat.
    • SLE = asset value * exposure factor
  • exposure factor – linked to the quantitative risk assessment, is a measure of the magnitude of a loss.
  • annualized rate of occurrence (ARO) – linked to the quantitative risk assessment, it is the frequency with which an event is expected to occur on an annualized basis.
    • ARO = number of events / number of years
  • annualized loss expectancy (ALE) – linked to the quantitative risk assessment, it represents how much an event is expected to cost per year (a worked example follows this list).
    • ALE = SLE * ARO
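
A worked example of the quantitative formulas above; the asset value, exposure factor, and event counts are made-up numbers for illustration:

asset_value = 100_000   # value of the asset, in dollars (illustrative)
exposure_factor = 0.25  # 25% of the asset value is lost per incident

sle = asset_value * exposure_factor  # SLE = asset value * exposure factor -> 25,000
aro = 6 / 3                          # ARO = number of events / number of years -> 2 per year
ale = sle * aro                      # ALE = SLE * ARO -> 50,000 per year

print(sle, aro, ale)                 # 25000.0 2.0 50000.0
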

Types of risks:

  • Business Risks:
    • fraud
    • regulatory
    • treasury management
    • revenue management
    • contract management
  • Technology Risks:
    • security
    • privacy
    • change management

Types of controls

Controls can be classified by the types of actions they perform. Three classes of controls exist:

  • administrative
  • technical
  • physical

For each of these classes, there are four types of controls:

  • preventive (deterrent) – used to prevent exploitation of the vulnerability.
  • detective – used to detect the presence of an attack.
  • corrective (recovery) – correct a system after a vulnerability is exploited and an impact has occurred; backups are a common form of corrective controls.
  • compensating – designed to act when a primary set of controls has failed.

Risk management models

General risk management model

The steps contained in a general risk management model:

  1. Asset identification – identify and classify all the assets, systems and processes that need to be protected.
  2. Threat assessment – identify the threats and vulnerabilities associated with each asset.
  3. Impact determination and qualification
  4. Control design and evaluation – determine which controls to put in place to mitigate the risks.
  5. Residual risk management – evaluate residual risks to identify where additional controls are needed.

Risk management model proposed by Software Engineering Institute

SEI model steps :

  1. Identify – enumerate potential risks.
  2. Analyze – convert the risk data gathered into information that can be used to make decisions.
  3. Plan – decide the actions to take to mitigate the risks.
  4. Track – monitor the risks and mitigation plans.
  5. Control – make corrections for deviations from the risk mitigation plan.

Security Policies and Regulations

One of the most difficult aspects of prosecution of computer crimes is attribution. Meeting the burden of proof requirement in criminal proceedings, beyond a reasonable doubt, can be difficult given an attacker can often spoof the source of the crime or can leverage different systems under someone else’s control.

Intellectual property

Intellectual property is protected under U.S. law under one of four classifications:

  • patents – Patents provide a monopoly to the patent holder on the right to use, make, or sell an invention for a period of time in exchange for the patent holder’s making the invention public.
  • trademarks – Trademarks are associated with marketing: the purpose is to allow for the creation of a brand that distinguishes the source of products or services.
  • copyrights – represent a type of intellectual property that protects the form of expression in artistic, musical, or literary works, and is typically denoted by the circle-c symbol. Software is typically covered by copyright as if it were a literary work. Two important limitations on the exclusivity of the copyright holder’s monopoly exist: the doctrines of first sale and fair use. The first sale doctrine allows a legitimate purchaser of copyrighted material to sell it to another person. If the purchasers of a CD later decide that they no longer care to own the CD, the first sale doctrine gives them the legal right to sell the copyrighted material even though they are not the copyright holders.
  • trade secrets – business-proprietary information that is important to an organization’s ability to compete. Software source code or firmware code are examples of computer-related objects that an organization may protect as trade secrets.

Privacy and data protection laws

Privacy and data protection laws are enacted to protect information collected and maintained on individuals from unauthorized disclosure or misuse.

Several important pieces of privacy and data protection legislation include :

  • U.S. Federal Privacy Act of 1974 – protects records and information maintained by U.S. government agencies about U.S. citizens and lawful permanent residents.
  •  U.S. Health Insurance Portability and Accountability Act (HIPAA) of 1996 – seeks to guard protected health information from unauthorized use or disclosure.
  • Payment Card Industry Data Security Standard (PCI-DSS) – the goal is to ensure better protection of card holder data through mandating security policy, security devices, control techniques and monitoring of systems and networks.
  • U.S. Gramm-Leach-Bliley Financial Services Modernization Act (GLBA) – requires financial institutions to protect the confidentiality and integrity of consumer financial information.
  • U.S. Sarbanes-Oxley Act of 2002 (SOX) – the primary goal of SOX is to ensure adequate financial disclosure and financial auditor independence.

Secure Software Architecture – Security Frameworks

  • COBIT (Control Objectives for Information and Related Technology) – assists management in bridging the gap between control requirements, technological issues and business risks.
  • COSO (Committee of Sponsoring Organizations of the Treadway Commission) – COSO has established an Enterprise Risk Management – Integrated Framework against which companies and organizations may assess their control systems.
  • ITIL (Information Technology Infrastructure Library) – describes a set of practices focusing on aligning IT services with business needs.
  • SABSA (Sherwood Applied Business Security Architecture) – framework and methodology for developing risk-driven enterprise information security architecture.
  • CMMI (Capability Maturity Model Integration) – process metric model that rates the process maturity of an organization on a 1 to 5 scale.
  • OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation) – suite of tools, techniques and methods for risk-based information security assessment.

 

Software Development Methodologies

Secure Development Lifecycle Components

  • software team awareness and education – all team members should have appropriate training. The key element of team awareness and education is to ensure that all the members are properly equipped with the correct knowledge.
  • gates and security requirements – the term gate signifies a condition that one must pass through. To pass a security gate, a review of the appropriate security requirements is conducted.
  • threat modeling – design technique used to communicate information associated with a threat throughout the development team (for more information, see my other post: Threat Modeling for Mere Mortals).
  • fuzzing – a test technique where the tester applies a series of inputs to an interface in an automated fashion and examines the output for undesired behaviors (a minimal sketch follows this list).
  • security reviews – process to ensure that the security-related steps are being carried out and not being short-circuited.
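
A minimal illustration of the fuzzing idea from the list above; the parse_record function is a hypothetical target of my own, not something from the SDL. The loop feeds many random inputs to the interface and treats any exception as a finding to investigate:

import random

def parse_record(data: bytes) -> int:
    """Hypothetical target: parse a length-prefixed record."""
    length = data[0]               # crashes on empty input (IndexError)
    return len(data[1:1 + length])

random.seed(0)
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        parse_record(blob)
    except Exception as exc:       # undesired behavior surfaced by the fuzzer
        print(f"input {blob!r} triggered {type(exc).__name__}")
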

Software Development Models

  • waterfall model – is a linear application development model that uses rigid phases; when one phase ends, the next begins.
  • spiral model – repeats steps of a project, starting with modest goals, and expanding outwards in ever wider spirals (called rounds). Each round of the spiral constitutes a project, and each round may follow traditional software development methodology such as Modified Waterfall. A risk analysis is performed each round.
  • prototype model – working model of software with some limited functionality. Prototyping is used to allow users to evaluate developer proposals and try them out before implementation.
  • agile model
    • Scrum – built around small teams of developers, called the Scrum Team. They are supported by a Scrum Master, a senior member of the organization who acts as a coach for the team. Finally, the Product Owner is the voice of the business unit.
    • Extreme Programming (XP) – method that uses pairs of programmers who work off a detailed specification.

Microsoft Security Development Lifecycle

SDL is a software development process designed to enable development teams to build more secure software and address security compliance requirements.

SDL is built around the following three elements:

  • (security) by design – the security thinking is incorporated as part of design process.
  • (security) by default – the default configuration of the software is by design as secure as possible.
  • (security) in deployment – security and privacy elements are properly understood and managed through the deployment process.

SDL components:

  • training – security training for all personnel, targeted to their responsibility associated with the development effort.
  • requirements
    • establishment of the security and privacy requirements for the software.
    • creation of quality gates and bug bars. Defining minimum acceptable levels of security and privacy quality at the start helps a team understand risks associated with security issues, identify and fix security bugs during development, and apply the standards throughout the entire project. Setting a meaningful bug bar involves clearly defining the severity thresholds of security vulnerabilities (for example, no known vulnerabilities in the application with a “critical” or “important” rating at time of release) and never relaxing it once it’s been set.
    • development of security and privacy risk assessment. Examining software design based on costs and regulatory requirements helps a team identify which portions of a project will require threat modeling and security design reviews before release and determine the Privacy Impact Rating of a feature, product, or service.
  • design – establish design requirements, perform attack surface analysis/reduction and use threat modeling.
  • implementation – application of secure coding practices and the use of static program checkers to find common errors.
  • verification – perform dynamic analysis (tools that monitor application behavior for memory corruption, user privilege issues, and other critical security problems), fuzz testing and conduct attack surface review.
  • release – conduct final security review and create an incident response plan.
  • response – execute incident response plan.

(My) CEH cheat sheet

This is a small (and, I hope, useful) cheat sheet for the CEH v8 certification.

This is strongly inspired by the CEH Certified Ethical Hacker Bundle, Second Edition book.

Basics

“Bit flipping” is one form of an integrity attack. In bit flipping, the attacker isn’t interested in learning the entirety of the plaintext message.

There are three main phases to a pen test: preparation, assessment, and conclusion.

In black box testing, the ethical hacker has absolutely no knowledge of the TOE (target of evaluation). It is designed to simulate an outside, unknown attacker and takes the most time to complete.

In white box testing, pen testers have full knowledge of the network, system, and infrastructure they’re targeting.

Gray box testing is also known as partial knowledge testing. What makes this different from black box testing is the assumed level of elevated privileges the tester has. Whereas black box testing is generally done from the network administration level, gray box testing assumes only that the attacker is an insider.

Attack Types

EC-Council broadly defines four attack type categories:

  • Operating system attacks Generally speaking, these attacks target the common mistake many people make when installing operating systems— accepting and leaving all the defaults. Things like administrator accounts with no passwords, all ports left open, and guest accounts (the list could go on forever) are examples of settings the installer may forget about.
  • Application-level attacks These are attacks on the actual programming codes of an application. Although most people are very cognizant of securing their OS and network, it’s amazing how often they discount the applications running on their OS and network. Many applications on a network aren’t tested for vulnerabilities as part of their creation and, as such, have many vulnerabilities built into them. Applications on a network are a goldmine for most hackers.
  • Shrink-wrap code attacks These attacks take advantage of the built-in code and scripts most off-the-shelf applications come with. These scripts and code pieces are designed to make installation and administration easier, but can lead to vulnerabilities if not managed appropriately.
  • Misconfiguration attacks These attacks take advantage of systems that are, on purpose or by accident, not configured appropriately for security.

An asset is an item of economic value owned by an organization or an individual. Identification of assets within the risk analysis world is the first and most important step.

A threat is any agent, circumstance, or situation that could cause harm or loss to an IT asset.

A vulnerability is any weakness, such as a software flaw or logic design, that could be exploited by a threat to cause damage to an asset.

18 U.S.C § 1029 and 1030

Basically, the law gives the U.S. government the authority to prosecute criminals who traffic in, or use, counterfeit access devices. In short, the section criminalizes the misuse of any number of credentials, including passwords, PINs, token cards, credit card numbers, and the like.

Cryptography

Symmetric encryption – the formula for calculating how many keys you will need is N (N – 1) / 2, where N is the number of nodes in the network.
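
A quick check of the N (N – 1) / 2 formula above, e.g., for 10- and 100-node networks:

def symmetric_keys_needed(n: int) -> int:
    # Each pair of nodes needs its own shared key: N(N - 1) / 2.
    return n * (n - 1) // 2

print(symmetric_keys_needed(10))   # 45
print(symmetric_keys_needed(100))  # 4950
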

Symmetric algorithms:

  • DES A block cipher that uses a 56-bit key (with 8 bits reserved for parity); fixed block size.
  • 3DES A block cipher that uses a 168-bit key. 3DES (called triple DES) can use up to three keys in a multiple-encryption method.
  • AES (Advanced Encryption Standard) A block cipher that uses a key length of 128, 192, or 256 bits, and effectively replaces DES.
  • IDEA (International Data Encryption Algorithm) A block cipher that uses a 128-bit key.
  • Twofish A block cipher that uses a key size up to 256 bits.
  • Blowfish A fast block cipher, largely replaced by AES, using a 64-bit block size and a key from 32 to 448 bits.
  • RC (Rivest Cipher) Encompasses several versions from RC2 through RC6. A block cipher that uses a variable key length up to 2,040 bits. RC6, the latest version, uses 128-bit blocks, whereas RC5 uses variable block sizes (32, 64, or 128).

Asymmetric Encryption

Generally: public key = encrypt, private key = decrypt.

Asymmetric algorithms:

  • Diffie-Hellman Developed for use as a key exchange protocol, Diffie-Hellman is used in Secure Sockets Layer (SSL) and IPSec encryption.
  • Elliptic Curve Cryptosystem (ECC) Uses points on an elliptical curve, in conjunction with logarithmic problems, for encryption and signatures. Uses less processing power than other methods, making it a good choice for mobile devices.
  • El Gamal Not based on prime number factoring, this method uses the solving of discrete logarithm problems for encryption and digital signatures.
  • RSA An algorithm that achieves strong encryption through the use of two large prime numbers. Factoring these numbers creates key sizes up to 4,096 bits. RSA can be used for encryption and digital signatures and is the modern de facto standard.

Hash algorithms (a quick verification sketch follows this list):

  • MD5 (Message Digest algorithm) Produces a 128-bit hash value output, expressed as a 32-digit hexadecimal.
  • SHA-1 Developed by the NSA (National Security Agency), SHA-1 produces a 160-bit value output, and was required by law for use in U.S. government applications.
  • SHA-2 Developed by the NSA, actually holds four separate hash functions that produce outputs of 224, 256, 384, and 512 bits.
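
The output sizes listed above can be verified with Python's hashlib (MD5: 128 bits = 32 hex digits, SHA-1: 160 bits, SHA-256/512 from the SHA-2 family); the message value is arbitrary:

import hashlib

msg = b"hello"
for name in ("md5", "sha1", "sha256", "sha512"):
    digest = hashlib.new(name, msg).hexdigest()
    print(f"{name}: {len(digest) * 4} bits -> {digest}")
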

Trust Models

  • web of trust, multiple entities sign certificates for one another.
  • single authority system has a CA at the top that creates and issues certs. Users trust each other based on the CA itself.
  • hierarchical trust system also has a CA at the top (which is known as the root CA), but makes use of one or more intermediate CAs underneath it— known as registration authorities (RAs)—to issue and manage certificates.

Cryptography Attacks:

  • Known plaintext attack In this attack, the hacker has both plaintext and corresponding ciphertext messages—the more, the better. The plaintext copies are scanned for repeatable sequences, which are then compared to the ciphertext versions. Over time, and with effort, this can be used to decipher the key.
  • Ciphertext-only attack In this attack, the hacker gains copies of several messages encrypted in the same way (with the same algorithm). Statistical analysis can then be used to reveal, eventually, repeating code, which can be used to decode messages later on.
  • Replay attack Most often performed within the context of a man-in-the-middle attack. The hacker repeats a portion of a cryptographic exchange in hopes of fooling the system into setting up a communications channel. The attacker doesn’t really have to know the actual data (such as the password) being exchanged; he just has to get the timing right in copying and then replaying the bit stream. Session tokens can be used in the communications process to combat this attack.

A digital certificate is an electronic file that is used to verify a user’s identity, providing non-repudiation throughout the system.

  • Version This identifies the certificate format. The most common version in use is 1.
  • Serial Number Fairly self-explanatory, the serial number is used to uniquely identify the certificate itself.
  • Subject Whoever or whatever is being identified by the certificate.
  • Algorithm ID (or Signature Algorithm) Shows the algorithm that was used to create the digital signature.
  • Issuer Shows the entity that verifies the authenticity of the certificate. The issuer is the one who creates the certificates.
  • Valid From and Valid To These fields show the dates the certificate is good through.
  • Key Usage Shows for what purpose the certificate was created.
  • Subject’s Public Key A copy of the subject’s public key is included in the digital certificate.
  • Optional fields These fields include Issuer Unique Identifier, Subject Alternative Name, and Extensions.

Reconnaissance:

For EC-Council, vulnerability research is part of reconnaissance.

Difference in definition between reconnaissance and footprinting:

For many, recon is more of an overall, over-arching term for gathering information on targets, whereas footprinting is more of an effort to map out, at a high level, what the landscape looks like. They are interchangeable terms in CEH parlance; just remember that footprinting is part of reconnaissance.

 

DNS uses port 53; name lookups generally use UDP, whereas zone transfers use TCP.

DNS record types:

  • SRV – Service. Defines the host name and port number of servers providing specific services, such as a Directory Services server.
  • SOA – Start of Authority. Identifies the primary name server for the zone. The SOA record contains the host name of the server responsible for all DNS records within the namespace, as well as the basic properties of the domain.
  • PTR – Pointer. Maps an IP address to a host name (providing for reverse DNS lookups). You don’t absolutely need a PTR record for every entry in your DNS namespace, but these are usually associated with e-mail server records.
  • NS – Name Server. Defines the name servers within your namespace. These servers are the ones that respond to your clients’ requests for name resolution.
  • MX – Mail Exchange. Identifies the e-mail servers within your domain.
  • CNAME – Canonical Name. Provides for domain name aliases within your zone. For example, you may have an FTP service and a web service running on the same IP address; CNAME records could be used to list both within DNS for you.
  • A – Address. Maps an IP address to a host name, and is used most often for DNS lookups.

DNS Footprinting tools: whois, nslookup, dig

Scanning and Enumeration

Relevant ICMP Message Types

  • 0: Echo Reply – answer to a Type 8 Echo Request.
  • 3: Destination Unreachable – error message indicating the host or network cannot be reached. Codes:
    • 0 – Destination network unreachable
    • 1 – Destination host unreachable
    • 6 – Network unknown
    • 7 – Host unknown
    • 9 – Network administratively prohibited
    • 10 – Host administratively prohibited
    • 13 – Communication administratively prohibited
  • 4: Source Quench – a congestion control message.
  • 5: Redirect – sent when there are two or more gateways available for the sender to use, and the best route available to the destination is not the configured default gateway. Codes:
    • 0 – Redirect datagram for the network
    • 1 – Redirect datagram for the host
  • 8: Echo Request – a ping message, requesting an Echo Reply.
  • 11: Time Exceeded – the packet took too long to be routed to the destination (Code 0 is TTL expired).

The port numbers range from 0 to 65,535 and are split into three different groups:

  • Well-known: 0–1023
  • Registered: 1024–49151
  • Dynamic: 49152–65535

Some of the more important well-known port numbers to remember are:

  • FTP (20/21)
  • Telnet (23)
  • SMTP (25)
  • DNS (53),
  • POP3 (110)
  • NetBIOS (137–139)
  • SNMP (161/162)

The TCP header flags are:

  • URG (Urgent) When this flag is set, it indicates the data inside is being sent out of band.
  • ACK (Acknowledgment) This flag is set as an acknowledgment to SYN flags. This flag is set on all segments after the initial SYN flag.
  • PSH (Push) This flag forces delivery of data without concern for any buffering.
  • RST (Reset) This flag forces a termination of communications (in both directions).
  • SYN (Synchronize) This flag is set during initial communication establishment. It indicates negotiation of parameters and sequence numbers.
  • FIN (Finish) This flag signifies an ordered close to communications.

Nmap is the de facto tool for footprinting networks. It is capable of finding live hosts, access points, fingerprinting operating systems, and verifying services. It also has important IDS evasion capabilities.

nmap <scan options> <target>

  • -sA – ACK scan
  • -sF – FIN scan
  • -sI – IDLE scan
  • -sL – List (DNS) scan
  • -sN – NULL scan
  • -sO – IP protocol scan
  • -sP – Ping scan
  • -sR – RPC scan
  • -sS – SYN scan
  • -sT – TCP connect scan
  • -sW – Window scan
  • -sX – XMAS tree scan
  • -PI – ICMP ping
  • -P0 – no ping (-Pn in newer Nmap versions)
  • -PS – SYN ping
  • -PT – TCP ping
  • -oN – normal output
  • -oX – XML output
  • -T0 (slowest) through -T5 (fastest) – timing templates

Seven generic scan types for port scanning:

  • TCP Connect Runs through a full connection (three-way handshake) on all ports. Easiest to detect, but possibly the most reliable. Open ports will respond with a SYN/ACK, closed ports with a RST/ACK.
  • SYN Known as a “half-open scan.” Only SYN packets are sent to ports (no completion of the three-way handshake ever takes place). Open ports will respond with a SYN/ACK, closed ports with a RST/ACK.
  • FIN scans run the communications setup in reverse, sending a packet with the FIN flag set. Closed ports will respond with RST, whereas open ports won’t respond at all.
  • XMAS A Christmas scan is so named because the packet is sent with multiple flags (FIN, URG, and PSH) set. Closed ports will respond with RST, whereas open ports won’t respond at all
  • ACK Used mainly for Unix/Linux-based systems. Open ports will send RST, closed ports, no answer
  • IDLE Uses a spoofed IP address to elicit port responses during a scan. Designed for stealth, this scan uses a SYN flag and monitors responses as with a SYN scan.
  • NULL Almost the opposite of the XMAS scan. The NULL scan sends packets with no flags set. Responses will vary, depending on the OS and version, but NULL scans are designed for Unix/Linux machines.

War dialing is a process by which an attacker dials a set of phone numbers specifically looking for an open modem.

War driving used to refer to, quite literally, driving around in a car looking for open access points. In the ethical hacking realm, it still indicates a search for open WAPs (wireless access points).

Simple Network Management Protocol was designed to manage IP-enabled devices across a network. As a result, if it is in use on the subnet, you can find out loads of information with properly formatted SNMP requests. Later versions of SNMP make this a little more difficult, but plenty of systems out there are still using the protocol in version 1.

Sniffers and Evasion

All People Seem To Need Data Processing – mnemonic phrase to remember the layers.

  • ISO/OSI layers and their data units:
    • Application – data
    • Presentation – data
    • Session – data
    • Transport – segments
    • Network – packets
    • Data Link – frames
    • Physical – bits
  • ISO/OSI to TCP/IP mapping (3-1-1-2): the top three OSI layers map to Application, Transport maps to Transport, Network maps to Internet, and the bottom two map to Network Access.
  • TCP/IP model layers:
    • Application
    • Transport
    • Internet
    • Network Access

The MAC address that is burned onto a NIC is actually made of two sections. The first half of the address, 3 bytes (24 bits), is known as the Organizational Unique Identifier and is used to identify the card manufacturer. The second half is a unique number burned in at manufacturing to ensure no two cards on any given subnet will have the same address.
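
A small illustration of the 24-bit/24-bit split described above (the MAC address value is made up):

mac = "00:1A:2B:3C:4D:5E"           # illustrative address
octets = mac.split(":")

oui = ":".join(octets[:3])           # first 3 bytes: Organizational Unique Identifier
nic_specific = ":".join(octets[3:])  # last 3 bytes: vendor-assigned unique part

print("OUI (manufacturer):", oui)           # 00:1A:2B
print("NIC-specific part:", nic_specific)   # 3C:4D:5E
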

MAC spoofing – set the MAC address of a NIC to the same value as another.

MAC flooding – overwhelm the CAM (content addressable memory) table of the switch so it converts to hub mode.

ARP poisoning – inject incorrect information into the ARP caches of two or more endpoints.

 

Snort rule:

alert tcp !$HOME_NET any -> $HOME_NET 31337 (msg:"BACKDOOR ATTEMPT-Backorifice")

This rule reads: “If you happen to come across a packet from any address that is not my home network, using any source port, intended for an address within my home network on port 31337, alert me with the message ‘BACKDOOR ATTEMPT-Backorifice’.”

Span port = port mirroring

False negative – when an IDS reports a particular stream as clean but it’s not

Wireshark display filters

Display filters work basically like: proto.field operator value

Analyse the following examples:

tcp.flags == 0x29
ip.addr != 192.168.1.1
tcp.port eq 25 or icmp
ip.src==192.168.0.0/16 and ip.dst==192.168.0.0/16
http.request.uri matches "login.html"

Tcpdump syntax:

tcpdump flag(s) interface

Attacking a System

EC-Council rules for passwords (a minimal checker sketch follows this list):

  • The password must not contain any part of the user’s name. For example, a password of “MattIsGr@8!” wouldn’t work for the CEH exam, because you can clearly see my name there.
  • The password must have a minimum of eight characters. Eight is okay. Nine is better. Seven? Not so good.
  • The password must contain characters from at least three of the four major components of complexity—that is, special symbols (such as @&*#$), uppercase letters, lowercase letters, and numbers. U$e8Ch@rs contains all four, whereas use8chars uses only two.
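
A minimal checker for the complexity rules above; the rules are from the list, while the function name and the simplified name check are my own:

import string

def meets_ceh_password_rules(password: str, username: str) -> bool:
    # Rule 1 (simplified: checks the full username only).
    if username.lower() in password.lower():
        return False
    # Rule 2: minimum of eight characters.
    if len(password) < 8:
        return False
    # Rule 3: at least three of the four complexity classes.
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3

print(meets_ceh_password_rules("U$e8Ch@rs", "matt"))   # True
print(meets_ceh_password_rules("use8chars", "matt"))   # False
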

LM hashing – if a password is seven characters or fewer, the second half of the LM hash is always AAD3B435B51404EE (the hash of the empty, padded half).

Four main attack types are defined within CEH.

  • passive online attacks basically amount to sniffing a wire in the hopes of either intercepting a password in clear text or attempting a replay or man-in-the-middle (MITM) attack.
  • active online attacks occur when the attacker begins simply trying passwords – guessing them, for lack of a better word.
  • offline attacks occur when the hacker steals a copy of the password file (remember our discussion on the SAM file earlier?) and works the cracking efforts on a separate system.
  • non-electronic attacks – social engineering.

Sidejacking – the idea is to steal the cookies exchanged between two systems and ferret out which one to use in a replay-style attack.

Social Engineering and Physical Security

Human-Based Attacks

  • Dumpster diving
  • Impersonation
  • Technical support
  • Shoulder surfing
  • Tailgating and piggybacking

Computer-Based Attacks

  • phishing.

Types of Social Engineers

  • Insider Associates Have limited authorized access, and escalate privileges from there.
  • Insider Affiliates Are insiders by virtue of an affiliation; they spoof the identity of the insider.
  • Outsider Affiliates Are non-trusted outsiders that use an access point that was left open.

Physical Security

Three major categories of physical security measures:

  • Physical measures include all the things you can touch, taste, smell, or get shocked by. For example, lighting, locks, fences, and guards with Tasers are all physical measures.
  • Technical measures are a little more complicated. These are measures taken with technology in mind, to protect explicitly at the physical level. For example, authentication and permissions may not come across as physical measures, but if you think about them within the context of smart cards and biometrics, it’s easy to see how they should become technical measures for physical security.
  • Operational measures are the policies and procedures you set up to enforce a security-minded operation.

Web-Based Hacking

This dot-dot-slash (directory traversal) attack is also known as a variant of the “Unicode” or unvalidated input attack.

SQL injection attacks types:

  • Union query The thought here is to make use of the UNION command to return the union of your target database with one you’ve crafted to steal data from it.
  • Tautology An overly complex term used to describe the behavior of a database system when deciding whether or not a statement is true. Because user IDs and passwords are often compared and the “true” measure allows access, if you trick the database by providing something that is already true (1 does, indeed, equal 1), then you can sneak by (see the example after this list).
  • Blind SQL injection This occurs when the attacker knows the database is susceptible to injection, but the error messages and screen returns don’t come back to the attacker. Because there’s a lot of guesswork and trial and error, this attack takes a long while to pull off.
  • Error-based SQL injection This isn’t necessarily an attack so much as an enumeration technique. The objective is to purposely enter poorly constructed statements in an effort to get the database to respond with table names and other information in its error messages.
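
To illustrate the tautology idea above, here is a hedged sketch (the table and field names are hypothetical) of what attacker-supplied input does to a dynamically built query:

user_id = "anything' OR '1'='1"

# Dynamic construction turns the WHERE clause into a tautology:
query = "SELECT * FROM users WHERE id = '" + user_id + "'"
print(query)
# SELECT * FROM users WHERE id = 'anything' OR '1'='1'
# '1'='1' is always true, so every row is returned and the check is bypassed.
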

The buffer overflow attack categories are as follows:

  • Stack This idea comes from the basic premise that all program calls are kept in a stack and executed in order. If you affect the stack with a buffer overflow, you can perhaps change a function pointer or variable to allow code execution.
  • Heap Also referred to as heap overflow, this attack takes advantage of the memory “on top of” the application, which is allocated dynamically at runtime. Because this memory usually contains program data, you can cause the application to overwrite function pointers.
  • NOP Sled A NOP sled makes use of a machine instruction called “no-op.” In the attack, a hacker sends a large number of NOP instructions into the buffer, appending command code instruction at the end. Because this attack is so common, most IDSs protect against it.

Dangerous Functions for buffer overflows

The following functions are dangerous because they do not check the size of the destination buffers:

gets()

strcpy()

strcat()

printf()

Wireless Network hacking

802.11 Specifications

  • 802.11a – 30 m, 54 Mbps, 5 GHz
  • 802.11b – 100 m, 11 Mbps, 2.4 GHz
  • 802.11g – 100 m, 54 Mbps, 2.4 GHz
  • 802.11n – 125 m, 100+ Mbps, 2.4/5 GHz

 

WEP – uses RC4 for the stream cipher with a 24-bit initialization vector; key sizes are 40 or 104 bits.

WPA – uses RC4 for the stream cipher but supports longer keys; 48-bit IV.

WPA/TKIP – changes the IV with each frame and includes key mixing.

WPA2 – uses AES (a block cipher, in CCMP mode) and includes all the features of TKIP; 48-bit IV.

 

Rogue APs (evil twins) may also be referenced as a “mis-association” attack.

 

Bluetooth attacks:

  • Bluesmacking is simply a denial-of-service attack against a device.
  • Bluejacking consists of sending unsolicited messages to, and from, mobile devices.
  • Bluesniffing is exactly what it sounds like – sniffing for Bluetooth devices and traffic.
  • Bluesnarfing is the actual theft of data from a mobile device.

Trojans and Other Attacks

Windows will automatically run everything located in the Run, RunServices, RunOnce, and RunServicesOnce registry keys.

Virus types:

  • Boot sector virus Also known as a system virus, this virus type actually moves the boot sector to another location on the hard drive, forcing the virus code to be executed first. They’re almost impossible to get rid of once you get infected. You can re-create the boot record—old-school fdisk or mbr could do the trick for you—but it’s not necessarily a walk in the park.
  • Shell virus Working just like the boot sector virus, this virus type wraps itself around an application’s code, inserting its own code before the application’s. Every time the application is run, the virus code is run first.
  • Multipartite virus Attempts to infect both files and the boot sector at the same time. This generally refers to a virus with multiple infection vectors. This link describes one such DOS-type virus: http://www.f-secure.com/v-descs/neuroqui.shtml. It was multipartite, polymorphic, retroviral, boot sector, and generally a pretty wild bit of code.
  • Macro virus Usually written with VBA (Visual Basic for Applications), this virus type infects template files created by Microsoft Office—normally Word and Excel. The Melissa virus was a prime example of this.
  • Polymorphic code virus This virus mutates its code using a built-in polymorphic engine. These viruses are very difficult to find and remove because their signatures constantly change.
  • Metamorphic virus This virus type rewrites itself every time it infects a new file.

DOS attack types:

  • SYN attack The hacker will send thousands upon thousands of SYN packets to the machine with a false source IP address. The machine will attempt to respond with a SYN/ACK but will be unsuccessful (because the address is false). Eventually, all the machine’s resources are engaged and it becomes a giant paperweight.
  • SYN flood In this attack, the hacker sends thousands of SYN packets to the target, but never responds to any of the return SYN/ACK packets. Because there is a certain amount of time the target must wait to receive an answer to the SYN/ACK, it will eventually bog down and run out of available connections.
  • ICMP flood Here, the attacker sends ICMP Echo packets to the target with a spoofed (fake) source address. The target continues to respond to an address that doesn’t exist and eventually reaches a limit of packets per second sent.
  • Application level A simple attack whereby the hacker simply sends more “legitimate” traffic to a web application than it can handle, causing the system to crash.
  • Smurf The attacker sends a large number of pings to the broadcast address of the subnet, with the source IP spoofed to that of the target. The entire subnet will then begin sending ping responses to the target, exhausting the resources there. A fraggle attack is similar, but uses UDP for the same purpose.
  • Ping of death In the ping of death, an attacker fragments an ICMP message to send to a target. When the fragments are reassembled, the resultant ICMP packet is larger than the maximum size and crashes the system.

(My) CISSP Notes – Access control

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

The purpose of access control is to allow authorized users access to appropriate data and deny access to unauthorized users; the mission of access control is to protect the confidentiality, integrity, and availability of data. Access control is performed by implementing strong technical, physical and administrative measures. Access control protects against threats such as unauthorized access, inappropriate modification of data, and loss of confidentiality.

Basic concepts of access control

CIA triad and its opposite (DAD) – see (My) CISSP Notes – Information Security Governance and Risk Management

A subject is an active entity on a data system. Most examples of subjects involve people accessing data files. However, running computer programs are subjects as well. A Dynamic Link Library file or a Perl script that updates database files with new information is also a subject.

An object is any passive data within the system. Objects can range from databases to text files. The important thing to remember about objects is that they are passive within the system. They do not manipulate other objects.

Access control systems provide three essential services:

  • Authentication – determines whether a subject can log in.
  • Authorization – determines what a subject can do.
  • Accountability – describes the ability to determine which actions each user performed on a system.

Access control models

Discretionary Access Control (DAC)

Discretionary Access Control (DAC) gives subjects full control of objects they have been given access to, including sharing the objects with other subjects. Subjects are empowered and control their data.

Standard UNIX and Windows operating systems use DAC for filesystems.

  • Access control lists (ACLs) provide a flexible method for applying discretionary access controls. An ACL lists the specific rights and permissions that are assigned to a subject for a given object.
  • Role-Based Access Control (RBAC) is another method for implementing discretionary access controls. RBAC defines how information is accessed on a system based on the role of the subject. A role could be a nurse, a backup administrator, a help desk technician, etc. Subjects are grouped into roles and each defined role has access permissions based upon the role, not the individual.

Major disadvantages of DAC include:

  • lack of centralized administration.
  • dependence on security-conscious resource owners.
  • difficult auditing because of the large volume of log entries that can be generated.

Mandatory Access Control (MAC)

Mandatory Access Control (MAC) is system-enforced access control based on subject’s clearance and object’s labels. Subjects and Objects have clearances and labels, respectively, such as confidential, secret, and top secret.

A subject may access an object only if the subject’s clearance is equal to or greater than the object’s label. Subjects cannot share objects with other subjects who lack the proper clearance, or “write down” objects to a lower classification level (such as from top secret to secret). MAC systems are usually focused on preserving the confidentiality of data.

In MAC, the system determines the access policy.

Common MAC models include Bell-LaPadula, Biba, and Clark-Wilson; for more information about these models, please see: (My) CISSP Notes – Security Architecture and Design.

Major disadvantages of MAC control techniques include:

  • lack of flexibility.
  • difficulty in implementing and programming.

Access control administration

An organization must choose the type of access control model: DAC or MAC. After choosing a model, the organization must select and implement different access control technologies and techniques. What is left to work out is how the organization will administer the access control model. Access control administration comes in two basic flavors: centralized and decentralized.

Centralized access control systems maintain user account information in a central location. They allow organizations to implement a more consistent, comprehensive security policy, but they may not be practical in large organizations.

Examples of centralized access control systems and protocols commonly used for authentication of remote users:

  • LDAP
  • RAS – Remote Access Service servers utilize the Point-to-Point Protocol (PPP) to encapsulate IP packets. PPP incorporates the following three authentication protocols: PAP (Password Authentication Protocol), CHAP (Challenge Handshake Authentication Protocol), EAP (Extensible Authentication Protocol).
  • RADIUS – The Remote Authentication Dial In User Service protocol is a third-party authentication system. RADIUS is described in RFCs 2865 and 2866, and uses the User Datagram Protocol (UDP) ports 1812 (authentication) and 1813 (accounting).
  • Diameter is RADIUS’ successor, designed to provide an improved Authentication, Authorization, and Accounting (AAA) framework. RADIUS provides limited accountability, and has problems with flexibility, scalability, reliability, and security. Diameter also uses Attribute Value Pairs, but supports many more: while RADIUS uses 8 bits for the AVP field (allowing 256 total possible AVPs), Diameter uses 32 bits for the AVP field (allowing billions of potential AVPs). This makes Diameter more flexible, allowing support for mobile remote users, for example.
  • TACACS – The Terminal Access Controller Access Control System is a centralized access control system that requires users to send an ID and static (reusable) password for authentication. TACACS uses UDP port 49 (and may also use TCP).

Decentralized access control allows IT administration to occur closer to the mission and operations of the organization. In decentralized access control, an organization spans multiple locations, and the local sites support and maintain independent systems, access control databases, and data. Decentralized access control is also called distributed access control.

Access control defensive categories and types

Access control is achieved through an entire set of controls which, identified by purpose, include:

  • preventive controls, for reducing risks.
  • detective controls, for identifying violations and incidents.
  • corrective controls, for remedying violations and incidents.
  • deterrent controls, for discouraging violations.
  • recovery controls, for restoring systems and information.
  • compensating controls, for providing alternative ways of achieving a task.

These access control types can fall into one of three categories: administrative, technical, or physical.

  1. Administrative (also called directive) controls are implemented by creating and following organizational policy, procedure, or regulation.
  2. Technical controls are implemented using software, hardware, or firmware that restricts logical access on an information technology system.
  3. Physical controls are implemented with physical devices, such as locks, fences, gates, security guards, etc.

Preventive controls prevent actions from occurring.

Detective controls are controls that alert during or after a successful attack.

Corrective controls work by “correcting” a damaged system or process. The corrective access control typically works hand in hand with detective access controls.

After a security incident has occurred, recovery controls may need to be taken in order to restore functionality of the system and organization.

The connection between corrective and recovery controls is important to understand. For example, let us say a user downloads a Trojan horse. A corrective control may be the antivirus software “quarantine.” If the quarantine does not correct the problem, then a recovery control may be implemented to reload software and rebuild the compromised system.

Deterrent controls deter users from performing actions on a system. Examples include a “beware of dog” sign.

A compensating control is an additional security control put in place to compensate for weaknesses in other controls.

Here are more clear-cut examples:

Preventive

  • Physical: Lock, mantrap.
  • Technical: Firewall.
  • Administrative: Pre-employment drug screening.

Detective

  • Physical: CCTV, light (used to see an intruder).
  • Technical: IDS.
  • Administrative: Post-employment random drug tests.

Deterrent

  • Physical: “Beware of dog” sign, light (deterring a physical attack).
  • Administrative: Sanction policy.

Authentication methods

A key concept for implementing any type of access control is controlling the proper authentication of subjects within the IT system.

There are three basic authentication methods:

  • something you know – requires testing the subject with some sort of challenge and response where the subject must respond with a knowledgeable answer.
  • something you have – requires that users possess something, such as a token, which proves they are an authenticated user.
  • something you are – is biometrics, which uses physical characteristics as a means of identification or authentication.
  • A fourth type of authentication is some place you are, which describes location-based access control using technologies such as GPS or IP address-based geolocation. These controls can deny access if the subject is in an incorrect location.

Biometric Enrollment and Throughput

Enrollment describes the process of registering with a biometric system: creating an account for the first time.

Throughput describes the process of authenticating to a biometric system.

Three metrics are used to judge biometric accuracy:

  • the False Reject Rate (FRR) or Type I error – a false rejection occurs when an authorized subject is rejected by the biometric system as unauthorized.
  • the False Accept Rate (FAR) or Type II error – a false acceptance occurs when an unauthorized subject is accepted as valid.
  • the Crossover Error Rate (CER) – describes the point where the False Reject Rate (FRR) and the False Accept Rate (FAR) are equal. The CER is also known as the Equal Error Rate (EER). The Crossover Error Rate describes the overall accuracy of a biometric system.
Use CER to compare FAR and FRR.
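
The crossover point can be estimated by sweeping the matching threshold and finding where the FRR and FAR are closest. The following Python sketch uses invented measurements purely for illustration:

```python
# Hypothetical FRR/FAR values measured at different matching thresholds.
# The CER/EER is where the two error rates meet (or are closest).
measurements = [
    # (threshold, FRR, FAR)
    (0.1, 0.01, 0.20),
    (0.3, 0.03, 0.10),
    (0.5, 0.06, 0.06),
    (0.7, 0.12, 0.02),
]

def crossover(points):
    """Return the threshold where |FRR - FAR| is smallest, plus the rate there."""
    t, frr, far = min(points, key=lambda p: abs(p[1] - p[2]))
    return t, (frr + far) / 2

threshold, cer = crossover(measurements)
print(f"CER ~ {cer:.2%} at threshold {threshold}")  # CER ~ 6.00% at threshold 0.5
```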

Types of biometric control

Fingerprints are the most widely used biometric control available today.

A retina scan is a laser scan of the capillaries that feed the retina at the back of the eye.

An iris scan is a passive biometric control. A camera takes a picture of the iris (the colored portion of the eye) and then compares photos within the authentication database.

In hand geometry biometric control, measurements are taken from specific points on the subject’s hand: “The devices use a simple concept of measuring and recording the length, width, thickness, and surface area of an individual’s hand while guided on a plate.”

Keyboard dynamics refers to how hard a person presses each key and the rhythm by which the keys are pressed.

Dynamic signatures measure the process by which someone signs his/her name. This process is similar to keyboard dynamics, except that this method measures the handwriting of the subjects while they sign their name.

A voice print measures the subject’s tone of voice while stating a specific sentence or phrase. This type of access control is vulnerable to replay attacks (replaying a recorded voice), so other access controls must be implemented along with the voice print.

Facial scan technology has greatly improved over the last few years. Facial scanning (also called facial recognition) is the process of passively taking a picture of a subject’s face and comparing that picture to a list stored in a database.

Access control technologies

There are several technologies used for the implementation of access control.

Single Sign-On (SSO) allows multiple systems to use a central authentication server (AS). This allows users to authenticate once, and then access multiple, different systems.

SSO is an important access control and can offer the following benefits:

  • Improved user productivity.
  • Improved developer productivity – SSO provides developers with a common authentication framework.
  • Simplified administration.

The disadvantages of SSO are listed below and must be considered before implementing SSO on a system:

  • Difficult to retrofit.
  • Unattended desktop. For example, a malicious user could gain access to a user’s resources if the user walks away from the machine while still logged in.
  • Single point of attack.

SSO is commonly implemented by third-party ticket-based solutions including Kerberos, SESAME or KryptoKnight.

Kerberos is a third-party authentication service that may be used to support Single Sign-On. Kerberos uses secret key encryption and provides mutual authentication of both clients and servers. It protects against network sniffing and replay attacks.

Kerberos has the following components:

  • Principal: Client (user) or service
  • Realm: A logical Kerberos network
  • Ticket: Data that authenticates a principal’s identity
  • Credentials: a ticket and a service key
  • KDC: Key Distribution Center, which authenticates principals
  • TGS: Ticket Granting Service
  • TGT: Ticket Granting Ticket
  • C/S: Client Server, regarding communications between the two

Kerberos provides mutual authentication of client and server. Kerberos mitigates replay attacks (where attackers sniff Kerberos credentials and replay them on the network) via the use of timestamps.

The primary weakness of Kerberos is that the KDC stores the plaintext keys of all principals (clients and servers). A compromise of the KDC (physical or electronic) can lead to the compromise of every key in the Kerberos realm. The KDC and TGS are also single points of failure.
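
Kerberos itself involves encrypted tickets and a full KDC exchange, but the timestamp-based replay mitigation mentioned above can be illustrated with a minimal, hypothetical Python sketch. The authenticator IDs are invented; the five-minute skew window is a commonly used Kerberos default:

```python
import time

# Sketch of timestamp-based replay mitigation (not real Kerberos):
# authenticators outside the allowed clock skew, or already seen
# within that window, are rejected.
MAX_SKEW_SECONDS = 300  # roughly the 5-minute skew commonly tolerated
seen_authenticators = set()

def accept_authenticator(auth_id: str, timestamp: float) -> bool:
    now = time.time()
    if abs(now - timestamp) > MAX_SKEW_SECONDS:
        return False          # too old or too far in the future
    if auth_id in seen_authenticators:
        return False          # replayed within the window
    seen_authenticators.add(auth_id)
    return True

print(accept_authenticator("alice-1", time.time()))        # True
print(accept_authenticator("alice-1", time.time()))        # False (replay)
print(accept_authenticator("bob-1", time.time() - 900))    # False (stale)
```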

SESAME is Secure European System for Applications in a Multi-vendor Environment, a single sign-on system that supports heterogeneous environments.

“SESAME adds to Kerberos: heterogeneity, sophisticated access control features, scalability of public key systems, better manageability, audit and delegation.” Of those improvements, the addition of public key (asymmetric) encryption is the most compelling. It addresses one of the biggest weaknesses in Kerberos: the plaintext storage of symmetric keys.

Assessing access control

A number of processes exist to assess the effectiveness of access control. Tests with a narrower scope include penetration tests, vulnerability assessments, and security audits.

Penetration tests

Penetration tests may include the following tests:

  • Network (Internet)
  • Network (internal or DMZ)
  • Wardialing
  • Wireless
  • Physical (attempt to gain entrance into a facility or room)

A zero-knowledge (also called black box) test is “blind”; the penetration tester begins with no external or trusted information, and begins the attack with public information only.

A full-knowledge test (also called crystal-box) provides internal information to the penetration tester, including network diagrams, policies and procedures, and sometimes reports from previous penetration testers.

Penetration testers use the following methodology:

  • Planning
  • Reconnaissance
  • Scanning (also called enumeration)
  • Vulnerability assessment
  • Exploitation
  • Reporting

Vulnerability testing

Vulnerability scanning (also called vulnerability testing) scans a network or system for a list of predefined vulnerabilities such as system misconfiguration, outdated software, or a lack of patching. A vulnerability testing tool such as Nessus (http://www.nessus.org) or OpenVAS (http://www.openvas.org) may be used to identify the vulnerabilities.
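
Real scanners such as Nessus or OpenVAS rely on large, regularly updated plugin feeds; the following toy Python sketch only illustrates the underlying idea of checking a target against a list of predefined, known-vulnerable signatures. The host, port, and banner strings are invented, and such a check should only be run against systems you are authorized to test:

```python
import socket

# Toy illustration of a predefined vulnerability check: grab a service
# banner and compare it against a list of known-bad versions.
KNOWN_VULNERABLE_BANNERS = [
    b"ExampleFTPd 1.0",   # hypothetical outdated service
    b"OldSSH-2.3",        # hypothetical unpatched service
]

def check_banner(host: str, port: int, timeout: float = 2.0):
    """Return the matching known-vulnerable banners, or None if unreachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            banner = s.recv(128)
    except OSError:
        return None  # port closed/filtered or no banner received
    return [v for v in KNOWN_VULNERABLE_BANNERS if v in banner]

# Example (only against a host you are authorized to test):
# print(check_banner("192.0.2.10", 21))
```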

Security audit

A security audit is a test against a published standard. Organizations may be audited for PCI (Payment Card Industry) compliance, for example. PCI includes many required controls, such as firewalls, specific access control models, and wireless encryption.

Security assessments

Security assessments view many controls across multiple domains, and may include the following:

  • Policies, procedures, and other administrative controls
  • Assessing the real-world effectiveness of administrative controls
  • Change management
  • Architectural review
  • Penetration tests
  • Vulnerability assessments
  • Security audits

(My) CISSP Notes – Business Continuity and Disaster Recovery Planning

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

Business Continuity and Disaster Recovery Planning is an organization’s last line of defense: when all other controls have failed, BCP/DRP is the final control that may prevent drastic events such as injury, loss of life, or failure of an organization.

An additional benefit of BCP/DRP is that an organization that forms a business continuity team, and conducts a thorough BCP/DRP process, is forced to view the organization’s critical processes and assets in a different, often clarifying, light. Critical assets must be identified and key business processes understood. Standards are employed. Risk analysis conducted during a BCP/DRP plan can lead to immediate mitigating steps.

BCP

The overarching goal of a BCP is to ensure that the business will continue to operate before, throughout, and after a disaster event. The focus of a BCP is on the business as a whole, ensuring that the critical services the business provides or the critical functions it regularly performs can still be carried out both in the wake of a disruption and after the disruption has been weathered.

Business Continuity Planning provides a long-term strategy for ensuring the continued successful operation of an organization in spite of inevitable disruptive events and disasters.

BCP deals with keeping business operations running, perhaps in another location or using different tools and processes, after the disaster has struck.

DRP

The DRP provides a short-term plan for dealing with specific IT-oriented disruptions. The DRP focuses on efficiently attempting to mitigate the impact of a disaster and the immediate response and recovery of critical IT systems in the face of a significant disruptive event. The DRP does not focus on long-term business impact in the same fashion that a BCP does. DRP deals with restoring normal business operations after the disaster takes place.

These two plans, which have different scopes, are intertwined. The Disaster Recovery Plan serves as a subset of the overall Business Continuity Plan, because a BCP would be doomed to fail if it did not contain a tactical method for immediately dealing with disruption of information systems.

Defining disastrous events

The causes of disasters are commonly categorized according to whether the threat agent is natural, human, or environmental in nature.

  • Natural disasters – fires and explosions, earthquakes, storms, floods, hurricanes, tornadoes, landslides, tsunamis, pandemics
  • Human disasters (intentional or unintentional threat) – accidents, crime and mischief, war and terrorism, cyber attacks/cyber warfare, civil disturbance
  • Environmental disasters – this class of threat includes items such as power issues (blackout, brownout, surge, spike), system component or other equipment failures, application or software flaws.

Though errors and omissions are the most common threat faced by an organization, they also represent the type of threat that can be most easily avoided.

The safety of an organization’s personnel should be guaranteed even at the expense of efficient or even successful restoration of operations or recovery of data.

Recovering from a disaster

The general process of disaster recovery involves responding to the disruption; activation of the recovery team; ongoing tactical communication of the status of disaster and its associated recovery; further assessment of the damage caused by the disruptive event; and recovery of critical assets and processes in a manner consistent with the extent of the disaster.

  • Respond – In order to begin the disaster recovery process, there must be an initial response that begins the process of assessing the damage. The initial assessment will determine if the event in question constitutes a disaster.
  • Activate Team – If during the initial response to a disruptive event a disaster is declared, then the team that will be responsible for recovery needs to be activated.
  • Communicate – After the successful activation of the disaster recovery team, it is likely that many individuals will be working in parallel on different aspects of the overall recovery process. In addition to communication of internal status regarding the recovery activities, the organization must be prepared to provide external communications, which involves disseminating details regarding the organization’s recovery status with the public.
  • Assess – A more detailed and thorough assessment will be done by the now-activated disaster recovery team. The team will proceed to assess the extent of the damage to determine the proper steps to ensure the organization’s mission is fulfilled.
  • Reconstitution – The primary goal of the reconstitution phase is to successfully recover critical business operations either at the primary or a secondary site.

BCP/DRP Project elements

A BCP project typically has four components: scope determination, business impact assessment, identification of preventive controls, and implementation.

BCP Scope

The success and effectiveness of a BCP depends greatly on whether senior management and the project team properly define its scope. Specific questions will need to be asked of the BCP/DRP planning team, such as “What is in and out of scope for this plan?”

Business impact assessment (BIA)

The BIA describes the impact that a disaster is expected to have on business operations. A BIA should contain the following tasks:

  • Perform a vulnerability assessment – The goal of the vulnerability assessment is to determine the impact of the loss of a critical business function.
  • Perform a criticality assessment – The team members need to estimate the duration of a disaster event to effectively prepare the criticality assessment. Project team members need to consider the impact of a disruption based on the length of time that a disaster impairs critical business functions.
  • Determine the Maximum Tolerable Downtime – The primary goal of the BIA is to determine the Maximum Tolerable Downtime (MTD), also known as the Maximum Tolerable Period Of Disruption (MTPD), for a specific IT asset. MTD is the maximum period of time that a critical business function can be inoperative before the company incurs significant and long-lasting damage.
  • Establish recovery targets – These targets represent the period of time from the start of a disaster until critical processes have resumed functioning. Two primary recovery targets are established for each business process: the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO); a small sketch after this list illustrates how they relate to the MTD. The RTO is the maximum period of time in which a business process must be restored after a disaster. The RTO is also called the system recovery time.

    RPO is the maximum period of time in which data might be lost if a disaster strikes. The RPO represents the maximum acceptable amount of data/work loss for a given process because of a disaster or disruptive event.

  • Determine resource requirements – This portion of the BIA is a listing of the resources that an organization needs in order to continue operating each critical business function.
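
As a minimal illustration of how these targets relate, the following Python sketch (with figures invented for the example) checks that each process’s RTO fits within its MTD and treats the RPO as an upper bound on the backup interval:

```python
# Hypothetical recovery targets (in hours) for a few business processes.
targets = {
    # process: (MTD, RTO, RPO)
    "order_processing": (24, 8, 1),
    "payroll":          (72, 48, 24),
    "reporting":        (120, 96, 24),
}

for process, (mtd, rto, rpo) in targets.items():
    ok = rto <= mtd            # recovery must complete within the MTD
    backup_interval = rpo      # tolerated data loss bounds backup frequency
    print(f"{process}: RTO {rto}h within MTD {mtd}h -> {'OK' if ok else 'REVISE'}; "
          f"back up at least every {backup_interval}h")
```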

Identify preventive controls

Preventive controls prevent disruptive events from having an impact. The BIA will identify some risks which might be mitigated immediately. Once the BIA is complete, the BCP team knows the Maximum Tolerable Downtime. This metric, as well as others including the Recovery Point Objective and Recovery Time Objective, are used to determine the recovery strategy.

Once an organization has determined its maximum tolerable downtime, it can choose among the recovery options. For example, a 10-day MTD indicates that a cold site may be a reasonable option. An MTD of a few hours indicates that a redundant site or hot site is a potential option. A small sketch after the list below illustrates this mapping.

  • A redundant site is an exact production duplicate of a system that has the capability to seamlessly operate all necessary IT operations without loss of services to the end user of the system.
  • A hot site is a location that an organization may relocate to following a major disruption or disaster. It is important to note the difference between a hot and a redundant site. Hot sites can quickly recover critical IT functionality; recovery may even be measured in minutes instead of hours. However, a redundant site will appear as operating normally to the end user no matter what the state of operations is for the IT program.
  • A warm site has some aspects of a hot site, for example, readily accessible hardware and connectivity, but it will have to rely upon backup data in order to reconstitute a system after a disruption. An organization must be able to withstand an MTD of at least 1–3 days in order to consider a warm site solution.
  • A cold site is the least expensive recovery solution to implement. It does not include backup copies of data, nor does it contain any immediately available hardware.
  • Reciprocal agreements are bi-directional agreements between two organizations in which one organization promises the other that it can move in and share space if it experiences a disaster.
  • Mobile sites are “datacenters on wheels”: towable trailers that contain racks of computer equipment.
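
The MTD-to-site mapping described above can be sketched as a simple lookup. The hour thresholds below are illustrative rules of thumb loosely based on the examples in the text, not fixed standards:

```python
# Rough sketch mapping an MTD (in hours) to candidate recovery options.
def candidate_sites(mtd_hours: float) -> list[str]:
    if mtd_hours < 1:
        return ["redundant site"]
    if mtd_hours <= 24:
        return ["hot site", "redundant site"]
    if mtd_hours <= 72:                 # roughly 1-3 days
        return ["warm site", "hot site"]
    return ["cold site", "warm site", "reciprocal agreement", "mobile site"]

print(candidate_sites(0.5))    # ['redundant site']
print(candidate_sites(240))    # a 10-day MTD makes a cold site reasonable
```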

As discussed previously, the Business Continuity Plan is an umbrella plan that contains other plans. In addition to the Disaster Recovery Plan, other plans include the Continuity of Operations Plan (COOP), the Business Resumption/Recovery Plan (BRP), the Continuity of Support Plan, the Cyber Incident Response Plan, the Occupant Emergency Plan (OEP), and the Crisis Management Plan (CMP).

The Business Recovery Plan (also known as the Business Resumption Plan) details the steps required to restore normal business operations after recovering from a disruptive event. This may include switching operations from an alternate site back to a (repaired) primary site.

The Continuity of Support Plan focuses narrowly on support of specific IT systems and applications. It is also called the IT Contingency Plan, emphasizing IT over general business support.

The Cyber Incident Response Plan is designed to respond to disruptive cyber events, including network-based attacks, worms, computer viruses, Trojan horses.

The Occupant Emergency Plan (OEP) provides the “response procedures for occupants of a facility in the event of a situation posing a potential threat to the health and safety of personnel, the environment, or property”. This plan is facilities-focused, as opposed to business- or IT-focused.

The Crisis Management Plan (CMP) is designed to provide effective coordination among the managers of the organization in the event of an emergency or disruptive event. A key tool leveraged for staff communication by the Crisis Communications Plan is the Call Tree, which is used to quickly communicate news throughout an organization without overburdening any specific person. The call tree works by assigning each employee a small number of other employees they are responsible for calling in an emergency event.
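
As a small illustration of the fan-out idea, here is a hypothetical call tree sketch in Python (the role names and assignments are invented):

```python
# Call tree sketch: each person phones only a small number of others,
# so news spreads quickly without overloading anyone.
CALL_TREE = {
    "crisis_manager": ["it_lead", "hr_lead"],
    "it_lead":        ["dba", "network_admin"],
    "hr_lead":        ["recruiter"],
}

def notify(person: str, depth: int = 0) -> None:
    """Walk the tree recursively and print who calls whom."""
    for contact in CALL_TREE.get(person, []):
        print("  " * depth + f"{person} calls {contact}")
        notify(contact, depth + 1)

notify("crisis_manager")
```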

Implementation

The implementation phase consists of testing, training and awareness, and continued maintenance.

In order to ensure that a Disaster Recovery Plan represents a viable plan for recovery, thorough testing is needed. There are different types of testing:

  • The DRP Review is the most basic form of initial DRP testing, and is focused on simply reading the DRP in its entirety to ensure completeness of coverage.
  • Checklist (also known as consistency) testing lists all necessary components required for successful recovery, and ensures that they are, or will be, readily available should a disaster occur. Another test that is commonly completed at the same time as the checklist test is the structured walkthrough, which is also often referred to as a tabletop exercise.
  • A simulation test, also called a walkthrough drill (not to be confused with the discussion-based structured walkthrough), goes beyond talking about the process and actually has teams carry out the recovery process. A pretend disaster is simulated, to which the team must respond as directed by the DRP.
  • Another type of DRP test is parallel processing. This type of test is common in environments where transactional data is a key component of the critical business processing. Typically, this test involves recovery of critical processing components at an alternate computing facility, and then restoring data from a previous backup. Note that regular production systems are not interrupted.
  • Arguably, the highest-fidelity DRP test is business interruption testing. However, this type of test can actually be the cause of a disaster, so extreme caution should be exercised before attempting an actual interruption test.

Once the initial BCP/DRP plan is completed, tested, trained, and implemented, it must be kept up to date. BCP/DRP plans must keep pace with all critical business and IT changes. Business continuity and disaster recovery planning are a business’ last line of defense against failure: if other controls have failed, BCP/DRP is the final control, and if it fails, the business may fail.

A handful of specific frameworks are worth discussing, including NIST SP 800-34, ISO/IEC 27031, and BCI.