(My) CISSP Notes – Telecommunications and network security (I)

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

Telecommunications and Network Security employs defense-in-depth, as we do in all 10 domains of the common body of knowledge. Any one control may fail, so multiple controls are always recommended.

Network concepts

Simplex communication is one-way, like a car radio tuned to a music station. Half-duplex communication sends or receives at one time only (not simultaneously), like a walkie-talkie. Full-duplex communication sends and receives simultaneously, like two people having a face-to-face conversation.

Baseband networks have one channel, and can only send one signal at a time. Ethernet networks are baseband: a “100baseT” UTP cable means 100 megabit, baseband, and twisted pair.

Broadband networks have multiple channels and can send multiple signals at a time, like cable TV.

A LAN is a Local Area Network. A LAN is a comparatively small network, typically confined to a building or an area within one. A MAN is a Metropolitan Area Network, which is typically confined to a city, a zip code, a campus, or office park. A WAN is a Wide Area Network, typically covering cities, states, or countries. A GAN is a Global Area Network; a global collection of WANs.

The Internet is a global collection of peered networks running TCP/IP, providing best effort service. An Intranet is a privately owned network running TCP/IP, such as a company network. An Extranet is a connection between private Intranets, such as connections to business partner Intranets.

Circuit-switched networks can provide dedicated bandwidth to point-to-point connections, such as a T1 connecting two offices. One drawback of circuit-switched networks: once a channel or circuit is connected, it is dedicated to that purpose, even while no data is being transferred.

Packet-switched networks were designed to handle network failures more robustly. Instead of using dedicated circuits, data is broken into packets, each sent individually. If multiple routes are available between two points on a network, packet switching can choose the best route, and fall back to secondary routes in case of failure. Packets may take any path (and different paths) across a network, and are then reassembled by the receiving node.

Packet switched networks may use Quality of Service (QoS) to give specific traffic precedence over other traffic. For example: QoS is often applied to Voice over IP (VoIP) traffic (voice via packet-switched data networks), to avoid interruption of phone calls.

Network models such as OSI and TCP/IP are designed in layers. Each layer performs a specific function, and the complexity of that functionality is contained within its layer. Changes in one layer do not directly affect another.

A network model is a description of how a network protocol suite operates, such as the OSI Model or TCP/IP Model. A network stack is a network protocol suite programmed in software or hardware.

Network models

The ISO OSI reference model

The OSI (Open Systems Interconnection) Reference Model is a layered network model. The model is abstract: we do not directly run the OSI model in our systems.

The OSI model has seven layers. The layers may be listed in top-to-bottom or bottom-to-top order. Using the latter, they are Physical, Data Link, Network, Transport, Session, Presentation, and Application.

The physical layer describes units of data such as bits represented by energy (such as light, electricity, or radio waves) and the medium used to carry them (such as copper or fiber optic cables). WLANs have a physical layer, even though we cannot physically touch it. Layer 1 devices include hubs and repeaters.

The Data Link layer handles access to the physical layer as well as local area network communication. An Ethernet card and its MAC (Media Access Control) address are at Layer 2, as are switches and bridges. Layer 2 is divided into two sub-layers: Media Access Control (MAC) and Logical Link Control (LLC).

The Network layer describes routing: moving data from a system on one LAN to a system on another. IP addresses and routers exist at Layer 3.

The Transport layer handles packet sequencing, flow control, and error detection. TCP and UDP are Layer 4 protocols.

The Session layer manages sessions, which provide maintenance on connections. A good way to remember the session layer’s function is “connections between applications.” The Session Layer uses simplex, half-duplex, and full-duplex communication.

The Presentation layer presents data to the application (and user) in a comprehensible way.

The Application layer is where you interface with your computer application.

Try creating a mnemonic to recall the layers of the OSI model, such as: Please Do Not Throw Sausage Pizza Away or Adult People Should Try New Dairy Products.

The TCP/IP model

TCP/IP is an informal name; the formal name is the Internet Protocol Suite. The TCP/IP model is simpler than the ISO/OSI model.

The Network Access Layer of the TCP/IP model combines layers 1 (Physical) and 2 (Data Link) of the OSI model.

The Internet Layer of the TCP/IP model aligns with Layer 3 (Network) of the OSI model. This is where IP addresses and routing live.

The Host-to-Host Transport Layer (called either “Host-to-Host” or, more commonly, simply “Transport”) is where applications are addressed on a network, via ports. TCP and UDP are the two Transport Layer protocols of TCP/IP.

The TCP/IP Application Layer combines Layers 5 through 7 (Session, Presentation, and Application) of the OSI model. Most of these protocols use a client-server architecture, where a client (such as ssh) connects to a listening server (called a daemon on UNIX systems) such as sshd.

Here is the mapping between ISO/OSI model and TCP/IP model:

TCP/IP Layer | OSI Layers
Application | Application (7), Presentation (6), Session (5)
Transport (Host-to-Host) | Transport (4)
Internet | Network (3)
Network Access | Data Link (2), Physical (1)

Encapsulation takes information from a higher layer and adds a header to it, treating the higher layer information as data. For example, as the data moves down the stack in the TCP/IP model, application layer data is encapsulated in a Layer 4 TCP segment. That TCP segment is encapsulated in a Layer 3 IP packet. That IP packet is encapsulated in a Layer 2 Ethernet frame. The frame is then converted into bits at Layer 1 and sent across the local network.

Data, segments, packets, frames, and bits are examples of Protocol Data Units (PDUs).
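Encapsulation is easy to see in code. Here is a toy sketch (the header strings are invented and drastically simplified; real TCP/IP headers are binary structures) showing each layer wrapping the PDU from the layer above:

```python
# Toy model of encapsulation: each layer prepends its own header,
# treating everything from the layer above as opaque data.

def encapsulate(app_data: bytes) -> bytes:
    segment = b"TCP|sport=40000|dport=80|" + app_data       # Layer 4 segment
    packet = b"IP|src=10.0.0.1|dst=10.0.0.2|" + segment     # Layer 3 packet
    frame = b"ETH|src=aa:aa|dst=bb:bb|" + packet + b"|FCS"  # Layer 2 frame
    return frame                                            # Layer 1 sends the bits

print(encapsulate(b"GET / HTTP/1.1"))
```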

Physical Layer Protocols and concepts

LAN Physical Network Technologies

  • Bus – All devices are connected to a single cable that’s terminated on both ends. Each node inspects the data as it passes along the bus. Network buses are fragile; should the network cable break anywhere along the bus, the entire bus will go down.
  • Tree – A tree is also called hierarchical network: a network with a root node, and branch nodes that are at least three levels deep (two levels would make it a star).
  • Ring – A physical ring connects network nodes in a ring: if you follow the cable from node to node, you will finish where you began. In the ring topology, all communication travels in a single direction around the ring.
  • Star – Each node is connected directly to a central device such as a hub or a switch. Star topology has become the dominant physical topology for LANs.
  • Mesh – All systems are interconnected to provide multiple paths to all other resources. In most networks a partial mesh is implemented for only the most critical network components, such as routers, switches, and servers. Meshes have superior availability and are often used for highly available (HA) server clusters.

Cable and connector types

Cables carry the electrical or light signals that represent data between devices on a network. Fundamental network cabling terms to understand include EMI, noise, crosstalk, and attenuation.

Electromagnetic Interference (EMI) is interference caused by magnetism created by electricity. Crosstalk occurs when a signal crosses from one cable to another. Attenuation is the weakening of a signal as it travels farther from its source.

  • Coaxial cable – consists of a single, solid copper-wire core, surrounded by a plastic insulator and braided-metal shielding, all covered with a plastic sheath. This construction makes the cable very durable and resistant to EMI and Radio Frequency Interference (RFI) signals. Coaxial cable comes in two flavors: thick and thin.
  • Twinaxial cable – very similar to coaxial cable, but it consists of two solid copper-wire cores rather than a single one. Twinax is used to achieve high data transmission speeds (10 Gbps) over very short distances (10 meters).
  • Twisted-pair cable – is classified by categories according to rated speed. Tighter twisting results in more dampening: a Category 6 cable designed for gigabit networking has far tighter twisting than a Category 3 fast Ethernet cable.
Category | Speed | Use
1 | 1 Mbps | Voice only (telephone wire)
2 | 4 Mbps | LocalTalk & telephone (rarely used)
3 | 16 Mbps | 10BaseT Ethernet
4 | 20 Mbps | Token Ring (rarely used)
5 | 100 Mbps (2 pair) / 1,000 Mbps (4 pair) | 100BaseT Ethernet / Gigabit Ethernet
5e | 1,000 Mbps | Gigabit Ethernet
6 | 10,000 Mbps | Gigabit Ethernet
  • Fiber-optic cable – fiber-optic network cable (simply called “fiber”) uses light to carry information, and can carry a tremendous amount of it. Fiber can be used to transmit over long distances: past 50 miles, much farther than any copper cable. Fiber’s advantages are speed, distance, and immunity to EMI. Disadvantages include cost and complexity.

Network equipment

  • Network Interface Cards (NICs) are used to connect a computer to the network.
  • Repeater – a non-intelligent device that simply amplifies a signal. A repeater receives bits on one port, and “repeats” them out the other port. The repeater has no understanding of protocols; it simply repeats bits.
  • Hub – a hub is a repeater with more than two ports. It receives bits on one port and repeats them across all other ports. Hubs provide no traffic isolation and have no security: all nodes see all traffic sent by the hub. Hubs are also half-duplex devices: they cannot send and receive simultaneously. Hubs also have one “collision domain”: any node may send colliding traffic with another.

Data Link Layer Protocols and concepts

LAN protocols and transmission methods

Common LAN protocols defined at Layer 2 include:

  • Ethernet – is the dominant local area networking technology, transmitting network data via frames. It originally used a physical bus topology, but later added support for physical star. Carrier Sense Multiple Access (CSMA) is designed to address collisions. CSMA/CD (Collision Detection) is used for systems that can send and receive simultaneously, such as wired Ethernet. CSMA/CA (Collision Avoidance) is used for systems such as 802.11 wireless that cannot send and receive simultaneously.
  • ARCNet (Attached Resource Computer Network) – is one of the earliest LAN technologies. It is implemented in a star topology using coaxial cable.
  • Token Ring – like ARCNet, Token Ring is a legacy LAN technology. Both pass network traffic via tokens. Possession of a token allows a node to read or write traffic on the network. This solves the collision issue faced by Ethernet: nodes cannot transmit without a token.
  • FDDI (Fiber Distributed Data Interface)  – is another legacy LAN technology, running a logical network ring via a primary and secondary counter-rotating fiber optic ring.
  • ARP (Address Resolution Protocol) – is used to translate between Layer 2 MAC addresses and Layer 3 IP addresses. ARP discovers the physical addresses of attached devices by broadcasting ARP query messages on the network segment; IP-address-to-MAC-address translations are then maintained in a dynamic table. In short, ARP maps an IP address to a MAC address, identifying a device’s hardware address when only its IP address is known.
  • RARP (Reverse Address Resolution Protocol) – is used by diskless workstations to request an IP address. The system broadcasts a RARP message that provides the system’s MAC address and requests to be informed of its IP address. RARP maps a MAC address to an IP address, identifying a device’s IP address when only its MAC address is known.

LAN data transmissions are classified as:

  • Unicast – is one-to-one traffic, such as a client surfing the Web.
  • Multicast – Packets are copied and sent from the source to multiple destination devices by using a special multicast IP address.
  • Broadcast traffic is sent to all stations on a LAN. There are two types of IPv4 broadcast addresses: limited broadcast and directed broadcast. The limited broadcast address is 255.255.255.255. It is “limited” because it is never forwarded across a router, unlike a directed broadcast.
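Python’s standard ipaddress module can illustrate the difference between the two broadcast types; the example network below is made up:

```python
import ipaddress

# The limited broadcast address is fixed, and routers never forward it.
limited = ipaddress.IPv4Address("255.255.255.255")

# A directed broadcast addresses all hosts on one specific subnet, and
# may be forwarded by routers toward that subnet.
net = ipaddress.ip_network("192.168.10.0/24")
print(net.broadcast_address)  # 192.168.10.255 (directed broadcast)
print(limited)                # 255.255.255.255 (limited broadcast)
```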

WLAN protocols and transmission methods

Wireless Local Area Networks (WLANs) transmit information via electromagnetic waves (such as radio) or light.

The most common form of wireless data networking is the 802.11 wireless standard, and the first 802.11 standard with reasonable security is 802.11i.

Protocol | Release Date | Op. Frequency | Data Rate (Typical) | Data Rate (Max) | Range (Indoor)
Legacy | 1997 | 2.4-2.5 GHz | 1 Mbit/s | 2 Mbit/s | ?
802.11a | 1999 | 5.15-5.35 / 5.47-5.725 / 5.725-5.875 GHz | 25 Mbit/s | 54 Mbit/s | ~30 meters (~100 feet)
802.11b | 1999 | 2.4-2.5 GHz | 6.5 Mbit/s | 11 Mbit/s | ~50 meters (~150 feet)
802.11g | 2003 | 2.4-2.5 GHz | 11 Mbit/s | 54 Mbit/s | ~30 meters (~100 feet)
802.11n | 2006 (draft) | 2.4 GHz or 5 GHz bands | 200 Mbit/s | 540 Mbit/s | ~50 meters (~160 feet)

Another WLAN standard is IEEE standard 802.15, a.k.a. Bluetooth. Bluetooth is a Personal Area Network (PAN) wireless technology, operating in the same 2.4 GHz frequency band as many types of 802.11 wireless. Sensitive devices should disable automatic discovery by other Bluetooth devices. The “security” of disabled discovery relies on the secrecy of the 48-bit MAC address of the Bluetooth adapter: even when discovery is disabled, Bluetooth devices can be discovered by guessing the MAC address.

WAN technologies and protocols

WAN protocols define how frames are carried across a single data link between two devices.

Point-to-point links – these links provide a single, pre-established WAN communications path from the customer network to a remote network.

  • SLIP (Serial Line IP) – was originally developed to support TCP/IP networking over low-speed asynchronous serial lines.
  • PPP (Point-to-Point Protocol) – the successor of SLIP, provides router-to-router and host-to-network connections over synchronous and asynchronous circuits. PPP adds confidentiality, integrity, and authentication to point-to-point links.
  • PPTP (Point-to-Point Tunneling Protocol) – tunnels PPP via IP. PPTP does not itself provide encryption or confidentiality, instead relying on other protocols such as PAP, CHAP, and EAP for security.
  • L2F (Layer 2 Forwarding Protocol) – a tunneling protocol developed by Cisco and used to implement VPNs.

Circuit-switched networks – a dedicated physical circuit path is established, maintained and terminated between the sender and receiver across a carrier network for each communications session. This network type is used extensively in telephone company networks.

  • xDSL (Digital Subscriber Line) – uses existing analog phone lines to deliver high-bandwidth connectivity to remote customers.
  • DOCSIS (Data Over Cable Services Interface Specifications) – is a communication protocol for transmitting high speed data over an existing cable TV system.
  • ISDN (Integrated Services Digital Network) – is a communication protocol that operates over analog phone lines that have been converted to use digital signaling.

Packet-switched networks – devices share bandwidth on communications links to transport packets between a sender and receiver across a carrier network. This type of network is more resilient to errors and congestion than circuit-switched networks.

  • Frame Relay – protocol that provides no error recovery and focuses on speed.
  • X.25 – provided a cost-effective way to transmit data over long distances in the 1970s though 1990s.
  • ATM (Asynchronous Transfer Mode) – is a WAN technology that uses fixed-length cells. ATM cells are 53 bytes long, with a 5-byte header and a 48-byte data portion.

Network equipment

Bridges and switches are Layer 2 devices.

A bridge has two ports, and connects network segments together. Each segment typically has multiple nodes, and the bridge learns the MAC addresses of nodes on either side. Traffic sent between two nodes on the same side of the bridge will not be forwarded across the bridge; traffic sent from a node on one side of the bridge to the other side will be forwarded across. A bridge has two collision domains.

A switch is a bridge with more than two ports, and it is best practice to connect only one device per switch port. Otherwise, everything that is true about a bridge is also true about a switch. A switch shrinks the collision domain to a single port, so you will normally have no collisions assuming one device is connected per port. Additionally, a switch can be used to implement virtual LANs (VLANs), which logically segregate a network and limit broadcast domains.

Since switches provide traffic isolation, a Networked Intrusion Detection System (NIDS) connected to a 24-port switch will not see unicast traffic sent to and from other devices on the same switch.

Configuring a Switched Port Analyzer (SPAN) port is one way to solve this problem, by mirroring traffic from multiple switch ports to one “SPAN port.” SPAN is a Cisco term; HP switches use the term “mirror port.” One drawback to using a switch SPAN port is port bandwidth overload.

A TAP (Test Access Port) provides a way to “tap” into network traffic, and see all unicast streams on a network. Taps are the preferred way to provide promiscuous network access to a sniffer or NIDS.

(My) CISSP Notes – Access control

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

The purpose of access control is to allow authorized users access to appropriate data and deny access to unauthorized users; the mission and purpose of access control is to protect the confidentiality, integrity, and availability of data. Access control is performed by implementing strong technical, physical, and administrative measures, and protects against threats such as unauthorized access, inappropriate modification of data, and loss of confidentiality.

Basic concepts of access control

The CIA triad and its opposite (DAD) – see (My) CISSP Notes – Information Security Governance and Risk Management.

A subject is an active entity on a data system. Most examples of subjects involve people accessing data files. However, running computer programs are subjects as well. A Dynamic Link Library file or a Perl script that updates database files with new information is also a subject.

An object is any passive data within the system. Objects can range from databases to text files. The important thing to remember about objects is that they are passive within the system: they do not manipulate other objects.

Access control systems provide three essential services:

  • Authentication – determines whether a subject can log in.
  • Authorization – determines what a subject can do.
  • Accountability – describes the ability to determine which actions each user performed on a system.

Access control models

Discretionary Access Control (DAC)

Discretionary Access Control (DAC) gives subjects full control of objects they have been given access to, including sharing the objects with other subjects. Subjects are empowered and control their data.

Standard UNIX and Windows operating systems use DAC for filesystems.

  • Access control lists (ACLs) provide a flexible method for applying discretionary access controls. An ACL lists the specific rights and permissions that are assigned to a subject for a given object (see the sketch after this list).
  • Role-Based Access Control (RBAC) is another method for implementing discretionary access controls. RBAC defines how information is accessed on a system based on the role of the subject. A role could be a nurse, a backup administrator, a help desk technician, etc. Subjects are grouped into roles and each defined role has access permissions based upon the role, not the individual.
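As a minimal sketch of the ACL idea (the names and structure are invented for illustration, not any particular operating system’s implementation):

```python
# Each object carries an ACL mapping subjects to permission sets. Under
# DAC, the owner may grant access to other subjects at their discretion.
acl = {"payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}}}

def is_allowed(subject: str, obj: str, permission: str) -> bool:
    return permission in acl.get(obj, {}).get(subject, set())

def grant(obj: str, subject: str, permission: str) -> None:
    acl.setdefault(obj, {}).setdefault(subject, set()).add(permission)

print(is_allowed("bob", "payroll.xlsx", "write"))  # False
grant("payroll.xlsx", "bob", "write")              # alice shares her file
print(is_allowed("bob", "payroll.xlsx", "write"))  # True
```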

Major disadvantages of DAC include:

  • lack of centralized administration.
  • dependence on security-conscious resource owners.
  • difficult auditing because of the large volume of log entries that can be generated.

Mandatory Access Control (MAC)

Mandatory Access Control (MAC) is system-enforced access control based on subject’s clearance and object’s labels. Subjects and Objects have clearances and labels, respectively, such as confidential, secret, and top secret.

A subject may access an object only if the subject’s clearance is equal to or greater than the object’s label. Subjects cannot share objects with other subjects who lack the proper clearance, or “write down” objects to a lower classification level (such as from top secret to secret). MAC systems are usually focused on preserving the confidentiality of data.

In MAC, the system determines the access policy.
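The core MAC read rule can be sketched in a few lines; the level ordering below mirrors the labels mentioned above and is purely illustrative:

```python
# The system, not the data owner, enforces the policy: a subject may
# access an object only if its clearance dominates the object's label.
LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}

def may_read(subject_clearance: str, object_label: str) -> bool:
    return LEVELS[subject_clearance] >= LEVELS[object_label]

print(may_read("secret", "confidential"))  # True
print(may_read("secret", "top secret"))    # False
```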

Common MAC models include Bell-LaPadula, Biba, and Clark-Wilson; for more information about these models please see: (My) CISSP Notes – Security Architecture and Design.

Major disadvantages of MAC control techniques include:

  • lack of flexibility.
  • difficulty in implementing and programming.

Access control administration

An organization must choose the type of access control model: DAC or MAC. After choosing a model, the organization must select and implement different access control technologies and techniques. What is left to work out is how the organization will administer the access control model. Access control administration comes in two basic flavors: centralized and decentralized.

Centralized access control systems maintain user account information in a central location. Centralized access control systems allow organizations to implement a more consistent, comprehensive security policy, but they may not be practical in large organizations.

Examples of centralized access control systems and protocols commonly used for authentication of remote users:

  • LDAP
  • RAS – Remote Access Service servers utilize the Point-to-Point Protocol (PPP) to encapsulate IP packets. PPP incorporates the following three authentication protocols: PAP (Password Authentication Protocol), CHAP (Challenge Handshake Authentication Protocol), EAP (Extensible Authentication Protocol).
  • RADIUS – The Remote Authentication Dial In User Service protocol is a third-party authentication system. RADIUS is described in RFCs 2865 and 2866, and uses the User Datagram Protocol (UDP) ports 1812 (authentication) and 1813 (accounting).
  • Diameter is RADIUS’ successor, designed to provide an improved Authentication, Authorization, and Accounting (AAA) framework. RADIUS provides limited accountability, and has problems with flexibility, scalability, reliability, and security. Diameter also uses Attribute Value Pairs, but supports many more: while RADIUS uses 8 bits for the AVP field (allowing 256 total possible AVPs), Diameter uses 32 bits for the AVP field (allowing billions of potential AVPs). This makes Diameter more flexible, allowing support for mobile remote users, for example.
  • TACACS – The Terminal Access Controller Access Control System is a centralized access control system that requires users to send an ID and a static (reusable) password for authentication. TACACS uses UDP port 49 (and may also use TCP).

Decentralized access control allows IT administration to occur closer to the mission and operations of the organization. In decentralized access control, an organization spans multiple locations, and the local sites support and maintain independent systems, access control databases, and data. Decentralized access control is also called distributed access control.

Access control defensive categories and types

Access control is achieved through an entire set of controls which, identified by purpose, include:

  • preventive controls, for reducing risks.
  • detective controls, for identifying violations and incidents.
  • corrective controls, for remedying violations and incidents.
  • deterrent controls, for discouraging violations.
  • recovery controls, for restoring systems and information.
  • compensating controls, for providing alternative ways of achieving a task.

These access control types can fall into one of three categories: administrative, technical, or physical.

  1. Administrative (also called directive) controls are implemented by creating and following organizational policy, procedure, or regulation.
  2. Technical controls are implemented using software, hardware, or firmware that restricts logical access on an information technology system.
  3. Physical controls are implemented with physical devices, such as locks, fences, gates, security guards, etc.

Preventive controls prevent actions from occurring.

Detective controls are controls that alert during or after a successful attack.

Corrective controls work by “correcting” a damaged system or process. The corrective access control typically works hand in hand with detective access controls.

After a security incident has occurred, recovery controls may need to be taken in order to restore functionality of the system and organization.

The connection between corrective and recovery controls is important to understand. For example, let us say a user downloads a Trojan horse. A corrective control may be the antivirus software “quarantine.” If the quarantine does not correct the problem, then a recovery control may be implemented to reload software and rebuild the compromised system.

Deterrent controls deter users from performing actions on a system. Examples include a “beware of dog” sign.

A compensating control is an additional security control put in place to compensate for weaknesses in other controls.

Here are more clear-cut examples:

Preventive

  • Physical: Lock, mantrap.
  • Technical: Firewall.
  • Administrative: Pre-employment drug screening.

Detective

  • Physical: CCTV, light (used to see an intruder).
  • Technical: IDS.
  • Administrative: Post-employment random drug tests.

Deterrent

  • Physical: “Beware of dog” sign, light (deterring a physical attack).
  • Administrative: Sanction policy.

Authentication methods

A key concept for implementing any type of access control is controlling the proper authentication of subjects within the IT system.

There are three basic authentication methods:

  • something you know – requires testing the subject with some sort of challenge and response where the subject must respond with a knowledgeable answer.
  • something you have – requires that users possess something, such as a token, which proves they are an authenticated user.
  • something you are – is biometrics, which uses physical characteristics as a means of identification or authentication.
  • A fourth type of authentication is some place you are – describes location-based access control using technologies such as GPS or IP-address-based geolocation. These controls can deny access if the subject is in an incorrect location.

Biometric Enrollment and Throughput

Enrollment describes the process of registering with a biometric system: creating an account for the first time.

Throughput describes the process of authenticating to a biometric system.

Three metrics are used to judge biometric accuracy:

  • the False Reject Rate (FRR) or Type I error – a false rejection occurs when an authorized subject is rejected by the biometric system as unauthorized.
  • the False Accept Rate (FAR) or Type II error – a false acceptance occurs when an unauthorized subject is accepted as valid.
  • the Crossover Error Rate (CER) – describes the point where the False Reject Rate (FRR) and False Accept Rate (FAR) are equal. CER is also known as the Equal Error Rate (EER). The Crossover Error Rate describes the overall accuracy of a biometric system (see the sketch below).
Use CER to compare FAR and FRR.
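A rough sketch of how the CER falls out of FRR/FAR measurements; the numbers below are fabricated for illustration (real curves come from testing the biometric system at many sensitivity thresholds):

```python
# FRR rises and FAR falls as the matching threshold is tightened; the
# CER (= EER) is approximately where the two curves cross.
thresholds = [0.1, 0.2, 0.3, 0.4, 0.5]
frr = [0.01, 0.03, 0.08, 0.15, 0.30]
far = [0.25, 0.12, 0.07, 0.02, 0.01]

t, r, a = min(zip(thresholds, frr, far), key=lambda x: abs(x[1] - x[2]))
print(f"CER ~ {(r + a) / 2:.3f} at threshold {t}")  # lower CER = more accurate
```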

Types of biometric control

Fingerprints are the most widely used biometric control available today.

A retina scan is a laser scan of the capillaries that feed the retina at the back of the eye.

An iris scan is a passive biometric control. A camera takes a picture of the iris (the colored portion of the eye) and then compares photos within the authentication database.

In hand geometry biometric control, measurements are taken from specific points on the subject’s hand: “The devices use a simple concept of measuring and recording the length, width, thickness, and surface area of an individual’s hand while guided on a plate.”

Keyboard dynamics refers to how hard a person presses each key and the rhythm by which the keys are pressed.

Dynamic signatures measure the process by which someone signs his/her name. This process is similar to keyboard dynamics, except that this method measures the handwriting of the subjects while they sign their name.

A voice print measures the subject’s tone of voice while stating a specific sentence or phrase. This type of access control is vulnerable to replay attacks (replaying a recorded voice), so other access controls must be implemented along with the voice print.

Facial scan technology has greatly improved over the last few years. Facial scanning (also called facial recognition) is the process of passively taking a picture of a subject’s face and comparing that picture to a list stored in a database.

Access control technologies

There are several technologies used for the implementation of access control.

Single Sign-On (SSO) allows multiple systems to use a central authentication server (AS). This allows users to authenticate once, and then access multiple, different systems.

SSO is an important access control and can offer the following benefits:

  • Improved user productivity.
  • Improved developer productivity – SSO provides developers with a common authentication framework.
  • Simplified administration.

The disadvantages of SSO are listed below and must be considered before implementing SSO on a system:

  • Difficult to retrofit.
  • Unattended desktop. For example, a malicious user could gain access to a user’s resources if the user walks away from his machine and leaves it logged in.
  • Single point of attack.

SSO is commonly implemented by third-party ticket-based solutions including Kerberos, SESAME or KryptoKnight.

Kerberos is a third-party authentication service that may be used to support Single Sign-On. Kerberos uses secret key encryption and provides mutual authentication of both clients and servers. It protects against network sniffing and replay attacks.

Kerberos has the following components:

  • Principal: Client (user) or service
  • Realm: A logical Kerberos network
  • Ticket: Data that authenticates a principal’s identity
  • Credentials: a ticket and a service key
  • KDC: Key Distribution Center, which authenticates principals
  • TGS: Ticket Granting Service
  • TGT: Ticket Granting Ticket
  • C/S: Client Server, regarding communications between the two

Kerberos provides mutual authentication of client and server. Kerberos mitigates replay attacks (where attackers sniff Kerberos credentials and replay them on the network) via the use of timestamps.
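A highly simplified sketch of that flow (the “encrypt” stub below stands in for real symmetric cryptography, and all names are illustrative; real Kerberos messages are far richer):

```python
import time

def encrypt(key, data):
    return ("encrypted-with", key, data)   # stand-in, not real crypto

# The KDC stores the secret keys of all principals.
kdc_keys = {"alice": "k-alice", "fileserver": "k-fs", "tgs": "k-tgs"}

# 1. AS exchange: Alice authenticates to the KDC and receives a TGT,
#    sealed with the TGS key so Alice can neither read nor forge it.
tgt = encrypt(kdc_keys["tgs"], {"principal": "alice", "ts": time.time()})

# 2. TGS exchange: Alice presents the TGT and receives a service ticket
#    for the file server, sealed with the file server's key.
ticket = encrypt(kdc_keys["fileserver"], {"principal": "alice", "ts": time.time()})

# 3. C/S exchange: the file server decrypts the ticket with its own key
#    and rejects stale timestamps, which is what defeats replayed traffic.
```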

The primary weakness of Kerberos is that the KDC stores the plaintext keys of all principals (clients and servers). A compromise of the KDC (physical or electronic) can lead to the compromise of every key in the Kerberos realm. The KDC and TGS are also single points of failure.

SESAME is Secure European System for Applications in a Multi-vendor Environment, a single sign-on system that supports heterogeneous environments.

“SESAME adds to Kerberos: heterogeneity, sophisticated access control features, scalability of public key systems, better manageability, audit and delegation.” Of those improvements, the addition of public key (asymmetric) encryption is the most compelling. It addresses one of the biggest weaknesses in Kerberos: the plaintext storage of symmetric keys.

Assessing access control

A number of processes exist to assess the effectiveness of access control. Tests with a narrower scope include penetration tests, vulnerability assessments, and security audits.

Penetration tests

Penetration tests may include the following tests:

  • Network (Internet)
  • Network (internal or DMZ)
  • Wardialing
  • Wireless
  • Physical (attempt to gain entrance into a facility or room)

A zero-knowledge (also called black box) test is “blind”; the penetration tester begins with no external or trusted information, and begins the attack with public information only.

A full-knowledge test (also called crystal-box) provides internal information to the penetration tester, including network diagrams, policies and procedures, and sometimes reports from previous penetration testers.

Penetration testers use the following methodology:

  • Planning
  • Reconnaissance
  • Scanning (also called enumeration)
  • Vulnerability assessment
  • Exploitation
  • Reporting

Vulnerability testing

Vulnerability scanning (also called vulnerability testing) scans a network or system for a list of predefined vulnerabilities such as system misconfiguration, outdated software, or a lack of patching. A vulnerability testing tool such as Nessus (http://www.nessus.org) or OpenVAS (http://www.openvas.org) may be used to identify the vulnerabilities.

Security audit

A security audit is a test against a published standard. Organizations may be audited for PCI (Payment Card Industry) compliance, for example. PCI includes many required controls, such as firewalls, specific access control models, and wireless encryption.

Security assessments

Security assessments view many controls across multiple domains, and may include the following:

  • Policies, procedures, and other administrative controls
  • Assessing the real-world effectiveness of administrative controls
  • Change management
  • Architectural review
  • Penetration tests
  • Vulnerability assessments
  • Security audits

(My) CISSP Notes – Business Continuity and Disaster Recovery Planning

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

Business Continuity and Disaster Recovery Planning is an organization’s last line of defense: when all other controls have failed, BCP/DRP is the final control that may prevent drastic events such as injury, loss of life, or failure of an organization.

An additional benefit of BCP/DRP is that an organization that forms a business continuity team, and conducts a thorough BCP/DRP process, is forced to view the organization’s critical processes and assets in a different, often clarifying, light. Critical assets must be identified and key business processes understood. Standards are employed. Risk analysis conducted during a BCP/DRP plan can lead to immediate mitigating steps.

BCP

The overarching goal of a BCP is to ensure that the business will continue to operate before, throughout, and after a disaster event. The focus of a BCP is on the business as a whole: ensuring that the critical services the business provides, or the critical functions it regularly performs, can still be carried out both in the wake of a disruption and after the disruption has been weathered.

Business Continuity Planning provides a long-term strategy for ensuring the continued successful operation of an organization in spite of inevitable disruptive events and disasters.

BCP deals with keeping business operations running, perhaps in another location or using different tools and processes, after the disaster has struck.

DRP

The DRP provides a short-term plan for dealing with specific IT-oriented disruptions. The DRP focuses on efficiently attempting to mitigate the impact of a disaster, and on the immediate response and recovery of critical IT systems in the face of a significant disruptive event. The DRP does not focus on long-term business impact in the same fashion that a BCP does: DRP deals with restoring normal business operations after the disaster takes place.

These two plans, which have different scopes, are intertwined. The Disaster Recovery Plan serves as a subset of the overall Business Continuity Plan, because a BCP would be doomed to fail if it did not contain a tactical method for immediately dealing with disruption of information systems.

Defining disastrous events

The three common ways of categorizing the causes of disasters are by whether the threat agent is natural, human, or environmental in nature.

  • Natural disasters – fires and explosions, earthquakes, storms, floods, hurricanes, tornadoes, landslides, tsunamis, pandemics
  • Human disasters (intentional or unintentional threat) – accidents, crime and mischief, war and terrorism, cyber attacks/cyber warfare, civil disturbance
  • Environmental disasters – this class of threat includes items such as power issues (blackout, brownout, surge, spike), system component or other equipment failures, application or software flaws.

Though errors and omissions are the most common threat faced by an organization, they also represent the type of threat that can be most easily avoided.

The safety of an organization’s personnel should be guaranteed even at the expense of efficient or even successful restoration of operations or recovery of data.

Recovering from a disaster

The general process of disaster recovery involves responding to the disruption; activating the recovery team; ongoing tactical communication of the status of the disaster and its associated recovery; further assessment of the damage caused by the disruptive event; and recovery of critical assets and processes in a manner consistent with the extent of the disaster.

  • Respond – In order to begin the disaster recovery process, there must be an initial response that begins the process of assessing the damage. The initial assessment will determine if the event in question constitutes a disaster.
  • Activate Team – If during the initial response to a disruptive event a disaster is declared, then the team that will be responsible for recovery needs to be activated.
  • Communicate – After the successful activation of the disaster recovery team, it is likely that many individuals will be working in parallel on different aspects of the overall recovery process. In addition to communication of internal status regarding the recovery activities, the organization must be prepared to provide external communications, which involves disseminating details regarding the organization’s recovery status with the public.
  • Assess – A more detailed and thorough assessment will be done by the now-activated disaster recovery team. The team will proceed to assess the extent of the damage to determine the proper steps to ensure that the organization’s mission is fulfilled.
  • Reconstitution – The primary goal of the reconstitution phase is to successfully recover critical business operations, either at the primary site or at a secondary site.

BCP/DRP Project elements

A BCP project typically has four components: scope determination, business impact assessment, identification of preventive controls, and implementation.

BCP Scope

The success and effectiveness of a BCP depends greatly on whether senior management and the project team properly define its scope. Specific questions will need to be asked of the BCP/DRP planning team, like “What is in and out of scope of this plan?”

Business impact assessment (BIA)

The BIA describes the impact that a disaster is expected to have on business operations. Any BIA should contain the following tasks:

  • Perform a vulnerability assessment – The goal of the vulnerability assessment is to determine the impact of the loss of a critical business function.
  • Perform a criticality assessment – The team members need to estimate the duration of a disaster event to effectively prepare the criticality assessment. Project team members need to consider the impact of a disruption based on the length of time that a disaster impairs critical business functions.
  • Determine the Maximum Tolerable Downtime – The primary goal of the BIA is to determine the Maximum Tolerable Downtime (MTD), also known as the Maximum Tolerable Period Of Disruption (MTPD), for a specific IT asset. MTD is the maximum period of time that a critical business function can be inoperative before the company incurs significant and long-lasting damage.
  • Establish recovery targets – These targets represent the period of time from the start of a disaster until critical processes have resumed functioning. Two primary recovery targets are established for each business process: the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO). RTO is the maximum period of time in which a business process must be restored after a disaster; it is also called the system recovery time (see the sketch after this list).

    RPO is the maximum period of time in which data might be lost if a disaster strikes. The RPO represents the maximum acceptable amount of data/work loss for a given process because of a disaster or disruptive event.

  • Determine resource requirements – This portion of the BIA is a listing of the resources that an organization needs in order to continue operating each critical business function.
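A small sketch tying these targets together (the consistency rules encoded here follow from the definitions above: downtime must fit within the MTD, and backups must run at least as often as the RPO allows; all numbers are invented):

```python
mtd_hours = 24               # maximum tolerable downtime
rto_hours = 8                # target time to restore the process
rpo_hours = 4                # maximum acceptable data loss
backup_interval_hours = 4    # how often backups actually run

assert rto_hours <= mtd_hours, "RTO exceeds what the business can tolerate"
assert backup_interval_hours <= rpo_hours, "backups too infrequent for the RPO"
print("recovery targets are internally consistent")
```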

Identify preventive controls

Preventive controls prevent disruptive events from having an impact. The BIA will identify some risks which might be mitigated immediately. Once the BIA is complete, the BCP team knows the Maximum Tolerable Downtime. This metric, as well as others including the Recovery Point Objective and Recovery Time Objective, are used to determine the recovery strategy.

Once an organization has determined its maximum tolerable downtime, the choice of recovery options can be determined. For example, a 10-day MTD indicates that a cold site may be a reasonable option. An MTD of a few hours indicates that a redundant site or hot site is a potential option.

  • A redundant site is an exact production duplicate of a system that has the capability to seamlessly operate all necessary IT operations without loss of services to the end user of the system.
  • A hot site is a location that an organization may relocate to following a major disruption or disaster. It is important to note the difference between a hot and a redundant site: hot sites can quickly recover critical IT functionality, sometimes measured in minutes instead of hours, but a redundant site will appear to the end user as operating normally no matter what the state of operations is for the IT program.
  • A warm site has some aspects of a hot site, for example, readily accessible hardware and connectivity, but it will have to rely upon backup data in order to reconstitute a system after a disruption. An organization must be able to withstand an MTD of at least one to three days in order to consider a warm site solution.
  • A cold site is the least expensive recovery solution to implement. It does not include backup copies of data, nor does it contain any immediately available hardware.
  • Reciprocal agreements are a bi-directional agreement between two organizations in which one organization promises another organization that it can move in and share space if it experiences a disaster.
  • Mobile sites are “datacenters on wheels”: towable trailers that contain racks of computer equipment.
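Putting the MTD guidance above in one place, here is a sketch of how the MTD might narrow the options (the thresholds are rough illustrations drawn from the examples in this section, not authoritative figures):

```python
def candidate_options(mtd_hours: float) -> list[str]:
    if mtd_hours <= 12:        # an MTD of a few hours
        return ["redundant site", "hot site"]
    if mtd_hours <= 72:        # roughly one to three days
        return ["warm site"]
    return ["cold site", "reciprocal agreement", "mobile site"]

print(candidate_options(4))    # ['redundant site', 'hot site']
print(candidate_options(240))  # a 10-day MTD: a cold site is reasonable
```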

As discussed previously, the Business Continuity Plan is an umbrella plan that contains others plans. In addition to the Disaster recovery plan, other plans include the Continuity of Operations Plan (COOP), the Business Resumption/Recovery Plan (BRP), Continuity of Support Plan, Cyber Incident Response Plan, Occupant Emergency Plan (OEP), and the Crisis Management Plan (CMP).

The Business Recovery Plan (also known as the Business Resumption Plan) details the steps required to restore normal business operations after recovering from a disruptive event. This may include switching operations from an alternate site back to a (repaired) primary site.

The Continuity of Support Plan focuses narrowly on support of specific IT systems and applications. It is also called the IT Contingency Plan, emphasizing IT over general business support.

The Cyber Incident Response Plan is designed to respond to disruptive cyber events, including network-based attacks, worms, computer viruses, Trojan horses.

The Occupant Emergency Plan (OEP) provides the “response procedures for occupants of a facility in the event of a situation posing a potential threat to the health and safety of personnel, the environment, or property”. This plan is facilities-focused, as opposed to business- or IT-focused.

The Crisis Management Plan (CMP) is designed to provide effective coordination among the managers of the organization in the event of an emergency or disruptive event. A key tool leveraged for staff communication by the Crisis Communications Plan is the call tree, which is used to quickly communicate news throughout an organization without overburdening any specific person. The call tree works by assigning each employee a small number of other employees they are responsible for calling in an emergency event.
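A sketch of the fan-out idea behind a call tree (employee names are placeholders): with each person calling a handful of others, news reaches n people in roughly log n rounds, and nobody makes more than a few calls.

```python
def build_call_tree(employees: list[str], fan_out: int = 3) -> dict[str, list[str]]:
    tree = {}
    for i, caller in enumerate(employees):
        start = i * fan_out + 1          # each caller rings the next fan_out people
        tree[caller] = employees[start:start + fan_out]
    return tree

staff = [f"employee{i}" for i in range(10)]
for caller, callees in build_call_tree(staff).items():
    if callees:
        print(f"{caller} calls {', '.join(callees)}")
```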

Implementation

The implementation phase consists of testing, training and awareness, and continued maintenance.

In order to ensure that a Disaster Recovery Plan represents a viable plan for recovery, thorough testing is needed. There are different types of testing:

  • The DRP Review is the most basic form of initial DRP testing, and is focused on simply reading the DRP in its entirety to ensure completeness of coverage.
  • Checklist (also known as consistency) testing lists all necessary components required for successful recovery, and ensures that they are, or will be, readily available should a disaster occur. Another test that is commonly completed at the same time as the checklist test is the structured walkthrough, which is also often referred to as a tabletop exercise.
  • A simulation test, also called a walkthrough drill (not to be confused with the discussion-based structured walkthrough), goes beyond talking about the process and actually has teams carry out the recovery process. A pretend disaster is simulated, to which the team must respond as directed by the DRP.
  • Another type of DRP test is parallel processing. This type of test is common in environments where transactional data is a key component of the critical business processing. Typically, this test involves recovery of critical processing components at an alternate computing facility, and then restoring data from a previous backup. Note that regular production systems are not interrupted.
  • Arguably, the highest-fidelity of all DRP tests involves business interruption testing. However, this type of test can actually be the cause of a disaster, so extreme caution should be exercised before attempting an actual interruption test.

Once the initial BCP/DRP plan is completed, tested, trained, and implemented, it must be kept up to date: BCP/DRP plans must keep pace with all critical business and IT changes. Business continuity and disaster recovery planning are a business’ last line of defense against failure. If other controls have failed, BCP/DRP is the final control; if it fails, the business may fail.

A handful of specific frameworks are worth discussing, including NIST SP 800-34, ISO/IEC 27031, and BCI.

(My) CISSP Notes – Security Architecture and Design

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for Dummies”.

Security Architecture and Design describes fundamental logical hardware, operating system, and software security components, and how to use those components to design, architect, and evaluate secure computer systems.

Security Architecture and Design is a three-part domain. The first part covers the hardware and software required to have a secure computer system. The second part covers the logical models required to keep the system secure, and the third part covers evaluation models that quantify how secure the system really is.

Secure system design concepts

Layering separates hardware and software functionality into modular tiers. A generic list of security architecture layers is as follows:

1. Hardware

2. Kernel and device drivers

3. Operating System

4. Applications

Abstraction hides unnecessary details from the user. Complexity is the enemy of security: the more complex a process is, the less secure it is.

A security domain is the list of objects a subject is allowed to access. Confidential, Secret, and Top Secret are three security domains used by the U.S. Department of Defense (DoD), for example. With respect to kernels, two domains are user mode and kernel mode.

The ring model is a form of CPU hardware layering that separates and protects domains (such as kernel mode and user mode) from each other. Many CPUs, such as the Intel x86 family, have four rings, ranging from ring 0 (kernel) to ring 3 (user).

The rings are (theoretically) used as follows:

• Ring 0: Kernel

• Ring 1: Other OS components that do not fit into Ring 0

• Ring 2: Device drivers

• Ring 3: User applications

Processes communicate between the rings via system calls, which allow processes to communicate with the kernel and provide a window between the rings. The ring model also provides abstraction: the nitty-gritty details of saving the file are hidden from the user, who simply presses the “save file” button. A newer mode called hypervisor mode (informally called “ring -1”) allows virtual guests to operate in ring 0, controlled by the hypervisor one ring “below”.

An open system uses open hardware and standards, using standard components from a variety of vendors. An IBM-compatible PC is an open system.

A closed system uses proprietary hardware or software.

Secure hardware architecture

Secure Hardware Architecture focuses on the physical computer hardware required to have a secure system.

The system unit is the computer’s case: it contains all of the internal electronic computer components, including motherboard, internal disk drives, power supply, etc. The motherboard contains hardware including the CPU, memory slots, firmware, and peripheral slots such as PCI (Peripheral Component Interconnect) slots.

A computer bus is the primary communication channel on a computer system. Communication between the CPU, memory, and input/output devices such as keyboard, mouse, display, etc., occurs via the bus. Some computer designs use two buses: a northbridge and a southbridge. The northbridge, also called the Memory Controller Hub (MCH), connects the CPU to RAM and video memory. The southbridge, also called the I/O Controller Hub (ICH), connects input/output (I/O) devices, such as disk, keyboard, mouse, CD drive, USB ports, etc. The northbridge is directly connected to the CPU, and is faster than the southbridge.

The “fetch and execute” (also called “Fetch, Decode, Execute,” or FDX) process actually takes four steps:

1. Fetch Instruction 1

2. Decode Instruction 1

3. Execute Instruction 1

4. Write (save) result 1

These four steps take one clock cycle to complete.

Pipelining combines multiple steps into one combined process, allowing simultaneous fetch, decode, execute, and write steps for different instructions.

An interrupt indicates that an asynchronous event has occurred. CPU interrupts are a form of hardware interrupt that cause the CPU to stop processing its current task, save the state, and begin processing a new request. When the new task is complete, the CPU will complete the prior task.

A process is an executable program and its associated data loaded and running in memory. A parent process may spawn additional child processes called threads. A thread is a lightweight process (LWP). Threads are able to share memory, resulting in lower overhead compared to heavyweight processes.

Applications run as processes in memory, comprised of executable code and data. Multitasking allows multiple tasks (heavy weight processes) to run simultaneously on one CPU.

Multiprogramming is multiple programs running simultaneously on one CPU; multitasking is multiple tasks (processes) running simultaneously on one CPU, and multithreading is multiple threads (light weight processes) running simultaneously on one CPU.

Multiprocessing has a fundamental difference from multitasking: it runs multiple processes on multiple CPUs.
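The memory-sharing difference is visible in a few lines of Python; this sketch shows two threads updating one shared counter, something separate heavyweight processes could not do without explicit inter-process mechanisms:

```python
import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(100_000):
        with lock:          # shared memory still needs synchronization
            counter += 1

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: both threads wrote to the same memory
```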

A watchdog timer is designed to recover a system by rebooting after critical processes hang or crash. The watchdog timer counts down and reboots the system when it reaches zero.
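A toy model of that behavior (the tick counts and the print stand in for real hardware timers and an actual reboot):

```python
class Watchdog:
    def __init__(self, timeout_ticks: int):
        self.timeout = timeout_ticks
        self.remaining = timeout_ticks

    def pet(self):                  # a healthy process calls this regularly
        self.remaining = self.timeout

    def tick(self) -> bool:         # called on every clock tick
        self.remaining -= 1
        if self.remaining <= 0:
            print("watchdog expired: rebooting system")
            return True
        return False

wd = Watchdog(timeout_ticks=3)
for _ in range(5):                  # the monitored process hangs: no pet()
    if wd.tick():
        break
```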

CISC (Complex Instruction Set Computer) and RISC (Reduced Instruction Set Computer) are two forms of CPU design. CISC uses a large set of complex machine language instructions, while RISC uses a reduced set of simpler instructions.

Real (or primary) memory, such as RAM, is directly accessible by the CPU and is used to hold instructions and data for currently executing processes. Secondary memory, such as disk-based memory, is not directly accessible.

Cache memory is the fastest memory on the system, required to keep up with the CPU as it fetches and executes instructions. The fastest portion of the CPU cache is the register file. The next fastest form of cache memory is Level 1 cache, located on the CPU itself. Finally, Level 2 cache is connected to (but outside) the CPU.

RAM is volatile memory used to hold instructions and data of currently running programs.

Static Random Access Memory (SRAM) is expensive and fast memory.

Dynamic Random Access Memory (DRAM) stores bits in small capacitors (like small batteries), and is slower and cheaper than SRAM.

ROM (Read Only Memory) is nonvolatile: data stored in ROM maintains integrity after loss of power.

Addressing modes are CPU-dependent; commonly supported modes include direct, indirect, register direct, and register indirect. Direct mode says “Add X to the value stored in memory location #YYYY.” That location stores the number 7, so the CPU adds X + 7. Indirect starts the same way: “Add X to the value stored in memory location #YYYY.” The difference is that #YYYY stores another memory location (#ZZZZ). The CPU follows the pointer to #ZZZZ, which holds the value 7, and adds X + 7. Register direct addressing is the same as direct addressing, except it references a CPU cache register. Register indirect is also the same as indirect, except the pointer is stored in a register.
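The direct/indirect example above, modeled with a dict standing in for memory:

```python
X = 5

memory = {"#YYYY": 7}                       # direct: #YYYY holds the operand
print(X + memory["#YYYY"])                  # 12

memory = {"#YYYY": "#ZZZZ", "#ZZZZ": 7}     # indirect: #YYYY holds a pointer
print(X + memory[memory["#YYYY"]])          # follow the pointer, still 12
```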

Memory protection prevents one process from affecting the confidentiality, integrity, or availability of another.

Process isolation is a logical control that attempts to prevent one process from interfering with another. This is a common feature among multiuser operating systems such as Linux, UNIX, or recent Microsoft Windows operating systems.

Hardware segmentation takes process isolation one step further by mapping processes to specific memory locations.

Virtual memory provides virtual address mapping between applications and hardware memory.

Swapping uses virtual memory to copy contents in primary memory (RAM) to or from secondary memory (not directly addressable by the CPU, on disk). Swap space is often a dedicated disk partition that is used to extend the amount of available memory. If the kernel attempts to access a page (a fixed-length block of memory) stored in swap space, a page fault occurs (an error that means the page is not located in RAM), and the page is “swapped” from disk to RAM. The terms “swapping” and “paging” are often used interchangeably, but there is a slight difference: paging copies a block of memory to or from disk, while swapping copies an entire process to or from disk.

Firmware stores small programs that do not change frequently, such as a computer’s BIOS (discussed below), or a router’s operating system and saved configuration. Various types of ROM chips may store firmware, including PROM, EPROM, and EEPROM.

Flash memory (such as USB thumb drives) is a specific type of EEPROM, used for small portable disk drives. The difference is any byte of an EEPROM may be written, while flash drives are written by (larger) sectors. This makes flash memory faster than EEPROMs, but still slower than magnetic disks.

The IBM PC-compatible BIOS (Basic Input Output System) contains code in firmware that is executed when a PC is powered on. It first runs the Power-On Self-Test (POST), which performs basic tests, including verifying the integrity of the BIOS itself, testing the memory, and identifying system devices, among other tasks. Once the POST process is complete and successful, it locates the boot sector (for systems that boot off disks), which contains the machine code for the operating system kernel. The kernel then loads and executes, and the operating system boots up.

WORM (Write Once Read Many) storage can be written to once, and read many times. WORM storage helps assure the integrity of the data it contains: there is some assurance that it has not been (and cannot be) altered, short of destroying the media itself. The most common types of WORM media are CD-R (Compact Disc Recordable) and DVD-R (Digital Versatile Disc Recordable). Note that CD-RW and DVD-RW (Read/Write) are not WORM media.

Techniques used to provide process isolation include virtual memory (discussed above), object encapsulation, and time multiplexing.

Secure operating system and software architecture

Secure Operating System and Software Architecture builds upon the secure hardware described in the previous section, providing a secure interface between hardware and the applications (and users) which access the hardware.

Kernels have two basic designs: monolithic and microkernel. A monolithic kernel is compiled into one static executable, and the entire kernel runs in supervisor mode. All functionality required by a monolithic kernel must be precompiled in. Microkernels are modular kernels. A microkernel is usually smaller and has less native functionality than a typical monolithic kernel (hence the term “micro”), but can add functionality via loadable kernel modules. Microkernels may also run kernel modules in user mode (usually ring 3), instead of supervisor mode. A core function of the kernel is running the reference monitor, which mediates all access between subjects and objects. It enforces the system’s security policy, such as preventing a normal user from writing to a restricted file like the system password file.
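A hedged sketch of the reference monitor concept, one choke point that checks every subject-to-object access against policy (the policy format, names, and paths below are illustrative, not from any real kernel):

```python
# Illustrative policy: (subject, object) -> set of permitted actions.
POLICY = {
    ("alice", "/etc/passwd"): {"read"},
    ("root",  "/etc/passwd"): {"read", "write"},
}

def reference_monitor(subject, obj, action):
    """Mediate the access: allow only what the policy explicitly grants."""
    if action in POLICY.get((subject, obj), set()):
        return True
    raise PermissionError(f"{subject} may not {action} {obj}")

reference_monitor("root", "/etc/passwd", "write")       # allowed
try:
    reference_monitor("alice", "/etc/passwd", "write")  # denied
except PermissionError as e:
    print(e)
```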

Microsoft NTFS (New Technology File System) has the following basic file permissions:

• Read

• Write

• Read and execute

• Modify

• Full control (read, write, execute, modify, and delete)

Setuid is a Linux and UNIX file permission that makes an executable run with the permissions of the file’s owner, and not those of the running user. Setgid (set group ID) programs run with the permissions of the file’s group. Setuid programs must be carefully scrutinized for security holes: attackers may attempt to trick a setuid program such as passwd into altering other files.
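A short standard-library sketch that inspects a file’s setuid and setgid bits (the path assumes a typical Linux layout and may differ on other systems):

```python
import os
import stat

st = os.stat("/usr/bin/passwd")   # passwd is typically setuid root
if st.st_mode & stat.S_ISUID:
    print("setuid: runs with the file owner's privileges")
if st.st_mode & stat.S_ISGID:
    print("setgid: runs with the file group's privileges")
```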

Virtualization adds a software layer between an operating system and the underlying computer hardware. This allows multiple “guest” operating systems to run simultaneously on one physical “host” computer. There are two basic virtualization types: transparent virtualization (sometimes called full virtualization) and paravirtualization. Transparent virtualization runs stock operating systems, such as Windows 7 or Ubuntu Linux 9.10, as virtual guests. No changes to the guest OS are required. Paravirtualization runs specially modified operating systems, with modified kernel system calls.

Thin clients are simpler than normal computer systems, which have hard drives, full operating systems, locally installed applications, etc. They rely on central servers, which serve applications and store the associated data. Thin client applications normally run on a system with a full operating system, but use a Web browser as a universal client, providing access to robust applications that are downloaded from the thin client server and run in the client’s browser.

A diskless workstation (also called diskless node) contains CPU, memory, and firmware, but no hard drive. Diskless devices include PCs, routers, embedded devices, and others.

System vulnerabilities, threats and countermeasures

System Threats, Vulnerabilities, and Countermeasures describes security architecture and design vulnerabilities, and the corresponding exploits that may compromise system security.

Emanations are energy that escapes an electronic system, and which may be remotely monitored under certain circumstances.

A covert channel is any communication that violates security policy. The opposite of a covert channel is an overt channel: authorized communication that complies with security policy. Two specific types of covert channels are storage channels and timing channels. A storage channel example uses shared storage, such as a temporary directory, to allow two subjects to signal each other. A covert timing channel relies on the system clock to infer sensitive information.
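A toy covert storage channel, assuming two processes that may not communicate directly but can both see a shared temporary directory (the file name is hypothetical):

```python
import os

SIGNAL = "/tmp/.covert_flag"    # shared storage both subjects can observe

def send_bit(bit):
    """Sender encodes one bit as the presence or absence of a file."""
    if bit:
        open(SIGNAL, "w").close()      # file exists = 1
    elif os.path.exists(SIGNAL):
        os.remove(SIGNAL)              # file absent = 0

def receive_bit():
    """Receiver reads the bit without any direct communication."""
    return 1 if os.path.exists(SIGNAL) else 0

send_bit(1)
assert receive_bit() == 1
```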

Buffer overflows can occur when a programmer fails to perform bounds checking: more data is written to a buffer than it can hold, overwriting adjacent memory.
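A sketch of the bounds check a safe copy routine must perform; omitting the length test is the classic C strcpy-style overflow (the function name is illustrative):

```python
def bounded_copy(dst: bytearray, src: bytes):
    if len(src) > len(dst):            # the bounds check
        raise ValueError("source larger than destination buffer")
    for i, byte in enumerate(src):
        dst[i] = byte

buf = bytearray(8)
bounded_copy(buf, b"hello")            # fits within the buffer
try:
    bounded_copy(buf, b"A" * 16)       # rejected instead of overflowing
except ValueError as e:
    print(e)
```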

Time of Check/Time of Use (TOCTOU) attacks are also called race conditions: an attacker attempts to alter a condition after it has been checked by the operating system, but before it is used. The term race condition comes from the idea of two events or signals that are racing to influence an activity.
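A minimal sketch of the TOCTOU shape: the check (os.access) and the use (open) are two separate steps, and the window between them is the race an attacker tries to win (the path is hypothetical):

```python
import os

path = "/tmp/report.txt"

if os.access(path, os.R_OK):     # time of check
    # <-- an attacker who wins the race replaces `path` here,
    #     e.g. with a symlink to a sensitive file
    with open(path) as f:        # time of use: may open a different file
        data = f.read()
# Safer pattern: open the file first, then check properties of the
# descriptor actually held (e.g. with os.fstat), so check and use agree.
```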

A backdoor is a shortcut in a system that allows a user to bypass security checks (such as username/password authentication) to log in.

Malicious Code or Malware is the generic term for any type of software that attacks an application or system.

  • Zero-day exploits are malicious code (a threat) for which there is no vendor-supplied patch (meaning there is an unpatched vulnerability).
  • A rootkit is malware which replaces portions of the kernel and/or operating system. A user-mode rootkit operates in ring 3 on most systems, replacing operating system components in “userland.” Commonly rootkitted binaries include the ls or ps commands on Linux/UNIX systems, or dir or tasklist on Microsoft Windows systems. A kernel-mode rootkit replaces the kernel, or loads malicious loadable kernel modules. Kernel-mode rootkits operate in ring 0 on most operating systems.
  • A logic bomb is a malicious program that is triggered when a logical condition is met.
  • Packers provide runtime compression of executables. The original exe is compressed, and a small executable decompressor is prepended to the exe. Upon execution, the decompressor unpacks the compressed executable machine code and runs it.

Server-side attacks

Server-side attacks (also called service-side attacks) are launched directly from an attacker (the client) to a listening service. Server-side attacks exploit vulnerabilities in installed services.

Client-side attacks

Client-side attacks occur when a user downloads malicious content. The flow of data is reversed compared to server-side attacks: client-side attacks initiate from the victim who downloads content from the attacker.

Security Assertion Markup Language (SAML) is an XML-based framework for exchanging security information, including authentication data.

Polyinstantiation allows two different objects to have the same name. The name is based on the Latin roots for multiple (poly) and instances (instantiation).

Database polyinstantiation means two rows may have the same primary key but contain different data, distinguished by their security labels: users at a lower clearance see one version of the row, while users at a higher clearance see another. This prevents lower-cleared users from inferring the existence of the restricted row.

Inference and aggregation occur when a user is able to use lower level access to learn restricted information.

Inference requires deduction: clues are available, and a user makes a logical deduction.

Aggregation is similar to inference, but there is a key difference: no deduction is required.

Security Countermeasures

The primary countermeasure to mitigate the attacks described in the previous section is defense in depth: multiple overlapping controls spanning across multiple domains, which enhance and support each other.

System hardening means configuring systems according to the following concepts:

  • Remove all unnecessary components.
  • Remove all unnecessary accounts.
  • Close all unnecessary network listening ports.
  • Change all default passwords to complex, difficult-to-guess passwords.
  • Run all necessary programs at the lowest possible privilege.
  • Install security patches as soon as they are available.

Heterogeneous environment The advantage of a heterogeneous environment is its variety of systems; for one thing, the various types of systems probably won’t possess common vulnerabilities, which makes them harder to attack.

System resilience The resilience of a system is a measure of its ability to keep running, even under less-than-ideal conditions.

Security models

Security models help us to understand sometimes-complex security mechanisms in information systems. Security models illustrate simple concepts that we can use when analyzing an existing system or designing a new one.

The concepts of reading down and writing up apply to Mandatory Access Control models such as Bell-LaPadula. Reading down occurs when a subject reads an object at a lower sensitivity level, such as a top secret subject reading a secret object. Writing up occurs when a subject passes information up to an object at a higher sensitivity level than the subject has permission to read: the subject can add information to the object without seeing anything else the object contains. The only difference between reading down and writing up is the direction in which information is passed.

Access Control Models

  • A state machine model is a mathematical model that groups all possible system occurrences, called states. Every possible state of a system is evaluated, showing all possible interactions between subjects and objects. If every state is proven to be secure, the system is proven to be secure.
  • The Bell-LaPadula model was originally developed for the U.S. Department of Defense. It is focused on maintaining the confidentiality of objects. Bell-LaPadula operates by observing two rules: the Simple Security Property and the * Security Property. The Simple Security Property states that there is “no read up”: a subject at a specific classification level cannot read an object at a higher classification level. The * Security Property is “no write down”: a subject at a higher classification level cannot write to a lower classification level. Bell-LaPadula also defines two additional properties that dictate how the system issues security labels for objects. The Strong Tranquility Property states that security labels will not change while the system is operating. The Weak Tranquility Property states that security labels will not change in a way that conflicts with defined security properties.
  • Take-Grant systems specify the rights that a subject can transfer to or take from another subject or object. These rights are defined through four basic operations: create, revoke, take, and grant.
  • The Biba integrity model (sometimes referred to as Bell-LaPadula upside down) was the first formal integrity model. Biba is the model of choice when integrity protection is vital. The Biba model has two primary rules: the Simple Integrity Axiom and the * Integrity Axiom. The Simple Integrity Axiom is “no read down”: a subject at a specific classification level cannot read data at a lower classification. This protects integrity by preventing bad information from moving up from lower integrity levels. The * Integrity Axiom is “no write up”: a subject at a specific classification level cannot write to data at a higher classification. This protects integrity by preventing bad information from moving up to higher integrity levels.

Biba takes the Bell-LaPadula rules and reverses them, showing how confidentiality and integrity are often at odds. If you understand Bell-LaPadula (no read up; no write down), you can extrapolate Biba by reversing the rules: no read down; no write up. The sketch below illustrates both.
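A minimal sketch of both models’ rules, with sensitivity levels as integers (higher = more sensitive); reversing the two comparisons turns Bell-LaPadula into Biba:

```python
LEVELS = {"unclassified": 0, "secret": 1, "top secret": 2}

def blp_can_read(subject, obj):
    return subject >= obj      # Simple Security Property: no read up

def blp_can_write(subject, obj):
    return subject <= obj      # * Security Property: no write down

s, o = LEVELS["secret"], LEVELS["top secret"]
assert not blp_can_read(s, o)  # a secret subject cannot read top secret
assert blp_can_write(s, o)     # but may write up

# Biba reverses both rules to protect integrity instead:
def biba_can_read(subject, obj):
    return subject <= obj      # Simple Integrity Axiom: no read down

def biba_can_write(subject, obj):
    return subject >= obj      # * Integrity Axiom: no write up
```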

  • Clark-Wilson is a real-world integrity model (an informal model) that protects integrity by requiring subjects to access objects via programs. Because the programs have specific limitations on what they can and cannot do to objects, Clark-Wilson effectively limits the capabilities of the subject. Clark-Wilson uses two primary concepts to ensure that security policy is enforced: well-formed transactions and separation of duties.
  • The Chinese Wall model is designed to avoid conflicts of interest by prohibiting one person, such as a consultant, from accessing multiple conflict of interest categories (CoIs). The Chinese Wall model requires that CoIs be identified so that once a consultant gains access to one CoI, they cannot read or write to an opposing CoI.
  • The noninterference model ensures that data at different security domains remain separate from one another.

Evaluation methods, certification and accreditation

Evaluation criteria provide a standard for qualifying the security of a computer system or network. These criteria include the Trusted Computer System Evaluation Criteria (TCSEC), Trusted Network Interpretation (TNI), European Information Technology Security Evaluation Criteria (ITSEC) and the Common Criteria.

Trusted Computer System Evaluation Criteria (TCSEC)

TCSEC is commonly known as the Orange Book, and it is the formal implementation of the Bell-LaPadula model. The evaluation criteria were developed to achieve the following objectives:

  • Measurement: provides a metric for assessing comparative levels of trust between different computer systems.
  • Guidance: identifies standard security requirements that vendors must build into systems to achieve a given trust level.
  • Acquisition: provides customers a standard for specifying acquisition requirements and identifying systems that meet those requirements.

The Orange Book was the first significant attempt to define differing levels of security and access control implementation within an IT system.

The Orange Book defines four major hierarchical classes of security protection and numbered subclasses (higher numbers indicate higher security):

  • D: Minimal protection
  • C: Discretionary protection (C1 and C2)
  • B: Mandatory protection (B1, B2 and B3)
  • A: Verified protection (A1)

Trusted Network Interpretation (TNI)

TNI addresses confidentiality and integrity in trusted computer/communications network systems. Within the Rainbow Series, it is known as the Red Book.

European Information Technology Security Evaluation Criteria (ITSEC)

ITSEC addresses confidentiality, integrity and availability, as well as evaluating an entire system defined as Target of Evaluation (TOE), rather than a single computing platform.

ITSEC evaluates functionality (F: how well the system works) and assurance (E: the ability to evaluate the security of a system). Assurance correctness ratings range from E0 to E6.

The equivalent ITSEC/TCSEC ratings are:

  • E0 = D
  • F-C1,E1 = C1
  • F-C2,E2 = C2
  • F-B1,E3 = B1
  • F-B2,E4 = B2
  • F-B3,E5 = B3
  • F-B3,E6 = A1

Common criteria

The International Common Criteria is an internationally agreed-upon standard for describing and testing the security of IT products. It is designed to avoid requirements beyond the current state of the art, and presents a hierarchy of requirements for a range of classifications and systems.

The Common Criteria defines eight evaluation assurance levels (EALs), EAL0 through EAL7, in order of increasing trust.

System Certification and Accreditation

System certification is a formal methodology for comprehensive testing and documentation of information system security safeguards, both technical and non-technical, in a given environment, using established evaluation criteria (such as the TCSEC).

Accreditation is an official, written approval for the operation of a specific system in a specific environment, as documented in the certification report.

(My) CISSP Notes – Information Security Governance and Risk Management

Note: These notes were made using the following books: “CISSP Study Guide” and “CISSP for dummies”.
The Information Security Governance and Risk Management domain focuses on risk analysis and mitigation. This domain also details security governance, or the organizational structure required for a successful information security program.

CIA triad

  •  Confidentiality seeks to prevent the unauthorized disclosure of information. In other words, confidentiality seeks to prevent unauthorized read access to data.
  • Integrity seeks to prevent unauthorized modification of information. In other words, integrity seeks to prevent unauthorized write access to data.
  • Availability ensures that information is available when needed.

The CIA triad may also be described by its opposite: Disclosure, Alteration, and Destruction (DAD).

The term “AAA” is often used, describing cornerstone concepts Authentication, Authorization, and Accountability.

  • Authentication proves an identity claim, such as by verifying a password.
  • Authorization describes the actions you can perform on a system once you have identified and authenticated.
  • Accountability holds users accountable for their actions. This is typically done by logging and analyzing audit data.
  • Nonrepudiation means a user cannot deny (repudiate) having performed a transaction. It combines authentication and integrity: nonrepudiation authenticates the identity of a user who performs a transaction, and ensures the integrity of that transaction. You must have both authentication and integrity to have nonrepudiation.
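In practice, nonrepudiation usually rests on digital signatures: only the holder of the private key could have produced the signature (authentication), and any alteration of the signed transaction breaks verification (integrity). A hedged sketch using the third-party cryptography package (assumed installed via pip install cryptography):

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

transaction = b"transfer $100 to account 42"
signature = private_key.sign(transaction)   # only the key holder can sign

# Verification proves who signed and that nothing changed; it raises
# InvalidSignature if the transaction or signature was altered.
public_key.verify(signature, transaction)
```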

Least privilege means users should be granted the minimum amount of access (authorization) required to do their jobs, but no more.

Need to know is more granular than least privilege: the user must need to know that specific piece of information before accessing it.

Defense-in-Depth (also called layered defenses) applies multiple safeguards (also called controls: measures taken to reduce risk) to protect an asset.

Risk analysis

  • Assets are valuable resources you are trying to protect.
  • A threat is a potentially harmful occurrence, like an earthquake, a power outage, or a network-based worm. A threat is a negative action that may harm a system.
  • A vulnerability is a weakness that allows a threat to cause harm.

Risk = Threat × Vulnerability

To have risk, a threat must connect to a vulnerability.

The “Risk = Threat × Vulnerability” equation sometimes uses an added variable called impact: “Risk = Threat × Vulnerability × Impact”.

Impact is the severity of the damage, sometimes expressed in dollars.

Loss of human life has near-infinite impact on the exam. When calculating risk using the “Risk = Threat × Vulnerability × Impact” formula, any risk involving loss of human life is extremely high, and must be mitigated.

The Annualized Loss Expectancy (ALE) calculation allows you to determine the annual cost of a loss due to a risk. Once calculated, ALE allows you to make informed decisions to mitigate the risk.

The Asset value (AV) is the value of the asset you are trying to protect.

PII stands for Personally Identifiable Information.

The Exposure Factor (EF) is the percentage of an asset’s value lost due to an incident.

The Single Loss Expectancy (SLE) is the cost of a single loss. SLE = AV × EF.

The Annual Rate of Occurrence (ARO) is the number of losses you suffer per year.

The Annualized Loss Expectancy (ALE) is your yearly cost due to a risk. It is calculated by multiplying the Single Loss Expectancy (SLE) times the Annual Rate of Occurrence (ARO).

The Total Cost of Ownership (TCO) is the total cost of a mitigating safeguard. TCO combines upfront costs (often a one-time capital expense) plus annual cost of maintenance, including staff hours, vendor maintenance fees, software subscriptions, etc.

The Return on Investment (ROI) is the amount of money saved by implementing a safeguard.
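A worked example with hypothetical numbers, tying AV, EF, SLE, ARO, ALE, TCO, and ROI together:

```python
AV = 100_000        # Asset Value: a $100,000 asset (hypothetical)
EF = 0.30           # Exposure Factor: an incident destroys 30% of its value
SLE = AV * EF       # Single Loss Expectancy = $30,000 per incident
ARO = 2             # Annual Rate of Occurrence: two incidents per year
ALE = SLE * ARO     # Annualized Loss Expectancy = $60,000 per year

TCO = 25_000        # hypothetical annual cost of a mitigating safeguard
# If the safeguard eliminates the risk, the yearly savings are:
ROI = ALE - TCO     # $35,000 per year: the safeguard is worth buying
print(SLE, ALE, ROI)
```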

Risk Choices

Once we have assessed risk, we must decide what to do. Options include accepting the risk, mitigating or eliminating the risk, transferring the risk, and avoiding the risk.

Quantitative and Qualitative Risk Analysis are two methods for analyzing risk. Quantitative Risk Analysis uses hard metrics, such as dollars. Qualitative Risk Analysis uses simple approximate values. Quantitative is more objective; qualitative is more subjective.

The risk management process

NIST’s Risk Management Guide for Information Technology Systems, Special Publication 800-30 (see http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf), describes the risk management process.

The guide describes a 9-step Risk Analysis process:

1. System Characterization – describes the scope of the risk management effort and the systems that will be analyzed.

2. Threat Identification – identifies the threats the system faces.

3. Vulnerability Identification – identifies the system’s vulnerabilities. Steps 2 and 3 together supply the inputs to the “Risk = Threat × Vulnerability” formula.

4. Control Analysis – analyzes the security controls (safeguards) that are in place or planned to mitigate risk.

5. Likelihood Determination

6. Impact Analysis

7. Risk Determination

8. Control Recommendations

9. Results Documentation

Information Security Governance

Information Security Governance is information security at the organizational level.

Security Policy and related documents

  • Policies are high-level management directives. Policy is high level: it does not delve into specifics. All policy should contain these basic components: Purpose, Scope, Responsibilities, and Compliance. NIST Special Publication 800-12 (see http://csrc.nist.gov/publications/nistpubs/800-12/800-12-html/chapter5.html) discusses three specific policy types: program policy, issue-specific policy, and system-specific policy. Program policy establishes an organization’s information security program.
  • A procedure is a step-by-step guide for accomplishing a task. They are low level and specific. Like policies, procedures are mandatory.
  • A standard describes the specific use of technology, often applied to hardware and software. Standards are mandatory. They lower the Total Cost of Ownership of a safeguard. Standards also support disaster recovery.
  • Guidelines are recommendations (which are discretionary).
  • Baselines are uniform ways of implementing a safeguard.

Roles and responsibilities

Primary information security roles include senior management, data owner, custodian, and user.

  • Senior Management creates the information security program and ensures that it is properly staffed, funded, and has organizational priority. It is responsible for ensuring that all organizational assets are protected.
  • The Data Owner (also called information owner or business owner) is a management employee responsible for ensuring that specific data is protected. Data owners determine data sensitivity labels and the frequency of data backup. The Data Owner (capital “O”) is responsible for ensuring that data is protected. A user who “owns” data (lower case “o”) has read/write access to objects.
  • A Custodian provides hands-on protection of assets such as data: performing data backups and restoration, patching systems, configuring antivirus software, etc. Custodians follow detailed orders; they do not make critical decisions on how data is protected.
  • Users must follow the rules: they must comply with mandatory policies, procedures, standards, etc.

Complying with laws and regulations is a top information security management priority: both in the real world and on the exam.

The exam will hold you to a very high standard in regard to compliance with laws and regulations. We are not expected to know the law as well as a lawyer, but we are expected to know when to call a lawyer.

The most legally correct answer is often the best for the exam.

Privacy is the protection of the confidentiality of personal information.

Due care and Due Diligence

Due care is doing what a reasonable person would do. It is sometimes called the “prudent man” rule. The term derives from “duty of care”: parents have a duty to care for their children, for example. Due diligence is the management of due care.

Due care is informal; due diligence follows a process.

Gross negligence is the opposite of due care. It is a legally important concept. If you suffer loss of PII, but can demonstrate due care in protecting the PII, you are on legally stronger ground, for example.

Auditing and Control Frameworks

Auditing means verifying compliance to a security control framework (or published specification).

A number of control frameworks are available to assist with auditing and risk analysis. Some, such as PCI (Payment Card Industry), are industry-specific (in this example, vendors who process credit cards). Others, such as OCTAVE, ISO 17799/27002, and COBIT, are more general.

OCTAVE stands for Operationally Critical Threat, Asset, and Vulnerability Evaluation, a risk management framework from Carnegie Mellon University. OCTAVE describes a three-phase process for managing risk. Phase 1 identifies staff knowledge, assets, and threats. Phase 2 identifies vulnerabilities and evaluates safeguards. Phase 3 conducts the Risk Analysis and develops the risk mitigation strategy. OCTAVE is a high-quality free resource which may be downloaded from: http://www.cert.org/octave/

ISO 17799 and the ISO 27000 series

ISO 17799 had 11 areas, focusing on specific information security controls:

1. Policy

2. Organization of information security

3. Asset management

4. Human resources security

5. Physical and environmental security

6. Communications and operations management

7. Access control

8. Information systems acquisition, development, and maintenance

9. Information security incident management

10. Business continuity management

11. Compliance

ISO 17799 was renumbered to ISO 27002 in 2005, to make it consistent with the 27000 series of ISO security standards.

Simply put, ISO 27002 describes information security best practices (techniques), and ISO 27001 describes a process for auditing those best practices (requirements).

COBIT (Control Objectives for Information and related Technology) is a control framework for employing information security governance best practices within an organization. COBIT was developed by ISACA (Information Systems Audit and Control Association).

ITIL (Information Technology Infrastructure Library) is a framework of best practices for IT Service Management (ITSM). ITIL contains five “Service Management Practices—Core Guidance” publications:

  • Service Strategy
  • Service Design
  • Service Transition
  • Service Operation
  • Continual Service Improvement

Certification and Accreditation

Certification is a detailed inspection that verifies whether a system meets the documented security requirements.

Accreditation is the Data Owner’s acceptance of the risk represented by that system.