Linux Hardening

Hardening In Theory

Defense in Depth

  • Firewalls protect the entry points

  • Network Intrusion Detection and Prevention Systems monitor transactions and find malicious activity based on behavior or indicators of compromise (IOCs)

  • Deep packet inspection looks at the contents of individual packets for anything suspicious

  • Firewalls, Anti-Virus, and Anti-Malware services run on hosts as well

Reducing the Attack Surface

The attack surface is the entire network and software environment that is exposed to remote or local attacks.

Measures for reducing it include

  • patching all software

  • Disabling or Uninstalling unused services

  • disabling unused user accounts

  • setting requirements for stronger passwords

Layers of Defense that you must harden to achieve defense in Depth:

  • Hardware/BIOS

  • Bootloader

  • Operating System

  • Services

  • Administration

  • Users

Role of CIS (Center for Internet Security)

Leading the global community to secure our ever-changing connected world.

CIS provides best practices for multiple operating systems.

The CIS Benchmarks and Controls are based on best practices for securing IT systems

The benchmarks provide "prescriptive guidance for establishing a secure configuration posture" for various operating systems, including Ubuntu 20.04 LTS (CIS Ubuntu 20.04 LTS Benchmark, 2021, p. 13)

Advantage of Hardening at the Hardware Level

  • Hardening the BIOS means security measures are in effect before the operating system starts

  • These changes cannot be modified by remote attacks

  • These changes cannot be undone at the OS level

Hardening the Bootloader

The bootloader manages the loading of the OS and it is where an administrator can configure options

Hardening the bootloader will prevent

  • Modifying boot configurations

  • The loading of a malicious kernel

Hardening the Kernel

The Kernel manages the resources for the OS

  • Access to system resources (e.g. the CPU, Input/Output Devices, etc.)

  • Shares resources between multiple requests

  • Allocates and deallocates memory for running processes

  • Allocates devices based on requests for them

Hardening the Kernel protects the OS by

  • Preventing the misallocation of system resources

  • Preventing the misallocation of memory

Hardening Storage Devices

Data-at-Rest is

  • Files on a logical or physical storage device

  • Records in a database

  • NOT files in use (e.g. documents open in an editor)

  • NOT files in transit (e.g. html file moving from server to browser)

A solution for protecting data-at-rest is encryption

  • The individual file

  • The directory

  • The volume

Access Control Lists

  • Read/write/execute are the basic settings for any file, and these are defined for the owning user, the owning group, and everyone

  • Does not offer any more granularity than a single user, single group, and everyone

  • Access control Lists allow more granularity

  • ACLs allow for the assignment of multiple individual users and multiple groups

Danger of World-writable file

  • Data in a world-writable file can be modified by any user

  • if the world-writable file is a script, anyone can modify it to do something malicious

  • World-writable configuration files make the scripts and services that use them vulnerable
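A quick way to audit for this risk is to search for files whose "other" write bit is set. The sketch below rehearses the technique in a throwaway directory (the file names are hypothetical); in practice you would point find at /etc, a web root, or /.

```shell
# Create a scratch directory with one safe file and one world-writable file
demo=$(mktemp -d)
touch "$demo/safe.conf" "$demo/risky.sh"
chmod 644 "$demo/safe.conf"   # owner read-write, everyone else read-only
chmod 666 "$demo/risky.sh"    # world-writable: any user could modify it
# -perm -0002 matches files whose "other" write bit is set
writable=$(find "$demo" -type f -perm -0002)
echo "$writable"
# Remediate by removing the world-write bit
chmod o-w "$demo/risky.sh"
still_writable=$(find "$demo" -type f -perm -0002)
rm -rf "$demo"
```

Running the same find against / (as root) is how world-writable files are located on a real system.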

Layers of defense

Firewalls and IPS/IDS as Endpoint Security

Role of Firewalls

  • Originally, the term firewall referred to a fire-resistant wall between structures

  • It was designed to stop the spread of fire from one structure to another

  • The technological firewalls follow the same concept

  • Except a technological firewall can be configured to selectively allow traffic through

  • Network firewalls can block or allow inbound and outbound traffic

  • Host-based firewalls work in the same manner (looking only at the traffic directly related to that host)

Intrusion Detection and Intrusion Prevention System

An IDS monitors traffic, while an IPS also initiates controls to block it

Reasons behind host-based protection in addition to network-based protection

At a basic level, host-based firewalls and intrusion systems are defence-in-depth measures

A host firewall can protect the host and prevent a hacker from using it to his benefit

  • Blocking unwanted inbound traffic can stop certain types of attacks

  • Blocking outbound traffic means the computer cannot become a pivot point during an attack

Host-based IDS and IPS (HIDS and HIPS) focus on the individual machine

  • Rules based on behavior are more likely to detect unusual activity to and from the machine

Managing Services

Why to avoid legacy services and how to disable them and xinetd services

Legacy Services

Legacy services are older services that employ less secure means of interaction

  • FTP, TFTP, and TFTPD are three unencrypted means of transferring files

  • Telnet is an unencrypted means of connecting to a remote server

  • RSH is another unencrypted service for connecting to a remote server

  • HTTP is also unencrypted but there are ways allowing for the redirection of traffic over HTTPS

The risk of Legacy Services

  • Competent hackers can leverage unencrypted transactions for account credentials

  • Unscrupulous individuals and governments can eavesdrop on private communications

  • Hackers can insert malicious cookies or code into the stream

  • Hackers can copy the content of those streams as well

What is xinetd?

  • xinetd is a "super server" running on many Unix-like systems, including Linux

  • It monitors all standard internet ports for traffic

  • When it receives a request for a particular service (e.g. HTTP) it will start the appropriate service (Apache or nginx) and direct the request to it

Risk of xinetd?

xinetd has vulnerabilities dating back to 2000

  • It exposes all services and allows attackers to bypass intended access restrictions

  • Hackers can exploit it and use it in a Denial of Service attack against the server

  • It does not enforce user and group directives, which causes services to be run by the root user

  • Once exploited, the adversary has access to the computer with elevated permissions

Why to disable or uninstall unused services

  • The fewer ports listening the better

  • Running services can be exploited to gain access to the server

  • if there is no reason to use the service, remove the service

  • Disabling a service stops it from listening, but someone can always start it again

Lifecycle Management

How to prepare workstations and servers for lifecycle retirement?

  • As resource demands rise it becomes necessary to replace hardware in order to keep up with the demand

  • Also, it may be more fiscally responsible to replace a computer vs. upgrading components to ensure compliance

  • Replacing equipment periodically also addresses compliance with OS requirements (e.g. TPM 2.0)

Life cycle consideration

  • Data backups

    • Information is money

    • Lost data can have a significant impact on the business

    • Therefore, it is important to make a good backup of data or copy files to a network drive

  • Wiping storage and Degaussing drives

    • Once data on the computer has been saved it is important to wipe the internal drive clean

    • if the data on the machine is highly sensitive more drastic steps may be required

    • Deleting the contents off the drive

    • Overwriting the drive

    • Degaussing the drive

    • Destroying the drive

  • Destroying RAM and CPUs

    • When the sensitivity of the information is so severe that its release could cause grave harm to the business or nation, it may be necessary to go a step further

    • In these cases, the organization's policy or national regulations may require the destruction of the CPU and the RAM

Recommendation for Integrating Hardening Measures

Test Hardened Environment

Before implementing a plan on your network

  • Build the hardened OS in a VM

  • test the VM thoroughly

  • Get ready to roll out incrementally

The Phased Approach to Roll-out

Other considerations for rollout...

  • Leave the user's old computer in place during the rollout in case the user experiences issues with the new computer

  • identify one or two senior executives for phase 4 updates

  • Roll out to the majority of senior executives should take place in Phase 5

System Hardening in Practice

Hardening the hardware

What to protect in the BIOS/UEFI

There are several things we can protect at the BIOS or UEFI level

  • Secure BIOS or UEFI with a password

  • Disable booting from any drive but the computer's hard drive

Protecting the BIOS or UEFI is defence-in-depth

  • Protecting the BIOS and UEFI with a password prevents users from making changes

  • It is redundant with a similar restriction at the OS level

Functionality configured in the BIOS or UEFI cannot be overridden by the OS

Hardening the Bootloader

Role of the Bootloader

The bootloader is responsible for putting the OS into memory

  • It is the place where an administrator can configure different options for the OS

  • It is where the user can choose which configuration they need to run

  • Dual-booting is made possible because of the bootloader

Different hardening steps

  • Securing the bootloader focuses on password protecting each of the boot options

  • It can be further secured by limiting access to each boot option based on the user accessing it

Password protect the bootloader

  • Prevent a random individual from starting the OS

  • Prevent a thief from even getting to a login prompt on a laptop

Password protecting each boot option

  • This allows any group of users to access the version of the OS they need to use

  • This allows for the configuration of services for each boot option

Limit user Permissions

  • Block users from modifying the bootloader configuration

  • Stops users from getting around the security controls in place on the computer

How to configure the Bootloader

  • Starts with the /etc/default/grub file and the scripts in /etc/grub.d/

  • make changes across the various files to build the bootloader that fulfills your requirements

  • Once your edits are complete run sudo update-grub

  • There are other bootloaders (e.g. LILO, the LInux LOader) but GRUB2 is the default bootloader for Ubuntu Linux

Password Protecting the Bootloader

Generate separate password hashes for the superuser and each user with grub-mkpasswd-pbkdf2

Edit /etc/grub.d/00_header using your editor of choice

Note: You will have to start the editor using sudo

At the end of the file add the following

cat << EOF

set superusers="admin"

password_pbkdf2 admin HASH1

password_pbkdf2 user1 HASH2

password_pbkdf2 user2 HASH3

EOF

  • With the passwords stored it is time to configure the bootloader

  • You do not need to specify the superuser because the superuser can boot all images

  • On subsequent reboots, GRUB will give you the option to choose which OS to run and ask for the username and associated password

  • Edit /boot/grub/grub.cfg

  • For each menuentry block specify which user can use that option with the --users parameter
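As a sketch (the entry title, device, and kernel paths below are placeholders, and the username must match one defined in 00_header), a restricted menuentry block looks like this:

```
menuentry 'Ubuntu' --users user1 {
        set root='hd0,gpt2'
        linux /vmlinuz root=/dev/sda2 ro
        initrd /initrd.img
}
```

The superuser can boot any entry; an entry marked --unrestricted instead of --users can be booted by anyone without a password.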

Securing the Kernel

What the kernel is and to keep it up to date

Refer to Section 2

What kernel parameters are and how to configure them

sysctl enables an administrator to set kernel parameters until the next reboot of the machine. This is useful for testing how a change in settings will impact the usability of the server. Settings edited in the sysctl.conf file (found in the /etc directory) remain persistent even after a reboot

It is better to create a file in the /etc/sysctl.d directory rather than editing sysctl.conf directly

  • These are also persistent changes

  • It is the better method because a future update can reset the sysctl.conf file

  • If a setting does not have the desired effect, check to see if there is a conflicting setting elsewhere (sysctl.conf or one of the files in the /etc/sysctl.d directory)
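For example, a drop-in file might look like the following (the filename and the two parameters are illustrative hardening settings, not requirements):

```
# /etc/sysctl.d/99-hardening.conf
# Ignore all ICMP echo requests
net.ipv4.icmp_echo_ignore_all = 1
# Enable SYN cookies to resist SYN-flood attacks
net.ipv4.tcp_syncookies = 1
```

Apply it without a reboot by running sudo sysctl --system.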

Hiding process from other Users

By default, a user can see all processes running on the server

It can reveal too much information to a hacker

A User should only be able to see the processes his account owns

To change this behavior, edit the /etc/fstab file

  • Open the file in your own favorite editor

  • Add the hidepid=2 option so the line reads proc /proc proc defaults,hidepid=2 0 0

  • Remount the /proc filesystem with sudo mount -o remount /proc

It is important to protect the system from internal and external threats

Control Groups

Relatively new (2010) to Linux, control groups (aka cgroups) allow the kernel to isolate groups of processes and limit the resources each group may use.

  • Administrators can configure how many resources to give to each service independently

  • This also reduces the risk that the failure of one process impacts another

  • The customization is configured through the systemctl command sudo systemctl set-property httpd.service CPUQuota=40% MemoryLimit=500M

  • This command appends these changes to existing files in the /etc/systemd/system.control/ directory or creates new files

  • This command can also be used to place resource limits on user accounts
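Under the hood, set-property writes drop-in files similar to the sketch below (the path and service name are illustrative; on newer systemd versions MemoryMax replaces the deprecated MemoryLimit):

```
# /etc/systemd/system.control/httpd.service.d/50-CPUQuota.conf
[Service]
CPUQuota=40%
MemoryLimit=500M
```

Run sudo systemctl daemon-reload after editing such files by hand; set-property applies them automatically.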

Namespace Isolation

Namespaces were introduced back in 2002

Namespaces are used to assign resources to a process and other processes cannot see those resources

Seven of the better-known namespaces:

  • Mount (mnt) - where the process has its own root filesystem

  • Process ID (pid) - PID namespace provides processes with an independent set of process IDs

  • UTS - each process can have its own unique hostname and domain

  • Network (net) - Allows for the creation of a virtual network

  • Interprocess Communications (ipc) - prevent data leakage

  • Control Group (cgroup)

  • User (user) - allows users to have different levels of permissions in different processes
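Namespace membership can be inspected without special privileges: every process's namespaces appear as symbolic links under /proc/<pid>/ns, and two processes that share a namespace show the same inode number there. A minimal sketch:

```shell
# List the current shell's namespace links (uts, mnt, pid, net, ipc, ...)
ls /proc/$$/ns
# Each link resolves to "<type>:[<inode>]"; equal inodes mean a shared namespace
uts_ns=$(readlink /proc/$$/ns/uts)
mnt_ns=$(readlink /proc/$$/ns/mnt)
echo "$uts_ns"
echo "$mnt_ns"
```

Comparing these links for two processes is a quick way to confirm that a sandboxed service really is isolated.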

Role of Exec Shield, AppArmor, and Security Enhanced Linux (SELinux)

Exec Shield, originally designed by Red Hat, is designed to protect the system against multiple kinds of overflow:

  • Buffer

  • Stack

  • Function Pointer

AppArmor is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles

Security Enhanced Linux (SELinux) provides a mechanism for supporting access control security policies, including mandatory access control

AppArmor is enabled by default in Ubuntu, while SELinux is enabled by default in many other Linux distributions.

How to disable the key combination "Ctrl-Alt-Delete"

  • Users logged in at the server can use this well known key combination

  • Unlike the init command, this key combination does not require root permission

  • Therefore, disabling ctrl-alt-delete prevents an unauthorized reboot of the server

  • the command to disable is as follows:

    • sudo systemctl mask ctrl-alt-del.target && sudo systemctl daemon-reload

  • The second command enables the change without a reboot

  • if you find you have to undo this change the command is

    • sudo systemctl unmask ctrl-alt-del.target && sudo systemctl daemon-reload

Securing Storage Devices

How to partition disks, and encrypt volumes

Historically, it was a best practice to create multiple volumes for many of the key directories Linux uses to store files

  • Root (/) - where the bulk of the OS resides

  • /boot - contains the kernel images and other files for starting the OS

    • Once the OS is loaded and running the volume is no longer used. However, there is the threat of a malicious individual compromising the computer by modifying the content of the directory

    • The simplest solution is to mount the /boot volume read-only

    • CAVEAT: Mounting the /boot volume read-only prevents automated updates from installing new kernels.

    • Creating the /boot volume during installation is easy

    • It is more complicated to create one on a machine that is already operational

    • If there is no unallocated storage, one has to shrink an existing volume with fdisk and resize2fs

    • The boot volume requires between 512MB and 1GB

    • Warning: Before manipulating volumes take the time to determine if the benefit outweighs the cost if there are issues

    • Recommendation: Perform a complete backup of the computer before manipulating volumes

  • /var - logs, databases, and websites are stored here by default

  • /home - users' directories

  • swap - disk storage used for memory swapping

Today the default for the Ubuntu install creates only two partitions

  • Root (/)

  • swap

Steps to create a /boot volume in free space
  1. Determine the device label of the target drive with sudo fdisk -l

  2. Create a new volume with sudo fdisk /dev/sdb

    • at the prompt type, n to create a new partition

    • Select the drive with unallocated space

    • Press Enter to choose the starting block (defaults to the first free block)

    • Specify the size with +1G

    • at the prompt type w to write changes to the disk

  3. Run partprobe to detect the volume and reboot only if the volume isn't detected

  4. Format the volume with sudo mke2fs -j /dev/sdb#

  5. Mount the volume temporarily sudo mount /dev/sdb# /mount/temp/path

  6. Migrate the files sudo cp -a /boot/. /mount/temp/path

  7. Mount the volume at its permanent mount point sudo mount /dev/sdb# /boot

  8. Add the new volume to /etc/fstab so it mounts after every reboot /dev/sdb# /boot ext4 defaults 0 2

Shrinking an existing Partition
  1. Run sudo fdisk /dev/sdb

    • At the prompt type p for the list of partitions

  2. After exiting, run sudo resize2fs -P /dev/sdb# to determine the minimum size needed

    • If you cannot free up 1GB of space STOP

  3. If you can, proceed to unmount the volume with sudo umount /dev/sdb#

  4. Resize the volume by running sudo resize2fs -p /dev/sdb# new_size where new_size is 1GB less than the current size

At this point follow the steps for using Free Space

Encrypting a Volume

  • Encrypting volumes during the installation process is less complicated than encrypting volumes afterwards

  • There is also no risk to data because there is none on the system

  • Encrypting a volume with data comes with the risk that a power outage or other disruption could mean the loss of the data

  • Recommendation: Add a new drive to the computer and create a new encrypted volume

  • Any time you are altering an existing partition, it is highly recommended to back it up

Steps

  1. Determine the volume's partition with fdisk -l (e.g. /dev/sdc1)

  2. Follow the previously covered instructions for creating a new volume

  3. install the tools for encrypting the partition with sudo apt install -y cryptsetup-bin

  4. Encrypt the volume with sudo cryptsetup luksFormat /dev/sdc1

  5. Open the encrypted volume with sudo cryptsetup open /dev/sdc1 encrypted

  6. Format the mapped device with sudo mkfs.ext4 /dev/mapper/encrypted

  7. On later boots, unlock the volume with sudo cryptsetup --type luks open /dev/sdc1 encrypted

  8. Mount the new volume under /mnt/encrypted with sudo mount -t ext4 /dev/mapper/encrypted /mnt/encrypted

  9. Copy the contents of the old volume to the new one sudo cp -R /path/to/oldmount/. /mnt/encrypted/

  10. Finally, unmount the old volume and mount the new volume where the old one was

Get and set file permissions through Access Control Lists

Basic way to get Permissions

The first way most people learn to determine a file's permissions is with the ls -la command

ls -la lists all of the files in a directory and shows their associated permissions

  • r-- means a file can be read

  • -w- means a file can be written

  • --x means a file can be executed

Basic way to set Permissions

The first way most people learn to set permissions is with the chmod command

Each permission has a numerical value

  • 4 equals the read permission

  • 2 equals the write permission

  • 1 equals the execute permission

Add the appropriate numbers together to find the numerical value you need to set (5 for read-execute, 6 for read-write, 7 for all three)

The chmod syntax is chmod ### file where the first # is the owner's permissions, the second the group's, and the third everyone's
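The arithmetic can be verified on a scratch file (created with mktemp, so nothing real is touched):

```shell
# 7 = 4+2+1 (rwx) for the owner, 5 = 4+1 (r-x) for the group, 4 = r-- for others
f=$(mktemp)
chmod 754 "$f"
# stat -c %a prints the octal mode back (GNU coreutils)
mode=$(stat -c %a "$f")
echo "$mode"
# ls -l renders the same mode symbolically: -rwxr-xr--
ls -l "$f"
rm -f "$f"
```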

Better way to get and Set Permissions

Instead of using ls -la we can use getfacl to see all the users and groups with permissions and a file's mask

  • To set a new user's permissions we use setfacl -m u:bob:rw file

  • To remove a user's ACL entry we use setfacl -x u:alice file

  • To copy the ACL from one file to another: getfacl file1 | setfacl --set-file=- file2

Assess the ownership of a file as well as the permissions

Disable SUID and SGID permissions

SUID and SGID are special permissions for executable files

  • SUID allows the file to be executed with the owner's permissions

  • SGID allows the file to be executed with the group's permissions

It doesn't matter if the user executing the script is the owner or in the group

Unlike sudo, any user can run these scripts with the owner's permissions without entering a password

If the root user owns the file then the script would run as if root executed it

How to set SUID and SGID

As with standard read, write, execute permissions we use chmod to set SUID and SGID permissions by adding a prefix:

  • 2 for SGID

  • 4 for SUID

  • 6 for both SGID and SUID

  • therefore the syntax would be chmod 4775 script_name to allow any user to run the script with the owner's permissions

Find and Remove SUID and SGID settings

  • To find files where the SUID or SGID bits are set run: find / -perm /6000

  • To find files where just the SUID bit is set run: find / -perm /4000

  • To find files where just the SGID bit is set run: find / -perm /2000

  • If you want to filter the search to just find files owned by root, add the -user parameter: find / -user root -perm /6000

  • Searching for and removing the SUID and SGID bits is as easy as: find / -perm /6000 -exec chmod u-s,g-s {} \;
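The find-and-strip workflow can be rehearsed safely in a scratch directory before running it against / (the file names below are hypothetical):

```shell
demo=$(mktemp -d)
touch "$demo/suid_tool" "$demo/plain_tool"
chmod 4755 "$demo/suid_tool"    # SUID bit plus rwxr-xr-x
chmod 0755 "$demo/plain_tool"   # ordinary executable
# /4000 matches any file with the SUID bit set
before=$(find "$demo" -perm /4000)
# /6000 matches SUID or SGID; strip both bits from every match
find "$demo" -perm /6000 -exec chmod u-s,g-s {} +
after=$(find "$demo" -perm /6000)
echo "before: $before"
echo "after:  $after"
rm -rf "$demo"
```

On a real system, review each match before stripping bits: some binaries (e.g. passwd) legitimately require SUID.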

How to make the boot partition read-only
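Following the earlier recommendation to mount /boot read-only, the change is a single option on the /boot entry in /etc/fstab (the UUID below is a placeholder):

```
# /etc/fstab -- mount /boot read-only
UUID=xxxxxxxx  /boot  ext4  defaults,ro  0  2
```

Before a kernel update, temporarily remount it read-write with sudo mount -o remount,rw /boot, then remount read-only afterwards.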

Blocking Unwanted activities and Traffic

Install and configure IPTables, and Intrusion Detection Systems (IDS)

Installing the firewall

Today, a fresh install of Linux typically includes the iptables firewall

  • if not installed, install the firewall with sudo apt install iptables -y

  • ensure it is installed with sudo systemctl status iptables

  • If it is not running, sudo systemctl start iptables

  • Ensure it will start on reboot with sudo systemctl enable iptables

Understanding firewall Rules

Rules consist of several parameters: source, destination, port, and protocol (TCP or UDP)

A series of rules is called a chain

Each packet is processed through the appropriate chain to determine how it is handled

The default chains are

  • INPUT for incoming traffic

  • FORWARD for incoming traffic that needs to be routed somewhere else

  • OUTPUT for outgoing traffic

Understanding Firewall Actions

For each rule there are three possible actions:

  • ACCEPT allows the packet

  • DROP silently discards the packet

  • RETURN stops a packet from traversing the current chain and returns it to the previous chain (unlike the separate REJECT action, neither DROP nor RETURN tells the sender that the packet was refused)

// Firewall rule syntax
sudo iptables
    -A <chain>                   # which chain to use
    -i <interface>               # input interface
    -p <protocol [UDP | TCP]>    # protocol
    -s <source>                  # source IP
    --dport <port>               # destination port
    -j [ACCEPT | DROP | RETURN]  # target action

Approach to Rule Chains

  • Chains should start with the most permissive rules and transition down to the most restrictive

  • The final rule should be an explicit rule to block all unresolved packets

  • Blocking outbound traffic is just as important as blocking inbound traffic

  • You do not want hackers to use your server or workstation as a pivot point to attack other nodes

  • New rules are just stored in memory

  • They are made permanent with the command sudo /sbin/iptables-save

  • All rules can be deleted with the -F flag

  • An individual rule can be deleted with: sudo iptables -D <chain_name> <rule_number>
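Putting the chain-design advice together, a minimal rule set in iptables-save format might look like the sketch below (the management subnet 10.0.0.0/24 and the file name are examples; load it with sudo iptables-restore < rules.v4):

```
# rules.v4 -- default-deny INPUT chain with narrow exceptions
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp -s 10.0.0.0/24 --dport 22 -j ACCEPT
COMMIT
```

Here the DROP default policy plays the role of the explicit final block rule.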

Take steps for preventing Denial of Service attacks

Snort

Installing

Snort is an industry-recognized IDS

  1. Make sure Ubuntu is fully updated by running sudo apt update && sudo apt dist-upgrade -y

  2. Download the package with wget (or curl): wget http://mirrors.kernel.org/ubuntu/pool/universe/s/snort/snort_2.9.7.0-5build1_amd64.deb

  3. To install Snort run sudo apt install ./snort_2.9.7.0-5build1_amd64.deb -y

  4. During the install you will be asked to provide the server's network interface and the IP address or range of addresses to monitor

  5. Check snort's status by running: sudo systemctl status snort

Updating Snort's Configuration, and Rulesets

  1. The snort configuration is located in /etc/snort

  2. The local rules are located in /etc/snort/rules

  3. Download the latest rulesets from snort.org: wget https://www.snort.org/downloads/community/community-rules.tar.gz

  4. Expand the tarball and then copy the new rules files to the /etc/snort/rules directory and the .conf configuration file to /etc/snort

  5. Restart snort with the command sudo systemctl restart snort.service

Detecting and stopping Denial of Service attacks

  • First, use ss (netstat is deprecated) to see what devices have connections with your server: ss -pltun

  • Blocking these connections is as simple as adding rules to your firewall sudo iptables -A INPUT -s ADDRESS/SUBNET -j DROP (use IP address/CIDR)

Minimizing the OS attack surface

How to ensure regular patching of software

Regular patching

Keeping a server or workstation up to date helps to keep it secure

  • By default, Ubuntu's banner message tells users if there are packages that need updating and how many are security related

  • NOTE: Consider editing the default banner to hide patching information

  • To see which packages have pending updates run sudo apt list --upgradable

  • Consider automating updates by installing Ubuntu's unattended-upgrades tool

    • Install it with sudo apt install unattended-upgrades apt-listchanges bsd-mailx

    • To automate security updates run sudo dpkg-reconfigure -plow unattended-upgrades

    • To configure unattended updates run sudo vi /etc/apt/apt.conf.d/50unattended-upgrades

  • Add your administrator's email account to this line: Unattended-Upgrade::Mail "email@address.com";

  • To have the system reboot so a new kernel loads after an update, set this parameter to true: Unattended-Upgrade::Automatic-Reboot "true";

  • Lastly, edit /etc/apt/listchanges.conf and set the email address: email_address=email@address.com

  • Test your setup with sudo unattended-upgrade --dry-run
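The settings above end up in the APT configuration like this (a fragment; the address and reboot time are examples):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (fragment)
Unattended-Upgrade::Mail "email@address.com";
Unattended-Upgrade::Automatic-Reboot "true";
// Optional: schedule the automatic reboot for a quiet hour
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```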

How and why to uninstall unused packages

Identifying unused packages

Unused packages, particularly services, should be removed to

  • Save disk space

  • Uninstalling an unused service means there is one less open port to be exploited

Deborphan is a command line utility that can be used to find and remove unused or orphaned packages

  • Install deborphan with sudo apt install deborphan

  • After installation run deborphan

  • Run orphaner to uninstall the orphaned packages

  • It is a good practice to also run sudo apt autoremove && sudo apt autoclean

  • Note: Be very judicious about removing packages you do not have enough information about

How and Why to disable FireWire ports
  • There is a known attack through Direct Memory Access (DMA) through FireWire ports

  • Therefore, if you have these ports but don't use them, it is a good idea to disable them

  • This is done by editing the blacklist-firewire.conf file in /etc/modprobe.d

  • Just uncomment the two lines containing the FireWire drivers
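After uncommenting, the file contains lines like the following (module names as commonly shipped on Ubuntu):

```
# /etc/modprobe.d/blacklist-firewire.conf
blacklist firewire-core
blacklist firewire-ohci
```

The blacklist takes effect at the next boot, or immediately after unloading the modules with modprobe -r.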

How and why to disable or uninstall the X11 Desktop Environment

Consider removing X11 from servers

X11 is the foundation of Linux's graphical user interface (GUI)

  • GNOME, KDE, and other desktop interfaces sit on top of X11

  • X11 functions as the middleware between the underlying server functionality and the desktop functionality

There are some non-security benefits to the removal of X11

  • The server's performance will benefit from uninstalling X11

  • It will free up storage space

If users are not using X11 then removing it should not impact usability

If users only use ssh to interact with the servers, X11 is unnecessary

If users are using unencrypted means of connecting to X11 they expose the server to compromise

  • Threat actors with access to the server can collect screenshots of activities within X11

  • Threat actors can exploit X11's remote desktop protocol (Xrdp) and in combination with netcat can establish a reverse shell

  • If root is logged in the actor can create a user account with sudo rights for future exploits

Disable X11 at startup

As a precursor to uninstalling X11 you can disable X11

  • Change the runlevel from multi-user with the graphical interface to multi-user with the terminal

  • This means switching the runlevel from 5 to 3

  • The command sudo init 3 will switch the computer into the preferred mode

  • Editing the /etc/inittab file ensures the system will always boot at the proper runlevel (on systemd-based distributions such as Ubuntu, run sudo systemctl set-default multi-user.target instead)

After configuring the server not to start X11 at startup you can remove X11 with these two commands:

  • sudo apt purge 'x11-*'

  • sudo apt autoremove

You should also remove the desktop environment; as an example, here is how you remove GNOME

  • sudo apt remove ubuntu-gnome-desktop

  • sudo apt remove gnome-shell

  • sudo apt purge ubuntu-gnome-desktop

  • sudo apt autoremove

Other desktop environments can be removed in a similar manner

If X11 is an absolute requirement, consider the following measures:

  • Disable xrdp and drop all traffic received on that port

  • Enable X11 forwarding through SSH

  • Reject X11 forwarding from all sources except for those who actually need it

  • Restrict X11 Forwarding to only those users who actually need it

Network Hardening at the Host

How to turn off IPv6 if not in use
  • If your company's network does not use IPv6 the best practice is to disable IPv6

  • Leaving IPv6 active, particularly if services on the server are listening for IPv6 traffic, would allow a rogue computer to communicate with the server

  • If network monitoring is not configured to alert administrators or activate controls at the detection of IPv6 communication it could continue with no one knowing

The following commands will temporarily disable IPv6

sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1

Permanently disable IPv6

In order to make the changes permanent, edit /etc/sysctl.conf and add the following lines to the bottom of the file:

net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1

When the changes are complete, restart the networking service

sudo systemctl restart networking.service

How to employ NIC Bonding and how it improves resiliency

Network Interface Card (NIC) Bonding is a method of bundling multiple network connections so they function as a single logical connection

There are several bonding methods

  • Generic bonding (balance-alb)

    • An effective bonding method that does not require any configuration on the switch

    • It does require that the NIC interfaces support changing their MAC address on the fly

    • Tricks the other end (switch or host) to send data across all links

  • Channel Bonding Modes

    • balance-rr: frames transferred in a round robin fashion

    • balance-xor: traffic is hashed and balanced according to receiver

    • 802.3ad: the official standard for link aggregation and it is configurable

  • High Availability

    • Each bonded NIC is connected to two different switches

    • In the event of the failure of a single switch data can still flow

Enabling NIC Bonding

To temporarily set up bond0

  1. Load the driver with sudo modprobe bonding

  2. Setup bond0 with an IP address by running

    • ifconfig bond0 10.10.10.2 netmask 255.255.0.0 OR

    • ip addr add 192.168.1.1/24 dev bond0

  3. Configure the MAC address for each NIC in the bond

    • sudo ifconfig eth0 down && sudo ifconfig eth0 hw ether 00:11:22:33:44:55 && sudo ifconfig eth0 up

    • sudo ip link set eth0 down && sudo ip link set eth0 address 02:01:02:03:04:08 && sudo ip link set eth0 up

  4. Enslave the two interfaces (bind them to bond0) with the command

    • ifenslave bond0 eth0 eth2

  5. Verify bond0 with ifconfig bond0

  6. View Bonding info with the command more /proc/net/bonding/bond0
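
To make the bond survive a reboot on Ubuntu, the configuration can be made persistent with netplan. The sketch below is an assumption to adapt: the interface names (eth0, eth1), the address, and the mode are illustrative:

```
# /etc/netplan/01-bond.yaml — apply with: sudo netplan apply
network:
  version: 2
  ethernets:
    eth0: {}
    eth1: {}
  bonds:
    bond0:
      interfaces: [eth0, eth1]
      addresses: [10.10.10.2/16]
      parameters:
        mode: 802.3ad
```
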

System Administration Hardening

Why and How to disable the root account's remote access

When threat actors seek to infiltrate a network, they do so by obtaining elevated permissions

Root is the Holy Grail of elevated permissions

  • As a superuser, root has unlimited rights

  • If an attacker can connect remotely as root, there is no need to compromise other accounts

Disabling remote root access is done by modifying ssh's configuration file with sudo vi /etc/ssh/sshd_config

Find the line that reads PermitRootLogin and change the value to no

The role of the sudo command and how to configure it

  • Sudo is short for superuser do

  • It allows users to execute commands as the superuser

  • Before the command is executed users must supply their own password

  • Not all users have permission to use sudo, nor should they

Dangers of sudo's default configuration: The default configuration for sudo is not completely secure

  • Assuming a user has permission to use sudo, he can run sudo su - root, enter a password, and log in as root

  • Once the user is logged in as root it will be difficult to say with 100% certainty who was at the keyboard

  • By default, any user given permission to use sudo can run any command as sudo

Securing Sudo

There are some basic concepts to consider when securing sudo

  • You can edit sudo's configuration file with the command sudo visudo

  • Do not allow users unlimited access for running commands

    • Limiting the commands a user can run with sudo supports the concept of least privilege %security ALL=(ALL) PASSWD: /sbin/iptables, /usr/sbin/update-grub2, /bin/systemctl

    • NOTE: When specifying only a directory (i.e. /sbin/) you must include the trailing slash (/)

  • Require a user to enter their password each time they use sudo

    • This is achieved by setting the timeout (in minutes) to zero: Defaults timestamp_timeout=0

  • Grant permissions to groups instead of users

    • Specify groups with a leading percent (%) %security ALL=(ALL) PASSWD: /sbin/iptables, /usr/sbin/update-grub2, /bin/systemctl

  • Limit where executables or scripts can be located in order to use sudo

    • Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

  • Log activity in sudo's own log file

    • Defaults logfile=/var/log/sudo

  • Explicitly block a command from being run with a preceding exclamation point (!)

    • %group ALL=/usr/bin/, !/usr/bin/passwd

  • Run a command as a user other than root

    • %docker ALL=(docker:docker) /usr/bin/docker

  • Email notification for suspicious (or all) uses of the command

    • there are several options

    • For all the settings below, set the mailto parameter in the configuration file:

      • Defaults mailto="admin@yourcompany.com"

    • mail_always - email sent on each use of sudo

    • mail_badpass - Email sent when user enters a bad password

    • mail_no_host - Email sent when user is not allowed to run commands on the host

    • mail_no_perms - Email sent when the command is explicitly denied

    • mail_no_user - email sent if the user is not in the sudoers configuration file
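
Putting these settings together, a sketch of what the relevant entries might look like in the sudoers file (edited with sudo visudo); the group names and command lists are illustrative:

```
Defaults timestamp_timeout=0
Defaults logfile=/var/log/sudo
Defaults mailto="admin@yourcompany.com"
Defaults mail_badpass
%security ALL=(ALL) PASSWD: /sbin/iptables, /usr/sbin/update-grub2, /bin/systemctl
%docker ALL=(docker:docker) /usr/bin/docker
```
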

Testing, Monitoring, and Reviewing

Manage Logs and investigate them

As with any OS, logs are a primary means for analyzing and troubleshooting issues

  • The challenge with analyzing logs is the volume of information they contain

  • However, Linux provides tools like logwatch and auditd to summarize and audit the logs

  • In addition to processing logs, it is important to archive logs on a centralized server

Setting Up logwatch

Logwatch analyzes the last day's log and provides a summary report

  1. Install the application apt update && apt install -y logwatch

  2. Edit the logwatch configuration file sudo vi /usr/share/logwatch/default.conf/logwatch.conf

    • Configure email address to send the report to MailTo=email@address.com

    • Configure who sends the email (can be a real or fake address) MailFrom=email@host.address.com

    • Configure detail of report as low, medium, or high Detail=High

    • Identify which services to summarize

      • Service=sshd

      • Service=sudo

Setting up auditd

auditd is a service (aka daemon) with the job of collecting and writing audit logs to the disk

auditd comes with multiple utilities for the configuration and evaluation of the audit logs

  1. Install the application sudo apt update && sudo apt install -y auditd audispd-plugins

  2. Enable and start the service sudo systemctl enable auditd && sudo systemctl start auditd

  3. Examples

    1. Configure auditd to monitor /etc/shadow sudo auditctl -w /etc/shadow -k shadow-key -p rwxa

    2. Generate a report of the audit logs with aureport sudo aureport -k shadow-key

    3. Search the audit logs with ausearch sudo ausearch -m USER_LOGIN

Log Rotation

Logs play an important role both in troubleshooting problems and in forensic investigations, so they should be rotated and retained rather than left to grow unbounded

  1. Install logrotate on the server apt update && apt install logrotate

  2. Open /etc/logrotate.conf

    1. Confirm the following line is present and not commented out include /etc/logrotate.d

  3. Once it is installed change to the /etc/logrotate.d directory

  4. Find the preconfigured .conf file for the service log you want to rotate

  5. Edit if needed
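
As an illustration, a typical entry in an /etc/logrotate.d configuration file might look like this (the log path and retention values are hypothetical):

```
/var/log/nginx/*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```
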

Centralized Log Storage

  • Logging to a remote centralized server allows for more holistic analysis of events on the network

    • It also ensures the integrity of the logs in the event a threat actor attempts to cover his tracks by deleting his activities in the local logs

    • The connection between local and centralized server should be encrypted

      • An encrypted connection protects against eavesdropping by a hacker

      • It also leaves the perpetrator unaware that a record of his presence exists

    • Access to the log should be as restrictive as possible

      • This will reduce the attack surface of the logs

Best practices for backups

The following recommendations are simple to implement

  • Don't just scan the backup report

    • There are details in the reports that you can easily overlook if you don't dig into the report

  • Perform manual restorations regularly

    • This assures the backups are working and will be able to perform their role when needed

  • Have a rotation and pull out an archive

    • Perform full backups at a regular interval, with incremental and partial backups in between. Take one full backup out of rotation at a set interval (e.g. once a month) and move that backup to an off-site storage location

  • Perform a full disaster recovery

    • Testing portions of the backups is one thing, but a full disaster restoration will test both the backups and the team for a possible future event

  • Incorporate diskless and cloud backups

    • Different backups allow for overlap and increase the probability of a full recovery of any lost data

How restricting CoreDumps is important for security

When a memory error occurs, the system generates a core dump

  • The system saves the Core Dump to /var/crash

  • It contains details about what triggered the crash and any error messages

  • Anyone analyzing this information can gain insights about applications that were running in memory

Information in the Core Dumps could also be useful to someone wanting to exploit the server

  • Restrict access to the core dumps and the directory they are stored in

  • Move the core dumps to another server or workstation that is more secure

  • Use an encrypted connection to move the files
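
A common way to restrict core dumps (a sketch in line with typical CIS-style guidance, not quoted from it):

```
# /etc/security/limits.conf — prevent users from generating core dumps
*  hard  core  0

# /etc/sysctl.conf — stop setuid programs from dumping core
fs.suid_dumpable = 0
```
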

Verify that security actions work as expected

Trust but Verify

Developing response plans is critical for the company's resiliency

  • Incidents, manmade or natural, trigger these response plans

  • More complex plans include human intervention and automated processes

  • Assuming these plans will function is asking for trouble

The plans must be tested and trained regularly

  • This will ensure the plan and its components function as expected

  • It ensures individuals know their roles and will be able to act efficiently when the plan is put into action for a real incident

The plan should be reviewed regularly as well

  • Reviews give stakeholders the opportunity to update the plan as circumstances change

  • Reviews also give the stakeholders the opportunity to identify and address gaps within the plan

Log Management

Service Hardening in Practice

General handling of services

Disable and remove services

Reducing the attack surface of a system includes the disabling and uninstalling of services

  • To temporarily stop a service sudo systemctl stop network.service

In a circumstance where you need to restart the computer but do not want the service to start

  • To prevent a service from starting on a reboot sudo systemctl disable nginx.service

When you no longer need a service installed on a server

  • To remove a service sudo apt remove nginx -y

  • To remove a service together with its configuration files sudo apt purge nginx, then run sudo apt autoremove to remove dependencies that are not needed by other applications

Services to consider disabling

Whenever you configure a new server or audit an existing one, it is a good practice to evaluate the services running on it and disable and/or remove services that are not needed

  • DNS - In an environment where you have dedicated DNS servers you can uninstall DNS

  • SSHD - if users are not connecting to the server via SSH or using SFTP and SCP then consider disabling this service

  • SMB - If the server is not sharing directories via SMB this service can be disabled

  • All Legacy Services - Services like telnet and rsync use plaintext and there are more secure options

Security Benefits of splitting the network services

Containerizing Services

Containerizing services provides several security benefits:

  • Transparency - an administrator can easily look inside a container to see what runs inside it

  • Modularity - It is easier to isolate vulnerabilities without disrupting other elements

  • Smaller Attack Surface - unlike a virtual server scenario where you have to harden two operating systems, you only have the host's OS, the Docker daemon, and the containerized application

  • Easy Updates - Updating a container is as easy as pulling the latest image for it

  • Environment Parity - No matter what OS the host runs, the container runs the same way on all hosts

Securing CRON jobs

CRON jobs are time-triggered scripts for automation

  • They can serve many different functions

  • Rotate or offload logs to another server

  • Restart Services, manage backups ...

These scripts are run as the user who added them to the cron process

You can allow or deny users access to cron by editing /etc/cron.allow and /etc/cron.deny and adding the user's name to the respective file
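
For example, a crontab entry (added with crontab -e) that offloads logs nightly at 02:30; the script path is hypothetical:

```
# m  h  dom mon dow  command
30 2 * * * /usr/local/bin/offload_logs.sh
```
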

Employ best practices for timeout setting for interactive shell sessions

It is good practice to set a timeout for terminal sessions

  • This can be done in each user's profile (/home/user/.bashrc)

  • The default value can be set in /etc/bash.bashrc or /etc/profile

  • The key=value pair is TMOUT=VALUE (in seconds)

  • Setting TMOUT to 0 disables the timeout

  • This timeout affects users logged in at the terminal
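
A minimal sketch of the setting as it might appear in /etc/profile or a user's .bashrc (600 seconds, i.e. 10 minutes, is an assumed value):

```shell
# Idle timeout for interactive shells, in seconds (assumed value)
TMOUT=600
# Prevent the user from unsetting or lengthening it
readonly TMOUT
export TMOUT
```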

There are conflicting requirements to consider

  • A short timeout may lead to orphaned processes

  • A long timeout could present an opportunity if someone forgets to log out and a malicious person hijacks the session

The usefulness of Login Banners

Login Banners are there to present information to a user upon login

  • This message of the day (motd) is fully configurable

  • The default message in Ubuntu provides notification about the patch status of the host

  • While useful to the administrators and engineers, it is information most users don't need

  • It can be dangerous in the hands of a malicious or inexperienced individual

Changing the message to that of warning is a good practice

  • While it will not deter a determined malicious individual, it may deter the less experienced

  • It is simply a passive measure

  • By changing the motd, particulars that could be useful to a threat actor are kept from him

  • The motd file is stored in the /etc/ directory

Hardening of public-facing services

Redirect plaintext communications to encrypted channels or how to require encrypted communications for public-facing services

There are risks with unencrypted communications

  • It allows the adversary to gain information by monitoring the plaintext traffic

  • It also allows adversaries to gain credentials

To harden web-based applications, traffic to and from the webserver should be encrypted

  • This can be done natively by the webserver

    • Make sure the connections use stronger encryption and disable older standards

    • This approach applies the same logic we use for legacy services

  • It can also be done through firewall forwarding rules

Consider enforcing encrypted communication for back end communications as well

  • require an encrypted connection between the webserver and database server

  • Use the same approach with the tools developers use to work on the web application

Explain the importance of closing unused ports

Finding and closing open ports starts with discovery

  • NOTE: netstat is a deprecated program

  • ss is the replacement for netstat

  • The syntax for ss is as follows

    • sudo ss -tulpn | grep LISTEN

      • t Show only TCP sockets on Linux

      • u Display only UDP sockets on Linux

      • l Show listening sockets

      • p list process name that opened sockets

      • n don't resolve service names

  • You can match a service to a port by referencing /etc/services

  • Start by adding a firewall rule to drop any traffic to the port

  • This is not the end of it, however

    • If this port is not open as part of your network plan then further research needs to be done

    • Once you find which service runs on the port in question, stop and disable the service
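
The port-to-service lookup mentioned above can be sketched with sample data in the /etc/services format (the real file works the same way):

```shell
# A few lines in the /etc/services format: name, then port/protocol
printf 'ssh\t22/tcp\nhttp\t80/tcp\n' > services.sample
# Map a listening port back to its conventional service name — prints "ssh"
awk '$2 == "22/tcp" { print $1 }' services.sample
```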

Hardening of SSH services

Steps to limit user access to SSH

If users do not require SSH access you should restrict their access

  1. Edit the sshd_config file sudo vi /etc/ssh/sshd_config

  2. Find the PermitRootLogin parameter

  3. Change the value from yes to no

  4. Restrict other users and groups that do not need access

    1. DenyUsers user1 user2 user3

    2. DenyGroups group1 group2 group3

While SSH cannot block networks natively, you can configure iptables to block unwanted traffic

Limiting SSH functionality

  • SSH is a robust communications service, but most users do not need all of its functionality

  • Just as we apply the least privileges to what permissions a user has, an administrator should consider restricting functionality the user does not use

  • Some functionality to consider disabling

    • Port forwarding - this allows a user to connect to one computer and then tunnel another service through SSH

    • X11 forwarding - unless users are tunnelling X11 through SSH this should be blocked

    • Agent forwarding
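
A sketch of the corresponding sshd_config directives (these are standard OpenSSH options; verify against the man page for your version):

```
# /etc/ssh/sshd_config — disable unneeded SSH functionality
AllowTcpForwarding no
X11Forwarding no
AllowAgentForwarding no
```
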

How to disable password-based login and employ asymmetric encryption for authentication

An alternative to logging in with a password is using public key infrastructure (PKI) to authenticate users

  • Once established, only those with the private key can authenticate themselves with the server

  • Keys can be created with a passphrase, creating a 2FA scenario

  • It is a good practice to configure the server to allow PKI authentication and ensure it works before disabling password authentication

The following steps will enable PKI authentication

  • Open the ssh configuration file with an editor of your choice sudo vi /etc/ssh/sshd_config

  • Search for PubkeyAuthentication and change the value to yes

  • Restart ssh with sudo systemctl restart ssh

Once everyone has their private key edit sshd_config to disable password logins

  • change PasswordAuthentication value to no

  • After saving the change restart ssh with sudo systemctl restart ssh

Generating Certificates

With PKI Authentication enabled the next step is to generate a user's keypair

Start by executing ssh-keygen or ssh-keygen -N <passphrase>

  • When prompted provide the path for saving the key (/home/<user_name>/.ssh/id_rsa)

  • Copy the public key into the authorized_keys file cat /home/<user_name>/.ssh/id_rsa.pub >> /home/<user_name>/.ssh/authorized_keys

  • Provide the user with his private key (id_rsa) and the passphrase so he can use it to log in

How to block connections from a specific network

While SSH cannot reject connections from networks natively, you can configure iptables to do the job for SSH

The best option to achieve this is to accept packets from those who are allowed to access the server iptables -A INPUT -p tcp --dport 22 --source 192.168.0.0/24 -j ACCEPT

Before blocking everyone else iptables -A INPUT -p tcp --dport 22 -j DROP

Employ DenyHosts to block brute force attacks against SSH

There are two separate changes to ssh regarding hosts

  • Use DenyHosts to block requests from an individual host by adding it to /etc/hosts.deny (e.g. sshd: <ip_address>)

  • A defense-in-depth approach should include a Firewall rule on the host dropping packets

Explain how and why to block host-based authentication and obsolete rsh functionality

Another host-based change to make is to disable host-based authentication

  • Host-based authentication is dangerous because anyone can be at the client machine's keyboard

  • Host-based authentication was the standard for connectivity for rsh

  • Disabling it is as easy as adding the following two lines to the sshd_config file

    • Host *

    • HostbasedAuthentication no

Account Hardening in Practice

User Password Authentication Requirements

Enhance password security with Pluggable Authentication Modules

Pluggable Authentication Modules (PAM) is a framework for implementing modular authentication

  • It is loaded and executed when a program needs to authenticate a user

  • Configurations for PAM are located in several places including the /etc/pam.d directory

What services does PAM provide

  • Password Quality

  • Password Attempts before lockout

  • Password History

  • Stronger hashing by replacing md5 with SHA512

  • Provides the means to prevent brute-force attacks

Install PAM with sudo apt install libpam-pwquality libpam-modules

Settings can be configured in /etc/pam.d/common-auth

  • the line to configure PAM starts with auth required pam_tally2.so

  • Options include

    • onerr=fail

    • audit

    • silent

    • deny=n (maximum consecutive failures)

    • unlock_time=n (n is the number of seconds the account stays locked)

  • To unlock an account locked by failed consecutive attempts /sbin/pam_tally2 -u <username> --reset
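
Assembled into a single line, the entry in /etc/pam.d/common-auth might look like this (the deny and unlock_time values are assumptions to adapt):

```
auth required pam_tally2.so onerr=fail audit silent deny=5 unlock_time=1800
```
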

Enumerate the benefits of strong passwords, two-factor authentication, and password management

PAM gives you the capability to configure strong password requirements

The parameters are set in /etc/security/pwquality.conf

The following parameters are ones you can set in pwquality.conf

  • Minimum Length (minlen = 14)

  • password complexity

    • minclass = 4 or

    • dcredit = -1 (1 digit)

    • ucredit = -1 (1 uppercase character)

    • lcredit = -1 (1 lowercase character)

    • ocredit = -1 (1 symbol)

  • You must also ensure two settings are included in /etc/pam.d/common-account

  • account requisite pam_deny.so

  • account required pam_tally2.so
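
Collected together, a sketch of /etc/security/pwquality.conf with the values discussed above:

```
# Minimum password length
minlen = 14
# Require at least one digit, uppercase, lowercase, and symbol
dcredit = -1
ucredit = -1
lcredit = -1
ocredit = -1
```
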

Restrict users from using old passwords

Password reuse limitations prevent users from constantly reusing the same 2 or 3 passwords

  • users will go so far as to change their passwords multiple times in a single session in order to get back to their original password

  • Edit /etc/pam.d/common-password file to set how many passwords the machine should remember

  • The syntax is password required pam_pwhistory.so remember=n where n is the number of previous passwords to remember

Another setting, outside of PAM, defines how frequently a user can change his password

  • This addresses those who would rapidly change their password to get back to the one they want to keep

  • Edit /etc/login.defs

  • change PASS_MIN_DAYS n where n is the number of days that must pass before the user can change his password again

  • NOTE: CIS recommends no less than 1 day between password changes

Manage password expiration and maintain user accounts and password policies

Another setting, like PASS_MIN_DAYS, defines how long a user can keep the same password before being required to change it

  • Edit /etc/login.defs

  • Change PASS_MAX_DAYS n where n is the number of days a user can keep using a password before having to change it

  • Password longevity should be greater than days between password changes and less than or equal to 365

A related setting, PASS_WARN_AGE, determines how many days before expiration the user receives an alert that his password is about to expire
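
A sketch of the aging settings together in /etc/login.defs (the warning period of 7 days is an assumed value; the others follow the recommendations above):

```
PASS_MIN_DAYS   1
PASS_MAX_DAYS   365
PASS_WARN_AGE   7
```
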

Another password management tool allows for locking accounts after they have been inactive for n days

  • CIS recommends 30 days

  • This parameter is set using the following command useradd -D -f n where n is the number of days

  • Setting it to -1 disables this feature and is the default

Multifactor Authentication

Multi-factor Authentication is a method of using two or more means to verify the user is who he claims to be

These means of authentication are

  • Something I know - a password or PIN

  • Something I have - a phone or access card

  • Something I am - fingerprints, hand geometry, facial geometry

  • Something I do - Typing DNA, handwriting

  • Somewhere I am - geofencing, GPS

Each factor used must come from a different category

  • BAD: a password and a pin are both something I know

  • GOOD: a fingerprint and a phone

Describe the function of Kerberos and how to utilize it

Kerberos is a security protocol implemented by numerous OS including Linux

  • A trusted third party for authenticating client-server applications and verifying users' identities

  • It uses secret-key cryptography

  • An open-source protocol and the go-to protocol for Single-Sign-On (SSO)

  • First developed at MIT

The use of key pairs and the instances where they can be employed

Asymmetric key pairs

  • Also known as public-key cryptography

    • consists of two keys (1 public and 1 private)

    • the public key can be used by anyone to encrypt something for you but only you can decrypt it

    • The private key can be used to sign something and everyone can confirm it was signed by you and no one else

  • It is used in many functions

    • HTTPS (both SSL and TLS)

    • Digital signature

    • Login Authentication
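
The sign-and-verify use of a key pair can be sketched with OpenSSL (the file names are arbitrary):

```shell
# Generate a 2048-bit RSA private key and extract its public half
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem
openssl pkey -in priv.pem -pubout -out pub.pem
# Sign a message with the private key...
echo 'release v1.0' > msg.txt
openssl dgst -sha256 -sign priv.pem -out msg.sig msg.txt
# ...anyone holding the public key can then verify it — prints "Verified OK"
openssl dgst -sha256 -verify pub.pem -signature msg.sig msg.txt
```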

Account Management Requirements

Validate the User IDs of non-root Users

The root account is often referred to as a Super User

  • The root user does not need to be called root

  • The superuser is the user with a user ID of zero (0)

Each user should have a unique user id

When we validate user IDs, we ensure each user has a unique ID

The ID number is what the system uses when it evaluates permissions

If someone manipulates the /etc/passwd file:

  • If they change their user ID to match another user's, they will effectively have the same permissions as the other user, based on the user ID

  • This kind of manipulation can be dangerous
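
The validation can be sketched with sample data in the /etc/passwd format (the same pipeline works on the real file):

```shell
# Sample /etc/passwd-format lines; the UID is the third field
printf 'root:x:0:0:root:/root:/bin/bash\ntoor:x:0:0::/root:/bin/bash\nalice:x:1000:1000::/home/alice:/bin/bash\n' > passwd.sample
# Print any UID that appears more than once — here the shared "0"
awk -F: '{ print $3 }' passwd.sample | sort | uniq -d
```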

Lock and unlock account manually

There are scenarios where you will find it necessary to lock a user's account

  • User is on extended leave

  • Prior to user termination

Locking the account in Ubuntu is as easy as:

  • usermod -L username

  • passwd -l username

Unlocking the account in Ubuntu is as easy as:

  • usermod -U username

  • passwd -u username

Checking accounts for empty passwords

Password complexity requirements should prevent the use of empty passwords, but it is better to be safe than sorry

passwords are stored in /etc/shadow and each line represents a single user

The fields in each line are separated by colons (:) and the second field contains the hash of the password

An account with an empty password will have two colons following the username
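
The colon check can be sketched with sample data in the /etc/shadow format (hypothetical users):

```shell
# Sample /etc/shadow-format lines; field 2 holds the password hash
printf 'root:$6$salt$hash:19000:0:99999:7:::\nguest::19000:0:99999:7:::\n' > shadow.sample
# An empty second field means an empty password — prints "guest"
awk -F: '$2 == "" { print $1 }' shadow.sample
```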

A faster way to find these is to use the following command on the command line

passwd -S -a

or

passwd --status -a

Reviewing accounts

The concept of least privileges

The principle that users and programs should only have the necessary privileges to complete their tasks

Use groups for assigning permissions to users

There are several reasons why we assign privileges to groups and not individual users

  • Simplifies account management

  • Simplifies audits and reviews

  • Reduces the risk of error

Assigning users to groups

  • To give users the permissions they need, assign them to the appropriate group

  • This holds true for temporary assignments and/or projects

  • Ensures least privileges as long as groups are properly maintained

Audit user permissions and group associations and monitor user activity

Periodic auditing and monitoring of activity on a server is important for security

  • Reviews and Audits

  • Log Monitoring
