Linux Hardening
Hardening In Theory
Defense in Depth
Firewalls protect the entry points
Network intrusion detection and prevention systems monitor traffic and find malicious activity based on behavior or indicators of compromise (IOCs)
Deep packet inspection looks at the contents of individual packets for anything suspicious
Firewalls, Anti-Virus, and Anti-Malware services run on hosts as well
Reducing the Attack Surface
The attack surface is the entire network and software environment that is exposed to remote or local attacks.
Measures for reducing it include
patching all software
Disabling or Uninstalling unused services
disabling unused user accounts
setting requirements for stronger passwords
Layers of defense that you must harden to achieve defense in depth:
Hardware/BIOS
Bootloader
Operating System
Services
Administration
Users
Role of CIS (Center for Internet Security)
Leading the global community to secure our ever-changing connected world.
CIS provides best practices from multiple operating systems.
The CIS Benchmarks and Controls are based on best practices for securing IT systems
The benchmarks provide "prescriptive guidance for establishing a secure configuration posture" for the various OS including Ubuntu 20.04 LTS (CIS Ubuntu 20.04 LTS benchmark, 2021 p13)
Advantage of Hardening at the Hardware Level
Hardening the BIOS means security measures are in effect before the operating system starts
These changes cannot be modified by remote attacks
These changes cannot be undone at the OS level
Hardening the Bootloader
The bootloader manages the loading of the OS and it is where an administrator can configure options
Hardening the bootloader will prevent
Modifying boot configurations
The loading of a malicious kernel
Hardening the Kernel
The Kernel manages the resources for the OS
Access to system resources (e.g. the CPU, Input/Output Devices, etc.)
Shares resources between multiple requests
Allocates and deallocates memory for running processes
Allocates devices based on requests for them
Hardening the Kernel protects the OS by
Preventing the misallocation of system resources
Preventing the misallocation of memory
Hardening Storage Devices
Data-at-Rest is
Files on a logical or physical storage device
Records in a database
NOT files in use (e.g. documents open in an editor)
NOT files in transit (e.g. html file moving from server to browser)
A solution for protecting data-at-rest is encryption
The individual file
The directory
The volume
Access Control Lists
Read/write/execute are the basic permissions for any file, and these are defined for the owning user, the owning group, and everyone else
Does not offer any more granularity than a single user, single group, and everyone
Access control Lists allow more granularity
ACLs allow for the assignment of multiple individual users and multiple groups
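For example, a minimal sketch of viewing and extending permissions with ACLs (the file, user, and group names are placeholders):
getfacl report.txt   # show the current owner, group, and ACL entries on the file
setfacl -m u:alice:rw report.txt   # grant an additional individual user read/write access
setfacl -m g:auditors:r report.txt   # grant an additional group read access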
Danger of World-writable file
Data in a world-writable file can be modified by any user
if the world-writable file is a script, anyone can modify it to do something malicious
World-writable configuration files make the scripts and services that use them vulnerable
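A quick way to audit for this, as a sketch, is to search local filesystems for world-writable regular files:
sudo find / -xdev -type f -perm -0002 -print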
Layers of defense
Firewalls and IPS/IDS as Endpoint Security
Role of Firewalls
Originally the term firewall referred to a fire-resistant wall between structures
It was designed to stop the spread of fire from one structure to another
The technological firewalls follow the same concept
Except a technological firewall can be configured to selectively allow traffic through
Network firewalls can block or allow inbound and outbound traffic
Host based firewalls work in the same manner (looking into the data directly related to that host)
Intrusion Detection and Intrusion Prevention System
An IDS monitors, while an IPS initiates control
Reasons behind host-based protection in addition to network-based protection
At a basic level, host-based firewalls and intrusion systems are defense-in-depth measures
A host firewall can protect the host and prevent a hacker from using it to his benefit
Blocking unwanted inbound traffic can stop certain types of attacks
Blocking outbound traffic means the computer cannot become a pivot point during an attack
Host-based IDS and IPS (HIDS and HIPS) focus on the individual machine
Rules based on behavior are more likely to detect unusual activity to and from the machine
Managing Services
Why to avoid legacy services and xinetd, and how to disable them
Legacy Services
These are older services that employ less secure means of interaction
FTP, TFTP, and TFTPD are three unencrypted means of transferring files
Telnet is an unencrypted means of connecting to a remote server
RSH is another unencrypted service for connecting to a remote server
HTTP is also unencrypted, but there are ways of redirecting traffic over HTTPS
The risk of Legacy Services
Competent hackers can leverage unencrypted transactions for account credentials
unscrupulous individuals and governments can eavesdrop on private communications
Hackers can insert malicious cookies or code into the stream
Hackers can copy the content of those streams as well
What is xinetd?
is a "Super server" running on many Unix-like system including Linux
it monitors all standard internet ports for traffic
when it receives a request for a particular services (i.e. http) it will start the appropriate service (apache or nginx) and direct the requests
Risk of xinetd?
xinetd has vulnerabilities dating back to 2000
it can expose all of its services and allow attackers to bypass intended access restrictions
hackers can exploit it and use it in a Denial of Service attack against the server
It does not enforce user and group directives, which causes services to be run by the root user
Once exploited, the adversary has access to the computer with elevated permissions
Why to disable or uninstall unused services
The fewer ports listening the better
Running services can be exploited to gain access to the server
if there is no reason to use the service, remove the service
Disabling a service stops it from listening, but someone can always start it again
Lifecycle Management
How to prepare workstations and servers for lifecycle retirement?
As resource demands rise it becomes necessary to replace hardware in order to keep up with the demand
It may also be more fiscally responsible to replace a computer rather than upgrade components to ensure compliance
Replacing equipment periodically also addresses compliance with OS requirements (e.g. TPM 2.0)
Life cycle consideration
Data backups
Information is money
Lost data can have a significant impact on the business
Therefore, it is important to make a good backup of data or copy files to a network drive
Wiping storage and Degaussing drives
Once the data on the computer has been backed up, it is important to wipe the internal drive clean
if the data on the machine is highly sensitive more drastic steps may be required
Deleting the contents off the drive
Overwriting the drive
Degaussing the drive
Destroying the drive
Destroying RAM and CPUs
When the sensitivity of the information is so severe that its release could cause grave harm to the business or nation, it may be necessary to go a step further
in these cases, the organization's policy or national regulations may require the destruction of the CPU and the RAM
Recommendations for Integrating Hardening Measures
Test Hardened Environment
Before implementing a plan on your network
Build the hardened OS in a VM
test the VM thoroughly
Get ready to roll out incrementally
The Phased Approach to Roll-out
Other considerations for rollout...
Leave the user's old computer in place during the rollout in case the user experiences issues with the new computer
identify one or two senior executives for Phase 4 updates
The roll-out to the majority of senior executives should take place in Phase 5
System Hardening in Practice
Hardening the hardware
What to protect in the BIOS/UEFI
There are several things we can protect at the BIOS or UEFI level
Secure BIOS or UEFI with a password
Disable booting from any drive but the computer's hard drive
Protecting the BIOS or UEFI is defense-in-depth
Protecting the BIOS or UEFI with a password prevents users from making changes
It provides redundancy for a similar change at the OS level
Functionality configured in the BIOS or UEFI cannot be overridden by the OS
Hardening the Bootloader
Role of the Bootloader
The bootloader is responsible for putting the OS into memory
It is the place where an administrator can configure different options for the OS
It is where the user can choose which configuration he needs to run
Dual-booting is made possible because of the bootloader
Different hardening steps
Securing the bootloader focuses on password protecting each of the boot options
It can be further secured by limiting access to each boot option based on the user accessing it
Password protect the bootloader
Prevent a random individual from starting the OS
Prevent a thief from even getting to a login prompt on a laptop
Password protecting each boot option
This allows any group of users to access the version of the OS they need to use
This allows for the configuration of services for each boot option
Limit user Permissions
Block users from modifying the bootloader configuration
Stops users from getting around the security controls in place on the computer
How to configure the Bootloader
Start with the /etc/default/grub file and the scripts in /etc/grub.d/
Make changes across the various files to build the bootloader configuration that fulfills your requirements
Once your edits are complete, run sudo update-grub
There are other bootloaders (e.g. LILO, the Linux Loader), but GRUB2 is the default bootloader for Ubuntu Linux
Password Protecting the Bootloader
Generate separate password hashes for the superuser and each user with grub-mkpasswd-pbkdf2
Edit /etc/grub.d/00_header
using your editor of choice
Note: You will have to start the editor using sudo
At the end of the file, add the following:
cat << EOF
set superusers="admin"
password_pbkdf2 admin HASH1
password_pbkdf2 user1 HASH2
password_pbkdf2 user2 HASH3
EOF
With the passwords stored it is time to configure the bootloader
You do not need to specify the superuser because the superuser can boot all images
On subsequent reboots, GRUB will give you the option to choose which OS to run and ask for the username and associated password
Edit /boot/grub/grub.cfg
For each menuentry block, specify which users can use that option with the --users parameter
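A minimal sketch of what a protected menu entry might look like (the entry title and username are placeholders; the commands inside the braces are whatever update-grub generated):
menuentry 'Ubuntu' --users user1 {
        # ...existing boot commands generated by update-grub remain unchanged...
}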
Securing the Kernel
What the kernel is and to keep it up to date
Refer to section 2
What kernel parameters are and how to configure them
sysctl enables an administrator to set kernel parameters until the next reboot of the machine. This is useful for testing how a change in settings will impact the usability of the server.
Settings edited in the sysctl.conf file (found in the /etc/ directory) remain persistent even after a reboot
It is better to create a file in the /etc/sysctl.d directory rather than editing sysctl.conf directly
These are also persistent changes
It is the better method because a future update can reset the sysctl.conf file
If a setting does not have the desired effect, check to see if there is a conflicting setting elsewhere (sysctl.conf or one of the files in the /etc/sysctl.d directory)
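A minimal sketch of both approaches; the parameter shown (disabling ICMP redirects) and the drop-in file name are illustrative choices:
sudo sysctl -w net.ipv4.conf.all.accept_redirects=0   # temporary change, lost at the next reboot
echo 'net.ipv4.conf.all.accept_redirects = 0' | sudo tee /etc/sysctl.d/60-hardening.conf   # persistent drop-in file
sudo sysctl --system   # reload settings from all configuration files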
Hiding process from other Users
By default, a user can see all processes running on the server
It can reveal too much information to a hacker
A User should only be able to see the processes his account owns
To change this behavior, edit the /etc/fstab file
Open the file in your own favorite editor
Add the hidepid=2 parameter so the line reads: proc /proc proc hidepid=2 0 0
Remount the /proc volume: sudo mount -o remount /proc
It is important to protect the system from internal and external threats
Control Groups
Relatively new (2010) to Linux, control groups (aka cgroups) allow the kernel to limit and isolate the resources (CPU, memory, I/O) given to groups of processes
Administrators can configure how many resources to give to each service independently
This also reduces the risk that the failure of one process impacts another
The customization is configured through the systemctl command
sudo systemctl set-property httpd.service CPUQuota=40% MemoryLimit=500M
This command appends these changes to existing files in the /etc/systemd/system.control/ directory or creates new files
This command can also be used to place resource limits on user accounts
Namespace Isolation
Namespaces were introduced back in 2002
Namespaces are used to assign resources to a process and other processes cannot see those resources
Seven of the better-known namespaces:
Mount (mnt) - where the process has its own root filesystem
Process ID (pid) - PID namespace provides processes with an independent set of process IDs
UTS - each process can have its own unique hostname and domain name
Network (net) - Allows for the creation of a virtual network
Interprocess Communications (ipc) - prevent data leakage
Control Group (cgroup)
User - allows users to have different levels of permissions in different processes
Role of Exec Shield, AppArmor, and Security Enhanced Linux (SELinux)
Exec Shield, originally designed by Red Hat, is designed to protect the system against multiple types of overflow:
Buffer
Stack
Function Pointer
AppArmor is a Linux kernel security module that allows the system administrator to restrict programs capabilities with per-program profiles
Security Enhanced Linux (SELinux) provides a mechanism for supporting access control security policies, including mandatory access control
These features are enabled by default in many Linux distributions including Ubuntu.
How to disable the key combination "Ctrl-Alt-Delete"
Users logged in at the server can use this well-known key combination to reboot it
Unlike the init command, this option does not require root permission
Therefore, disabling Ctrl-Alt-Delete prevents an unauthorized reboot of the server
The command to disable it is as follows:
sudo systemctl mask ctrl-alt-del.target && sudo systemctl daemon-reload
The Second command enables the change without a reboot
if you find you have to undo this change the command is
sudo systemctl unmask ctrl-alt-del.target && sudo systemctl daemon-reload
Securing Storage Devices
How to partition disks, and encrypt volumes
Historically, it was a best practice to create separate volumes for many of the key locations Linux uses to store files
Root (/) - where the bulk of the OS resides
/boot - contains the kernel images and other files for starting the OS
Once the OS is loaded and running, the volume is no longer used. However, there is the threat of a malicious individual compromising the computer by modifying the contents of the directory
The simplest solution is to mount the /boot volume read-only
CAVEAT: Mounting the /boot volume read-only prevents automated updates from updating the kernel.
During installation, creating the boot volume is easy
It is more complicated when creating one on a machine that is already operational
If there is no unallocated storage, one has to shrink an existing volume with fdisk and resize2fs
The boot volume requires between 512MB and 1GB
Warning: Before manipulating volumes, take the time to determine if the benefit outweighs the cost if there are issues
Recommendation: Perform a complete backup of the computer before manipulating volumes
/var - logs, databases, and websites are stored here by default
/home - users' directories
swap - disk storage used for memory swapping
Today the default for the Ubuntu install creates only two partitions
Root (/)
swap
Encrypting a Volume
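A minimal sketch of encrypting a non-root data volume with LUKS (the device name and mount point are placeholders):
sudo apt install cryptsetup   # install the LUKS tooling if it is not already present
sudo cryptsetup luksFormat /dev/sdb1   # initialize LUKS encryption on the partition (destroys existing data)
sudo cryptsetup luksOpen /dev/sdb1 secure_data   # unlock it as /dev/mapper/secure_data
sudo mkfs.ext4 /dev/mapper/secure_data   # create a filesystem on the unlocked device
sudo mount /dev/mapper/secure_data /mnt/secure   # mount it for use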
How to make the boot partition read-only
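A minimal sketch, assuming /boot is already a separate partition (the device name is a placeholder):
/dev/sda1  /boot  ext4  defaults,ro  0  2   # /etc/fstab entry mounting /boot read-only
sudo mount -o remount,ro /boot   # apply without a reboot
sudo mount -o remount,rw /boot   # temporarily allow writes before a kernel update, then remount read-only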
Blocking Unwanted activities and Traffic
Install and configure IPTables, and Intrusion Detection Systems (IDS)
Take steps for preventing Denial of Service attacks
Minimizing the OS attack surface
Network Hardening at the Host
System Administration Hardening
Testing, Monitoring, and Reviewing
Log Management
Service Hardening in Practice
General handling of services
Disable and remove services
Reducing the attack surface of a system includes the disabling and uninstalling of services
to temporarily disable a service
sudo systemctl stop network.service
In a circumstance where you need to restart the computer but do not want the service to start
To prevent a service from starting on a reboot
sudo systemctl disable nginx.service
When you no longer need a service installed on a server
To remove a service
sudo apt remove nginx -y
to remove a service together with its configuration files (follow with sudo apt autoremove to remove dependencies that are no longer needed by other applications)
sudo apt purge nginx
Services to consider disabling
Whenever you configure a new server or audit an existing one, it is a good practice to evaluate the services running on it and disable and/or remove services that are not needed
DNS - In an environment where you have dedicated DNS servers you can uninstall DNS
SSHD - if users are not connecting to the server via SSH or using SFTP and SCP then consider disabling this service
SMB - If the server is not sharing directories via SMB this service can be disabled
All Legacy Services - Services like telnet and rsync use plaintext and there are more secure options
Security Benefits of splitting the network services
Containerizing Services
Containerizing services provide several security benefits:
Transparency - an administrator can easily look inside a container to see what runs inside it
Modularity - It is easier to isolate vulnerabilities without disrupting other elements
Smaller Attack Surface - unlike a virtual server scenario where you have to harden two operating systems, you only have the host's OS, the Docker daemon, and the containerized application
Easy Updates - Updating a container is as easy as pulling the latest image for it
Environment Parity - No matter what OS the host runs, the container runs on it and runs the same way on all hosts
Securing CRON jobs
CRON jobs are time-triggered scripts for automation
they can serve many different functions
Rotate or offload logs to another server
Restart Services, manage backups ...
These scripts are run as the user who added them to the cron process
You can allow or deny users access to cron by editing /etc/cron.allow and /etc/cron.deny and adding the user's name to the respective file
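A minimal sketch of restricting cron to specific users and reviewing a user's scheduled jobs (the username is a placeholder):
echo 'deploy' | sudo tee -a /etc/cron.allow   # only users listed in cron.allow may schedule jobs
sudo crontab -l -u deploy   # review the jobs that user has scheduled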
Employ best practices for timeout setting for interactive shell sessions
It is good practice to set a timeout for terminal sessions
This can be done in each user's profile (/home/user/.bashrc)
the default value can be set in /etc/bash.bashrc or /etc/profile (see the sketch after this list)
the key=value pair is TMOUT=VALUE (in seconds)
Setting TMOUT to 0 disables the timeout
This timeout affects users logged in at the terminal
There are conflicting requirements to consider
A short timeout may lead to orphaned processes
A long timeout could present an opportunity: if someone forgets to log out, a malicious person could hijack the session
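A minimal sketch of a system-wide default; the 900-second (15-minute) value and the drop-in file name are illustrative:
# /etc/profile.d/timeout.sh
TMOUT=900
readonly TMOUT   # prevent users from unsetting the timeout in their own sessions
export TMOUT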
The usefulness of Login Banners
Login Banners are there to present information to a user upon login
This message of the day (motd) is fully configurable
The default message in ubuntu provides notification about the patch status of the host
While useful to the administrators and engineers, it is information most users don't need
It can be dangerous in the hands of a malicious or inexperienced individual
Changing the message to a warning is a good practice
While it will not deter a determined malicious individual, it may deter the less experienced
It is simply a passive measure
By changing the motd, particulars that could be useful to a threat actor are kept from him
The motd file is stored in the /etc/ directory
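One possible approach, as a sketch: replace the default message with a warning banner and silence the dynamic scripts that add system details (the wording is illustrative):
sudo chmod -x /etc/update-motd.d/*   # Ubuntu assembles the motd from these scripts; removing execute permission silences them
echo 'Authorized use only. Activity may be monitored and reported.' | sudo tee /etc/motd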
Hardening of public-facing services
Redirect plaintext communications to encrypted channels or how to require encrypted communications for public-facing services
There are risks with unencrypted communications
It allows an adversary to gain information by monitoring the plaintext traffic
It also allows adversaries to gain credentials
In order to harden web-based applications traffic to and from the webserver should be encrypted
This can be done natively by the webserver
make sure the connections use stronger encryption and disable older standards
This approach applies the same logic we use for legacy services
It can also be done through firewall forwarding rules
Consider enforcing encrypted communication for back end communications as well
require an encrypted connection between the webserver and database server
Use the same approach with the tools developers use to work on the web application
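A minimal sketch of the redirect done natively by the webserver, shown here for nginx (the server name is a placeholder):
server {
    listen 80;
    server_name www.example.com;
    return 301 https://$host$request_uri;   # permanently redirect all plaintext requests to the encrypted site
}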
Explain the importance of closing unused ports
Finding and closing open ports starts with discovery
NOTE: netstat is a deprecated program
ss is the replacement for netstat
The syntax for ss is as follows
sudo ss -tulpn | grep LISTEN
t Show only TCP sockets on Linux
u Display only UDP sockets on Linux
l Show listening sockets
p list process name that opened sockets
n don't resolve service names
You can match a service to a port by referencing /etc/services
Start by adding a firewall rule to drop any traffic to the port
This is not the end of it, however
if this port is not open as part of your network plan then further research needs to be done
once you find which service runs on the port in question stop and disable the service
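A minimal sketch of those steps (port 111 and the rpcbind service are illustrative placeholders):
sudo iptables -A INPUT -p tcp --dport 111 -j DROP   # stop traffic to the port immediately
sudo systemctl stop rpcbind.service   # stop the service that owns the port
sudo systemctl disable rpcbind.service   # keep it from starting again at the next boot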
Hardening of SSH services
Steps to limit user access to SSH
If users do not require SSH access you should restrict their access
Edit the sshd_config file
sudo vi /etc/ssh/sshd_config
Find the PermitRootLogin parameter
Change the value from yes to no
Restrict other users and groups that do not need access
DenyUsers user1 user2 user3
DenyGroups group1 group2 group3
While SSH cannot block networks natively, you can configure iptables to block unwanted traffic
Limiting SSH functionality
SSH is a robust communications service but most users do not need this functionality
Just as we apply the least privileges to what permissions a user has, an administrator should consider restricting functionality the user does not use
Some functionality to consider disabling
Port forwarding - this allows a user to connect to one computer and then tunnel another service through SSH
X11 forwarding - unless users are tunnelling X11 through SSH this should be blocked
Agent forwarding
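A minimal sketch of the corresponding /etc/ssh/sshd_config entries (restart ssh afterwards for them to take effect):
AllowTcpForwarding no
X11Forwarding no
AllowAgentForwarding no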
How to disable password-based login and employ asymmetric encryption for authentication
An alternative to logging in with a password is using public-key (PKI) authentication to authenticate users
Once established, only those with the matching private key can actually authenticate themselves with the server
Keys can be created with a passphrase creating a 2FA scenario
It is a good practice to configure the server to allow PKI authentication and ensure it works before disabling password authentication
The following steps will enable PKI authentication
Open the ssh configuration file with an editor of your choice
sudo vi /etc/ssh/sshd_config
Search for PubKeyAuthentication and change the value to yes
Restart ssh with
sudo systemctl restart ssh
Once everyone has their private key edit sshd_config to disable password logins
change PasswordAuthentication value to no
After saving the change restart ssh with
sudo systemctl restart ssh
Generating Certificates
With PKI Authentication enabled the next step is to generate a user's keypair
Start by executing ssh-keygen
or ssh-keygen -N <passphrase>
When prompted, provide the path for saving the key (/home/<user_name>/.ssh/id_rsa)
Copy the public key into the authorized_keys file
cat /home/<user_name>/.ssh/id_rsa.pub >> /home/<user_name>/.ssh/authorized_keys
Provide the user with his private key (id_rsa) and the passphrase so he can use it to log in
How to block connections from a specific network
While SSH cannot reject connections from networks natively, you can configure iptables to do the job for SSH
The best option to achieve this is to first accept packets from those who are allowed to access the server: iptables -A INPUT -p tcp --dport 22 --source 192.168.0.0/24 -j ACCEPT
Then block everyone else: iptables -A INPUT -p tcp --dport 22 -j DROP
Employ DenyHosts to block brute force attacks against SSH
There are two separate changes to ssh regarding hosts
Use DenyHosts to block requests from an individual host
It maintains entries in /etc/hosts.deny of the form: sshd: <ip_address>
A defense-in-depth approach should include a Firewall rule on the host dropping packets
Explain how and why to block host-based authentication and obsolete rsh functionality
Another host based change to make is to disable host based authentication
Host-based authentication is dangerous because anyone can be at the client machine's keyboard
Host-based authentication was the standard for connectivity with rsh
Disabling it is as easy as adding the following two lines to the sshd_config file
Host *
HostbasedAuthentication no
Account Hardening in Practice
User Password Authentication Requirements
Enhance password security with Pluggable Authentication Modules
Pluggable Authentication Modules (PAM) is a framework for implementing modular authentication
It is loaded and executed when a program needs to authenticate a user
Configurations for PAM are located in several places, including the /etc/pam.d directory
What services does PAM provide
Password Quality
Password Attempts before lockout
Password History
Stronger hashing by replacing md5 with SHA512
provides the means to prevent brute force attack
Install PAM with sudo apt install libpam-pwquality libpam-modules
Settings can be configured in /etc/pam.d/common-auth
the line to configure PAM starts with: auth required pam_tally2.so (see the assembled example below)
Options include
onerr=fail
audit
silent
deny=n (maximum consecutive failures)
unlock_time=n (n is the number of seconds the account remains locked)
To unlock an account locked by failed consecutive attempts
/sbin/pam_tally2 -u <username> --reset
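A minimal sketch of the assembled line in /etc/pam.d/common-auth (the deny and unlock_time values are illustrative):
auth required pam_tally2.so onerr=fail audit silent deny=5 unlock_time=900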
Enumerate the benefits of strong passwords, two-factor authentication, and password management
PAM gives you the capability to configure strong password requirements
The parameters are set in /etc/security/pwquality.conf
The following parameters are ones you can set in pwquality.conf (assembled in the sketch after this list)
Minimum Length (minlen = 14)
password complexity
minclass = 4 or
dcredit = -1 (1 digit)
ucredit = -1 (1 uppercase character)
lcredit = -1 (1 lowercase character)
ocredit = -1 (1 symbol)
You must also ensure two settings are included in /etc/pam.d/common-account
account requisite pam_deny.so
account required pam_tally2.so
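A minimal sketch of the corresponding /etc/security/pwquality.conf entries, using the values listed above:
minlen = 14
# either require all four character classes...
minclass = 4
# ...or require at least one of each class explicitly
dcredit = -1
ucredit = -1
lcredit = -1
ocredit = -1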
Restrict users from using old passwords
Password reuse limitations prevent users from constantly reusing the same 2 or 3 passwords
users will go so far as to change their passwords multiple times in a single session in order to get back to their original password
Edit the /etc/pam.d/common-password file to set how many passwords the machine should remember
The syntax is: password required pam_pwhistory.so remember=n
where n is the number of previous passwords to remember
Another setting outside of PAM, defines how frequently a user can change his password
This addresses those who would rapidly change their password to get back to the one they want to keep
Edit /etc/login.defs
Change PASS_MIN_DAYS n
where n is the number of days that must pass before the user can change his password again
NOTE: CIS recommends no less than 1 day between password changes
Manage password expiration and maintain user accounts and password policies
Another setting, like PASS_MIN_DAYS, defines how long a user can keep the same password before being required to change it
Edit /etc/login.defs
Change PASS_MAX_DAYS n
where n is the number of days a user can keep using a password before having to change it
Password longevity should be greater than the number of days between password changes and less than or equal to 365
A related setting is PASS_WARN_AGE
It sets the number of days before expiration at which the user receives an alert that his password is about to expire
Another password management tool allows for locking accounts after they have been inactive for n days
CIS recommends 30 days
This parameter is set using the following command
useradd -D -f n
where n is the number of days
Setting it to -1 disables this feature and is the default
Multifactor Authentication
Multi-factor Authentication is a method of using two or more means of authentication to verify that the user is who he claims to be
These means of authentication are
something I know - his password or pin
Something I have - a phone or access card
Something I am - fingerprints, hand geometry, facial geometry
Something I do - Typing DNA, handwriting
Somewhere I am - geofencing, GPS
Each factor must fall into a different category
BAD: a password and a pin are both something I know
GOOD - Fingerprint and a phone
Describe the function of Kerberos and how to utilize it
Kerberos is a security protocol implemented by numerous OS including Linux
A trusted third party for authenticating client-server applications and verifying users' identities
It uses secret-key cryptography
An open-source protocol and the go-to protocol for Single-Sign-On (SSO)
First developed at MIT
The use of key pairs and the instances where they can be employed
Asymmetric key pairs
Also known as public-key cryptography
consists of two keys (1 public and 1 private)
the public key can be used by anyone to encrypt something for you but only you can decrypt it
The private key can be used to sign something and everyone can confirm it was signed by you and no one else
It is used in many functions
HTTPS (both SSL and TLS)
Digital signatures
Login authentication
Account Management Requirements
Validate the User IDs of non-root Users
The root account is often referred to as a Super User
The root user does not need to be called "root"
The superuser is the user with a user ID of zero (0)
Each user should have a unique user id
when we validate the user id we ensure each user has a unique id
The ID number is what the system uses when it evaluates permissions
If someone manipulates the /etc/passwd file:
and changes their user ID to match another user's, they will effectively have the same permissions as that other user based on the user ID
This kind of manipulation can be dangerous.
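Two quick audit checks, as a sketch: one for accounts other than root with UID 0, and one for UIDs shared by multiple accounts:
awk -F: '($3 == 0) {print $1}' /etc/passwd   # only root should be listed
cut -d: -f3 /etc/passwd | sort | uniq -d   # any output means a UID is assigned to more than one account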
Lock and unlock account manually
There are scenarios where you will find it necessary to lock a user's account
User is on extended leave
Prior to user termination
Locking the account in Ubuntu is as easy as:
usermod -L username
passwd -l username
Unlocking the account in Ubuntu is as easy as:
usermod -U username
passwd -u username
Checking accounts for empty passwords
Password complexity requirements should prevent the use of empty passwords, but it is better to be safe than sorry
passwords are stored in /etc/shadow and each line represents a single user
The field in each line is separated by a colon : and the second field contains the hash of the password
An account with an empty password will have two colons following the username
A faster way to find these is to use the following command on the command line
passwd -S -a
or
passwd --status -a
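Another quick check, a sketch using awk to print any account whose password field in /etc/shadow is empty:
sudo awk -F: '($2 == "") {print $1}' /etc/shadow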
Reviewing accounts
The concept of least privileges
The principle that users and programs should only have the necessary privileges to complete their tasks
Use groups for assigning permissions to users
There are several reasons why we assign privileges to groups and not individual users
simplifies account management
Simplifies audits and reviews
Reduces the risk of error
Assigning users to groups
In order to give users the permissions they need, assign them to the appropriate groups (see the sketch after this list)
This holds true for temporary assignments and/or projects
Ensures least privileges as long as groups are properly maintained
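A minimal sketch (the group and user names are placeholders):
sudo groupadd project_x   # create a group for the project
sudo usermod -aG project_x alice   # add the user without removing existing group memberships
groups alice   # verify the user's group associations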
Audit user permissions and group associations and monitor user activity
Periodic auditing and monitoring of activity on a server is important for security
Reviews and Audits
Log Monitoring