From the course: CompTIA Security+ (SY0-601) Cert Prep: 9 Operations and Incident Response

Logging security information


- System monitoring creates massive amounts of data output that cybersecurity analysts must wade through when attempting to conduct log reviews as part of an incident response effort. Fortunately, monitoring technology provides ways for us to automate some of this work. Log files come from a variety of different sources, and each of these data sources contains information that may be useful in incident response. Network logs, and in particular NetFlow data, tell us about the systems on a network that communicated with each other and the amount of information that they exchanged. This can be crucial in identifying systems involved in a security incident. Similarly, DNS logs provide information about network name lookups, offering insight into which systems may have communicated with external systems. System logs provide insight into the inner workings of the operating system, recording security events and other activity on the system that might reveal details of an attack. On Windows systems, these are called event logs. Application logs provide similar information about activity that occurred at the application level, including application logins and access to data. Web application logs might tell us about SQL injection attacks or other malicious activity. Authentication logs help us determine who may have used a centralized authentication service and what internal and external systems and applications they accessed through that service. Other specialized log files also play important roles. For example, VoIP and CallManager logs provide insight into the nature of traffic on the network using SIP, the Session Initiation Protocol. In addition to these log files, cybersecurity investigators may make use of dump files from network traffic, memory analyzers, and other sources of raw security information. Vulnerability scan output can also assist in incident response, providing clues about what vulnerabilities attackers may have targeted on a system.
One of the most important technologies that supports log monitoring is a protocol called syslog. Syslog has been around for a long time. It actually dates back to the 1980s, but variants of it are still in widespread use today. The syslog standard defines a very simple format that's used to create standardized log messages. Each message consists of four components. The header is the first component. The header contains information about the time and source of the message. This includes a timestamp as well as the IP address and process ID that originated the log entry. The facility is a code that describes where the message came from, using a number between 0 and 23. For example, facility 0 indicates that the message came from the kernel, facility 1 indicates a user-level message, and facility 2 indicates that the message came from the mail service. The third component is the severity, which indicates how serious a message is. I'll explain this more in just a moment. And the fourth component is the message itself. This is where the process that creates the log entry can include information that explains the purpose of the message. Now, I just told you that the severity level of a syslog message ranges from 0, an emergency, down to 7, a debug message. As the number gets higher, the severity of the message decreases. It's common to use severity as a filter when analyzing syslog messages. For example, I might set an alarm to notify administrators when a log server receives a syslog message with severity 2 or lower, indicating that the situation is critical or worse. This chart shows the definitions of each syslog severity level. Syslog is supported by default on all Linux systems, and it's the de facto standard for sending and receiving log messages between applications, systems, and devices. Syslog forms the foundational core of many security tools, resource monitoring tools, performance managers, and other security services.
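The facility and severity described above are combined into the priority value that leads each syslog message, where the priority equals the facility times 8 plus the severity. Here is a minimal Python sketch of decoding that value and applying the "severity 2 or lower" alarm filter from the example; the function and variable names are illustrative, not part of any standard library.

```python
# Severity names 0 (Emergency) through 7 (Debug), as defined by syslog.
SEVERITIES = [
    "Emergency", "Alert", "Critical", "Error",
    "Warning", "Notice", "Informational", "Debug",
]

def decode_priority(pri: int) -> tuple[int, int]:
    """Split a syslog priority value into (facility, severity).

    Priority is encoded as facility * 8 + severity.
    """
    return pri // 8, pri % 8

def is_critical_or_worse(severity: int) -> bool:
    """Alarm filter: severity 2 (Critical) or lower means critical or worse."""
    return severity <= 2

# Facility 0 (kernel), severity 2 (Critical) -> priority 0*8 + 2 = 2
facility, severity = decode_priority(2)
print(facility, SEVERITIES[severity])   # prints "0 Critical"

# Facility 1 (user-level), severity 7 (Debug) -> priority 1*8 + 7 = 15
facility, severity = decode_priority(15)
print(is_critical_or_worse(severity))   # prints "False"
```

Because lower numbers mean higher severity, the filter uses "less than or equal to" rather than "greater than or equal to" when it checks for serious messages.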
Now there are actually three different versions of syslog out there. The original syslog is actually rarely used today because it's been replaced by newer standards. The first of these, syslog-ng, added encryption and reliable delivery to syslog in 1998. And this was further enhanced by the rsyslog standard in 2004. Today, most Linux systems support either syslog-ng or rsyslog. Now, if that isn't confusing enough, there is another tool out there called journalctl, which queries the log entries that systemd's journal service stores on Linux systems in the journal format. While syslog uses text-based logging, the journal format uses binary files. As we manage log entries, one of the most important considerations is the retention of those logs. Logs take up space and we generally don't keep them forever, but we also want to preserve them long enough that we have them if we need to dig back into history as part of an investigation. Log retention decisions should be made deliberately, and they have to balance security needs with the cost of maintaining logs. Tagging is another important log management concept. We can tag log entries with different fields, such as the name of the application generating the log, the user involved, and other metadata. These tags make it easier to sort and filter logs during analysis. NXLog is a cross-platform log management tool, allowing the collection of records from syslog sources as well as Windows systems and other devices. NXLog moves us towards the concept of a centralized logging system, which is the subject of our next video.
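The tagging idea above can be sketched with the Python standard library's logging module: each entry carries extra metadata fields (an application name and a user, in this hypothetical example) that can later be used to sort and filter. The field names `app` and `user` are illustrative assumptions, not a logging standard.

```python
import logging

# Format string pulls the custom "app" and "user" tags into each entry.
formatter = logging.Formatter(
    "%(asctime)s app=%(app)s user=%(user)s level=%(levelname)s %(message)s"
)
handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# The `extra` dict attaches the tags to each individual log record.
logger.info("login succeeded", extra={"app": "webportal", "user": "alice"})
logger.warning("repeated login failures", extra={"app": "webportal", "user": "bob"})
```

In a real deployment the same records could be shipped to a syslog server (for example, via `logging.handlers.SysLogHandler`) instead of being printed, but the tagging pattern is the same.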
