Use detective controls to identify a potential security threat or incident. They are an essential part of governance frameworks, supporting quality processes, legal and compliance obligations, and threat identification and response efforts. There are several types of detective controls. For example, conducting an inventory of assets and their detailed attributes promotes more effective decision making (and lifecycle controls) and helps establish operational baselines. Another option is internal auditing, an examination of controls related to information systems, which verifies that practices meet policies and requirements and can drive automated alerting based on defined conditions. These controls are important reactive factors that help identify and scope anomalous activity.
There are a number of approaches to consider when addressing detective controls:
- Capture and analyze logs
- Integrate auditing controls with notification and workflow
Capture and Analyze Logs
In traditional data center architectures, aggregating and analyzing logs typically requires installing agents on servers, carefully configuring network appliances to direct log messages at collection points, and forwarding application logs to search and rules engines. Aggregation in the cloud is much easier due to two capabilities.
First, asset management is easier because assets and instances are described programmatically without depending on agent health. For example, instead of manually updating an asset database and reconciling it with the real install base, Devek reliably gathers asset metadata with just a few API calls. This data is far more accurate and timely than using discovery scans, manual entries into a configuration management database (CMDB), or relying on agents that might stop reporting on their state.
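The difference can be sketched in a few lines: instead of reconciling a manually maintained CMDB, the inventory is built directly from a programmatic API response. The response shape below is hypothetical, standing in for a cloud provider's describe-instances-style call.

```python
# Minimal sketch: building an asset inventory from a control-plane API
# response rather than agents or discovery scans. The response format
# here is illustrative, not a specific provider's API.

def build_inventory(api_response):
    """Flatten an API response into asset records keyed by instance ID."""
    inventory = {}
    for instance in api_response["instances"]:
        inventory[instance["id"]] = {
            "type": instance["type"],
            "state": instance["state"],
            "tags": instance.get("tags", {}),
        }
    return inventory

# Simulated API response; in practice this would come from one API call.
response = {
    "instances": [
        {"id": "i-0a1", "type": "m5.large", "state": "running",
         "tags": {"env": "prod"}},
        {"id": "i-0b2", "type": "t3.micro", "state": "stopped"},
    ]
}

inventory = build_inventory(response)
```

Because the data comes from the control plane, a stopped or unreachable instance still appears in the inventory, which is exactly what agent-based reporting cannot guarantee.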
Second, use a native, API-driven service to collect, filter, and analyze logs instead of maintaining and scaling the logging backend yourself. Pointing application logs at a bucket in an object store, or directing events to a real-time log processing service, means spending less time on capacity planning and availability of the logging and search architecture.
The best practice is to customize the delivery of cloud API calls and other service-specific logging to capture API activity globally and centralize the data for storage and analysis. A central system allows events to be obtained in a consistent format across compute, storage, and applications.
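Centralizing events in one schema can be illustrated with a small normalization step. The two source formats below are hypothetical examples of compute and storage logs arriving with different field names.

```python
# Minimal sketch of normalizing logs from multiple services into one
# consistent event schema for central storage and analysis. The input
# field names are illustrative assumptions, not a real log format.

def normalize(event, source):
    """Map a source-specific log record onto a shared event schema."""
    if source == "compute":
        return {"time": event["timestamp"], "actor": event["user"],
                "action": event["api_call"], "source": source}
    if source == "storage":
        return {"time": event["eventTime"], "actor": event["principal"],
                "action": event["operation"], "source": source}
    raise ValueError(f"unknown source: {source}")

events = [
    normalize({"timestamp": "2024-05-01T10:00:00Z", "user": "alice",
               "api_call": "StartInstance"}, "compute"),
    normalize({"eventTime": "2024-05-01T10:05:00Z", "principal": "bob",
               "operation": "GetObject"}, "storage"),
]
```

Once every event shares the same keys, downstream search and alerting rules can be written once rather than per service.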
Equally important to collecting and aggregating logs is the ability to extract meaningful insight from the large volumes of log and event data generated by modern, complex architectures. Don't simply generate and store logs. The information security function needs robust analytics and retrieval capabilities to provide insight into security-related activity. Multiple tools provide intelligent threat detection by continuously monitoring events and generating notifications.
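As a toy example of the kind of analytic a detection tool runs continuously, the sketch below flags principals with repeated failed logins. The threshold and field names are illustrative assumptions, not a real product's rules.

```python
from collections import Counter

# Minimal analytics sketch: flag actors with repeated failed logins.
# Event shape and the threshold of 3 are illustrative assumptions.

def failed_login_alerts(events, threshold=3):
    failures = Counter(e["actor"] for e in events
                       if e["action"] == "ConsoleLogin" and not e["success"])
    return [actor for actor, count in failures.items() if count >= threshold]

events = [
    {"actor": "mallory", "action": "ConsoleLogin", "success": False},
    {"actor": "mallory", "action": "ConsoleLogin", "success": False},
    {"actor": "mallory", "action": "ConsoleLogin", "success": False},
    {"actor": "alice", "action": "ConsoleLogin", "success": True},
]
alerts = failed_login_alerts(events)
```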
Integrate Auditing Controls with Notification and Workflow
Security operations teams rely on the collection of logs and the use of search tools to discover potential events of interest, which may indicate unauthorized activity or unintentional change. However, simply analyzing collected data and manually processing information is insufficient to keep up with the volume of information flowing from modern, complex architectures. Analysis and reporting alone don't get the right resources assigned to an event in a timely fashion. A best practice for building a mature security operations team is to deeply integrate the flow of security events and findings into a notification and workflow system such as a ticketing system, a bug/issue tracker, or a security information and event management (SIEM) system. This takes the workflow out of email and static reports, allowing the user to route, escalate, and manage events or findings. Many organizations are now also integrating security alerts into their chat/collaboration and developer productivity platforms.
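The routing decision itself can be very small. In the hedged sketch below, `create_ticket` and `post_to_chat` are hypothetical stand-ins for a real ticketing or chat integration; the point is that findings land in a tracked workflow rather than an inbox.

```python
# Sketch of routing a security finding into a workflow system instead
# of email. create_ticket and post_to_chat are hypothetical callbacks
# standing in for real ticketing/chat integrations.

def route_finding(finding, create_ticket, post_to_chat):
    """High-severity findings open a tracked ticket; the rest go to chat."""
    if finding["severity"] in ("critical", "high"):
        return create_ticket(finding)
    return post_to_chat(finding)

tickets, chat = [], []
route_finding({"id": "F-1", "severity": "high", "title": "Public bucket"},
              tickets.append, chat.append)
route_finding({"id": "F-2", "severity": "low", "title": "Stale image"},
              tickets.append, chat.append)
```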
This best practice applies not only to security events generated from log messages depicting user activity or network events, but from changes detected in the infrastructure itself. The ability to detect change, determine whether a change was appropriate, and then route this information to the correct remediation workflow is essential in maintaining and validating a secure architecture.
Several tools and services assist with routing events of interest, and information reflecting potentially unwanted changes, into a proper workflow. Build rules that parse events, transform them if necessary, and then either route them as notifications or trigger actions based on them.
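The parse/transform/route pattern just described can be sketched as a tiny rule engine. All rule contents here are illustrative assumptions.

```python
# Minimal rule-engine sketch: each rule has a match predicate, an
# optional transform, and a routing target. Rules shown are examples.

RULES = [
    {"match": lambda e: e.get("type") == "config_change",
     "transform": lambda e: {**e, "priority": "review"},
     "target": "workflow"},
    {"match": lambda e: e.get("type") == "login_failure",
     "transform": None,
     "target": "notification"},
]

def route(event, rules=RULES):
    """Return (target, event) for the first matching rule, else ignore."""
    for rule in rules:
        if rule["match"](event):
            if rule["transform"]:
                event = rule["transform"](event)
            return rule["target"], event
    return "ignore", event

target, enriched = route({"type": "config_change", "resource": "sg-123"})
```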
Reducing the number of security misconfigurations introduced into a production environment is critical; implementing more checks in the build process allows for better quality control and fewer defects. Modern continuous integration and deployment (CI/CD) pipelines are designed to test for security issues whenever possible. Several tools and services perform configuration assessments for known Common Vulnerabilities and Exposures (CVEs), assess compute instances against security benchmarks, and fully automate the notification of defects. Route any findings to backlogs and bug-tracking systems.
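A build-time configuration check of this kind can be as simple as the sketch below, which fails a hypothetical pipeline when a security-group-style rule opens SSH to the world. The config structure is a simplified assumption, not any provider's real format.

```python
# Sketch of a CI/CD configuration check: flag ingress rules that open
# SSH (port 22) to the whole internet. The config shape is illustrative.

def find_misconfigurations(config):
    findings = []
    for rule in config.get("ingress_rules", []):
        if rule["port"] == 22 and rule["cidr"] == "0.0.0.0/0":
            findings.append(f"SSH open to the world in {rule['name']}")
    return findings

config = {"ingress_rules": [
    {"name": "web", "port": 443, "cidr": "0.0.0.0/0"},
    {"name": "admin", "port": 22, "cidr": "0.0.0.0/0"},
]}
findings = find_misconfigurations(config)
# In a pipeline, a non-empty findings list would fail the build and
# file the defects into the bug-tracking system.
```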