Logging
Level
Using log levels, a security analyst can quickly judge the priority and nature of each log entry
A minimum priority can be set to filter out less important logs
The common logging levels, in ascending priority, are DEBUG, INFO, WARN, ERROR, and FATAL
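As a concrete illustration, Python's standard logging module assigns each level a numeric priority (DEBUG=10, INFO=20, WARNING=30, ERROR=40, CRITICAL=50) and drops any record below the configured minimum. A minimal sketch using only the standard library:

```python
import io
import logging

# Capture output in memory so we can inspect what was kept.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.WARNING)  # minimum priority: drop DEBUG and INFO

logger.debug("connection pool stats")       # dropped (10 < 30)
logger.info("user logged in")               # dropped (20 < 30)
logger.warning("disk usage at 85%")         # kept
logger.error("failed to write audit log")   # kept

print(buffer.getvalue())
```

Only the WARNING and ERROR lines reach the handler, which is exactly the filtering behaviour described above.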
System Design
The logging system can be divided into several main components
Log accumulator
Agents can be placed on servers or devices. They collect logs on the devices themselves and then send the collected logs to a central logging system
Log aggregator
To stream-process large volumes of log data from different sources
To consolidate logs from different sources into a centralized location. This centralization simplifies log management and ensures that all logs are stored in a unified manner, making them easier to search and analyze
To label the logs and store them in storage
To integrate with analysis tools, handle queries from the frontend, and fetch results from storage or cache
To support custom monitoring and alerting rules that detect anomalies, threshold breaches, or specific patterns in log data, triggering notifications or automated actions when predefined conditions are met
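The threshold-based alerting described above can be sketched as a simple sliding-window rule. This is an illustrative example, not any particular product's API; the `AlertRule` class and its fields are hypothetical:

```python
import time
from collections import deque
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AlertRule:
    """Hypothetical rule: fire when more than `threshold` matching
    log lines arrive within the last `window_seconds`."""
    pattern: str
    threshold: int
    window_seconds: float
    timestamps: deque = field(default_factory=deque)

    def observe(self, line: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        if self.pattern in line:
            self.timestamps.append(now)
        # Evict observations that fell out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        return len(self.timestamps) > self.threshold  # True => trigger alert

rule = AlertRule(pattern="ERROR", threshold=2, window_seconds=60.0)
fired = [rule.observe(line, now=t) for t, line in [
    (0.0, "ERROR db timeout"),
    (1.0, "INFO request ok"),
    (2.0, "ERROR db timeout"),
    (3.0, "ERROR db timeout"),   # third ERROR within 60s => alert
]]
print(fired)  # [False, False, False, True]
```

A real aggregator evaluates such rules continuously against the incoming stream and routes the trigger to a notification channel.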
Log Visualizer
Acts as a frontend for users to view logs
Allows users to enter queries and set up rules and alerts based on the data source
Stack
There are two common stacks for implementing this design: PLG and ELK
PLG
Promtail (P)
Acts as a log accumulator; it can be installed as a DaemonSet so that it runs on every machine in a Kubernetes cluster
It collects logs from the applications on each machine and sends them to Loki
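A minimal Promtail scrape configuration might look like the following; the hostnames, paths, and label values are placeholders:

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml    # tracks how far each file has been read

clients:
  - url: http://loki:3100/loki/api/v1/push   # where to ship the logs

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log   # glob of files to tail
```

The labels attached here become the stream identity that Loki indexes and that queries later filter on.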
Loki (L)
Distributors use consistent hashing in conjunction with a configurable replication factor to determine which instances of the ingester service should receive a given stream.
The ingester service is responsible for writing log data to long-term storage backends (DynamoDB, S3, Cassandra, etc.) on the write path and returning log data for in-memory queries on the read path.
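The distributor's placement scheme can be sketched as a generic consistent-hash ring with a replication factor. This is a toy illustration of the idea, not Loki's actual implementation:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class HashRing:
    """Each ingester owns several virtual points on the ring; a stream
    is replicated to the next `replication_factor` distinct ingesters
    clockwise from its hash."""

    def __init__(self, ingesters, replication_factor=3, vnodes=64):
        self.replication_factor = replication_factor
        self.ring = sorted(
            (_hash(f"{name}-{i}"), name)
            for name in ingesters
            for i in range(vnodes)
        )
        self.points = [p for p, _ in self.ring]

    def ingesters_for(self, stream_labels: str):
        """Return the ingesters that should receive this stream."""
        idx = bisect.bisect(self.points, _hash(stream_labels)) % len(self.ring)
        chosen = []
        while len(chosen) < self.replication_factor:
            name = self.ring[idx][1]
            if name not in chosen:
                chosen.append(name)
            idx = (idx + 1) % len(self.ring)
        return chosen

ring = HashRing(["ingester-1", "ingester-2", "ingester-3", "ingester-4"])
replicas = ring.ingesters_for('{app="payments", env="prod"}')
print(replicas)  # three distinct ingesters, stable for the same label set
```

Because the placement depends only on the hash of the label set, the same stream always lands on the same replicas, and adding or removing an ingester only moves the streams adjacent to its points on the ring.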
Grafana (G)
Acts as a log visualizer
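In Grafana, Loki data is queried with LogQL. A small example; the `job` label value is an assumption about how the logs were labelled at collection time:

```
{job="varlogs"} |= "error"              # all lines containing "error"
rate({job="varlogs"} |= "error" [5m])   # per-second error rate over 5 minutes
```

The first form feeds a log panel; the second is a metric query suitable for graphs and alert thresholds.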
ELK
Logstash (L)
Acts as a log accumulator and data pipeline. It collects logs from various sources, processes the data in a streaming fashion, and finally sends it to Elasticsearch or other destinations
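A minimal Logstash pipeline showing the collect/transform/ship flow; the port, grok pattern, and hosts are placeholders:

```
input {
  beats {
    port => 5044            # receive logs from Beats agents
  }
}

filter {
  grok {
    # parse a level and message out of each line (pattern is illustrative)
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```

The filter stage is where Logstash does the stream processing mentioned above: parsing, enriching, and restructuring each event before it is indexed.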
Elastic Search (E)
Acts as a log aggregator
Unlike Loki, it focuses on indexing and storing the full log content
It indexes the data it receives, enabling fast and efficient search. It builds inverted indexes over the data, allowing quick retrieval of relevant information based on search queries
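The inverted-index idea can be sketched in a few lines. This is a toy illustration of the concept, not Elasticsearch's actual data structures:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each token to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query token."""
    sets = [index.get(tok, set()) for tok in query.lower().split()]
    return set.intersection(*sets) if sets else set()

logs = [
    "ERROR disk full on node-3",
    "INFO user login ok",
    "ERROR timeout talking to node-3",
]
idx = build_inverted_index(logs)
print(search(idx, "error node-3"))  # {0, 2}
```

Lookups cost one set per query token plus an intersection, instead of scanning every stored log line, which is why indexed search stays fast as the corpus grows.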
Kibana (K)
Acts as a frontend for viewing and querying logs