Reducing the volume of logs you’re sending to your destinations is a great way to increase the signal in your analysis tools and mitigate the costs associated with long-term log storage. This tutorial will show you how to use the Severity Filter processor in BindPlane OP to filter out logs in your pipeline.
Below I have a simple pipeline configured that’s sending Postgres logs to both a Google and a Splunk destination. Postgres is used here only as an example; log filtering works with any log source.
To start, navigate to any agent and use Snapshots to inspect the log stream and determine what to filter.
- Navigate to the bottom of the configuration page and click one of the agents.
- On the agent details page, click the "View Recent Telemetry" button in the top right of the page.
When I do that, I’m seeing a lot of info and debug logs that I’d like to continue sending to Google, but I don’t need in Splunk.
Let’s filter them out.
- Navigate back to your configuration
- Click the processor node just before the Splunk destination.
- Choose the “Severity Filter” processor and set the minimum severity to Warn.
- You’ll see a “1” appear on the processor node, indicating that the processor was successfully added.
- After the agent receives the new configuration, you’ll see the throughput measurements update to reflect the reduction of data going to Splunk.
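Under the hood, BindPlane OP renders processors like this into an OpenTelemetry Collector configuration for the agent. As a rough sketch only (the config BindPlane actually generates may differ, and the `filter/severity` name here is illustrative), a minimum severity of Warn corresponds to a filter processor that drops log records below that severity:

```yaml
processors:
  filter/severity:
    logs:
      log_record:
        # drop any log record whose severity is below WARN
        - severity_number < SEVERITY_NUMBER_WARN
```

Because the processor is attached only to the node in front of the Splunk destination, the unfiltered stream continues to flow to Google.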
For even more control, you can use the Filter Log Record Attribute processor to filter your logs based on other attributes. Use Snapshots to inspect the log stream and determine what you’d like to filter.
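As a hedged illustration of what an attribute-based filter might translate to, the sketch below matches log records on an attribute value; the `environment` key and `"dev"` value are hypothetical, not from the original tutorial:

```yaml
processors:
  filter/attributes:
    logs:
      log_record:
        # drop log records carrying the hypothetical attribute environment=dev
        - attributes["environment"] == "dev"
```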