ELK & Nagios Part 2: How to get your logs from Redis to Elasticsearch

In the last part we collected the application logs and transferred them to Redis, our high-performance message queue.

In this part we want to get the logs/messages out of Redis, filter them, split them into fields, and finally transfer them to our index database, Elasticsearch.

We will use Logstash to normalize and enrich our data and to pass it on to Elasticsearch.
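As a sketch of those two steps (the grok pattern and index name below are illustrative assumptions, not taken from the original post), the filter and output sections of such a Logstash pipeline could look like:

```
filter {
  grok {
    # Assumed pattern for illustration; a real WebSphere SystemOut.log line
    # needs a pattern matched to its exact format.
    match => { "message" => "\[%{DATA:timestamp}\] %{WORD:thread_id} %{WORD:component}\s+%{WORD:severity} %{GREEDYDATA:log_message}" }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]    # assumed address of the Elasticsearch container
    index => "websphere-%{+YYYY.MM.dd}" # assumed daily index name
  }
}
```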

To get the data out of Redis, we have to define an input plugin. Fortunately, Logstash ships with an input plugin for Redis; we just have to point it at the Redis server/container and the Redis port.
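A minimal input section could look like this (the hostname and list key are placeholders for your environment):

```
input {
  redis {
    host => "redis"       # assumed hostname of the Redis server/container
    port => 6379          # default Redis port
    data_type => "list"   # consume messages pushed onto a Redis list
    key => "filebeat"     # assumed name of the list our shipper writes to
  }
}
```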

ELK & Nagios Part 1: How to get your Application Logs to Redis


The easiest way to collect your application logs (WebSphere, TDI, DB2…) from your servers and send them to Logstash for processing is to use Filebeat as a shipper.

Filebeat gives you the possibility to output your logs directly to Logstash, but I prefer to send them to a message broker first. The reason is that the message broker can store all messages even if Logstash isn't available, and it therefore acts as a perfect buffer.
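A minimal Filebeat configuration for this setup might look like the following (the log path, hostname, and Redis key are assumptions for illustration; older Filebeat versions use `filebeat.prospectors` instead of `filebeat.inputs`):

```yaml
filebeat.inputs:
  - type: log
    paths:
      # assumed example path; point this at your WebSphere/TDI/DB2 log files
      - /opt/IBM/WebSphere/AppServer/profiles/*/logs/*/SystemOut.log

output.redis:
  hosts: ["redis:6379"]   # assumed address of the Redis server/container
  key: "filebeat"         # Redis list the messages are pushed onto
  data_type: "list"
```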

Monitor WebSphere with ELK and Nagios

I have worked a lot with the ELK stack for log management and Nagios for system monitoring over the last few months and like both solutions a lot.

They are very flexible and customizable to match almost every customer environment. So the natural next step was to combine both solutions to build a very powerful system monitoring and management solution for WebSphere servers (IBM Connections/IBM Sametime).

