Graylog on Docker

When you’re running many services dotted here and there and managing many systems and various pieces of network infrastructure, you need a way to consolidate and examine the data from logs. This is what we call SIEM (Security Information & Event Management). There are many solutions for storing, searching and generally turning logs into meaningful information for situational awareness. Most will have heard of the ELK stack (Elasticsearch, Logstash and Kibana), but there are several others, such as AlienVault USM, LogRhythm, Splunk, Nagios, Zabbix and Graylog. These solutions are often only one part of a two-part equation: you may still need to be running other services on your network infrastructure to get the best from these platforms.

This software only consolidates and correlates logs; it doesn’t do any of the logging or intrusion detection itself. For logging network intrusion attempts you might use an IDS (Intrusion Detection System) such as Snort, Suricata, OSSEC, Bro IDS or OpenWIPS-ng, and then use syslog to send the logs to the SIEM software. I will not cover how to set up that part of the task; it is assumed you can manage to set up log forwarding on your chosen infrastructure, and there are detailed instructions on how to do this around the internet.
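As a minimal illustration only, assuming your hosts run rsyslog, a one-line drop-in is enough to forward everything to the SIEM (the file name and hostname below are placeholders; a single `@` forwards over UDP, `@@` over TCP):

```
# /etc/rsyslog.d/90-forward.conf  (hypothetical file name)
# Forward all facilities and severities to the SIEM host over UDP port 514
*.* @graylog.example.com:514
# ...or over TCP instead:
# *.* @@graylog.example.com:514
```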

We will be using docker-compose in this example setup on a Fedora Atomic server. It should be mentioned that Elasticsearch’s minimum requirement is 16GB of RAM, and for larger setups it’s recommended to go with 32GB or 64GB of RAM dedicated to Elasticsearch; unfortunately it will not really work well at all with less than 16GB. Besides that, it’s rather simple to get going: we only need to create one file and one directory. Once we are logged in via shell, we create our directory structure and change into our working directory:

cd /srv && mkdir -p logs/graylog && cd logs/graylog
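One gotcha with the bind mount used in the compose file below: the graylog/graylog image runs as an unprivileged user (UID 1100 in the official images, to my knowledge), so the data directory on the host needs to be writable by that UID or the container will fail to start. A sketch, assuming you are root on the host:

```shell
# Create the data directory Graylog will bind-mount and hand it to the
# container's unprivileged user (UID/GID 1100 is an assumption about the image)
mkdir -p /srv/logs/graylog/data
chown -R 1100:1100 /srv/logs/graylog/data
```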

Then we want to open a new docker compose config file:

nano docker-compose.yml

version: '2'
services:
  # MongoDB stores Graylog's own configuration
  mongodb:
    image: mongo:3
    restart: always
    volumes:
      - mongo_data:/data/db
  # Elasticsearch stores the log messages themselves.
  # Graylog 2.4 requires Elasticsearch 5.x; the exact tag here is one
  # known-compatible choice, not the only one
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    restart: always
    environment:
      - http.host=0.0.0.0
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms8g -Xmx8g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 16g
    volumes:
      - es_data:/usr/share/elasticsearch/data
  graylog:
    image: graylog/graylog:2.4
    restart: always
    environment:
      - GRAYLOG_PASSWORD_SECRET=%random-secret-key%
      - GRAYLOG_ROOT_PASSWORD_SHA2=%SHA2 password hash%
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      - 9000:9000
      - 514:514
      - 514:514/udp
      - 12201:12201
      - 12201:12201/udp
    volumes:
      # :z relabels the bind mount for SELinux on Fedora Atomic
      - /srv/logs/graylog/data:/usr/share/graylog/data:z
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local

I’ll explain what’s going on here. We have a couple of ports forwarded to the host for logging: 514 is the default syslog port, and 12201 is the GELF (Graylog Extended Log Format) port. Port 9000 is the port we will access Graylog on; if you have set up Graylog’s endpoint URI you may need to forward the subdomain on to Graylog, which I will not cover here. You also need to set a random password secret for Graylog to use.
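The password secret just needs to be a long random string. The Graylog docs suggest pwgen, but as one equivalent option you can use OpenSSL:

```shell
# Generate a 96-character random string to paste in as GRAYLOG_PASSWORD_SECRET
openssl rand -hex 48
```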

You can hash your password with SHA-256 like so:

echo -n %yourpassword% | shasum -a 256

You can then start Graylog with docker-compose:

docker-compose up -d
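Graylog can take a minute or two to come up. As a quick sanity check (assuming the default port mapping above and that you are on the Docker host), you can watch the container logs and poll the API until it answers:

```shell
# Tail the Graylog container logs until the web interface reports it is started
docker-compose logs -f graylog
# In another shell, the API should eventually return HTTP 200
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9000/api/
```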

Once Graylog is up and running you will want to set up the inputs, extractors, pipelines and other bits and bats such as GeoIP, threat intelligence, etc. Because there is such a wide array of services you can parse logs from, I am not going to go into detail about adding extractors and such; you can find detailed explanations on the Graylog site, as well as an extensive plugin library and various content packs on the Graylog Marketplace.
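Inputs can be created in the web UI under System → Inputs, or scripted against the REST API. A sketch of the latter, with host and credentials as placeholders (the input type string and the mandatory X-Requested-By header are per the Graylog 2.x API; note that binding a port below 1024 inside the container may fail for the unprivileged Graylog user, in which case use e.g. 1514 instead):

```shell
# Launch a global syslog UDP input on port 514 via the Graylog REST API
curl -u admin:yourpassword \
  -H 'Content-Type: application/json' \
  -H 'X-Requested-By: cli' \
  -X POST http://127.0.0.1:9000/api/system/inputs \
  -d '{
        "title": "syslog-udp",
        "type": "org.graylog2.inputs.syslog.udp.SyslogUDPInput",
        "global": true,
        "configuration": {"bind_address": "0.0.0.0", "port": 514}
      }'
```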
