Centralize your logs on the cheap on obsolete systems

This is a tale about what you should never do, but are often forced to do in this day and age.

I’ll explain the technical solution and then tell the story for some context.

On a recent-ish system, install multitail:

# yum -y --enablerepo=epel install multitail

multitail lets you follow multiple tails, or even the output of multiple commands, in a single window (or in multiple windows handled by ncurses), and it can also save the output of those commands to another file. In my case the command line looked like:

multitail --mergeall -D -a all.log \
  -l 'ssh web01 "tail -qF /var/log/apache2/*.log /var/log/apache2/*/*.log"' \
  -l 'ssh web02 "tail -qF /var/log/apache2/*.log /var/log/apache2/*/*.log"'

This creates a file all.log containing the merged tail -qF output of the Apache logs from web01 and web02.

So, what’s the backstory? Why would I do something like this? Centralized logs are nothing new, right? We have Solutions[tm] for that.

Backstory

Imagine you have a time constraint of “one hour”.

Then imagine you have systems so obsolete that the signing key (valid for 10 years) for their repositories expired.

If I had more time I would have checked whether the installed rsyslog was recent enough to have the text file input module (imfile), and I would have tried to make rsyslog push the logs to a more recent system running logstash/ELK.
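
For the record, that approach would have looked something like the sketch below: legacy-syntax rsyslog directives (the kind old versions understand), with an example log file and a placeholder central host called loghost, neither of which comes from my actual setup. One input block per file to follow:

# /etc/rsyslog.d/apache-forward.conf
$ModLoad imfile

# follow one Apache log file; repeat this block for each file
$InputFileName /var/log/apache2/access.log
$InputFileTag apache-access:
$InputFileStateFile stat-apache-access
$InputFileFacility local6
$InputRunFileMonitor

# ship everything on the local6 facility to the central host over UDP
local6.* @loghost:514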

Bonus code

I made a little script to generate the multitail command line; here, have fun:

#!/bin/bash
# Hosts to pull logs from, and the remote globs of log files to follow.
HOST_LIST="web01 web02"
LOG_LIST="/var/log/apache2/*.log /var/log/apache2/*/*.log"

# Merge all sources, don't display them (-D), append everything to all.log.
CMD_MULTITAIL="multitail --mergeall -D -a all.log"

# Add one ssh+tail source per host.
for target in $HOST_LIST ; do
  CMD_MULTITAIL="$CMD_MULTITAIL -l 'ssh $target \"tail -qF $LOG_LIST\"'"
done

# Quoted, so the shell doesn't expand the globs locally.
echo "$CMD_MULTITAIL"
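
The script only prints the command, so you can eyeball it first. To run it directly (assuming you saved it as gen-multitail.sh, a name I just made up), eval takes care of the embedded quoting:

eval "$(./gen-multitail.sh)"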

I seriously hope nobody (else) will ever need this, but if you do, I've got you covered.

How to quickly install and configure Splunk Enterprise

As you may have noticed, I’m not a huge fan of proprietary, closed source software. And of course I ended up having to install Splunk for a client. So here are a few notes on what I did to get it working.

I started by following a guide I found, with a few integrations here and there.

Install the Splunk Server

First things first: you need to download the server, and you have to register to get it (proprietary software).

I got the 64-bit RPM for my CentOS 7 server and installed it with:

yum install splunk-*-linux-2.6-x86_64.rpm
/opt/splunk/bin/splunk --answer-yes --no-prompt --accept-license enable boot-start
/opt/splunk/bin/splunk --answer-yes --no-prompt --accept-license start

This will automatically accept the license and set up the Splunk Server to start at boot time.

If everything worked correctly, you should be able to connect to your Splunk Server on:

url: http://your-server-name-or-ip:8000
user: admin
pass: changeme

If it doesn’t work, check if you have a firewall on your server machine and open port tcp/8000 if needed.
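
On CentOS 7 that usually means firewalld, so something like:

firewall-cmd --permanent --add-port=8000/tcp
firewall-cmd --reload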

For more information on this step, I’ll refer you to the Fine Manual.

Configure the Splunk Server

The logical next step is to configure the Splunk Server to listen for incoming logs.

Assuming you haven’t (yet) changed your Splunk Server user and password, you’ll need to run:

/opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme
/opt/splunk/bin/splunk enable deploy-server -auth admin:changeme
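
To confirm the receiving port is actually open on the server, a quick check (ss is the stock tool on CentOS 7):

ss -ltn | grep 9997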

For more information on this step, check the Splunk documentation.

Install the Splunk Universal Forwarder on clients

Now that the server side is configured, we need to set up a client to send some logs to it. Again, head over to the download page and grab the package you need.

For large scale deployments you might want to read about user-seed.conf, which lets you pre-seed the admin user and password of your installation (there’s a sketch further down). For this quick tutorial, we’ll skip that and run these commands directly:

yum -y install splunkforwarder-*-linux-2.6-x86_64.rpm
/opt/splunkforwarder/bin/splunk --answer-yes --no-prompt --accept-license enable boot-start
/opt/splunkforwarder/bin/splunk --answer-yes --no-prompt --accept-license start

Again, this will automatically accept the license and enable the forwarder at boot time.
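
As for the pre-seeding mentioned above, a minimal user-seed.conf sketch looks like this; it has to be in place before the first start, and you should verify the exact stanza against the docs for your version:

# /opt/splunkforwarder/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = changeme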

For more information about this step, see the Splunk documentation.

Configure the Universal Forwarder

Once the forwarder is installed, you’ll need to configure it to talk to your server.

Please note that the user and password I’m using here are those of the local Splunk forwarder, not of the Splunk Server.

/opt/splunkforwarder/bin/splunk add forward-server splunk-server:9997 -auth admin:changeme
/opt/splunkforwarder/bin/splunk set deploy-poll splunk-server:8089 -auth admin:changeme
/opt/splunkforwarder/bin/splunk enable deploy-client -auth admin:changeme
/opt/splunkforwarder/bin/splunk add monitor /var/log/nginx/error.log
/opt/splunkforwarder/bin/splunk restart

In my case I added /var/log/nginx/error.log to the files that will be monitored and sent to the server.
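
To double-check that the forwarder picked everything up, you can list its forward servers and monitored files (again with the local credentials):

/opt/splunkforwarder/bin/splunk list forward-server -auth admin:changeme
/opt/splunkforwarder/bin/splunk list monitor -auth admin:changeme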

For more information about this step, check out the Splunk documentation.

Accessing your logs on the Splunk Server

At this point you should be able to log in to your Splunk Server web interface, head to the “Search & Reporting” app, and search for your data. For example, I used a simple query:

source="/var/log/nginx/error.log"

to make sure the data from my log files was ending up in Splunk.

Logging for HAProxy on CentOS 5.x

HAProxy does all of its logging through syslog, so it requires a syslogd/rsyslogd listening for incoming connections.

Basic configuration looks like this:

# /etc/haproxy/haproxy.cfg

global
  log 127.0.0.1 local5 debug

# /etc/syslog.conf

[...]
local5.* /var/log/haproxy.log

# /etc/sysconfig/syslog

[...]
SYSLOGD_OPTIONS="-m 0 -r"
[...]

It is critical that syslog is reachable over the network (UDP port 514; you can use 127.0.0.1 as the IP address). To check whether syslog is listening, use:

# netstat -lnp | grep syslog

udp 0 0 0.0.0.0:514 0.0.0.0:* 24001/syslogd

You can bind syslog to localhost only, or put a firewall in front of it, to avoid exposing the daemon to the network.
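
As a quick end-to-end test, you can emit a message on the same facility HAProxy uses and check that it lands in the right file:

# logger -p local5.info "haproxy logging test"
# tail -1 /var/log/haproxy.log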
