Installing Docker on CentOS 7 “the sensible way”

For a production environment, the best idea is probably to set up a proper orchestration platform like Kubernetes.

But in our case we just wanted a test system that would let us run a couple of containers in a sensible manner.

Install Docker

The first thing to do is, of course, to install Docker. The package that ships with CentOS 7 is already outdated, so we go straight to the source and install the Community Edition from docker.com:

# yum -y install yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum -y install docker-ce
# systemctl enable docker
# systemctl start docker
# docker --version
Docker version 17.12.0-ce, build c97c6d6
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
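
Before going further, a quick sanity check that the daemon can actually pull and run images; hello-world is a tiny test image that just prints a confirmation message and exits:

# docker run --rm hello-world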

Install docker-compose

The second thing we want to install (which for some reason is not packaged alongside Docker) is docker-compose. Since it’s a Python package, we install pip first:

# yum -y install epel-release
# yum --enablerepo=epel -y install python-pip
# pip install docker-compose
# docker-compose --version
docker-compose version 1.19.0, build 9e633ef

Create a user for the container

We decided that our containers would run with different users, so we created a new user in the docker group:

# useradd -m -G docker container01
# su - container01 -c 'id; docker ps'
uid=1000(container01) gid=1000(container01) groups=1000(container01),994(docker)
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Create a docker-compose.yml file for the container

I grabbed an example compose file from the official site and saved it as /home/container01/docker-compose.yml:

version: '3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress

volumes:
  db_data:

To test the compose file, I switched to the container01 user and ran it:

# su - container01
$ docker-compose up
Creating network "container01_default" with the default driver
Creating volume "container01_db_data" with default driver
Pulling db (mysql:5.7)...
5.7: Pulling from library/mysql
[...]
db_1         | 2018-02-16T17:42:17.911892Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1         | 2018-02-16T17:42:17.915828Z 0 [Note] Event Scheduler: Loaded 0 events
db_1         | 2018-02-16T17:42:17.915984Z 0 [Note] mysqld: ready for connections.
db_1         | Version: '5.7.21'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
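
At this point WordPress should already be answering on port 8000 (the port mapped in the compose file); from another terminal, a quick check could be:

$ curl -I http://localhost:8000

A redirect to the WordPress setup page is a good sign here.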

I stopped the process and spun down the containers:

^CGracefully stopping... (press Ctrl+C again to force)
Stopping container01_wordpress_1 ... done
Stopping container01_db_1        ... done

$ docker-compose down
Removing container01_wordpress_1 ... done
Removing container01_db_1        ... done
Removing network container01_default

Using volumes will save your data in /var/lib/docker/volumes/container01_db_data/ and persist it through restarts.
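
If you want to double-check where the data actually lives, docker volume inspect shows the mountpoint on the host; it should point inside /var/lib/docker/volumes/container01_db_data/:

$ docker volume inspect container01_db_data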

Now I wanted to make sure the containers would start and stop with the server: time for some systemd!

Create a systemd service for the container

I created a new systemd service file at /etc/systemd/system/container01-wordpress.service:

[Unit]
Description=Example WordPress Containers
After=network.target docker.service

[Service]
Type=simple
User=container01
WorkingDirectory=/home/container01
ExecStart=/usr/bin/docker-compose -f /home/container01/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /home/container01/docker-compose.yml down
Restart=always

[Install]
WantedBy=multi-user.target

Then I reloaded the systemd daemon so it would pick up the new service, enabled it and started it:

# systemctl daemon-reload
# systemctl enable container01-wordpress.service
Created symlink from /etc/systemd/system/multi-user.target.wants/container01-wordpress.service to /etc/systemd/system/container01-wordpress.service.
# systemctl start container01-wordpress.service
# journalctl -f
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.18.0.3. Set the 'ServerName' directive globally to suppress this message
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | [Fri Feb 16 17:47:36.915385 2018] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.25 (Debian) PHP/7.2.2 configured -- resuming normal operations
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | [Fri Feb 16 17:47:36.915502 2018] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
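
To double-check that everything is wired together, you can ask systemd for the unit status and verify that the containers are indeed running under the container01 user:

# systemctl status container01-wordpress.service
# su - container01 -c 'docker ps --format "{{.Names}}: {{.Status}}"'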

I hope this can help some fellow admin out there :)


How to configure ElasticSearch snapshots

These are a few notes on how to back up and restore ElasticSearch indices. It worked for me, but I’m not an ES expert by any means, so if there’s a better way or something horribly wrong, let me know!

Mount the Shared Storage

The most basic form of snapshots, without using plugins for S3 or other distributed filesystems, uses a shared filesystem mounted on all nodes of the cluster. In my specific case this was an NFS share:

# yum -y install nfs-utils rpcbind
# systemctl enable rpcbind
# vim /etc/fstab
... add your mountpoint ...
# mkdir /nfs
# mount /nfs
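
The exact fstab entry depends on your storage; just as a placeholder example (the server name and export path here are made up), it could look like this:

nfs-server.example.lan:/export/es-backup  /nfs  nfs  defaults,_netdev  0 0

Remember that the same share has to be mounted at the same path on every node of the cluster.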

Set the Shared Storage as repository for ES

After the mount point is configured, you need to register it as a snapshot repository path in the ES configuration: edit /etc/elasticsearch/elasticsearch.yml and add the key:

path.repo: /nfs

You’ll need to restart each node after the change and wait for it to rejoin the cluster.
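
On a standard RPM install that means something like this on each node (adjust the service name if yours differs), plus a quick check that the cluster is healthy again before moving on:

# systemctl restart elasticsearch
# curl -sS "http://localhost:9200/_cluster/health?pretty"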

Create the snapshot repository in ES

When the configuration is in place you should be able to create your snapshot repository. I made this script, based on the official documentation:

#!/bin/bash

repo_name="backup"
repo_location="backup-weekly"

/usr/bin/curl -XPUT "http://localhost:9200/_snapshot/${repo_name}?pretty" -H 'Content-Type: application/json' -d"
{
  \"type\": \"fs\",
  \"settings\": {
    \"location\": \"${repo_location}\"
  }
}
"

This would save the snapshot in /nfs/backup-weekly/ and the repository name would be backup.
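
To make sure the repository was registered properly, and that every node can actually write to the shared path, you can read the definition back and run the verify endpoint:

# curl -sS "http://localhost:9200/_snapshot/backup?pretty"
# curl -sS -XPOST "http://localhost:9200/_snapshot/backup/_verify?pretty"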

Create your first snapshot

Now you should be able to create your first snapshot. I created another script that takes one snapshot per day, named after the day of the week, so they rotate weekly. Please note: make sure the path to date is correct, or the command will fail, snapshot_name will be empty, and the DELETE call will remove the whole repository instead of a single snapshot!

#!/bin/bash

repo_name="backup"
snapshot_name=$(LC_ALL=C /usr/bin/date +%A|tr '[:upper:]' '[:lower:]')

target="vip-es"

# delete the old snapshot (if any)
echo $(date) DELETE the old snapshot: $snapshot_name >> /var/log/es-backup.log
/usr/bin/curl -XDELETE "http://${target}:9200/_snapshot/${repo_name}/${snapshot_name}" >> /var/log/es-backup.log

echo $(date) CREATE the new snapshot: $snapshot_name >> /var/log/es-backup.log
/usr/bin/curl -XPUT "http://${target}:9200/_snapshot/${repo_name}/${snapshot_name}?wait_for_completion=true&pretty" >> /var/log/es-backup.log

The output should be something like:

ven 9 feb 2018, 01.31.01, CET DELETE the old snapshot: friday
{"error":{"root_cause":[{"type":"snapshot_missing_exception","reason":"[backup:friday] is missing"}],"type":"snapshot_missing_exception","reason":"[backup:friday] is missing"},"status":404}
ven 9 feb 2018, 01.31.01, CET CREATE the new snapshot: friday
{
  "snapshot" : {
    "snapshot" : "friday",
    "uuid" : "12345679-20212223",
    "version_id" : 6010199,
    "version" : "6.1.1",
    "indices" : [
      "test_configuration",
      ".kibana"
    ],
    "state" : "SUCCESS",
    "start_time" : "2018-02-09T00:31:01.586Z",
    "start_time_in_millis" : 1518136261586,
    "end_time" : "2018-02-09T00:31:04.362Z",
    "end_time_in_millis" : 1518136264362,
    "duration_in_millis" : 2776,
    "failures" : [ ],
    "shards" : {
      "total" : 25,
      "failed" : 0,
      "successful" : 25
    }
  }
}

In this case the DELETE failed because I didn’t have a previous snapshot for the current day.
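
Since the idea is one snapshot per day of the week, the script is meant to run from cron. Assuming it was saved as /usr/local/bin/es-snapshot.sh (the path and the schedule are just placeholders), a line like this in root’s crontab (crontab -e) would do:

30 1 * * * /usr/local/bin/es-snapshot.sh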

List the available snapshots

To operate on the snapshots I made another script to list them by name, using jq. You’ll need to install it first (on CentOS 7: yum -y --enablerepo=epel install jq).

#!/bin/bash

repo_name="backup"

/usr/bin/curl -sS "http://localhost:9200/_snapshot/${repo_name}/_all" | jq '.snapshots[] | .snapshot,.end_time'

The output is just a list of snapshot names and their timestamps:

# bash list_snapshots.sh
"wednesday"
"2018-02-07T02:36:05.564Z"
"thursday"
"2018-02-08T02:37:10.403Z"
"friday"
"2018-02-09T02:31:04.362Z"

Restore a snapshot

No backup can be considered “good” without testing a restore from it. So I made another script to test how the restore would work on a separate test environment:

#!/bin/bash

repo_name='prod'
snap_name='wednesday'

# a restore cannot overwrite an open index, so close every index first
for index_name in $(/usr/bin/curl -sS http://localhost:9200/_aliases | /usr/bin/jq -r 'keys | .[]'); do
    /usr/bin/curl -XPOST "http://localhost:9200/${index_name}/_close"
done

# then restore all the indices contained in the snapshot
/usr/bin/curl -XPOST "http://localhost:9200/_snapshot/${repo_name}/${snap_name}/_restore?pretty"

I’m pretty sure there must be a better way to do this: what I’m doing is getting all the current indices and closing them all one by one (because you can’t restore an index that is currently open), then restoring the snapshot I copied over from the other environment.

It’s pretty horrible, but it works. If you know a better way let me know and I’ll change it; if you don’t… well, it works :)
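
One last note: the restore call above returns almost immediately, so to keep an eye on the actual progress the _cat recovery endpoint should show the state of each shard being restored:

# curl -sS "http://localhost:9200/_cat/recovery?v"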
