
Android adb “unknown backup flag” problem

Apparently, at some point the syntax for adb backup changed, and it’s not really well documented…

This is how I backed up my Android phone with adb today:

adb backup -f mybackup.bkp '-apk -obb -shared -all -system'

Whereas before, you would launch your backup with:

adb backup -f oldbackup.bkp -apk -obb -shared -all -system

Notice the lack of quotes in the old version.
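
If you later need to get the data back, the restore side doesn’t seem to have changed (at least on my device it still takes the backup file as a plain argument):

adb restore mybackup.bkp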

Schedule one-time jobs with systemd

I rarely use at, but today I shut down crond to do some maintenance and I wanted to schedule an automatic restart for later in the day, in case I forgot to restart it manually.

So, I ran:

# echo "/usr/bin/service crond start" | at now +6 hours
-bash: at: command not found

Turns out, on systems running systemd you can use systemd-run as a substitute for at to schedule one-time jobs, like this:

# systemd-run --on-active=30 /bin/touch /tmp/foo

The --on-active value defaults to seconds, but you can pass time modifiers to make it more readable:

# systemd-run --on-active="4h 30m" /bin/touch /tmp/foo
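
If you prefer an absolute time instead of a relative delay, systemd-run also accepts calendar expressions (the same format used by OnCalendar= in timer units); for example, to run the job at six in the evening:

# systemd-run --on-calendar="2018-06-07 18:00:00" /bin/touch /tmp/foo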

If the job is just starting a service, there’s a handy shortcut, the --unit parameter:

# systemd-run --on-active=6h --unit crond.service

You can check the job queue (sort of what you would have done with atq) with:

# systemctl list-timers
NEXT                          LEFT           LAST                          PASSED   UNIT                          ACTIVATES
gio 2018-06-07 16:32:01 CEST  5h 18min left  mer 2018-06-06 16:32:01 CEST  18h ago  systemd-tmpfiles-clean.timer  systemd-tmpfiles-clean.service
gio 2018-06-07 17:12:12 CEST  7h left        n/a                           n/a      crond.timer                   crond.service
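
If you change your mind, you can cancel a pending job by stopping its transient timer unit (the rough equivalent of atrm); the timer simply disappears:

# systemctl stop crond.timer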

Another poor service (atd) has been swallowed by systemd. RIP.

Barebone Kickstart setup for CentOS 7

Since I had to install a bunch of bare-metal servers and I haven’t had the time to check out Foreman yet, I created a minimal setup to be able to use a Kickstart file.

My early iterations were done in Packer, then I switched to the bare-metal servers to work out the details.

Please note: this is an automated install that WILL DELETE EVERYTHING on /dev/sda !!!

The kickstart file

This kickstart file was built by iterating over the CentOS 6 and CentOS 7 default install kickstart files (those generated by the installer), with a couple of changes based on the documentation and similar examples (many thanks to Jeff Geerling!).

Please note: this is an automated install that WILL DELETE EVERYTHING on /dev/sda !!! – Do not run it on the wrong system!

Also, this is just a “template”: make sure to change it where it makes sense, for example the partitioning scheme and the root password. For the network settings, see the script below, which customizes and serves the kickstart file over HTTP.

template.cfg

# Run the installer
install

# Use CDROM installation media
cdrom

# System language
lang en_US.UTF-8

# Keyboard layouts - Change this!
keyboard --vckeymap=it --xlayouts='it','us' --switch='grp:alt_shift_toggle'

# Enable more hardware support
unsupported_hardware

# Network information - the --device=link option activates the specific IP address on the first interface with a link up
# the ZZNAMEZZ labels will be changed later with sed, to customize the installation
network  --bootproto=static --device=link --gateway=ZZGATEWAYZZ --ip=ZZIPADDRZZ --nameserver=ZZDNSZZ --netmask=ZZNETMASKZZ --noipv6 --activate
network  --hostname=ZZHOSTNAMEZZ

# System authorization information
auth --enableshadow --passalgo=sha512

# Root password - Change this! (rootpw --iscrypted accepts a pre-hashed password instead of plaintext)
rootpw YOUR_SECURE_PASSWORD

# System timezone - Change this!
timezone Europe/Rome --isUtc --nontp

# Run the text install
text

# Skip X config
skipx

# Only use a specific disk - change the drive here!
ignoredisk --only-use=sda

# Overwrite the MBR
zerombr

# Partition clearing information
clearpart --all --initlabel --drives=sda

# System bootloader configuration - Change the drive here
bootloader --location=mbr --boot-drive=sda


# PARTITIONING
# This is our partitioning scheme, change it where required

# biosboot partition: needed when booting a GPT-labeled disk from a BIOS system, might not be required on your setup
part biosboot --fstype="biosboot" --ondisk=sda --size=1

# this is required
part /boot --fstype="xfs" --ondisk=sda --size=1024

# this will create a Volume Group "VGsystem" spanning the whole disk (except for the /boot partition)
part pv.229 --fstype="lvmpv" --ondisk=sda --size=200000 --grow
volgroup VGsystem --pesize=4096 pv.229

# Logical volumes - adjust the sizes to your needs
logvol /         --fstype="xfs"   --size=10240  --label="ROOT"  --name=LVroot  --vgname=VGsystem
logvol /usr      --fstype="xfs"   --size=20480  --name=LVusr    --vgname=VGsystem
logvol /var      --fstype="xfs"   --size=20480  --name=LVvar    --vgname=VGsystem
logvol /var/log  --fstype="xfs"   --size=20480  --name=LVvarlog --vgname=VGsystem

logvol swap      --fstype="swap"  --size=16384  --name=LVswap   --vgname=VGsystem

logvol /tmp      --fstype="xfs"   --size=10240  --name=LVtmp    --vgname=VGsystem
logvol /home     --fstype="xfs"   --size=51200  --name=LVhome   --vgname=VGsystem
logvol /opt      --fstype="xfs"   --size=20480  --name=LVopt    --vgname=VGsystem


# Do not run the Setup Agent on first boot
firstboot --disabled

# Accept the EULA
eula --agreed

# System services - we disable chronyd because we use ntpd instead
services --disabled="chronyd" --enabled="sshd"


# Reboot the system when the install is complete
reboot


# Packages

%packages --ignoremissing --excludedocs
@^minimal
@core
kexec-tools
%end

%addon com_redhat_kdump --disable

%end

# upgrade the system before rebooting

%post
yum -y upgrade
yum clean all
%end
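
Before booting anything with it, it’s a good idea to run the file through ksvalidator, from the pykickstart package; validate the customized file (the c7.cfg generated below) rather than the raw template, since the ZZ placeholders aren’t valid values:

# yum -y install pykickstart
# ksvalidator c7.cfg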

Customizing and serving the kickstart file

As mentioned earlier, I made a pretty simple script to customize the kickstart template and serve it over HTTP.

Please note: this is an automated install that WILL DELETE EVERYTHING on /dev/sda !!!

serve_kickstart.sh

#!/bin/bash

gateway="192.168.0.1"
netmask="255.255.255.0"
dns="192.168.0.11,192.168.0.12"

# this is pretty hacky, sorry
local_ipaddr=$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1)

# accepts hostname and ip address on the command line
server_hostname="$1"
server_ipaddr="$2"

if [ -z "$server_hostname" ]; then
    server_hostname="freshinstall.stardata.lan"
    echo "Using '$server_hostname' as default."
fi

if [ -z "$server_ipaddr" ]; then
    server_ipaddr="192.168.0.99"
    echo "Using '$server_ipaddr' as default IP address."
fi


# create the file to customize
/bin/cp -f template.cfg custom.cfg

# customize the kickstart file
sed -i "s/ZZGATEWAYZZ/$gateway/g" custom.cfg
sed -i "s/ZZIPADDRZZ/$server_ipaddr/g" custom.cfg
sed -i "s/ZZDNSZZ/$dns/g" custom.cfg
sed -i "s/ZZNETMASKZZ/$netmask/g" custom.cfg
sed -i "s/ZZHOSTNAMEZZ/$server_hostname/g" custom.cfg

# create the file to serve
/bin/mv -f custom.cfg c7.cfg

# print the instructions to add to the boot command line
echo "To use this kickstart, add to the boot command line: "

echo -e "\nip=${server_ipaddr} netmask=${netmask} gateway=${gateway} dns=${dns} text ks=http://${local_ipaddr}:8000/c7.cfg\n\n"

sleep 3

python -m SimpleHTTPServer
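
Note that SimpleHTTPServer is Python 2 only; on a machine where only Python 3 is available, the equivalent built-in module works the same way:

python3 -m http.server 8000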

This is what an example run looks like:

$ ./serve_kickstart.sh test01.stardata.lan 192.168.0.100
To use this kickstart, add to the boot command line:

ip=192.168.0.100 netmask=255.255.255.0 gateway=192.168.0.1 dns=192.168.0.11,192.168.0.12 text ks=http://192.168.0.200:8000/c7.cfg

Serving HTTP on 0.0.0.0 port 8000 ...

192.168.0.100 - - [20/Apr/2018 16:03:43] "GET /c7.cfg HTTP/1.1" 200 -

If you take a look at the c7.cfg that is served via http on port 8000, you’ll see that the relevant network placeholders have been swapped with the custom values from the script:

$ grep ^network c7.cfg
network  --bootproto=static --device=link --gateway=192.168.0.1 --ip=192.168.0.100 --nameserver=192.168.0.11,192.168.0.12 --netmask=255.255.255.0 --noipv6 --activate
network  --hostname=test01.stardata.lan

As usual, I hope this helps some fellow admin out there.

How to compile and install v8 and v8js on CentOS 7

A client tasked us with installing v8 and v8js on a test system to play around with server-side compilation of vue.js applications.

The v8 library available on SCL and EPEL is old, 2013-old. So we were faced with the choice to either compile v8 ourselves or switch away from CentOS to Debian or Ubuntu with some third-party repositories that we don’t know much about and don’t really trust for when the platform goes into production.

A word of caution: compiling takes a long time, so I recommend you use a fast machine (I used a VM with 8 cores and 16 GB of RAM, and it took about 20 minutes to compile).

Compiling v8

As we said, compiling takes quite a lot of time. This is the script I wrote after a few rounds of trial and error. You can (and should, really) run this as a normal, unprivileged user.

#!/bin/bash

set -x  # debug
set -e  # exit on all errors

# update and install basic packages
sudo yum -y upgrade
sudo yum -y --enablerepo=epel --enablerepo=remi-php71 install git subversion make gcc-c++ chrpath redhat-lsb-core php php-devel php-pear

mkdir -p local/src

# install the depot tools from google
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git local/depot_tools
# use an absolute path: a relative PATH entry would stop resolving after the cd below
export PATH=$PATH:$PWD/local/depot_tools

# install v8
cd local/src
fetch v8
cd v8
gclient sync
./tools/dev/v8gen.py -vv x64.release -- is_component_build=true
time ninja -C out.gn/x64.release
time ./tools/run-tests.py --gn

As you can see, there are a few requirements that aren’t a good choice for production (compilers, git, etc.), but you can uninstall them later or, better yet, build a package.

Installing v8

For the installation I relied on the v8js build instructions, copying (as root) just the required files from my build directory:

mkdir -p /opt/v8/{lib,include}

cd /your_build/path/local/src/v8
cp -v out.gn/x64.release/lib*.so out.gn/x64.release/*_blob.bin \
   out.gn/x64.release/icudtl.dat /opt/v8/lib/
cp -vR include/* /opt/v8/include/
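
Since /opt/v8/lib is not a standard library path, the dynamic linker needs to be told about it; a quick way to do that (assuming you’re fine with a system-wide entry) is a ld.so.conf drop-in:

echo '/opt/v8/lib' > /etc/ld.so.conf.d/v8.conf
ldconfig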

Now we can install v8js using pecl.

Compiling v8js from PECL

During the initial setup I installed php-pear from the Remi PHP 7.1 repository, which gives us the pecl command to install PHP modules easily.

When asked for the v8 library path, we use the /opt/v8 prefix we created in the previous step:

# pecl install v8js
downloading v8js-2.1.0.tgz ...
Starting to download v8js-2.1.0.tgz (101,553 bytes)
.......................done: 101,553 bytes
28 source files, building
running: phpize
Configuring for:
PHP Api Version:         20160303
Zend Module Api No:      20160303
Zend Extension Api No:   320160303
Please provide the installation prefix of libv8 [autodetect] : /opt/v8
building in /var/tmp/pear-build-rootDGsvNg/v8js-2.1.0
[...]
Build process completed successfully
Installing '/usr/lib64/php/modules/v8js.so'
install ok: channel://pecl.php.net/v8js-2.1.0
configuration option "php_ini" is not set to php.ini location
You should add "extension=v8js.so" to php.ini
# echo 'extension=v8js.so' > /etc/php.d/60-v8js.ini
# service php-fpm restart

We compiled v8js, created a new ini file to load the new module in PHP, and restarted php-fpm to apply the changes.

We can verify that v8js is installed by running a simple phpinfo check:

$ cat > phpinfo.php
<?php phpinfo();
$ php phpinfo.php | grep -i v8
/etc/php.d/60-v8js.ini
v8js
V8 Javascript Engine => enabled
V8 Engine Compiled Version => 6.8.0
V8 Engine Linked Version => 6.8.0 (candidate)
v8js.flags => no value => no value
v8js.icudtl_dat_path => no value => no value
v8js.use_array_access => 0 => 0
v8js.use_date => 0 => 0
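
For a quick functional test, you can run a JavaScript one-liner through the V8Js class (print() is one of the built-ins v8js exposes to scripts); it should output 2:

$ php -r '$v8 = new V8Js(); $v8->executeString("print(1+1);");'
2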

At this point you should be good to go. I hope this helps some fellow admins out there :)

Compiling v8 < 5.5

If you need an older version, the instructions for compiling and copying the files are slightly different:

compile-v8-5.2.sh

#!/bin/bash

set -x  # debug
set -e  # exit on errors

# update the system and install required packages
sudo yum -y upgrade
sudo yum -y --enablerepo=epel --enablerepo=remi-php71 install git subversion make gcc-c++ chrpath redhat-lsb-core php php-devel php-pear

mkdir -p local/src

# install depot tools
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git local/depot_tools
# use an absolute path: a relative PATH entry would stop resolving after the cd below
export PATH=$PATH:$PWD/local/depot_tools

# install v8
cd local/src
fetch v8
cd v8
git checkout 5.2
gclient sync

export GYPFLAGS="-Dv8_use_external_startup_data=0"
export GYPFLAGS="${GYPFLAGS} -Dlinux_use_bundled_gold=0"
time make x64.release native library=shared snapshot=on -j8

Please note that if you get this error:

PYTHONPATH="/path/tools/generate_shim_headers:/path/build::/path/tools/gyp/pylib:" \
GYP_GENERATORS=make \
tools/gyp/gyp --generator-output="out" gypfiles/all.gyp \
              -Igypfiles/standalone.gypi --depth=. -S.native  -Dv8_enable_backtrace=1 -Darm_fpu=default -Darm_float_abi=default
gyp: Error importing pymod_do_mainmodule (detect_v8_host_arch): No module named detect_v8_host_arch
make: *** [out/Makefile.native] Error 1

You will need to edit the Makefile as documented in this diff to fix the Python paths.

As for the installation part:

install-v8-5.2.sh

export PATH=$PATH:local/depot_tools 

# install v8
sudo mkdir -p /opt/v8/{lib,include}

cd local/src/v8

sudo cp -v out/native/lib.target/lib*.so /opt/v8/lib/
sudo cp -vR include/* /opt/v8/include/
sudo chrpath -r '$ORIGIN' /opt/v8/lib/libv8.so

# Install libv8_libplatform.a (V8 >= 5.2.51)
echo -e "create /opt/v8/lib/libv8_libplatform.a\naddlib out/native/obj.target/src/libv8_libplatform.a\nsave\nend" | sudo ar -M

As you can see, the build process is a bit different, so make sure you're following the correct one for your version.

How to install Laravel Echo Server in production

Laravel Echo Server – LES from now on – is a NodeJS and Socket.io-based server to use with Laravel Echo broadcasting.

From what we gathered, LES does not offer a decent clustered/HA setup: the internal state is kept in memory and only the channel subscriptions are shared via a database (SQLite3 or Redis). So we agreed with our client that a floating virtual IP handled by a Pacemaker+Corosync cluster would do, even if that means the state is lost in case of a cluster switchover.

Installing NodeJS and NPM

The first step was to install NodeJS and the Node Package Manager on our host:

# curl -LO 'https://rpm.nodesource.com/setup_8.x'
# bash setup_8.x
# yum -y install nodejs gcc-c++ make
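
A quick sanity check that both tools ended up in place:

# node -v
# npm -v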

Creating a user and a group for LES

We wanted LES to run with the same nginx user ID and group ID we had on the webservers, so we created the user and group explicitly:

# groupadd -g 994 nginx
# useradd -m -u 996 -g nginx nginx

Generating the SSL certificates

LES would serve clients over SSL, so we set up a self-signed certificate for testing purposes, to be swapped for a trusted cert in production.

# su - nginx

nginx$ mkdir -p /home/nginx/ssl/les/2018-selfsigned
nginx$ cd /home/nginx/ssl/les/2018-selfsigned
nginx$ openssl req -x509 -nodes -newkey rsa:4096 -keyout server.key -out server.pem -days 365
Generating a 4096 bit RSA private key
.......................................................................++
........................................................................................................................................++
writing new private key to 'server.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:IT
State or Province Name (full name) []:Parma
Locality Name (eg, city) [Default City]:Parma
Organization Name (eg, company) [Default Company Ltd]:Stardata.it
Organizational Unit Name (eg, section) []:LES
Common Name (eg, your name or your server's hostname) []:les.stardata.it
Email Address []:info@stardata.it

nginx$ cd /home/nginx/ssl/les/
nginx$ ln -s 2018-selfsigned/server.key ./
nginx$ ln -s 2018-selfsigned/server.pem ./
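
You can double-check the certificate's subject and validity window before wiring it into LES:

nginx$ openssl x509 -in server.pem -noout -subject -dates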

Installing LES and PM2

We finally get to install LES; we’ll also install pm2 to handle the startup and restart of LES.

The official website recommends Supervisor, but in our experience pm2 works a lot better for NodeJS.

# su - nginx

nginx$ npm config set prefix /home/nginx
nginx$ npm install pm2 sqlite3 laravel-echo-server
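
Note that without -g this is a local install: the executables land in /home/nginx/node_modules/.bin, so either call them by their full path (as we do later in the systemd unit) or add that directory to the nginx user's PATH:

nginx$ echo 'export PATH="$HOME/node_modules/.bin:$PATH"' >> ~/.bash_profile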

Configuring LES

LES needs a configuration file that you can generate with the init command.

# su - nginx

nginx$ /home/nginx/node_modules/laravel-echo-server/bin/server.js init
? Do you want to run this server in development mode? No
? Which port would you like to serve from? 6001
? Which database would you like to use to store presence channel members? redis
? Enter the host of your Laravel authentication server. http://localhost
? Will you be serving on http or https? https
? Enter the path to your SSL cert file. /home/nginx/ssl/les/server.pem
? Enter the path to your SSL key file. /home/nginx/ssl/les/server.key
? Do you want to generate a client ID/Key for HTTP API? Yes
? Do you want to setup cross domain access to the API? Yes
? Specify the URI that may access the API: *
? Enter the HTTP methods that are allowed for CORS: GET, POST
? Enter the HTTP headers that are allowed for CORS: Origin, Content-Type, X-Auth-Token, X-Requested-With, Accept, Authorization, X-CSRF-TOKEN, X-Socket-Id

This will create a laravel-echo-server.json file in our nginx home.

Unfortunately, even though the init command asks about the database, it doesn’t create a valid configuration for Redis: you’ll have to add the host and port parameters manually, as shown below:

{
    "authHost": "http://localhost",
    "authEndpoint": "/broadcasting/auth",
    "clients": [
        {
            "appId": "XXXX",
            "key": "YYYY"
        }
    ],
    "database": "redis",
    "databaseConfig": {
        "redis": {
      "host": "vip-redis",
      "port": "6379"
    },
        "sqlite": {
            "databasePath": "/database/laravel-echo-server.sqlite"
        }
    },
    "devMode": false,
    "host": null,
    "port": "6001",
    "protocol": "https",
    "socketio": {},
    "sslCertPath": "/home/nginx/ssl/les/server.pem",
    "sslKeyPath": "/home/nginx/ssl/les/server.key",
    "sslCertChainPath": "",
    "sslPassphrase": "",
    "apiOriginAllow": {
        "allowCors": true,
        "allowOrigin": "*",
        "allowMethods": "GET, POST",
        "allowHeaders": "Origin, Content-Type, X-Auth-Token, X-Requested-With, Accept, Authorization, X-CSRF-TOKEN, X-Socket-Id"
    }
}
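
Before handing it over to pm2, it’s worth starting LES once by hand to confirm the configuration file is picked up and the Redis connection works (stop it with Ctrl+C afterwards):

nginx$ /home/nginx/node_modules/laravel-echo-server/bin/server.js start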

Configuring PM2

Once LES is set up, we can configure pm2 by creating an echo-server.json:

{
  "name": "echo",
  "script": "/home/nginx/node_modules/laravel-echo-server/bin/server.js",
  "args": "start"
}

And we can start it and verify that it works correctly:

# su - nginx

nginx$ pm2 start echo-server.json

[PM2][WARN] Applications echo not running, starting...
[PM2] App [echo] launched (1 instances)
┌──────────┬────┬──────┬──────┬────────┬─────────┬────────┬─────┬───────────┬───────┬──────────┐
│ App name │ id │ mode │ pid  │ status │ restart │ uptime │ cpu │ mem       │ user  │ watching │
├──────────┼────┼──────┼──────┼────────┼─────────┼────────┼─────┼───────────┼───────┼──────────┤
│ echo     │ 0  │ fork │ 2193 │ online │ 0       │ 0s     │ 4%  │ 11.0 MB   │ nginx │ disabled │
└──────────┴────┴──────┴──────┴────────┴─────────┴────────┴─────┴───────────┴───────┴──────────┘
 Use `pm2 show ` to get more details about an app

nginx$ pm2 log
[...]
PM2        | [2018-03-26 11:27:19] PM2 log: Starting execution sequence in -fork mode- for app name:echo id:0
PM2        | [2018-03-26 11:27:19] PM2 log: App name:echo id:0 online
[...]
0|echo     | L A R A V E L  E C H O  S E R V E R
0|echo     |
0|echo     | version 1.3.6
0|echo     |
0|echo     | Starting server...
0|echo     |
0|echo     | ✔  Running at localhost on port 6001
0|echo     | ✔  Channels are ready.
0|echo     | ✔  Listening for http events...
0|echo     | ✔  Listening for redis events...
0|echo     |
0|echo     | Server ready!

Since everything is OK, we save the process list, so pm2 will be able to restore everything once invoked with the resurrect command.

nginx$ pm2 save
[PM2] Saving current process list...
[PM2] Successfully saved in /home/nginx/.pm2/dump.pm2

Creating a systemd service for PM2

The last step in our setup is to create a systemd service to start pm2.

We created a pm2-nginx.service file in /etc/systemd/system:

[Unit]
Description=PM2 process manager
Documentation=https://pm2.keymetrics.io/
After=network.target

[Service]
Type=forking
User=nginx
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Environment=PATH=/usr/bin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Environment=PM2_HOME=/home/nginx/.pm2
PIDFile=/home/nginx/.pm2/pids/echo-0.pid

ExecStart=/home/nginx/node_modules/pm2/bin/pm2 resurrect
ExecReload=/home/nginx/node_modules/pm2/bin/pm2 reload all
ExecStop=/home/nginx/node_modules/pm2/bin/pm2 kill

[Install]
WantedBy=multi-user.target

Then we made sure to enable the service at boot time:

# systemctl daemon-reload
# systemctl enable pm2-nginx

Then we checked to make sure everything started up after a reboot:

# service pm2-nginx status
Redirecting to /bin/systemctl status pm2-nginx.service
● pm2-nginx.service - PM2 process manager
   Loaded: loaded (/etc/systemd/system/pm2-nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since lun 2018-03-26 12:15:24 CEST; 4min 46s ago
     Docs: https://pm2.keymetrics.io/
  Process: 1128 ExecStart=/home/nginx/node_modules/pm2/bin/pm2 resurrect (code=exited, status=0/SUCCESS)
 Main PID: 1246 (laravel-echo-se)
   CGroup: /system.slice/pm2-nginx.service
           ├─1236 PM2 v2.10.1: God Daemon (/home/nginx/.pm2)
           └─1246 laravel-echo-server

mar 26 12:15:18 nodejs01 systemd[1]: Starting PM2 process manager...
mar 26 12:15:23 nodejs01 pm2[1128]: [PM2] Spawning PM2 daemon with pm2_home=/home/nginx/.pm2
mar 26 12:15:24 nodejs01 pm2[1128]: [PM2] PM2 Successfully daemonized
mar 26 12:15:24 nodejs01 pm2[1128]: [PM2] Resurrecting
mar 26 12:15:24 nodejs01 pm2[1128]: [PM2] Restoring processes located in /home/nginx/.pm2/dump.pm2
mar 26 12:15:24 nodejs01 pm2[1128]: [PM2] Process /home/nginx/node_modules/laravel-echo-server/bin/server.js restored
mar 26 12:15:24 nodejs01 systemd[1]: pm2-nginx.service: Supervising process 1246 which is not our child. We'll most likely not notice when it exits.
mar 26 12:15:24 nodejs01 systemd[1]: Started PM2 process manager.

Installing Docker on CentOS 7 “the sensible way”

For a production environment, the best idea is probably to set up a Kubernetes cluster or something like that.

But in our case we just wanted a test system that would allow us to have a couple of containers set up in a sensible manner.

Install Docker

The first thing is, of course, to install Docker. The package that comes with CentOS 7 is already obsolete, so we go to the source and install the Community Edition from docker.com:

# yum -y install yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum -y install docker-ce
# systemctl enable docker
# systemctl start docker
# docker --version
Docker version 17.12.0-ce, build c97c6d6
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Install docker-compose

The second thing we want to install – which for some reason is not packaged alongside Docker – is docker-compose. Since it’s a Python package, we installed pip first:

# yum -y install epel-release
# yum --enablerepo=epel -y install python-pip
# pip install docker-compose
# docker-compose --version
docker-compose version 1.19.0, build 9e633ef

Create a user for the container

We decided that our containers would run with different users, so we created a new user in the docker group:

# useradd -m -G docker container01
# su - container01 -c 'id; docker ps'
uid=1000(container01) gid=1000(container01) groups=1000(container01),994(docker)
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Create a docker-compose.yml file for the container

I grabbed an example compose file from the official site and saved it as /home/container01/docker-compose.yml:

version: '3'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
volumes:
    db_data:
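
Before actually running it, you can ask docker-compose to validate the file and echo back its parsed view of the configuration:

$ docker-compose config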

To test the compose file, I switched to the container01 user and ran it:

# su - container01
$ docker-compose up
Creating network "container01_default" with the default driver
Creating volume "container01_db_data" with default driver
Pulling db (mysql:5.7)...
5.7: Pulling from library/mysql
[...]
db_1         | 2018-02-16T17:42:17.911892Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1         | 2018-02-16T17:42:17.915828Z 0 [Note] Event Scheduler: Loaded 0 events
db_1         | 2018-02-16T17:42:17.915984Z 0 [Note] mysqld: ready for connections.
db_1         | Version: '5.7.21'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)

I stopped the process and spun down the containers:

^CGracefully stopping... (press Ctrl+C again to force)
Stopping container01_wordpress_1 ... done
Stopping container01_db_1        ... done

$ docker-compose down
Removing container01_wordpress_1 ... done
Removing container01_db_1        ... done
Removing network container01_default

Using volumes will save your data in /var/lib/docker/volumes/container01_db_data/ and persist it through restarts.
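
You can confirm where the volume lives (and that it survived the docker-compose down) with:

$ docker volume inspect container01_db_data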

Now I wanted to make sure the containers would start and stop with the server: time for some systemd!

Create a systemd service for the container

I created a new systemd service file in /etc/systemd/system/container01-wordpress.service:

[Unit]
Description=Example WordPress Containers
After=network.target docker.service

[Service]
Type=simple
User=container01
WorkingDirectory=/home/container01
ExecStart=/usr/bin/docker-compose -f /home/container01/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /home/container01/docker-compose.yml down
Restart=always

[Install]
WantedBy=multi-user.target

Then I reloaded the systemd daemon to make sure it would recognize the new service, enabled it, and started it:

# systemctl daemon-reload
# systemctl enable container01-wordpress.service
Created symlink from /etc/systemd/system/multi-user.target.wants/container01-wordpress.service to /etc/systemd/system/container01-wordpress.service.
# systemctl start container01-wordpress.service
# journalctl -f
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.18.0.3. Set the 'ServerName' directive globally to suppress this message
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | [Fri Feb 16 17:47:36.915385 2018] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.25 (Debian) PHP/7.2.2 configured -- resuming normal operations
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | [Fri Feb 16 17:47:36.915502 2018] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

I hope this can help some fellow admin out there :)