linux · tips

How to fix Clementine (music player) playing music, but no audio

Just a quick one, in case someone out there runs into the same problem: my Clementine player was playing music, but I could not hear anything.
Turns out my very minimal Debian testing install didn’t include gstreamer1.0-pulseaudio. Installing that package fixed it for me.
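In case it helps, on Debian and derivatives that boils down to:

# apt-get install gstreamer1.0-pulseaudio

You may need to restart Clementine afterwards for the change to take effect.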

linux · tech · tips

Schedule one-time jobs with systemd

I rarely use at, but today I shut down crond to do some maintenance and I wanted to schedule an automatic restart for later in the day, in case I forgot to restart it manually.

So, I ran:

# echo "/usr/bin/service crond start" | at now +6 hours
-bash: at: command not found

Turns out, on systems running systemd you can use systemd-run as a substitute for at to schedule one-time jobs, like this:

# systemd-run --on-active=30 /bin/touch /tmp/foo

By default the --on-active value is interpreted as seconds, but you can pass time modifiers to make it more readable:

# systemd-run --on-active="4h 30m" /bin/touch /tmp/foo

If you need to restart a service, there’s a handy shortcut, the --unit parameter:

# systemd-run --on-active=6h --unit crond.service

You can check the job queue (sorta what you would have done with atq) with:

# systemctl list-timers
NEXT LEFT LAST PASSED UNIT ACTIVATES
gio 2018-06-07 16:32:01 CEST 5h 18min left mer 2018-06-06 16:32:01 CEST 18h ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
gio 2018-06-07 17:12:12 CEST 7h left n/a n/a crond.timer crond.service
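If you change your mind, stopping the transient timer unit should be enough to cancel the pending job (sorta like atrm); for the example above that would be:

# systemctl stop crond.timer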

Another poor service (atd) has been swallowed by systemd. RIP.

linux · tech · tips

Barebone Kickstart setup for CentOS 7

Since I had to install a bunch of bare-metal servers and I haven't had the time to check out Foreman yet, I created a minimal setup to be able to use a Kickstart file.

My early iterations were done in Packer, then I switched to the bare-metal servers to work out the details.

Please note: this is an automated install that WILL DELETE EVERYTHING on /dev/sda !!!

The kickstart file

This kickstart file was put together by iterating on the default CentOS 6 and CentOS 7 install kickstart files (the ones generated by the installer), with a couple of changes based on the documentation and similar examples (many thanks to Jeff Geerling!).

Please note: this is an automated install that WILL DELETE EVERYTHING on /dev/sda !!! – Do not run it on the wrong system!

Also, this is just a “template”: make sure to change it where it makes sense, for example the partitioning scheme and the root password. For the network settings, see the script below, which customizes and serves the kickstart file over HTTP.

template.cfg

# Run the installer
install

# Use CDROM installation media
cdrom

# System language
lang en_US.UTF-8

# Keyboard layouts - Change this!
keyboard --vckeymap=it --xlayouts='it','us' --switch='grp:alt_shift_toggle'

# Enable more hardware support
unsupported_hardware

# Network information - the --device=link option activates the specific IP address on the first interface with a link up
# the ZZNAMEZZ labels will be changed later with sed, to customize the installation
network  --bootproto=static --device=link --gateway=ZZGATEWAYZZ --ip=ZZIPADDRZZ --nameserver=ZZDNSZZ --netmask=ZZNETMASKZZ --noipv6 --activate
network  --hostname=ZZHOSTNAMEZZ

# System authorization information
auth --enableshadow --passalgo=sha512

# Root password - Change this!
rootpw YOUR_SECURE_PASSWORD

# System timezone - Change this!
timezone Europe/Rome --isUtc --nontp

# Run the text install
text

# Skip X config
skipx

# Only use a specific disk - change the drive here!
ignoredisk --only-use=sda

# Overwrite the MBR
zerombr

# Partition clearing information
clearpart --all --initlabel --drives=sda

# System bootloader configuration - Change the drive here
bootloader --location=mbr --boot-drive=sda


# PARTITIONING
# This is our partitioning scheme, change it where required

# the biosboot partition might not be required (it is needed when booting from a GPT-labeled disk in BIOS mode)
part biosboot --fstype="biosboot" --ondisk=sda --size=1

# this is required
part /boot --fstype="xfs" --ondisk=sda --size=1024

# this will create a Volume Group "VGsystem" spanning the whole disk (except for the /boot partition)
part pv.229 --fstype="lvmpv" --ondisk=sda --size=200000 --grow
volgroup VGsystem --pesize=4096 pv.229

# logical volumes - adjust sizes and mount points as needed
logvol /         --fstype="xfs"   --size=10240  --label="ROOT"  --name=LVroot  --vgname=VGsystem
logvol /usr      --fstype="xfs"   --size=20480  --name=LVusr    --vgname=VGsystem
logvol /var      --fstype="xfs"   --size=20480  --name=LVvar    --vgname=VGsystem
logvol /var/log  --fstype="xfs"   --size=20480  --name=LVvarlog --vgname=VGsystem

logvol swap      --fstype="swap"  --size=16384  --name=LVswap   --vgname=VGsystem

logvol /tmp      --fstype="xfs"   --size=10240  --name=LVtmp    --vgname=VGsystem
logvol /home     --fstype="xfs"   --size=51200  --name=LVhome   --vgname=VGsystem
logvol /opt      --fstype="xfs"   --size=20480  --name=LVopt    --vgname=VGsystem


# Do not run the Setup Agent on first boot
firstboot --disabled

# Accept the EULA
eula --agreed

# System services - we disable chronyd because we use NTP
services --disabled="chronyd" --enabled="sshd"


# Reboot the system when the install is complete
reboot


# Packages

%packages --ignoremissing --excludedocs
@^minimal
@core
kexec-tools
%end

%addon com_redhat_kdump --disable

%end

# upgrade the system before rebooting

%post
yum -y upgrade
yum clean all
%end

Customizing and serving the kickstart file

As mentioned earlier, I made a pretty simple script to customize the kickstart template and serve it over HTTP.

Please note: this is an automated install that WILL DELETE EVERYTHING on /dev/sda !!!

serve_kickstart.sh

#!/bin/bash

gateway="192.168.0.1"
netmask="255.255.255.0"
dns="192.168.0.11,192.168.0.12"

# get the local IP address from eth0 - this is pretty hacky, sorry
local_ipaddr=$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1)

# accepts hostname and ip address on the command line
server_hostname="$1"
server_ipaddr="$2"

if [ -z "$server_hostname" ]; then
    server_hostname="freshinstall.stardata.lan"
    echo "Using '$server_hostname' as default."
fi

if [ -z "$server_ipaddr" ]; then
    server_ipaddr="192.168.0.99"
    echo "Using '$server_ipaddr' as default IP address."
fi


# create the file to customize
/bin/cp -f template.cfg custom.cfg

# customize the kickstart file
sed -i "s/ZZGATEWAYZZ/$gateway/g" custom.cfg
sed -i "s/ZZIPADDRZZ/$server_ipaddr/g" custom.cfg
sed -i "s/ZZDNSZZ/$dns/g" custom.cfg
sed -i "s/ZZNETMASKZZ/$netmask/g" custom.cfg
sed -i "s/ZZHOSTNAMEZZ/$server_hostname/g" custom.cfg

# create the file to serve
/bin/mv -f custom.cfg c7.cfg

# print the instructions to add to the boot command line
echo "To use this kickstart, add to the boot command line: "

echo -e "\nip=${server_ipaddr} netmask=${netmask} gateway=${gateway} dns=${dns} text ks=http://${local_ipaddr}:8000/c7.cfg\n\n"

sleep 3

python -m SimpleHTTPServer
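Note that SimpleHTTPServer is Python 2 only; if your machine only has Python 3, the equivalent (still listening on port 8000) should be:

python3 -m http.server 8000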

This is what an example run looks like:

$ ./serve_kickstart.sh test01.stardata.lan 192.168.0.100
To use this kickstart, add to the boot command line:

ip=192.168.0.100 netmask=255.255.255.0 gateway=192.168.0.1 dns=192.168.0.11,192.168.0.12 text ks=http://192.168.0.200:8000/c7.cfg

Serving HTTP on 0.0.0.0 port 8000 ...

192.168.0.100 - - [20/Apr/2018 16:03:43] "GET /c7.cfg HTTP/1.1" 200 -

If you take a look at the c7.cfg file that is served over HTTP on port 8000, you'll see that the network placeholders have been replaced with the custom values from the script:

$ grep ^network c7.cfg
network  --bootproto=static --device=link --gateway=192.168.0.1 --ip=192.168.0.100 --nameserver=192.168.0.11,192.168.0.12 --netmask=255.255.255.0 --noipv6 --activate
network  --hostname=test01.stardata.lan

As usual, I hope this helps some fellow admin out there.

linux · tech · tips

Installing Docker on CentOS 7 “the sensible way”

For a production environment, the best idea is probably to set up a Kubernetes cluster or something like that.

But in our case we just wanted a test system that would allow us to run a couple of containers set up in a sensible manner.

Install Docker

The first thing to do is, of course, to install Docker. The package that ships with CentOS 7 is already obsolete, so we go to the source and install the Community Edition from docker.com:

# yum -y install yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum -y install docker-ce
# systemctl enable docker
# systemctl start docker
# docker --version
Docker version 17.12.0-ce, build c97c6d6
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Install docker-compose

The second thing we want to install – which for some reason is not packaged alongside Docker – is docker-compose. Since it's a Python package, we install pip first:

# yum -y install epel-release
# yum --enablerepo=epel -y install python-pip
# pip install docker-compose
# docker-compose --version
docker-compose version 1.19.0, build 9e633ef

Create a user for the container

We decided that our containers would run under different users, so we created a new user in the docker group:

# useradd -m -G docker container01
# su - container01 -c 'id; docker ps'
uid=1000(container01) gid=1000(container01) groups=1000(container01),994(docker)
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Create a docker-compose.yml file for the container

I grabbed an example compose file from the official site and saved it as /home/container01/docker-compose.yml:

version: '3'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
volumes:
    db_data:

To test the compose file, I switched to the container01 user and ran it:

# su - container01
$ docker-compose up
Creating network "container01_default" with the default driver
Creating volume "container01_db_data" with default driver
Pulling db (mysql:5.7)...
5.7: Pulling from library/mysql
[...]
db_1         | 2018-02-16T17:42:17.911892Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1         | 2018-02-16T17:42:17.915828Z 0 [Note] Event Scheduler: Loaded 0 events
db_1         | 2018-02-16T17:42:17.915984Z 0 [Note] mysqld: ready for connections.
db_1         | Version: '5.7.21'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)

I stopped the process and spun down the containers:

^CGracefully stopping... (press Ctrl+C again to force)
Stopping container01_wordpress_1 ... done
Stopping container01_db_1        ... done

$ docker-compose down
Removing container01_wordpress_1 ... done
Removing container01_db_1        ... done
Removing network container01_default

Using volumes will save your data in /var/lib/docker/volumes/container01_db_data/ and persist it through restarts.
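If you want to double-check where the volume data actually lives on the host, docker volume inspect should show the mountpoint:

$ docker volume inspect container01_db_data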

Now I wanted to make sure the containers would start and stop with the server: time for some systemd!

Create a systemd service for the container

I created a new systemd service file in /etc/systemd/system/container01-wordpress.service:

[Unit]
Description=Example WordPress Containers
After=network.target docker.service
[Service]
Type=simple
User=container01
WorkingDirectory=/home/container01
ExecStart=/usr/bin/docker-compose -f /home/container01/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /home/container01/docker-compose.yml down
Restart=always
[Install]
WantedBy=multi-user.target

Then I reloaded the systemd daemon to make sure it would recognize the new service, enabled it, and started it:

# systemctl daemon-reload
# systemctl enable container01-wordpress.service
Created symlink from /etc/systemd/system/multi-user.target.wants/container01-wordpress.service to /etc/systemd/system/container01-wordpress.service.
# systemctl start container01-wordpress.service
# journalctl -f
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.18.0.3. Set the 'ServerName' directive globally to suppress this message
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | [Fri Feb 16 17:47:36.915385 2018] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.25 (Debian) PHP/7.2.2 configured -- resuming normal operations
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | [Fri Feb 16 17:47:36.915502 2018] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

I hope this can help some fellow admin out there :)

linux · tech · tips

How to configure ElasticSearch snapshots

These are a few notes on how to back up and restore ElasticSearch indices. It worked for me, but I'm not an ES expert by any means, so if there's a better way or something horribly wrong, let me know!

Mount the Shared Storage

The most basic form of snapshots, without using plugins for S3 or other distributed filesystems, uses a shared filesystem mounted on all nodes of the cluster. In my specific case this was an NFS share:

# yum -y install nfs-utils rpcbind
# systemctl enable rpcbind
# vim /etc/fstab
... add your mountpoint ...
# mkdir /nfs
# mount /nfs

Set the Shared Storage as repository for ES

After the mount point is configured, you need to set it as a repository path in the ES configuration: edit /etc/elasticsearch/elasticsearch.yml and add the key:

path.repo: /nfs

You’ll need to restart each node after the change and wait for it to rejoin the cluster.
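On CentOS 7 that boils down to something like this on each node, waiting for the cluster to go back to green before moving to the next one:

# systemctl restart elasticsearch
# curl -sS 'http://localhost:9200/_cat/health?v'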

Create the snapshot repository in ES

Once the configuration is in place, you should be able to create your snapshot repository. I made this script based on the official documentation:

#!/bin/bash

repo_name="backup"
repo_location="backup-weekly"

/usr/bin/curl -XPUT "http://localhost:9200/_snapshot/${repo_name}?pretty" -H 'Content-Type: application/json' -d"
{
  \"type\": \"fs\",
  \"settings\": {
    \"location\": \"${repo_location}\"
  }
}
"

This would save the snapshot in /nfs/backup-weekly/ and the repository name would be backup.
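You can verify that the repository was registered correctly with something like:

# curl -sS 'http://localhost:9200/_snapshot/backup?pretty'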

Create your first snapshot

Now you should be able to create your first snapshot. I created another script that takes one snapshot per day, named after the day of the week (so each one gets overwritten a week later). Please note: make sure the path to date is correct, or the command will fail, snapshot_name will be empty, and the DELETE call will remove the whole repository instead of a single snapshot!

#!/bin/bash

repo_name="backup"
snapshot_name=$(LC_ALL=C /usr/bin/date +%A|tr '[:upper:]' '[:lower:]')

target="vip-es"

# delete the old snapshot (if any)
echo $(date) DELETE the old snapshot: $snapshot_name >> /var/log/es-backup.log
/usr/bin/curl -XDELETE "http://${target}:9200/_snapshot/${repo_name}/${snapshot_name}" >> /var/log/es-backup.log

echo $(date) CREATE the new snapshot: $snapshot_name >> /var/log/es-backup.log
/usr/bin/curl -XPUT "http://${target}:9200/_snapshot/${repo_name}/${snapshot_name}?wait_for_completion=true&pretty" >> /var/log/es-backup.log

The output should be something like:

ven 9 feb 2018, 01.31.01, CET DELETE the old snapshot: friday
{"error":{"root_cause":[{"type":"snapshot_missing_exception","reason":"[backup:friday] is missing"}],"type":"snapshot_missing_exception","reason":"[backup:friday] is missing"},"status":404}
ven 9 feb 2018, 01.31.01, CET CREATE the new snapshot: friday
{
  "snapshot" : {
    "snapshot" : "friday",
    "uuid" : "12345679-20212223",
    "version_id" : 6010199,
    "version" : "6.1.1",
    "indices" : [
      "test_configuration",
      ".kibana"
    ],
    "state" : "SUCCESS",
    "start_time" : "2018-02-09T00:31:01.586Z",
    "start_time_in_millis" : 1518136261586,
    "end_time" : "2018-02-09T00:31:04.362Z",
    "end_time_in_millis" : 1518136264362,
    "duration_in_millis" : 2776,
    "failures" : [ ],
    "shards" : {
      "total" : 25,
      "failed" : 0,
      "successful" : 25
    }
  }
}

In this case the DELETE failed because I didn’t have a previous snapshot for the current day.
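If you want the snapshot to be taken automatically, a root crontab entry along these lines should do (the path /usr/local/bin/es-snapshot.sh is just an example, adjust it to wherever you saved the script):

30 1 * * * /usr/local/bin/es-snapshot.sh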

List the available snapshots

To operate on the snapshots, I made another script to list them by name, using jq. You'll need to install it first (on CentOS 7: yum -y --enablerepo=epel install jq).

#!/bin/bash

repo_name="backup"

/usr/bin/curl -sS "http://localhost:9200/_snapshot/${repo_name}/_all" | jq '.snapshots[] | .snapshot,.end_time'

The output is just a list of snapshot names and their timestamps:

# bash list_snapshots.sh
"wednesday"
"2018-02-07T02:36:05.564Z"
"thursday"
"2018-02-08T02:37:10.403Z"
"friday"
"2018-02-09T02:31:04.362Z"

Restore a snapshot

No backup can be considered “good” without testing a restore from it. So I made another script to test how the restore would work on a separate test environment:

#!/bin/bash

repo_name='prod'
snap_name='wednesday'

for index_name in $(/usr/bin/curl -sS http://localhost:9200/_aliases | /usr/bin/jq 'keys | .[]' | sed -s "s/\"//g" ); do
    /usr/bin/curl -XPOST "http://localhost:9200/${index_name}/_close"
done

/usr/bin/curl -XPOST "http://localhost:9200/_snapshot/${repo_name}/${snap_name}/_restore?pretty"

I’m pretty sure there must be a better way to do this: what I’m doing is getting all the current indices and closing them all one by one (because you can’t restore an index that is currently open), then restoring the snapshot I copied over from the other environment.

It's pretty horrible, but it works. If you know a better way let me know and I'll change it; if you don't… well, it works :)
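After the restore completes, you can check that the indices are back and open with:

# curl -sS 'http://localhost:9200/_cat/indices?v'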

linux · tech · tips

Synchronize a directory structure with Ansible

Disclaimer: this is not ideal. We should manage the whole configuration with Ansible. “Baby steps” I guess… :)
Consider this a workaround I hope you'll never have to resort to, but I'm sharing it just in case…

We're migrating from some old scripts to Ansible to handle some of our clients' deploys.

One of the tasks handled by those bash scripts was synchronizing a directory structure, so that the application writing the log files would always find the same directories on every application server.

We used rsync for that, copying only the directories:

rsync -av -f"+ */" -f"- *" /path/to/app/ $target:/path/to/app/

To translate this to Ansible, we used two tasks:

---
- name: Deploy log directories
  vars:
    dir_log_path: /var/log/nginx
  hosts: webservers
  serial: 10%
  tasks:
  - name: find log directories
    find:
      paths:
      - '{{ dir_log_path }}'
      file_type: directory
    register: log_dirs
    delegate_to: ws-deploy

  - name: create log directories
    file:
      path: "{{ item.path }}"
      state: directory
      owner: "{{ item.uid }}"
      group: "{{ item.gid }}"
      mode: "{{ item.mode }}"
    with_items: "{{ log_dirs.files }}"

The first task records in the log_dirs variable the directories that exist on ws-deploy, the server where the latest configuration is loaded; the second then recreates the same structure on all the other webservers using Ansible's file module.
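Assuming the playbook above is saved as sync_log_dirs.yml and your inventory file defines the webservers group (both names are just examples), running it is then just:

$ ansible-playbook -i inventory sync_log_dirs.yml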