linux · tech · tips

Barebone Kickstart setup for CentOS 7

Since I had to install a bunch of bare-metal servers and I haven't had time to check out Foreman yet, I created a minimal setup to install them with a Kickstart file.

My early iterations were done in Packer; then I switched to the bare-metal servers to work out the details.

Please note: this is an automated install that WILL DELETE EVERYTHING on /dev/sda !!!

The kickstart file

This kickstart file was put together by iterating on the default CentOS 6 and CentOS 7 install kickstart files (the ones generated by the installer), with a couple of changes based on the documentation and similar examples (many thanks to Jeff Geerling!).

Please note: this is an automated install that WILL DELETE EVERYTHING on /dev/sda !!! – Do not run it on the wrong system!

Also, this is just a “template”: make sure to change it where it makes sense, for example the partitioning scheme and the root password. For the network settings, see the script below that customizes the kickstart file and serves it over HTTP.
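
By the way, if you don't want to keep the root password in clear text, kickstart also accepts a SHA-512 hash with rootpw --iscrypted. One way to generate the hash (assuming you have Python 3 on your workstation; the password and the resulting hash below are just placeholders):

$ python3 -c 'import crypt; print(crypt.crypt("YOUR_SECURE_PASSWORD", crypt.mksalt(crypt.METHOD_SHA512)))'
$6$...

# then, in the kickstart file:
rootpw --iscrypted $6$...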

template.cfg

# Run the installer
install

# Use CDROM installation media
cdrom

# System language
lang en_US.UTF-8

# Keyboard layouts - Change this!
keyboard --vckeymap=it --xlayouts='it','us' --switch='grp:alt_shift_toggle'

# Enable more hardware support
unsupported_hardware

# Network information - the --device=link option activates the specific IP address on the first interface with a link up
# the ZZNAMEZZ labels will be changed later with sed, to customize the installation
network  --bootproto=static --device=link --gateway=ZZGATEWAYZZ --ip=ZZIPADDRZZ --nameserver=ZZDNSZZ --netmask=ZZNETMASKZZ --noipv6 --activate
network  --hostname=ZZHOSTNAMEZZ

# System authorization information
auth --enableshadow --passalgo=sha512

# Root password - Change this!
rootpw YOUR_SECURE_PASSWORD

# System timezone - Change this!
timezone Europe/Rome --isUtc --nontp

# Run the text install
text

# Skip X config
skipx

# Only use a specific disk, Change the drive here!
ignoredisk --only-use=sda

# Overwrite the MBR
zerombr

# Partition clearing information
clearpart --all --initlabel --drives=sda

# System bootloader configuration - Change the drive here
bootloader --location=mbr --boot-drive=sda


# PARTITIONING
# This is our partitioning scheme, change it where required

# this might not be required
part biosboot --fstype="biosboot" --ondisk=sda --size=1

# this is required
part /boot --fstype="xfs" --ondisk=sda --size=1024

# this will create a Volume Group "VGsystem" spanning the whole disk (except for the /boot partition)
part pv.229 --fstype="lvmpv" --ondisk=sda --size=200000 --grow
volgroup VGsystem --pesize=4096 pv.229

# Logical volumes
logvol /         --fstype="xfs"   --size=10240  --label="ROOT"  --name=LVroot  --vgname=VGsystem
logvol /usr      --fstype="xfs"   --size=20480  --name=LVusr    --vgname=VGsystem
logvol /var      --fstype="xfs"   --size=20480  --name=LVvar    --vgname=VGsystem
logvol /var/log  --fstype="xfs"   --size=20480  --name=LVvarlog --vgname=VGsystem

logvol swap      --fstype="swap"  --size=16384  --name=LVswap   --vgname=VGsystem

logvol /tmp      --fstype="xfs"   --size=10240  --name=LVtmp    --vgname=VGsystem
logvol /home     --fstype="xfs"   --size=51200  --name=LVhome   --vgname=VGsystem
logvol /opt      --fstype="xfs"   --size=20480  --name=LVopt    --vgname=VGsystem


# Do not run the Setup Agent on first boot
firstboot --disabled

# Accept the EULA
eula --agreed

# System services - we disable chronyd because we use NTP
services --disabled="chronyd" --enabled="sshd"


# Reboot the system when the install is complete
reboot


# Packages

%packages --ignoremissing --excludedocs
@^minimal
@core
kexec-tools
%end

%addon com_redhat_kdump --disable

%end

# upgrade the system before rebooting

%post
yum -y upgrade
yum clean all
%end

Customizing and serving the kickstart file

As mentioned earlier, I made a pretty simple script to customize the kickstart template and serve it over HTTP.

Please note: this is an automated install that WILL DELETE EVERYTHING on /dev/sda !!!

serve_kickstart.sh

#!/bin/bash

gateway="192.168.0.1"
netmask="255.255.255.0"
dns="192.168.0.11,192.168.0.12"

# this is pretty hacky, sorry
local_ipaddr=$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1)

# accepts hostname and ip address on the command line
server_hostname="$1"
server_ipaddr="$2"

if [ -z "$server_hostname" ]; then
    server_hostname="freshinstall.stardata.lan"
    echo "Using '$server_hostname' as default."
fi

if [ -z "$server_ipaddr" ]; then
    server_ipaddr="192.168.0.99"
    echo "Using '$server_ipaddr' as default IP address."
fi


# create the file to customize
/bin/cp -f template.cfg custom.cfg

# customize the kickstart file
sed -i "s/ZZGATEWAYZZ/$gateway/g" custom.cfg
sed -i "s/ZZIPADDRZZ/$server_ipaddr/g" custom.cfg
sed -i "s/ZZDNSZZ/$dns/g" custom.cfg
sed -i "s/ZZNETMASKZZ/$netmask/g" custom.cfg
sed -i "s/ZZHOSTNAMEZZ/$server_hostname/g" custom.cfg

# create the file to serve
/bin/mv -f custom.cfg c7.cfg

# write the instructions to add to the boot on screen
echo "To use this kickstart, add to the boot command line: "

echo -e "\nip=${server_ipaddr} netmask=${netmask} gateway=${gateway} dns=${dns} text ks=http://${local_ipaddr}:8000/c7.cfg\n\n"

sleep 3

python -m SimpleHTTPServer
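
The last line uses the Python 2 module that ships with CentOS 7; if the machine serving the file only has Python 3, the equivalent (still serving the current directory on port 8000) would be:

python3 -m http.server 8000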

This is what an example run looks like:

$ ./serve_kickstart.sh test01.stardata.lan 192.168.0.100
To use this kickstart, add to the boot command line:

ip=192.168.0.100 netmask=255.255.255.0 gateway=192.168.0.1 dns=192.168.0.11,192.168.0.12 text ks=http://192.168.0.200:8000/c7.cfg

Serving HTTP on 0.0.0.0 port 8000 ...

192.168.0.100 - - [20/Apr/2018 16:03:43] "GET /c7.cfg HTTP/1.1" 200 -

If you take a look at the c7.cfg that is served via http on port 8000, you’ll see that the relevant network placeholders have been swapped with the custom values from the script:

$ grep ^network c7.cfg
network  --bootproto=static --device=link --gateway=192.168.0.1 --ip=192.168.0.100 --nameserver=192.168.0.11,192.168.0.12 --netmask=255.255.255.0 --noipv6 --activate
network  --hostname=test01.stardata.lan
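
If you want to sanity-check the generated file before booting a server with it, the pykickstart package ships a validator; something along these lines should catch most syntax errors:

# yum -y install pykickstart
# ksvalidator c7.cfg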

As usual, I hope this helps some fellow admin out there.

linux · tech · tips

Installing Docker on CentOS 7 “the sensible way”

For a production environment, the best idea is probably to set up a Kubernetes cluster or something like that.

But in our case we just wanted a test system that would allow us to have a couple of containers set up in a sensible manner.

Install Docker

First thing is, of course, to install Docker. The package that comes with CentOS 7 is already obsolete, so we go to the source and download the community edition from docker.com:

# yum -y install yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum -y install docker-ce
# systemctl enable docker
# systemctl start docker
# docker --version
Docker version 17.12.0-ce, build c97c6d6
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Install docker-compose

The second thing we want to install – which for some reason is not packaged alongside Docker – is docker-compose. Since it’s a Python package, we installed pip first:

# yum -y install epel-release
# yum --enablerepo=epel -y install python-pip
# pip install docker-compose
# docker-compose --version
docker-compose version 1.19.0, build 9e633ef

Create a user for the container

We decided that our containers would run with different users, so we created a new user in the docker group:

# useradd -m -G docker container01
# su - container01 -c 'id; docker ps'
uid=1000(container01) gid=1000(container01) groups=1000(container01),994(docker)
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Create a docker-compose.yml file for the container

I grabbed an example compose file from the official site and saved it as /home/container01/docker-compose.yml:

version: '3'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
volumes:
    db_data:

To test the compose file, I switched to the container01 user and ran it:

# su - container01
$ docker-compose up
Creating network "container01_default" with the default driver
Creating volume "container01_db_data" with default driver
Pulling db (mysql:5.7)...
5.7: Pulling from library/mysql
[...]
db_1         | 2018-02-16T17:42:17.911892Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1         | 2018-02-16T17:42:17.915828Z 0 [Note] Event Scheduler: Loaded 0 events
db_1         | 2018-02-16T17:42:17.915984Z 0 [Note] mysqld: ready for connections.
db_1         | Version: '5.7.21'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)

I stopped the process and spun down the containers:

^CGracefully stopping... (press Ctrl+C again to force)
Stopping container01_wordpress_1 ... done
Stopping container01_db_1        ... done

$ docker-compose down
Removing container01_wordpress_1 ... done
Removing container01_db_1        ... done
Removing network container01_default

Using volumes will save your data in /var/lib/docker/volumes/container01_db_data/ and persist it through restarts.
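
You can double-check the volume and its mountpoint with the docker volume commands, for example (as the container01 user or any other user in the docker group):

$ docker volume ls
$ docker volume inspect container01_db_data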

Now I wanted to make sure the containers would start and stop with the server: time for some systemd!

Create a systemd service for the container

I created a new systemd service file in /etc/systemd/system/container01-wordpress.service:

[Unit]
Description=Example WordPress Containers
After=network.target docker.service
[Service]
Type=simple
User=container01
WorkingDirectory=/home/container01
ExecStart=/usr/bin/docker-compose -f /home/container01/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /home/container01/docker-compose.yml down
Restart=always
[Install]
WantedBy=multi-user.target

Then I reloaded the systemd daemon to make sure it would recognize the new service, enabled it, and started it:

# systemctl daemon-reload
# systemctl enable container01-wordpress.service
Created symlink from /etc/systemd/system/multi-user.target.wants/container01-wordpress.service to /etc/systemd/system/container01-wordpress.service.
# systemctl start container01-wordpress.service
# journalctl -f
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.18.0.3. Set the 'ServerName' directive globally to suppress this message
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | [Fri Feb 16 17:47:36.915385 2018] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.25 (Debian) PHP/7.2.2 configured -- resuming normal operations
feb 16 18:47:36 centos7-test.stardata.lan docker-compose[3953]: wordpress_1  | [Fri Feb 16 17:47:36.915502 2018] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
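
As a quick sanity check you can ask systemd for the unit state and hit the published port (assuming WordPress is mapped on port 8000 as in the compose file above):

# systemctl status container01-wordpress.service
# curl -I http://localhost:8000/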

I hope this can help some fellow admin out there :)

linux · tech

How to create a CentOS 7 KVM image with Packer

Packer is a tool to automate the installation and provisioning of virtual machines to generate images for various platforms. You can have, for example, images for your test environment created with QEMU/KVM or Docker and images for your production environment created as Amazon AMI or VMware VMX images.

Basically, Packer starts a VM in a private environment, feeds an ISO to the VM to install the operating system (using kickstart, preseed or various other automation mechanisms) and then waits until the VM restarts and is available via SSH or WinRM. When it is available, Packer can run different provisioners (from bash scripts to your favourite tool like Ansible, Chef or Puppet) to set up the system as required. Once it’s done provisioning, it will shut down the VM and possibly apply post-processors that can, for example, pack a VMware image made of multiple files into a single file, and so on.

In this article I’ll show you the steps to create a CentOS 7 image on KVM and explain some important settings.

First thing, you’ll need Packer. You can download it from https://www.packer.io/downloads.html

# curl -O https://releases.hashicorp.com/packer/0.11.0/packer_0.11.0_linux_amd64.zip
# curl -O https://releases.hashicorp.com/packer/0.11.0/packer_0.11.0_SHA256SUMS
# curl -O https://releases.hashicorp.com/packer/0.11.0/packer_0.11.0_SHA256SUMS.sig
# gpg --recv-keys 51852D87348FFC4C
# gpg --verify packer_0.11.0_SHA256SUMS.sig packer_0.11.0_SHA256SUMS
# sha256sum -c packer_0.11.0_SHA256SUMS 2>/dev/null | grep OK
# unzip packer*.zip ; rm -f packer*.zip
# chmod +x packer
# mv packer /usr/bin/packer.io

I already did something “different” from the official documentation, sorry about that, but CentOS and Fedora already have a completely unrelated program named packer in /usr/sbin/, so to avoid confusion I named the Packer binary packer.io. All my examples will use this syntax, so make sure to keep that in mind when you check other examples on the official website or other blogs.
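
If you want to double-check which binary you're actually running, something like this should clear up any doubt (the exact paths will depend on your system):

# which -a packer packer.io
# packer.io --version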

Let’s make sure we have all we need to run the example. On my CentOS 7 host, I had to install:

# yum -y install epel-release
# yum -y install --enablerepo=epel qemu-system-x86

If you’re running this example on a remote host, you’ll probably want to setup X11 forwarding to be able to see the QEMU console. You’ll need to edit your server’s /etc/ssh/sshd_config file and make sure you have these options enabled:

X11Forwarding yes
X11UseLocalhost no

Then you’ll need to restart sshd and make sure you have at least xauth installed:

# service sshd restart
# yum -y install xauth

At this point, by logging in to your remote host with the -X option to ssh, you should be able to forward X to your local system and see the QEMU graphical console:

# ssh -X user@remotehost 'qemu-system-x86_64'

If you still have problems, this is the article that helped me solve a few issues: http://www.cyberciti.biz/faq/how-to-fix-x11-forwarding-request-failed-on-channel-0/

Now you’ll need a work directory. One important thing to note is that Packer will use this directory, and its subdirectories, as a staging area for its files, including the VM disk image, so I highly recommend creating this work directory on fast storage (SSD works best). In my case, I created it on my RAID 10 array and assigned ownership to my unprivileged user:

# mkdir -p /storage/packer.io/centos7-base
# chown velenux:velenux -R /storage/packer.io

At this point you should not need the root console anymore. If you have problems starting qemu/kvm you’ll probably need to add your unprivileged user to the appropriate groups and log in again.

We’re finally ready to start exploring Packer. Our work directory will contain 3 main components: a Packer configuration file, a kickstart file to set up our CentOS installation automatically, and a provisioning script that will take care of the post-installation setup of the virtual machine.

To make things easier I created a public GitHub repo with an example you can clone from https://github.com/stardata/packer-centos7-kvm-example

The first thing we’re going to examine is the packer configuration file, centos7-base.json:

{
  "builders":
  [
    {
      "type": "qemu",
      "accelerator": "kvm",
      "headless": false,
      "qemuargs": [
        [ "-m", "2048M" ],
        [ "-smp", "cpus=1,maxcpus=16,cores=4" ]
      ],
      "disk_interface": "virtio",
      "disk_size": 100000,
      "format": "qcow2",
      "net_device": "virtio-net",

      "iso_url": "http://centos.fastbull.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso",
      "iso_checksum": "88c0437f0a14c6e2c94426df9d43cd67",
      "iso_checksum_type": "md5",

      "vm_name": "centos7-base",
      "output_directory": "centos7-base-img",

      "http_directory": "docroot",
      "http_port_min": 10082,
      "http_port_max": 10089,

      "ssh_host_port_min": 2222,
      "ssh_host_port_max": 2229,

      "ssh_username": "root",
      "ssh_password": "CHANGEME",
      "ssh_port": 22,
      "ssh_wait_timeout": "1200s",

      "boot_wait": "40s",
      "boot_command": [
        "<up><wait><tab><wait> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/c7-kvm-ks.cfg<enter><wait>"
      ],

      "shutdown_command": "shutdown -P now"
    }
  ],

  "provisioners":
  [
    {
      "type": "shell-local",
      "command": "tar zcf stardata-install.tar.gz stardata-install/"
    },
    {
      "type": "file",
      "source": "stardata-install.tar.gz",
      "destination": "/root/stardata-install.tar.gz"
    },
    {
      "type": "shell",
      "pause_before": "5s",
      "inline": [
        "cd /root/",
        "tar zxf stardata-install.tar.gz",
        "cd stardata-install/",
        "./install.sh",
        "yum clean all"
      ]
    }
  ]
}

I tried to arrange the contents to make it easier to read for newcomers.

The first thing you should notice is the general structure of the file: we have two sections, builders and provisioners.

In our example, the first is a list of only one element (the QEMU/KVM builder), but you could easily add more builders after that, to create images using different plugins.

In the provisioners section we have 3 different provisioners that will be run in sequence: the first runs a command on the host system, the second transfers a file (created/updated by the first) to the VM, and the third runs a series of commands on the VM. We’ll talk a bit more about them later.

Now let’s examine our first builder: based on this configuration, Packer will run QEMU with 1 CPU with 4 cores and 2 GB of RAM, creating a qcow2 virtio disk with 100000 MB of space available. Note that qcow2 is a sparse format, or “thin-provisioned disk”: the image only uses the space it actually needs and grows on demand. Please notice how I set “headless” to false. This is a boolean value, not a string, and when you finish testing and debugging your Packer configuration you’ll probably want to set it back to true.

The next set of parameters informs Packer of the URL where to find the installation ISO for this image. The ISO will be downloaded and cached locally during the first build; you will probably want to pick a closer mirror from http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso

vm_name is pretty self-explanatory and output_directory is where the final image will be, if the build completes correctly.

The http_* parameters are required to set up the HTTP server that Packer will start during the build to serve files (for example, the kickstart file) to the virtual machine.

The ssh_host_* parameters specify the ports that will be redirected from the Host to the VM during the build. Packer utilizes ranges because it can run multiple builds (for multiple platforms) in parallel and allocates different ports for different builds. You can read more about that on the official documentation, https://www.packer.io/docs/builders/qemu.html

The next set of parameters specifies the values to use when accessing the VM via SSH. Note that the password must be the same as the one you set in your kickstart, and ssh_wait_timeout is the maximum time that Packer will wait for the VM to become accessible via SSH. Considering it will have to install the distribution first, I set this to 1200s (20m), although in my tests the whole build process – including the provisioning that happens after the system is available via SSH – took about 13m.

The boot_wait parameter sets a fixed amount of time that Packer will wait before proceeding with the boot_command; it’s important to specify a value that is long enough to allow the system to reach the distribution boot prompt, but short enough so that the default installation won’t start.

The boot_command parameter lets you emulate a sequence of key presses to interact with the boot screen. In my specific case, I’m emulating pressing the Up key (to skip the media check), then Tab to autocomplete the boot parameters based on the selected item; then I add the parameters required for a kickstart installation and emulate pressing the Enter key.
When you run the build, you’ll see this happen on screen without any interaction on your part!

Lastly, the shutdown_command is the command that will be run after the provisioners.

Before talking about the provisioners, it’s worth examining the kickstart file in docroot/c7-kvm-ks.cfg.

# Run the installer
install

# Use CDROM installation media
cdrom

# System language
lang en_US.UTF-8

# Keyboard layouts
keyboard us

# Enable more hardware support
unsupported_hardware

# Network information
network --bootproto=dhcp --hostname=centos7-test.stardata.lan

# System authorization information
auth --enableshadow --passalgo=sha512

# Root password
rootpw CHANGEME

# Selinux in permissive mode (will be disabled by provisioners)
selinux --permissive

# System timezone
timezone UTC

# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=vda

# Run the text install
text

# Skip X config
skipx

# Only use /dev/vda
ignoredisk --only-use=vda

# Overwrite the MBR
zerombr

# Partition clearing information
clearpart --none --initlabel

# Disk partitioning information
part pv.305 --fstype="lvmpv" --ondisk=vda --size=98000
part /boot --fstype="ext4" --ondisk=vda --size=1024 --label=BOOT
volgroup VGsystem --pesize=4096 pv.305
logvol /opt  --fstype="ext4" --size=5120 --name=LVopt --vgname=VGsystem
logvol /usr  --fstype="ext4" --size=10240 --name=LVusr --vgname=VGsystem
logvol /var  --fstype="ext4" --size=10240 --name=LVvar --vgname=VGsystem
logvol swap  --fstype="swap" --size=4096 --name=LVswap --vgname=VGsystem
logvol /  --fstype="ext4" --size=10240 --label="ROOT" --name=LVroot --vgname=VGsystem
logvol /tmp  --fstype="ext4" --size=5120 --name=LVtmp --vgname=VGsystem
logvol /var/log  --fstype="ext4" --size=10240 --name=LVvarlog --vgname=VGsystem
logvol /home  --fstype="ext4" --size=5120 --name=LVhome --vgname=VGsystem


# Do not run the Setup Agent on first boot
firstboot --disabled

# Accept the EULA
eula --agreed

# System services
services --disabled="chronyd" --enabled="sshd"

# Reboot the system when the install is complete
reboot


# Packages

%packages --ignoremissing --excludedocs
@^minimal
@core
kexec-tools
# unnecessary firmware
-aic94xx-firmware
-atmel-firmware
-b43-openfwwf
-bfa-firmware
-ipw2100-firmware
-ipw2200-firmware
-ivtv-firmware
-iwl100-firmware
-iwl1000-firmware
-iwl3945-firmware
-iwl4965-firmware
-iwl5000-firmware
-iwl5150-firmware
-iwl6000-firmware
-iwl6000g2a-firmware
-iwl6050-firmware
-libertas-usb8388-firmware
-ql2100-firmware
-ql2200-firmware
-ql23xx-firmware
-ql2400-firmware
-ql2500-firmware
-rt61pci-firmware
-rt73usb-firmware
-xorg-x11-drv-ati-firmware
-zd1211-firmware

%end

%addon com_redhat_kdump --enable --reserve-mb='auto'

%end

%post
yum -y upgrade
yum clean all
%end

As you can see the file is commented, so I will not spend too much time on it, but it’s important to note that the password is the same as the one we set in the Packer configuration and that the network options are set to DHCP, because Packer runs a private network for the build and provides an IP address to the VM.
The partitioning scheme is similar to what we use in production and is provided as an example, but I highly recommend using your own partitioning scheme, which you can retrieve from the file /root/anaconda-ks.cfg after a “normal” installation.
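
A quick way to pull just the storage-related lines out of an existing installation, if you want to reuse your own scheme, is something like:

# grep -E '^(ignoredisk|zerombr|clearpart|part|volgroup|logvol|bootloader)' /root/anaconda-ks.cfg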

After the operating system is installed and restarted, SSH becomes available and Packer proceeds to run the provisioners.

In our example, the first provisioner runs a shell command on the host system to update the contents of stardata-install.tar.gz, so if you modify stardata-install/install.sh you’ll be uploading the updated version to the VM.

The second provisioner, as mentioned, copies stardata-install.tar.gz to the /root/ directory in the VM.

The third and last provisioner runs a few commands on the VM: it enters /root/, extracts the tar.gz, enters stardata-install/ and runs ./install.sh, and finally runs yum clean all to clean up the yum cache so our image will be even smaller.

We’re ready for our first build. We’re going to clone the repository and run packer.io with PACKER_LOG=1 so we can see all the debug messages.

$ cd /storage/packer.io/centos7-base/
$ git clone https://github.com/stardata/packer-centos7-kvm-example.git
$ cd packer-centos7-kvm-example
$ PACKER_LOG=1 packer.io build centos7-base.json
...

If everything works correctly, at the end of the build you’ll have your qcow2-format image in centos7-base-img/
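
You can inspect the result with qemu-img to confirm the format and the virtual vs. actual size; the file name matches the vm_name set in the builder, so in this example it should be:

$ qemu-img info centos7-base-img/centos7-base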

For more information, you can check the official Packer documentation and the example repository linked above.

linux · tech

Ethernet bonding in CentOS 7

Just a few quick notes about how I configured Ethernet bonding on CentOS 7. I want to write it down because it was subtly different from what I had on CentOS 6, so I’ll have a reference for the future ;)

/etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
IPADDR=192.168.0.100
PREFIX=24
GATEWAY=192.168.0.1
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
USERCTL=no
BONDING_OPTS="mode=1 miimon=100 downdelay=300 updelay=30000 primary=enp2s0f0"
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=no
IPV6_PEERDNS=no
IPV6_PEERROUTES=no
IPV6_FAILURE_FATAL=no
DNS1=192.168.0.251
DNS2=192.168.0.252
DNS3=192.168.0.253
DOMAIN=stardata.lan

/etc/sysconfig/network-scripts/ifcfg-enp2s0f0

NAME=enp2s0f0
DEVICE=enp2s0f0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED="no"
USERCTL="no"
MASTER="bond0"
SLAVE="yes"

/etc/sysconfig/network-scripts/ifcfg-enp4s0f0

NAME=enp4s0f0
DEVICE=enp4s0f0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED="no"
USERCTL="no"
MASTER="bond0"
SLAVE="yes"
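
Once the files are in place and the network service has been restarted, the kernel exposes the bond status under /proc, which is the quickest way to check which slave is active and the link state of each interface:

# systemctl restart network
# cat /proc/net/bonding/bond0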

For more info and different methods to setup the bonding (nmtui is actually pretty cool), check the official RHEL7 documentation.

linux · tech

firewalld and nmcli – how to open a port on a specific interface on CentOS 7

For admins used to iptables, the changes in RHEL 7.x made life a lot harder: the default config is a maze of zones and rules sending the traffic through different chains. I had to spend hours tracking down how to add a single port to a single zone and switch one interface from one zone to another, so I might as well document the whole experience for the sake of fellow admins out there.

First things first, let’s see how our interfaces are configured:

# firewall-cmd --get-active-zones
public
  interfaces: ens160 ens192 ens224 ens256

In my particular case I want to switch ens224 (my management interface) from the “public” to the “work” zone, so I check what services are enabled in both zones:

# firewall-cmd --zone=public --list-services
dhcpv6-client http ssh
# firewall-cmd --zone=work --list-services
dhcpv6-client ipp-client ssh

And then I make sure I have http enabled in the “work” zone as well:

# firewall-cmd --permanent --zone=work --add-service http

Then I went to switch the ens224 interface from “public” to “work”… but it didn’t work:

# firewall-cmd --permanent --zone=public --remove-interface=ens224
# firewall-cmd --permanent --zone=work --add-interface=ens224
# firewall-cmd --reload
success
# firewall-cmd --get-active-zones
public
  interfaces: ens160 ens192 ens224 ens256

You also need to change the zone in the interface configuration, either by editing the configuration file in /etc/sysconfig/network-scripts/ or, as in my case, by fiddling with NetworkManager:

# nmcli c
NAME    UUID                                  TYPE            DEVICE 
nas     xxxxxxxx-yyyy-zzzz-tttt-wwwwwwwwwwww  802-3-ethernet  ens256 
cda-be  xxxxxxxx-yyyy-zzzz-tttt-wwwwwwwwwwww  802-3-ethernet  ens224 
bal     xxxxxxxx-yyyy-zzzz-tttt-wwwwwwwwwwww  802-3-ethernet  ens160 
cda-fe  xxxxxxxx-yyyy-zzzz-tttt-wwwwwwwwwwww  802-3-ethernet  ens192
# nmcli -p con show cda-be|grep connection.zone
connection.zone:                        --
# nmcli con modify cda-be connection.zone work
# nmcli -p con show cda-be|grep connection.zone
connection.zone:                        work
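
For reference, when the interface is not managed by NetworkManager, the equivalent change is a single extra line in /etc/sysconfig/network-scripts/ifcfg-ens224 (assuming the interface is ens224):

ZONE=work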

I reloaded the firewall configuration again and verified with iptables that the rules were now pointing to the “work” zone and that the zone did allow HTTP traffic:

# firewall-cmd --reload
# iptables -nvL
[...]
Chain INPUT_ZONES (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 IN_public  all  --  ens256 *       0.0.0.0/0            0.0.0.0/0           [goto] 
    0     0 IN_public  all  --  ens192 *       0.0.0.0/0            0.0.0.0/0           [goto] 
    1    44 IN_public  all  --  ens160 *       0.0.0.0/0            0.0.0.0/0           [goto] 
    1    60 IN_work    all  --  ens224 *       0.0.0.0/0            0.0.0.0/0           [goto] 
    0     0 IN_public  all  --  +      *       0.0.0.0/0            0.0.0.0/0           [goto] 
[...]
Chain IN_work_allow (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:631 ctstate NEW
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 ctstate NEW
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:22 ctstate NEW
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:443 ctstate NEW
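
A quicker check, without digging through the iptables output, is to ask firewalld directly which zone the interface ended up in:

# firewall-cmd --get-zone-of-interface=ens224
work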

I hope this will be useful for someone out there :)
More info, as usual, is in the official documentation for firewall-cmd and nmcli.