Adaptec Series 6 – 6G SAS/PCIe 2 (rev 01) [9005:028b] tools

Posting this in the hope of saving some fellow admins out there a bit of time.

I had this on a server:

# lspci | grep -i adaptec
05:00.0 RAID bus controller: Adaptec Series 6 - 6G SAS/PCIe 2 (rev 01)
# lspci -n | grep '05:00'
05:00.0 0104: 9005:028b (rev 01)

It seems impossible to see the model number; not even lshw reports it. Luckily, the tools are the same for all the controllers: you can find them by going to the Microsemi Adaptec Series 6 support page, clicking any controller, clicking the Storage Manager download link and then the Microsemi Adaptec ARCCONF Command Line Utility.

This is the link to the Microsemi Adaptec ARCCONF Command Line Utility Download Page at the time of this writing.

Once installed, the tool will be in /usr/Arcconf; I created a symlink in /usr/bin.
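
For example, something like this (assuming the package placed the arcconf binary directly under /usr/Arcconf; adjust the path if your version differs):

# ln -s /usr/Arcconf/arcconf /usr/bin/arcconf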

# arcconf getconfig 1
Controllers found: 1
----------------------------------------------------------------------
Controller information
----------------------------------------------------------------------
   Controller Status                        : Optimal
   Channel description                      : SAS/SATA
   Controller Model                         : Adaptec 6405E
...

There’s more information about how to use the tools on these pages:

Test Driven Infrastructure with Goss

I was looking into tools to help write Ansible playbooks and ended up on the Molecule website. The project looks really interesting, but what caught my attention at first was the option to test your playbooks using one of three frameworks: Goss, Serverspec or Testinfra.

Goss is a very young project, but it has three features that got me interested:

  • It’s tiny and compact: a single static Go binary with no external dependencies.
  • It’s fast.
  • It seems very easy to use without sacrificing too much power.

I had a few basic checks created years ago for my servers, so I set out to see how much work it would take to port them from Bash (ewww, I know) to Goss.

I ended up rewriting most of them in less than 3 hours, and now:

  • they are way easier to read
  • I added a lot of functionality
  • and it still only took 3 hours, even though I had to deal with a few quirks and some trial and error to bridge the gaps where the documentation was too sparse
    (but it may be better by the time you read this: I sent a pull request to the project with some improvements to the documentation)

Excited yet? Let’s see how to get started. First thing, you’ll want goss.

$ curl -L -o goss https://github.com/aelsabbahy/goss/releases/download/v0.2.4/goss-linux-amd64
$ chmod +x goss
$ ./goss --help
NAME:
   goss - Quick and Easy server validation
[...]

The README on the official website has a very nice “45 seconds introduction” that I recommend checking out if you want to get a quick idea of what Goss can do.

I’ll start a bit slower and talk you through some considerations I made after a few hours working with it.

Goss files are YAML or JSON files describing the tests you want to run to validate your system. Goss has a cool autoadd feature that automatically creates a few predefined tests for a given resource, so let’s start with this:

# ./goss -g httpd.yaml autoadd httpd
Adding Package to 'httpd.yaml':
httpd:
  installed: true
  versions:
  - 2.2.15

Adding Process to 'httpd.yaml':
httpd:
  running: true

Adding Port to 'httpd.yaml':
tcp6:80:
  listening: true
  ip:
  - '::'

Adding Service to 'httpd.yaml':
httpd:
  enabled: true
  running: true


# cat httpd.yaml
package:
  httpd:
    installed: true
    versions:
    - 2.2.15
port:
  tcp6:80:
    listening: true
    ip:
    - '::'
service:
  httpd:
    enabled: true
    running: true
process:
  httpd:
    running: true

So, we already have a bare-bones test suite to make sure our webserver is up and running: it will check that the package is installed and which version, make sure that something is listening on all addresses (tcp6 ::) on port 80, make sure that the httpd service is enabled at boot time and currently running, and make sure that the httpd process is currently listed in the process list.

Please note that “service running” will get the data from upstart/systemd while “process running” will actually check the process list: if the httpd process is running but the service is not, then something went wrong!

Let’s try to run our basic test suite:

# ./goss -g httpd.yaml validate --format documentation
Process: httpd: running: matches expectation: [true]
Port: tcp6:80: listening: matches expectation: [true]
Port: tcp6:80: ip: matches expectation: [["::"]]
Package: httpd: installed: matches expectation: [true]
Package: httpd: version: matches expectation: [["2.2.15"]]
Service: httpd: enabled: matches expectation: [true]
Service: httpd: running: matches expectation: [true]

Total Duration: 0.015s
Count: 14, Failed: 0, Skipped: 0

All green! Our tests passed.

Now let’s say we want to make sure httpd runs with a ServerLimit of 200 clients. Goss allows us to check file contents using powerful regular expressions; for example, we can add this to our httpd.yaml file:

file:
  /etc/httpd/conf/httpd.conf:
    exists: true
    contains:
    - "/^ServerLimit\\s+200$/"

We’re saying that we want a line starting with ServerLimit, followed by some spaces or tabs and then 200 at the end of the line. Let’s run our suite again and see if it works:

# ./goss -g httpd.yaml validate --format documentation
File: /etc/httpd/conf/httpd.conf: exists: matches expectation: [true]
File: /etc/httpd/conf/httpd.conf: contains: matches expectation: [/^ServerLimit\s+200$/]
[...]
Count: 18, Failed: 0, Skipped: 0

All green again! Our server looks to be in good shape. Let’s add another check; this time we want to make sure the DocumentRoot directory exists. We add another entry to the list:

file:
  /etc/httpd/conf/httpd.conf:
    exists: true

service:
  httpd:
    enabled: true
    running: true

file:
  /var/www/html:
    filetype: directory
    exists: true

But if we run this suite we’ll notice that our previous check on httpd.conf doesn’t run anymore. This happens because the goss file describes a nested data structure, so the second file entry overwrites the first, and you’ll end up scratching your head, wondering why your first test wasn’t run.
In JSON it would have been more obvious:

{
  "file": {
    "/etc/httpd/conf/httpd.conf": {
      "exists": true
    }
  },

  "service": {
    "httpd": {
      "enabled": true,
      "running": true
    }
  },

  "file": {
    "/var/www/html": {
      "filetype": "directory",
      "exists": true
    }
  }
}

See how the second file entry overwrites the first one? Keep that in mind!
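
If you actually want both checks, the fix is to keep a single file: section and list both paths under it; here’s a sketch of the merged layout for our example:

file:
  /etc/httpd/conf/httpd.conf:
    exists: true
    contains:
    - "/^ServerLimit\\s+200$/"
  /var/www/html:
    filetype: directory
    exists: true

service:
  httpd:
    enabled: true
    running: true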

Since you’ll probably want to keep your tests in different files, let’s talk quickly about how to manage that. For example, let’s create a new file to monitor a fileserver mount:

# ./goss -g fileserver.yaml add mount /mnt/nfs
Adding Mount to 'fileserver.yaml'
[...]
# cat fileserver.yaml
mount:
  /mnt/nfs:
    exists: true
    opts:
    - rw
    - nodev
    - noexec
    - relatime
    source: vip-nfs.stardata.lan:/data/nfs
    filesystem: nfs

If we want to check both fileserver.yaml and httpd.yaml at the same time, we’ll need to use the gossfile directive, creating a new file that includes the other two:

# ./goss -g all.yaml add goss httpd.yaml
# ./goss -g all.yaml add goss fileserver.yaml
# cat all.yaml
gossfile:
  fileserver.yaml: {}
  httpd.yaml: {}

# ./goss -g all.yaml validate
.............

Total Duration: 0.016s
Count: 13, Failed: 0, Skipped: 0

If we want to get a single file containing all the tests, we can use the render command:

# ./goss -g all.yaml render
file:
  /etc/httpd/conf/httpd.conf:
    exists: true
    contains:
    - /^ServerLimit\s+200$/
package:
  httpd:
    installed: true
    versions:
    - 2.2.15
port:
  tcp6:80:
    listening: true
    ip:
    - '::'
service:
  httpd:
    enabled: true
    running: true
process:
  httpd:
    running: true
mount:
  /mnt/nfs:
    exists: true
    opts:
    - rw
    - nodev
    - noexec
    - relatime
    source: vip-nfs.stardata.lan:/data/nfs
    filesystem: nfs

This way we can easily distribute the test suite since it’s a single file.
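
For example (the host name below is just a placeholder), you could render everything into one file and run it on another machine that only has the goss binary:

# ./goss -g all.yaml render > combined.yaml
# scp goss combined.yaml root@webserver01:/root/
# ssh root@webserver01 '/root/goss -g /root/combined.yaml validate'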

I hope I’ve sparked some interest in this tool. It’s still very basic (for example, it doesn’t support variables or loops yet), but it’s great for quickly writing some tests to make sure your servers are configured and working as intended!

How to create a CentOS 7 KVM image with Packer

Packer is a tool to automate the installation and provisioning of virtual machines to generate images for various platforms. You can have, for example, images for your test environment created with QEMU/KVM or Docker and images for your production environment created as Amazon AMI or VMware VMX images.

Basically, Packer starts a VM in a private environment, feeds an ISO to the VM to install the operating system (using kickstart, preseed or various other automation mechanisms) and then waits until the VM restarts and is available via SSH or WinRM. When it is available, Packer can run different provisioners (from bash scripts to your favourite tool like Ansible, Chef or Puppet) to set up the system as required. Once it’s done provisioning, it will shut down the VM and possibly apply post-processors that can, for example, pack a VMware image made up of multiple files into a single file, and so on.

In this article I’ll show you the steps to create a CentOS 7 image on KVM and explain some important settings.

First thing, you’ll need Packer. You can download it from https://www.packer.io/downloads.html

# curl -O https://releases.hashicorp.com/packer/0.11.0/packer_0.11.0_linux_amd64.zip
# curl -O https://releases.hashicorp.com/packer/0.11.0/packer_0.11.0_SHA256SUMS
# curl -O https://releases.hashicorp.com/packer/0.11.0/packer_0.11.0_SHA256SUMS.sig
# gpg --recv-keys 51852D87348FFC4C
# gpg --verify packer_0.11.0_SHA256SUMS.sig packer_0.11.0_SHA256SUMS
# sha256sum -c packer_0.11.0_SHA256SUMS 2>/dev/null | grep OK
# unzip packer*.zip ; rm -f packer*.zip
# chmod +x packer
# mv packer /usr/bin/packer.io

I already did something “different” from the official documentation, sorry about that, but CentOS and Fedora already ship a completely unrelated program named packer in /usr/sbin/, so to avoid confusion I named the Packer binary packer.io. All my examples will use this syntax, so make sure to keep that in mind when you check other examples on the official website or other blogs.

Let’s make sure we have all we need to run the example. On my CentOS 7 host, I had to install:

# yum -y install epel-release
# yum -y install --enablerepo=epel qemu-system-x86

If you’re running this example on a remote host, you’ll probably want to setup X11 forwarding to be able to see the QEMU console. You’ll need to edit your server’s /etc/ssh/sshd_config file and make sure you have these options enabled:

X11Forwarding yes
X11UseLocalhost no

Then you’ll need to restart sshd and make sure you have at least xauth installed:

# service sshd restart
# yum -y install xauth

At this point, by logging in to your remote host with the -X option to ssh, you should be able to forward X to your local system and see the QEMU graphical console:

# ssh -X user@remotehost 'qemu-system-x86_64'

If you still have problems, this is the article that helped me solve a few issues: http://www.cyberciti.biz/faq/how-to-fix-x11-forwarding-request-failed-on-channel-0/

Now you’ll need a work directory. One important thing to note is that Packer will use this directory, and its subdirectories, as a staging area for its files, including the VM disk image, so I highly recommend creating this workdir on fast storage (SSD works best). In my case, I created it on my RAID 10 array and assigned ownership to my unprivileged user:

# mkdir -p /storage/packer.io/centos7-base
# chown velenux:velenux -R /storage/packer.io

At this point you should not need the root console anymore. If you have problems starting qemu/kvm, you’ll probably need to add your unprivileged user to the appropriate groups and log in again.
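
If you do need that, it’s usually something along these lines (group names vary between distributions and qemu versions, so check the owner of /dev/kvm first):

# ls -l /dev/kvm
# usermod -a -G kvm velenux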

We’re finally ready to start exploring Packer. Our work directory will contain 3 main components: a Packer configuration file, a kickstart file to set up our CentOS installation automatically and a provisioning script that will take care of the post-installation setup of the virtual machine.

To make things easier I created a public GitHub repo with an example you can clone: https://github.com/stardata/packer-centos7-kvm-example

The first thing we’re going to examine is the packer configuration file, centos7-base.json:

{
  "builders":
  [
    {
      "type": "qemu",
      "accelerator": "kvm",
      "headless": false,
      "qemuargs": [
        [ "-m", "2048M" ],
        [ "-smp", "cpus=1,maxcpus=16,cores=4" ]
      ],
      "disk_interface": "virtio",
      "disk_size": 100000,
      "format": "qcow2",
      "net_device": "virtio-net",

      "iso_url": "http://centos.fastbull.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso",
      "iso_checksum": "88c0437f0a14c6e2c94426df9d43cd67",
      "iso_checksum_type": "md5",

      "vm_name": "centos7-base",
      "output_directory": "centos7-base-img",

      "http_directory": "docroot",
      "http_port_min": 10082,
      "http_port_max": 10089,

      "ssh_host_port_min": 2222,
      "ssh_host_port_max": 2229,

      "ssh_username": "root",
      "ssh_password": "CHANGEME",
      "ssh_port": 22,
      "ssh_wait_timeout": "1200s",

      "boot_wait": "40s",
      "boot_command": [
        "<up><wait><tab><wait> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/c7-kvm-ks.cfg<enter><wait>"
      ],

      "shutdown_command": "shutdown -P now"
    }
  ],

  "provisioners":
  [
    {
      "type": "shell-local",
      "command": "tar zcf stardata-install.tar.gz stardata-install/"
    },
    {
      "type": "file",
      "source": "stardata-install.tar.gz",
      "destination": "/root/stardata-install.tar.gz"
    },
    {
      "type": "shell",
      "pause_before": "5s",
      "inline": [
        "cd /root/",
        "tar zxf stardata-install.tar.gz",
        "cd stardata-install/",
        "./install.sh",
        "yum clean all"
      ]
    }
  ]
}

I tried to arrange the contents to make it easier to read for newcomers.

The first thing you should notice is the general structure of the file: we have two sections, builders and provisioners.

In our example, the first is a list of only one element (the QEMU/KVM builder), but you could easily add more builders after that, to create images using different plugins.

In the provisioners section we have 3 different provisioners that will run in sequence: the first runs a command on the host system, the second transfers a file (created/updated by the first) to the VM and the third runs a series of commands on the VM. We’ll talk a bit more about them later.

Now let’s examine our first builder: based on this configuration, Packer will run QEMU with 1 CPU with 4 cores and 2 GB of RAM, creating a qcow2 virtio disk with 100000 MB of space available. Note that qcow2 is a sparse format, or “thin provisioned disk”: the disk image will only use the space it needs and grow as required. Please notice how I set “headless” to false: this is a boolean value, not a string, and when you finish testing and debugging your Packer configuration you’ll probably want to set it back to true.

The next set of parameters tells Packer the URI where to find the installation ISO for this image. The ISO will be downloaded and cached locally during the first build, and you will probably want to pick a better mirror from http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso

vm_name is pretty self-explanatory and output_directory is where the final image will be, if the build completes correctly.

The http_* parameters are required to setup the HTTP server that Packer will start during the build to serve files (for example, the kickstart file) to the virtual machine.
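
If you want to double-check that the kickstart is actually reachable while a build is running, you can fetch it from the host yourself; the exact port is whichever free port in the configured range Packer picked (it should show up in the debug output when running with PACKER_LOG=1). For example:

$ curl http://127.0.0.1:10082/c7-kvm-ks.cfg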

The ssh_host_* parameters specify the ports that will be redirected from the host to the VM during the build. Packer uses ranges because it can run multiple builds (for multiple platforms) in parallel and allocates different ports to different builds. You can read more about that in the official documentation: https://www.packer.io/docs/builders/qemu.html

The next set of parameters specifies the values to use when accessing the VM via SSH. Note that the password must be the same one you set in your kickstart, and ssh_wait_timeout is the maximum time Packer will wait for the VM to become accessible via SSH. Considering it will have to install the distribution first, I set this to 1200s (20m), although in my tests the whole build process, including the provisioning that happens after the system is available via SSH, took about 13m.

The boot_wait parameter sets a fixed amount of time that Packer will wait before proceeding with the boot_command; it’s important to specify a value that is long enough to allow the system to reach the distribution boot prompt, but short enough so that the default installation won’t start.

The boot_command parameter allows Packer to emulate various key presses to interact with the boot screen. In my specific case, I’m emulating pressing the Up key (to skip the media check), then Tab to autocomplete the boot parameters based on the selected item; then I add the parameters required for a kickstart installation and emulate pressing the Enter key.
When you run the build you’ll see this happen on your screen without any interaction on your part!

Lastly, the shutdown_command is the command that will be run after the provisioners.

Before talking about the provisioners, it’s worth examining the kickstart file in docroot/c7-kvm-ks.cfg.

# Run the installer
install

# Use CDROM installation media
cdrom

# System language
lang en_US.UTF-8

# Keyboard layouts
keyboard us

# Enable more hardware support
unsupported_hardware

# Network information
network --bootproto=dhcp --hostname=centos7-test.stardata.lan

# System authorization information
auth --enableshadow --passalgo=sha512

# Root password
rootpw CHANGEME

# Selinux in permissive mode (will be disabled by provisioners)
selinux --permissive

# System timezone
timezone UTC

# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=vda

# Run the text install
text

# Skip X config
skipx

# Only use /dev/vda
ignoredisk --only-use=vda

# Overwrite the MBR
zerombr

# Partition clearing information
clearpart --none --initlabel

# Disk partitioning information
part pv.305 --fstype="lvmpv" --ondisk=vda --size=98000
part /boot --fstype="ext4" --ondisk=vda --size=1024 --label=BOOT
volgroup VGsystem --pesize=4096 pv.305
logvol /opt  --fstype="ext4" --size=5120 --name=LVopt --vgname=VGsystem
logvol /usr  --fstype="ext4" --size=10240 --name=LVusr --vgname=VGsystem
logvol /var  --fstype="ext4" --size=10240 --name=LVvar --vgname=VGsystem
logvol swap  --fstype="swap" --size=4096 --name=LVswap --vgname=VGsystem
logvol /  --fstype="ext4" --size=10240 --label="ROOT" --name=LVroot --vgname=VGsystem
logvol /tmp  --fstype="ext4" --size=5120 --name=LVtmp --vgname=VGsystem
logvol /var/log  --fstype="ext4" --size=10240 --name=LVvarlog --vgname=VGsystem
logvol /home  --fstype="ext4" --size=5120 --name=LVhome --vgname=VGsystem


# Do not run the Setup Agent on first boot
firstboot --disabled

# Accept the EULA
eula --agreed

# System services
services --disabled="chronyd" --enabled="sshd"

# Reboot the system when the install is complete
reboot


# Packages

%packages --ignoremissing --excludedocs
@^minimal
@core
kexec-tools
# unnecessary firmware
-aic94xx-firmware
-atmel-firmware
-b43-openfwwf
-bfa-firmware
-ipw2100-firmware
-ipw2200-firmware
-ivtv-firmware
-iwl100-firmware
-iwl1000-firmware
-iwl3945-firmware
-iwl4965-firmware
-iwl5000-firmware
-iwl5150-firmware
-iwl6000-firmware
-iwl6000g2a-firmware
-iwl6050-firmware
-libertas-usb8388-firmware
-ql2100-firmware
-ql2200-firmware
-ql23xx-firmware
-ql2400-firmware
-ql2500-firmware
-rt61pci-firmware
-rt73usb-firmware
-xorg-x11-drv-ati-firmware
-zd1211-firmware

%end

%addon com_redhat_kdump --enable --reserve-mb='auto'

%end

%post
yum -y upgrade
yum clean all
%end

As you can see the file is commented, so I will not spend too much time on it, but it’s important to note that the password is the same one we set in the Packer configuration and that the network options are set to DHCP, because Packer runs a private network for the build and provides an IP address to the VM.
The partitioning scheme is similar to what we use in production and is provided as an example, but I highly recommend you use your own partitioning scheme, which you can retrieve from the file /root/anaconda-ks.cfg after a “normal” installation.
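
For example (the host name is a placeholder), you can grab the kickstart that Anaconda generated on one of your existing machines and copy just its partitioning directives into docroot/c7-kvm-ks.cfg, keeping the password and network settings described above:

$ scp root@reference-host:/root/anaconda-ks.cfg /tmp/reference-ks.cfg
$ grep -E '^(ignoredisk|zerombr|clearpart|part|volgroup|logvol)' /tmp/reference-ks.cfg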

After the operating system is installed and restarted, SSH becomes available and Packer proceeds to run the provisioners.

In our example, the first provisioner runs a shell command on the host system to update the contents of stardata-install.tar.gz, so if you modify stardata-install/install.sh you’ll be uploading the updated version to the VM.

The second provisioner, as we mentioned, copies stardata-install.tar.gz to the /root/ directory in the VM.

The third and last provisioner runs a few commands: it enters /root/, extracts the tar.gz, enters stardata-install/ and runs ./install.sh, and then runs yum clean all to clean up the yum cache so our image will be even smaller.

We’re ready for our first build. We’re going to clone the repository and run packer.io with PACKER_LOG=1 so we can see all the debug messages.

$ cd /storage/packer.io/centos7-base/
$ git clone https://github.com/stardata/packer-centos7-kvm-example.git
$ cd packer-centos7-kvm-example
$ PACKER_LOG=1 packer.io build centos7-base.json
...

If everything works correctly, at the end of the build you’ll have your qcow2-format image in centos7-base-img/
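
A couple of quick checks you can run on the result, for example (assuming the default output file name, which matches vm_name): the first command prints the image metadata, the second boots the image for a manual smoke test.

$ qemu-img info centos7-base-img/centos7-base
$ qemu-system-x86_64 -enable-kvm -m 2048 -drive file=centos7-base-img/centos7-base,format=qcow2,if=virtio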

For more information, you can check: