Creating a Corosync 2.x + Pacemaker 1.1 cluster on Ubuntu 14.04 LTS

Last month I had to create a 2-node active/passive cluster based on Ubuntu 14.04 LTS. The caveat was that this release bundles Corosync 2.x and Pacemaker 1.1, but has no access to Red Hat tools such as pcs and CMAN that are referenced in the official documentation. You can find my config files on github.

I assume you already have ssh set up to access all nodes via key exchange and that your /etc/hosts file looks something like this (substitute your own addresses for the placeholders):

127.0.0.1      localhost.localdomain  localhost
&lt;node01-ip&gt;    node01.cluster.lan     node01
&lt;node02-ip&gt;    node02.cluster.lan     node02
&lt;vip01-ip&gt;     vip01.cluster.lan      vip01

First thing was to install a few packages (on both nodes):

# apt-get install pacemaker corosync rsync screen vim-nox mutt mailx curl wget sysstat ntp

Then I made sure I had a basic firewall set up that allowed the two nodes to talk to each other and exposed ports 80, 443 and 22 to the public (based on default Red Hat/CentOS firewall rules). I installed iptables-persistent on both nodes, created the /etc/iptables/rules.v4 file and restarted the service:

# apt-get install iptables-persistent
# cat > /etc/iptables/rules.v4 << 'EOF'
# Generated by iptables-save v1.4.21 on Fri Apr 24 17:08:30 2015
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# let the two nodes talk to each other (corosync totem, see mcastport below)
-A INPUT -p udp -m udp --dport 5405 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Fri Apr 24 17:08:30 2015
EOF
# service iptables-persistent restart
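
To double-check that the rules actually loaded, list the active ruleset (packet counters will obviously differ on your system):

# iptables -L INPUT -n -v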

Then I went on reading the Fantastic Manual™ for corosync. First step is always to generate a new key and share it with other members of the cluster:

node01# corosync-keygen
node01# rsync -a /etc/corosync/authkey node02:/etc/corosync/
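
The key must stay readable by root only; rsync -a preserves the restrictive mode that corosync-keygen sets, but it doesn't hurt to verify on the second node:

node02# ls -l /etc/corosync/authkey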

Then you need to create a configuration file in /etc/corosync/corosync.conf. The format is a bit different from Corosync 1.x, so make sure to read the manual or use my config as a template and change the parameters you see fit (for example the cluster name and the addresses of your nodes in the nodelist).

compatibility: whitetank

totem {
        # cluster name
        cluster_name: coro01

        # totem parameters
        version: 2
        secauth: off
        rrp_mode: none
        vsftype: none
        clear_node_high_bit: yes

        # low level network parameters
        token: 3000
        token_retransmits_before_loss_const: 10
        join: 60
        consensus: 4000
        max_messages: 20

        # not documented?
        threads: 0

        # ring0 interface
        interface {
                ringnumber: 0
                mcastport: 5405
                ttl: 1
        }

        # use unicast UDP (udpu) instead of multicast
        transport: udpu
}

# quorum
quorum {
        provider: corosync_votequorum
        two_node: 1
}

# nodes
nodelist {
        node {
                ring0_addr: node01.cluster.lan
                nodeid: 275446325
        }
        node {
                ring0_addr: node02.cluster.lan
                nodeid: 278989934
        }
}

# logging
logging {
        fileline: off
        to_logfile: yes
        to_syslog: no
        debug: off
        logfile: /var/log/cluster/corosync.log
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}
Since I’m logging to a file, I made sure to create the necessary directory and touch an empty log on both nodes:

# mkdir /var/log/cluster
# touch /var/log/cluster/corosync.log

Then I made sure that corosync and pacemaker would start at boot time on both nodes:

# for SRV in corosync pacemaker ; do
	update-rc.d $SRV defaults
	service $SRV start
done
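
At this point it's worth checking that both nodes have joined the ring and that Pacemaker sees them online before touching the resource configuration:

# corosync-quorumtool -s
# crm status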

Finally I ran crm configure edit and made a very simple configuration for my cluster:

node $id="275446325" node01.cluster.lan
node $id="278989934" node02.cluster.lan

primitive ip_www01 ocf:heartbeat:IPaddr2 \
        params ip="" nic="eth0" cidr_netmask="32" iflabel="www01" \
        operations $id="ip_www01-operations" \
        op monitor interval="30s" timeout="60s" start-delay="5s" \
        meta target-role="Started"
primitive mail_www01 ocf:heartbeat:MailTo \
        op monitor interval="60s" timeout="20s" start-delay="30s" \
        params email="" subject="Cluster coro01 Migration"

group www01 ip_www01 mail_www01

location www01_pref   www01 100: node01.cluster.lan
location www01_nopref www01  10: node02.cluster.lan

property $id="cib-bootstrap-options" \
        cluster-infrastructure="corosync" \
        stonith-enabled="false"

rsc_defaults $id="rsc-options"
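
With the configuration in place, a quick way to test failover is to move the group to the passive node and back using the crm shell:

node01# crm resource migrate www01 node02.cluster.lan
node01# crm resource unmigrate www01

Note that unmigrate removes the temporary location constraint that migrate created, so the group returns to node01 according to the location preferences above (and the MailTo resource should send you a notification on each move).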

I hope this will be useful to all the admins out there that will have to deal with Ubuntu and Corosync 2.x/Pacemaker clusters :)