How to run a Flask application in Docker

Flask is a nice web application framework for Python.

My example app.py looks like:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
  return 'Hello, World!'

According to the Flask documentation, to run the application we need to run FLASK_APP=app.py flask run. So our Dockerfile will run this command, and we’ll pass an environment variable with the application name when we start the container:

FROM python:3-onbuild
EXPOSE 5000
CMD [ "python", "-m", "flask", "run", "--host=0.0.0.0" ]

The --host=0.0.0.0 parameter is necessary so that we can connect to Flask from outside the Docker container.

Using the -onbuild variant of the Python image is handy because at build time it copies a file named requirements.txt into the image and installs the Python modules listed in it. Go ahead and create this file in the same directory, containing the single line flask.
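In other words, requirements.txt is just:

flask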

Now we can build our image:

docker build -t flaskapp .

This might take a while. When it ends, we’ll be able to run the container, passing the FLASK_APP environment variable:

docker run -it --rm --name flaskapp \
  -v "$PWD":/usr/src/app -w /usr/src/app \
  -e LANG=C.UTF-8 -e FLASK_APP=app.py \
  -p 5000:5000 flaskapp

As you can see, I’m mounting the local directory $PWD to /usr/src/app in the container and setting the working directory there. I’m also passing the -p 5000:5000 parameter so that TCP port 5000 in the container is reachable through port 5000 on my host machine.

You can test your app with your browser or with curl:

$ curl http://127.0.0.1:5000/
Hello, World!

I hope this will be useful to someone out there, have fun! :)

How to fix Fedora 25 dnf upgrade “certificate expired” failure

After logging in to a Fedora 25 system I hadn’t logged into for a while, I ran dnf clean all ; dnf upgrade to update it, but I ran into this problem:

# dnf -vvvv -d 5 upgrade
cachedir: /var/cache/dnf
Loaded plugins: playground, builddep, config-manager, debuginfo-install, generate_completion_cache, needs-restarting, copr, protected_packages, noroot, download, Query, reposync
DNF version: 1.1.10
Cannot download 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f25&arch=x86_64': Cannot prepare internal mirrorlist: Curl error (60): Peer certificate cannot be authenticated with given CA certificates for https://mirrors.fedoraproject.org/metalink?repo=updates-released-f25&arch=x86_64 [Peer's Certificate has expired.].
Error: Failed to synchronize cache for repo 'updates'

The certificates had expired and dnf refused to work. To update them, I simply installed the ca-certificates, p11-kit, p11-kit-trust, openssl and openssl-libs packages with rpm, straight from the distro updates.

# rpm -Uvh http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/updates/25/x86_64/c/ca-certificates-2017.2.11-1.1.fc25.noarch.rpm \
  http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/updates/25/x86_64/p/p11-kit-0.23.2-3.fc25.x86_64.rpm \
  http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/updates/25/x86_64/p/p11-kit-trust-0.23.2-3.fc25.x86_64.rpm \
  http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/updates/25/x86_64/o/openssl-1.0.2k-1.fc25.x86_64.rpm \
  http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/updates/25/x86_64/o/openssl-libs-1.0.2k-1.fc25.x86_64.rpm
# dnf upgrade
Fedora 25 - x86_64 - Updates  
Fedora 25 - x86_64
Last metadata expiration check: 0:00:19 ago on Mon Apr 17 13:47:09 2017.
[...]

You might have to find the latest version by browsing around your favourite Fedora mirror (you can find the base url in /etc/yum.repos.d/fedora-updates.repo).
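For example, a quick way to see what’s configured (the baseurl is usually commented out in favour of the metalink, but it still shows where to look):

grep -E 'baseurl|metalink' /etc/yum.repos.d/fedora-updates.repo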

PHP Sessions on MS Azure Redis service

I sure hope you’ll never end up facing this, but in case you do indeed have a PHP application on MS Azure, and you want to use MS Azure Redis service for the session backend, you’ll have to set your session.save_path to something like:

session.save_path='tcp://UNIQUENAME.redis.cache.windows.net:6379?auth=MSHATESYOU4FUNandPROFIT=&timeout=1&prefix=OHNOMS'

Easy enough, unless your auth key happens to contain a + symbol. In that case, your PHP session creation will fail with this error:

# php count.php
PHP Fatal error:  Uncaught exception 'RedisException' with message 'Failed to AUTH connection' in count.php:3
Stack trace:
#0 count.php(3): session_start()
#1 {main}
  thrown in count.php on line 3
PHP Fatal error:  Uncaught exception 'RedisException' with message 'Failed to AUTH connection' in [no active file]:0
Stack trace:
#0 {main}
  thrown in [no active file] on line 0

From redis-cli the authentication was working fine, so it took us a while to debug. It turned out to be a problem with the + symbol, and the quickest solution in our case was to regenerate the auth key so it didn’t contain a +, but I suspect (though I didn’t test it) that URL-encoding the + as %2B might work as well.
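If you want to try the encoding route, here’s a quick sketch (with a made-up key, untested against Azure) of what PHP’s urlencode() would give you:

$ php -r 'echo urlencode("abc+def=="), PHP_EOL;'
abc%2Bdef%3D%3D

The resulting string would then go into the auth parameter of session.save_path.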

How to quickly install and configure Splunk Enterprise

As you may have noticed, I’m not a huge fan of proprietary, closed-source software. And of course I ended up having to install Splunk for a client. So here are a few notes on what I did to get it working.

I started by following a guide I found online, with a few additions of my own here and there.

Install the Splunk Server

First thing, you need to download the server. You have to register for it (proprietary software).

I got the 64-bit RPM for my CentOS 7 server and installed it with:

yum install splunk-*-linux-2.6-x86_64.rpm
/opt/splunk/bin/splunk --answer-yes --no-prompt --accept-license enable boot-start
/opt/splunk/bin/splunk --answer-yes --no-prompt --accept-license start

This will automatically accept the license and set up the Splunk Server to start at boot time.

If everything worked correctly, you should be able to connect to your Splunk Server on:

url: http://your-server-name-or-ip:8000
user: admin
pass: changeme

If it doesn’t work, check if you have a firewall on your server machine and open port tcp/8000 if needed.
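For example, if your server runs firewalld (the CentOS 7 default), something like this should do it:

firewall-cmd --permanent --add-port=8000/tcp
firewall-cmd --reload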

For more information on this step, I’ll refer you to the Fine Manual (the official Splunk documentation).

Configure the Splunk Server

The logical next step is to configure the Splunk Server to listen for incoming logs.

Assuming you didn’t change (yet) your Splunk Server user and password, you’ll need to run:

/opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme
/opt/splunk/bin/splunk enable deploy-server -auth admin:changeme

For more information on this step, check the official Splunk documentation.

Install the Splunk Universal Forwarder on clients

Now that the server side is configured, we need to setup a client to send some logs to it. Again, head off to the download page and grab the package you need.

For large-scale deployments you might want to read about how to use user-seed.conf, so you can pre-seed your installation user and password. For this quick tutorial we’ll skip that (though there’s a small sketch of the file a bit further down) and run these commands directly:

yum -y install splunkforwarder-*-linux-2.6-x86_64.rpm
/opt/splunkforwarder/bin/splunk --answer-yes --no-prompt --accept-license enable boot-start
/opt/splunkforwarder/bin/splunk --answer-yes --no-prompt --accept-license start

Again, this will automatically accept the license and enable the forwarder at boot time.
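For reference, user-seed.conf lives in /opt/splunkforwarder/etc/system/local/; a minimal sketch of it (untested on my side, and note that newer Splunk releases expect a hashed password, so check the documentation for your version) looks something like:

[user_info]
USERNAME = admin
PASSWORD = yourpassword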

For more information about this step, see the official Splunk documentation.

Configure the Universal Forwarder

Once the forwarder is installed, you’ll need to configure it to talk to your server.

Please note that the user and password I’m using here are those of the local forwarder, not of the Splunk Server.

/opt/splunkforwarder/bin/splunk add forward-server splunk-server:9997 -auth admin:changeme
/opt/splunkforwarder/bin/splunk set deploy-poll splunk-server:8089 -auth admin:changeme
/opt/splunkforwarder/bin/splunk enable deploy-client -auth admin:changeme
/opt/splunkforwarder/bin/splunk add monitor /var/log/nginx/error.log
/opt/splunkforwarder/bin/splunk restart

In my case I added /var/log/nginx/error.log to the files that will be monitored and sent to the server.

For more information about this step, check out the official Splunk documentation.

Accessing your logs on the Splunk Server

At this point you should be able to log in to your Splunk Server web interface, head to the “Search & Reporting” app, and search for your data. For example, I used a simple query:

source="/var/log/web/nginx/error.log"

to make sure the data from my log files was ending up in Splunk.

Workaround for NFS share not mounted at boot

I had a couple of servers unable to mount an NFS share at boot time. My /etc/fstab was something like:

[... usual stuff ...]
nfs.domain.tld:/nfs /nfs  nfs4  _netdev,auto,rw,noexec,nodev,timeo=5,retry=5,retrans=5,rsize=32768,wsize=32768,proto=tcp,hard,intr  1 2

If I tried to mount it after boot, it would work without any problem.

After checking the basic stuff (services, network access, etc.), I went to see if The Internet[tm] knew any better, and one suggestion was spot-on: for some reason I couldn’t pinpoint, even though the mount definition had the _netdev option, the mount seemed to fail to resolve the server name during boot.

For the moment I went for a quick workaround. There were two main options: either add the NFS server hostname to /etc/hosts, or switch to the IP address in /etc/fstab. I went for the latter because it’s simpler (less stuff can break), at least until I can find out why name resolution fails during boot.
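With the second option, the fstab line above simply becomes something like this (with a placeholder address standing in for my real NFS server IP):

192.0.2.10:/nfs /nfs  nfs4  _netdev,auto,rw,noexec,nodev,timeo=5,retry=5,retrans=5,rsize=32768,wsize=32768,proto=tcp,hard,intr  1 2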

How to exclude everything except a specific pattern with rsync

Just a quick tip (and reminder for me): if you want to rsync only a specific file, or pattern, and exclude everything else, the syntax is:

rsync -a --include="*/" --include="*your-pattern*" --exclude="*" /source/path/ /destination/path/

In my specific case I wanted to copy only gzipped files, and my command line was:

rsync -avP --include="*/" --include="*.gz" --exclude="*" /source/path/ /destination/path/

The first --include directive allows rsync to descend into subdirectories, while the second provides the actual filename or pattern we want to copy.

How to cleanup and shrink disk space usage of a Windows KVM virtual machine

We still need Windows VMs (sadly, for a few tools we’re trying to get rid of), and my VM grew so much that the image was up to 60GB. With my laptop only having a 256GB SSD, it was getting pretty crowded. So I set out to clean up the Windows image and shrink it as much as possible, and I managed to get it down to 13GB.

Since I’m not very familiar with Windows, I leveraged the knowledge of the Internet and started cleaning the system with the usual tips: I ran CCleaner, removed old files, and uninstalled unused software. Then I moved on to the less obvious ways to free space. I opened an administrator console and removed the shadow copies:

vssadmin delete shadows /for=c: /all

and I consolidated the Service Pack on disk, to get rid of a lot of backups from C:\windows\winsxs\:

dism /online /cleanup-image /spsuperseded

There are a few more things you can do to save space in that directory, especially if you run Windows 8.1, Server 2012 or newer; it’s worth checking the Microsoft TechNet documentation on cleaning up the component store.
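On those newer versions the component store cleanup is something along these lines (not applicable to my Windows 7 VM, so untested on my side):

dism /online /cleanup-image /StartComponentCleanup /ResetBase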

Once I had cleaned up as much space as possible, I ran the Windows Defrag utility to cluster up the remaining data, and then went on to fill the rest of the disk with zeroes. Think of it like doing dd if=/dev/zero of=/zero.img: you’re creating a file containing only zeroes, so those clusters will look empty to the shrinking step later.

On Windows, the recommended tool to zero-fill your disk seems to be SDelete. I ran it as administrator in a cmd console:

sdelete -z c:

This took a long time. Hours. Best thing would probably have been to run it overnight: learn from my mistakes! :)

Note: if you have a thin disk (for example a qcow2 image), filling it up with zeroes will actually consume space on the host, up to the maximum size of the virtual disk. In my case, the image grew from a bit more than 60G to 200G. A necessary, and temporary, sacrifice.

ls -l /var/lib/libvirt/images/
[...]
-rw-r--r-- 1 root root 200G Dec 31 16:34 win7_orig.img

After SDelete finished running (and syncing to disk), I shut down the VM and prepared for the next step: shrinking the actual disk image. Thankfully, qemu-img allows you to convert to the same format, and in doing so it will discard any cluster containing only zeroes (remember? we just filled the free space with zeroes, so those clusters are now “empty”).

In my case, I ran two processes in parallel, because I wanted to see how much of a difference it would make to have a compressed image versus a non-compressed image, as suggested by this Proxmox wiki page:

cd /var/lib/libvirt/images/
qemu-img convert -O qcow2 win7_orig.img win7_nocomp.img &
qemu-img convert -O qcow2 -c win7_orig.img win7_compress.img &
watch ls -l

This process didn’t take too long, less than one hour, and the result was pretty interesting:

ls -l /var/lib/libvirt/images/
[...]
-rw-r--r-- 1 root root  13G Jan  1 18:13 win7_compress.img
-rw-r--r-- 1 root root  31G Dec 31 19:09 win7_nocomp.img
-rw-r--r-- 1 root root 200G Dec 31 16:34 win7_orig.img

The compressed image is less than half the non-compressed one, but you’ll use a bit more CPU when using it. In my case this is completely acceptable, because saving disk space is more important.