Upgrade VMware ESX from the command line

VMware has some of the worst documentation in the entire industry, so I’m saving these notes here for fellow admins who need to deal with this.

First, you have to identify your current image profile:

# ssh YOUR_ESX_SERVER
~ # esxcli software profile get
(Updated) HP-ESXi-5.1.0-standard-iso
   Name: HP-ESXi-5.1.0-standard-iso
   Vendor: YOUR_VENDOR
   Creation Time: 2017-11-07T15:24:51
   Modification Time: 2017-11-07T15:25:06
   Stateless Ready: False

With that info you can go to the VMware website (or your vendor’s website) and download the new release. In my case this is an HP server, so I downloaded VMware-ESXi-5.5.0-Update3-3116895-HP-550.9.4.26-Nov2015-depot.zip from the download page.

Then I copied the depot file to the ESX server:

# scp VMware-ESXi-*-depot.zip YOUR_ESX_SERVER:/vmfs/volumes/YOUR_VOLUME_NAME/
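
Before going further it’s worth making sure the target datastore has enough free space and that the upload landed intact; a quick check from the ESXi shell, using the same placeholder volume name:

~ # df -h /vmfs/volumes/YOUR_VOLUME_NAME/                          # enough free space left?
~ # ls -lh /vmfs/volumes/YOUR_VOLUME_NAME/VMware-ESXi-*-depot.zip  # depot file is there and the size looks right?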

At this point I shut down all VMs on the server and put the host in maintenance mode, then logged back in over SSH, found out the new profile name, and ran the upgrade; a sketch of the shutdown/maintenance-mode step comes first.
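
That part can also be done from the ESXi shell; a rough sketch (the VM ID 42 is just a placeholder reported by vim-cmd vmsvc/getallvms, and power.shutdown relies on VMware Tools running in the guest, with power.off as the blunt fallback):

~ # vim-cmd vmsvc/getallvms                           # list VM IDs on this host
~ # vim-cmd vmsvc/power.shutdown 42                   # graceful guest shutdown, repeat per VM
~ # esxcli system maintenanceMode set --enable true   # enter maintenance mode
~ # esxcli system maintenanceMode get                 # should now report Enabled

With the host quiesced, the profile listing and the upgrade itself: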

# ssh YOUR_ESX_SERVER
~ # esxcli software sources profile list -d /vmfs/volumes/YOUR_VOLUME_NAME/VMware-ESXi-5.5.0-Update3-3116895-HP-550.9.4.26-Nov2015-depot.zip
Name                              Vendor           Acceptance Level
--------------------------------  ---------------  ----------------
HP-ESXi-5.5.0-Update3-550.9.4.26  Hewlett-Packard  PartnerSupported

~ # esxcli software profile update -d /vmfs/volumes/YOUR_VOLUME_NAME/VMware-ESXi-5.5.0-Update3-3116895-HP-550.9.4.26-Nov2015-depot.zip -p HP-ESXi-5.5.0-Update3-550.9.4.26
Update Result
  Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
  Reboot Required: true
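
The reboot can be triggered from the same shell, and maintenance mode has to be switched off again once the host is back up; roughly:

~ # esxcli system shutdown reboot --reason "ESXi 5.5 U3 upgrade"
~ # esxcli system maintenanceMode set --enable false    # after the host is back up and you log in again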

I rebooted the system, as required, and then logged back in to check the updates:

~ # esxcli software profile get
(Updated) HP-ESXi-5.1.0-standard-iso
   Name: (Updated) HP-ESXi-5.1.0-standard-iso
   Vendor: YOUR_VENDOR
   Creation Time: 2017-11-07T15:24:51
   Modification Time: 2017-11-07T15:25:06
   Stateless Ready: False
   Description: 

      2017-11-07T15:24:51.436759+00:00: The following VIBs are
      installed:
        net-bnx2x     2.712.50.v55.6-1OEM.550.0.0.1331820
        ata-pata-amd  0.3.10-3vmw.550.0.0.1331820
        sata-sata-sil24       1.1-1vmw.550.0.0.1331820
[...]

Hopefully nobody out there will have to deal with this, but if you do, I hope these notes have you covered.


How to solve OpenVPN errors after upgrading OpenSSL

I went about upgrading OpenVPN and OpenSSL on an old production system, but after restarting the service the clients would not connect. There were two different problems: a “CRL expired” error and, after fixing that, a “CRL signature failed” error.

CRL expired

The OpenVPN server logs were reporting:

Mon Nov  6 10:04:22 2017 TCP connection established with [AF_INET]192.168.100.1:19347
Mon Nov  6 10:04:23 2017 192.168.100.1:19347 TLS: Initial packet from [AF_INET]192.168.100.1:19347, sid=150b3618 b004e9a4
Mon Nov  6 10:04:23 2017 192.168.100.1:19347 VERIFY ERROR: depth=0, error=CRL has expired: C=IT, ST=PR, L=Parma, O=domain, OU=domain.eu, CN=user, name=user, emailAddress=info@stardata.it
Mon Nov  6 10:04:23 2017 192.168.100.1:19347 OpenSSL: error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned
Mon Nov  6 10:04:23 2017 192.168.100.1:19347 TLS_ERROR: BIO read tls_read_plaintext error
Mon Nov  6 10:04:23 2017 192.168.100.1:19347 TLS Error: TLS object -> incoming plaintext read error
Mon Nov  6 10:04:23 2017 192.168.100.1:19347 TLS Error: TLS handshake failed
Mon Nov  6 10:04:23 2017 192.168.100.1:19347 Fatal TLS error (check_tls_errors_co), restarting
Mon Nov  6 10:04:23 2017 192.168.100.1:19347 SIGUSR1[soft,tls-error] received, client-instance restarting

This is a common problem on older systems: the culprit is the OpenSSL configuration used to generate the CRL, which limits its validity to just 30 days by default.

So I had to regenerate the CRL after increasing the default_crl_days parameter in the SSL config to 180 days (more than enough for our use case), using:

$ openssl  ca  -gencrl  -keyfile keys/ca.key  \
               -cert keys/ca.crt  -out keys/crl.pem \
               -config easy-rsa/openssl-1.0.0.cnf
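
For reference, the change itself is a single line in the [ CA_default ] section of easy-rsa/openssl-1.0.0.cnf (the same file passed with -config above):

default_crl_days = 180

You can then confirm the new expiry date of the regenerated CRL with:

$ openssl crl -noout -nextupdate -in keys/crl.pem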

CRL signature failed

Due to the vulnerabilities found in MD5, this hash algorithm has been disabled by default in modern OpenSSL releases. Our certificates, though, were still signed with it, so after fixing the CRL the error message became:

Mon Nov  6 10:14:40 2017 TCP connection established with [AF_INET]192.168.100.1:18463
Mon Nov  6 10:14:41 2017 192.168.100.1:18463 TLS: Initial packet from [AF_INET]192.168.100.1:18463, sid=13fdd1fe 5d82d4d6
Mon Nov  6 10:14:42 2017 192.168.100.1:18463 VERIFY ERROR: depth=0, error=CRL signature failure: C=IT, ST=PR, L=Parma, O=domain, OU=domain.eu, CN=user, name=user, emailAddress=info@stardata.it
Mon Nov  6 10:14:42 2017 192.168.100.1:18463 OpenSSL: error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned
Mon Nov  6 10:14:42 2017 192.168.100.1:18463 TLS_ERROR: BIO read tls_read_plaintext error
Mon Nov  6 10:14:42 2017 192.168.100.1:18463 TLS Error: TLS object -> incoming plaintext read error
Mon Nov  6 10:14:42 2017 192.168.100.1:18463 TLS Error: TLS handshake failed
Mon Nov  6 10:14:42 2017 192.168.100.1:18463 Fatal TLS error (check_tls_errors_co), restarting
Mon Nov  6 10:14:42 2017 192.168.100.1:18463 SIGUSR1[soft,tls-error] received, client-instance restarting
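
You can confirm that the client certificates really are MD5-signed with something like this (keys/user.crt is just a placeholder path):

$ openssl x509 -noout -text -in keys/user.crt | grep 'Signature Algorithm'

On an affected certificate this reports md5WithRSAEncryption.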

This one was trickier to solve. It turns out that you can re-enable MD5 as a workaround using two environment variables: NSS_HASH_ALG_SUPPORT=+MD5 and OPENSSL_ENABLE_MD5_VERIFY=1. In my case I just added them to the OpenVPN init script (sketched below), because the system is going to be decommissioned soon.
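
A minimal sketch of that workaround, assuming a classic SysV init script: near the top of /etc/init.d/openvpn (or in /etc/default/openvpn on Debian-like systems, which the init script sources) add:

export NSS_HASH_ALG_SUPPORT=+MD5
export OPENSSL_ENABLE_MD5_VERIFY=1

This is only a stopgap that re-enables MD5 for this one service; it’s acceptable here just because the system is about to be decommissioned, the proper fix being to re-issue the certificates with a SHA-2 signature.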
