Paranoia, XenServer and libvirt: Full Disk Encryption unlocking from virtual serial TTY


Everybody wants encryption today; encryption of network traffic, mainly.
But what about the encryption of data at rest? I mean, what if someone got physical access to your storage?
Of course you can protect specific folders with your tools of choice, but there’s always a chance that some file gets saved outside your security fence by a zealous program, or is simply forgotten by you on the desktop… here comes Full Disk Encryption (FDE for short) to the rescue!

Maybe you are already using it without being aware of it… if BitLocker on Windows or FileVault on Mac OS X sounds familiar, you are on track.

The penguin within

But, what about our beloved Linux machines?
In the Linux world, FDE is mainly achieved through LUKS; this software can encrypt your disks with AES-XTS using both passphrases and key files, the latter usually stored on a USB drive and plugged in on demand.

I won’t tell you how to set up LUKS on your machine: you should refer to your distro’s installation manual instead.

If you plan to use it on a workstation, you’ll be fine manually entering the password or plugging in the USB drive on every boot.

And my VMs?

But, what if you want to use LUKS in a virtual environment, in a hosted VM?

I won’t talk about acrobatic USB passthrough, focusing instead on the passphrase method.

Of course, you can manually type it in your preferred hypervisor’s VM console; but since having a very long and random passphrase is of utmost importance, typing a 40-character string on every server reboot can be extremely tedious and can easily lead to bad habits, like short passphrases, skipping necessary update-reboot cycles, or ditching LUKS altogether.

Meeting the serial (killer?)

The best way to input long keys would be to store them in encrypted keyrings and copy-and-paste them into the VM prompt as needed; but the graphical consoles, AFAIK, don’t support that kind of input, and of course we cannot use SSH & co., because no service is running in the VM before boot.

There’s another way to access the VM at “physical” level, at least with hypervisors like KVM and XEN: the glorious and vintage serial console!

Maybe you have already heard of it in the context of router configuration; in short, it was the main way of talking to an operating system out-of-band before the video output we use today.

Any Linux distribution provides this capability; to enable it in a CentOS 7 guest, you just need to modify your /etc/default/grub to make it look like this:


GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0,115200"
GRUB_CMDLINE_LINUX="rd.lvm.lv=DONT_CHANGE_IT!/root rd.luks.uuid=DONT_CHANGE_IT! rd.lvm.lv=DONT_CHANGE_IT!/swap rhgb"
GRUB_DISABLE_RECOVERY="true"

and regenerate your GRUB configuration with grub2-mkconfig -o /boot/grub2/grub.cfg (despite what some guides say, this rebuilds grub.cfg, not the initrd).
Take a snapshot before making any modification to your GRUB configuration; you have been warned!
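Spelled out as commands, the regeneration step looks like this; a sketch assuming a BIOS install with grub.cfg at /boot/grub2/grub.cfg, as in the config above (on UEFI the path differs):

```
# keep a copy of the current config (you snapshotted the VM anyway, right?)
cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.bak

# regenerate grub.cfg from /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

# quick sanity check: the serial console arguments must appear in the result
grep 'console=ttyS0' /boot/grub2/grub.cfg
```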

Now reboot your system and everything should work as before, except for…

Practical unlocking

Yes, there is now an active serial interface that can be used to access your VM from the hypervisor, in a completely text-based way.
Really, you can stop typing that super-long random string one-character-at-time right now!

The thing is, how to access it from the virtualization host?

  • If you are using the libvirt/virsh framework, you are lucky: just type virsh console VMname and you are set! The exit escape is CTRL-].
  • If you are on XenServer with XAPI, use the xl console VMname command (same escape). Forget the xe console command, it won’t help you if you are targeting an HVM guest; and it is very likely that you are on HVM, you would know if you were still in PV mode. The xl solution is undocumented in the XenServer docs, and it’s the actual reason why I wrote this post. 

Oh, and xl is also the new default toolstack for Xen management!
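Putting it together on the XenServer host, a quick session might look like this (the VM name is a placeholder):

```
# list running domains to find your guest's name
xl list

# attach to its serial console; type the LUKS passphrase (or paste it
# through your terminal emulator), then detach with CTRL-]
xl console VMname
```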

Now you should be set and smooth, go and enjoy your very-long and very-random LUKS passphrases!

XenServer 7 software RAID with mail alert

Read this article in Italian too.

XenServer 7 is emerging as the reference implementation of the open-source Xen hypervisor, providing advanced features for storage, networking, HA and management, all packaged in a self-contained, enterprise-level bundle, often superior to VMware vSphere, Hyper-V and KVM, the other big players of the enterprise virtualization market.

It should also be noted that Xen is the hypervisor of choice for the majority of public clouds (AWS among others), is completely FLOSS software, and its XenServer declination is managed directly by the Linux Foundation (no Citrix anymore), making it the de-facto standard solution for bare-metal open-source virtualization.

Powerful batteries included!

The software stack that surrounds the Xen kernel has made a giant leap thanks to the adoption of a modified CentOS 7 as the dom0 in XenServer 7.
The standard installation of XS already includes every piece of software needed to completely skip a hardware RAID solution and adopt an enterprise-ready software RAID one! As of today, MDADM is one of the few RAID solutions that can manage NVMe RAID and other niche configurations; thanks to modern CPU power, MDADM (especially with very big arrays) plays on the same ground as some ASIC-based controllers.

Furthermore, software RAID enables the creation of arrays using non-branded disks, a strategic feature of XS7 compared to products like VMware, which at most permits the creation of a RAIN using paid add-ons like vSAN. If you have ever compared the price of branded HDDs or SSDs (HP, IBM, Dell, etc.) with COTS ones, you know what I’m talking about.

Dive into MDADM

This is the to-do list you have to follow in order to build a software RAID that will warn you in the event of a malfunction:

  • Create an MDADM array;
  • Configure SSMTP to send email warnings;
  • Test that email alerts are working;
  • Add the array as an SR to XS7.

MDADM is a very battle-tested piece of software; included in the Linux kernel for many years, it has earned in the field its reputation as very stable and bug-free software; indeed it is used in big critical systems (like the x86 “big irons”) and in the majority of commercial NAS systems (Synology, QNAP, Buffalo, etc.).

Because it is already included in the standard installation of XS7, the creation of a RAID array is very simple, especially in the typical scenario in which you have installed the hypervisor on a USB drive that won’t be part of the array… as recommended by the best practices! I have written about that here (Warning, italian inside™! Let me know if you want to read it in english).

For instance, creating a RAID 10 with the first four HDDs is as simple as

 mdadm --create /dev/md0 --run --level=10 --raid-devices=4 /dev/sd[abcd] 

To monitor the disk syncing, just do

 watch cat /proc/mdstat 

or the evergreen

 mdadm --detail /dev/md0

which will also give you plenty of information about the status of the array.
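If you want to script a periodic check on top of these commands, the status field in /proc/mdstat is easy to parse: a failed member shows up as an underscore (e.g. [UU_U]), while a healthy array shows all U’s. A minimal sketch (the check_md helper is mine, not part of mdadm):

```shell
# check_md: classify an mdstat status string as healthy or degraded.
# A degraded array carries an underscore inside its [UUUU]-style field.
check_md() {
    case "$1" in
        *'['*_*']'*) echo degraded ;;
        *)           echo healthy ;;
    esac
}

# in production, feed it the real file: check_md "$(cat /proc/mdstat)"
check_md '[4/4] [UUUU]'   # → healthy
check_md '[4/3] [UU_U]'   # → degraded
```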

Would you trust a system that does not warn you in the event of failure?

We are going to set up array monitoring; to begin, copy the active array configuration into the MDADM config file with

 mdadm --verbose --detail --scan >> /etc/mdadm.conf 

This will enable array monitoring by the MDADM daemon.
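On a CentOS 7-based dom0 the monitoring is usually done by the mdmonitor systemd unit (an assumption worth double-checking on your XS7 build); a quick sketch to make sure it is active:

```
# enable and start the monitor daemon; it reads its mail settings
# from /etc/mdadm.conf
systemctl enable mdmonitor
systemctl start mdmonitor

# alternatively, run the monitor by hand in daemon mode,
# polling the arrays every 10 minutes
# mdadm --monitor --scan --daemonise --delay=600
```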
Now, configure SSMTP to send email warnings… just as an example, this /etc/ssmtp/ssmtp.conf configuration can be used to send notifications from a Gmail address:

 
root=your_mail@gmail.com
mailhub=smtp.gmail.com:587
Hostname=your_hostname.yourdomain.sth
UseTLS=YES
TLS_CA_File=/etc/pki/tls/certs/ca-bundle.crt
UseSTARTTLS=YES
AuthUser=your_mail@gmail.com
AuthPass=your_password
FromLineOverride=YES

It’s important to set /etc/ssmtp/revaliases like this:

 root:your_mail@gmail.com:smtp.gmail.com:587 
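Before wiring MDADM to it, it’s worth checking that SSMTP can deliver mail on its own. ssmtp reads an RFC 822-style message (headers, a blank line, then the body) on stdin; a minimal sketch, with the recipient address as a placeholder:

```shell
# placeholder recipient; replace with a mailbox you control
to="destination_mail@provider.sth"

# compose a headers + blank line + body message, the format ssmtp expects
msg=$(printf 'To: %s\nSubject: ssmtp test\n\nHello from dom0.\n' "$to")

# show it here; to actually send, pipe it into ssmtp:
#   printf '%s\n' "$msg" | ssmtp "$to"
printf '%s\n' "$msg"
```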

Finally, the MAILADDR and MAILFROM parameters in the last rows of /etc/mdadm.conf need to be set to your receiver and sender addresses, so mdadm.conf will look like this:

 
ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=your_hostname:0 UUID=UUID_ARRAY
    devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd
MAILADDR destination_mail@provider.sth
MAILFROM your_mail@gmail.com

Don’t leave your chair without some testing!

Now that the configuration is complete, you need to do some checks; MDADM can send a test email with

 mdadm --monitor --scan --test --oneshot 

But, if you really want, you can put a disk in the “failed” state (be aware of rebuilding times with big HDDs) using

 mdadm --manage /dev/md0 --set-faulty /dev/sdb 

and then re-add it to the array with

 mdadm --manage /dev/md0 --add /dev/sdb 

You will have to wait a little for the failure to be detected, up to ten minutes, because the daemon is deliberately “lazy” in recognizing a fail, even a simulated one.

Now that the array is tested and ready to be used, just add it to the XS7 SR:

 xe sr-create content-type=user device-config:device=/dev/md0  name-label="Local RAID10" shared=false type=lvm 

In XS7, the array is recognized even after a reboot, without manual module loading or other configuration; everything is just ready to go!