A hypervisor is software, firmware or hardware that creates and runs virtual machines. When Ubuntu 16.04 was launched, one of its most interesting features was a bundled piece of software called LXD, which can run hundreds of unmodified Linux operating systems on a single server with very little overhead. LXD is somewhat similar to Docker but differs from it in a few respects.
The base technology used in both Docker and LXD is LXC, or Linux Containers, which has been around for a few years. LXC didn't gain much traction on its own because of the complexity involved in setting up containers with it. Both LXD and Docker provide a high-level, user-friendly interface to the low-level commands offered by LXC.
Differences between LXD, Docker and KVM
The primary difference between LXD and Docker is that the latter is meant for application-specific containers. Consider a typical use case of a Web application: you'll need a database server and a Web server, so you will have two Docker containers, one running the database and the other running the Web server. The containers do not perform any other function.
In LXD, it is possible to run a complete OS as a container. Traditionally, this requires KVM or Xen with full hardware virtualisation (emulated disks, network cards and CPUs). Because of the number of layers involved in full HVM, its performance is not as good as LXC's, but its security is better: the guest virtual machine is fully isolated from the host machine, which reduces the attack surface. Ubuntu officially supports running RHEL, CentOS, Ubuntu, Debian and Oracle Linux containers in LXD.
Setting up LXD
LXD is best used with the Z File System (ZFS), which has become relatively stable, and Ubuntu 16.04 supports ZFS officially. ZFS is not supported as a root file system in an Ubuntu installation (the installer does not even offer it as an option), but you can always use it as a data storage file system.
The advantage of ZFS is that you get fast CoW (copy-on-write) snapshots and clones, which are very useful for containers. Let's assume you have created a container and installed the LAMP stack in it. You can now snapshot it and create as many LAMP containers as you like in no time, simply by cloning it. The best part is that a clone consumes disk space only for the data that differs from the original. If you install WordPress in one of the containers and, let's say, WordPress takes up 10MB, then that particular container will consume only about 10MB, unlike a full HVM, for which you have to allocate a complete 10-20GB disk image file.
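The snapshot-and-clone workflow described above can be sketched with plain ZFS commands. This is only an illustration; the dataset names (zdata/lamp-base, zdata/lamp-site1) are hypothetical and not part of the set-up built later in this article:

```shell
# Hypothetical dataset holding a prepared LAMP container's file system
zfs snapshot zdata/lamp-base@ready            # instant CoW snapshot
zfs clone zdata/lamp-base@ready zdata/lamp-site1   # clone shares blocks with the original
zfs list -o name,used,refer -r zdata          # a fresh clone's USED is near zero
```

The clone initially references the same on-disk blocks as the snapshot, so its USED column grows only as its contents diverge, which is exactly why dozens of cloned containers are cheap.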
Creating the ZFS pool
Assuming you have a fresh installation of Ubuntu, with a spare disk or a spare partition on the existing disk, install the required ZFS utilities as shown below:
apt install zfsutils-linux
In my set-up, I have /dev/sda as the OS drive and /dev/sdb as another drive, on which I'll be creating the ZFS pool. ZFS is sensitive to the disk names provided when creating the pool, so to make sure it picks up the proper disk, we need to look in /dev/disk/by-id to find the appropriate disk ID.
# ls -lh /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root  9 Dec 4 10:55 scsi-14d534654202020207305e3437703544694957d7ced624a7d -> ../../sr0
lrwxrwxrwx 1 root root  9 Dec 4 10:55 scsi-3600224804e58652893517c552e9d9cc3 -> ../../sdb
lrwxrwxrwx 1 root root  9 Dec 4 10:55 scsi-360022480cf72d7b58630bab61658ad98 -> ../../sda
lrwxrwxrwx 1 root root 10 Dec 4 10:55 scsi-360022480cf72d7b58630bab61658ad98-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Dec 4 10:55 scsi-360022480cf72d7b58630bab61658ad98-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Dec 4 10:55 scsi-360022480cf72d7b58630bab61658ad98-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Dec 4 10:55 wwn-0x600224804e58652893517c552e9d9cc3 -> ../../sdb
lrwxrwxrwx 1 root root  9 Dec 4 10:55 wwn-0x60022480cf72d7b58630bab61658ad98 -> ../../sda
lrwxrwxrwx 1 root root 10 Dec 4 10:55 wwn-0x60022480cf72d7b58630bab61658ad98-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Dec 4 10:55 wwn-0x60022480cf72d7b58630bab61658ad98-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Dec 4 10:55 wwn-0x60022480cf72d7b58630bab61658ad98-part3 -> ../../sda3
As seen in the output above, the disk ID for sdb is scsi-3600224804e58652893517c552e9d9cc3. If you use partitions on an existing disk, take their IDs from /dev/disk/by-id as well. If you use plain names like /dev/sda or /dev/sdb, you may not be able to import the ZFS pool after a reboot if the order of the disks changes for some reason, since their names would change too.
# zpool create zdata /dev/disk/by-id/scsi-3600224804e58652893517c552e9d9cc3
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-id/scsi-3600224804e58652893517c552e9d9cc3 does not contain an EFI label but it may contain partition information
And there, I got an error while creating the pool. Since I know that the disk doesn't contain anything, I'll simply force the operation using -f and create my pool.
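The forced creation, using the same disk ID as above, would look like this (a sketch; the command produces no output on success):

```shell
# -f overrides the "does not contain an EFI label" warning;
# only safe because we know the disk holds no data we care about
zpool create -f zdata /dev/disk/by-id/scsi-3600224804e58652893517c552e9d9cc3
```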
# zpool status
  pool: zdata
 state: ONLINE
  scan: none requested
config:

	NAME                                      STATE     READ WRITE CKSUM
	zdata                                     ONLINE       0     0     0
	  scsi-3600224804e58652893517c552e9d9cc3  ONLINE       0     0     0

errors: No known data errors
This is a very basic ZFS pool with just one disk. ZFS natively supports mirroring, striping and RAID-Z (its equivalent of RAID 5), so it eliminates the need for mdadm or LVM. Documentation for such advanced configurations is easily available online; in particular, FreeBSD's ZFS guide is very easy to understand, and the information applies to Ubuntu despite the two systems being quite different.
Now we can initialise LXD by running the lxd init command.
It will ask you a few questions and even offer to create a ZFS pool, but I have not used that option since I always create my ZFS pools manually, which gives me flexibility in how they are set up.
# lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: zdata
Would you like LXD to be available over the network (yes/no) [default=no]?
Do you want to configure the LXD bridge (yes/no) [default=yes]?
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket
LXD has been successfully configured.
By default, lxdbr0, the interface created by this set-up script, will NAT the containers' traffic. In case you don't want NAT, you can either select that option during init, or run dpkg-reconfigure lxd afterwards to go through the configuration again.
Creating your first container
This is pretty simple. First, use the following command:
lxc launch ubuntu: first-container
In the above command, launch is the subcommand that creates and starts a container; ubuntu: identifies the image used to create the container named first-container. Image names follow the syntax <repository>:<image name>, so ubuntu in that command is the name of the image repository; when the image name is omitted, the repository's default image is used. Note that the name of the container must be a valid host name.
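If you want to see what a repository offers before launching, you can list its images (output elided here; the listing is long):

```shell
lxc image list ubuntu:     # images available from the official ubuntu: repository
lxc image list images:     # the community repository, which carries other distributions
```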
# lxc launch ubuntu: first-container
Creating first-container
Starting first-container

To get into your container, run the following:

# lxc exec first-container /bin/bash
root@first-container:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3e:b1:5d:bb
          inet addr:10.134.32.8  Bcast:10.134.32.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:feb1:5dbb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2055 (2.0 KB)  TX bytes:1837 (1.8 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
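The snapshot-and-clone advantage discussed earlier is exposed by LXD at the container level too. A sketch, with hypothetical snapshot and container names:

```shell
lxc snapshot first-container snap0               # CoW snapshot of the running container
lxc copy first-container/snap0 second-container  # clone a new container from the snapshot
lxc start second-container
lxc list                                         # both containers, each with its own IP
```

On a ZFS backend, the copy is a ZFS clone underneath, so the second container is created almost instantly and consumes space only as it diverges from the first.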
LXD has many other features, not all of which can be covered in one article. If you'd like to explore LXD in more depth, a Web search and LXD's official GitHub repository are good starting points.