So you followed the quick start guide for setting up a MAAS development environment… now what? You might be asking yourself, “How do I test my code on a full install of MAAS?”
This topic explains how to set up a MAAS development environment for maximum flexibility.
Prerequisites
Install a hypervisor that MAAS can control, such as libvirt. (MAAS can also support some VMware setups, but they aren’t as widely used or tested as libvirt.)
sudo apt install libvirt-bin qemu-kvm cpu-checker
sudo snap install lxd
MAAS can manage KVM pods, which require KVM acceleration. To verify that KVM acceleration is available on your system, run kvm-ok. The output should look something like the following:
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
More information
MAAS stands for “Metal as a Service”. MAAS was designed to work with physical hardware; most commonly, IPMI servers. Since you’re not likely to have IPMI servers available to a MAAS running on your development machine, you’re going to need the capability to set up virtual machines using a hypervisor that MAAS is able to control.
The LXD snap can be used to easily bring up and tear down containers that can be used for testing a variety of scenarios with MAAS, such as development Ubuntu releases or previous LTS releases.
Configuring LXD
LXD must be configured before its first use. Your specific configuration may vary (especially based on how much disk space you want to allocate to LXD). The section below shows an example of a recommended way to configure it, based on this guide:
LXD Configuration Example
$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=100GB]: 200
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: virbr0
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
For development purposes, I recommend using either dir or btrfs as a storage backend. The dir backend is the simplest approach, since it does not require allocating a separate storage device (or loopback device) for LXD. The btrfs backend is nice since it supports a wide set of features when used with LXD.
I recommend using the virbr0 bridge (created when libvirt-bin was installed) as the default bridge.
Optional LXD Configuration
It may also be useful to pre-download the Ubuntu images you’ll be using, so that they’re ready when you launch containers. For example:
lxc image copy ubuntu:trusty local: --copy-aliases
lxc image copy ubuntu:xenial local: --copy-aliases
lxc image copy ubuntu:bionic local: --copy-aliases
lxc image copy ubuntu-daily:cosmic local: --copy-aliases
(Note that doing this means you may need to manually update your local images, such as by running lxc image refresh <local-image-name> in a cron job.)
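As a sketch, a crontab entry for this might look like the following (the bionic alias and the schedule are just example values; substitute whichever local image aliases you copied):

```shell
# Hypothetical crontab entry: refresh the locally cached bionic image
# every night at 03:00 so it tracks the upstream image.
0 3 * * * lxc image refresh bionic
```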
I also find it useful to set a predictable schedule for when the LXD snap updates. (Due to a known issue, snap updates can cause open lxd exec sessions to terminate.) You can do so as follows:
sudo snap set lxd refresh.timer=fri,23:00-01:00
This causes updates to the LXD snap to be scheduled for every Friday night, between 11pm and 1am.
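You can read the setting back to confirm it took effect (assuming the LXD snap is installed):

```shell
# Print the refresh window configured for the lxd snap
sudo snap get lxd refresh.timer
```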
Defining a Test Network for MAAS Management
Using libvirt to manage your virtual bridges is an easy way to make sure your test networks can be used seamlessly across many different tools. You can use the virtual bridges libvirt creates for creating test VMs, attaching LXD containers, or configuring KVM pods in MAAS.
When you install the libvirt-bin package, a virbr0 network (called default) is created, complete with NAT and managed DHCP. This network is useful if you want to boot VMs or containers for testing (either independent of MAAS, or themselves running MAAS), so we’ll leave it alone.
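If you want to see what libvirt set up, you can inspect the default network’s definition (this assumes your user can run virsh commands against the local hypervisor):

```shell
# Show basic status and the full XML definition of the default NAT network
virsh net-info default
virsh net-dumpxml default
```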
You’ll need to create at least one “MAAS-compatible” network; that is, one that MAAS can fully manage DHCP services on. You can do so as follows, assuming your user has access to run libvirt commands; you may need to add yourself to the appropriate group (usually libvirt or libvirtd), or run sudo -i first:
cat << EOF > maas.xml
<network>
<name>maas</name>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<dns enable='no'/>
<bridge name='virbr1' stp='off' delay='0'/>
<domain name='testnet'/>
<ip address='172.16.99.1' netmask='255.255.255.0'>
</ip>
</network>
EOF
virsh net-define maas.xml
rm maas.xml
virsh net-start maas
virsh net-autostart maas
Defining a network in this way (that is, with DHCP disabled, and with the name maas) gives MAAS the opportunity to control DHCP in the future, and ensures that if a KVM pod is set up, this network will be used as the default network to attach VMs for network (PXE) booting.
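At this point you can sanity-check that the network was created, started, and marked to autostart:

```shell
# Both the default and maas networks should appear as active,
# and maas should show "yes" in the Autostart column.
virsh net-list --all
```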
More information
The supported way to manage machines in MAAS is to allow MAAS full control over DHCP services on one or more networks. This allows MAAS to determine which machine is booting over the network, and provide the correct configuration (depending on the machine’s lifecycle). For example, if MAAS notices that a machine performs a network boot, but the machine has not been seen before, that machine will be enlisted into MAAS. (Similarly, if the user chooses to perform an action on a particular machine, MAAS will power it up and take the appropriate action when the machine boots.)
Defining a Default Storage Pool for a KVM Pod
In order to use a hypervisor as a KVM pod, you must define a default storage pool. On any Ubuntu host (or container) running libvirt, you can easily do that as follows:
virsh pool-define-as default dir - - - - "/var/lib/libvirt/images"
virsh pool-autostart default
virsh pool-start default
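You can verify the pool is active and will come back after a reboot:

```shell
# The default pool should be listed as active with autostart enabled
virsh pool-list --all
virsh pool-info default
```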
Creating a LXD Container that can itself be a KVM Pod
When testing MAAS 2.5+, it’s helpful to have a MAAS controller running on the same system that is to be used as a KVM pod. (If MAAS can correlate a KVM pod to the host it’s running on, it can be more flexible about how networks are attached.)
The following example shows how to create a container and use cloud-init's netplan integration to attach it to the existing virbr0 network and to the virbr1 network created above. It also instructs cloud-init to install libvirt and MAAS, making this setup a ready-made KVM pod!
CONTAINER=bionic-maas-pod
CIDR=172.16.99.2/24
lxc init ubuntu:bionic $CONTAINER -s default --no-profiles
lxc network attach virbr0 $CONTAINER eth0 eth0
lxc network attach virbr1 $CONTAINER eth1 eth1
lxc config set $CONTAINER user.user-data "#cloud-config
package_upgrade: true
apt:
sources:
maas:
source: ppa:maas/next
packages:
- jq
- maas
- libvirt-bin
- qemu-kvm
locale: en_US.UTF-8
timezone: $(timedatectl | grep 'Time zone:' | awk '{print $3}')
runcmd:
- [touch, /tmp/startup-complete]
"
lxc config set $CONTAINER user.network-config "version: 2
ethernets:
eth0:
match:
name: eth0
dhcp4: true
eth1:
match:
name: eth1
bridges:
br0:
interfaces: [eth1]
addresses:
- $CIDR
"
lxc config device add $CONTAINER kvm unix-char path=/dev/kvm
lxc start $CONTAINER
lxc exec $CONTAINER -- /bin/bash -c 'while ! [ -f /tmp/startup-complete ]; do sleep 0.5; done'
lxc exec $CONTAINER bash
kvm-ok
This setup passes the /dev/kvm device through to the container, and allows a new libvirt hypervisor to host KVM pods in MAAS (in isolation from the hypervisor already running on the container’s host).
Testing with a Privileged Container
At one point, due to bug #1784501, it was useful to test MAAS inside a privileged container.
If, for whatever reason, you need to test with a privileged container, simply run the following after you lxc init your container:
lxc config set $CONTAINER security.privileged true
Configuring libvirt inside the container for use with KVM pods
In order for legacy (pre-2.5.0) KVM pods to operate with MAAS, you need to create a maas network within libvirt for MAAS to attach to. (In the absence of any other information, MAAS will prefer to use the maas network, followed by the default network.) Similar to how we configured the container hypervisor’s interfaces, we can do so as follows (attaching to the existing br0 bridge rather than creating a new one):
cat << EOF > maas.xml
<network>
<name>maas</name>
<forward mode='bridge'/>
<bridge name='br0'/>
</network>
EOF
virsh net-define maas.xml
rm maas.xml
virsh net-start maas
virsh net-autostart maas
For MAAS to be able to connect to the KVM pod, you will need to either set up a trusted SSH key for MAAS to access a user in the libvirt group, or set a password.
For testing, you can edit /etc/ssh/sshd_config to change PasswordAuthentication to yes, then run passwd ubuntu followed by sudo service ssh restart. Then when you add the KVM pod in MAAS, use a URL in the form:
qemu+ssh://ubuntu@172.16.99.2/system
… and provide the password you set.
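Before adding the pod in MAAS, you can check the connection URL from the host itself (this assumes the container received the 172.16.99.2 address configured above):

```shell
# Connect to the container's libvirt daemon over SSH and list its domains;
# you'll be prompted for the ubuntu user's password.
virsh -c qemu+ssh://ubuntu@172.16.99.2/system list --all
```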
Note that you will also need to define the default storage pool (as described above) to use the container as a pod.