Setting up a Flexible "Virtual MAAS" Test Environment


#1

So you followed the quick start guide for setting up a MAAS development environment… now what? You might be asking yourself, “how do I test my code on a full install of MAAS?”

This topic explains how you might go about setting up a MAAS development environment for maximum flexibility.

Prerequisites

Install a hypervisor that MAAS can control, such as libvirt. (MAAS can also support some VMware setups, but it isn’t as widely used or tested as libvirt.)

sudo apt install libvirt-bin qemu-kvm cpu-checker
sudo snap install lxd

MAAS has the capability to manage KVM Pods, which require the use of KVM acceleration. To verify that KVM acceleration is available on your system, run kvm-ok. The output should look something like the following:

$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
More information

MAAS stands for “Metal as a Service”. MAAS was designed to work with physical hardware; most commonly, IPMI servers. Since you’re not likely to have IPMI servers available to a MAAS running on your development machine, you’re going to need the capability to set up virtual machines using a hypervisor that MAAS is able to control.

The LXD snap can be used to easily bring up and tear down containers that can be used for testing a variety of scenarios with MAAS, such as development Ubuntu releases or previous LTS releases.

Configuring LXD

LXD must be configured before its first use. Your specific configuration may vary (especially based on how much disk space you want to allocate to LXD). Expand the section below for an example of a recommended way to configure it, based on this guide:

LXD Configuration Example
$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing block device? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=100GB]: 200
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: virbr0
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
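If you prefer a non-interactive setup, roughly the same answers can be expressed as a preseed and piped to lxd init --preseed. This is a sketch based on the choices above; exact keys may vary by LXD version:

```yaml
config: {}
storage_pools:
- name: default
  driver: btrfs
  config:
    size: 200GB
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      nictype: bridged
      parent: virbr0
      type: nic
```

You can save this as preseed.yaml and run cat preseed.yaml | lxd init --preseed to apply it.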

For development purposes, I recommend using either dir or btrfs as a storage backend. The dir backend is the simplest approach, since it does not require allocating a separate storage device (or loopback device) for LXD. The btrfs backend is nice since it supports a wide set of features when used with LXD.

I recommend using the virbr0 bridge (created when libvirt-bin was installed) as the default bridge.

Optional LXD Configuration

It may also be useful to pre-download the Ubuntu images you’ll be using, so that they’re ready to go when you launch containers. For example:

lxc image copy ubuntu:trusty local: --copy-aliases
lxc image copy ubuntu:xenial local: --copy-aliases
lxc image copy ubuntu:bionic local: --copy-aliases
lxc image copy ubuntu-daily:cosmic local: --copy-aliases

(Note that doing this means that you may need to manually update your local images, such as by running lxc image refresh <local-image-name> in a cron job.)
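For example, a weekly crontab entry could keep every locally cached image fresh. This is a sketch; the -c f (fingerprint) column and CSV output format are assumed to match your LXD version:

```
# m h dom mon dow  command
30 4 *   *   1    lxc image list local: -c f --format csv | xargs -r -n1 lxc image refresh
```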

I also find it useful to set a predictable schedule for when the LXD snap updates. (Due to a known issue, snap updates can cause open lxd exec sessions to terminate.) You can do so as follows:

sudo snap set lxd refresh.timer=fri,23:00-01:00

This causes updates to the LXD snap to be scheduled for every Friday night, between 11pm and 1am.

Defining a Test Network for MAAS Management

Using libvirt to manage your virtual bridges is an easy way to make sure your test networks can be used seamlessly across many different tools. You can use the virtual bridges libvirt creates to create test VMs, attach LXD containers, or configure KVM pods in MAAS.

When you install the libvirt-bin package, a virbr0 network (called default) is created, complete with NAT and managed DHCP. This network is useful if you want to boot VMs or containers for testing (either independent of MAAS, or themselves running MAAS), so we’ll leave it alone.

You’ll need to create at least one “MAAS-compatible” network; that is, one that MAAS can fully manage DHCP services on. You can do so as follows, assuming your user has access to run libvirt commands; you may need to add yourself to the appropriate group (usually libvirt or libvirtd), or run sudo -i first:

cat << EOF > maas.xml
<network>
  <name>maas</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <dns enable='no'/>
  <bridge name='virbr1' stp='off' delay='0'/>
  <domain name='testnet'/>
  <ip address='172.16.99.1' netmask='255.255.255.0'>
  </ip>
</network>
EOF
virsh net-define maas.xml
rm maas.xml
virsh net-start maas
virsh net-autostart maas

Defining a network in this way (that is, with DHCP disabled, and with the name maas) gives MAAS the opportunity to control DHCP in the future, and ensures that if a KVM pod is set up, it will be used as the default network to attach VMs for network (PXE) booting.
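A quick way to sanity-check the result (a sketch; assumes virsh and the iproute2 ip command are available on the host):

```shell
# Returns success if the "maas" network is defined and virbr1 holds its gateway IP.
check_maas_net() {
  virsh net-list --all | grep -qw maas &&
  ip -br addr show virbr1 | grep -q '172.16.99.1'
}
# check_maas_net && echo "maas network looks good"
```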

More information

The supported way to manage machines in MAAS is to allow MAAS full control over DHCP services on one or more networks. This allows MAAS to determine which machine is booting over the network, and to provide the correct configuration (depending on the machine’s lifecycle). For example, if MAAS notices that a machine performs a network boot but has not been seen before, that machine will be enlisted into MAAS. (Similarly, if the user chooses to perform an action on a particular machine, MAAS will power it up and take the appropriate action when the machine boots.)

Defining a Default Storage Pool for a KVM Pod

In order to use a hypervisor as a KVM pod, you must define a default storage pool. On any Ubuntu host (or container) running libvirt, you can easily do that as follows:

virsh pool-define-as default dir - - - - "/var/lib/libvirt/images"  
virsh pool-autostart default  
virsh pool-start default

Creating a LXD Container that can itself be a KVM Pod

When testing MAAS 2.5+, it’s helpful to have a MAAS controller running on the same system that is to be used as a KVM pod. (If MAAS can correlate a KVM pod to the host it’s running on, it can be more flexible about how networks are attached.)

The following example shows how to create a container, and use cloud-init's netplan integration to attach to the existing virbr0 network, and the virbr1 network created above. It also instructs cloud-init to install libvirt and MAAS, making this setup a ready-made KVM pod!

CONTAINER=bionic-maas-pod
CIDR=172.16.99.2/24
lxc init ubuntu:bionic $CONTAINER -s default --no-profiles
lxc network attach virbr0 $CONTAINER eth0 eth0
lxc network attach virbr1 $CONTAINER eth1 eth1
lxc config set $CONTAINER user.user-data "#cloud-config
package_upgrade: true
apt:
  sources:
    maas:
      source: ppa:maas/next
packages:
  - jq
  - maas
  - libvirt-bin
  - qemu-kvm
locale: en_US.UTF-8
timezone: $(timedatectl | grep 'Time zone:' | awk '{print $3}')
runcmd:
  - [touch, /tmp/startup-complete]
"
lxc config set $CONTAINER user.network-config "version: 2
ethernets:
  eth0:
    match:
      name: eth0
    dhcp4: true
  eth1:
    match:
      name: eth1
bridges:
  br0:
    interfaces: [eth1]
    addresses:
     - $CIDR
"
lxc config device add $CONTAINER kvm unix-char path=/dev/kvm
lxc start $CONTAINER
lxc exec $CONTAINER -- /bin/bash -c 'while ! [ -f /tmp/startup-complete ]; do sleep 0.5; done'
lxc exec $CONTAINER bash
kvm-ok

This setup passes the /dev/kvm device through to the container, and allows a new libvirt hypervisor to host KVM pods in MAAS (in isolation from the hypervisor already running on the container’s host).

Testing with a Privileged Container

At one point, due to bug #1784501, it was useful to test MAAS inside a privileged container.

If, for whatever reason, you need to test with a privileged container, simply run the following after you lxc init your container:

lxc config set $CONTAINER security.privileged true

Configuring libvirt inside the container for use with KVM pods

In order for legacy (pre-2.5.0) KVM pods to operate with MAAS, you need to create a maas network within libvirt for MAAS to attach to. (MAAS will prefer to use the maas network, followed by the default network, in the absence of any other information.) Similar to how we configured the container hypervisor’s interfaces, we can do so as follows (attaching to the existing br0 bridge rather than creating a new one):

cat << EOF > maas.xml
<network>
  <name>maas</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
EOF
virsh net-define maas.xml
rm maas.xml
virsh net-start maas
virsh net-autostart maas

For MAAS to be able to connect to the KVM pod, you will need to either set up a trusted SSH key so that MAAS can access a user in the libvirt group, or set a password.

For testing, you can edit /etc/ssh/sshd_config to change PasswordAuthentication to yes, run sudo passwd ubuntu, and then run sudo service ssh restart. Then, when you add the KVM pod in MAAS, use a URL in the form:

qemu+ssh://ubuntu@172.16.99.2/system

… and provide the password you set.
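If you prefer the CLI to the web UI, adding the pod can be scripted. This is a sketch: the CLI profile name admin and the password are placeholders, and the pods create endpoint is assumed to match your MAAS version:

```shell
# Compose the libvirt connection URL for the container created earlier.
POD_USER=ubuntu
POD_HOST=172.16.99.2
POD_URL="qemu+ssh://${POD_USER}@${POD_HOST}/system"

# "admin" is a placeholder CLI profile; run this where the maas CLI is logged in.
if command -v maas >/dev/null; then
  maas admin pods create type=virsh power_address="$POD_URL" power_pass=ubuntu
fi
```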

Note that you will also need to define the default storage pool (as described above) to use the container as a pod.


#2

I’m not sure how the fabrics tie into the region controller, or whether you’re using a rack controller inside the LXC container with libvirt. I don’t see how the DHCP servers can be controlled if they aren’t connected to it either. Right now, I can get the region controller to connect to the libvirt running inside another LXC container, but PXE boot isn’t working after I create the machine from the MAAS GUI.

I know a diagram would help me understand how everything is connected, from the bare-metal Ubuntu host down through these multiple layers of VMs and networks.

Thanks!


#3

This tutorial was meant to help people create a completely self-contained test environment. If you want to expose the MAAS DHCP services to outside the container, you would need to bridge the container to a physical interface instead of using a libvirt bridge. Hope this helps.


#4

Hi! Thanks for the quick response!

I was trying to set up this environment, but it wasn’t clear where to run libvirt in your tutorial. My LXC container (172.16.99.2) isn’t getting out through the main one for some reason to update packages and install libvirt, so I’m working on that now.

$ lxc profile show maasServer
config:
  raw.lxc: |-
    lxc.cgroup.devices.allow = c 10:237 rwm
    lxc.apparmor.profile = unconfined
    lxc.cgroup.devices.allow = b 7:* rwm
  security.privileged: "true"
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    name: eth1
    nictype: bridged
    parent: virbr1
    type: nic
  kvm:
    path: /dev/kvm
    type: unix-char
  loop0:
    path: /dev/loop0
    type: unix-block
  loop1:
    path: /dev/loop1
    type: unix-block
  loop2:
    path: /dev/loop2
    type: unix-block
  loop3:
    path: /dev/loop3
    type: unix-block
  loop4:
    path: /dev/loop4
    type: unix-block
  loop5:
    path: /dev/loop5
    type: unix-block
  loop6:
    path: /dev/loop6
    type: unix-block
  loop7:
    path: /dev/loop7
    type: unix-block
  root:
    path: /
    pool: default
    type: disk
name: maasServer
used_by:
- /1.0/containers/maasServer

$ lxc profile show maasPod
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: virbr1
    type: nic
  kvm:
    path: /dev/kvm
    type: unix-char
  root:
    path: /
    pool: default
    type: disk
name: maasPod
used_by:
- /1.0/containers/maaspod01

#5

Ah, so it looks like you are trying to create an environment a bit more complex than the one I have described in this post.

I like to run libvirt on the Ubuntu host itself, but I don’t use that for MAAS pods. Instead, I also run libvirt inside the LXC container (just because I like it to be both self-contained, and running alongside the MAAS controller). As you can see from my instructions, I simply forward the libvirt bridges from the Ubuntu host to the LXC containers, and create bridged networks in libvirt for use with MAAS.

From your diagram, I do see one problem: MAAS won’t like the fact that you have a duplicated 192.168.122.0/24 subnet. If MAAS sees two subnets with the same CIDR, it will treat them as the same subnet. That is why it’s better to create the networks on the Ubuntu host and simply forward them into the container using bridges. Furthermore, make sure you don’t enable DHCP in libvirt. Instead, you’ll need to let MAAS manage DHCP.


#6

OK, so I should create two more libvirt MAAS networks with different CIDRs, like the original, for the MAAS pods.

Thanks for the tip!


#7

I cloned the LXC Maas Pod container and need to generate a new RackController ID, how would I go about doing that?


#8

MAAS stores the region and rack system_id at /var/lib/maas/maas_id. Stop the rack, delete /var/lib/maas/maas_id, and then restart the rack to generate a new ID.
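The steps above can be sketched as follows; the service name maas-rackd is an assumption (a typical deb-based install of MAAS 2.x):

```shell
# Regenerate a cloned rack controller's system_id; run inside the clone.
regenerate_rack_id() {
  sudo systemctl stop maas-rackd
  sudo rm -f /var/lib/maas/maas_id    # the stored region/rack system_id
  sudo systemctl start maas-rackd     # a fresh id is written on startup
}
# regenerate_rack_id
```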


#9

The default libvirt network MAC addresses in the LXC containers have to be different too; otherwise, the same MAAS ID gets generated on both, even after deletion.
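One possible way to avoid the collision (a sketch of my own, not from the thread) is to rewrite the MAC on libvirt's default network inside the cloned container before starting it:

```shell
# Give libvirt's "default" network a fresh random MAC inside a cloned container.
randomize_default_net_mac() {
  local mac tmp=/tmp/default-net.xml
  mac=$(printf '52:54:00:%02x:%02x:%02x' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)))
  virsh net-dumpxml default > "$tmp"
  sed -i "s|<mac address='[^']*'/>|<mac address='$mac'/>|" "$tmp"
  virsh net-destroy default
  virsh net-define "$tmp"
  virsh net-start default
}
# randomize_default_net_mac
```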


#10

I got everything to work, but I think there are timing problems with MAAS talking to virsh inside the LXC containers. For example, I have to click Compose a few times before it will successfully create the VM. Same thing with commissioning, releasing, etc. I also notice that the power state is shown as ERROR 99% of the time, whether the machine is on or off.