Libvirt VM host with ZFS storage pool issue

I’m trying to compose a VM on a MAAS-provisioned libvirt host, and I’m getting this error:

Pod unable to compose machine: Unable to compose machine because: Failed talking to pod: Unable to compose test: Virsh command ['start', '--paused', 'test'] failed: Failed to start domain 'test' error: internal error: process exited while connecting to monitor: 2023-06-15T17:31:05.508059Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/dev/zvol/local/f1b07bcb-bf67-49e2-be9e-842d60ba34d8","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: 'file' driver requires '/dev/zvol/local/f1b07bcb-bf67-49e2-be9e-842d60ba34d8' to be a regular file

The storage backend is ZFS on a block device:

# zpool list -v
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
local       928G  15.0G   913G        -         -     0%     1%  1.00x    ONLINE  -
  nvme0n1   928G  15.0G   913G        -         -     0%  1.61%      -    ONLINE

Which is added as a storage pool to libvirt via:

# virsh pool-dumpxml extra
<pool type='zfs'>
  <name>extra</name>
  <uuid>fa2d6df4-d4d0-419d-8e03-360b36f8db9d</uuid>
  <capacity unit='bytes'>996432412672</capacity>
  <allocation unit='bytes'>16059431936</allocation>
  <available unit='bytes'>980372980736</available>
  <source>
    <name>local</name>
  </source>
  <target>
    <path>/dev/zvol/local</path>
  </target>
</pool>

I’m not exactly sure what the issue is, but I can manually create the VM using virsh with a disk defined like this:

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/zvol/local/test'/>
      <target dev='hda' bus='ide'/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

Perhaps the workaround is to create the libvirt pool as type dir instead of zfs and point it at the ZFS mountpoint, like:

zfs set mountpoint=/local/virtual-machines local/virtual-machines

<pool type='dir'>
  <name>extra</name>
  <uuid>4e2e1393-f787-4cd2-8e04-30fb0aa2d4d2</uuid>
  <capacity unit='bytes'>949421473792</capacity>
  <allocation unit='bytes'>131584</allocation>
  <available unit='bytes'>949421342208</available>
  <source>
  </source>
  <target>
    <path>/local/virtual-machines</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

Then, MAAS composes the VM fine.
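
In case anyone wants to script that same workaround rather than type the virsh commands by hand, here’s roughly how the same dir pool could be defined with the libvirt Python bindings. This is just a sketch: the pool name, mountpoint, and the qemu:///system URI are the ones from my setup above.

    import libvirt

    # Same dir-type pool as the dumpxml output above; libvirt generates the UUID.
    POOL_XML = """
    <pool type='dir'>
      <name>extra</name>
      <target>
        <path>/local/virtual-machines</path>
      </target>
    </pool>
    """

    conn = libvirt.open("qemu:///system")          # local system libvirtd
    pool = conn.storagePoolDefineXML(POOL_XML, 0)  # persistent pool definition
    pool.setAutostart(1)                           # start the pool on host boot
    pool.create(0)                                 # same effect as `virsh pool-start extra`
    conn.close()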

Doesn’t seem right though…

I suppose I’ll just keep posting to myself here :rofl:

I think MAAS is trying to create the disk as sourcetype file, which happens here: https://github.com/maas/maas/blob/313c7172557ac357a86244ffe7cb72d90efe0aef/src/provisioningserver/drivers/pod/virsh.py#LL941C23-L941C23

It does this even though a libvirt storage pool of type zfs needs the sourcetype to be block.
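
For what it’s worth, the QEMU error makes sense once you check what a zvol node actually is. A quick sanity check (the path is just an example volume on this pool):

    import os
    import stat

    # Example zvol path; /dev/zvol/... entries are symlinks to /dev/zdN device nodes.
    path = os.path.realpath("/dev/zvol/local/test")
    mode = os.stat(path).st_mode

    print(stat.S_ISREG(mode))  # False: not a regular file, so QEMU's 'file' driver refuses it
    print(stat.S_ISBLK(mode))  # True: it is a block device, hence type='block' / sourcetype block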

I think this might be a bug with libvirt, since virsh will allow you to create a volume in the zfs pool, AND attach it to the VM with --sourcetype file, like:

root@nuc-server-5:~# virsh vol-create-as extra test1 34359738368 --allocation 0
Vol test1 created

root@nuc-server-5:~# virsh attach-disk test /dev/zvol/local/test1 vdc --targetbus virtio --sourcetype file --config --serial serial
Disk attached successfully

However, when you try to start the VM, it throws an error:

root@nuc-server-5:~# virsh start --domain test
error: Failed to start domain 'test'
error: internal error: process exited while connecting to monitor: 2023-06-16T16:14:39.729601Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/dev/zvol/local/test1","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: 'file' driver requires '/dev/zvol/local/test1' to be a regular file

If I repeat the same steps, but with --sourcetype block, everything works as expected:

root@nuc-server-5:~# virsh vol-create-as extra test1 34359738368 --allocation 0
Vol test1 created

root@nuc-server-5:~# virsh attach-disk test /dev/zvol/local/test1 vdc --targetbus virtio --sourcetype block --config --serial serial
Disk attached successfully

root@nuc-server-5:~# virsh start --domain test
Domain 'test' started

So, I’m not sure what the best path forward would be from here.

Submit a bug report to libvirt?

Or add some logic to the attach_local_volume function so it attaches the disk as block instead of file when the storage pool is zfs. I think the latter option would be best, since the _create_local_volume function already has logic to account for ZFS pools.
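
Something along these lines is what I have in mind. To be clear, this is only a sketch of the idea and not the actual MAAS code: the helper names are made up, and it just builds the virsh attach-disk arguments so the pool type picks the sourcetype.

    # Sketch only: choose the virsh sourcetype from the libvirt pool type, so
    # volumes in a zfs pool get attached as block devices instead of files.
    BLOCK_POOL_TYPES = {"zfs", "logical", "disk"}  # pool types whose volumes are block devices

    def source_type_for_pool(pool_type: str) -> str:
        """Return the --sourcetype value virsh should use for this pool."""
        return "block" if pool_type in BLOCK_POOL_TYPES else "file"

    def attach_disk_args(domain, volume_path, target, pool_type):
        """Build virsh attach-disk arguments for a volume living in the given pool."""
        return [
            "attach-disk", domain, volume_path, target,
            "--targetbus", "virtio",
            "--sourcetype", source_type_for_pool(pool_type),
            "--config",
        ]

    # For a zfs pool this yields --sourcetype block, which is what QEMU expects for a zvol:
    print(attach_disk_args("test", "/dev/zvol/local/test1", "vdc", "zfs"))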

It looks like you’ve done some good debugging and isolated the issue with attaching ZFS volumes to VMs composed by MAAS on libvirt. Here are my thoughts, if you don’t mind a technical author and MAAS hobbyist trying to help a little:

  • I agree that your analysis points to a mismatch: MAAS composes the disk XML with type ‘file’, while libvirt expects ‘block’ for ZFS pools.

  • Submitting an issue on Launchpad for the MAAS project would be the best next step so the developers can review and potentially address this behavior.

  • Including a minimal XML snippet or example that shows the working and non-working cases would help illustrate the problem.

  • Proposing a code change to attach_local_volume to detect ZFS pools and use type ‘block’ is a good solution, if you have cycles to prepare a merge request.

  • As a workaround for now, you could skip using MAAS storage pools and pre-create disks with ‘block’ type manually via virsh. MAAS would still inject them at deploy time.

  • Calling out the libvirt behavior mismatch specifically in any bug report would also help so it’s visible to all parties.

Let me know if you need any other advice on best places to report the issue or propose fixes! The MAAS team is generally very responsive to community contributions.

