LVM Storage Template breaks Installation on 20.04

Hi!

I don’t know why, but if I assign the LVM storage template to a server that will be installed with the 20.04 HWE kernel, the installation process fails with:

finish: cmd-install/stage-curthooks/builtin/cmd-curthooks: FAIL: curtin command curthooks
Traceback (most recent call last):
  File "/curtin/curtin/commands/main.py", line 202, in main
    ret = args.func(args)
  File "/curtin/curtin/commands/curthooks.py", line 1886, in curthooks
    builtin_curthooks(cfg, target, state)
  File "/curtin/curtin/commands/curthooks.py", line 1851, in builtin_curthooks
    setup_grub(cfg, target, osfamily=osfamily,
  File "/curtin/curtin/commands/curthooks.py", line 804, in setup_grub
    install_grub(instdevs, target, uefi=uefi_bootable, grubcfg=grubcfg)
  File "/curtin/curtin/commands/install_grub.py", line 398, in install_grub
    in_chroot.subp(cmd, env=env, capture=True)
  File "/curtin/curtin/util.py", line 780, in subp
    return subp(*args, **kwargs)
  File "/curtin/curtin/util.py", line 275, in subp
    return _subp(*args, **kwargs)
  File "/curtin/curtin/util.py", line 139, in _subp
    raise ProcessExecutionError(stdout=out, stderr=err,
curtin.util.ProcessExecutionError: Unexpected error while running command.
Command: ['unshare', '--fork', '--pid', '--', 'chroot', '/tmp/tmpi6ddcgfl/target', '/usr/lib/grub/grub-multi-install']
Exit code: 1
Reason: -
Stdout: ''
Stderr: Installing grub to /boot/efi.
        Installing for x86_64-efi platform.
        Installation finished. No error reported.
        Installing grub to /var/lib/grub/esp.
        Installing for x86_64-efi platform.
        grub-install: error: /var/lib/grub/esp doesn't look like an EFI partition.
        
Unexpected error while running command.

I have other servers like this one (a slightly older HW revision) installed with 18.04.

Any idea why this could be happening?

I don’t even know why it is trying to install to /var/lib/grub/esp.

I can see this bug report; perhaps it is related…

I think removing the boot flag from /boot, which I believe is used for legacy boot, would fix the issue.

However, I cannot see partition flags anywhere in the UI.
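For reference, the flags are visible from a rescue or ephemeral shell with parted. A minimal example, where /dev/sda and partition number 2 are placeholders for my disk:

# Print the partition table; the Flags column shows boot/esp per partition
sudo parted /dev/sda print

# Clear the boot flag on partition 2 (on GPT, parted's boot flag is the ESP flag)
sudo parted /dev/sda set 2 boot off

Of course this only helps on an already-deployed machine, since MAAS rewrites the partition table on every deploy.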

@rvallel, did you ever figure this one out?

The attached issue is relevant: as far as I can tell, on a GPT disk the boot flag marks a partition as an EFI System Partition, so grub-multi-install treats the flagged /boot as a second ESP, mounts it at /var/lib/grub/esp, and fails because it is not actually one.

I am not sure why I am seeing this now and not before. I use many other servers similar to this one with Ubuntu 18.04, and this issue does not happen there. But the new hardware revision includes a network adapter that 18.04 does not support, so I am forced to use Ubuntu 20.04 with the HWE kernel. Some of the new components may be triggering the issue.

My workaround is to recreate an LVM-style partitioning through the CLI, placing /boot on a partition that is not marked bootable; see below.

It would have been nice if there were an API or UI option to change the bootable flag of an existing partition.

This is the script I use to mimic the LVM storage layout while working around the issue:

# Reset the machine to a blank storage layout and capture the boot disk ID

DISK_ID=$(maas $PROFILE machine set-storage-layout $SYSTEM_ID storage_layout=blank | jq -r '.boot_disk.id')

# Create the EFI System Partition on /boot/efi (the only bootable partition)

PART_ID=$(maas $PROFILE partitions create $SYSTEM_ID $DISK_ID bootable=true size=550000000 | jq -r '.id')
log Efi Partition ID is $PART_ID
OUT=$(maas $PROFILE partition format $SYSTEM_ID $DISK_ID $PART_ID fstype=vfat label=efi)
maas $PROFILE partition mount $SYSTEM_ID $DISK_ID $PART_ID mount_point=/boot/efi

# Create the /boot partition (NOT bootable, unlike the stock LVM layout)

PART_ID=$(maas $PROFILE partitions create $SYSTEM_ID $DISK_ID bootable=false size=$SIZE_BOOT | jq -r '.id')
log Boot Partition ID is $PART_ID
OUT=$(maas $PROFILE partition format $SYSTEM_ID $DISK_ID $PART_ID fstype=ext4 label=boot)
maas $PROFILE partition mount $SYSTEM_ID $DISK_ID $PART_ID mount_point=/boot

# create empty partition for volume group

PART_ID=$(maas $PROFILE partitions create $SYSTEM_ID $DISK_ID bootable=false | jq -r '.id')
log Volume Group Partition ID is $PART_ID
VG_ID=$(maas $PROFILE volume-groups create $SYSTEM_ID name=vgroot partitions=$PART_ID | jq -r '.id')
log Volume Group ID is $VG_ID


# Create the Root LVM

ROOT_ID=$(maas $PROFILE volume-group create-logical-volume $SYSTEM_ID $VG_ID name=root size=$SIZE_ROOT | jq -r '.id')
log Created Root Volume

OUT=$(maas $PROFILE block-device format $SYSTEM_ID $ROOT_ID fstype=ext4)
OUT=$(maas $PROFILE block-device mount $SYSTEM_ID $ROOT_ID mount_point=/)
log Mounted Root Volume
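
For completeness, the script assumes a few things not shown above: the usual MAAS CLI variables and a small log helper. Roughly like this (the values are just examples from my environment):

# Variables the script expects; adjust to your environment
PROFILE=admin            # MAAS CLI profile name (from 'maas login')
SYSTEM_ID=abc123         # machine system_id, e.g. from 'maas $PROFILE machines read'
SIZE_BOOT=1000000000     # /boot partition size in bytes (~1 GB)
SIZE_ROOT=50000000000    # root logical volume size in bytes (~50 GB)

# Minimal log helper used by the script
log() { echo "[$(date +%T)] $*"; }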

It seems like this might be a feature request? What do you think?

I think this is too fine-grained for a feature request. All in all, perhaps my problem here is too obscure and infrequent.

However, I have noticed that there is interest in customizing the “storage layouts”.

This issue should be added to that bucket.

Custom storage layouts should be easy to assign to hosts. They should be text-based and easy to copy and modify. For example, I would have taken the LVM storage layout, edited it to remove the boot flag on the /boot partition, and there you go.

I guess the feature request is to have template-based storage layouts, with templates for the current layouts (flat, lvm, etc.) shipped by default, so that they are easy to copy, modify, and extend.
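
To make the idea concrete, such a template could be a short text file, loosely modeled on curtin's storage config syntax. This is only a sketch of what I have in mind, not an existing MAAS format, and the ids are made up:

# Hypothetical template: the stock LVM layout, minus the boot flag on /boot
layout: lvm-noboot
config:
  - id: sda
    type: disk
    ptable: gpt
    grub_device: true
  - id: efi
    type: partition
    device: sda
    size: 512M
    flag: boot        # the ESP keeps the boot flag
  - id: boot
    type: partition
    device: sda
    size: 1G          # no flag here: exactly the edit I needed
  - id: vgroot
    type: lvm_volgroup
    devices: [rest-of-disk]

Editing one line in a file like that would have saved me the whole CLI script above.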


Really nice analysis. I’ll try to make sure this doesn’t get lost. Thanks!
