Ubuntu 24.04 deployment on software RAID 1 fails

Hi,

I am trying to deploy Ubuntu 24.04 on a physical server with MAAS. The server has two identical 960 GB SSDs, and I want to set up software RAID 1 as md0, but the installation fails. What could be wrong here?

Installation log:

        get_blockdev_sector_size: (log=512, phys=512)
        sdb logical_block_size_bytes: 512
        adding partition 'sdb-part1' to disk 'sdb' (ptable: 'gpt')
        partnum: 1 offset_sectors: 2048 length_sectors: 1875369983
        Preparing partition location on disk /dev/sdb
        Wiping 1M on /dev/sdb at offset 1048576
        Running command ['sgdisk', '--new', '1:2048:1875372031', '--typecode=1:8300', '/dev/sdb'] with allowed return codes [0] (capture=True)
        Running command ['udevadm', 'info', '--query=property', '--export', '/dev/sdb'] with allowed return codes [0] (capture=True)
        /dev/sdb is multipath device? False
        Running command ['blockdev', '--rereadpt', '/dev/sdb'] with allowed return codes [0] (capture=True)
        Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(): 0.158
        TIMED udevadm_settle(exists='/dev/sdb1'): 0.000
        get_path_to_storage_volume for volume sdb-part1({'device': 'sdb', 'id': 'sdb-part1', 'name': 'sdb-part1', 'number': 1, 'offset': '4194304B', 'size': '960189431808B', 'type': 'partition', 'uuid': 'b90952e2-c64c-4650-b3a9-f080a77735b5', 'wipe': 'superblock'})
        get_path_to_storage_volume for volume sdb({'grub_device': True, 'id': 'sdb', 'model': 'Micron_7450_MTFD', 'name': 'sdb', 'ptable': 'gpt', 'serial': '3f438f3b0175a000', 'type': 'disk', 'wipe': 'superblock'})
        Processing serial 3f438f3b0175a000 via udev to 3f438f3b0175a000
        lookup_disks found: ['scsi-23f438f3b0175a000', 'scsi-23f438f3b0175a000-part1']
        Running command ['udevadm', 'info', '--query=property', '--export', '/dev/sdb'] with allowed return codes [0] (capture=True)
        /dev/sdb is multipath device? False
        Running command ['udevadm', 'info', '--query=property', '--export', '/dev/sdb'] with allowed return codes [0] (capture=True)
        /dev/sdb is multipath device member? False
        block.lookup_disk() returning path /dev/sdb
        Running command ['partprobe', '/dev/sdb'] with allowed return codes [0, 1] (capture=False)
        Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(): 0.100
        devsync happy - path /dev/sdb now exists
        return volume path /dev/sdb
        Running command ['partprobe', '/dev/sdb'] with allowed return codes [0, 1] (capture=False)
        Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(): 0.088
        devsync happy - path /dev/sdb now exists
        return volume path /dev/sdb1
        Running command ['blkid', '-o', 'export', '/dev/sdb1'] with allowed return codes [0, 2] (capture=True)
        Creating dname udev rule '['SUBSYSTEM=="block"', 'ACTION=="add|change"', 'ENV{DEVTYPE}=="partition"', 'ENV{ID_PART_ENTRY_UUID}=="2a111f8e-aaa6-4487-b3eb-46d64ec9ddb4"', 'SYMLINK+="disk/by-dname/sdb-part1"\n']'
        finish: cmd-install/stage-partitioning/builtin/cmd-block-meta: SUCCESS: configuring partition: sdb-part1
        start: cmd-install/stage-partitioning/builtin/cmd-block-meta: configuring raid: md0
        raid: cfg: {
         "devices": [
          "sda-part1",
          "sdb-part1"
         ],
         "id": "md0",
         "name": "md0",
         "raidlevel": 1,
         "spare_devices": [],
         "type": "raid"
        }
        get_path_to_storage_volume for volume sda-part1({'device': 'sda', 'id': 'sda-part1', 'name': 'sda-part1', 'number': 1, 'offset': '4194304B', 'size': '960189431808B', 'type': 'partition', 'uuid': '8fa7e9b1-997a-46a2-96bd-3e026a7bfced', 'wipe': 'superblock'})
        get_path_to_storage_volume for volume sda({'grub_device': True, 'id': 'sda', 'model': 'Micron_7450_MTFD', 'name': 'sda', 'ptable': 'gpt', 'serial': '34438f3b0175a000', 'type': 'disk', 'wipe': 'superblock'})
        Processing serial 34438f3b0175a000 via udev to 34438f3b0175a000
        lookup_disks found: ['scsi-234438f3b0175a000', 'scsi-234438f3b0175a000-part1']
        Running command ['udevadm', 'info', '--query=property', '--export', '/dev/sda'] with allowed return codes [0] (capture=True)
        /dev/sda is multipath device? False
        Running command ['udevadm', 'info', '--query=property', '--export', '/dev/sda'] with allowed return codes [0] (capture=True)
        /dev/sda is multipath device member? False
        block.lookup_disk() returning path /dev/sda
        Running command ['partprobe', '/dev/sda'] with allowed return codes [0, 1] (capture=False)
        Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(): 0.092
        devsync happy - path /dev/sda now exists
        return volume path /dev/sda
        Running command ['partprobe', '/dev/sda'] with allowed return codes [0, 1] (capture=False)
        Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(): 0.092
        devsync happy - path /dev/sda now exists
        return volume path /dev/sda1
        get_path_to_storage_volume for volume sdb-part1({'device': 'sdb', 'id': 'sdb-part1', 'name': 'sdb-part1', 'number': 1, 'offset': '4194304B', 'size': '960189431808B', 'type': 'partition', 'uuid': 'b90952e2-c64c-4650-b3a9-f080a77735b5', 'wipe': 'superblock'})
        get_path_to_storage_volume for volume sdb({'grub_device': True, 'id': 'sdb', 'model': 'Micron_7450_MTFD', 'name': 'sdb', 'ptable': 'gpt', 'serial': '3f438f3b0175a000', 'type': 'disk', 'wipe': 'superblock'})
        Processing serial 3f438f3b0175a000 via udev to 3f438f3b0175a000
        lookup_disks found: ['scsi-23f438f3b0175a000', 'scsi-23f438f3b0175a000-part1']
        Running command ['udevadm', 'info', '--query=property', '--export', '/dev/sdb'] with allowed return codes [0] (capture=True)
        /dev/sdb is multipath device? False
        Running command ['udevadm', 'info', '--query=property', '--export', '/dev/sdb'] with allowed return codes [0] (capture=True)
        /dev/sdb is multipath device member? False
        block.lookup_disk() returning path /dev/sdb
        Running command ['partprobe', '/dev/sdb'] with allowed return codes [0, 1] (capture=False)
        Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(): 0.088
        devsync happy - path /dev/sdb now exists
        return volume path /dev/sdb
        Running command ['partprobe', '/dev/sdb'] with allowed return codes [0, 1] (capture=False)
        Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(): 0.088
        devsync happy - path /dev/sdb now exists
        return volume path /dev/sdb1
        raid: device path mapping: <zip object at 0x7f8385d764c0>
        mdadm_create: md_name=/dev/md0 raidlevel=1  devices=['/dev/sda1', '/dev/sdb1'] spares=[] name=
        Running command ['hostname', '-s'] with allowed return codes [0] (capture=True)
        devname '/dev/sda1' had holders: []
        Running command ['mdadm', '--examine', '/dev/sda1'] with allowed return codes [0] (capture=True)
        not a valid md member device: /dev/sda1
        /dev/sda1 not mdadm member, force=False so skiping zeroing
        devname '/dev/sdb1' had holders: []
        Running command ['mdadm', '--examine', '/dev/sdb1'] with allowed return codes [0] (capture=True)
        not a valid md member device: /dev/sdb1
        /dev/sdb1 not mdadm member, force=False so skiping zeroing
        Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(): 0.009
        Running command ['udevadm', 'control', '--stop-exec-queue'] with allowed return codes [0] (capture=False)
        Running command ['mdadm', '--create', '/dev/md0', '--run', '--homehost=testserver', '--raid-devices=2', '--metadata=default', '--level=1', '/dev/sda1', '/dev/sdb1'] with allowed return codes [0] (capture=True)
        Running command ['udevadm', 'control', '--start-exec-queue'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(exists='/dev/md0'): 0.000
        get_path_to_storage_volume for volume md0({'devices': ['sda-part1', 'sdb-part1'], 'id': 'md0', 'name': 'md0', 'raidlevel': 1, 'spare_devices': [], 'type': 'raid'})
        Running command ['partprobe', '/dev/md0'] with allowed return codes [0, 1] (capture=False)
        Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(): 0.028
        devsync happy - path /dev/md0 now exists
        return volume path /dev/md0
        Running command ['mdadm', '--query', '--detail', '--export', '/dev/md0'] with allowed return codes [0] (capture=True)
        Creating dname udev rule '['SUBSYSTEM=="block"', 'ACTION=="add|change"', 'ENV{MD_UUID}=="c0e6b49f:0b07d9c5:9f98f28a:a82b3141"', 'SYMLINK+="disk/by-dname/md0"\n']'
        Running command ['mdadm', '--detail', '--scan'] with allowed return codes [0] (capture=True)
        finish: cmd-install/stage-partitioning/builtin/cmd-block-meta: SUCCESS: configuring raid: md0
        start: cmd-install/stage-partitioning/builtin/cmd-block-meta: configuring lvm_volgroup: vgroup0
        get_path_to_storage_volume for volume md0({'devices': ['sda-part1', 'sdb-part1'], 'id': 'md0', 'name': 'md0', 'raidlevel': 1, 'spare_devices': [], 'type': 'raid'})
        Running command ['partprobe', '/dev/md0'] with allowed return codes [0, 1] (capture=False)
        Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
        TIMED udevadm_settle(): 0.040
        devsync happy - path /dev/md0 now exists
        return volume path /dev/md0
        Running command ['vgcreate', '--force', '--zero=y', '--yes', 'vgroup0', '/dev/md0'] with allowed return codes [0] (capture=True)
        Running command ['pvscan'] with allowed return codes [0] (capture=True)
        Running command ['vgscan'] with allowed return codes [0] (capture=True)
        finish: cmd-install/stage-partitioning/builtin/cmd-block-meta: SUCCESS: configuring lvm_volgroup: vgroup0
        start: cmd-install/stage-partitioning/builtin/cmd-block-meta: configuring lvm_partition: vgroup0-lv0
        Running command ['lvcreate', 'vgroup0', '--name', 'lv0', '--zero=y', '--wipesignatures=y', '--yes', '--size', '960176848896.0B'] with allowed return codes [0] (capture=False)
          Volume group "vgroup0" has insufficient free space (228894 extents): 228924 required.
        An error occured handling 'vgroup0-lv0': ProcessExecutionError - Unexpected error while running command.
        Command: ['lvcreate', 'vgroup0', '--name', 'lv0', '--zero=y', '--wipesignatures=y', '--yes', '--size', '960176848896.0B']
        Exit code: 5
        Reason: -
        Stdout: ''
        Stderr: ''
        finish: cmd-install/stage-partitioning/builtin/cmd-block-meta: FAIL: configuring lvm_partition: vgroup0-lv0
        TIMED BLOCK_META: 8.499
        finish: cmd-install/stage-partitioning/builtin/cmd-block-meta: FAIL: curtin command block-meta
        Traceback (most recent call last):
          File "/curtin/curtin/commands/main.py", line 202, in main
            ret = args.func(args)
          File "/curtin/curtin/log.py", line 97, in wrapper
            return log_time("TIMED %s: " % msg, func, *args, **kwargs)
          File "/curtin/curtin/log.py", line 79, in log_time
            return func(*args, **kwargs)
          File "/curtin/curtin/commands/block_meta.py", line 124, in block_meta
            return meta_custom(args)
          File "/curtin/curtin/commands/block_meta.py", line 2209, in meta_custom
            handler(command, storage_config_dict, context)
          File "/curtin/curtin/commands/block_meta.py", line 1558, in lvm_partition_handler
            util.subp(cmd)
          File "/curtin/curtin/util.py", line 280, in subp
            return _subp(*args, **kwargs)
          File "/curtin/curtin/util.py", line 144, in _subp
            raise ProcessExecutionError(stdout=out, stderr=err,
        curtin.util.ProcessExecutionError: Unexpected error while running command.
        Command: ['lvcreate', 'vgroup0', '--name', 'lv0', '--zero=y', '--wipesignatures=y', '--yes', '--size', '960176848896.0B']
        Exit code: 5
        Reason: -
        Stdout: ''
        Stderr: ''
        Unexpected error while running command.
        Command: ['lvcreate', 'vgroup0', '--name', 'lv0', '--zero=y', '--wipesignatures=y', '--yes', '--size', '960176848896.0B']
        Exit code: 5
        Reason: -
        Stdout: ''
        Stderr: ''
        

Does it work with 22.04?

Yes, it shows the same error on Ubuntu 22.04 too.

    Running command ['lvcreate', 'vgroup0', '--name', 'lv0', '--zero=y', '--wipesignatures=y', '--yes', '--size', '960176848896.0B'] with allowed return codes [0] (capture=False)
      Volume group "vgroup0" has insufficient free space (228894 extents): 228924 required.
    An error occured handling 'vgroup0-lv0': ProcessExecutionError - Unexpected error while running command.
    Command: ['lvcreate', 'vgroup0', '--name', 'lv0', '--zero=y', '--wipesignatures=y', '--yes', '--size', '960176848896.0B']

I have the exact same issue on MAAS 3.4.2 when attempting to deploy Ubuntu 22.04 on software RAID 1 with LVM.

There are no issues when the RAID volume is mounted directly to /. But as soon as you put LVM on top of the RAID volume, the lvcreate command fails during deployment.
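If you want to confirm the mismatch on a failing machine before changing the layout, you can compare what lvcreate is about to request against what the volume group actually has free (for example from the ephemeral environment once the array is assembled). A minimal sketch, assuming the vgroup0 name from the original log (swap in your own VG name):

    # Report VG geometry in bytes: total size, free space, extent size,
    # and the free-extent count that lvcreate validates the request against.
    vgs --units b -o vg_name,vg_size,vg_free,vg_extent_size,vg_free_count vgroup0

    # mdadm's superblock/data offset is what makes /dev/md0 smaller than the
    # raw member partitions; --examine on a member shows the reserved offsets.
    mdadm --examine /dev/nvme0n1p2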

I'm using the following custom layout, but I get the exact same error when configuring the partitions, RAID, and LVM in the GUI.

    cat > "$MAAS_STORAGE_CONFIG_FILE" <<EOL
    {
        "layout": {
            "nvme0n1": {
                "type": "disk",
                "ptable": "gpt",
                "partitions": [
                    {
                        "name": "nvme0n1-part1",
                        "bootable": true,
                        "fs": "fat32",
                        "size": "536.87M"
                    },
                    {
                        "name": "nvme0n1-part2",
                        "size": "511.56G"
                    }
                ]
            },
            "nvme1n1": {
                "type": "disk",
                "ptable": "gpt",
                "partitions": [
                    {
                        "name": "nvme1n1-part1",
                        "fs": "fat32",
                        "size": "536.87M"
                    },
                    {
                        "name": "nvme1n1-part2",
                        "size": "511.56G"
                    }
                ]
            },
            "md0": {
                "type": "raid",
                "level": 1,
                "members": [
                    "nvme0n1-part2",
                    "nvme1n1-part2"
                ]
            },
            "vgroot": {
                "type": "lvm",
                "members": [
                    "md0"
                ],
                "volumes": [
                    {
                        "name": "lvroot",
                        "size": "511.54G",
                        "fs": "ext4"
                    }
                ]
            }
        },
        "mounts": {
            "/boot/efi": {
                "device": "nvme0n1-part1"
            },
            "/": {
                "device": "lvroot"
            }
        }
    }
    EOL
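One thing worth noticing in this layout: the RAID members are 511.56G while lvroot asks for 511.54G, only about 20 MB of headroom, and the mdadm metadata on md0 evidently costs more than that, which is exactly the "insufficient free space" condition from the log. A quick way to measure the real overhead on a given machine (a sketch, assuming the array is already assembled as /dev/md0):

    # Compare a raw member partition with the assembled array; the difference
    # is the space consumed by the mdadm superblock/data offset.
    PART_BYTES=$(blockdev --getsize64 /dev/nvme0n1p2)
    MD_BYTES=$(blockdev --getsize64 /dev/md0)
    echo "member=${PART_BYTES}B md0=${MD_BYTES}B overhead=$(( PART_BYTES - MD_BYTES ))B"

Sizing lvroot comfortably below the md0 figure gives the volume group room to fit it.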

I performed some more tests (and looked more closely at the logs), and the issue was easily resolved by decreasing the size of the LVM volume slightly.

For some reason, when the volume is created in the GUI, the calculated size is larger than the space actually available in the volume group, so lvcreate fails.
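For anyone decoding the original failure, the numbers line up exactly if you assume LVM's default 4 MiB extent size (the division below comes out with no remainder, which supports that assumption):

    # lvcreate asked for 960176848896 B; expressed in 4 MiB extents:
    echo $(( 960176848896 / 4194304 ))    # -> 228924, matching "228924 required"
    # The VG only had 228894 free extents; the shortfall in MiB:
    echo $(( (228924 - 228894) * 4 ))     # -> 120

So the size the GUI computes is about 120 MiB more than what is actually left once the mdadm metadata and LVM headers have taken their share. If you are creating the logical volume by hand rather than through MAAS, letting LVM size it avoids the mismatch entirely:

    # Allocate all remaining free extents instead of a precomputed byte count.
    lvcreate -l 100%FREE --name lv0 vgroup0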