Release server with secure erase disk failed


When releasing a machine with "secure erase" enabled, the disk failed to erase.

Above is the log of the failed erase operation.

Does this mean that my disk's firmware does not support this operation?
I'm using MAAS 3.5.
The disk is a SAMSUNG MZQL23T8HCLS-00A07 with firmware GDC5A02Q.
The image is the default Ubuntu 22.04.

Hey,

I'd suggest looking at the machine details in the MAAS UI for more information about the failure.

You can also download the output of the script using the CLI:

maas admin node-script-result download <system_id> <script_id> output=all
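If you don't know the script_id yet, the results can be listed first. Assuming the standard MAAS 3.x CLI with a profile named `admin` (the placeholders below are examples, not values from this thread), something like:

```shell
# List all script results for the machine; each entry carries an id,
# the script name (e.g. wipe-disks), and its pass/fail status.
maas admin node-script-results read <system_id>

# Then download the combined output of the failed result.
maas admin node-script-result download <system_id> <script_id> output=all
```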

As a follow-up, here is the MAAS event log:

Thu, 16 Jan. 2025 15:09:38	Node changed status - From 'Disk erasing' to 'Failed disk erasing'
Thu, 16 Jan. 2025 15:09:38	Marking node failed - Failed to erase disks.
Thu, 16 Jan. 2025 15:08:48	Loading ephemeral
Thu, 16 Jan. 2025 15:08:48	HTTP Request - /images/574b060/ubuntu/amd64/ga-22.04/jammy/stable/squashfs
Thu, 16 Jan. 2025 15:07:48	HTTP Request - /images/ebab7ce/ubuntu/amd64/ga-22.04/jammy/stable/boot-initrd
Thu, 16 Jan. 2025 15:07:44	HTTP Request - /images/39c00a8/ubuntu/amd64/ga-22.04/jammy/stable/boot-kernel
Thu, 16 Jan. 2025 15:07:44	Performing PXE boot
Thu, 16 Jan. 2025 15:07:44	PXE Request - commissioning
Thu, 16 Jan. 2025 15:07:44	TFTP Request - /grub/grub.cfg-58:a2:e1:d5:83:15
Thu, 16 Jan. 2025 15:07:44	TFTP Request - /grub/grub.cfg
Thu, 16 Jan. 2025 15:07:44	TFTP Request - /grub/x86_64-efi/terminal.lst
Thu, 16 Jan. 2025 15:07:44	TFTP Request - /grub/x86_64-efi/crypto.lst
Thu, 16 Jan. 2025 15:07:44	TFTP Request - /grub/x86_64-efi/fs.lst
Thu, 16 Jan. 2025 15:07:44	TFTP Request - /grub/x86_64-efi/command.lst
Thu, 16 Jan. 2025 15:07:42	TFTP Request - grubx64.efi
Thu, 16 Jan. 2025 15:07:42	TFTP Request - bootx64.efi
Thu, 16 Jan. 2025 15:07:42	TFTP Request - bootx64.efi
Thu, 16 Jan. 2025 15:03:28	Node - Started releasing 'BMG-HGX-H100-8x-101'.
Thu, 16 Jan. 2025 15:03:28	Node changed status - From 'Deployed' to 'Disk erasing'

The installation output is just empty

After running the wipe-disks script manually on a deployed node, I found that the virtual disk sda is detected and the script tries to wipe it, resulting in the error below:

Traceback (most recent call last):
  File "/root/maas_wipe.py", line 719, in <module>
    main()
  File "/root/maas_wipe.py", line 697, in main
    zero_disk(kname, info)
  File "/root/maas_wipe.py", line 517, in zero_disk
    with open(DEV_PATH % kname, "rb") as fp:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 123] No medium found: b'/dev/sda'
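Errno 123 is ENOMEDIUM ("No medium found"), which Linux returns when a removable block device is opened with no media loaded, e.g. BMC virtual media with no image mounted. A minimal sketch of probing a device before trying to zero it (a hypothetical helper, not part of maas_wipe.py):

```python
import errno


def has_medium(path):
    """Return True if the block device at `path` can be opened for
    reading, False if the kernel reports ENOMEDIUM (errno 123,
    "No medium found"), as happens with empty BMC virtual media.
    Any other error propagates to the caller.
    """
    try:
        with open(path, "rb"):
            return True
    except OSError as exc:
        if exc.errno == errno.ENOMEDIUM:
            return False
        raise
```

A wipe routine could call this before zeroing a device and skip anything that reports no medium instead of crashing.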

I've seen this kind of error when a disk is broken. I think you have to investigate manually (for example, you can enter rescue mode and poke around…)
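For reference, rescue mode can also be entered from the CLI; assuming the same `admin` profile as above, something like:

```shell
# Boot the machine into an ephemeral rescue environment.
maas admin machine rescue-mode <system_id>

# ...SSH in and inspect the disk (lsblk, dmesg, smartctl), then:
maas admin machine exit-rescue-mode <system_id>
```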

{
  "id": "sda",
  "device": "8:0",
  "model": "Virtual HDisk0",
  "type": "usb",
  "read_only": false,
  "size": 0,
  "removable": true,
  "numa_node": 0,
  "device_path": "pci-0000:00:14.0-usb-0:11.2:1.0-scsi-0:0:0:0",
  "block_size": 0,
  "firmware_version": "1.00",
  "rpm": 1,
  "serial": "AAAABBBBCCCC3",
  "device_id": "usb-AMI_Virtual_HDisk0_AAAABBBBCCCC3-0:0",
  "partitions": [],
  "usb_address": "1:4"
}
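Based on that inventory entry, a wipe routine could filter such devices out up front: a removable drive reporting zero size is typically BMC virtual media with no image mounted. A minimal sketch (the helper name and dict shape are assumptions based on the commissioning output above, not MAAS code):

```python
def wipeable_disks(disks):
    """Given a list of disk dicts shaped like the MAAS commissioning
    output, drop devices that cannot hold a real medium: removable
    drives with size 0 (e.g. an AMI "Virtual HDisk0" exposed over
    USB), which fail with ENOMEDIUM when opened.
    """
    return [
        d for d in disks
        if not (d.get("removable") and d.get("size", 0) == 0)
    ]
```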

This disk seems to be a virtual HDisk.
I obtained the output above from MAAS's commissioning script 50-maas-01-commissioning.
Is there a particular reason why this script is needed?

Yes, it's the core of commissioning: it gathers data from your hardware and populates the MAAS inventory so that you can then configure the machine accordingly.

So the virtual disk that I'm seeing comes from my hardware, right? MAAS did not put it there, correct?

Do you have USB sticks attached to that machine?