Selecting disks to wipe with MAAS when releasing a node

Hi everyone,

I’m testing different storage configurations on my servers with MAAS via the GUI.
However, since I’m using LVM on top of a RAID 1 (I guess that is the reason), each time I’ve deployed a server and want to install it again, I have to secure-wipe it when I release it. If I don’t, I get an error on the next deployment.
This means each cycle takes quite some time.
My disks are as follows:

  • /dev/sda : SSD, 240 GB
  • /dev/sdb : SSD, 240 GB
  • /dev/sdc : SSD, 1.6 TB

I only need to wipe the first two disks, but the secure wipe erases everything and takes a while because of the 1.6 TB third disk.
Is there any way to select which disks to wipe? Maybe a CLI command that I don’t know about yet?

Thank you !

What error do you get?

I noticed recently that I am getting a similar error.

I have a machine with eight disks, but only one of them is configured in MAAS. The other seven disks are all reported by MAAS; I just haven’t specified any storage to be used on them in MAAS. Instead, I configure ZFS on them as part of the post-install.

If I redeploy the node without running zpool destroy ... first, I get an error from Curtin saying it couldn’t access /dev/cciss/c0d5 because it was busy. I didn’t ask Curtin to access /dev/cciss/c0d5, so I was intending to file a bug report with somewhat more detail than what I’ve provided here.

Until this gets fixed, you might find that the following command sequence quickly cleans up the LVM labels before you release and redeploy your server:

dd if=/dev/zero of=/dev/sda bs=1M count=400
dd if=/dev/zero of=/dev/sdb bs=1M count=400
dd if=/dev/zero of=/dev/sdc bs=1M count=400
poweroff -f
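If the goal is just to clear the stale LVM/RAID metadata (it’s usually the leftover signatures, not the data itself, that make the next deployment choke), `wipefs` from util-linux is a lighter-weight alternative to dd, assuming it’s available in the ephemeral environment. Here is a demonstration on a throwaway file-backed “disk” so it’s safe to try anywhere:

```shell
# wipefs lists and removes filesystem/RAID/LVM signatures without
# overwriting the rest of the disk. On real hardware this would be e.g.:
#   wipefs -a /dev/sda /dev/sdb
# Demonstration on a temporary file standing in for a disk:
disk=$(mktemp)
truncate -s 16M "$disk"
mkswap "$disk" >/dev/null 2>&1   # stamp a recognizable signature on it
wipefs "$disk"                   # with no options: list signatures found
wipefs -a "$disk" >/dev/null     # -a: erase all signatures
wipefs "$disk"                   # prints nothing now
rm -f "$disk"
```

It’s much faster than zeroing 400 MiB per disk, though it only removes the signatures, not the underlying data.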


If you select quick erase together with secure erase, the release will be much quicker. MAAS will perform a secure erase on the disks that support it, and a quick erase on the disks that don’t (overwriting the first and last 2 MiB of the disk, which removes access to the filesystems or partitions defined on it).
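To make the “first and last 2 MiB” concrete, the quick erase amounts to roughly the following. This is a sketch of the idea, not MAAS’s actual code; `quick_erase` is a name I made up, and it assumes the target is at least 4 MiB and a whole number of MiB in size:

```shell
# Rough sketch of a "quick erase": zero the first and last 2 MiB.
# This clears the partition table, most superblocks, and the GPT backup
# header, but leaves the bulk of the data in place (hence recoverable).
quick_erase() {
    dev="$1"
    # Size in bytes: blockdev for real block devices, stat for test files.
    size=$(blockdev --getsize64 "$dev" 2>/dev/null || stat -c %s "$dev")
    # First 2 MiB.
    dd if=/dev/zero of="$dev" bs=1M count=2 conv=notrunc 2>/dev/null
    # Last 2 MiB (seek is counted in bs-sized blocks).
    dd if=/dev/zero of="$dev" bs=1M count=2 conv=notrunc \
        seek=$(( size / 1048576 - 2 )) 2>/dev/null
}
```

Run as root, e.g. `quick_erase /dev/sdc`, if you ever want to do the cheap wipe by hand.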

Note: a quick erase could still allow data to be recovered with recovery software, but if you’re not worried about that and you just want the drive to appear clean on release, it’s a good option.
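For reference, the release call in the MAAS CLI accepts the same erase options as the GUI, though as far as I know they apply to all of the machine’s disks rather than a selection. Something like this, where `$PROFILE` and `$SYSTEM_ID` are placeholders for your CLI profile and the node’s system ID:

```shell
# erase=true requests disk erasure on release; secure_erase/quick_erase
# pick the method per disk as described above. These flags apply to every
# disk on the machine; I don't know of a per-disk option.
maas $PROFILE machine release $SYSTEM_ID erase=true secure_erase=true quick_erase=true
```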

Yes, I like the idea that the disk erasure policy could be disk-specific, with the current behaviour also modelled as an ‘all disks’ preference.

We are currently working up a next-generation storage story, especially to deal with common disk profiles (“give me another VM server”) and this would fit into that story quite nicely.

Thanks a lot for your answers guys !

The thing is, I need to be able to SSH into the machine to run those commands, right?
Sometimes my deployments crash after the storage config, and by then it’s too late. I think I could run those commands by selecting “Allow SSH” in the commissioning parameters, but I don’t know how to connect at that stage. Maybe you know?

I never noticed that I could select both options at the same time, actually. But all three of my disks support secure erase, so it will wipe all three, not only the two I use for booting in RAID 1?

Thank you 🙂

Edit: Your commands work well @lloyd, they solve my problem. I have to commission with “Allow SSH” enabled, then run your commands before the first deployment, and it works.
How could I set this up in a script so the commands run at each commissioning?

So setting @lloyd’s commands as “early commands” in my curtin config file worked well.
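For anyone finding this thread later, the curtin config snippet looks roughly like this (the key names under `early_commands` are my own choice; curtin runs the entries in sorted key order before it touches storage):

```yaml
early_commands:
  # Key names are arbitrary; curtin runs early_commands in sorted order,
  # before any partitioning. Adjust the devices and count to your disks.
  00_wipe_sda: [dd, if=/dev/zero, of=/dev/sda, bs=1M, count=400]
  01_wipe_sdb: [dd, if=/dev/zero, of=/dev/sdb, bs=1M, count=400]
```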

Thank you