Storage, the future

Hello MAASters,

We’re planning future features for MAAS, and have some wide-ranging questions about storage:

We’re interested in better understanding:

How customised, or how ‘cookie cutter’, are your storage layouts?
(Do you keep pets or farm cattle?)

Does your storage vary by role, e.g. storage, compute or infrastructure nodes?

Tell us about your typical trade-offs between performance, security, and resilience.
(e.g. your use of SSD vs. HDD, encryption, partitions, LVGs, RAID and bcache)

Do you automate storage setup?
(How, ideally, would you like MAAS to automate storage management?)

Somewhat open questions, I know, but all thoughts are welcome.

Thanks!


A couple of items I have run into in practice with deployments that were otherwise mostly identical between servers, which I see mainly in terms of creating ‘templates’ for storage across similar machines:

  • The first detected disk was not the one desired as the root disk (which is the current MAAS default). For example, perhaps the boot disk was an SSD, and the vendor (for whatever reason) pre-configured all 100 servers with all of my Ceph HDDs in the first 12 slots and all of the SSDs last, one of which is the boot device. It would be useful to be able to define some kind of basic rule saying which device should be the boot disk, perhaps by size and type (HDD/SSD), model name/number, or slot number in the chassis (some hardware/drivers expose a slot number, some won’t; sadly this seems to work less commonly on Linux). See the first sketch after this list for how such a rule can be scripted against the current CLI.
  • Some charms (e.g. ceph) want access to raw disk devices without a partition layout being configured in advance (or possibly even existing after install). Right now, if you don’t partition the disk, MAAS will not pass any information about it to the client: there is no storage-spaces support in the Juju MAAS provider yet (which is needed), and even the /dev/disk/by-dname style setup, which works with a partition, is not possible without one. The second sketch after this list shows the partition-based workaround.
  • UEFI is pretty much de facto now; we need to be able to specify (or even consider defaulting to) GPT partition tables. Right now I believe MBR is used unless it is the boot disk and installation is done via EFI - I haven’t checked super recently in case that changed. This can be particularly problematic for software RAID1 booting.
  • Ideally I would be able to tell MAAS to “copy” the storage configuration from another node, or use one node to create a template that is then applied to others. I am not immediately sure how the matching disks would be auto-detected, but multiple heuristics could potentially be used. This would be equally great for networking.
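
To make the first point concrete, here is a minimal sketch of a “smallest SSD wins” boot-disk rule scripted against today’s MAAS CLI. The profile name, system ID, and the rule itself are assumptions for illustration; it relies on jq and on MAAS tagging block devices with “ssd”:

```bash
#!/bin/bash
# Sketch: pick the smallest SSD on a machine and mark it as the boot disk.
PROFILE=admin        # assumed logged-in MAAS CLI profile
SYSTEM_ID=4y3h7n     # hypothetical machine system ID

# Read the machine's block devices as JSON.
devices=$(maas "$PROFILE" block-devices read "$SYSTEM_ID")

# Pick the id of the smallest physical device MAAS tagged as "ssd".
boot_id=$(echo "$devices" | jq '[.[]
    | select(.type == "physical")
    | select(.tags | index("ssd"))]
    | sort_by(.size) | .[0].id')

# Use it as the boot disk for the next deployment.
maas "$PROFILE" block-device set-boot-disk "$SYSTEM_ID" "$boot_id"
```

A richer rule (model name, chassis slot) would just change the jq filter, which is exactly why having this kind of matching built into MAAS would beat everyone scripting it themselves.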
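
And for the by-dname point, a sketch of the partition-based workaround: name the device and give it a single partition so the /dev/disk/by-dname link gets generated. IDs are hypothetical, and I believe size can be omitted from partitions create on recent releases to use the whole disk, though that is worth verifying:

```bash
# $PROFILE is a logged-in MAAS CLI profile; IDs are hypothetical.
SYSTEM_ID=4y3h7n
DEV_ID=12

# Name the device so the deploy writes a /dev/disk/by-dname/ceph-0 link...
maas "$PROFILE" block-device update "$SYSTEM_ID" "$DEV_ID" name=ceph-0

# ...and add one partition spanning the disk, since the link is only
# generated once at least one partition exists.
maas "$PROFILE" partitions create "$SYSTEM_ID" "$DEV_ID"
```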

Great questions! For our boot disk, we usually set things up in an LVM layout with multiple separate logical volumes for things such as /var/log. Last we checked, the MAAS LVM layout type did not support multiple logical volumes, so we had to automate this ourselves: we created Ansible playbooks that run through a set of MAAS CLI commands (roughly the steps sketched below). It’s a bit of a hassle, so we would love to see a more fleshed-out LVM layout option.
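
For anyone hitting the same limitation, this is roughly the CLI sequence our playbooks wrap; all IDs here are hypothetical and sizes are in bytes:

```bash
# $PROFILE is a logged-in MAAS CLI profile; IDs below are hypothetical.
SYSTEM_ID=4y3h7n
DISK_ID=10

# Create a volume group on the boot disk (or a partition of it).
maas "$PROFILE" volume-groups create "$SYSTEM_ID" \
    name=vg0 block_devices="$DISK_ID"
VG_ID=1   # taken from the JSON returned by the call above

# Carve out separate logical volumes, e.g. 20 GB root and 10 GB /var/log.
maas "$PROFILE" volume-group create-logical-volume "$SYSTEM_ID" "$VG_ID" \
    name=root size=21474836480
maas "$PROFILE" volume-group create-logical-volume "$SYSTEM_ID" "$VG_ID" \
    name=varlog size=10737418240

# Each LV appears as a virtual block device that can be formatted and mounted.
LV_ID=20   # hypothetical id of the varlog LV
maas "$PROFILE" block-device format "$SYSTEM_ID" "$LV_ID" fstype=ext4
maas "$PROFILE" block-device mount "$SYSTEM_ID" "$LV_ID" mount_point=/var/log
```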

  • I’m currently only using MAAS for deployment automation in my lab. This means that I’m keeping pets.

  • Outside the lab (where I use RH tooling instead of MAAS) I use LVM to turn other people’s requests for pets into my cattle. All storage comes from the SAN, so we simply dedicate a single oversized, thin-provisioned LUN on each machine to the OS. Using LVM means that we can adjust the sizes of things like /tmp and /var after the OS is deployed (e.g. the one-liner below). All application storage configuration is done after the system is up and running (“Deployed” in MAAS terminology).
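
For example, growing /var online is a one-liner once the OS is up, assuming an LV named var in a volume group vg0 and a filesystem that supports online growth such as ext4 or XFS:

```bash
# Grow the LV by 10 GiB and resize the filesystem in the same step.
sudo lvextend --resizefs --size +10G /dev/vg0/var
```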