Custom storage layout configuration examples

MAAS 3.1 provides the ability to define a custom storage layout for a machine via a custom commissioning script.

The script must be uploaded to MAAS and must have the following properties:

  • it must run after the 40-maas-01-machine-resources script and before the 50-maas-01-commissioning one, so its name should start with anything between 41- and 49-. This ensures the script can access the JSON file created by the former, which provides info about the machine's hardware and network resources. In addition, the custom script can directly inspect the machine it's running on to determine how to configure storage.
  • it can read machine hardware/network information from the JSON file at the path specified by $MAAS_RESOURCES_FILE
  • it must output a JSON file at the path specified by $MAAS_STORAGE_CONFIG_FILE with the desired storage layout
  • names of disks provided in the custom layout must match the ones detected by MAAS and provided in the resources file.
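
Put together, a minimal script satisfying these requirements might look like the following sketch. The layout it builds is purely illustrative; the environment variables and the naming convention come from the rules above, while the path into the resources data (resources → storage → disks) mirrors the example script shared later in this thread:

```python
#!/usr/bin/env python3
# 45-custom-storage: minimal sketch of a custom storage layout script.
# The name (45-...) places it between 40-maas-01-machine-resources and
# 50-maas-01-commissioning, as required.

import json
import os


def build_config(resources):
    """Build a storage layout from the machine resources dict.

    Disk names in the layout must match the ones MAAS detected, so the
    name of the first disk is taken from the resources data.
    """
    disk = resources["resources"]["storage"]["disks"][0]["id"]
    return {
        "layout": {
            disk: {
                "type": "disk",
                "ptable": "gpt",
                "boot": True,
                "partitions": [
                    {"name": f"{disk}1", "size": "500M", "fs": "vfat",
                     "bootable": True},
                    {"name": f"{disk}2", "size": "20G", "fs": "ext4"},
                ],
            }
        },
        "mounts": {
            "/boot/efi": {"device": f"{disk}1"},
            "/": {"device": f"{disk}2"},
        },
    }


# When run by MAAS during commissioning, read the hardware details and
# write the desired layout where MAAS expects to find it.
if "MAAS_RESOURCES_FILE" in os.environ and "MAAS_STORAGE_CONFIG_FILE" in os.environ:
    with open(os.environ["MAAS_RESOURCES_FILE"]) as fd:
        resources = json.load(fd)
    with open(os.environ["MAAS_STORAGE_CONFIG_FILE"], "w") as fd:
        json.dump(build_config(resources), fd)
```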

Configuration format

The configuration contains two main sections:

  • layout, which lists the desired storage layout in terms of disks and volumes, along with their setup (partitions, volumes, …).
    This consists of a dictionary of device names and their configuration. Each device must have a type property (see below for supported types).
  • mounts, which lists the desired filesystem mount points.
    As an example:
"mounts": {
  "/": {
    "device": "sda2",
    "options": "noatime"
  },
  "/boot/efi": {
    "device": "sda1"
  },
  "/data": {
    "device": "raid0"
  }     
}

A complete $MAAS_STORAGE_CONFIG_FILE would look like this:

{
    "layout": {
        "sda": {
           ...
        },
        "raid0": {
           ...
        },
        ...
    },
    "mounts": {
       "/": {
           ...
       },
       ...
    }
}

The following device types are supported in the "layout" section:

Disk

"sda": {
  "type": "disk",
  "ptable": "gpt",
  "boot": true,
  "partitions": [
    {
      "name": "sda1",
      "fs": "vfat",
      "size": "100M",
      "bootable": true
    }
  ]
}

A disk entry defines a physical disk.
The following details can be specified:

  • the partition table type (ptable), which can be gpt or mbr
  • whether it should be selected as boot disk
  • optionally, a list of partitions to create, with their size and filesystem type (fs)

LVM

"lvm0": {
  "type": "lvm",
  "members": [
    "sda1",
    "sdb1"
  ],
  "volumes": [
    {
      "name": "data1",
      "size": "5G",
      "fs": "ext4"
    },
    {
      "name": "data2",
      "size": "7G",
      "fs": "btrfs"
    }
  ]
}

An lvm entry defines a VG (volume group) composed of a set of disks or partitions (listed as members). Optionally, it's possible to specify the LVs (logical volumes) to create.
Those are defined similarly to partitions, with a name and size (and optionally the filesystem).

Bcache

"bcache0": {
  "type": "bcache",
  "cache-device": "sda",
  "backing-device": "sdf3",
  "cache-mode": "writeback",
  "fs": "ext4"
}

A bcache entry must specify a device to use as cache and one to use as storage. Both can be either a partition or a disk.
Optionally the cache-mode for the Bcache can be specified.

RAID

"myraid": {
  "type": "raid",
  "level": 5,
  "members": [
    "sda",
    "sdb",
    "sdc"
  ],
  "spares": [
    "sdd",
    "sde"
  ],
  "fs": "btrfs"
}

A raid entry defines a RAID with a set of member devices.
Spare devices can also be specified.

Configuration examples

Here are a few examples of custom storage layout configurations that a script could output to the $MAAS_STORAGE_CONFIG_FILE. The examples assume that the machine has 5 disks (named sda to sde).

Note that there's no need to add entries in the layout section for disks that are not explicitly partitioned but are only used by other devices (e.g. RAID or LVM).

Simple single-disk layout with GPT partitioning

{
  "layout": {
    "sda": {
      "type": "disk",
      "ptable": "gpt",
      "boot": true,
      "partitions": [
        {
          "name": "sda1",
          "fs": "vfat",
          "size": "500M",
          "bootable": true
        },
        {
          "name": "sda2",
          "size": "5G",
          "fs": "ext4"
        },
        {
          "name": "sda3",
          "size": "2G",
          "fs": "swap"
        },
        {
          "name": "sda4",
          "size": "120G",
          "fs": "ext4"
        }
      ]
    }
  },
  "mounts": {
    "/": {
      "device": "sda2",
      "options": "noatime"
    },
    "/boot/efi": {
      "device": "sda1"
    },
    "/data": {
      "device": "sda4"
    },
    "none": {
      "device": "sda3"
    }
  }
}

In the mounts section, options for mount points can be specified. For swap, an entry must be present (with any unique name that doesn't start with a /); otherwise the swap partition will be created but not activated.

RAID 5 setup (with spare devices)

{
  "layout": {
    "storage": {
      "type": "raid",
      "level": 5,
      "members": [
        "sda",
        "sdb",
        "sdc"
      ],
      "spares": [
        "sdd",
        "sde"
      ],
      "fs": "btrfs"
    }
  },
  "mounts": {
    "/data": {
      "device": "storage"
    }
  }
}

Both full disks and partitions can be used as RAID members.
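
For instance, a hypothetical mirror built from the second partitions of two disks could look like this (device names are illustrative; the sda2 and sdb2 partitions would have to be defined under their respective disk entries in the same layout section):

```json
"raid1": {
  "type": "raid",
  "level": 1,
  "members": [
    "sda2",
    "sdb2"
  ],
  "fs": "ext4"
}
```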

LVM with pre-defined volumes

{
  "layout": {
    "storage": {
      "type": "lvm",
      "members": [
        "sda",
        "sdb",
        "sdc",
        "sdd"
      ],
      "volumes": [
        {
          "name": "data1",
          "size": "1T",
          "fs": "ext4"
        },
        {
          "name": "data2",
          "size": "2.5T",
          "fs": "btrfs"
        }
      ]
    }
  },
  "mounts": {
    "/data1": {
      "device": "data1"
    },
    "/data2": {
      "device": "data2"
    }
  }
}

If no volumes are specified, the volume group is still created.

Bcache

{
  "layout": {
     "data1": {
      "type": "bcache",
      "cache-device": "sda",
      "backing-device": "sdb",
      "cache-mode": "writeback",
      "fs": "ext4"
    },
    "data2": {
      "type": "bcache",
      "cache-device": "sda",
      "backing-device": "sdc",
      "fs": "btrfs"
    }
  },
  "mounts": {
    "/data1": {
      "device": "data1"
    },
    "/data2": {
      "device": "data2"
    }
  }
}

The same cache set can be used by multiple bcache devices by specifying the same cache-device for them.

LVM on top of RAID with Bcache

{
  "layout": {
    "bcache0": {
      "type": "bcache",
      "backing-device": "sda",
      "cache-device": "sdf"
    },
    "bcache1": {
      "type": "bcache",
      "backing-device": "sdb",
      "cache-device": "sdf"
    },
    "bcache2": {
      "type": "bcache",
      "backing-device": "sdc",
      "cache-device": "sdf"
    },
    "bcache3": {
      "type": "bcache",
      "backing-device": "sdd",
      "cache-device": "sdf"
    },
    "bcache4": {
      "type": "bcache",
      "backing-device": "sde",
      "cache-device": "sdf"
    },
    "raid": {
      "type": "raid",
      "level": 5,
      "members": [
        "bcache0",
        "bcache1",
        "bcache2"
      ],
      "spares": [
        "bcache3",
        "bcache4"
      ]
    },
    "lvm": {
      "type": "lvm",
      "members": [
        "raid"
      ],
      "volumes": [
        {
          "name": "root",
          "size": "10G",
          "fs": "ext4"
        },
        {
          "name": "data",
          "size": "3T",
          "fs": "btrfs"
        }
      ]
    }
  },
  "mounts": {
   "/": {
      "device": "root"
    },
    "/data": {
      "device": "data"
    }
  }
}

The RAID is created from 5 bcache devices, each using a different disk and the same SSD as cache device. LVM is created on top of the RAID device, and volumes are then created in it to provide partitions.


This is fantastic news, thanks so much, team! just wondering, is it possible to do other things that aren’t listed here, such as ZFS? or does this custom layout system only support the filesystems and raid/lvm layers listed here?

@tessa ZFS is a bit of a special case. You can specify a "zfsroot" filesystem type intended to be mounted as /, in the same way you can create it directly from the MAAS UI/API. ZFS filesystems on non-root partitions are currently not supported.


is there syntax for specifying the root pool layout across multiple devices, or is it still just a sort of single partition type of install? is there custom config options for some of the other “special” layout types, like specifying the metadata version for mdadm arrays, etc? what about partitioning options, such as partition type and name?

also, the docs say “upload the script to maas”, but where inside the filesystem should custom changes be uploaded? I can see the other provisioning scripts inside /snap/maas/current/lib/python3.8/site-packages/provisioningserver/refresh/, but that doesn’t feel like the right place for custom scripts to live. and it appears to be mounted read-only.

it'd also be great to have documentation on the contents and structure of the $MAAS_RESOURCES_FILE, so it's clearer how to actually leverage it in the custom storage script.

The zfsroot filesystem type currently only supports one device.

Currently there’s no way to specify options for filesystem creation, only mount options.

Commissioning scripts can be uploaded via the MAAS API or UI. After that, they’ll appear in the list when commissioning a machine and they will be enabled by default. It’s still possible to selectively skip some scripts when commissioning a machine.

You can see the content of the file for a commissioned machine as the output of the 50-maas-01-commissioning script (e.g. in the commissioning tab in the UI).


thanks for the clarification, ack. appreciate it. I also found the app that generates the resources output in /snap/maas/current/usr/share/maas/machine-resources/amd64, which is easier to work with than commissioning machines every time I wanna examine the output.

got one last question for you. I’ve written a script that seems to do the things documented here, but after uploading as 42-custom-storage, I still get the following error when trying to select the “custom” option to configure a server:

Failed to configure storage layout 'custom': No custom storage layout configuration found

where do these scripts get logged? how can I examine the output to ensure my custom script is running correctly? the logs in the MAAS web interface for the host in question don't show anything until commissioning starts, and grepping the MAAS logs on the server itself doesn't turn up anything obvious. is there a way to know the value of $MAAS_STORAGE_CONFIG_FILE for a host so I can examine that path and verify the custom script is outputting data there?

Did you recommission the machine after adding the script?
After commissioning the machine with the extra script, the content of the json file created by the script gets incorporated in the output of 50-maas-01-commissioning, under the storage-extra key.
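
As a rough sketch, the commissioning output then has a shape like this (the resources and storage-extra keys come from the replies and script in this thread; the elided contents are illustrative):

```json
{
  "resources": { ... },
  "storage-extra": {
    "layout": { ... },
    "mounts": { ... }
  }
}
```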

To help debug your script, you could add a set -x at the beginning, so that the debug trace is output to stderr and you can see it in the commissioning tab of the machine after recommissioning.


ahhh I see, I didn't realize it only happens at the commissioning phase, that makes sense. I've got my script running, and it provides output that looks correct to me, but the 50-maas-01-commissioning script just reports a failure with no errors on stdout or stderr; it seems to just print the content of the JSON I created. is there any sort of schema validation tool or something that can help narrow down why it doesn't like the custom storage config I've created? or is it more of a trial-and-error thing based on these docs right now?

strangely, the 50-* script has a return code of 0, so I’m really not sure why maas thinks it failed. it looks like it’s running correctly, at least from the contents of that script.

Unfortunately we don't have a validation tool, but you can at least manually check that your config matches the schema defined in $SNAP/lib/python3.8/site-packages/maasserver/storage_custom_schema.yaml.

WRT 50-maas-01-commissioning, if the result code is 0 but it’s still marked as failed, that usually means there’s been some error in processing the data. Please check the regiond.log.


@ack @tessa it sounds like it might be a bug - if so, @tessa can you raise a bug please?

@tessa could you please attach/paste the output for the 50-maas-01-commissioning script?

@ack I’ve opened a bug, like Anton suggested. should be easier to keep track of this particular issue rather than just in this forum thread, as I’ve gotten things pretty off topic.

I’ll be adding all the details and output shortly.


ok, thanks Tessa. Also, check your DMs. Was wondering if you were aware of machine cloning in 3.1 for network settings.

I am totally stuck here. I'm trying to import a .json file, modify it based on what is in the MAAS_RESOURCES_FILE, and then set the MAAS_STORAGE_CONFIG_FILE with the output, but it keeps telling me Errno 2, file or folder doesn't exist. I don't know what I'm doing wrong.

Hi @dadams-fg, can you please provide some details about the script you’re trying, and if possible attach it (or a simple reproducer of the issue you’re seeing)?

@ack What we figured out eventually was that the absolute path of a template file did not work with the env that MAAS used to run the commissioning scripts. We decided to skip using a template storage.json configuration file and put it directly into the custom storage script. We output that to the $MAAS_STORAGE_CONFIG_FILE but the 50 script that looks for the custom information did not add the storage-extra things at the bottom of the MAAS_STORAGE_CONFIG_FILE so we wound up directly modifying the MAAS_STORAGE_CONFIG_FILE to achieve a custom storage layout.

Here is the end result.

#!/usr/bin/env python3
#
# 41-custom-storage-layout - set layout for ess controller
#
# Copyright (C) 2012-2020 Canonical
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
# --- Start MAAS 1.0 script metadata ---
# name: 41-custom-storage-layout
# title: Set layout for ESS_Controller
# description: Set layout for ESS_Controller
# script_type: commissioning
# timeout: 60
# --- End MAAS 1.0 script metadata ---

import json
import os
import sys

storage = '''{"layout": {"sda": {"type": "disk", "ptable": "gpt", "boot": true, "partitions": [{"name": "sda1", "fs": "fat32", "size": "4G", "bootable": true}, {"name": "sda2", "size": "23G", "fs": "ext4"}, {"name": "sda3", "size": "10G", "fs": "swap"}, {"name": "sda4", "size": "41G", "fs": "ext4"}, {"name": "sda5", "size": "1G", "fs": "ext4"}, {"name": "sda6", "size": "1G", "fs": "ext4"}]}}, "mounts": {"/": {"device": "sda2", "options": "noatime"}, "/boot/efi": {"device": "sda1"}, "/home": {"device": "sda4"}, "/var/log": {"device": "sda5"}, "/boot": {"device": "sda6"}, "none": {"device": "sda3"}}}'''

def read_json_file(path):
    try:
        with open(path) as fd:
            return json.load(fd)
    except OSError as e:
        sys.exit(f"Failed to read {path}: {e}")
    except json.JSONDecodeError as e:
        sys.exit(f"Failed to parse {path}: {e}")

data = read_json_file(os.environ["MAAS_RESOURCES_FILE"])

disk_type = data["resources"]["storage"]["disks"][0]["id"]

print(os.environ["MAAS_RESOURCES_FILE"])
print(os.getcwd())

layout_file = json.loads(storage)

layout_file["layout"][disk_type] = layout_file["layout"]["sda"]

if disk_type != "sda":
    del layout_file["layout"]["sda"]

for partition in layout_file["layout"][disk_type]["partitions"]:
    partition["name"] = partition["name"].replace("sda", disk_type)

for mount in layout_file["mounts"]:
    layout_file["mounts"][mount]["device"] = layout_file["mounts"][mount]["device"].replace("sda", disk_type)

data["storage-extra"] = layout_file

with open(os.environ["MAAS_RESOURCES_FILE"], 'w') as fd:
    json.dump(data, fd)

I noted that your output on the last 2 lines goes to MAAS_RESOURCES_FILE instead of MAAS_STORAGE_CONFIG_FILE…

I’m also working on a script of my own, and had another question:

  • Is it possible to interrogate the Pool or Tags that are applied to a host, to make decisions about which disks to select and perhaps set up different configurations based on those variables?

Thanks

~~ Charles

Hi @bedfordc, sorry for the late reply.

Currently, info about tags and pools is not available to the machine when executing the script.
The only information you can rely on is the content of the $MAAS_RESOURCES_FILE, and whatever you can gather by inspecting the machine itself in your script.

Okay - I don't see a way to specify a partition that takes up the rest of the disk… If a size is not specified for a partition, will it consume the entire drive?

Also where would I find the MAAS_RESOURCES_FILE contents? Is that in the GUI somewhere so I can get a look at it to be better prepared how to code the script to take it apart?

Thanks

~~ Charles

The size is currently required, so you have to calculate the remaining size based on the disk size.
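
Since the size is required, a script has to derive the last partition's size itself. A minimal sketch, assuming the disk's total size is known in bytes (exactly how the size is reported in the $MAAS_RESOURCES_FILE isn't documented here, so the input value is an assumption):

```python
def remaining_gb(disk_size_bytes, allocated_gb):
    """Size (in whole GB) left for a final partition, given the disk
    size in bytes and the gigabytes already allocated to other
    partitions. Keeps a 1G margin for partition-table overhead."""
    total_gb = disk_size_bytes // (1024 ** 3)
    return total_gb - allocated_gb - 1


# e.g. a ~500 GB disk with 30G already used by other partitions
last_partition = {
    "name": "sda4",
    "size": f"{remaining_gb(500107862016, 30)}G",
    "fs": "ext4",
}
```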

The content of the file is reported to MAAS as the output of the 20-maas-03-machine-resources commissioning script.