Hello guys. Thank you for helping us. :pray:

I am not able to deploy Ceph on MAAS hosts, either as a standalone installation or together with OpenStack.

The problem is that, in every installation, Ceph does not accept the storage that MAAS deploys. In my case the machine boots from a group of 4 hard drives in RAID6, adding up to 40 TB.

As I mentioned in another distress call (still waiting for a reply from the community), the RAID6 array does not pass the SMART test. However, if I set the RAID as the boot device during commissioning, then I can make the host for this machine (created as a KVM host) make this storage available, to be consumed by a Juju deployment or directly by MAAS.

But when I try to install CEPH, the return is always the same:

. No block devices detected using current configuration

. Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count

. Cannot use /dev/sda: device is partitioned

I have tried several approaches, searching both this forum and Google in general, but I haven’t found a solution yet.

Juju fully deployed OpenStack, failing only on Ceph.

Thank you in advance for any answer you can offer.

Greetings to all

Hello! Thank you for reaching out and providing detailed information about the issues you’re facing with deploying Ceph on MAAS hosts and with OpenStack. I understand that you’re experiencing difficulties and have already sought help from the community. As a matter of fact, I think I replied to your linked post earlier this week.

Based on the description you provided, it appears that the problem lies in the detection of block devices and the partitioning of /dev/sda during the Ceph installation process. While I don’t have a direct solution at the moment, I can at least offer some suggestions to troubleshoot the issue further:

  1. Block Device Detection: Double-check the configuration and ensure that the block devices are properly recognized by MAAS and are accessible to Ceph. Make sure the necessary drivers and modules are loaded for your RAID configuration.
  2. Partitioning of /dev/sda: Verify that /dev/sda is not already partitioned, since I think Ceph requires an unpartitioned device. You may need to remove any existing partitions on /dev/sda before proceeding with the Ceph installation.
  3. Community Support: Keep asking. Somebody may have encountered similar challenges and could provide further insights or possible workarounds. You might even try the OpenStack mailing list, prefacing your subject with something like “Canonical-MAAS-CEPH-OpenStack question”.
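On point 2, here is a minimal sketch of how to check a disk for leftover filesystem or partition-table signatures and wipe them. To keep it safe to copy-paste, it runs against a throwaway image file rather than a real disk; the same `wipefs` commands apply to a device like `/dev/sda` once you are certain it holds nothing you need (the file, size, and device paths here are just placeholders):

```shell
# Create a scratch image file and give it a filesystem signature,
# so it looks like a "used" disk of the kind Ceph would reject.
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -q -F "$img"

# List the signatures on it (this is what makes Ceph refuse the device):
wipefs "$img"

# Erase ALL signatures; after this the device reads as blank/unpartitioned:
wipefs -a "$img"
wipefs "$img"        # prints nothing now

rm -f "$img"
```

If I remember correctly, the ceph-osd charm also has a `zap-disk` action that does roughly this on the unit itself (something like `juju run-action ceph-osd/0 zap-disk devices=/dev/sdb i-really-mean-it=true`), but please double-check the action name and parameters against your charm revision before running it, since it is destructive.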

I apologize for not having an immediate solution from the MAAS side, but I hope these suggestions help you (at least a little) in troubleshooting the problem. If you have any further questions or need additional assistance, please feel free to ask. Best of luck, and I hope your deployment efforts are successful!


Thank you @billwear.

So, maybe the solution to my case is in item 2 of your answer, dear @billwear.

The thing is, if during commissioning/deployment I do not partition the HDD and it is not the boot disk, MAAS does not recognize this block of hard drives and does not make it available to be consumed by the LXD host that MAAS creates as the KVM host.
When I partition the HDD and make it the boot device, MAAS recognizes the block and makes it available to be consumed by the host and by the virtual machines that will be created within that host.
Would you have a solution for that? Am I doing something wrong?
Thanks again, @billwear



Not off the top of my head, but give me a couple of hours! :slight_smile:


Hello, @billwear

Today I deployed a new physical machine, an HP with iLO 5, and everything worked, both commissioning and deployment. For this test I eliminated the RAID; the HP has two 18 TB HDDs and a 1 TB SSD. The result was the same as in the previous situations: I chose the SSD as the boot disk, and only the SSD is recognized by MAAS when creating the host for this HP machine. The SSD is partitioned and the two HDDs are not, yet Ceph still gives an error indicating the existence of a partition.

I thought the problem was the RAID created on the physical controller, but with this test I found that it could be something else.

Thank you for your patience and attention, @billwear. Have a great day.

Hi, @billwear
How are you?
Please see if you have a little time to clarify this; I would like to reformulate my question:

So, I deleted the RAID. Now the NVMe drive and the 4 HDDs are free, all recognized, all healthy, and all with the tags that MAAS itself created during commissioning.

When MAAS creates the KVM host, it recognizes only the NVMe drive, and only the NVMe drive can I use in this automatic MAAS configuration, because the other HDDs were not automatically attached to the host. I would like to know whether I can attach them manually.

Could you recommend a tutorial for this procedure? Or is there another way to use HDDs that do not appear as KVM host assets?

For example: how could I set these HDDs to be used by Ceph via Juju, since they do not appear for use on the host? Or, if not by Juju, then by the CLI, etc.?

In MAAS terms: how do I move disks from “Available disks and partitions” to “Used disks and partitions”?

I’ve already managed to do this by partitioning the disk, but when the disk is partitioned it is refused by Ceph, as I described earlier.

Thank you for your attention.


Hi, @penacleiton ,

I hope you’re doing well. Thank you for reaching out and providing some context. It seems there might be some confusion regarding the behavior of MAAS, particularly in relation to RAID configurations. It’s important to clarify whether you’re referring to software RAID or hardware RAID.

Additionally, it seems like you might have conflicting requirements. On one hand, you want to mark a drive as “used” in MAAS, but on the other hand, you want to use it for Ceph. It would be helpful to gain a clearer understanding of your desired end goal so that we can provide more accurate guidance.

Also, you mention a “host KVM” and the possibility of running “Ceph on a KVM on a machine.” While this setup is technically feasible, it’s worth noting that it may not deliver optimal performance.

After seeing this message, I think we might need more information about your specific objectives. Once we have a clear understanding of what you’re trying to achieve, we can try to provide more appropriate guidance on how to proceed.


Thank you for your reply, @billwear

In fact, what you wrote here is the basis of everything I need to understand, and it is what I have been trying to do after a few months of studying MAAS. Let’s go stage by stage, then.

My physical servers all have 1 NVMe drive and 4 SAS HDDs.

When doing automatic commissioning, without manual configuration, MAAS creates a host with all the resources of each machine (storage, memory, CPU).

It is within these hosts that I am trying to bring up my applications, and for that I am using Juju.

From what I understand, Juju does not operate on the machines directly but needs a host; as I said, each host created by MAAS mirrors the physical machine, offering all of its available resources to the applications.

  • Regarding memory and CPU, everything is perfect. Everything physically available is also available virtually on each host.

  • My problem is with storage, because MAAS only makes available, or usable, the device it uses as the system boot disk.

At this point the question is: MAAS identifies only the boot disk as used, and the other disks as merely available; is that correct?

The available disks do not appear on the host as consumable by applications through Juju. Is that also correct? On the host, only the NVMe drive appears as consumable, because it is used as the boot disk.

If you can, with your enormous patience, answer this question, I believe I will have a better chance of understanding all this and moving on.

Once again, my thanks for spending your time with me and with everyone who has presented their doubts and problems here in this space.

All the best, @billwear :pray:

It sounds like you have a good understanding of how MAAS handles storage and hosts. To address your main question:

  • By default, MAAS will only utilize and make available the boot disk it detects on each machine during commissioning. This is intended to keep the OS disk separate.
  • The other disks attached to the system will show up in MAAS as “available” but are not usable by Juju by default.
  • To make those additional disks usable, you need to configure them in MAAS through the CLI or API. You can partition, format, mount, and tag them.
  • Once extra disks are configured, they will become available as resources that can be consumed by Juju applications on deployment.
  • So in summary, yes - you need to take some manual steps in MAAS to prepare and tag those extra disks before Juju will be able to utilize them.

Let me know if that helps explain the workflow! The key is to configure and tag the disks in MAAS first, then your Juju applications will be able to make use of those additional storage resources. The docs have more details on storage configuration. Feel free to ask any other questions.
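To make the CLI part concrete, here is a hedged sketch (not a definitive recipe) of the commands involved. `$PROFILE`, `$SYSTEM_ID`, and `$DEVICE_ID` are placeholders you would fill in from your own MAAS setup, the device paths are examples only, and the `osd-devices` option assumes the stock ceph-osd charm:

```shell
# 1. Inspect the disks MAAS discovered on the machine:
maas $PROFILE block-devices read $SYSTEM_ID

# 2. For Ceph specifically, do NOT partition or format the extra disks in
#    MAAS -- leave them raw and point the ceph-osd charm at them instead:
juju config ceph-osd osd-devices='/dev/sdb /dev/sdc /dev/sdd /dev/sde'

# 3. For non-Ceph workloads, you can prepare a disk in MAAS before deploy,
#    for example formatting and mounting it (IDs come from step 1's output):
maas $PROFILE block-device format $SYSTEM_ID $DEVICE_ID fstype=ext4
maas $PROFILE block-device mount $SYSTEM_ID $DEVICE_ID mount_point=/srv/data
```

Note the tension with the error you hit earlier: a disk that MAAS has partitioned and mounted shows as “Used”, but that is exactly the state ceph-osd refuses. So for OSD disks the goal is raw, untouched devices plus the `osd-devices` setting, rather than MAAS-side partitioning.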


Thank you for the MAAS storage class, @billwear :purple_heart: Super enlightening, and thank you for that.
Without wanting to take too much of your time, could you point me to the CLI steps to configure the disks considered available, so that they can be used by Juju?
If it is not too much, could you also send the links for configuration through the API? I would like to learn the process, although I am still a neophyte in this area.
If there is nothing specific, a pointer to a course where I can study this in detail would also be greatly appreciated.