Hello guys. Thank you for helping us. :pray:

I am not able to deploy CEPH on MAAS hosts, neither as an isolated installation nor together with OpenStack.

The problem is that, in every installation, CEPH does not accept the storage that MAAS deploys, which in my case boots from a group of 4 hard drives in RAID6, adding up to 40 TB.

As I mentioned in another distress call (still waiting for some return from the community), the RAID6 array does not pass the SMART test. However, if I set the RAID as the boot device when commissioning, I can make the host of this machine (created in the KVM) expose this storage so it can be consumed by the deployment via Juju, or directly by MAAS.

But when I try to install CEPH, the return is always the same:

- No block devices detected using current configuration

- Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count

- Cannot use /dev/sda: device is partitioned

I have tried a few things, searching this forum and Google in general, but I haven’t found a solution yet.

Juju fully deployed OpenStack, failing only on CEPH.

Thank you in advance if you can answer me.

Greetings to all

Hello! Thank you for reaching out and providing detailed information about the issues you’re facing with deploying CEPH on MAAS hosts and with OpenStack. I understand that you’re experiencing difficulties and have already sought help from the community. As a matter of fact, I think I replied to your linked post earlier this week.

Based on the description you provided, it appears that the problem lies in the detection of block devices and the partitioning of /dev/sda during the CEPH installation process. While I don’t have a direct solution at the moment, I can at least offer some suggestions to troubleshoot the issue further:

  1. Block Device Detection: Double-check the configuration and ensure that the block devices are properly recognized by MAAS and are accessible to CEPH. Make sure the necessary drivers and modules are loaded for your RAID configuration.
  2. Partitioning of /dev/sda: Verify that /dev/sda is not already partitioned, since I think CEPH requires an unpartitioned device. You may need to remove any existing partitions on /dev/sda before proceeding with the CEPH installation.
  3. Community Support: Keep asking. Somebody may have encountered similar challenges and could provide further insights or possible workarounds. You might even try the OpenStack mailing list, prefacing your subject with something like “Canonical-MAAS-CEPH-OpenStack question”.
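To expand on point 2: assuming the disk CEPH complains about is /dev/sda and it is *not* your boot disk, existing partition tables and signatures can usually be cleared with standard tools like `wipefs` and `sgdisk` before handing the device to CEPH. A sketch only, and destructive, so please double-check the device name first:

```shell
# WARNING: these commands destroy all data on the target device.
# Never run them against the disk the machine boots from.

lsblk /dev/sda                   # confirm which disk you are about to wipe

sudo wipefs --all /dev/sda       # remove filesystem/RAID/partition signatures
sudo sgdisk --zap-all /dev/sda   # destroy the GPT and MBR partition tables
sudo partprobe /dev/sda          # ask the kernel to re-read the (now empty) table
```

After this, `lsblk` should show /dev/sda with no child partitions, which is the state CEPH's "device is partitioned" check is looking for.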

I apologize for not having an immediate solution from the MAAS side, but I hope these suggestions help you (at least a little) in troubleshooting the problem. If you have any further questions or need additional assistance, please feel free to ask. Best of luck, and I hope your deployment efforts are successful!


Thank you @billwear.

So, maybe the solution to my case is in item 2 of your answer, dear @billwear.

The issue is that if, during commissioning/deployment, I do not partition the hard drive and it is not the boot disk, MAAS does not recognize this block of hard drives and does not make it available to be consumed by the LXD host that MAAS creates in the KVM.
When I partition the hard drive and make it the boot disk, MAAS recognizes the block and it becomes available to be consumed by the host and the virtual machines that will be created within that host.
Would you have a solution for that? Am I doing something wrong?
Thanks again, @billwear



Not off the top of my head, but give me a couple of hours! :slight_smile:


Hello, @billwear

Today I deployed a new physical machine, an HP with iLO 5, and everything worked out, both commissioning and deployment. For this test I eliminated the RAID; the HP has two 18 TB HDDs and a 1 TB SSD. The result was the same as in the previous situations: I chose the SSD as the boot disk, and only the SSD is recognized by MAAS when creating the host for this HP machine. The SSD is partitioned and the two HDDs are not, yet CEPH gives an error indicating the existence of a partition.

I thought the RAID made on the physical controller was the problem, but this test showed that it could be something else.

Thank you for your patience and attention, @billwear. Have a great day.

Hi, @billwear
How are you?
Please see if you have a little time to clarify this; I wanted to reformulate my question better:

So, I deleted the RAID. Now the NVMe and the 4 HDDs are free: all recognized, all healthy, and all with their tags created by MAAS itself during commissioning.

When MAAS creates the KVM host, it only recognizes the NVMe, and in this automatic MAAS configuration only the NVMe can be used, because the other HDDs were not automatically attached to the host. Now I would like to know whether I can attach them manually.

Could you recommend a tutorial for this procedure? Or is there another way to use the HDDs that do not appear as KVM host assets?
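To make my question concrete: since the VM host that MAAS created is an LXD host, I imagine something like passing the raw HDDs through to the virtual machine with LXD’s own CLI, but I don’t know if MAAS would tolerate disks added behind its back. A sketch of what I mean (the VM name machine-1 and the device names /dev/sdb and /dev/sdc are just placeholders for my setup):

```shell
# On the physical machine acting as the LXD VM host:
lxc list                                          # find the VM's name

# Attach each raw HDD to the VM as a disk device
# (the VM usually needs to be stopped first)
lxc stop machine-1
lxc config device add machine-1 hdd1 disk source=/dev/sdb
lxc config device add machine-1 hdd2 disk source=/dev/sdc
lxc start machine-1

# Inside the VM, the drives should then appear as extra block devices (lsblk)
```

Is this the right direction, or does MAAS lose track of the machine’s storage if I do it this way?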

For example: how could I set these HDDs to be used by CEPH via Juju, since they do not appear for use on the host? Or, if not via Juju, then via the CLI, etc.?

In the words of MAAS: how do I move disks from “Available disks and partitions” to “Used disks and partitions”?

I have already managed to do this by partitioning the disk, but a partitioned disk is refused by CEPH, as I described earlier.
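On the Juju side, what I had in mind was something like the ceph-osd charm’s configuration and actions, if I understand them correctly. A sketch (ceph-osd/0, /dev/sdb and /dev/sdc are placeholders for my units and drives):

```shell
# Tell the ceph-osd charm which devices to use as OSDs
juju config ceph-osd osd-devices='/dev/sdb /dev/sdc'

# If a disk is refused because it is partitioned, wipe it via the charm.
# This is destructive; the charm requires the i-really-mean-it flag.
juju run-action ceph-osd/0 zap-disk devices=/dev/sdb i-really-mean-it=true --wait

# Then add the cleaned disk back as an OSD
juju run-action ceph-osd/0 add-disk osd-devices=/dev/sdb --wait
```

Would this work even though MAAS never attached those HDDs to the host in the first place?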

Thank you for your attention.