Juju, MAAS, Charmed K8s - distributing machine creation across hardware

I have 10 bare metal servers all under the umbrella of MAAS. I am able to provision and deploy both bare metal instances and VMs (LXD).

I’m trying to get Charmed Kubernetes up and running such that the worker nodes are distributed across the hardware resources. First, I set up a cloud (MAAS) and instantiate a controller. This works - a VM is provisioned automatically and the Juju controller is spun up.
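For reference, the setup steps were roughly as follows (the cloud and controller names here are arbitrary placeholders, and add-cloud/add-credential prompt interactively for the MAAS endpoint and API key):

```shell
# Register MAAS as a cloud and bootstrap a controller onto it.
# "maas-cloud" and "maas-ctrl" are made-up names, not defaults.
juju add-cloud maas-cloud        # choose type "maas", enter the MAAS API endpoint
juju add-credential maas-cloud   # paste the MAAS API key
juju bootstrap maas-cloud maas-ctrl
```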

I then add a model and deploy charmed-kubernetes. This sort of works, in that it does deploy. However, it deploys all the new VMs on the same bare metal instance. This is not what I’m after, and I can’t find any obvious way via the docs to tell it to distribute as widely as possible across all available resources.

Is there any documentation I’m missing or does anyone have the magic incantation to get this to happen?

In case this matters I am trying to involve GPUs for the workers but haven’t gotten far enough to worry about that yet.

One way you could solve this is to put the bare metal instances in different availability zones. Charmed K8s would then try to spread the units of each service across those zones.
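An untested sketch of that approach - "admin" is a placeholder MAAS CLI profile name, and the zone names and system IDs are made up:

```shell
# Create zones in MAAS and spread the bare metal hosts across them.
maas admin zones create name=rack1 description="Rack 1"
maas admin zones create name=rack2 description="Rack 2"
maas admin machine update abc123 zone=rack1
maas admin machine update def456 zone=rack2

# Juju tries to spread an application's units across zones by default;
# you can also pin an app to specific zones with a constraint:
juju deploy kubernetes-worker -n 2 --constraints zones=rack1,rack2
```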

Change the constraints in your bundle.yaml file (for example, constraining a VM worker to hardware with a GPU) by using the tags defined on your MAAS server. If you want, I can send you screenshots explaining my clouds (models).
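For example (the tag name and system ID below are made up) - create the tag in MAAS, attach it to the machines, then use it as a Juju constraint:

```shell
maas admin tags create name=gpu comment="nodes with a GPU"
maas admin tag update-nodes gpu add=abc123
juju deploy kubernetes-worker --constraints tags=gpu
```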

@schwim, did you ever sort this out?

Apologies for the delay, I got pulled away from this project for a bit.

No, I have not worked this out yet. Hoping to spend some time with it this weekend.

OK, I have been testing deployments to bare metal. It works with servers in the Ready state by setting a tag as a constraint on the model, as follows:

    juju add-model test-k8s
    juju set-model-constraints tags=cpu_compute

This gives me 3 bare metal systems that are in a Ready state in MAAS. Deploying Charmed Kubernetes with juju deploy charmed-kubernetes grabs those three servers and starts placing the apps on them, but it fails because only 3 machines are available. Destroying the model should, as I understand it, release these machines, but it does not - the command eventually fails, unable to release them.
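In case it helps others, the forced teardown I tried looks like this (flags may vary by Juju version, and I haven’t found a combination that reliably releases the machines):

```shell
# Tear down the model without waiting for each machine to release cleanly.
juju destroy-model test-k8s --force --no-wait

# Or remove a stuck machine individually before destroying the model:
juju remove-machine 3 --force
```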

If I constrain to a tag that covers 5 KVM machines all tagged the same (e.g. gpu-compute), this also works: VMs are spun up and the applications provisioned. It succeeds, but it piles all the VMs onto a single host rather than distributing them across the pool.

In short, what I’m ultimately after is an example implementation of charmed-kubernetes such that:

  • Certain apps are provisioned on VMs on nodes that are selected by tag
  • VMs are distributed across the machines so there’s at least a semblance of redundancy for the components that are redundant
  • The ability to provision other component apps directly to bare metal

I’m pretty sure some of this can be accomplished via the bundle.yaml file.
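As a sketch of what I have in mind - the application names and tags below are guesses, not checked against the current bundle revision:

```shell
# Write an overlay that constrains apps to differently tagged machines,
# then deploy the bundle with it. With MAAS, whether an app lands on a VM
# or on bare metal follows from which machines carry the matching tag.
cat > overlay.yaml <<'EOF'
applications:
  kubernetes-worker:
    constraints: tags=gpu-compute    # KVM machines tagged in MAAS
  etcd:
    constraints: tags=cpu_compute    # bare metal machines
EOF

# juju deploy charmed-kubernetes --overlay ./overlay.yaml
```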

Finally, is there some trick to getting the machines (VMs as well as bare metal) back into a Ready state after the model is destroyed?
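Is manually releasing them from the MAAS side the only option? e.g. (profile name and system ID are placeholders):

```shell
# Release an allocated machine so it returns to Ready once it powers down.
maas admin machine release abc123
```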


@schwim, at this point, you should probably flip over to the Juju forum – they are better equipped to help.