I’m optimizing our Kubernetes cluster deployments with CAPI for MAAS. My goal is to provision control plane nodes as VMs on MAAS-managed LXD hosts, while continuing to use bare-metal machines for worker nodes and other services. This way, I can use compute efficiently for the control plane without dedicating beefy bare-metal machines to it.
I’m looking for advice on the best networking setup when MAAS provisions a bare-metal machine that then acts as an LXD host, hosting Kubernetes control plane VMs.
Specifically, how do you handle networking on the MAAS-deployed LXD host to allow for robust container/VM networking while maintaining MAAS’s control over IP addressing?
I’m interested in:
- LXD bridge integration: How do you connect LXD’s lxdbr0 (or another bridge) to a MAAS-managed subnet for the control plane VMs?
- MAAS DHCP interaction: Do you disable DHCP on lxdbr0 and let MAAS manage the VM IPs, or do you have a different strategy?
- Best practices/gotchas: Any network configurations, netplan examples, or pitfalls to avoid for this hybrid bare-metal/VM CAPI setup? (I’ve put the rough sketches I’m currently experimenting with right after this list, for reference.)
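For context, here is the direction I’ve been experimenting with on the host side. This is only a sketch: the NIC name (enp5s0), the bridge name (br0), the file name, and the idea of letting the MAAS-managed DHCP hand the host its address are assumptions about my environment, not a recommendation.

```yaml
# /etc/netplan/60-br0.yaml -- hypothetical file; enp5s0 and br0 are
# placeholders for my environment, not something MAAS generates on its own.
network:
  version: 2
  ethernets:
    enp5s0:
      dhcp4: false            # the physical NIC becomes a bridge member only
  bridges:
    br0:
      interfaces: [enp5s0]
      dhcp4: true             # host address comes from the MAAS-managed DHCP
      parameters:
        stp: false
        forward-delay: 0
```

I’m aware MAAS can also create a bridge on the interface at deploy time (via the machine’s network configuration), in which case the rendered netplan would already contain something equivalent, so a hand-written file like this may be unnecessary.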
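On the LXD side, my current thinking is to attach the control plane VMs to that host bridge rather than to lxdbr0, so MAAS stays authoritative for addressing. Again, just a sketch; the profile name (k8s-cp) is an example of mine, not anything standard:

```bash
# Attach control plane VMs to the MAAS-facing bridge instead of lxdbr0
# (profile name "k8s-cp" is just an example).
lxc profile create k8s-cp
lxc profile device add k8s-cp eth0 nic nictype=bridged parent=br0

# If I ended up attaching VMs to lxdbr0 instead, its built-in DHCP/NAT
# could be turned off so LXD doesn't hand out addresses behind MAAS's back:
lxc network set lxdbr0 ipv4.dhcp false
lxc network set lxdbr0 ipv4.nat false
```

I’d love to hear whether people prefer the bridged-to-MAAS-subnet approach or keep lxdbr0 with its own addressing and route/NAT around it.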
We make use of the community CAPI provider for MAAS. The CAPI book suggests using the same infrastructure provider for a K8s cluster, so I was under the impression that the community CAPI provider for MAAS had this feature built in. Let me go through the repo to understand it better.
We have plans to implement an official CAPI provider, though.
My suggestion would be to not use MAAS to compose VMs: this feature will be dropped in the future (no ETA, but it will happen), because the general concept is:
- you want VMs => use LXD
- you want metal => use MAAS
To me this feels like I shouldn’t use VMs via MAAS for any new use cases.