I keep fighting with MAAS over simple things such as setting usernames or the hostname; things that are supposed to be straightforward become complex.
I think the bottom line is the following: where does MAAS's responsibility end?
If we are talking about Metal as a Service, then when one orders a server there is a delivery (Deployment), and from then on one owns the server: responsibility for configuring it further lies with the customer, until perhaps Rescue and obviously Release.
Cloud-init can be the tool that lets us customize the metal delivery, but it seems to be used for MAAS internals instead: it is not clear where the settings come from, touching it may break the system, etc.
Today, in particular, I am fighting MAAS over control of my /etc/hosts… and hostname.
I take MAAS at its name, Metal as a Service: the hosts are deployed and delivered to a project that knows nothing about MAAS and has no access to its console.
In particular, they use Ansible to manage their services, with playbooks already developed and known to run well on Ubuntu. They provide a public SSH key and expect an IP address to connect to, just as if this were Hetzner, OVH or any other metal provider.
Furthermore, as you know, servers have multiple interfaces.
Would you make your public interface the MAAS interface too? We chose to use the "management" interface for MAAS.
Some software packages expect you to pick a particular hostname and FQDN, and that would be on the public interface.
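For reference, the closest I have come to a fix is passing custom user-data at deploy time that tells cloud-init to leave these files alone. A minimal sketch, using standard cloud-init keys (the hostname and FQDN below are just placeholders, not anything MAAS provides):

```yaml
#cloud-config
# Keep cloud-init (and therefore MAAS) from rewriting /etc/hosts.
manage_etc_hosts: false
# Allow cloud-init to set the hostname once, to our public-facing name.
preserve_hostname: false
hostname: app01                  # placeholder name
fqdn: app01.public.example.com   # placeholder public-side FQDN
```

Whether MAAS's own late_commands overwrite this again is exactly the kind of opacity I am complaining about.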
Regardless of this, I think it is good that MAAS uses cloud-init to further customize the deployment of metal, but it would be nice if it were optional and transparent, easier to modify, etc.
There are some customizations that I cannot trace in these scripts. For example: vmware-tools comes pre-installed on my deployed machine, so there must be a package installation somewhere, but I cannot see it.
This is actually really simple. Machines are easily named or renamed, and associating DNS suffixes is just as easy under the DNS tab.
Seeing as you already use Ansible, user management becomes trivial: you can distribute public keys at deployment time, which Ansible can leverage to run playbooks that manage users, keys and passwords from code.
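To make that concrete, here is a sketch of the kind of playbook I mean. The modules are standard (`ansible.builtin.user`, and `ansible.posix.authorized_key` from the ansible.posix collection); the inventory group, username and key path are made up for illustration:

```yaml
# Run against hosts MAAS has just deployed, reachable via the SSH key
# that MAAS distributed at deployment time.
- hosts: maas_deployed          # hypothetical inventory group
  become: true
  tasks:
    - name: Create the application user
      ansible.builtin.user:
        name: svcuser           # hypothetical user
        shell: /bin/bash
        groups: sudo
        append: true

    - name: Authorize the team's public key for that user
      ansible.posix.authorized_key:
        user: svcuser
        key: "{{ lookup('file', 'files/team.pub') }}"   # hypothetical key path
```

From there, users, keys and passwords live in version control rather than in MAAS.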
Regarding multiple interfaces, I think MAAS's implementation of spaces is quite neat. You can attach and configure bonds, bridges or VLANs, and even configure IP addresses before deploying the machine, resulting in robust network segmentation on multihomed machines.
That said, MAAS is geared more toward managing fleets of servers (cattle) and focuses less on the intimate configuration of purpose-built machines (pets).
I personally find that simply booting a brand-new machine from the network and having MAAS automatically inventory, name and profile it in just a few minutes is borderline magical, and it has saved me hundreds of hours of typical "sysadmin slog work".
Yes. Including the cloud-init UI when deploying helps.
Also, in my use case, which is installing CloudStack on clusters, I have no interest in anything beyond cloud-init's first install of the server.
My current workaround is to disable cloud-init as the first installation step of CloudStack.
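In case it helps others, the disable step itself uses cloud-init's own kill switch, a marker file its boot-time generator checks for (these are the standard cloud-init paths and commands, run from the installer's first script):

```shell
# Prevent cloud-init from running on any subsequent boot.
sudo touch /etc/cloud/cloud-init.disabled

# Optionally also wipe the state left by the first run:
# sudo cloud-init clean --logs
```

After that, MAAS's user-data never touches the machine again.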
Also, many of the problems described here come from the fact that I wanted a "deployment" VLAN and separate "production" VLANs. MAAS owns the deployment VLAN, and all production traffic is separate.
Setting things like the hostname (if the production one is different) and the default gateway becomes a problem if you want this separation. I have workarounds for both, but I might just go with the default setup of a single VLAN for both deployment and production.
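For the gateway half, the kind of workaround I mean is a netplan override on the deployed machine that pins the default route to the production interface instead of the MAAS one. A sketch, where the file name, interface name and addresses are all placeholders for my setup:

```yaml
# /etc/netplan/60-production.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    ens19:                        # production interface; name is an assumption
      addresses: [203.0.113.10/24]
      routes:
        - to: default             # default route goes out the production VLAN
          via: 203.0.113.1
      nameservers:
        addresses: [203.0.113.53]
```

The deployment VLAN keeps only its link-local reachability to MAAS, so it never competes for the default route.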