Dev Setup: MAAS, Juju

Hi there!

I’m building a small sandbox environment on bare metal, to get as close to a production environment as possible.

I have 4 physical nodes that I have deployed via MAAS. As a second step, I’m looking to deploy Ceph via Juju. Should I add a Juju controller to each node and deploy Ceph individually, or should I add a MAAS cloud to Juju? 2 of the 4 boxes can host KVM, but I’d like to deploy Ceph to metal if possible, which it seems I would need to do via individual Juju controllers on each box. Is this correct?


Hi There!

When integrating MAAS and Juju, you let Juju ask MAAS to deploy the machines for you. MAAS looks just like a cloud to Juju, so you do not need to deploy the machines yourself.

In the environment I’m using, I already have a MAAS with a few machines in the Ready state; these are the machines that Juju will deploy services onto.

So I configured a cloud in Juju like this (note that I already had Juju installed and previously configured against a different cloud, so some steps may differ; I followed https://docs.jujucharms.com/2.4/en/clouds-maas):

Add a cloud

First I added a cloud in Juju, and I specified the MAAS endpoint (http://10.10.10.1:5240/MAAS):

ubuntu@maas00:~$ juju add-cloud
Cloud Types
  maas
  manual
  openstack
  oracle
  vsphere

Select cloud type: maas

Enter a name for your maas cloud: maas

Enter the API endpoint url: http://10.10.10.1:5240/MAAS/

Cloud "maas" successfully added

Add credentials

Then I created a credential in Juju for the “admin” user in MAAS (which obviously had been created previously), using the OAuth key of that “admin” user:

ubuntu@maas00:~$ juju add-credential maas
Enter credential name: admin

A credential "admin" already exists locally on this client.
Replace local credential? (y/N): y

Using auth-type "oauth1".

Enter maas-oauth: 

Credential "admin" updated locally for cloud "maas".

Bootstrap the environment

Once everything is configured, I bootstrapped the environment:

ubuntu@maas00:~$ juju bootstrap maas maas-controller --credential admin
Creating Juju controller "maas-controller" on maas
Looking for packaged Juju agent version 2.4.4 for amd64
Launching controller instance(s) on maas...
 - 6xhw4c (arch=amd64 mem=3.5G cores=1)
Installing Juju agent on bootstrap instance
Fetching Juju GUI 2.14.0
Waiting for address
Attempting to connect to 10.10.10.6:22
Connected to 10.10.10.6
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.10.10.6 to verify accessibility...
Bootstrap complete, "maas-controller" controller now available
Controller machines are in the "controller" model
Initial model "default" added

When bootstrapping the environment, Juju requests an available machine from MAAS (i.e. a machine in the Ready state) to deploy the Juju controller on. So Juju asks MAAS to deploy a machine and, once it is deployed, Juju installs the controller on it (you can see that above).

Deploying your applications

Finally, once you have the controller installed, you can use Juju to deploy applications just as you would in any other cloud. For example, running juju status would yield:

ubuntu@maas00:~$ juju status
Model    Controller       Cloud/Region  Version  SLA          Timestamp
default  maas-controller  maas          2.4.4    unsupported  18:02:34Z

Model "admin/default" is empty.

And now you can deploy an application. In your case that would be Ceph, but I’ll deploy haproxy as a demo:

ubuntu@maas00:~$ juju deploy haproxy
Located charm "cs:haproxy-46".
Deploying charm "cs:haproxy-46".

With the above, Juju asks MAAS to deploy a machine and, just as in the bootstrap process, waits until MAAS has finished deploying the machine before installing your application.

ubuntu@maas00:~$ juju status
Model    Controller       Cloud/Region  Version  SLA          Timestamp
default  maas-controller  maas          2.4.4    unsupported  18:03:03Z

App      Version  Status   Scale  Charm    Store       Rev  OS      Notes
haproxy           waiting    0/1  haproxy  jujucharms   46  ubuntu  

Unit       Workload  Agent       Machine  Public address  Ports  Message
haproxy/0  waiting   allocating                                  waiting for machine

Here’s a nice example of how to deploy Ceph:

https://docs.jujucharms.com/2.4/en/charms-storage-ceph
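
As a rough sketch of what that page walks through (the charm names are from the charm store, but the unit counts and storage size here are just illustrative, so adjust them to your hardware):

# Three monitors, placed on machines that MAAS picks
juju deploy -n 3 ceph-mon

# Three OSD hosts, each requesting two 32G block devices for OSDs
juju deploy -n 3 ceph-osd --storage osd-devices=32G,2

# Relate the OSDs to the monitors
juju add-relation ceph-osd ceph-mon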

Hope this helps.


Thanks for the reply - I had read the docs on adding a MAAS cloud to Juju, but thanks for posting here, it’s useful to see how it’s used in a real-world scenario.

I am wondering about the different types of hosts now: some services I want to deploy to the MAAS-controlled metal, some will go into VMs, some into LXD containers on metal, and some into LXD containers within VMs. How do I handle those different deployment paths effectively? I know MAAS has pods, and if I run Juju on a host it’ll add LXD containers for service deployments, but is there a single way to accomplish this? What’s the best practice here? I saw something about ‘layers’ in Juju but I haven’t dug into it yet. Coming from a Saltstack background, this is all a bit different, and exciting!

Complex deployments that require particular machines (or machine types) are handled by leveraging constraints.

Effectively, when Juju requests a machine from MAAS, it can specify constraints (in fact, Juju always passes some default constraints). MAAS uses those constraints to find a machine for that request and assigns one to Juju. More information can be found at https://docs.jujucharms.com/2.4/en/charms-constraints. Juju defines a set of common constraints, but you can also use ‘tags’, which give you the flexibility to tag machines in MAAS however you want and then target those tags with constraints in Juju.
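
For example, something along these lines (the ‘storage’ tag name and the MAAS CLI profile ‘admin’ are made up for illustration):

# In MAAS: create a tag and attach it to a machine
maas admin tags create name=storage
maas admin tag update-nodes storage add=$SYSTEM_ID   # system ID of the machine

# In Juju: only machines carrying that tag will satisfy this request
juju deploy ceph-osd --constraints "tags=storage mem=8G"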

Note: When you use MAAS without Juju, you can also use constraints to request machines. See https://docs.maas.io/2.5/en/api “POST /MAAS/api/2.0/machines/ op=allocate”
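
The CLI equivalent of that API call looks roughly like this (again assuming a MAAS CLI profile named ‘admin’):

# Ask MAAS to allocate any Ready machine matching these constraints
maas admin machines allocate tags=storage cpu_count=2 mem=4096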

That said, for complex deployments (e.g. OpenStack) you would typically create a bundle. A bundle is basically just a configuration that tells Juju how to deploy your application stack. So the bundle does two main things:

  1. Describes what applications you are deploying (e.g. the charm, options, etc)
  2. Defines the relationships between those applications.

With a bundle, you can basically define all services (e.g. ceph, horizon, glance, nova, etc.), define the set of machines to put those services on, and apply constraints to those machines as well. More information can be found at https://docs.jujucharms.com/2.4/en/charms-bundles .
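
A minimal sketch of what such a bundle might look like (the application layout, tag, and storage size are invented for illustration; check the bundle docs for the full syntax):

cat > ceph-bundle.yaml <<'EOF'
series: bionic
applications:
  ceph-mon:
    charm: cs:ceph-mon
    num_units: 3
    constraints: tags=storage
  ceph-osd:
    charm: cs:ceph-osd
    num_units: 3
    constraints: tags=storage
    storage:
      osd-devices: 32G,2
relations:
  - ["ceph-osd:mon", "ceph-mon:osd"]
EOF

juju deploy ./ceph-bundle.yaml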


Thanks again. Although it might sound like I’m building an OpenStack cluster, I’m actually building this as a proof-of-concept to migrate away from OpenStack. Our hardware is generally static and OpenStack imposes too heavy an overhead for our needs.

I thought some more about approaches and options. Constraints do sound like something I can use, assuming I can apply them to target either existing bare metal, a new VM, an existing VM deployment, a new LXD container, or an existing LXD container. I’ll go read some docs. Thanks!

I was just using OpenStack as an example as it is one of the most complex charm bundles you can have.

Constraints will indeed allow you to do what you need. Keep in mind that Juju also has placement constraints that allow you to deploy a charm onto a new machine or an already-running machine (whether physical, virtual, or a container). The role that MAAS plays for Juju is just that of a cloud.

Keep in mind that when using pods in MAAS, VMs inside a pod appear to Juju just like any other machine in MAAS, so Juju doesn’t specifically know it is a VM. However, you can leverage constraints to select which pod you want your machine to be created in, or, if you already have pre-created VMs, to specify which VM to deploy on. Once Juju has deployed onto any machine from MAAS (whether virtual or physical), you can use placement constraints to put other services in there.
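
Concretely, placement looks something like this (the hostname is made up; ‘0’ is whatever number Juju gave the first machine):

# Deploy onto a specific MAAS machine (physical, or a pre-created pod VM) by hostname
juju deploy ubuntu --to pod-vm-1.maas

# Then stack another application into an LXD container on that same machine
juju deploy haproxy --to lxd:0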


OK, I’ve had some time to test out a few deployments now, and it looks like:
If I want to deploy directly into an LXD container on metal, I can’t do this with a MAAS or manual cloud (the virt_type constraint is not available).

Which leads me to think I’ll Juju-deploy an LXD cluster over MAAS, then spin up a cloud for that cluster in my controller, so I can use that to deploy containers to metal without a VM layer/pod in between.
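
In other words, something like this (I’m guessing at the exact flow; remote LXD clouds only landed in Juju 2.5, and the names/endpoint here are placeholders):

# Register the LXD cluster as a cloud (interactive; choose type "lxd"
# and point it at the cluster endpoint, e.g. https://10.0.10.50:8443)
juju add-cloud

# Add a credential for it, then bootstrap a controller onto the cluster
juju add-credential lxd-cluster-devops
juju bootstrap lxd-cluster-devops lxd-controller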

I found this handy-looking charm, but I get stuck with the unit at ‘preparing machine’, and I’m not sure why. I’m working on debugging it, but the logs don’t seem to show much. Am I barking up the wrong tree here?

Looks like the charm simply waits silently on one unit for the rest of the cluster units to arrive before attempting to complete the process. I added more units and it’s working now!

I’ll add the lxd cluster cloud to Juju now and see what happens next.

Well, I hit a bump bootstrapping the cluster in Juju. Two bumps, really.
Firstly: I wanted to bootstrap into a container that’s not inside the LXD cluster. The docs indicate that should be possible in 2.5 (I’m running 2.5-beta1, maybe it’s still to-do?). My MAAS cloud controller is deployed to a container inside a VM on the MAAS cloud, and I was hoping to put the LXD cluster controller in the same place, for neatness. I tried: juju bootstrap lxd-cluster-devops --to maas-devops/sweet-bull.maas, but that errors saying the availability zone is not valid.

Secondly: Bootstrapping the controller inside the LXD cluster fails because it’s looking for the default lxdbr0 interface for connectivity, which the cluster doesn’t have (actually, the cluster charm looks like it’s supposed to configure fan networking, but all I see is the physical interface on the deployed nodes). So some work to do there too!

Hi Phil,

I don’t fully understand what you are trying to do, but hopefully, this will provide some more info/context.

MAAS can only really deploy physical or virtual machines. If you are using a KVM pod, MAAS allows you to create a VM that then becomes just like any other machine in MAAS. However, MAAS cannot do the same with LXD and containers. Please also note that 2.5b2+ will allow you to choose any physical machine in MAAS and convert it into a pod (i.e. MAAS will deploy the machine with Ubuntu bionic, install/configure KVM/libvirt/etc., and add it to MAAS as a KVM pod).

That said, in the context of Juju, MAAS simply provides instances that Juju puts applications on. For Juju, it doesn’t really matter whether the instance MAAS provided is a physical system or a virtual machine (one that, for example, was automatically created from a KVM pod). However, you as an administrator may provide constraints to request an instance that better suits your needs. Furthermore (again, in the context of MAAS), Juju can also place a service/application inside an LXD container on an instance/machine you have already deployed (with Juju). Note that Juju can do other fancy things beyond this, but here we are just talking in the context of MAAS.

Example 1

To provide a more practical example, this is what I have done:

  • maas00 -> Machine (physical) where MAAS is running
  • node04 -> Machine (physical) I enlisted/commissioned/deployed with MAAS
    • I deployed node04 and made it a KVM pod
    • I added a tag to the node04 KVM pod (test-pod).

Now I’m going to bootstrap my environment, and I’m going to put the controller on a virtual machine inside the node04 KVM pod. I can do this with:

ubuntu@maas00:~$ juju bootstrap maas maas-controller \
     --credential admin \
     --bootstrap-constraints "tags=test-pod mem=4000 cores=2"
Creating Juju controller "maas-controller" on maas
Looking for packaged Juju agent version 2.4.4 for amd64
Launching controller instance(s) on maas...
 - ecyxhe (arch=amd64 mem=3.9G cores=2)
Installing Juju agent on bootstrap instance
Fetching Juju GUI 2.14.0
Waiting for address
Attempting to connect to 10.90.90.21:22
Connected to 10.90.90.21
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.90.90.21 to verify accessibility...
Bootstrap complete, "maas-controller" controller now available
Controller machines are in the "controller" model
Initial model "default" added

With the above, Juju asked MAAS for a machine with 2 cores, 4 GB of RAM, and the tag ‘test-pod’. No physical machine matched those constraints, but the pod did (it is tagged ‘test-pod’ and has available resources), so MAAS automatically created a VM with the requested CPU/memory and deployed it with Ubuntu.

Note that you can use the same kind of constraints when deploying a service (with --constraints rather than --bootstrap-constraints; I just used bootstrap as the example).
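
For instance, a deploy-time request against the same tag might look like this (the memory/core numbers are arbitrary):

# Request a 2-core, 8G machine tagged 'test-pod' for this application
juju deploy ubuntu --constraints "tags=test-pod mem=8G cores=2"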

Example 2 - Putting applications inside a container

Now, since we want to put services inside containers on a physical machine, I’ll do so with Juju. There are various ways of doing it, but I’ll pick one approach as an example. For this, I’ll use node06.maas (which I just added to my MAAS for this example). To do so, I run:

juju deploy ubuntu --to node06.maas

The above uses the ‘ubuntu’ charm and deploys it on node06; it could have been any other application/charm. Once completed, we see something like:

ubuntu@maas00:~$ juju status
Model    Controller       Cloud/Region  Version  SLA          Timestamp
default  maas-controller  maas          2.4.4    unsupported  03:16:25Z

App     Version  Status  Scale  Charm   Store       Rev  OS      Notes
ubuntu  18.04    active      1  ubuntu  jujucharms   12  ubuntu  

Unit       Workload  Agent  Machine  Public address  Ports  Message
ubuntu/0*  active    idle   0        10.90.90.22            ready

Machine  State    DNS          Inst id  Series  AZ       Message
0        started  10.90.90.22  estbnr   bionic  default  Deployed

Now, I want to put another service/application inside the machine I just deployed, which is machine 0. I can do this with:

ubuntu@maas00:~$ juju deploy ubuntu ubuntu2 --to lxd:0

The above deploys another instance of the same ‘ubuntu’ charm/application to a container inside machine 0, which results in:

ubuntu@maas00:~$ juju status
Model    Controller       Cloud/Region  Version  SLA          Timestamp
default  maas-controller  maas          2.4.4    unsupported  03:26:36Z

App      Version  Status  Scale  Charm   Store       Rev  OS      Notes
ubuntu   18.04    active      1  ubuntu  jujucharms   12  ubuntu  
ubuntu2  18.04    active      1  ubuntu  jujucharms   12  ubuntu  

Unit        Workload  Agent  Machine  Public address  Ports  Message
ubuntu2/0*  active    idle   0/lxd/0  10.90.90.23            ready
ubuntu/0*   active    idle   0        10.90.90.22            ready

Machine  State    DNS          Inst id              Series  AZ       Message
0        started  10.90.90.22  estbnr               bionic  default  Deployed
0/lxd/0  started  10.90.90.23  juju-04fb1d-0-lxd-0  bionic  default  Container started

** NOTE **: If you are wondering what the difference between ubuntu and ubuntu2 is: they are just different (instances of the) applications, which is similar to doing this:

juju deploy ceph ceph-storage-cluster-1 -n 4
juju deploy ceph-osd ceph-osd-1
juju add-relation ceph-storage-cluster-1 ceph-osd-1

juju deploy ceph ceph-storage-cluster-2 -n 5
juju deploy ceph-osd ceph-osd-2
juju add-relation ceph-storage-cluster-2 ceph-osd-2

… which means two different clusters.

Example 3

Lastly, instead of doing example 2 above, you could simply have done this:

juju deploy ubuntu --to lxd

The above simply requests a machine from MAAS (any machine, since I didn’t pass constraints; it could also have been a VM from inside a pod) and installs the application/charm inside a container on it.

ubuntu@maas00:~$ juju status
Model    Controller       Cloud/Region  Version  SLA          Timestamp
default  maas-controller  maas          2.4.4    unsupported  03:57:57Z

App     Version  Status  Scale  Charm   Store       Rev  OS      Notes
ubuntu  18.04    active      1  ubuntu  jujucharms   12  ubuntu  

Unit       Workload  Agent  Machine  Public address  Ports  Message
ubuntu/1*  active    idle   1/lxd/0  10.90.90.23            ready

Machine  State    DNS          Inst id              Series  AZ       Message
1        started  10.90.90.22  estbnr               bionic  default  Deployed
1/lxd/0  started  10.90.90.23  juju-04fb1d-1-lxd-0  bionic  default  Container started

So what’s the difference? In example 2, I deployed a charm/application onto a physical machine and took advantage of that machine to deploy other services/charms/applications inside containers on it. In this example, however, I simply requested that my service be put inside a container on a new machine (so there’s really just 1 application, not 2).

Hope this helps clarify things a bit.

PS: I am by no means a Juju expert nor a day-to-day operator of it, but I will try to help as much as possible. If you want to ask more advanced Juju questions, we also have this Discourse: https://discourse.jujucharms.com/


That’s awesome. Thanks so much for the reply. I simply missed this functionality in my tunnel-vision of assumptions about how this all should work together. I’ll go away and try this out now. Thanks again for a really helpful, in-depth explanation. I really appreciate you spending your valuable time to illustrate this in detail.

I think I’m maybe moving into Juju territory here (or LXD territory even) but posting here for continuity. I’ll take it to their forums if it’s deemed more appropriate now.

OK, I’m trying things out here - 4 machines in MAAS sitting in the Ready state. In Juju I deploy 3 ceph-mon units into new containers on those machines. The machines get provisioned, and then the container provisioning fails because Juju is looking for the default lxdbr0 bridge interface, which doesn’t exist.

I guess it’s possibly due to the pre-configuration I’ve added to each machine inside MAAS (I’ve added bridge interfaces to each node), which MAAS preserves at deployment and which may be preventing LXD from setting up its default bridge.

I’m not quite sure what the expected behavior is from the Juju perspective when the interfaces are already bridged. What I do know, however, is that Juju should be bridging the interface that LXD/the containers attach to. In the examples above, I had not created a bridge beforehand, and Juju was smart enough to grab the physical system’s interface and create a bridge on top of it to put containers in.
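
If the pre-existing bridges are confusing that logic, it may be worth looking at the model’s container networking settings. This is only a sketch, assuming the container-networking-method model key that appeared around Juju 2.5; behavior may differ on other versions:

# Show how Juju currently decides to wire up containers in this model
juju model-config container-networking-method

# Ask Juju to attach containers to the provider's (MAAS's) bridges/subnets
juju model-config container-networking-method=provider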


Additionally, something I found unexpected happened. I deployed with this command:
juju deploy bionic/ceph-mon -n 3 --to lxd
The end result looks like this:
Every 2.0s: juju status --color maas-hl: Fri Oct 19 10:13:13 2018

Model          Controller    Cloud/Region  Version    SLA          Timestamp
ceph-lxd-maas  maas-homelab  maas-homelab  2.5-beta1  unsupported  10:13:13-07:00

App       Version  Status   Scale  Charm     Store       Rev  OS      Charm version  Notes
ceph-mon  12.2.7   blocked    2/3  ceph-mon  jujucharms   27  ubuntu

Unit         Workload  Agent       Machine  Public address  Ports  Message
ceph-mon/0   waiting   allocating  0/lxd/0                         waiting for machine
ceph-mon/1   blocked   idle        1        10.0.10.116            Insufficient peer units to bootstrap cluster (require 3)
ceph-mon/2*  blocked   idle        2        10.0.10.117            Insufficient peer units to bootstrap cluster (require 3)

Machine  State    DNS          Inst id  Series  AZ       Message
0        started  10.0.10.115  xm8wq4   bionic  default  Deployed
0/lxd/0  pending               pending  bionic
1        started  10.0.10.116  cxwtaw   bionic  default  Deployed
2        started  10.0.10.117  8qcqnb   bionic  default  Deployed

So it appears only the first unit was assigned to a container (and failed to deploy due to the missing lxdbr0 interface), and the other two ended up directly on the machines’ bare metal.

Try:

juju deploy ceph-mon -n 3 --to lxd:0,lxd:1,lxd:2

The --to option takes a comma-separated list of placement directives, one per unit; a bare --to lxd only applies to the first unit, which matches what you saw.

@seffyroff, it’s a bit old, but did you ever solve this one to your satisfaction?