MAAS/Juju charmed Ceph on deployed physical servers

Hello
I have 3 physical servers: 1 running MAAS, and 2 other, larger servers deployed as KVM hosts.
The Juju controller is installed in a VM on one of the KVM hosts.
I am now trying to deploy Juju charmed Ceph. I want to use 1 VM on one physical host with a virtual disk, plus the 2 physical servers with raw disks, and I plan to add a 3rd physical server later.
I created a VM on one of the KVM hosts and did a juju add-machine. ceph-osd does get deployed to that VM, but not to the physical servers; I get constraints errors. I could create additional VMs, but I would prefer to use the bare-metal physical servers with raw disks dedicated to Ceph. Is this not possible because the physical servers are in a Deployed state rather than a Ready state, or is it because I am not specifying the correct constraints?
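For what it's worth, this is the rough sequence I think would be needed on the MAAS side: release the two servers back to Ready, tag them with the ceph tag that my constraint asks for, and then add the remaining units. The MAAS CLI profile name (admin) and the system IDs (abc123, def456) are just placeholders for my real values, so treat this as an unverified sketch:

# WARNING: releasing a Deployed machine returns it to Ready, so anything
# currently running on it (including KVM guests) would be lost
maas admin machine release abc123
maas admin machine release def456

# Create the tag the Juju constraint refers to and attach it to both machines
maas admin tags create name=ceph
maas admin tag update-nodes ceph add=abc123
maas admin tag update-nodes ceph add=def456

# Add the remaining units; the application's tags=ceph constraint should
# make MAAS hand back the tagged bare-metal servers
juju add-unit ceph-osd -n 2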
Any idea of the performance difference compared to just using VMs? Also, what happens in this scenario if a disk fails?
I am now considering using ceph-deploy instead of Juju charmed Ceph, and using the server running MAAS as a Ceph node as well.

juju deploy -n 3 --config ./ceph.yaml --constraints tags=ceph --debug ceph-osd
19:44:03 INFO juju.cmd supercommand.go:56 running juju [2.9.37 51672c0e4243f0d0e73f13cf1bbf5c5a9a720632 gc go1.18.8]
19:44:03 DEBUG juju.cmd supercommand.go:57 args: []string{"/snap/juju/21315/bin/juju", "deploy", "-n", "3", "--config", "./ceph.yaml", "--constraints", "tags=ceph", "--debug", "ceph-osd"}
19:44:03 INFO juju.juju api.go:86 connecting to API addresses: [172.18.20.200:17070]
19:44:03 DEBUG juju.api apiclient.go:1153 successfully dialed "wss://172.18.20.200:17070/api"
19:44:03 INFO juju.api apiclient.go:688 connection established to "wss://172.18.20.200:17070/api"
19:44:03 INFO juju.juju api.go:86 connecting to API addresses: [172.18.20.200:17070]
19:44:03 DEBUG juju.api apiclient.go:1153 successfully dialed "wss://172.18.20.200:17070/model/619f699c-6400-4efd-8894-fb39aad621a4/api"
19:44:03 INFO juju.api apiclient.go:688 connection established to "wss://172.18.20.200:17070/model/619f699c-6400-4efd-8894-fb39aad621a4/api"
19:44:03 DEBUG juju.cmd.juju.application.deployer deployer.go:396 cannot interpret as local charm: file does not exist
19:44:03 DEBUG juju.cmd.juju.application.deployer deployer.go:208 cannot interpret as a redeployment of a local charm from the controller
19:44:04 DEBUG juju.cmd.juju.application.store charmadapter.go:142 cannot interpret as charmstore bundle: xenial (series) != "bundle"
19:44:04 INFO cmd charm.go:452 Preparing to deploy "ceph-osd" from the charmhub
19:44:06 INFO cmd charm.go:550 Located charm "ceph-osd" in charm-hub, revision 513
19:44:06 INFO cmd charm.go:236 Deploying "ceph-osd" from charm-hub charm "ceph-osd", revision 513 in channel stable on xenial
19:44:06 DEBUG juju.api monitor.go:35 RPC connection died
19:44:06 DEBUG juju.api monitor.go:35 RPC connection died
ERROR cannot add application "ceph-osd": application already exists:
deploy application using an alias name:
juju deploy
or use remove-application to remove the existing one and try again.
19:44:06 DEBUG cmd supercommand.go:537 error stack:
github.com/juju/juju/api/client/application.(*Client).Deploy:207: cannot add application "ceph-osd": application already exists
github.com/juju/juju/cmd/juju/application.(*deployAPIAdapter).Deploy:144:
github.com/juju/juju/cmd/juju/application/deployer.(*deployCharm).deploy:262: cannot add application "ceph-osd": application already exists:
deploy application using an alias name:
juju deploy
or use remove-application to remove the existing one and try again.
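
The error itself points at the two ways out of my earlier half-finished attempt, so I assume I either remove the existing ceph-osd application and redeploy, or deploy a second copy under an alias (ceph-osd-metal below is just a name I made up):

# Option 1: remove the stuck application and its pending units, then redeploy
juju remove-application ceph-osd
juju deploy -n 3 --config ./ceph.yaml --constraints tags=ceph ceph-osd

# Option 2: keep the existing application and deploy another copy under an alias
juju deploy -n 2 --config ./ceph.yaml --constraints tags=ceph ceph-osd ceph-osd-metal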

:~$ juju status
Model  Controller     Cloud/Region   Version  SLA          Timestamp
ceph   maas2-default  maas2/default  2.9.37   unsupported  19:44:19-05:00

App       Version  Status   Scale  Charm     Channel  Rev  Exposed  Message
ceph-osd           waiting    0/3  ceph-osd  stable   513  no       waiting for machine

Unit         Workload  Agent       Machine  Public address  Ports  Message
ceph-osd/9   waiting   allocating  18                              waiting for machine
ceph-osd/10  waiting   allocating  16                              waiting for machine
ceph-osd/11  waiting   allocating  17                              waiting for machine

Machine  State    Address        Inst id       Series  AZ       Message
3        started  172.18.20.201  vm-45-ceph-1  focal   default  Deployed
16       down                    pending       xenial           No available machine matches constraints: [('agent_name', ['619f699c-6400-4efd-8894-fb39aad621a4']), ('arch', ['amd64']), ('tags', ['ceph']), ('zone', ['default'])] (resolved to "arch=amd64/generic tags=ceph zone=default")
17       down                    pending       xenial           No available machine matches constraints: [('agent_name', ['619f699c-6400-4efd-8894-fb39aad621a4']), ('arch', ['amd64']), ('tags', ['ceph']), ('zone', ['default'])] (resolved to "arch=amd64/generic tags=ceph zone=default")
18       down                    pending       xenial           No available machine matches constraints: [('agent_name', ['619f699c-6400-4efd-8894-fb39aad621a4']), ('arch', ['amd64']), ('tags', ['ceph']), ('zone', ['default'])] (resolved to "arch=amd64/generic tags=ceph zone=default")
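
To check why MAAS says no machine matches, I have been listing each machine's status and tags; this assumes the MAAS CLI is logged in under a profile called admin and that jq is installed:

# Print hostname, status and tags for every machine MAAS knows about;
# only a machine in the Ready state carrying the 'ceph' tag can satisfy the constraint
maas admin machines read | jq -r '.[] | [.hostname, .status_name, (.tag_names | join(","))] | @tsv'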

@ubumadmin, you should probably ask this question on the Juju forum.