Issues with Grub / Boot while deploying


#1

Hello
I’m working with MAAS, trying to deploy Ubuntu 16.04 on my servers. It does work, but not as I would like: the deployment completes, but I have errors in my logs and GRUB does not get installed:

Setting up os-prober (1.70ubuntu3.3) …
Setting up thermald (1.5-2ubuntu4) …
Running in chroot, ignoring request.
invoke-rc.d: policy-rc.d denied execution of start.
Setting up grub-pc (2.02~beta2-36ubuntu3.18) …

Creating config file /etc/default/grub with new version
Generating grub configuration file …
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.

I have a lot of these errors in my logs…

In my curtin file I asked for GRUB to be installed on both of my (LVM) disks, but it does not actually get installed; I have to install it myself once the server is up.
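For reference, this is roughly what I run by hand after first boot (a sketch, assuming a Legacy/BIOS install and that both disks should be bootable):

```shell
# Manual workaround after first boot (sketch, assumes Legacy/BIOS boot):
# install GRUB to the MBR of both RAID-1 members so either disk can boot,
# then regenerate the GRUB configuration
sudo grub-install /dev/sdb
sudo grub-install /dev/sdc
sudo update-grub
```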

Moreover, my /boot is almost empty: there are no init*, vmlinuz*, or config* files, so I don’t even know how my server is booting. I think I’m missing something big in my config, since I’m new to MAAS.

Here is my storage config:

storage:
  version: 1
  config:
    - id: sdb
      type: disk
      ptable: msdos
      path: /dev/sdb
      name: main_disk
      wipe: superblock-recursive
      grub_device: true
    - id: sdc
      type: disk
      ptable: msdos
      path: /dev/sdc
      wipe: superblock-recursive
      grub_device: true
    - id: sdb1
      type: partition
      number: 1
      size: 2GB
      device: sdb
      flag: boot
      wipe: superblock-recursive
    - id: sdc1
      type: partition
      number: 1
      size: 2GB
      device: sdc
      flag: boot
      wipe: superblock-recursive
    - id: md0
      type: raid
      name: md0
      raidlevel: 1
      devices:
        - sdb1
        - sdc1
      ptable: msdos
    - id: md0_format
      fstype: ext4
      type: format
      volume: md0
    - id: mount-md0_format
      device: md0_format
      path: /boot
      type: mount
    - id: sdb2
      type: partition
      size: 221GB
      device: sdb
      wipe: superblock-recursive
    - id: sdc2
      type: partition
      size: 221GB
      device: sdc
      wipe: superblock-recursive
    - id: md1
      type: raid
      name: md1
      raidlevel: 1
      devices:
        - sdb2
        - sdc2
      ptable: msdos
    - id: volgroup1
      name: vg00
      type: lvm_volgroup
      devices:
        - md1
    - id: lvmpart1
      name: root
      #size: 924G
      type: lvm_partition
      volgroup: volgroup1
    - id: lv1_fs
      name: storage
      type: format
      fstype: ext4
      volume: lvmpart1
    - id: lv1_mount
      type: mount
      path: /
      device: lv1_fs
swap:
  filename: swap.img
  size: 0
grub:
  install_devices:
    - /dev/sdb1
    - /dev/sdc1

It could come from the way MAAS commissions and deploys in UEFI or Legacy mode. How can I check whether it is commissioning and deploying using Legacy or UEFI? My BIOS settings specify Legacy.
I’m also deploying via the web UI, so I’m wondering whether Curtin only uses my curtin_userdata layout for storage, or whether it uses the storage layout specified in the GUI. Once deployed, the machine has the config I asked for in curtin_userdata, but maybe it uses the GUI config for the first boot.
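The only check I know is from a running shell on the machine (assuming sysfs is mounted; the firmware directory only exists when the kernel was booted via UEFI):

```shell
# Legacy vs UEFI check (assumes sysfs is mounted):
# /sys/firmware/efi exists only when the system booted via UEFI
if [ -d /sys/firmware/efi ]; then
    echo "UEFI"
else
    echo "Legacy"
fi
```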

I hope you will be able to help me, and that this is the right place to ask!
Thank you
Ludwig


#2

Hi there,

A few questions:

  1. What versions are you using, both of MAAS and Curtin (e.g. dpkg -l | grep maas and dpkg -l | grep curtin)?
  2. Are you sending that storage configuration in curtin_userdata, or is the machine configured over the UI/API?
  3. If you are using curtin_userdata for question 2, why are you doing so instead of configuring the machine’s storage in MAAS?
  4. What OS are you deploying?

Thanks.


#3

Hello 🙂
Thank you for your answer!

  1. I’m using:
     MAAS 2.4.2
     Curtin 18.1

  2. I’m sending that storage config in curtin_userdata, which results in the layout below once deployed. It is the storage config I want, but still with an empty /boot and GRUB that has to be reinstalled.

$ lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdb               8:16   0 223.6G  0 disk
├─sdb2            8:18   0   221G  0 part
│ └─md1           9:1    0 220.9G  0 raid1
│   └─vg00-root 253:0    0 220.9G  0 lvm   /
└─sdb1            8:17   0     2G  0 part
  └─md0           9:0    0     2G  0 raid1 /boot
sdc               8:32   0 223.6G  0 disk
├─sdc2            8:34   0   221G  0 part
│ └─md1           9:1    0 220.9G  0 raid1
│   └─vg00-root 253:0    0 220.9G  0 lvm   /
└─sdc1            8:33   0     2G  0 part
  └─md0           9:0    0     2G  0 raid1 /boot
sda               8:0    0 894.3G  0 disk

  3. Because I want it to be fully automatic later and, for the moment, to go faster: I have a lot of new machines to install and I don’t want to spend time in the GUI.

  4. I’m deploying the servers on Ubuntu 16.04 LTS.

One more thing: it seems the errors I linked above are not responsible for my empty /boot, because I found old deployment logs on another server and the same messages are there too, yet that deployment was OK:

Generating grub configuration file …
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.

But at that time I was not using a custom storage config, so the problem seems to come from it…

Thank you !