I’m using MAAS and Juju to deploy OpenStack on top of servers that have Mellanox MT27710 ConnectX-4 Lx NICs. After some performance tests, I found that raising the RX and TX ring sizes of the Mellanox NICs gave better performance and fewer packet drops. I set the ring sizes with:
sudo ethtool -G <interface> rx 8192 tx 8192
I was looking for the best way to configure this so that it survives reboots and redeploys. I ended up adding a script to cloudinit-userdata in the Juju model defaults that installs a systemd service on each deployed node to run that command at every boot.
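For reference, here is roughly what that looks like. This is a minimal sketch rather than my exact configuration: the interface name enp3s0f0, the unit name, and the use of write_files plus postruncmd are placeholders and assumptions (Juju restricts which cloud-init keys cloudinit-userdata accepts, so check them against your Juju version):

    cat > cloudinit-userdata.yaml <<'EOF'
    write_files:
      - path: /etc/systemd/system/mlx-ring-buffers.service
        permissions: '0644'
        content: |
          [Unit]
          Description=Set Mellanox RX/TX ring buffer sizes
          After=network-online.target

          [Service]
          Type=oneshot
          ExecStart=/usr/sbin/ethtool -G enp3s0f0 rx 8192 tx 8192

          [Install]
          WantedBy=multi-user.target
    postruncmd:
      - systemctl daemon-reload
      - systemctl enable --now mlx-ring-buffers.service
    EOF
    juju model-defaults cloudinit-userdata="$(cat cloudinit-userdata.yaml)"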
I’m thinking, however, that these settings should be configurable elsewhere in an easier manner. I was wondering whether MAAS would be the right place, just like we can configure MTU settings in MAAS.
Do you think it makes sense to file a feature request for MAAS to support setting Mellanox ring sizes?
Thank you.
I also posted this request in the Juju discourse: https://discourse.juju.is/t/setting-mellanox-rx-and-tx-ring-sizes-with-juju/2388
Rick agreed that this sounds like network-specific configuration that would be done in MAAS, in the interface config.
However, I believe netplan has to support these configuration options before MAAS can set them.
Some background on how we ended up here:
- During an investigation of dropped packets and high retransmit rates, we found that raising the TX and RX ring buffer sizes on the Mellanox ConnectX NICs fixed the issue.
- We added a script in /usr/lib/networkd-dispatcher/ to configure those options on every boot on all the nodes (a rough sketch follows this list).
- However, a better, idempotent approach would be for MAAS to control these NIC settings, so we don’t have to write scripts or charms to configure them.
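For the curious, the hook looks roughly like this. It’s a sketch rather than the exact script we deploy: the file name, the enp* interface match, and the choice of the routable.d hook directory are all placeholders to adapt:

    #!/bin/sh
    # /usr/lib/networkd-dispatcher/routable.d/50-mlx-ring-buffers
    # networkd-dispatcher runs executables in routable.d/ when an interface
    # reaches the "routable" state and exports the interface name as $IFACE.
    set -eu

    case "${IFACE:-}" in
        enp*)  # match the Mellanox ports; adjust to your interface naming
            # Raise the ring buffers; ignore NICs that reject the size.
            ethtool -G "$IFACE" rx 8192 tx 8192 || true
            ;;
    esac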
Can you set these values using a kernel boot argument? If so, you can create a tag that passes the argument to the machine. This should work in any machine state.
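For example, the tag mechanism looks like this (the "admin" profile, tag name, and system ID are placeholders, and the kernel option shown is only an illustration of the mechanism, not a ring-size parameter):

    # Create a tag whose kernel_opts get appended to the kernel command line
    # of every machine carrying the tag (hugepages=1024 is just an example).
    maas admin tags create name=perf-tuning comment='performance tuning' \
        kernel_opts='hugepages=1024'

    # Attach the tag to a machine by its system ID.
    maas admin tag update-nodes perf-tuning add=<system-id>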
I don’t believe I can. That would indeed have been an interesting option. The only way I know to configure this setting is with ethtool.