Boosting MAAS performance


This guide explains how we measure MAAS performance, details recent improvements, and shows how to track your own metrics.

Our reference case

We’ve improved MAAS API performance through rigorous testing, using scenarios that include five rack controllers, 48 machines per fabric, five VMs per LXD host, and machines with diverse features. Our testing setup is designed to reflect a range of real-world conditions.

We use continuous performance monitoring to track performance across this range of configurations.

Performance monitoring snapshot

Our daily simulations of 10, 100, and 1,000 machines provide detailed insight into scalability. Jenkins jobs exercise both the REST and WebSocket APIs, and the results are stored in a database for analysis through our dashboard.
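
As a rough illustration, the sketch below shows the kind of timing probe such a job might run against the REST machine-listing endpoint. The URL and API key are placeholders for your own region controller and key (see `maas apikey`); the sample count and the use of requests/requests_oauthlib are illustrative choices, not our exact test harness.

```python
# Hypothetical timing probe: measure how long GET /api/2.0/machines/ takes.
import time
import statistics

import requests
from requests_oauthlib import OAuth1

MAAS_URL = "http://maas.example.com:5240/MAAS"  # placeholder region controller
API_KEY = "consumer:token:secret"               # placeholder MAAS API key

# MAAS API keys are OAuth1 credentials in "consumer:token:secret" form,
# signed with the PLAINTEXT method and an empty consumer secret.
consumer_key, token_key, token_secret = API_KEY.split(":")
auth = OAuth1(
    consumer_key,
    client_secret="",
    resource_owner_key=token_key,
    resource_owner_secret=token_secret,
    signature_method="PLAINTEXT",
)

samples = []
for _ in range(10):
    start = time.monotonic()
    response = requests.get(f"{MAAS_URL}/api/2.0/machines/", auth=auth)
    response.raise_for_status()
    samples.append(time.monotonic() - start)

print(f"machines listed: {len(response.json())}")
print(f"median listing time: {statistics.median(samples):.3f}s")
```

Running the same probe against two MAAS releases with the same machine population is an informal way to reproduce comparisons like the one below.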

Comparative testing of stable and development releases helps us catch performance regressions early. For example, machine listings load 32% faster in MAAS 3.2 than in MAAS 3.1 in our tests.

Work done so far

Our commitment to performance optimisation is ongoing. The highlights covered in this guide, such as the faster machine listing in MAAS 3.2, represent only part of our broader performance improvement efforts.

Help us with metrics

Contribute by tracking your MAAS metrics and sharing them with us. Your input on machine counts, network sizes, and performance experiences is valuable. Join the discussion on our Discourse performance forum.
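
If you want concrete numbers to share, one lightweight option is to pull the Prometheus metrics that the region controller exposes and summarise the MAAS-specific families. The sketch below assumes the metrics endpoint is available at /MAAS/metrics on port 5240 (check the MAAS monitoring documentation for your release) and uses the prometheus_client parser purely for convenience.

```python
# Hypothetical metrics snapshot: fetch the region controller's Prometheus
# endpoint and print totals for MAAS-specific metric families.
import requests
from prometheus_client.parser import text_string_to_metric_families

METRICS_URL = "http://maas.example.com:5240/MAAS/metrics"  # placeholder

text = requests.get(METRICS_URL).text
for family in text_string_to_metric_families(text):
    if family.name.startswith("maas_"):
        # Sum across label sets to get one shareable number per metric.
        total = sum(sample.value for sample in family.samples)
        print(f"{family.name}: {total}")
```

Machine counts, network sizes, and observed API latencies gathered this way are exactly the kind of data points that help on the Discourse thread.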

Recent + upcoming

The MAAS 3.2 update has achieved a 32% speed increase in machine listing through the REST API. We’re continuing to work on further enhancements.

What’s next

Our focus now is on optimising other MAAS features, including search functionality. Feedback and insights are always welcome.
