Upstream Linux kernel testing has grown rapidly on many fronts during the past few years: kselftest is now more stable, KUnit is gaining coverage, and many out-of-tree test suites keep growing. Numerous automated systems run those tests continuously, and regzbot has become a central place for tracking regressions, with weekly reports for mainline.
Despite these monumental achievements, mainline and stable releases still happen entirely at their maintainers’ discretion, in the absence of any known blocking regression. Linux kernel development has worked for many years by relying on subsystem maintainers’ testing tools and best-effort test reports. But just as open source has brought contributors together around a single code base, testing can bring kernel users together. Rather than relying heavily on downstream testing, we could bring some of that real-world quality control upstream too.
The aim of this talk is to raise a number of critical questions: What would it take to gate releases on a set of passing test results, even a basic set to start with? Can the upstream kernel community ever make such a culture shift? Could release tags include certified test results as a meaningful quality measurement for their users?
Guillaume Tucker, Collabora