Performance
cmakefmt is fast enough that you never have to think twice about running it
— in local workflows, editor integrations, pre-commit hooks, or CI. That is
not an accident. Speed is a design goal, not a side effect.
Highlights
- 25.33× geometric-mean speedup over `cmake-format` on real-world CMake files
- ~97× faster than `cmake-format` on a 612-file repository (sequential, oomph-lib)
- ~240× faster with `--parallel 8` on the same repository
- Fastest individual fixture: 53.76× speedup (`mariadb_server/CMakeLists.txt`, 656 lines)
- End-to-end format of a 1000+ line synthetic file: ~8.8 ms
Benchmark Environment
Current headline measurements were captured on:

- macOS 26.3.1, aarch64-apple-darwin
- 10 logical CPUs
- rustc 1.94.1
- hyperfine 1.20.0
- cmake-format 0.6.13
Exact numbers vary by machine. What matters across releases is that relative performance trends remain strong and regressions are caught early.
Benchmark Approach
Per-file timings are measured with hyperfine against a corpus of 15
real-world CMakeLists.txt files sourced from well-known open-source projects.
Each fixture is run on its own — one file, one process — so single-file
formatting performance is measured separately from batch overhead.
Both cmakefmt and cmake-format are timed in the same hyperfine invocation
so they share the same system conditions:
```sh
hyperfine --warmup 10 --runs 50 \
  "cmakefmt path/to/CMakeLists.txt" \
  "cmake-format path/to/CMakeLists.txt"
```

- 10 warmup runs — allow the OS page cache and branch predictor to reach a steady state before measurements are recorded
- 50 timed runs — give hyperfine enough samples for a stable mean and tight confidence interval
- Geometric mean — used instead of the arithmetic mean because it is the appropriate average for speedup ratios: each fixture contributes equally regardless of its absolute runtime, so no single slow file can dominate the result
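To make the averaging concrete, the geometric mean of per-fixture speedups can be computed in log space. This is a generic sketch, not cmakefmt's benchmark harness, and the sample ratios below are made up:

```rust
/// Geometric mean of per-fixture speedup ratios: the nth root of the
/// product, computed in log space to avoid overflow on many fixtures.
fn geometric_mean(ratios: &[f64]) -> f64 {
    let log_sum: f64 = ratios.iter().map(|r| r.ln()).sum();
    (log_sum / ratios.len() as f64).exp()
}

fn main() {
    // Hypothetical speedups: one large ratio (50x) among modest ones.
    let speedups = [10.0, 20.0, 25.0, 30.0, 50.0];
    let gm = geometric_mean(&speedups);
    let am: f64 = speedups.iter().sum::<f64>() / speedups.len() as f64;
    // The geometric mean sits below the arithmetic mean because the
    // 50x outlier pulls the arithmetic mean up disproportionately.
    println!("geometric mean = {gm:.2}, arithmetic mean = {am:.2}");
}
```

Note how the arithmetic mean (27.0 here) is dragged toward the outlier, while the geometric mean (≈23.7) weights every ratio equally.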
The fixture corpus and its pinned commit SHAs are stored in
`tests/fixtures/real_world/manifest.toml`. To reproduce the measurements
locally, fetch the corpus and run hyperfine against each file using the commands
in How To Reproduce below.
Benchmark Results
Time comparison across real-world fixtures. Hover a bar to see the exact timings and speedup for that fixture.
Per-fixture speedup. The dashed line marks the geometric mean (25.33×).
The following Criterion estimates cover a 1000+ line synthetic stress-test file:
| Metric | Estimate | 95% CI |
|---|---|---|
| Parser-only | 7.1067 ms | 7.0793–7.1359 ms |
| Formatter-only (from parsed AST) | 1.7545 ms | 1.7425–1.7739 ms |
| End-to-end `format_source` | 8.8248 ms | 8.8018–8.8519 ms |
| Debug/barrier-heavy formatting | 313.98 µs | 311.89–317.54 µs |
All Criterion estimates show a point estimate with a 95% confidence interval — the range within which the true mean is expected to fall 95% of the time. “AST” (Abstract Syntax Tree) is the structured in-memory representation produced by parsing, before formatting.
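For intuition, a 95% interval for a sample mean can be approximated with the normal closed form below. Criterion's real estimates come from bootstrap resampling, so this is only an illustrative sketch with made-up timing samples:

```rust
/// 95% confidence interval for a sample mean via the normal
/// approximation: mean ± 1.96 · (s / √n). Criterion's actual
/// estimator bootstraps instead of using this closed form.
fn ci95(samples: &[f64]) -> (f64, f64) {
    let n = samples.len() as f64;
    let mean = samples.iter().sum::<f64>() / n;
    // Sample variance with Bessel's correction (divide by n - 1).
    let var = samples.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / (n - 1.0);
    let se = (var / n).sqrt(); // standard error of the mean
    (mean - 1.96 * se, mean + 1.96 * se)
}

fn main() {
    // Hypothetical end-to-end timings in milliseconds.
    let times = [8.80, 8.82, 8.83, 8.81, 8.84];
    let (lo, hi) = ci95(&times);
    println!("95% CI: {lo:.3}–{hi:.3} ms");
}
```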
Parallel Batch Throughput
Multi-file runs are single-threaded by default, but opt-in parallelism scales well. The chart shows two real-world batches:
- 220-file batch — the 14-fixture real-world corpus used for the per-file comparison above, with each file formatted repeatedly to produce a stable wall-clock measurement
- 612-file batch — oomph-lib, a larger real-world CMake repository used to measure scaling behavior at a more realistic project size
Hover a point to see the time and speedup vs serial for each batch.
Peak RSS (Resident Set Size — the RAM physically held in memory by the process)
rises from 13.2 MB serial to 20.7 MB at --parallel 8 on the 220-file
batch, and from 11.3 MB to 17.0 MB on the 612-file batch. That is why the
tool defaults to single-threaded execution unless you explicitly request more.
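The shape of that trade-off can be sketched with plain standard-library threads. This is a generic illustration, not cmakefmt's implementation, and `format_file` is a placeholder for real formatting work:

```rust
use std::thread;

// Placeholder standing in for per-file formatting work in this sketch.
fn format_file(path: &str) -> usize {
    path.len()
}

/// Split the file list into `threads` chunks and format each chunk on
/// its own OS thread. Each extra thread holds its own in-flight
/// buffers, which is why more parallelism costs more peak memory and
/// why keeping it opt-in is a reasonable default.
fn format_batch(paths: &[&str], threads: usize) -> usize {
    let chunk = paths.len().div_ceil(threads.max(1)).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = paths
            .chunks(chunk)
            .map(|c| s.spawn(move || c.iter().map(|p| format_file(p)).sum::<usize>()))
            .collect();
        // Scoped threads are joined before the scope returns, so the
        // borrowed `paths` slice stays valid for their whole lifetime.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}
```

`thread::scope` (stable since Rust 1.63) lets the worker threads borrow the path list directly instead of cloning it per thread.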
A direct head-to-head against cmake-format on the 612-file oomph-lib repository
(/usr/bin/time -l) showed:
- `cmake-format` (sequential): 45.69 s real
- `cmakefmt` (serial): 0.47 s real → ~97× faster
- `cmakefmt --parallel 8`: 0.19 s real → ~240× faster
What The Numbers Mean In Practice
The headline numbers matter not as abstract benchmarks, but because they change what feels viable:
- repository-wide `--check` in CI — comfortable
- pre-commit hooks on staged files — instant
- repeated local formatting during development — no delay you will notice
- editor-triggered format-on-save — faster than the save dialog
How To Reproduce
Run the formatter benchmark suite:

```sh
cargo bench --bench formatter
```

Save a baseline before a risky change:

```sh
cargo bench --bench formatter -- --save-baseline before-change
```

Compare a later run against that baseline:

```sh
cargo bench --bench formatter -- --baseline before-change
```