# Performance
This document summarizes the current local aq vs jq benchmark results for the harvested upstream jq direct-success test corpus.
The raw machine-generated artifacts live in the repository at:

- `benchmarks/jq-upstream-benchmark.md`
- `benchmarks/jq-upstream-benchmark.json`
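For ad-hoc analysis it can be handy to load the JSON report directly. This is a minimal sketch that assumes only that the file is valid JSON; it makes no assumption about the report's internal schema:

```python
import json
from pathlib import Path

def load_report(path):
    """Load a machine-generated benchmark report as plain Python data."""
    return json.loads(Path(path).read_text())
```

For example, `load_report("benchmarks/jq-upstream-benchmark.json")` returns the parsed report for further slicing.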
## Setup
- Sources: jq upstream `tests/base64.test`, `tests/jq.test`, `tests/man.test`, `tests/manonig.test`, `tests/onig.test`, `tests/optional.test`, and `tests/uri.test`
- Harvested direct-success cases: 836
- Comparable cases in this workspace: 833
- `aq` binary: `target/release/aq`
- `jq` binary: local jq master build at `tmp/jq-master/build/install-pure/bin/jq`
- jq version: `jq-master-69785bf-dirty`
- Warmup runs per case: 1
- Measured runs per case: 3
- Per-run timeout: 10.0s
The benchmark command is:

```sh
python3 scripts/jq_upstream_benchmark.py \
  --jq-binary "$PWD/tmp/jq-master/build/install-pure/bin/jq"
```

Use an absolute jq path here. The harness changes working directories while it runs module and fixture cases.
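The measurement loop behind those settings can be sketched as follows. This is an illustrative simplification, not the actual `scripts/jq_upstream_benchmark.py` implementation; it shows the warmup-then-median shape with the Setup values (1 warmup, 3 measured runs, 10s timeout) as defaults:

```python
import statistics
import subprocess
import time

def median_runtime(cmd, stdin_data=b"", warmup=1, runs=3, timeout=10.0):
    """Run `cmd` several times and return the median wall-clock seconds.

    Warmup runs are executed but their timings are discarded, so the
    median covers only the measured runs.
    """
    samples = []
    for i in range(warmup + runs):
        start = time.perf_counter()
        subprocess.run(cmd, input=stdin_data, capture_output=True,
                       timeout=timeout, check=True)
        elapsed = time.perf_counter() - start
        if i >= warmup:  # keep only the measured runs
            samples.append(elapsed)
    return statistics.median(samples)
```

Reporting per-case medians rather than means keeps one slow outlier run from skewing a case's result.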
## Headline
On the latest local jq-master run, aq is broadly at parity with jq and slightly ahead on aggregate.
Aggregate results from the saved report:

- Compared cases: 833
- `aq` faster cases: 411
- `jq` faster cases: 185
- Roughly equal cases: 237
- Uncomparable cases: 3
- Sum of jq medians: 1.343s
- Sum of aq medians: 1.261s
- Median `aq/jq` ratio: 0.95x
- Geometric mean `aq/jq` ratio: 0.96x
Interpretation:

- Ratios below `1.00x` mean `aq` is faster.
- Ratios above `1.00x` mean `aq` is slower.
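To make the two aggregate ratios concrete, here is how a median and geometric-mean `aq/jq` ratio can be computed from per-case medians. The sample numbers are made up for illustration, not taken from the report:

```python
import math
import statistics

def aggregate_ratios(jq_medians, aq_medians):
    """Per-case aq/jq ratios -> (median ratio, geometric-mean ratio)."""
    ratios = [a / j for a, j in zip(aq_medians, jq_medians)]
    geo = math.exp(statistics.fmean(math.log(r) for r in ratios))
    return statistics.median(ratios), geo

# Illustrative numbers only: one case where aq is 2x faster,
# one at parity, one where aq is 2x slower.
med, geo = aggregate_ratios([2.0, 1.0, 1.0], [1.0, 1.0, 2.0])
# both come out ~1.0 for this symmetric example
```

The geometric mean is the natural aggregate for ratios: a 2x win and a 2x loss cancel out exactly, whereas an arithmetic mean would be biased above 1.0.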
The current release build is in the same performance tier as jq master on this corpus, with a small overall aggregate edge to aq.
## Startup
Focused local one-shot measurements against the same jq-master build came out to:

- `aq -n null`: about 1.30ms
- `jq -n null`: about 1.50ms
- `aq . --compact` on `{"a":1}`: about 1.32ms
- `jq . -c` on `{"a":1}`: about 1.35ms
So startup is also essentially at parity, with aq slightly ahead in these small local checks.
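One-shot startup numbers like these can be reproduced with a minimal timer along the following lines. This is a sketch, not the harness actually used for the figures above; it times a full spawn-to-exit cycle per run, and the binary paths in the usage comment are the ones from the Setup section:

```python
import statistics
import subprocess
import time

def one_shot_ms(cmd, runs=20):
    """Median wall-clock milliseconds for full spawn-to-exit runs of cmd."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, capture_output=True, check=True)
        samples.append((time.perf_counter() - start) * 1e3)
    return statistics.median(samples)

# Usage (paths from the Setup section):
# one_shot_ms(["target/release/aq", "-n", "null"])
# one_shot_ms(["tmp/jq-master/build/install-pure/bin/jq", "-n", "null"])
```

At the ~1.3ms scale, process spawn overhead dominates, so taking the median over many runs matters more than the filter being evaluated.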
## Notable Wins
The biggest speedups in the current run were:
- jq case `#391`, datetime roundtrip pipeline: jq 27.34ms, aq 2.21ms, 0.08x
- jq case `#512`, deep `tojson`/`fromjson`/`flatten` case: jq 11.36ms, aq 2.71ms, 0.24x
- jq case `#513`, deep `tojson` and `fromjson` depth-limit case: jq 5.21ms, aq 2.86ms, 0.55x
These are real wins, but they are no longer representative of the whole story. Against jq master, most of the suite is clustered much closer to parity.
## Notable Slowdowns
The largest remaining slowdowns in the current run were:
- jq case `#104`, destructuring swap: jq 1.46ms, aq 2.23ms, 1.52x
- `manonig.test` case `#8`, regex global match: jq 1.38ms, aq 1.93ms, 1.40x
- jq case `#468`, `try input catch .`: jq 1.57ms, aq 2.16ms, 1.38x
These are still small absolute differences, but they are the main places where plain jq master is currently tighter.
## Heaviest Cases
The heaviest cases called out in the saved report were:
- jq case `#512`, `reduce range(9999) as $_ ([];[.]) | tojson | fromjson | flatten`: jq 11.36ms, aq 2.71ms, 0.24x
- jq case `#513`, `reduce range(10000) as $_ ([];[.]) | tojson | try (fromjson) catch . | (contains("<skipped: too deep>") | not) and contains("Exceeds depth limit for parsing")`: jq 5.21ms, aq 2.86ms, 0.55x
- jq case `#391`, `last(range(365 * 67)|("1970-03-01T01:02:03Z"|strptime("%Y-%m-%dT%H:%M:%SZ")|mktime) + (86400 * .)|strftime("%Y-%m-%dT%H:%M:%SZ")|strptime("%Y-%m-%dT%H:%M:%SZ"))`: jq 27.34ms, aq 2.21ms, 0.08x
The deep JSON cases are still where most of the absolute time is, even though two of the three are already wins.
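The data shape behind cases `#512` and `#513` is worth spelling out: `reduce range(n) as $_ ([]; [.])` builds a list nested `n` levels deep, so the serializer and parser are stressed on recursion depth rather than data volume. A small Python analogue of that jq reduce, at a modest depth that stays well under typical recursion limits:

```python
import json

def nest(depth):
    """Analogue of jq's `reduce range(depth) as $_ ([]; [.])`:
    an empty list wrapped in `depth` further lists."""
    acc = []
    for _ in range(depth):
        acc = [acc]
    return acc

v = nest(200)
s = json.dumps(v)          # a string of the form "[[[...]]]"
assert json.loads(s) == v  # roundtrip succeeds at this modest depth
```

At the depths the upstream cases use (9999 and 10000), most recursive JSON serializers and parsers either slow down sharply or hit an explicit depth limit, which is exactly what case `#513` tests for.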
## Caveats
This benchmark compares aq against a local jq-master build, not against the older local jq 1.8.1 baseline used earlier in the project.
There are 3 uncomparable cases in this workspace. In those cases, the local jq-master build failed while aq produced the expected upstream result:

- `modulemeta`
- `modulemeta | .deps | length`
- `modulemeta | .defs | length`
The saved benchmark therefore represents:

- a strict local `aq` vs local jq-master performance comparison
- on the harvested upstream direct-success corpus
- with three jq-master local-build exceptions in the module metadata slice
## Current Read
Current assessment:

- `aq` is competitive with a current jq-master build on the harvested upstream corpus
- overall performance is near parity, with a modest aggregate edge to `aq`
- remaining performance work is targeted rather than structural
The best remaining optimization targets are:

- deep `tojson` and `fromjson` workloads, especially jq case `#512`
- low-millisecond overhead in simple map and arithmetic pipelines
- module metadata behavior, if we want a cleaner apples-to-apples comparison on the last uncomparable cases