Description
Observation (gray area — not a clear violation)
File: `frameworks/go-fasthttp/main.go`
What happens
In `loadDatasetLarge()`, the entire compression response is pre-built at startup:

```go
func loadDatasetLarge() {
	// ...reads /data/dataset-large.json...
	items := make([]ProcessedItem, len(raw))
	for i, d := range raw {
		items[i] = ProcessedItem{
			// ...copies all fields...
			Total: math.Round(d.Price*float64(d.Quantity)*100) / 100, // pre-computed
		}
	}
	jsonLargeResponse, _ = json.Marshal(ProcessResponse{Items: items, Count: len(items)})
}
```

The compression handler then serves this pre-serialized JSON, only applying gzip per request:
```go
compressedHandler = fasthttp.CompressHandlerLevel(func(ctx *fasthttp.RequestCtx) {
	ctx.SetBody(jsonLargeResponse) // pre-serialized at startup
}, flate.BestSpeed)
```

What the spec says
- Compute total field for each item, serialize to JSON (~1MB)
- Must compress with gzip on the fly when `Accept-Encoding: gzip` is present
- Must NOT pre-compress the response at startup
Analysis
- ✅ Gzip compression happens per-request (not pre-compressed)
- ✅ Uses `flate.BestSpeed` (level 1) as required
- ⚠️ Totals are pre-computed and the JSON is pre-serialized at startup
- The spec says "compute total field for each item, serialize to JSON" but doesn't explicitly say "per-request" for `/compression` the way it does for `/json`
This is a gray area. The benchmark is primarily measuring gzip compression throughput on this endpoint, and the compression itself is done per-request. Pre-serializing the JSON means the endpoint measures pure compression speed rather than JSON serialization + compression. Whether that's the intent or not is up to the benchmark maintainers.
Note
The `/json` endpoint correctly computes totals per-request in `processHandler()`; no issue there. This observation is limited to `/compression`.
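For reference, the per-item rounding both handlers use (price times quantity, rounded to two decimal places) behaves like this in isolation; `round2` is a hypothetical helper name extracted from the inline expression shown earlier:

```go
package main

import (
	"fmt"
	"math"
)

// round2 mirrors the Total computation: round price*quantity to 2 decimals.
func round2(price float64, quantity int) float64 {
	return math.Round(price*float64(quantity)*100) / 100
}

func main() {
	fmt.Println(round2(19.99, 3)) // 59.97
	fmt.Println(round2(2.50, 4))  // 10
}
```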