perf: stream HTML chunks through lol_html without full-document buffering#321

Draft
jevansnyc wants to merge 1 commit into main from feat/streaming-chunked-response

Conversation

@jevansnyc
Collaborator

Summary

  • HtmlRewriterAdapter now uses lol_html's incremental streaming API instead of accumulating the entire response body before processing. The HtmlRewriter is created eagerly in new() with a shared RcVecSink output buffer; each process_chunk call writes directly to the live rewriter and drains whatever output lol_html has ready, so the prefix of the document (everything up to the first matched element) flows downstream immediately rather than waiting for the last byte (see the sketch after this list).
  • process_gzip_to_gzip now delegates to the existing process_through_compression helper (same as deflate and brotli already did), eliminating the read_to_end that buffered the entire decompressed body into a Vec before processing.
  • Tests updated: intermediate chunks may now carry data, so assertions collect output across all chunks rather than asserting intermediates are always empty.
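
A minimal sketch of the shape of that change, not the PR's actual code: StreamingAdapter and SharedBuf stand in for the repo's HtmlRewriterAdapter and RcVecSink, and the a[href] handler is a placeholder for the real content handlers. Only the lol_html calls (HtmlRewriter::new, write, end) are the library's documented API.

```rust
use std::cell::RefCell;
use std::rc::Rc;

use lol_html::errors::RewritingError;
use lol_html::{element, HtmlRewriter, Settings};

/// Stand-in for the PR's RcVecSink: a shared buffer the rewriter's output
/// sink appends to, drained after every write.
type SharedBuf = Rc<RefCell<Vec<u8>>>;

struct StreamingAdapter {
    // Option so the final chunk can take() the rewriter and call end(),
    // which consumes it.
    rewriter: Option<HtmlRewriter<'static, Box<dyn FnMut(&[u8])>>>,
    output: SharedBuf,
}

impl StreamingAdapter {
    fn new() -> Self {
        let output: SharedBuf = Rc::new(RefCell::new(Vec::new()));
        let sink = Rc::clone(&output);
        let rewriter = HtmlRewriter::new(
            Settings {
                element_content_handlers: vec![element!("a[href]", |el| {
                    // Placeholder handler; the real adapter's handlers are
                    // registered elsewhere.
                    if let Some(href) = el.get_attribute("href") {
                        el.set_attribute("href", &href.replace("http:", "https:"))?;
                    }
                    Ok(())
                })],
                ..Settings::default()
            },
            Box::new(move |c: &[u8]| sink.borrow_mut().extend_from_slice(c))
                as Box<dyn FnMut(&[u8])>,
        );
        Self { rewriter: Some(rewriter), output }
    }

    /// Feed one chunk and return whatever lol_html has emitted so far.
    fn process_chunk(&mut self, chunk: &[u8], is_last: bool) -> Result<Vec<u8>, RewritingError> {
        if let Some(rw) = self.rewriter.as_mut() {
            rw.write(chunk)?;
        }
        if is_last {
            if let Some(rw) = self.rewriter.take() {
                rw.end()?; // flushes the buffered tail into the sink
            }
        }
        // Drain: the prefix before the first matched element shows up here
        // on the very first call instead of after the last byte.
        Ok(std::mem::take(&mut *self.output.borrow_mut()))
    }
}
```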

What this does not change yet (follow-up)

The full TTFB improvement requires two more changes tracked for a follow-up PR:

  • publisher.rs: let mut output = Vec::new() still collects the complete processed body before Fastly sends response headers downstream
  • publisher.rs: req.send() is still synchronous; need send_async() + StreamingBody to push bytes to the client as they arrive from origin (see the sketch after this list)
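
A rough sketch of where that follow-up lands, assuming the fastly crate's send_async / PendingRequest::wait / stream_to_client APIs. The "origin" backend name and the pass-through loop are placeholders; in the real change the streaming rewriter adapter would sit inside the loop.

```rust
use std::io::{Read, Write};

use fastly::{Error, Request, Response};

fn stream_html(req: Request) -> Result<(), Error> {
    // send_async + wait() returns as soon as origin response headers arrive,
    // without buffering the origin body.
    let mut beresp = req.send_async("origin")?.wait()?;
    let mut upstream = beresp.take_body();

    // stream_to_client sends the client-facing headers now; the returned
    // StreamingBody implements Write, so bytes reach the client as written.
    let mut downstream = Response::from_status(200)
        .with_content_type(fastly::mime::TEXT_HTML_UTF_8)
        .stream_to_client();

    let mut buf = [0u8; 8192];
    loop {
        let n = upstream.read(&mut buf)?;
        if n == 0 {
            break;
        }
        // In the real follow-up, each chunk would pass through the streaming
        // HtmlRewriterAdapter here before being written downstream.
        downstream.write_all(&buf[..n])?;
    }
    downstream.finish()?;
    Ok(())
}
```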

Test plan

  • cargo check -p trusted-server-common passes (confirmed locally)
  • CI passes
  • Manual: deploy to staging and confirm TTFB metrics improve for HTML responses vs baseline

perf: stream HTML chunks through lol_html without full-document buffering

HtmlRewriterAdapter previously accumulated the entire response body before
instantiating the HtmlRewriter, then processed it in a single pass. This
caused TTFB to be gated on receiving the last byte of the origin response.

Switch to lol_html's incremental streaming API by creating the HtmlRewriter
eagerly in new() with a shared RcVecSink output buffer. Each process_chunk
call writes directly to the live rewriter and drains whatever output lol_html
has produced so far, so bytes flow downstream as soon as the parser can emit
them (typically everything up to the first matched element fires immediately).

Also fix process_gzip_to_gzip to delegate to process_through_compression
(like deflate and brotli already did) instead of decompressing the entire
body into a Vec before processing.
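
For the gzip path, a hypothetical sketch of what a process_through_compression-style helper looks like with flate2 (the repo's actual signature may differ): wrap the input in a decoder and the output in an encoder, then feed the adapter fixed-size chunks instead of read_to_end-ing the whole document.

```rust
use std::io::{Read, Write};

use flate2::read::GzDecoder;
use flate2::write::GzEncoder;
use flate2::Compression;

// `process_chunk` stands in for the streaming adapter described above;
// its (chunk, is_last) shape is an assumption based on the PR description.
fn process_gzip_to_gzip<R: Read, W: Write>(
    input: R,
    output: W,
    mut process_chunk: impl FnMut(&[u8], bool) -> std::io::Result<Vec<u8>>,
) -> std::io::Result<()> {
    let mut decoder = GzDecoder::new(input);
    let mut encoder = GzEncoder::new(output, Compression::default());
    let mut buf = [0u8; 8192];
    loop {
        let n = decoder.read(&mut buf)?;
        let processed = process_chunk(&buf[..n], n == 0)?;
        encoder.write_all(&processed)?;
        if n == 0 {
            break;
        }
    }
    encoder.finish()?;
    Ok(())
}
```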

Update tests: intermediate chunks may now carry data, so assertions collect
across all chunks rather than asserting intermediates are empty.
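
An illustrative test in that shape, written against the StreamingAdapter sketch above (the repo's real adapter and settings differ):

```rust
#[test]
fn streams_prefix_and_collects_across_chunks() {
    let mut adapter = StreamingAdapter::new();
    let chunks: Vec<&[u8]> = vec![
        b"<p>before</p><a href=\"http://x",
        b".test\">link</a>",
        b"<p>after</p>",
    ];
    let mut collected = Vec::new();
    for (i, chunk) in chunks.iter().enumerate() {
        let out = adapter.process_chunk(chunk, i + 1 == chunks.len()).unwrap();
        // Intermediate chunks may legitimately carry data now (e.g. the
        // prefix before the first matched element), so assert on the
        // concatenation, not on per-chunk emptiness.
        collected.extend(out);
    }
    assert_eq!(
        String::from_utf8(collected).unwrap(),
        "<p>before</p><a href=\"https://x.test\">link</a><p>after</p>"
    );
}
```
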
@jevansnyc jevansnyc requested a review from prk-Jr February 18, 2026 14:40
@jevansnyc jevansnyc linked an issue Feb 18, 2026 that may be closed by this pull request
@aram356
Collaborator

aram356 commented Feb 19, 2026

@jevansnyc ⚠️ This is not as trivial as it seems: it will break the RSC Next.js integration for HTML responses that contain self.__next_f.push() scripts with origin URLs. The post-processing phase that rewrites RSC payloads will receive only the tail of the document instead of the full output, causing placeholder strings to leak into the response and breaking React hydration.

Possible fix

The HtmlWithPostProcessing wrapper needs to accumulate all output across chunks when post-processors are registered, then run post-processing on the complete document at is_last=true. Alternatively, the post-processing could be restructured to work incrementally — but that's a larger change given the cross-script T-chunk combining logic.
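
A rough sketch of the accumulate-then-post-process option. HtmlWithPostProcessing, its inner adapter, and the post_processors field are assumptions about the project's types (reusing the illustrative StreamingAdapter from the earlier sketch), not verified code.

```rust
struct HtmlWithPostProcessing {
    inner: StreamingAdapter,
    post_processors: Vec<Box<dyn FnMut(&mut Vec<u8>)>>,
    // Accumulates the full document when post-processors must see it whole.
    pending: Vec<u8>,
}

impl HtmlWithPostProcessing {
    fn process_chunk(
        &mut self,
        chunk: &[u8],
        is_last: bool,
    ) -> Result<Vec<u8>, lol_html::errors::RewritingError> {
        let out = self.inner.process_chunk(chunk, is_last)?;
        if self.post_processors.is_empty() {
            // No post-processing registered: keep the streaming fast path.
            return Ok(out);
        }
        // Post-processors (e.g. the RSC payload rewriter) need the complete
        // document, so buffer until the final chunk.
        self.pending.extend(out);
        if !is_last {
            return Ok(Vec::new());
        }
        let mut full = std::mem::take(&mut self.pending);
        for pp in &mut self.post_processors {
            pp(&mut full);
        }
        Ok(full)
    }
}
```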

@aram356 aram356 marked this pull request as draft February 19, 2026 00:16

Development

Successfully merging this pull request may close these issues.

Enable Streaming Chunks for responses to improve TTFB on TS
