fix(http2): preserve AsyncLocalStorage context across client request callbacks#29046
cirospaciari wants to merge 3 commits into main
Conversation
Node captures an AsyncResource('PendingRequest') at client.request() time
and runs the stream's 'response'/'data'/'end' emits inside that scope
(lib/internal/http2/core.js:1869,448). Bun's native handler callbacks
were bound at connect() time, so all stream events ran in the
connect-time ALS context, not the per-request one.
Now request() captures the resource on the stream and the client-side
event handlers wrap emits in runInAsyncScope.
Ports test/parallel/test-http2-async-local-storage.js.
Updated 10:09 PM PT - Apr 13th, 2026
❌ @cirospaciari, your commit d548871 has 1 failure in
🧪 To try this PR locally: `bunx bun-pr 29046`. That installs a local version of the PR as `bun-29046` (run with `bun-29046 --bun`).
Walkthrough

Updates the HTTP/2 implementation to add AsyncResource tracking per client stream and ensure response events emit within the request's async scope. Adds a test validating AsyncLocalStorage context propagation during HTTP/2 client-server communication.
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Emit 'response' synchronously inside the request's async scope instead of deferring via process.nextTick, so it fires before 'data'/'end' when HEADERS+DATA arrive in the same tick. Also wrap 'continue', 'trailers', and 'headers' emits in the same scope so AsyncLocalStorage.getStore() returns the per-request context in those listeners too.
LGTM — follow-up commits addressed all previously raised concerns.
Extended reasoning...
Overview
This PR adds kRequestAsyncResource (an AsyncResource('PendingRequest')) captured at client.request() time and uses runInAsyncScope to ensure stream event callbacks fire in the correct per-request ALS context. Two files are touched: the main http2.ts implementation and a new parallel test porting Node's test-http2-async-local-storage.js.
Prior review concerns — now resolved
My two prior inline comments flagged: (1) the process.nextTick deferral of 'response' causing a data→end→response ordering inversion for small responses, and (2) 'continue', 'trailers', and 'headers' emits left outside the async scope. Both were addressed in commit d548871 ('wrap all client streamHeaders emits in runInAsyncScope synchronously'), which moves the emit synchronous and wraps all four event paths in the request scope. The unreachable code === -2 dead-code branch was removed in 59adcdd.
Remaining bug reports (pre-existing)
The two bugs filed with this review cycle — HTTP 100 Continue double-emitting both 'continue' and 'headers', and error-path handlers (frameError/aborted/streamError) not wrapped in async scope — are both explicitly marked pre-existing and were present before this PR. They are real defects but are out of scope here.
Security risks
No security-sensitive code paths (auth, crypto, permissions) are touched. ALS propagation is an observability/tracing concern with no direct security impact.
Level of scrutiny
Moderate: this touches HTTP/2 event dispatch plumbing that could break event ordering or silently drop data. The follow-up commits directly addressed the ordering and scope-coverage gaps, and the added test validates the core ALS behavior.
function emitClientStreamHeaders(self, stream, headers, flags, rawheaders) {
  const status = stream[bunHTTP2StreamStatus];
  const header_status = headers[HTTP2_HEADER_STATUS];
  if (header_status === HTTP_STATUS_CONTINUE) {
    stream.emit("continue");
  }

  if ((status & StreamState.StreamResponded) !== 0) {
    stream.emit("trailers", headers, flags, rawheaders);
  } else {
    if (header_status >= 100 && header_status < 200) {
      stream.emit("headers", headers, flags, rawheaders);
    } else {
      stream[bunHTTP2StreamStatus] = status | StreamState.StreamResponded;
      if (header_status === 421) {
        // 421 Misdirected Request
        removeOriginFromSet(self, stream);
      }
      self.emit("stream", stream, headers, flags, rawheaders);
      stream.emit("response", headers, flags, rawheaders);
    }
  }
}
🟣 Pre-existing bug: in emitClientStreamHeaders, HTTP 100 Continue responses fire both 'continue' AND 'headers' events — Node.js only emits 'continue' and returns early. Any application listener on 'headers' will receive a spurious extra event for every HTTP 100 Continue response. Fix: change the informational-header check to else if so it only runs when header_status !== HTTP_STATUS_CONTINUE.
Extended reasoning...
What the bug is and how it manifests
In emitClientStreamHeaders (lines 1919–1941), the code uses two separate if statements rather than a mutual if/else-if chain. When header_status === HTTP_STATUS_CONTINUE (100), the first if fires stream.emit('continue'). Then, because StreamResponded is never set for informational frames, the second outer if falls into its else branch, where header_status >= 100 && header_status < 200 is also true, so stream.emit('headers', headers, flags, rawheaders) fires too. Every HTTP 100 Continue response emits both 'continue' and 'headers'.
The specific code path that triggers it
The two guards are structurally independent if statements with no else link between them. When header_status === 100: (1) the first if matches HTTP_STATUS_CONTINUE and emits 'continue'; (2) the second if checks StreamResponded (not set for informational frames) and falls to else; (3) inside else, 100 >= 100 && 100 < 200 is true, so 'headers' is also emitted.
Node.js uses an if/else-if/else chain so once 'continue' is emitted the code returns; the 'headers' branch is never reached for status 100.
Why existing code does not prevent it
The StreamResponded flag is only set for final (>= 200) responses, so it provides no protection for informational frames. Nothing prevents both branches from executing in the same call for status 100.
What the impact would be
Any consumer listening to 'headers' to track informational responses (e.g., Early Hints handling, gRPC channel-level metadata) will receive a spurious 'headers' event for every upload that receives an Expect: 100-continue acknowledgement. This can corrupt state machines that expect 'headers' only for 101-199 informational responses, not for 100 Continue.
Step-by-step proof
- Server sends HTTP/2 100 Continue in response to Expect: 100-continue.
- Native parser calls streamHeaders -> emitClientStreamHeaders(self, stream, headers, flags, rawheaders).
- header_status = headers[HTTP2_HEADER_STATUS] = 100.
- header_status === HTTP_STATUS_CONTINUE -> true -> stream.emit('continue') fires.
- (status & StreamState.StreamResponded) !== 0 -> false (100 is informational, flag not set) -> enter else.
- header_status >= 100 && header_status < 200 -> true (100 satisfies both bounds) -> stream.emit('headers', ...) fires.
- Both events emitted; Node.js only emits 'continue'.
How to fix it
Replace the two independent if statements with an else-if chain so the 'headers' branch is skipped when 'continue' was already emitted for status 100. This is pre-existing behavior: the original inline logic in streamHeaders had the same structural defect, and the PR faithfully extracted it into emitClientStreamHeaders without correcting it.
},
streamData(self: ClientHttp2Session, stream: ClientHttp2Stream, data: Buffer) {
  if (!self || typeof stream !== "object" || !data) return;
-  pushToStream(stream, data);
+  pushToStreamInScope(stream, data);
},
streamHeaders(
  self: ClientHttp2Session,
🟣 This PR fixes ALS context propagation for the happy-path stream events (streamHeaders, streamData, streamEnd), but the sibling error-path handlers — frameError (~line 3341), aborted (~line 3349), and streamError (~line 3358) — were not wrapped in runInAsyncScope and still fire in the connect-time async context. This is a pre-existing omission, but it creates a visible inconsistency: applications using AsyncLocalStorage for trace IDs or tenant context will get the correct per-request store in 'response'/'data'/'end' handlers but silently receive the connect-time store in 'error'/'frameError'/'aborted' handlers.
Extended reasoning...
What the bug is and how it manifests
The PR correctly wraps three client-side native callbacks in kRequestAsyncResource.runInAsyncScope: streamData (via pushToStreamInScope), the end-of-stream push in the aborted/end block, and streamHeaders (via emitClientStreamHeaders). However, the three error-path handlers in ClientHttp2Session.#Handlers were not updated:
- frameError (~line 3341): calls process.nextTick(emitFrameErrorEventNT, stream, ...) with no async scope wrapping
- aborted (~line 3349): calls stream.emit('aborted') synchronously and process.nextTick(emitStreamErrorNT, ...) at ~line 3352, both without scope
- streamError (~line 3358): calls process.nextTick(emitStreamErrorNT, ...) without scope
The specific code path that triggers it
These are native-triggered callbacks that fire in the connect-time async context (the context active when http2.connect() was called). When they schedule work via process.nextTick, the microtask runs in whatever ambient context the event loop has at that point — not the per-request context captured by kRequestAsyncResource. The helper functions emitFrameErrorEventNT and emitStreamErrorNT are plain wrappers that simply call stream.emit with no scope restoration.
Why existing code doesn't prevent it
The runInAsyncScope pattern was only applied to the success path in this PR. The three error handlers share the same #Handlers object but were not touched. There is no automatic propagation of async scope via process.nextTick — each callback must explicitly call runInAsyncScope if it wants to restore context.
What the impact would be
Any application using AsyncLocalStorage for per-request context (trace IDs, tenant IDs, correlation IDs) that also handles errors on client streams will observe inconsistent behavior: storage.getStore() returns { id: 1 } (the per-request store) in 'response'/'data'/'end' handlers but returns { id: 0 } (the connect-time store) in 'error'/'frameError'/'aborted' handlers. This silent misbehavior can cause wrong trace spans, wrong tenant routing, or missed error attribution.
Step-by-step proof
- User calls storage.run({ id: 0 }, () => http2.connect(...)): the connect-time store is { id: 0 }.
- User calls storage.run({ id: 1 }, () => client.request(...)): req[kRequestAsyncResource] captures { id: 1 }.
- Server sends RST_STREAM or the connection drops; native fires the streamError handler.
- The streamError handler runs in the connect-time context (no scope wrapping) and calls process.nextTick(emitStreamErrorNT, stream, ...).
- emitStreamErrorNT runs in the microtask queue's ambient context, which is also connect-time.
- stream.emit('error', err) fires; the user's handler calls storage.getStore() and receives { id: 0 }, not { id: 1 }.
How to fix it
Each error-path handler should retrieve stream[kRequestAsyncResource] and, if present, use it to wrap the process.nextTick call (or the synchronous emit) analogously to how streamHeaders was fixed:
frameError(self, stream, type, code) {
  const reqAsync = stream[kRequestAsyncResource];
  if (reqAsync) reqAsync.runInAsyncScope(process.nextTick, null, emitFrameErrorEventNT, stream, type, code);
  else process.nextTick(emitFrameErrorEventNT, stream, type, code);
},

The same pattern applies to the aborted and streamError handlers. This was pre-existing before the PR, but the PR's stated goal of full ALS preservation makes the omission directly relevant.