
fix(http2): preserve AsyncLocalStorage context across client request callbacks #29046

Open
cirospaciari wants to merge 3 commits into main from claude/h2-als-per-request-asyncresource

Conversation

@cirospaciari (Member)

Node captures an AsyncResource('PendingRequest') at client.request() time and runs the stream's 'response'/'data'/'end' emits inside that scope (lib/internal/http2/core.js). Bun's native handler callbacks were bound at connect() time, so all stream events ran in the connect-time ALS context, not the per-request one.

Now request() captures the resource on the stream and the client-side event handlers wrap emits in runInAsyncScope.

Ports Node's test/parallel/test-http2-async-local-storage.js.

@robobun (Collaborator)

robobun commented Apr 8, 2026

Updated 10:09 PM PT - Apr 13th, 2026

@cirospaciari, your commit d548871 has 1 failure in Build #45574 (All Failures):


🧪   To try this PR locally:

bunx bun-pr 29046

That installs a local version of the PR into your bun-29046 executable, so you can run:

bun-29046 --bun

@coderabbitai (Contributor)

coderabbitai Bot commented Apr 8, 2026

Warning

Rate limit exceeded

@cirospaciari has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 4 minutes and 41 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 4 minutes and 41 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 6d055915-ac57-458c-a73d-c3cad6ed4f3b

📥 Commits

Reviewing files that changed from the base of the PR and between f418398 and d548871.

📒 Files selected for processing (1)
  • src/js/node/http2.ts

Walkthrough

Updates HTTP/2 implementation to add AsyncResource tracking per client stream and ensure response events emit within the request's async scope. Adds test validating AsyncLocalStorage context propagation during HTTP/2 client-server communication.

Changes

- HTTP/2 Async Scope Handling (src/js/node/http2.ts): Added kRequestAsyncResource symbol for per-stream AsyncResource tracking. Introduced pushToStreamInScope() wrapper to execute pushToStream() within the request scope. Updated client header handling to emit response events via emitResponseNT (scheduled via process.nextTick) when an async scope is available; retained fallback direct emission. Extended sessionErrorFromCode() to map HTTP/2 error code -2 to the appropriate error constant.
- HTTP/2 AsyncLocalStorage Test (test/js/node/test/parallel/test-http2-async-local-storage.js): New test verifying AsyncLocalStorage context preservation across concurrent HTTP/2 requests. Creates a server, establishes a client connection within a storage scope, performs two concurrent requests with distinct storage ids, and asserts context isolation and correct response data on stream/request/end events.
🚥 Pre-merge checks (2 passed)

- Title check: ✅ Passed. The title accurately summarizes the main change: fixing AsyncLocalStorage context preservation in HTTP/2 client request callbacks, which is the core objective of the PR.
- Description check: ✅ Passed. The description explains the problem (ALS context running at connect time instead of per request) and the solution (capturing an AsyncResource and using runInAsyncScope), but lacks details about verification beyond mentioning the ported test.


Three outdated comment threads on src/js/node/http2.ts.
Emit 'response' synchronously inside the request's async scope instead of
deferring via process.nextTick, so it fires before 'data'/'end' when
HEADERS+DATA arrive in the same tick. Also wrap 'continue', 'trailers',
and 'headers' emits in the same scope so AsyncLocalStorage.getStore()
returns the per-request context in those listeners too.
@claude (Contributor) left a comment


LGTM — follow-up commits addressed all previously raised concerns.

Extended reasoning...

Overview

This PR adds kRequestAsyncResource (an AsyncResource('PendingRequest')) captured at client.request() time and uses runInAsyncScope to ensure stream event callbacks fire in the correct per-request ALS context. Two files are touched: the main http2.ts implementation and a new parallel test porting Node's test-http2-async-local-storage.js.

Prior review concerns — now resolved

My two prior inline comments flagged: (1) the process.nextTick deferral of 'response' causing a data→end→response ordering inversion for small responses, and (2) the 'continue', 'trailers', and 'headers' emits left outside the async scope. Both were addressed in commit d548871 ('wrap all client streamHeaders emits in runInAsyncScope synchronously'), which makes the 'response' emit synchronous and wraps all four event paths in the request scope. The unreachable `code === -2` dead-code branch was removed in 59adcdd.

Remaining bug reports (pre-existing)

The two bugs filed with this review cycle — HTTP 100 Continue double-emitting both 'continue' and 'headers', and error-path handlers (frameError/aborted/streamError) not wrapped in async scope — are both explicitly marked pre-existing and were present before this PR. They are real defects but are out of scope here.

Security risks

No security-sensitive code paths (auth, crypto, permissions) are touched. ALS propagation is an observability/tracing concern with no direct security impact.

Level of scrutiny

Moderate: this touches HTTP/2 event dispatch plumbing that could break event ordering or silently drop data. The follow-up commits directly addressed the ordering and scope-coverage gaps, and the added test validates the core ALS behavior.

Comment thread src/js/node/http2.ts
Comment on lines +1919 to +1941
```javascript
function emitClientStreamHeaders(self, stream, headers, flags, rawheaders) {
  const status = stream[bunHTTP2StreamStatus];
  const header_status = headers[HTTP2_HEADER_STATUS];
  if (header_status === HTTP_STATUS_CONTINUE) {
    stream.emit("continue");
  }

  if ((status & StreamState.StreamResponded) !== 0) {
    stream.emit("trailers", headers, flags, rawheaders);
  } else {
    if (header_status >= 100 && header_status < 200) {
      stream.emit("headers", headers, flags, rawheaders);
    } else {
      stream[bunHTTP2StreamStatus] = status | StreamState.StreamResponded;
      if (header_status === 421) {
        // 421 Misdirected Request
        removeOriginFromSet(self, stream);
      }
      self.emit("stream", stream, headers, flags, rawheaders);
      stream.emit("response", headers, flags, rawheaders);
    }
  }
}
```

🟣 Pre-existing bug: in emitClientStreamHeaders, HTTP 100 Continue responses fire both 'continue' AND 'headers' events — Node.js only emits 'continue' and returns early. Any application listener on 'headers' will receive a spurious extra event for every HTTP 100 Continue response. Fix: change the informational-header check to else if so it only runs when header_status !== HTTP_STATUS_CONTINUE.

Extended reasoning...

What the bug is and how it manifests

In emitClientStreamHeaders (lines 1919–1941), the code uses two separate if statements rather than a mutual if/else-if chain. When header_status === HTTP_STATUS_CONTINUE (100), the first if fires stream.emit('continue'). Then, because StreamResponded is never set for informational frames, the second outer if falls into its else branch, where header_status >= 100 && header_status < 200 is also true, so stream.emit('headers', headers, flags, rawheaders) fires too. Every HTTP 100 Continue response emits both 'continue' and 'headers'.

The specific code path that triggers it

The two guards are structurally independent if statements with no else link between them. When header_status === 100: (1) the first if matches HTTP_STATUS_CONTINUE and emits 'continue'; (2) the second if checks StreamResponded (not set for informational frames) and falls to else; (3) inside else, 100 >= 100 && 100 < 200 is true, so 'headers' is also emitted.

Node.js uses an if/else-if/else chain so once 'continue' is emitted the code returns; the 'headers' branch is never reached for status 100.

Why existing code does not prevent it

The StreamResponded flag is only set for final (>= 200) responses, so it provides no protection for informational frames. Nothing prevents both branches from executing in the same call for status 100.

What the impact would be

Any consumer listening to 'headers' to track informational responses (e.g., Early Hints handling, gRPC channel-level metadata) will receive a spurious 'headers' event for every upload that receives an Expect: 100-continue acknowledgement. This can corrupt state machines that expect 'headers' only for 101-199 informational responses, not for 100 Continue.

Step-by-step proof

  1. Server sends HTTP/2 100 Continue in response to Expect: 100-continue.
  2. Native parser calls streamHeaders -> emitClientStreamHeaders(self, stream, headers, flags, rawheaders).
  3. header_status = headers[HTTP2_HEADER_STATUS] = 100.
  4. header_status === HTTP_STATUS_CONTINUE -> true -> stream.emit('continue') fires.
  5. (status & StreamState.StreamResponded) !== 0 -> false (100 is informational, flag not set) -> enter else.
  6. header_status >= 100 && header_status < 200 -> true (100 satisfies both bounds) -> stream.emit('headers', ...) fires.
  7. Both events emitted; Node.js only emits 'continue'.

How to fix it

Replace the two independent if statements with an else-if chain so the 'headers' branch is skipped when 'continue' was already emitted for status 100. This is pre-existing behavior: the original inline logic in streamHeaders had the same structural defect, and the PR faithfully extracted it into emitClientStreamHeaders without correcting it.
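A self-contained model of the proposed dispatch order (simplified; the real code emits events rather than returning labels):

```javascript
// Simplified model of the proposed fix: one if/else-if chain so a
// 100 Continue selects only the 'continue' path and never falls through
// to the informational 'headers' path. Returns a label instead of
// emitting, purely for illustration.
const HTTP_STATUS_CONTINUE = 100;

function classifyHeaders(headerStatus, alreadyResponded) {
  if (headerStatus === HTTP_STATUS_CONTINUE) return "continue";
  if (alreadyResponded) return "trailers";
  if (headerStatus >= 100 && headerStatus < 200) return "headers";
  return "response";
}

console.log(classifyHeaders(100, false)); // "continue" (no spurious 'headers')
console.log(classifyHeaders(103, false)); // "headers" (Early Hints still works)
console.log(classifyHeaders(200, false)); // "response"
console.log(classifyHeaders(200, true)); // "trailers"
```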

Comment thread src/js/node/http2.ts
Comment on lines 3387 to 3393
```diff
   },
   streamData(self: ClientHttp2Session, stream: ClientHttp2Stream, data: Buffer) {
     if (!self || typeof stream !== "object" || !data) return;
-    pushToStream(stream, data);
+    pushToStreamInScope(stream, data);
   },
   streamHeaders(
     self: ClientHttp2Session,
```

🟣 This PR fixes ALS context propagation for the happy-path stream events (streamHeaders, streamData, streamEnd), but the sibling error-path handlers — frameError (~line 3341), aborted (~line 3349), and streamError (~line 3358) — were not wrapped in runInAsyncScope and still fire in the connect-time async context. This is a pre-existing omission, but it creates a visible inconsistency: applications using AsyncLocalStorage for trace IDs or tenant context will get the correct per-request store in 'response'/'data'/'end' handlers but silently receive the connect-time store in 'error'/'frameError'/'aborted' handlers.

Extended reasoning...

What the bug is and how it manifests

The PR correctly wraps three client-side native callbacks in kRequestAsyncResource.runInAsyncScope: streamData (via pushToStreamInScope), the end-of-stream push in the aborted/end block, and streamHeaders (via emitClientStreamHeaders). However, the three error-path handlers in ClientHttp2Session.#Handlers were not updated:

  • frameError (~line 3341): calls process.nextTick(emitFrameErrorEventNT, stream, ...) with no async scope wrapping
  • aborted (~line 3349): calls stream.emit('aborted') synchronously and process.nextTick(emitStreamErrorNT, ...) at ~line 3352, both without scope
  • streamError (~line 3358): calls process.nextTick(emitStreamErrorNT, ...) without scope

The specific code path that triggers it

These are native-triggered callbacks that fire in the connect-time async context (the context active when http2.connect() was called). When they schedule work via process.nextTick, the microtask runs in whatever ambient context the event loop has at that point — not the per-request context captured by kRequestAsyncResource. The helper functions emitFrameErrorEventNT and emitStreamErrorNT are plain wrappers that simply call stream.emit with no scope restoration.

Why existing code doesn't prevent it

The runInAsyncScope pattern was only applied to the success path in this PR. The three error handlers share the same #Handlers object but were not touched. There is no automatic propagation of async scope via process.nextTick — each callback must explicitly call runInAsyncScope if it wants to restore context.

What the impact would be

Any application using AsyncLocalStorage for per-request context (trace IDs, tenant IDs, correlation IDs) that also handles errors on client streams will observe inconsistent behavior: storage.getStore() returns { id: 1 } (the per-request store) in 'response'/'data'/'end' handlers but returns { id: 0 } (the connect-time store) in 'error'/'frameError'/'aborted' handlers. This silent misbehavior can cause wrong trace spans, wrong tenant routing, or missed error attribution.

Step-by-step proof

  1. User calls storage.run({ id: 0 }, () => http2.connect(...)) — connect-time store is { id: 0 }.
  2. User calls storage.run({ id: 1 }, () => client.request(...)); req[kRequestAsyncResource] captures { id: 1 }.
  3. Server sends RST_STREAM or the connection drops; native fires the streamError handler.
  4. streamError handler runs in the connect-time context (no scope wrapping) and calls process.nextTick(emitStreamErrorNT, stream, ...).
  5. emitStreamErrorNT runs in the microtask queue's ambient context — also connect-time.
  6. stream.emit('error', err) fires; user's handler calls storage.getStore() → receives { id: 0 }, not { id: 1 }.

How to fix it

Each error-path handler should retrieve stream[kRequestAsyncResource] and, if present, use it to wrap the process.nextTick call (or the synchronous emit) analogously to how streamHeaders was fixed:

```javascript
frameError(self, stream, type, code) {
  const reqAsync = stream[kRequestAsyncResource];
  if (reqAsync) reqAsync.runInAsyncScope(process.nextTick, null, emitFrameErrorEventNT, stream, type, code);
  else process.nextTick(emitFrameErrorEventNT, stream, type, code);
},
```

The same pattern applies to the aborted and streamError handlers. This was pre-existing before the PR but the PR's stated goal of full ALS preservation makes the omission directly relevant.
