stream: cache minimum cursor count in broadcast#63322
Open
trivikr wants to merge 1 commit into
Conversation
Track how many broadcast consumers share the cached minimum cursor, matching the share implementation. This lets buffer trimming avoid calling getMinCursor() until the last minimum-cursor consumer advances or detaches. Add coverage for fan-out trimming behavior. Signed-off-by: Kamat, Trivikram <16024985+trivikr@users.noreply.github.com> Assisted-by: openai:gpt-5.5
Author:
I'm not sure why I didn't see improvements in the previous PR in #63262 (comment). In this PR, I validated that the benchmarks are run correctly:

$ ./configure --ninja
$ make JOBS=8
$ ./node benchmark/compare.js \
    --old ../node_old \
    --new ./node \
    --runs 10 \
    --filter iter-throughput-broadcast \
    --no-progress \
    streams > /tmp/broadcast.csv
$ cat /tmp/broadcast.csv | Rscript benchmark/compare.R
Author:
Re-ran the benchmarks with 30 runs:

confidence improvement accuracy (*) (**) (***)
streams/iter-throughput-broadcast.js n=5 datasize=1048576 consumers=1 api='classic' *** 10.67 % ±2.96% ±3.94% ±5.13%
streams/iter-throughput-broadcast.js n=5 datasize=1048576 consumers=1 api='iter' *** 13.41 % ±2.64% ±3.51% ±4.57%
streams/iter-throughput-broadcast.js n=5 datasize=1048576 consumers=1 api='webstream' *** 49.68 % ±3.89% ±5.18% ±6.76%
streams/iter-throughput-broadcast.js n=5 datasize=1048576 consumers=2 api='classic' *** 8.68 % ±3.12% ±4.16% ±5.43%
streams/iter-throughput-broadcast.js n=5 datasize=1048576 consumers=2 api='iter' *** 11.49 % ±3.00% ±3.99% ±5.20%
streams/iter-throughput-broadcast.js n=5 datasize=1048576 consumers=2 api='webstream' *** 33.26 % ±3.24% ±4.33% ±5.69%
streams/iter-throughput-broadcast.js n=5 datasize=1048576 consumers=4 api='classic' *** 9.85 % ±3.28% ±4.38% ±5.72%
streams/iter-throughput-broadcast.js n=5 datasize=1048576 consumers=4 api='iter' *** 15.16 % ±3.49% ±4.65% ±6.05%
streams/iter-throughput-broadcast.js n=5 datasize=1048576 consumers=4 api='webstream' *** 35.17 % ±2.90% ±3.86% ±5.03%
streams/iter-throughput-broadcast.js n=5 datasize=16777216 consumers=1 api='classic' ** 4.34 % ±3.19% ±4.25% ±5.55%
streams/iter-throughput-broadcast.js n=5 datasize=16777216 consumers=1 api='iter' *** 14.81 % ±5.90% ±7.90% ±10.37%
streams/iter-throughput-broadcast.js n=5 datasize=16777216 consumers=1 api='webstream' *** 28.71 % ±2.92% ±3.91% ±5.13%
streams/iter-throughput-broadcast.js n=5 datasize=16777216 consumers=2 api='classic' *** 6.16 % ±2.46% ±3.28% ±4.27%
streams/iter-throughput-broadcast.js n=5 datasize=16777216 consumers=2 api='iter' *** 15.17 % ±6.29% ±8.43% ±11.11%
streams/iter-throughput-broadcast.js n=5 datasize=16777216 consumers=2 api='webstream' *** 14.40 % ±3.90% ±5.20% ±6.78%
streams/iter-throughput-broadcast.js n=5 datasize=16777216 consumers=4 api='classic' * 9.01 % ±6.96% ±9.31% ±12.21%
streams/iter-throughput-broadcast.js n=5 datasize=16777216 consumers=4 api='iter' 2.17 % ±2.98% ±3.96% ±5.16%
streams/iter-throughput-broadcast.js n=5 datasize=16777216 consumers=4 api='webstream' *** 10.91 % ±2.35% ±3.13% ±4.09%
Be aware that when doing many comparisons the risk of a false-positive
result increases. In this case, there are 18 comparisons, you can thus
expect the following amount of false-positive results:
0.90 false positives, when considering a 5% risk acceptance (*, **, ***),
0.18 false positives, when considering a 1% risk acceptance (**, ***),
0.02 false positives, when considering a 0.1% risk acceptance (***)
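As a sanity check on the note above: the expected number of false positives is simply the number of comparisons times the accepted risk level, which reproduces the three figures quoted.

```javascript
// Expected false positives = (number of comparisons) * (significance level).
const comparisons = 18;
for (const [stars, alpha] of [['*', 0.05], ['**', 0.01], ['***', 0.001]]) {
  const expected = comparisons * alpha;
  console.log(`${stars.padEnd(3)} (risk ${alpha}): ${expected.toFixed(2)} expected false positives`);
}
```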
Codecov Report
Additional details and impacted files

@@ Coverage Diff @@
## main #63322 +/- ##
=======================================
Coverage 90.04% 90.04%
=======================================
Files 714 714
Lines 225338 225364 +26
Branches 42598 42603 +5
=======================================
+ Hits 202897 202940 +43
+ Misses 14236 14198 -38
- Partials 8205 8226 +21
jasnell
approved these changes
May 15, 2026
Ethan-Arrowood
approved these changes
May 15, 2026
Description
This ports the cached minimum-cursor consumer count used by `share` to
`broadcast`. `broadcast` previously used a dirty min-cursor flag, causing
trimming to call `getMinCursor()` and scan all consumers whenever the cached
minimum might have changed. Under high fan-out, that can make normal consumer
advancement pay an O(consumers) cost.

This change tracks how many consumers are currently at the cached minimum
cursor. `broadcast` now only recomputes the minimum when the last such
consumer advances or detaches. The drop-oldest path still recomputes after
forcing lagging cursors forward.
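A minimal sketch of the counting scheme described above. All names here (`BroadcastState`, `attach`, `advance`, `detach`) are illustrative, not the actual node-internal API; the point is that the hot path stays O(1) and the O(consumers) scan only runs when the last consumer leaves the cached minimum:

```javascript
// Illustrative sketch, not node internals: track how many consumers sit at
// the cached minimum cursor, and rescan only when that count drops to zero.
class BroadcastState {
  constructor() {
    this.cursors = new Map(); // consumer id -> cursor position
    this.minCursor = 0;       // cached minimum cursor
    this.minCount = 0;        // consumers currently at the cached minimum
  }

  attach(id) {
    // New consumers start at the current minimum, so the count grows.
    this.cursors.set(id, this.minCursor);
    this.minCount++;
  }

  // O(consumers) scan; only runs when the cached minimum is invalidated.
  #recompute() {
    this.minCursor = Math.min(...this.cursors.values());
    this.minCount = 0;
    for (const pos of this.cursors.values()) {
      if (pos === this.minCursor) this.minCount++;
    }
  }

  advance(id, newPos) {
    const old = this.cursors.get(id);
    this.cursors.set(id, newPos);
    // Only the last consumer leaving the cached minimum forces a rescan.
    if (old === this.minCursor && --this.minCount === 0) {
      this.#recompute();
    }
  }

  detach(id) {
    const old = this.cursors.get(id);
    this.cursors.delete(id);
    if (old === this.minCursor && --this.minCount === 0 &&
        this.cursors.size > 0) {
      this.#recompute();
    }
  }

  getMinCursor() {
    return this.minCursor; // O(1) on the trimming hot path
  }
}
```

With two consumers, advancing the first leaves the cached minimum untouched (the second still pins it); only when the second also advances does a single rescan run.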
Benchmark
Assisted-by: openai:gpt-5.5