Replace LRUCache with Hashtable-backed AggregateTable; eliminate MetricKey #11382
Draft
dougqh wants to merge 3 commits into
Conversation
Standalone classes for swapping the consumer-side LRUCache<MetricKey, AggregateMetric> with a multi-key Hashtable in the next commit. No call sites use them yet.

- AggregateEntry extends Hashtable.Entry, holds the canonical MetricKey, the mutable AggregateMetric, and copies of the 13 raw SpanSnapshot fields for matches(). The 64-bit lookup hash is computed via chained LongHashingUtils.addToHash calls (no varargs, no boxing of short/boolean).
- AggregateTable wraps a Hashtable.Entry[] from Hashtable.Support.create. findOrInsert(SpanSnapshot) walks the bucket comparing raw fields, falling back to MetricKeys.fromSnapshot on a true miss. On cap overrun, it scans for an entry with hitCount==0 and unlinks it; if none, it returns null and the caller drops the data point.
- MetricKeys.fromSnapshot extracts the canonicalization logic (DDCache lookups + UTF8 encoding) from Aggregator.buildMetricKey, so the helper can be called from AggregateTable on a miss.

This also commits Hashtable and LongHashingUtils (added earlier, previously uncommitted) and lifts Hashtable.Entry / Hashtable.Support visibility so client code outside datadog.trace.util can build higher-arity tables -- the case the javadoc describes but the original visibility didn't actually support. Specifically: Entry is now public abstract with a protected ctor; keyHash, next(), and setNext() are public; Support's create / clear / bucketIndex / bucketIterator / mutatingBucketIterator methods are public.

Tests: AggregateTableTest covers hit, miss, distinct-by-spanKind, peer-tag identity (including null vs non-null), cap overrun with a stale victim, cap overrun with no victim (returns null), expungeStaleAggregates, forEach, clear, and that the canonical MetricKey is built at insert.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
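The chained-hashing style described above can be sketched as follows. The real LongHashingUtils API isn't shown in this PR, so addToHash here is a stand-in (FNV-1a-style mixing); the point is the call shape: one chained call per raw field, primitives widened to long so short/boolean never box.

```java
public final class HashSketch {
  private static final long SEED = 0xcbf29ce484222325L;
  private static final long PRIME = 0x100000001b3L;

  // Stand-in for LongHashingUtils.addToHash: mix one long into the hash.
  static long addToHash(long hash, long value) {
    return (hash ^ value) * PRIME;
  }

  static long addToHash(long hash, CharSequence s) {
    if (s == null) {
      return addToHash(hash, -1L); // keep null distinct from empty
    }
    long h = addToHash(hash, s.length());
    for (int i = 0; i < s.length(); i++) {
      h = addToHash(h, s.charAt(i));
    }
    return h;
  }

  // One chained call per SpanSnapshot field (only a few of the 13 shown).
  public static long hashKey(
      CharSequence service, CharSequence operation, short httpStatusCode, boolean synthetic) {
    long h = SEED;
    h = addToHash(h, service);
    h = addToHash(h, operation);
    h = addToHash(h, httpStatusCode); // short widens to long, no boxing
    h = addToHash(h, synthetic ? 1L : 0L);
    return h;
  }
}
```

The no-varargs overloads matter on this path: a varargs signature would allocate an Object[] and box every primitive per lookup.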
Replace LRUCache<MetricKey, AggregateMetric> with the AggregateTable added
in the prior commit. The hot path in Drainer.accept becomes:
    AggregateMetric aggregate = aggregates.findOrInsert(snapshot);
    if (aggregate != null) {
      aggregate.recordOneDuration(snapshot.tagAndDuration);
      dirty = true;
    } else {
      healthMetrics.onStatsAggregateDropped();
    }
On the steady-state hit path the lookup is a 64-bit hash compute + bucket
walk + matches(snapshot) -- no MetricKey allocation, no SERVICE_NAMES /
SPAN_KINDS / PEER_TAGS_CACHE lookups. The canonical MetricKey is now built
once per unique key at insert time, in MetricKeys.fromSnapshot.
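The steady-state lookup inside findOrInsert can be sketched like this, with minimal stand-ins for the real classes (Entry's single service field stands in for all 13 raw fields, and the int[] stands in for AggregateMetric; all names here are illustrative):

```java
final class LookupSketch {
  static final class Entry {
    final long keyHash;
    final String service; // one raw field standing in for all 13
    final int[] metric;   // stand-in for the mutable AggregateMetric
    Entry next;

    Entry(long keyHash, String service, Entry next) {
      this.keyHash = keyHash;
      this.service = service;
      this.metric = new int[1];
      this.next = next;
    }

    boolean matches(String snapshotService) {
      return service.contentEquals(snapshotService);
    }
  }

  // Steady-state hit path: hash compute + bucket walk + matches().
  static int[] find(Entry[] buckets, long keyHash, String snapshotService) {
    int idx = (int) (keyHash & (buckets.length - 1)); // power-of-two table
    for (Entry e = buckets[idx]; e != null; e = e.next) {
      if (e.keyHash == keyHash && e.matches(snapshotService)) {
        return e.metric; // hit: no MetricKey allocation, no cache lookups
      }
    }
    return null; // true miss: caller canonicalizes via MetricKeys.fromSnapshot
  }
}
```

The cheap long comparison on keyHash filters almost all chain neighbors before the field-by-field matches() runs.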
Behavioral change in the cap-overrun path
-----------------------------------------
The old LRUCache evicted least-recently-used: at cap, a new insert would
push out the oldest entry regardless of whether it was live or stale.
AggregateTable instead scans for a hitCount==0 entry to recycle, and drops
the new key if none exists. Practical impact: in the common case where
the table holds a stable set of recurring keys, an unrelated burst of new
keys is dropped (and reported via onStatsAggregateDropped) rather than
evicting the established keys. The existing test that asserted "service0
evicted in favor of service10" is updated to assert the new semantics.
The other cap-related test ("should not report dropped aggregate when
evicted entry was already flushed") still passes unchanged: after report()
clears all entries to hitCount=0, the next wave of inserts recycles them.
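The recycle-or-drop decision at cap can be sketched as below (illustrative names; the real AggregateTable also unlinks and relinks the bucket chain when it recycles):

```java
final class CapSketch {
  static final class Entry {
    int hitCount;

    Entry(int hitCount) {
      this.hitCount = hitCount;
    }
  }

  // At cap: reuse the first stale entry (hitCount==0, i.e. flushed by the
  // last report() and not hit since); if every entry is live, return null
  // and let the caller drop the new key instead of evicting a live one.
  static Entry recycleOrNull(Entry[] entries) {
    for (Entry e : entries) {
      if (e.hitCount == 0) {
        return e;
      }
    }
    return null; // caller reports via healthMetrics.onStatsAggregateDropped()
  }
}
```

This is the behavioral inversion relative to LRU: established live keys win over newcomers, rather than recency winning over everything.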
Threading fix
-------------
ConflatingMetricsAggregator.disable() used to call aggregator.clearAggregates()
and inbox.clear() directly from the Sink's IO event thread, racing with the
aggregator thread mid-write. The race was tolerable for LinkedHashMap; it
is not for AggregateTable (chain corruption can NPE or loop). disable()
now offers a ClearSignal to the inbox so the aggregator thread itself
performs the table clear and the inbox.clear(). Adds one SignalItem
subclass + one branch in Drainer.accept; preserves the single-writer
invariant for AggregateTable end-to-end.
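The shape of the fix can be sketched as follows. This is a simplified single-threaded model, not the real SignalItem/Drainer API: the IO thread only enqueues a signal, and the aggregator thread performs the mutation itself when it drains the inbox.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

final class ClearSketch {
  static final class ClearSignal {}

  final Queue<Object> inbox = new ArrayDeque<>();
  final Map<String, Integer> aggregates = new HashMap<>();

  // Called from the Sink's IO thread: no direct table access, just a signal.
  void disable() {
    inbox.offer(new ClearSignal());
  }

  // Aggregator thread's drain loop, one item per call: the new branch
  // corresponds to the ClearSignal handling added to Drainer.accept.
  void drainOne() {
    Object item = inbox.poll();
    if (item instanceof ClearSignal) {
      aggregates.clear(); // safe: only this thread ever mutates the table
      inbox.clear();
    } else if (item instanceof String) {
      aggregates.merge((String) item, 1, Integer::sum); // normal snapshot path
    }
  }
}
```

Because the clear now happens on the same thread that does every other table mutation, no bucket chain can be observed half-relinked.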
Removed: LRUCache import, AggregateExpiry inner class, the static
buildMetricKey / materializePeerTags / encodePeerTag helpers (now in
MetricKeys).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
MetricKey existed for two reasons -- the prior LRUCache key role (now handled by AggregateTable's Hashtable.Entry mechanics) and as the labels argument to MetricWriter.add. The first is gone; the second is the only thing keeping MetricKey alive. Fold its UTF8-encoded label fields onto AggregateEntry, change MetricWriter.add to take AggregateEntry directly, and delete MetricKey + MetricKeys.

What AggregateEntry now holds
-----------------------------
- 10 UTF8BytesString label fields (resource, service, operationName, serviceSource, type, spanKind, httpMethod, httpEndpoint, grpcStatusCode, and a List<UTF8BytesString> peerTags for serialization).
- 3 primitives (httpStatusCode, synthetic, traceRoot).
- AggregateMetric (the value being accumulated).
- The raw String[] peerTagPairs, retained alongside the encoded peerTags -- matches() compares it positionally against the snapshot's pairs; the encoded form is only consumed by the writer.

matches(SpanSnapshot) compares the entry's UTF8 forms to the snapshot's raw String / CharSequence fields via content-equality (UTF8BytesString.toString() returns the underlying String in O(1)). This closes a latent bug in the prior raw-vs-raw matches(): if one snapshot delivered a tag value as a String and a later snapshot delivered the same content as a UTF8BytesString, the old Objects.equals would return false and the table would split into two entries. Content-equality matching collapses them into one.

Consolidated caches
-------------------
The static UTF8 caches that used to live partly on MetricKey (RESOURCE_CACHE, OPERATION_CACHE, SERVICE_SOURCE_CACHE, TYPE_CACHE, KIND_CACHE, HTTP_METHOD_CACHE, HTTP_ENDPOINT_CACHE, GRPC_STATUS_CODE_CACHE, SERVICE_CACHE) and partly on ConflatingMetricsAggregator (SERVICE_NAMES, SPAN_KINDS, PEER_TAGS_CACHE) now all live on AggregateEntry. The split was duplicating work -- SERVICE_NAMES and SERVICE_CACHE both cached service-name to UTF8BytesString. One cache per field now.
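The latent bug and its fix can be demonstrated without the real UTF8BytesString class (a StringBuilder stands in for the second CharSequence impl; method names here are illustrative, not the actual matches() signature):

```java
import java.util.Objects;

final class MatchSketch {
  // Old behavior: reference-type equals across different CharSequence impls.
  // A String never equals a non-String, even with identical content.
  static boolean oldMatches(CharSequence a, CharSequence b) {
    return Objects.equals(a, b);
  }

  // New behavior: content-equality, as the reworked matches() does.
  // (UTF8BytesString.toString() returns the underlying String in O(1),
  // so this costs no copy on the real hot path.)
  static boolean newMatches(CharSequence a, CharSequence b) {
    return a == null ? b == null : b != null && a.toString().contentEquals(b);
  }
}
```

Under the old comparison, the same logical key arriving via both impls produced two table entries; content-equality collapses them into one.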
API change: MetricWriter.add
----------------------------
Was: add(MetricKey key, AggregateMetric aggregate)
Now: add(AggregateEntry entry)
The aggregate lives on the entry. Single-arg. SerializingMetricWriter reads the same UTF8 fields off AggregateEntry that it previously read off MetricKey; the wire format is byte-identical.

Test impact
-----------
AggregateEntry.of(...) takes the same 13 positional args new MetricKey(...) took, so test diffs are mostly mechanical:
- new MetricKey(args) -> AggregateEntry.of(args)
- writer.add(key, _) -> writer.add(entry)
ValidatingSink in SerializingMetricWriterTest now iterates List<AggregateEntry> directly. ConflatingMetricAggregatorTest's Spock matchers (~36 sites) rely on AggregateEntry.equals comparing the 13 label fields (not the aggregate), so the mock matches by labels regardless of the aggregate state at call time; post-invocation closures verify aggregate state.

Benchmarks (2 forks x 5 iter x 15s)
-----------------------------------
The change is consumer-thread only; producer publish() is unchanged.
SimpleSpan bench: 3.123 +- 0.025 us/op (prior: 3.119 +- 0.018)
DDSpan bench:     2.412 +- 0.022 us/op (prior: 2.463 +- 0.041)
Both within noise -- the win is structural (one less class, one less allocation per miss, one fewer cache layer) rather than benchmarked.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
Stacked on top of #11381 — review that first; the merge base of this PR is
dougqh/conflating-metrics-background-work. The diff shown here is only the work that's new beyond that PR.

Restructures the consumer-side aggregate store. Three commits, intended to be reviewed in order:
1. Add AggregateTable + AggregateEntry backed by Hashtable

Introduces a multi-key hash table that lets the consumer thread look up the {labels → AggregateMetric} entry directly from a SpanSnapshot's raw fields — no MetricKey allocation per snapshot, no per-snapshot UTF8 cache lookups, no CHM operations. Hot-path lookup is keyHash compute → bucket walk → matches(snapshot) → returned entry has the AggregateMetric to mutate in place.

This commit is standalone — no call sites yet, only the new classes + unit tests for hit/miss/cap-overrun/expunge/clear behavior. Also lifts the visibility of Hashtable.Entry and Hashtable.Support so external clients can implement higher-arity tables — the javadoc'd use case the original visibility didn't actually support.

2. Swap Aggregator to use AggregateTable + route disable() clear through a ClearSignal

Replaces LRUCache<MetricKey, AggregateMetric> with AggregateTable in Aggregator. Drops the AggregateExpiry listener — drop reporting (onStatsAggregateDropped) moves to the cap-overrun path inside Drainer.accept.

Threading fix bundled here: ConflatingMetricsAggregator.disable() used to call aggregator.clearAggregates() and inbox.clear() directly from the Sink's IO callback thread, racing with the aggregator thread. That race was tolerable for LinkedHashMap (worst case = corrupted internal state right before everything got cleared anyway); it's not tolerable for Hashtable (chain corruption can NPE or loop). disable() now offers a ClearSignal to the inbox so the aggregator thread itself performs the clear — preserves the single-writer invariant for AggregateTable end-to-end.

Cap-overrun semantic change: the old LRUCache evicted least-recently-used. AggregateTable instead scans for a hitCount==0 entry to recycle, and drops the new key if none exists. Practical impact: in steady state, an unrelated burst of new keys gets dropped (and reported via onStatsAggregateDropped) rather than evicting established keys. The existing test that asserted "service0 evicted in favor of service10" is updated to assert the new semantics. The other cap-related test ("evicted entry was already flushed") still passes unchanged.

3. Eliminate MetricKey: inline its fields onto AggregateEntry

MetricKey existed for two reasons — being the LRUCache key (replaced by AggregateTable's Hashtable mechanics) and being the labels arg to MetricWriter.add (the only thing left). Folds its 10 UTF8 label fields + 3 primitives onto AggregateEntry, changes MetricWriter.add(MetricKey, AggregateMetric) → add(AggregateEntry), and deletes MetricKey.java + MetricKeys.java.

The 12 UTF8 caches that used to be split between MetricKey (9) and ConflatingMetricsAggregator (3, with overlap) are consolidated on AggregateEntry. One cache per field type now.

Latent bug fix: the prior matches(SpanSnapshot) used Objects.equals on raw fields. If the same logical key was delivered once as String and once as UTF8BytesString (different CharSequence impls of identical content), Objects.equals returns false and the table would split into two entries for the same key. The new matches uses content-equality (UTF8BytesString.toString() returns the underlying String in O(1)), collapsing them correctly.

Test impact: AggregateEntry.of(...) mirrors the prior new MetricKey(...) positional args, so test diffs are mostly mechanical. About 56 test sites migrated across ConflatingMetricAggregatorTest, SerializingMetricWriterTest, and MetricsIntegrationTest.

Benchmarks
2 forks × 5 iter × 15s, producer publish() latency:
All within noise — this PR is a consumer-side refactor, so producer publish() shouldn't move much. The win is structural (one less class, no per-miss MetricKey allocation, no double-cache lookups, smaller per-entry footprint) plus higher consumer throughput that lets the inbox keep up at higher sustained producer rates before onStatsInboxFull fires.

Net code delta across the three commits: −713 / +609 = −104 lines, mostly from MetricKey deletion and consolidating tests away from Pair<MetricKey, AggregateMetric> indirection.

Test plan

- ./gradlew :dd-trace-core:test --tests 'datadog.trace.common.metrics.*' passes (incl. the new AggregateTableTest)
- ./gradlew :dd-trace-core:compileJava :dd-trace-core:compileTestGroovy :dd-trace-core:compileJmhJava :dd-trace-core:compileTraceAgentTestGroovy all green
- ./gradlew spotlessCheck clean
- stats.dropped_aggregates semantics at high cardinality (especially the new "drop new on cap overrun" path vs. the old "evict LRU" path)

🤖 Generated with Claude Code