Add infrastructure for logging performance statistics#109
Merged
fdwr merged 10 commits into microsoft:main on Mar 3, 2026
Conversation
Add demos/webnn-perf.js - a shared lightweight module that uses the W3C
Performance API (performance.mark/measure) and structured console events
([WebNN:Perf] prefix) to instrument WebNN API phases. This enables
browser automation tooling (e.g. Playwright in webnn-test) to collect
performance metrics without any UI changes.
Instrumented phases across all 8 demos:
- webnn.context.create: MLContext creation
- webnn.model.fetch: Model download/OPFS load
- webnn.session.create: ORT InferenceSession creation (graph compile)
- webnn.inference.first: First inference (cold run)
- webnn.inference: Subsequent inference iterations
Each event includes metadata (model name, device, provider, iteration)
and is accessible via:
- Performance API: performance.getEntriesByType('measure')
- Console: structured JSON with [WebNN:Perf] prefix
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
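A minimal sketch of what the WebNNPerf.time(name, fn, metadata) wrapper could look like. The object name, method signature, and [WebNN:Perf] prefix come from this description; the internals (seq counter, mark naming, JSON payload layout) are assumptions for illustration, not the actual module code.

```javascript
// Illustrative sketch of a WebNNPerf.time() wrapper (assumed internals).
const WebNNPerf = {
  seq: 0,
  defaults: {},
  // configure() merges metadata that is attached to every later event.
  configure(meta) { Object.assign(this.defaults, meta); },
  // time() wraps an async operation, recording a performance measure
  // and emitting a structured [WebNN:Perf] console line.
  async time(name, fn, metadata = {}) {
    const id = `${name}#${this.seq++}`;
    performance.mark(`${id}:start`);
    try {
      const result = await fn();
      performance.measure(id, { start: `${id}:start` });
      const entry = performance.getEntriesByName(id, 'measure').pop();
      console.log('[WebNN:Perf] ' + JSON.stringify({
        name, duration: entry.duration, ...this.defaults, ...metadata,
      }));
      return result;
    } catch (err) {
      // On failure, still emit an event so automation can see the error.
      console.log('[WebNN:Perf] ' + JSON.stringify({ name, error: String(err) }));
      throw err;
    }
  },
};
```

Under this sketch, a demo might wrap session creation as `WebNNPerf.time('webnn.session.create', () => ort.InferenceSession.create(url), { model, device })`.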
Add demos/__tests__/webnn-perf.test.js with 12 tests covering:
- configure() merges default metadata
- time() returns wrapped function result
- time() creates performance measure entries with metadata
- time() increments seq counter for repeated calls
- time() measures non-trivial durations accurately
- time() re-throws errors from wrapped functions
- time() emits console event with error field on failure
- Console output has structured JSON with [WebNN:Perf] prefix
- getEntries() filters to only webnn-prefixed measures
- reset() clears entries, counters, and defaults

Uses Node's built-in test runner (node:test) - no new dependencies. Adds 'test' script to package.json: node --test demos/__tests__/

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
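The getEntries() filtering behavior described above can be illustrated with the plain Performance API; this snippet shows the equivalent filter directly (getEntries() itself is the module's helper, not reproduced here):

```javascript
// Record one webnn-prefixed measure and one unrelated measure, then keep
// only the webnn.* entries, as getEntries() is described to do.
performance.mark('demo:start');
performance.measure('webnn.inference', 'demo:start');
performance.measure('unrelated.metric', 'demo:start');

const webnnEntries = performance
  .getEntriesByType('measure')
  .filter((e) => e.name.startsWith('webnn.'));

console.log(webnnEntries.map((e) => e.name)); // only 'webnn.inference'
```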
Collaborator
ibelem
reviewed
Feb 25, 2026
…trategies, e.g. transformers.js)
ibelem
reviewed
Mar 2, 2026
The phi-3-mini demo is being removed in PR microsoft#112 (replaced by Phi-4 Mini), so drop the performance instrumentation changes from this demo. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Contributor
LGTM, thanks @adrastogi! @Honry @fdwr PTAL
ibelem
approved these changes
Mar 3, 2026
fdwr
approved these changes
Mar 3, 2026
Collaborator
fdwr
left a comment
Mostly nits plus a question. Otherwise LGTM.
Co-authored-by: Dwayne Robinson <fdwr@hotmail.com>
Why is this change being made?
There's a useful test harness for running these samples located here. This automation currently scrapes timing data from DOM elements (e.g. #buildTime, #latency), which can be fragile and inconsistent across demos. We want a standardized, structured way to emit performance metrics for the different WebNN API phases so that automation tooling can reliably collect them without being tied to UI implementation details.
What changed?
This change introduces a new shared module that provides performance instrumentation using the W3C Performance API (performance.mark()/performance.measure()), with structured [WebNN:Perf] console events as a secondary channel. It exposes WebNNPerf.time(name, fn, metadata), which wraps any async operation with automatic timing, emitting both a PerformanceMeasure entry and a JSON console log line.

All 8 demos are instrumented with minimal code changes that wrap existing WebNN/ORT API calls. The instrumented phases are:
- webnn.context.create (MLContext creation)
- webnn.model.fetch (model download / OPFS load)
- webnn.session.create (InferenceSession creation / graph compile)
- webnn.inference.first (first cold inference)
- webnn.inference (subsequent inference iterations)

Each event carries metadata including model name, device type, provider, and iteration number. Automation can collect metrics via the Performance API (performance.getEntriesByType('measure').filter(e => e.name.startsWith('webnn.'))) or by listening for [WebNN:Perf]-prefixed console messages. A "test" npm script was also added to package.json using Node's built-in test runner.
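As a sketch of the console channel from the automation side: a harness that receives console text (for example via a browser driver's console-message event) could recover the structured payload like this. The prefix matches the PR; the parsePerfLine helper and the example values are hypothetical.

```javascript
// Hypothetical helper: extract the JSON payload from a [WebNN:Perf] line.
const PREFIX = '[WebNN:Perf] ';

function parsePerfLine(text) {
  if (!text.startsWith(PREFIX)) return null; // not a perf event
  return JSON.parse(text.slice(PREFIX.length));
}

// Example line in the shape the PR describes (values are made up).
const line =
  '[WebNN:Perf] {"name":"webnn.session.create","duration":812.4,"model":"mobilenetv2","device":"gpu"}';
const event = parsePerfLine(line);
console.log(event.name, event.duration);
```

Because the payload is plain JSON behind a fixed prefix, the same parser works regardless of which demo emitted the event, which is the decoupling from DOM scraping that motivates this change.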
How was the change tested?
Ran the new unit tests to verify the behavior of the instrumentation class. I also tried running the website locally - I'm having a little more trouble there: even after acquiring the models, various samples I tested (e.g., Segment Anything, Image Classification) fail to load them, and I'm still debugging why. I do see the [WebNN:Perf] messages being logged to the console in the F12 dev tools, though.
Note: I used Copilot extensively to help craft this PR - I am still reviewing and double-checking its work, but wanted to post the PR here to get initial thoughts on the direction and whether this approach seems reasonable (cc: @fdwr).