<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="/feed.xml" rel="self" type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" /><updated>2026-02-10T09:18:22+00:00</updated><id>/feed.xml</id><title type="html">Web Machine Learning</title><subtitle>Making Machine Learning a first-class web citizen
</subtitle><entry><title type="html">Introduction to Web Neural Network API (WebNN)</title><link href="/get-started/2024/05/16/introduction-to-web-neural-network-api.html" rel="alternate" type="text/html" title="Introduction to Web Neural Network API (WebNN)" /><published>2024-05-16T13:55:22+00:00</published><updated>2024-05-16T13:55:22+00:00</updated><id>/get-started/2024/05/16/introduction-to-web-neural-network-api</id><content type="html" xml:base="/get-started/2024/05/16/introduction-to-web-neural-network-api.html"><![CDATA[<p>The Web Neural Network API (WebNN) brings accelerated machine learning capabilities directly to web applications. With WebNN, developers can harness the power of neural networks within the browser environment, enabling a wide range of AI-driven use cases without relying on external servers or plugins.</p>

<h3 id="what-is-webnn">What is WebNN?</h3>

<p>WebNN is a JavaScript API that provides a high-level interface for executing neural network inference tasks efficiently on various hardware accelerators, such as CPUs, GPUs, and dedicated AI chips (sometimes called NPUs or TPUs). By utilizing hardware acceleration, WebNN enables faster and more power-efficient execution of machine learning models, making it ideal for real-time applications and scenarios where latency is critical.</p>

<!-- more -->

<h3 id="programming-model">Programming Model</h3>

<p>WebNN follows a simple programming model, allowing developers to perform inference tasks with minimal complexity. The API focuses on defining the operations and infrastructure necessary to execute machine learning models, rather than handling higher-level functionality such as model loading, parsing, or management. WebNN is designed to be agnostic to model formats and leaves the responsibility of loading and parsing models to other libraries (such as ONNX.js or TensorFlow.js) or to the web application itself.</p>

<p>At a high level, running a model with WebNN involves two steps:</p>

<ul>
  <li>
    <p>Model Construction: the model is first described using the MLGraphBuilder API. Once the model has been described, it can be built into an executable graph.</p>
  </li>
  <li>
    <p>Model Execution: once the executable graph has been built, input data is bound to it and the graph runs inference to produce predictions or classifications. WebNN provides methods for selecting backends (either explicitly or by characteristics) that process the input data and return the model’s output.</p>
  </li>
</ul>

<p>WebNN leverages hardware accelerators to speed up model execution. Because WebNN is hardware- and model-agnostic, it can use whichever hardware resources are available (CPU, GPU, NPU, TPU, etc.), maximizing performance and minimizing latency to enable smooth and responsive user experiences.</p>

<h3 id="sample-code">Sample Code</h3>

<p>Let’s take a look at example pseudo-code that demonstrates how to perform inference using WebNN. In this example, we will show the basic API for model construction and execution. More details of model construction are covered in other tutorials.</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cm">/* Create a context and MLGraphBuilder */</span>
<span class="kd">const</span> <span class="nx">context</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">navigator</span><span class="p">.</span><span class="nx">ml</span><span class="p">.</span><span class="nx">createContext</span><span class="p">(</span><span class="cm">/* execution parameters */</span><span class="p">);</span>
<span class="kd">const</span> <span class="nx">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">MLGraphBuilder</span><span class="p">(</span><span class="nx">context</span><span class="p">);</span>

<span class="cm">/* Construct the model */</span>
<span class="c1">// WebNN supports a core set of ML operators - a full list can be found at </span>
<span class="c1">// https://www.w3.org/TR/webnn/#api-mlgraphbuilder</span>


<span class="cm">/* Build executable graph */</span>
<span class="kd">const</span> <span class="nx">graph</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">builder</span><span class="p">.</span><span class="nx">build</span><span class="p">({</span><span class="dl">'</span><span class="s1">output</span><span class="dl">'</span><span class="p">:</span> <span class="nx">output</span><span class="p">});</span>

<span class="cm">/* Arrange input and output buffers */</span>
<span class="kd">const</span> <span class="nx">inputBuffer</span> <span class="o">=</span> <span class="k">new</span> <span class="nb">Float32Array</span><span class="p">(</span><span class="nx">TENSOR_SIZE</span><span class="p">);</span>
<span class="kd">const</span> <span class="nx">outputBuffer</span> <span class="o">=</span> <span class="k">new</span> <span class="nb">Float32Array</span><span class="p">(</span><span class="nx">TENSOR_SIZE</span><span class="p">);</span>

<span class="kd">const</span> <span class="nx">inputs</span> <span class="o">=</span> <span class="p">{</span><span class="dl">'</span><span class="s1">input</span><span class="dl">'</span><span class="p">:</span> <span class="nx">inputBuffer</span><span class="p">};</span>

<span class="kd">const</span> <span class="nx">outputs</span> <span class="o">=</span> <span class="p">{</span><span class="dl">'</span><span class="s1">output</span><span class="dl">'</span><span class="p">:</span> <span class="nx">outputBuffer</span><span class="p">};</span>

<span class="cm">/* Execute the compiled graph with the specified inputs. */</span>
<span class="kd">const</span> <span class="nx">results</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">context</span><span class="p">.</span><span class="nx">compute</span><span class="p">(</span><span class="nx">graph</span><span class="p">,</span> <span class="nx">inputs</span><span class="p">,</span> <span class="nx">outputs</span><span class="p">);</span>

<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">Output values: </span><span class="dl">'</span> <span class="o">+</span> <span class="nx">results</span><span class="p">.</span><span class="nx">outputs</span><span class="p">.</span><span class="nx">output</span><span class="p">);</span>
</code></pre></div></div>

<p>This is a basic pseudo-code example of how to use the WebNN API to construct a model and execute it. In a real-world scenario, you would replace the placeholder comments (<code class="language-plaintext highlighter-rouge">/* Construct the model */</code>, <code class="language-plaintext highlighter-rouge">/* Arrange input and output buffers */</code>, etc.) with actual model construction, input data, and execution parameters specific to your application.</p>

<h3 id="conclusion">Conclusion</h3>

<p>WebNN opens up exciting possibilities for integrating machine learning capabilities into web applications, enabling developers to create innovative experiences powered by AI directly in the browser. With its simple programming model and support for hardware acceleration, WebNN empowers developers to build responsive and efficient AI-driven solutions that run seamlessly on a wide range of devices.</p>]]></content><author><name>Paul Cooper</name></author><category term="get-started" /><summary type="html"><![CDATA[The Web Neural Network API (WebNN) brings accelerated machine learning capabilities directly to web applications. With WebNN, developers can harness the power of neural networks within the browser environment, enabling a wide range of AI-driven use cases without relying on external servers or plugins. What is WebNN? WebNN is a JavaScript API that provides a high-level interface for executing neural network inference tasks efficiently on various hardware accelerators, such as CPUs, GPUs, and dedicated AI chips (sometimes called NPUs or TPUs). By utilizing hardware acceleration, WebNN enables faster and more power-efficient execution of machine learning models, making it ideal for real-time applications and scenarios where latency is critical.]]></summary></entry><entry><title type="html">Try out WebNN API early using WebNN Polyfill</title><link href="/blog/2022/12/08/try-out-webnn-api-early-using-webnn-polyfill.html" rel="alternate" type="text/html" title="Try out WebNN API early using WebNN Polyfill" /><published>2022-12-08T02:42:03+00:00</published><updated>2022-12-08T02:42:03+00:00</updated><id>/blog/2022/12/08/try-out-webnn-api-early-using-webnn-polyfill</id><content type="html" xml:base="/blog/2022/12/08/try-out-webnn-api-early-using-webnn-polyfill.html"><![CDATA[<p>The <a href="https://www.npmjs.com/package/@webmachinelearning/webnn-polyfill">WebNN Polyfill</a> has been published to NPM.</p>

<p>It is a JavaScript implementation of the WebNN API, based on
<a href="https://github.com/tensorflow/tfjs">TensorFlow.js</a>, and supports multiple backends in both
Web browsers and Node.js.</p>

<p>With this polyfill, Web developers can try out the WebNN API
before the native implementations ship. It can also
be treated as an independent implementation that helps validate the feasibility
and stability of the WebNN specification.</p>

<!-- more -->

<h2 id="usage">Usage</h2>

<p>Import the package via either NPM or a script tag.</p>

<ul>
  <li>Via NPM</li>
</ul>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">import</span> <span class="dl">'</span><span class="s1">@webmachinelearning/webnn-polyfill</span><span class="dl">'</span><span class="p">;</span>
</code></pre></div></div>

<ul>
  <li>Via a script tag</li>
</ul>

<div class="language-html highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt">&lt;script </span><span class="na">src=</span><span class="s">"https://cdn.jsdelivr.net/npm/@webmachinelearning/webnn-polyfill/dist/webnn-polyfill.js"</span><span class="nt">&gt;&lt;/script&gt;</span>
</code></pre></div></div>

<p>Before using the WebNN API, set the TensorFlow.js backend as follows.
Currently the WebNN Polyfill supports 3 backends: <code class="language-plaintext highlighter-rouge">CPU</code>, <code class="language-plaintext highlighter-rouge">WebGL</code> and <code class="language-plaintext highlighter-rouge">WASM</code>. The <code class="language-plaintext highlighter-rouge">CPU</code> backend
has higher numerical precision, while the <code class="language-plaintext highlighter-rouge">WebGL</code> backend provides better performance.
A new <code class="language-plaintext highlighter-rouge">WebGPU</code> backend is a work in progress.</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    <span class="kd">const</span> <span class="nx">backend</span> <span class="o">=</span> <span class="dl">'</span><span class="s1">webgl</span><span class="dl">'</span><span class="p">;</span> <span class="c1">// or 'cpu', 'wasm'</span>
    <span class="kd">const</span> <span class="nx">context</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">navigator</span><span class="p">.</span><span class="nx">ml</span><span class="p">.</span><span class="nx">createContext</span><span class="p">();</span>
    <span class="kd">const</span> <span class="nx">tf</span> <span class="o">=</span> <span class="nx">context</span><span class="p">.</span><span class="nx">tf</span><span class="p">;</span>
    <span class="k">await</span> <span class="nx">tf</span><span class="p">.</span><span class="nx">setBackend</span><span class="p">(</span><span class="nx">backend</span><span class="p">);</span>
    <span class="k">await</span> <span class="nx">tf</span><span class="p">.</span><span class="nx">ready</span><span class="p">();</span>
</code></pre></div></div>

<h2 id="samples">Samples</h2>

<p>Try out the live version of the <a href="https://webmachinelearning.github.io/webnn-samples/">WebNN samples</a>, which fall back to the
WebNN Polyfill in browsers without native WebNN API support. These samples
showcase various popular use cases for neural networks powered by the WebNN API.</p>

<h2 id="vision">Vision</h2>

<p>We will continuously develop the WebNN Polyfill to keep it aligned with the WebNN API
spec. At the same time, WebNN API development in <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=1273291">Chromium</a>
is in full swing; we hope it won’t take too long to bring hardware-acceleration
access to Web developers through the WebNN API.</p>

<p>🚀 Following the two-year incubation period in this Community Group, the W3C has launched the <a href="https://www.w3.org/groups/wg/webmachinelearning">Web Machine Learning Working Group</a> to standardize the <a href="https://www.w3.org/TR/webnn/">Web Neural Network API</a>, now graduating from its incubation stage. This Community Group continues its incubation function for new machine learning capabilities working in parallel with the newly formed Working Group, similarly to e.g. W3C’s WebAssembly and WebGPU efforts.</p>

<!-- more -->

<p>👏 Huge thanks to all the <a href="https://www.w3.org/community/webmachinelearning/">W3C Community Group</a> and <a href="https://www.w3.org/2020/06/machine-learning-workshop/">W3C workshop participants</a> for their contributions that have helped shape this work, and W3C for providing a venue to advance this cross-industry effort toward wide adoption.</p>

<p>📢 Please join the <a href="https://www.w3.org/2004/01/pp-impl/130674/instructions">Working Group</a> and read more about our journey and what lies ahead of us <a href="../20/w3c-launches-the-web-machine-learning-working-group-our-journey.html">here</a> or from the <a href="https://www.w3.org/blog/2021/04/w3c-launches-the-web-machine-learning-working-group/">W3C blog post</a>.</p>

<p>Anssi Kostiainen<br />
Web Machine Learning Community Group Chair</p>]]></content><author><name>Anssi Kostiainen</name></author><category term="blog" /><summary type="html"><![CDATA[🌱 This W3C Community Group started incubating work in 2018 for a possible Web Neural Network API, in response to encouraging feedback from a TPAC breakout session. Starting October 2018, this Community Group identified key use cases working with diverse participants including major browser vendors, key ML JS frameworks, interested hardware vendors, web developers, and started drafting the Web Neural Network API specification. 🚀 Following the two-year incubation period in this Community Group, the W3C has launched the Web Machine Learning Working Group to standardize the Web Neural Network API, now graduating from its incubation stage. This Community Group continues its incubation function for new machine learning capabilities working in parallel with the newly formed Working Group, similarly to e.g. W3C’s WebAssembly and WebGPU efforts.]]></summary></entry><entry><title type="html">W3C Launches the Web Machine Learning Working Group</title><link href="/blog/2021/04/20/w3c-launches-the-web-machine-learning-working-group.html" rel="alternate" type="text/html" title="W3C Launches the Web Machine Learning Working Group" /><published>2021-04-20T10:00:00+00:00</published><updated>2021-04-20T10:00:00+00:00</updated><id>/blog/2021/04/20/w3c-launches-the-web-machine-learning-working-group</id><content type="html" xml:base="/blog/2021/04/20/w3c-launches-the-web-machine-learning-working-group.html"><![CDATA[<h2 id="introduction">Introduction</h2>

<p>Machine Learning (ML) is a branch of Artificial Intelligence. A subfield of ML called Deep Learning with its various neural network architectures enables new compelling user experiences for web applications. <a href="https://www.w3.org/TR/webnn/#usecases">Use cases</a> range from improved video conferencing to accessibility-improving features, with potential improved privacy over cloud-based solutions. Enabling these use cases and more is the focus of the newly launched <a href="https://www.w3.org/groups/wg/webmachinelearning">Web Machine Learning Working Group</a>.</p>

<p><img src="/assets/images/webml-logo-sm.png" alt="WebNN Logo" /></p>

<h2 id="progress">Progress</h2>

<p>While some of these use cases can be implemented on-device in a constrained manner with existing Web APIs (e.g. the WebGL graphics API or, in the future, <a href="https://gpuweb.github.io/gpuweb/">WebGPU</a>), the lack of access to platform capabilities such as dedicated ML hardware accelerators and native instructions constrains the scope of experiences and leads to inefficient implementations on modern hardware.</p>

<!-- more -->

<p>With these design goals in mind, a <a href="https://www.w3.org/community/webmachinelearning/">W3C Community Group</a> started incubating work in 2018 for a possible Web Neural Network API, in response to encouraging feedback from a <a href="https://www.w3.org/2018/10/24-webmachinelearning-minutes.html">TPAC breakout session</a>. Starting October 2018, this Community Group identified key use cases working with diverse participants including major browser vendors, key ML JS frameworks, interested hardware vendors, and web developers. After identification of the key use cases, the group decomposed the key use cases into requirements and started drafting the <a href="https://webmachinelearning.github.io/webnn">Web Neural Network API specification</a> in mid-2019. The aim of this use case-driven design process was to put user needs first.</p>

<h3 id="quotes">Quotes</h3>

<blockquote>
  <p>“Having access to the native ML accelerators, machine learning frameworks such as TensorFlow.js can greatly improve model execution efficiency and truly democratize ML for web developers.”</p>

  <p>– Ping Yu, TLM for <a href="https://www.tensorflow.org/js">TensorFlow.js</a> at Google</p>
</blockquote>

<blockquote>
  <p>“The <a href="https://www.w3.org/2020/06/machine-learning-workshop/talks/access_purpose_built_ml_hardware_with_web_neural_network_api.html#slide-10">early empirical results from the Web Neural Network API implementations</a> demonstrate tremendous power &amp; performance improvements of the Web AI workloads. Through access to the full native AI capabilities of the modern heterogeneous hardware, the Web Neural Network API enables a whole new transformative class of intelligent user experiences on the Open Web Platform across a variety of hardware, software, and device types.”</p>

  <p>– Ningxin Hu, Principal Engineer, Web Platform Engineering at Intel</p>
</blockquote>

<p>W3C organized a <a href="https://www.w3.org/2020/06/machine-learning-workshop">workshop on Web and Machine Learning</a> over the course of August and September 2020. This workshop brought together web platform and machine learning practitioners to survey the broader intersection of Web technologies and Machine Learning, and one of the <a href="https://www.w3.org/2020/06/machine-learning-workshop/report.html">conclusions of the Workshop</a> was to propose that <a href="https://lists.w3.org/Archives/Public/public-new-work/2021Feb/0007.html">a new W3C Working Group should be formed</a> to standardize the Web Neural Network API, graduating from its incubation stage. As of 2021, the Community Group continues its incubation function working in parallel with the Working Group, similarly to e.g. W3C’s WebAssembly and WebGPU efforts.</p>

<blockquote>
  <p>“The Web Neural Network API is a very important step toward the future of the Intelligent Web where AI is infused into the user’s daily web experiences. With the current advances and the pace of innovations in the AI hardware landscape, it’ll help connect those experiences from the clouds and make them personal to the users through seamless native hardware performance on the edge devices across the entire web. That’s the future worth dreaming about!”</p>

  <p>– <a href="https://www.w3.org/2020/06/machine-learning-workshop/talks/accelerated_graphics_and_compute_api_for_machine_learning_directml.html">Chai Chaoweeraprasit</a>, Principal Engineering Lead, Machine Learning and Compute Platform at Microsoft</p>
</blockquote>

<p>The <a href="https://www.w3.org/groups/wg/webmachinelearning">Web Machine Learning Working Group</a> plans to publish the First Public Working Draft of the Web Neural Network API during the first half of 2021 and welcomes new participants from the diverse W3C community to help identify new use cases, document ethical risks and their mitigations, contribute to technical work, and conduct wide reviews in privacy, security, accessibility, and other important areas so that the perspectives of the diverse web community are heard. Given the ethical impact of Machine Learning algorithms, such reviews will be particularly critical to the work of the group. Join us!</p>

<p>We would like to thank all the W3C Community Group and W3C workshop participants for their contributions that have helped shape this work, and W3C for providing a venue to advance this cross-industry effort toward wide adoption.</p>

<p><em>This post is co-authored by Anssi Kostiainen (Working Group Chair), Ningxin Hu and Chai Chaoweeraprasit (Web Neural Network API Editors), and Ping Yu (TensorFlow.js Core team).</em></p>

<p><em>Source: <a href="https://www.w3.org/blog/2021/04/w3c-launches-the-web-machine-learning-working-group/">W3C Launches the Web Machine Learning Working Group</a></em></p>]]></content><author><name>Dominique Hazaël-Massieux</name></author><category term="blog" /><summary type="html"><![CDATA[Introduction Machine Learning (ML) is a branch of Artificial Intelligence. A subfield of ML called Deep Learning with its various neural network architectures enables new compelling user experiences for web applications. Use cases range from improved video conferencing to accessibility-improving features, with potential improved privacy over cloud-based solutions. Enabling these use cases and more is the focus of the newly launched Web Machine Learning Working Group. Progress While some of these use cases can be implemented in-device in a constrained manner with existing Web APIs (e.g. WebGL graphics API or in the future WebGPU), the lack of access to platform capabilities such as dedicated ML hardware accelerators and native instructions constraint the scope of experiences and leads to inefficient implementations on modern hardware.]]></summary></entry><entry><title type="html">WebNN-Native Build and Run</title><link href="/doc/2021/03/25/webnn-native-build-and-run.html" rel="alternate" type="text/html" title="WebNN-Native Build and Run" /><published>2021-03-25T10:00:00+00:00</published><updated>2021-03-25T10:00:00+00:00</updated><id>/doc/2021/03/25/webnn-native-build-and-run</id><content type="html" xml:base="/doc/2021/03/25/webnn-native-build-and-run.html"><![CDATA[<p>WebNN-native is a native implementation of the <a href="https://www.w3.org/TR/webnn/">Web Neural Network API</a>.</p>

<p>It provides several building blocks:</p>

<ul>
  <li><strong>WebNN C/C++ headers</strong> that applications and other building blocks use.
    <ul>
      <li>The <code class="language-plaintext highlighter-rouge">webnn.h</code> header, a one-to-one mapping of the WebNN IDL.</li>
      <li>A C++ wrapper for <code class="language-plaintext highlighter-rouge">webnn.h</code>.</li>
    </ul>
  </li>
  <li><strong>Backend implementations</strong> that use platforms’ ML APIs:
    <ul>
      <li><strong>DirectML</strong> on Windows 10</li>
      <li><strong>OpenVINO</strong> on Windows 10 and Linux</li>
      <li><strong>oneDNN</strong> on Windows 10 and Linux</li>
      <li><strong>XNNPACK</strong> on Windows 10 and Linux</li>
      <li><em>Other backends are to be added</em></li>
    </ul>
  </li>
</ul>

<!-- more -->

<p>WebNN-native uses the code of other open source projects:</p>

<ul>
  <li>The code generator and infrastructure code of <a href="https://dawn.googlesource.com/dawn/">Dawn</a> project.</li>
  <li>The DirectMLX and device wrapper of <a href="https://github.com/microsoft/DirectML">DirectML</a> project.</li>
  <li>The <a href="https://github.com/google/XNNPACK">XNNPACK</a> library.</li>
</ul>

<h2 id="build-and-run">Build and Run</h2>

<h3 id="install-depot_tools">Install <code class="language-plaintext highlighter-rouge">depot_tools</code></h3>

<p>WebNN-native uses the Chromium build system and dependency management so you need to <a href="http://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/depot_tools_tutorial.html#_setting_up">install depot_tools</a> and add it to the PATH.</p>

<p><strong>Notes</strong>:</p>

<ul>
  <li>On Windows, you’ll need to set the environment variable <code class="language-plaintext highlighter-rouge">DEPOT_TOOLS_WIN_TOOLCHAIN=0</code>. This tells depot_tools to use your locally installed version of Visual Studio (by default, depot_tools will try to download a Google-internal version).</li>
</ul>
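<p>Concretely, the variable can be set as follows before running the gclient and build commands below (in cmd.exe the equivalent is “set DEPOT_TOOLS_WIN_TOOLCHAIN=0”):</p>

```shell
# Tell depot_tools to use the locally installed Visual Studio
# (POSIX-style shell, e.g. Git Bash on Windows):
export DEPOT_TOOLS_WIN_TOOLCHAIN=0
echo "$DEPOT_TOOLS_WIN_TOOLCHAIN"
```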

<h3 id="get-the-code">Get the code</h3>

<p>Get the source code as follows:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Clone the repo as "webnn-native"</span>
<span class="o">&gt;</span> git clone https://github.com/webmachinelearning/webnn-native.git webnn-native <span class="o">&amp;&amp;</span> <span class="nb">cd </span>webnn-native

<span class="c"># Bootstrap the gclient configuration</span>
<span class="o">&gt;</span> <span class="nb">cp </span>scripts/standalone.gclient .gclient

<span class="c"># Fetch external dependencies and toolchains with gclient</span>
<span class="o">&gt;</span> gclient <span class="nb">sync</span>
</code></pre></div></div>

<h3 id="setting-up-the-build">Setting up the build</h3>

<p>Generate build files using <code class="language-plaintext highlighter-rouge">gn args out/Debug</code> or <code class="language-plaintext highlighter-rouge">gn args out/Release</code>.</p>

<p>A text editor will appear asking for build options; the most common option is <code class="language-plaintext highlighter-rouge">is_debug=true/false</code>. Running <code class="language-plaintext highlighter-rouge">gn args out/Release --list</code> shows all the possible options.</p>

<p>To enable a particular backend, set the corresponding build option:</p>

<ul>
  <li>DirectML backend: <code class="language-plaintext highlighter-rouge">webnn_enable_dml=true</code></li>
  <li>OpenVINO backend: <code class="language-plaintext highlighter-rouge">webnn_enable_openvino=true</code></li>
  <li>oneDNN backend: <code class="language-plaintext highlighter-rouge">webnn_enable_onednn=true</code></li>
  <li>XNNPACK backend: <code class="language-plaintext highlighter-rouge">webnn_enable_xnnpack=true</code></li>
</ul>
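<p>These options go into the args file that gn opens in the editor. As a hypothetical example, an out/Release/args.gn for a release build with the DirectML backend enabled might contain:</p>

```
# out/Release/args.gn (illustrative)
is_debug = false
webnn_enable_dml = true
```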

<h3 id="build">Build</h3>

<p>Then use <code class="language-plaintext highlighter-rouge">ninja -C out/Release</code> or <code class="language-plaintext highlighter-rouge">ninja -C out/Debug</code> to build WebNN-native.</p>

<p><strong>Notes</strong></p>

<ul>
  <li>To build with XNNPACK backend, please build XNNPACK first, e.g. by <a href="https://github.com/google/XNNPACK/blob/master/scripts/build-local.sh"><code class="language-plaintext highlighter-rouge">XNNPACK/scripts/build-local.sh</code></a>.</li>
</ul>

<h3 id="run-tests">Run tests</h3>

<p>Run unit tests, for example <code class="language-plaintext highlighter-rouge">./out/Release/webnn_unittests</code>.</p>

<p>Run end2end tests, for example <code class="language-plaintext highlighter-rouge">./out/Release/webnn_end2end_tests</code>.</p>

<p><strong>Notes</strong>:</p>

<ul>
  <li>For OpenVINO backend, please <a href="https://docs.openvinotoolkit.org/2021.2/openvino_docs_install_guides_installing_openvino_linux.html#install-openvino">install 2021.2 version</a> and <a href="https://docs.openvinotoolkit.org/2021.2/openvino_docs_install_guides_installing_openvino_linux.html#set-the-environment-variables">set the environment variables</a> before running the end2end tests.</li>
  <li>For oneDNN backend on Linux, please set the <code class="language-plaintext highlighter-rouge">LD_LIBRARY_PATH</code> environment variable to the out folder before running the end2end tests, e.g. <code class="language-plaintext highlighter-rouge">LD_LIBRARY_PATH=./out/Release ./out/Release/webnn_end2end_tests</code>.</li>
</ul>

<h2 id="license">License</h2>

<p>Apache 2.0 Public License.</p>]]></content><author><name>Ningxin, Junwei, Bruce and Mingming</name></author><category term="doc" /><summary type="html"><![CDATA[WebNN-native is a native implementation of the Web Neural Network API. It provides several building blocks: WebNN C/C++ headers that applications and other building blocks use. The webnn.h that is an one-to-one mapping with the WebNN IDL. A C++ wrapper for the webnn.h Backend implementations that use platforms’ ML APIs: DirectML on Windows 10 OpenVINO on Windows 10 and Linux oneDNN on Windows 10 and Linux XNNPACK on Windows 10 and Linux Other backends are to be added]]></summary></entry><entry><title type="html">Noise Suppression Net 2 (NSNet2)</title><link href="/get-started/2021/03/17/noise-suppression-net-v2.html" rel="alternate" type="text/html" title="Noise Suppression Net 2 (NSNet2)" /><published>2021-03-17T12:42:03+00:00</published><updated>2021-03-17T12:42:03+00:00</updated><id>/get-started/2021/03/17/noise-suppression-net-v2</id><content type="html" xml:base="/get-started/2021/03/17/noise-suppression-net-v2.html"><![CDATA[<p>In the WebNN API, the <a href="https://www.w3.org/TR/webnn/#operand"><code class="language-plaintext highlighter-rouge">Operand</code></a> objects represent input, output, and constant multi-dimensional arrays known as <a href="https://mathworld.wolfram.com/Tensor.html">tensors</a>. The <a href="https://www.w3.org/TR/webnn/#api-mlcontext"><code class="language-plaintext highlighter-rouge">NeuralNetworkContext</code></a> defines a set of operations that facilitate the construction and execution of this computational graph. Such operations may be accelerated with dedicated hardware such as the GPUs, CPUs with extensions for deep learning, or dedicated ML accelerators. These operations defined by the WebNN API are required by <a href="https://github.com/webmachinelearning/webnn/blob/master/op_compatibility/first_wave_models.md">models</a> that address key application use cases. 
Additionally, the WebNN API provides affordances to build a computational graph, compile the graph, execute the graph, and integrate the graph with other Web APIs that provide input data to the graph, e.g. media APIs for image or video frames and sensor APIs for sensory data.</p>

<p>This <a href="https://www.w3.org/TR/webnn/#examples">example</a> builds, compiles, and executes a graph comprising three ops that takes four inputs and returns one output.</p>

<!-- more -->

<p>There are many important <a href="https://www.w3.org/TR/webnn/#usecases-application">application use cases</a> for high-performance neural network inference. One such use case is deep-learning noise suppression (DNS) in web-based video conferencing. The following sample shows how the <a href="https://github.com/microsoft/DNS-Challenge/tree/master/NSNet2-baseline">NSNet2</a> baseline deep learning-based noise suppression model may be implemented using the WebNN API.</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Noise Suppression Net 2 (NSNet2)</span>
<span class="c1">// Baseline Model for Deep Noise Suppression Challenge (DNS) 2021.</span>
<span class="k">export</span> <span class="kd">class</span> <span class="nx">NSNet2</span> <span class="p">{</span>
  <span class="kd">constructor</span><span class="p">()</span> <span class="p">{</span>
    <span class="k">this</span><span class="p">.</span><span class="nx">context_</span> <span class="o">=</span> <span class="kc">null</span><span class="p">;</span>
    <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span> <span class="o">=</span> <span class="kc">null</span><span class="p">;</span>
    <span class="k">this</span><span class="p">.</span><span class="nx">graph_</span> <span class="o">=</span> <span class="kc">null</span><span class="p">;</span>
    <span class="k">this</span><span class="p">.</span><span class="nx">frameSize</span> <span class="o">=</span> <span class="mi">161</span><span class="p">;</span>
    <span class="k">this</span><span class="p">.</span><span class="nx">hiddenSize</span> <span class="o">=</span> <span class="mi">400</span><span class="p">;</span>
  <span class="p">}</span>

  <span class="k">async</span> <span class="nx">load</span><span class="p">(</span><span class="nx">contextOptions</span><span class="p">,</span> <span class="nx">baseUrl</span><span class="p">,</span> <span class="nx">batchSize</span><span class="p">,</span> <span class="nx">frames</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">this</span><span class="p">.</span><span class="nx">context_</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">navigator</span><span class="p">.</span><span class="nx">ml</span><span class="p">.</span><span class="nx">createContext</span><span class="p">(</span><span class="nx">contextOptions</span><span class="p">);</span>
    <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">MLGraphBuilder</span><span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">context_</span><span class="p">);</span>
    <span class="c1">// Create constants by loading pre-trained data from .npy files.</span>
    <span class="kd">const</span> <span class="nx">weight172</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">172.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">biasFcIn0</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">fc_in_0_bias.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">weight192</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">192.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">recurrentWeight193</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">193.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">bias194</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">194_0.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">recurrentBias194</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">194_1.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">weight212</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">212.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">recurrentWeight213</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">213.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">bias214</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">214_0.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">recurrentBias214</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">214_1.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">weight215</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">215.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">biasFcOut0</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">fc_out_0_bias.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">weight216</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">216.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">biasFcOut2</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">fc_out_2_bias.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">weight217</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">217.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">biasFcOut4</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">buildConstantByNpy</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">,</span>
      <span class="nx">baseUrl</span> <span class="o">+</span> <span class="dl">"</span><span class="s2">fc_out_4_bias.npy</span><span class="dl">"</span>
    <span class="p">);</span>
    <span class="c1">// Build up the network.</span>
    <span class="kd">const</span> <span class="nx">input</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">input</span><span class="p">(</span><span class="dl">"</span><span class="s2">input</span><span class="dl">"</span><span class="p">,</span> <span class="p">{</span>
      <span class="na">type</span><span class="p">:</span> <span class="dl">"</span><span class="s2">float32</span><span class="dl">"</span><span class="p">,</span>
      <span class="na">dimensions</span><span class="p">:</span> <span class="p">[</span><span class="nx">batchSize</span><span class="p">,</span> <span class="nx">frames</span><span class="p">,</span> <span class="k">this</span><span class="p">.</span><span class="nx">frameSize</span><span class="p">],</span>
    <span class="p">});</span>
    <span class="kd">const</span> <span class="nx">relu20</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">relu</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">add</span><span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">matmul</span><span class="p">(</span><span class="nx">input</span><span class="p">,</span> <span class="nx">weight172</span><span class="p">),</span> <span class="nx">biasFcIn0</span><span class="p">)</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">transpose31</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">transpose</span><span class="p">(</span><span class="nx">relu20</span><span class="p">,</span> <span class="p">{</span>
      <span class="na">permutation</span><span class="p">:</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">],</span>
    <span class="p">});</span>
    <span class="kd">const</span> <span class="nx">initialState92</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">input</span><span class="p">(</span><span class="dl">"</span><span class="s2">initialState92</span><span class="dl">"</span><span class="p">,</span> <span class="p">{</span>
      <span class="na">type</span><span class="p">:</span> <span class="dl">"</span><span class="s2">float32</span><span class="dl">"</span><span class="p">,</span>
      <span class="na">dimensions</span><span class="p">:</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="nx">batchSize</span><span class="p">,</span> <span class="k">this</span><span class="p">.</span><span class="nx">hiddenSize</span><span class="p">],</span>
    <span class="p">});</span>
    <span class="kd">const</span> <span class="p">[</span><span class="nx">gru94</span><span class="p">,</span> <span class="nx">gru93</span><span class="p">]</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">gru</span><span class="p">(</span>
      <span class="nx">transpose31</span><span class="p">,</span>
      <span class="nx">weight192</span><span class="p">,</span>
      <span class="nx">recurrentWeight193</span><span class="p">,</span>
      <span class="nx">frames</span><span class="p">,</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">hiddenSize</span><span class="p">,</span>
      <span class="p">{</span>
        <span class="na">bias</span><span class="p">:</span> <span class="nx">bias194</span><span class="p">,</span>
        <span class="na">recurrentBias</span><span class="p">:</span> <span class="nx">recurrentBias194</span><span class="p">,</span>
        <span class="na">initialHiddenState</span><span class="p">:</span> <span class="nx">initialState92</span><span class="p">,</span>
        <span class="na">returnSequence</span><span class="p">:</span> <span class="kc">true</span><span class="p">,</span>
      <span class="p">}</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">squeeze95</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">squeeze</span><span class="p">(</span><span class="nx">gru93</span><span class="p">,</span> <span class="p">{</span> <span class="na">axes</span><span class="p">:</span> <span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="p">});</span>
    <span class="kd">const</span> <span class="nx">initialState155</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">input</span><span class="p">(</span><span class="dl">"</span><span class="s2">initialState155</span><span class="dl">"</span><span class="p">,</span> <span class="p">{</span>
      <span class="na">type</span><span class="p">:</span> <span class="dl">"</span><span class="s2">float32</span><span class="dl">"</span><span class="p">,</span>
      <span class="na">dimensions</span><span class="p">:</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="nx">batchSize</span><span class="p">,</span> <span class="k">this</span><span class="p">.</span><span class="nx">hiddenSize</span><span class="p">],</span>
    <span class="p">});</span>
    <span class="kd">const</span> <span class="p">[</span><span class="nx">gru157</span><span class="p">,</span> <span class="nx">gru156</span><span class="p">]</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">gru</span><span class="p">(</span>
      <span class="nx">squeeze95</span><span class="p">,</span>
      <span class="nx">weight212</span><span class="p">,</span>
      <span class="nx">recurrentWeight213</span><span class="p">,</span>
      <span class="nx">frames</span><span class="p">,</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">hiddenSize</span><span class="p">,</span>
      <span class="p">{</span>
        <span class="na">bias</span><span class="p">:</span> <span class="nx">bias214</span><span class="p">,</span>
        <span class="na">recurrentBias</span><span class="p">:</span> <span class="nx">recurrentBias214</span><span class="p">,</span>
        <span class="na">initialHiddenState</span><span class="p">:</span> <span class="nx">initialState155</span><span class="p">,</span>
        <span class="na">returnSequence</span><span class="p">:</span> <span class="kc">true</span><span class="p">,</span>
      <span class="p">}</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">squeeze158</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">squeeze</span><span class="p">(</span><span class="nx">gru156</span><span class="p">,</span> <span class="p">{</span> <span class="na">axes</span><span class="p">:</span> <span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="p">});</span>
    <span class="kd">const</span> <span class="nx">transpose159</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">transpose</span><span class="p">(</span><span class="nx">squeeze158</span><span class="p">,</span> <span class="p">{</span>
      <span class="na">permutation</span><span class="p">:</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">],</span>
    <span class="p">});</span>
    <span class="kd">const</span> <span class="nx">relu163</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">relu</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">add</span><span class="p">(</span>
        <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">matmul</span><span class="p">(</span><span class="nx">transpose159</span><span class="p">,</span> <span class="nx">weight215</span><span class="p">),</span>
        <span class="nx">biasFcOut0</span>
      <span class="p">)</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">relu167</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">relu</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">add</span><span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">matmul</span><span class="p">(</span><span class="nx">relu163</span><span class="p">,</span> <span class="nx">weight216</span><span class="p">),</span> <span class="nx">biasFcOut2</span><span class="p">)</span>
    <span class="p">);</span>
    <span class="kd">const</span> <span class="nx">output</span> <span class="o">=</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">sigmoid</span><span class="p">(</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">add</span><span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">matmul</span><span class="p">(</span><span class="nx">relu167</span><span class="p">,</span> <span class="nx">weight217</span><span class="p">),</span> <span class="nx">biasFcOut4</span><span class="p">)</span>
    <span class="p">);</span>
    <span class="k">return</span> <span class="p">{</span> <span class="nx">output</span><span class="p">,</span> <span class="nx">gru94</span><span class="p">,</span> <span class="nx">gru157</span> <span class="p">};</span>
  <span class="p">}</span>

  <span class="k">async</span> <span class="nx">build</span><span class="p">(</span><span class="nx">outputOperand</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">this</span><span class="p">.</span><span class="nx">graph_</span> <span class="o">=</span> <span class="k">await</span> <span class="k">this</span><span class="p">.</span><span class="nx">builder_</span><span class="p">.</span><span class="nx">build</span><span class="p">(</span><span class="nx">outputOperand</span><span class="p">);</span>
  <span class="p">}</span>

  <span class="k">async</span> <span class="nx">compute</span><span class="p">(</span>
    <span class="nx">inputBuffer</span><span class="p">,</span>
    <span class="nx">initialState92Buffer</span><span class="p">,</span>
    <span class="nx">initialState155Buffer</span><span class="p">,</span>
    <span class="nx">outputBuffer</span><span class="p">,</span>
    <span class="nx">gru94Buffer</span><span class="p">,</span>
    <span class="nx">gru157Buffer</span>
  <span class="p">)</span> <span class="p">{</span>
    <span class="kd">const</span> <span class="nx">inputs</span> <span class="o">=</span> <span class="p">{</span>
      <span class="na">input</span><span class="p">:</span> <span class="nx">inputBuffer</span><span class="p">,</span>
      <span class="na">initialState92</span><span class="p">:</span> <span class="nx">initialState92Buffer</span><span class="p">,</span>
      <span class="na">initialState155</span><span class="p">:</span> <span class="nx">initialState155Buffer</span><span class="p">,</span>
    <span class="p">};</span>
    <span class="kd">const</span> <span class="nx">outputs</span> <span class="o">=</span> <span class="p">{</span>
      <span class="na">output</span><span class="p">:</span> <span class="nx">outputBuffer</span><span class="p">,</span>
      <span class="na">gru94</span><span class="p">:</span> <span class="nx">gru94Buffer</span><span class="p">,</span>
      <span class="na">gru157</span><span class="p">:</span> <span class="nx">gru157Buffer</span><span class="p">,</span>
    <span class="p">};</span>
    <span class="kd">const</span> <span class="nx">results</span> <span class="o">=</span> <span class="k">await</span> <span class="k">this</span><span class="p">.</span><span class="nx">context_</span><span class="p">.</span><span class="nx">compute</span><span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">graph_</span><span class="p">,</span> <span class="nx">inputs</span><span class="p">,</span> <span class="nx">outputs</span><span class="p">);</span>
    <span class="k">return</span> <span class="nx">results</span><span class="p">.</span><span class="nx">outputs</span><span class="p">;</span>
  <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
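<p>Note that the buffers passed to <code class="language-plaintext highlighter-rouge">compute()</code> must match the operand shapes declared in <code class="language-plaintext highlighter-rouge">load()</code>. The following sketch (the helper name is hypothetical, not part of the sample) shows how the typed-array lengths follow from those shapes:</p>

```javascript
// Derives the typed-array lengths NSNet2's compute() expects from the
// shapes declared in load(): input is [batchSize, frames, frameSize],
// and each GRU initial hidden state is [1, batchSize, hiddenSize].
const FRAME_SIZE = 161;
const HIDDEN_SIZE = 400;

function nsnet2BufferSizes(batchSize, frames) {
  return {
    input: batchSize * frames * FRAME_SIZE,
    initialState: 1 * batchSize * HIDDEN_SIZE,
    output: batchSize * frames * FRAME_SIZE,
  };
}

const sizes = nsnet2BufferSizes(1, 100);
```

<p>For example, a single batch of 100 frames needs a 16,100-element <code class="language-plaintext highlighter-rouge">Float32Array</code> for the input and a 400-element one for each initial hidden state.</p>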

<p>Try the live version of the <a href="https://webmachinelearning.github.io/webnn-samples/nsnet2/">WebNN NSNet2 example</a>. This live version builds upon <a href="https://github.com/webmachinelearning/webnn-samples/blob/master/nsnet2/nsnet2.js">nsnet2.js</a>, which implements the above code snippet as a JS module.</p>]]></content><author><name>Web Machine Learning Working Group</name></author><category term="get-started" /><summary type="html"><![CDATA[In the WebNN API, the Operand objects represent input, output, and constant multi-dimensional arrays known as tensors. The NeuralNetworkContext defines a set of operations that facilitate the construction and execution of this computational graph. Such operations may be accelerated with dedicated hardware such as GPUs, CPUs with extensions for deep learning, or dedicated ML accelerators. These operations defined by the WebNN API are required by models that address key application use cases. Additionally, the WebNN API provides affordances to build a computational graph, compile the graph, execute the graph, and integrate the graph with other Web APIs that provide input data to the graph, e.g. media APIs for image or video frames and sensor APIs for sensory data. This example builds, compiles, and executes a graph comprising three ops that takes four inputs and returns one output.]]></summary></entry><entry><title type="html">Build Your First Graph with WebNN API</title><link href="/get-started/2021/03/15/build-your-first-graph-with-webnn-api.html" rel="alternate" type="text/html" title="Build Your First Graph with WebNN API" /><published>2021-03-15T12:42:03+00:00</published><updated>2021-03-15T12:42:03+00:00</updated><id>/get-started/2021/03/15/build-your-first-graph-with-webnn-api</id><content type="html" xml:base="/get-started/2021/03/15/build-your-first-graph-with-webnn-api.html"><![CDATA[<p>A core abstraction behind popular neural networks is a
computational graph, a directed graph with its nodes corresponding to
operations (ops) and input variables. One node’s output value is the input
to another node.</p>

<p>The WebNN API brings this abstraction to the web.</p>

<p>In the WebNN API, the Operand objects represent
input, output, and constant multi-dimensional arrays known
as tensors. The NeuralNetworkContext defines a set of operations
that facilitate the construction and execution of this computational
graph. Such operations may be accelerated with dedicated hardware such as
GPUs, CPUs with extensions for deep learning, or dedicated
ML accelerators. These operations defined by the WebNN API are required
by models that address key application use cases.</p>

<!-- more -->

<p>Additionally,
the WebNN API provides affordances to build a computational graph,
compile the graph, execute the graph, and integrate the graph with other Web
APIs that provide input data to the graph, e.g. media APIs for image or
video frames and sensor APIs for sensory data.</p>

<p>This example builds,
compiles, and executes a graph comprising three ops that takes four inputs
and returns one output:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">context</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">navigator</span><span class="p">.</span><span class="nx">ml</span><span class="p">.</span><span class="nx">createContext</span><span class="p">({</span><span class="na">powerPreference</span><span class="p">:</span> <span class="dl">'</span><span class="s1">low-power</span><span class="dl">'</span><span class="p">});</span>

<span class="c1">// The following code builds a graph as:</span>
<span class="c1">// constant1 ---+</span>
<span class="c1">//              +--- Add ---&gt; intermediateOutput1 ---+</span>
<span class="c1">// input1    ---+                                    |</span>
<span class="c1">//                                                   +--- Mul---&gt; output</span>
<span class="c1">// constant2 ---+                                    |</span>
<span class="c1">//              +--- Add ---&gt; intermediateOutput2 ---+</span>
<span class="c1">// input2    ---+</span>

<span class="c1">// Use tensors in 4 dimensions.</span>
<span class="kd">const</span> <span class="nx">TENSOR_DIMS</span> <span class="o">=</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">];</span>
<span class="kd">const</span> <span class="nx">TENSOR_SIZE</span> <span class="o">=</span> <span class="mi">8</span><span class="p">;</span>

<span class="kd">const</span> <span class="nx">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">MLGraphBuilder</span><span class="p">(</span><span class="nx">context</span><span class="p">);</span>

<span class="c1">// Create OperandDescriptor object.</span>
<span class="kd">const</span> <span class="nx">desc</span> <span class="o">=</span> <span class="p">{</span><span class="na">type</span><span class="p">:</span> <span class="dl">'</span><span class="s1">float32</span><span class="dl">'</span><span class="p">,</span> <span class="na">dimensions</span><span class="p">:</span> <span class="nx">TENSOR_DIMS</span><span class="p">};</span>

<span class="c1">// constant1 is a constant operand with the value 0.5.</span>
<span class="kd">const</span> <span class="nx">constantBuffer1</span> <span class="o">=</span> <span class="k">new</span> <span class="nb">Float32Array</span><span class="p">(</span><span class="nx">TENSOR_SIZE</span><span class="p">).</span><span class="nx">fill</span><span class="p">(</span><span class="mf">0.5</span><span class="p">);</span>
<span class="kd">const</span> <span class="nx">constant1</span> <span class="o">=</span> <span class="nx">builder</span><span class="p">.</span><span class="nx">constant</span><span class="p">(</span><span class="nx">desc</span><span class="p">,</span> <span class="nx">constantBuffer1</span><span class="p">);</span>

<span class="c1">// input1 is one of the input operands. Its value will be set before execution.</span>
<span class="kd">const</span> <span class="nx">input1</span> <span class="o">=</span> <span class="nx">builder</span><span class="p">.</span><span class="nx">input</span><span class="p">(</span><span class="dl">'</span><span class="s1">input1</span><span class="dl">'</span><span class="p">,</span> <span class="nx">desc</span><span class="p">);</span>

<span class="c1">// constant2 is another constant operand with the value 0.5.</span>
<span class="kd">const</span> <span class="nx">constantBuffer2</span> <span class="o">=</span> <span class="k">new</span> <span class="nb">Float32Array</span><span class="p">(</span><span class="nx">TENSOR_SIZE</span><span class="p">).</span><span class="nx">fill</span><span class="p">(</span><span class="mf">0.5</span><span class="p">);</span>
<span class="kd">const</span> <span class="nx">constant2</span> <span class="o">=</span> <span class="nx">builder</span><span class="p">.</span><span class="nx">constant</span><span class="p">(</span><span class="nx">desc</span><span class="p">,</span> <span class="nx">constantBuffer2</span><span class="p">);</span>

<span class="c1">// input2 is another input operand. Its value will be set before execution.</span>
<span class="kd">const</span> <span class="nx">input2</span> <span class="o">=</span> <span class="nx">builder</span><span class="p">.</span><span class="nx">input</span><span class="p">(</span><span class="dl">'</span><span class="s1">input2</span><span class="dl">'</span><span class="p">,</span> <span class="nx">desc</span><span class="p">);</span>

<span class="c1">// intermediateOutput1 is the output of the first Add operation.</span>
<span class="kd">const</span> <span class="nx">intermediateOutput1</span> <span class="o">=</span> <span class="nx">builder</span><span class="p">.</span><span class="nx">add</span><span class="p">(</span><span class="nx">constant1</span><span class="p">,</span> <span class="nx">input1</span><span class="p">);</span>

<span class="c1">// intermediateOutput2 is the output of the second Add operation.</span>
<span class="kd">const</span> <span class="nx">intermediateOutput2</span> <span class="o">=</span> <span class="nx">builder</span><span class="p">.</span><span class="nx">add</span><span class="p">(</span><span class="nx">constant2</span><span class="p">,</span> <span class="nx">input2</span><span class="p">);</span>

<span class="c1">// output is the output operand of the Mul operation.</span>
<span class="kd">const</span> <span class="nx">output</span> <span class="o">=</span> <span class="nx">builder</span><span class="p">.</span><span class="nx">mul</span><span class="p">(</span><span class="nx">intermediateOutput1</span><span class="p">,</span> <span class="nx">intermediateOutput2</span><span class="p">);</span>

<span class="c1">// Build graph.</span>
<span class="kd">const</span> <span class="nx">graph</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">builder</span><span class="p">.</span><span class="nx">build</span><span class="p">({</span><span class="dl">'</span><span class="s1">output</span><span class="dl">'</span><span class="p">:</span> <span class="nx">output</span><span class="p">});</span>

<span class="c1">// Setup the input buffers with value 1.</span>
<span class="kd">const</span> <span class="nx">inputBuffer1</span> <span class="o">=</span> <span class="k">new</span> <span class="nb">Float32Array</span><span class="p">(</span><span class="nx">TENSOR_SIZE</span><span class="p">).</span><span class="nx">fill</span><span class="p">(</span><span class="mi">1</span><span class="p">);</span>
<span class="kd">const</span> <span class="nx">inputBuffer2</span> <span class="o">=</span> <span class="k">new</span> <span class="nb">Float32Array</span><span class="p">(</span><span class="nx">TENSOR_SIZE</span><span class="p">).</span><span class="nx">fill</span><span class="p">(</span><span class="mi">1</span><span class="p">);</span>

<span class="c1">// Asynchronously execute the built model with the specified inputs.</span>
<span class="kd">const</span> <span class="nx">inputs</span> <span class="o">=</span> <span class="p">{</span>
  <span class="dl">'</span><span class="s1">input1</span><span class="dl">'</span><span class="p">:</span> <span class="p">{</span><span class="na">data</span><span class="p">:</span> <span class="nx">inputBuffer1</span><span class="p">},</span>
  <span class="dl">'</span><span class="s1">input2</span><span class="dl">'</span><span class="p">:</span> <span class="p">{</span><span class="na">data</span><span class="p">:</span> <span class="nx">inputBuffer2</span><span class="p">},</span>
<span class="p">};</span>
<span class="kd">const</span> <span class="nx">outputs</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">graph</span><span class="p">.</span><span class="nx">compute</span><span class="p">(</span><span class="nx">inputs</span><span class="p">);</span>

<span class="c1">// Log the shape and computed result of the output operand.</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">Output shape: </span><span class="dl">'</span> <span class="o">+</span> <span class="nx">outputs</span><span class="p">.</span><span class="nx">output</span><span class="p">.</span><span class="nx">dimensions</span><span class="p">);</span>
<span class="c1">// Output shape: 1,2,2,2</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">Output value: </span><span class="dl">'</span> <span class="o">+</span> <span class="nx">outputs</span><span class="p">.</span><span class="nx">output</span><span class="p">.</span><span class="nx">data</span><span class="p">);</span>
<span class="c1">// Output value: 2.25,2.25,2.25,2.25,2.25,2.25,2.25,2.25</span>
</code></pre></div></div>
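<p>As a sanity check, the graph's result can be reproduced with plain JavaScript arithmetic, no WebNN required: every element of the output is <code>(0.5 + 1) * (0.5 + 1) = 2.25</code>. The sketch below mirrors the sample's constants and inputs elementwise (the <code>TENSOR_SIZE</code> of 8 follows from the 1×2×2×2 shape logged above).</p>

```javascript
// Plain-JavaScript sanity check of the WebNN sample graph:
// output = (constant1 + input1) * (constant2 + input2), elementwise.
// Shape [1, 2, 2, 2] means 8 elements in total.
const TENSOR_SIZE = 8;

const constant1 = new Float32Array(TENSOR_SIZE).fill(0.5);
const constant2 = new Float32Array(TENSOR_SIZE).fill(0.5);
const input1 = new Float32Array(TENSOR_SIZE).fill(1);
const input2 = new Float32Array(TENSOR_SIZE).fill(1);

const output = Float32Array.from(
  { length: TENSOR_SIZE },
  (_, i) => (constant1[i] + input1[i]) * (constant2[i] + input2[i])
);

console.log('Output value: ' + Array.from(output).join(','));
// Output value: 2.25,2.25,2.25,2.25,2.25,2.25,2.25,2.25
```

<p>This only verifies the arithmetic; the WebNN sample additionally exercises graph building and hardware-accelerated execution, which a plain loop cannot stand in for.</p>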

<p>Try the live version of the <a href="https://webmachinelearning.github.io/webnn-samples/code/">WebNN simple graphs example</a>.</p>]]></content><author><name>Web Machine Learning Working Group</name></author><category term="get-started" /><summary type="html"><![CDATA[A core abstraction behind popular neural networks is a computational graph, a directed graph with its nodes corresponding to operations (ops) and input variables. One node’s output value is the input to another node. The WebNN API brings this abstraction to the web. In the WebNN API, the Operand objects represent input, output, and constant multi-dimensional arrays known as tensors. The NeuralNetworkContext defines a set of operations that facilitate the construction and execution of this computational graph. Such operations may be accelerated with dedicated hardware such as the GPUs, CPUs with extensions for deep learning, or dedicated ML accelerators. These operations defined by the WebNN API are required by models that address key application use cases.]]></summary></entry><entry><title type="html">Call for Review: Web Machine Learning WG Charter</title><link href="/blog/2021/02/25/call-for-review-web-machine-learning-wg-charter.html" rel="alternate" type="text/html" title="Call for Review: Web Machine Learning WG Charter" /><published>2021-02-25T12:42:03+00:00</published><updated>2021-02-25T12:42:03+00:00</updated><id>/blog/2021/02/25/call-for-review-web-machine-learning-wg-charter</id><content type="html" xml:base="/blog/2021/02/25/call-for-review-web-machine-learning-wg-charter.html"><![CDATA[<p>Today W3C Advisory Committee Representatives received a Proposal
to review a <a href="https://www.w3.org/2021/02/proposed-machine-learning-charter.html">draft charter</a> for the Web Machine Learning Working Group.</p>

<p>As part of ensuring that the community is aware of proposed work
at W3C, this draft charter is public during the Advisory
Committee review period.</p>

<p>W3C invites public comments through 03:59 UTC on 2021-03-27
(23:59, Eastern time on 2021-03-26) on the proposed charter.
Please send comments to public-new-work@w3.org, which has a public archive <a href="http://lists.w3.org/Archives/Public/public-new-work/">lists.w3.org/Archives/Public/public-new-work</a>.</p>

<!-- more -->

<p>Other than comments sent in formal responses by W3C Advisory
Committee Representatives, W3C cannot guarantee a response to
comments. If you work for a W3C Member, please coordinate
your comments with your Advisory Committee Representative. For
example, you may wish to make public comments via this list and
have your Advisory Committee Representative refer to it from his
or her formal review comments.</p>

<p>If you should have any questions or need further information, please
contact Dominique Hazael-Massieux, Team Contact for the proposed
Web Machine Learning Working Group <a href="mailto:dom@w3.org">dom@w3.org</a>.</p>

<blockquote>
  <p>Source: <a href="https://lists.w3.org/Archives/Public/public-new-work/2021Feb/0007.html">Proposed W3C Charter: Web Machine Learning Working Group (until 2021-03-26/27)</a></p>
</blockquote>]]></content><author><name>Xueyuan Jia</name></author><category term="blog" /><summary type="html"><![CDATA[Today W3C Advisory Committee Representatives received a Proposal to review a draft charter for the Web Machine Learning Working Group. As part of ensuring that the community is aware of proposed work at W3C, this draft charter is public during the Advisory Committee review period. W3C invites public comments through 03:59 UTC on 2021-03-27 (23:59, Eastern time on 2021-03-26) on the proposed charter. Please send comments to public-new-work@w3.org, which has a public archive lists.w3.org/Archives/Public/public-new-work.]]></summary></entry><entry><title type="html">Call for Participation in Machine Learning for the Web Community Group</title><link href="/blog/2018/10/03/call-for-participation-in-machine-learning-for-the-web-community-group.html" rel="alternate" type="text/html" title="Call for Participation in Machine Learning for the Web Community Group" /><published>2018-10-03T12:42:03+00:00</published><updated>2018-10-03T12:42:03+00:00</updated><id>/blog/2018/10/03/call-for-participation-in-machine-learning-for-the-web-community-group</id><content type="html" xml:base="/blog/2018/10/03/call-for-participation-in-machine-learning-for-the-web-community-group.html"><![CDATA[<p>The <a href="https://www.w3.org/community/webmachinelearning">Machine Learning for the Web Community Group</a> has been launched.</p>

<p>The mission of the Machine Learning for the Web Community Group (WebML CG) is to make Machine Learning a first-class web citizen by incubating and developing a dedicated low-level Web API for machine learning inference in the browser. Please see the <a href="https://webmachinelearning.github.io/charter">charter</a> for more information.</p>

<p>The group invites browser engine developers, hardware vendors, web application developers, and the broader web community with interest in Machine Learning to participate.</p>

<!-- more -->

<p>In order to <a href="https://www.w3.org/community/webmachinelearning/join">join the group</a>, you will need a <a href="https://www.w3.org/accounts/request">W3C account</a>. Please note, however, that <a href="https://www.w3.org/community/about/faq/#is-w3c-membership-required-to-participate-in-a-community-or-business-group">W3C Membership</a> is not required to join a Community Group.</p>

<p>This is a community initiative. This group was originally proposed on 2018-10-03 by Anssi Kostiainen. The following people supported its creation: Anssi Kostiainen, Rijubrata Bhaumik, Zoltan Kis, Mike O'Neill, Philip Laszkowicz, Tomoyuki Shimizu. W3C’s hosting of this group does not imply endorsement of the activities.</p>

<p>The group must now <a href="https://www.w3.org/community/about/faq/#how-do-we-choose-a-chair">choose a chair</a>. Read more about <a href="https://www.w3.org/community/about/faq/#how-do-we-get-started-in-a-new-group">how to get started in a new group</a> and <a href="https://www.w3.org/community/about/good-practice-for-running-a-group/">good practice for running a group</a>.</p>

<p>We invite you to share news of this new group in social media and other channels.</p>

<p>If you believe that there is an issue with this group that requires the attention of the W3C staff, please email us at <a href="mailto:site-comments@w3.org">site-comments@w3.org</a>.</p>

<p>Thank you,<br />
W3C Community Development Team</p>]]></content><author><name>W3C Community Development Team</name></author><category term="blog" /><summary type="html"><![CDATA[The Machine Learning for the Web Community Group has been launched. The mission of the Machine Learning for the Web Community Group (WebML CG) is to make Machine Learning a first-class web citizen by incubating and developing a dedicated low-level Web API for machine learning inference in the browser. Please see the charter for more information. The group invites browser engine developers, hardware vendors, web application developers, and the broader web community with interest in Machine Learning to participate.]]></summary></entry></feed>