Thank you for your interest in contributing to ExecuTorch! We want to make it easy to contribute to this project.

## Dev Install

Set up your environment by following the instructions at https://pytorch.org/executorch/main/getting-started-setup to clone the repo and install the necessary requirements. Refer to this [document](docs/source/using-executorch-building-from-source.md) to build ExecuTorch from source.

### Dev Setup for Android

For Android, please refer to the [Android documentation](docs/source/using-executorch-android.md).

### Dev Setup for Apple

For Apple, please refer to the [iOS documentation](docs/source/using-executorch-ios.md).

## Codebase structure
```
executorch
├── backends - Backend delegate implementations for various hardware targets. Each backend uses a partitioner to split the graph into subgraphs that can be executed on specific hardware, a quantizer to optimize model precision, and runtime components to execute the graph on the target hardware. Refer to the backend documentation and the Export and Lowering tutorial for more information.
│   ├── apple - Apple-specific backends.
│   │   ├── coreml - CoreML backend for Apple devices. See doc.
│   │   └── mps - Metal Performance Shaders backend for Apple devices. See doc.
│   ├── arm - ARM architecture backends. See doc.
│   ├── cadence - Cadence-specific backends. See doc.
│   ├── example - Example backend implementations.
│   ├── mediatek - MediaTek-specific backends. See doc.
│   ├── openvino - OpenVINO backend for Intel hardware.
│   ├── qualcomm - Qualcomm-specific backends. See doc.
│   ├── transforms - Transformations for backend optimization.
│   ├── vulkan - Vulkan backend for cross-platform GPU support. See doc.
│   └── xnnpack - XNNPACK backend for optimized neural network operations. See doc.
├── codegen - Tooling to autogenerate bindings between kernels and the runtime.
├── configurations - Configuration files.
├── devtools - Model profiling, debugging, and inspection. Please refer to the tools documentation for more information.
│   ├── bundled_program - A tool for validating an ExecuTorch model. See doc.
│   ├── etdump - ETDump, a format for saving profiling and debugging data from the runtime. See doc.
│   ├── etrecord - ETRecord, the AOT debug artifact for ExecuTorch. See doc.
│   ├── inspector - Python API to inspect ETDumps and ETRecords. See doc.
│   └── visualization - Visualization tools for representing model structure and performance metrics.
├── docs - Static docs tooling and documentation source files.
├── examples - Examples of various user flows, such as model export, delegates, and runtime execution.
├── exir - Ahead-of-time library: model capture and lowering APIs. EXport Intermediate Representation (EXIR) is a format for representing the result of torch.export. This directory contains utilities and passes for lowering EXIR graphs into different dialects, making them eventually suitable to run on target hardware.
│   ├── _serialize - Serialize final export artifact.
│   ├── backend - Backend delegate ahead-of-time APIs.
│   ├── capture - Program capture.
│   ├── dialects - Op sets for various dialects in the export process. Please refer to the EXIR spec and the backend dialect doc for more details.
│   ├── emit - Conversion from ExportedProgram to ExecuTorch execution instructions.
│   ├── operator - Operator node manipulation utilities.
│   ├── passes - Built-in compiler passes.
│   ├── program - Export artifacts.
│   ├── serde - Graph module serialization/deserialization.
│   └── verification - IR verification.
├── extension - Extensions built on top of the runtime.
│   ├── android - ExecuTorch wrappers for Android apps. Please refer to the Android documentation and Javadoc for more information.
│   ├── apple - ExecuTorch wrappers for iOS apps. Please refer to the iOS documentation on how to integrate into Apple platforms for more information.
│   ├── aten_util - Converts to and from PyTorch ATen types.
│   ├── data_loader - 1st-party data loader implementations.
│   ├── evalue_util - Helpers for working with EValue objects.
│   ├── gguf_util - Tools to convert from the GGUF format.
│   ├── kernel_util - Helpers for registering kernels.
│   ├── llm - Library to run LLMs on ExecuTorch, including common optimization passes and runtime C++ components. Please refer to the LLM documentation for more information.
│   ├── memory_allocator - 1st-party memory allocator implementations.
│   ├── module - A simplified C++ wrapper for the runtime; an abstraction that deserializes and executes an ExecuTorch artifact (.pte file). Refer to the module documentation for more information.
│   ├── parallel - C++ threadpool integration.
│   ├── pybindings - Python bindings that power the runtime Python API for ExecuTorch.
│   ├── pytree - C++ and Python flattening and unflattening library for pytrees.
│   ├── runner_util - Helpers for writing C++ PTE-execution tools.
│   ├── tensor - Tensor maker and TensorPtr; details in this documentation. For how to use TensorPtr and Module, please refer to the "Using ExecuTorch with C++" doc.
│   ├── testing_util - Helpers for writing C++ tests.
│   ├── threadpool - Threadpool.
│   └── training - Experimental libraries for on-device training.
├── kernels - 1st-party kernel implementations.
│   ├── aten - ATen kernel implementations.
│   ├── optimized - Optimized kernel implementations.
│   ├── portable - Reference implementations of ATen operators.
│   ├── prim_ops - Special ops used in the ExecuTorch runtime for control flow and symbolic primitives.
│   └── quantized - Quantized kernel implementations.
├── profiler - Utilities for profiling runtime execution.
├── runtime - Core C++ runtime. These components are used to execute the ExecuTorch program. Please refer to the runtime documentation for more information.
│   ├── backend - Backend delegate runtime APIs.
│   ├── core - Core structures used across all levels of the runtime: basic components such as Tensor, EValue, Error, and Result.
│   ├── executor - Model loading, initialization, and execution: runtime components that execute the ExecuTorch program, such as Program and Method. Refer to the runtime API documentation for more information.
│   ├── kernel - Kernel registration and management.
│   └── platform - Layer between architecture-specific code and portable C++.
├── schema - ExecuTorch PTE file format flatbuffer schemas.
├── scripts - Utility scripts for building libraries, size management, dependency management, etc.
├── shim_et - Compatibility layer between OSS and internal builds.
├── test - Broad-scoped end-to-end tests.
├── third-party - Third-party dependencies.
├── tools - Tools for building ExecuTorch from source with different build systems (CMake, Buck).
└── util - Various helpers and scripts.
```

## Contributing workflow

We actively welcome your pull requests (PRs). If you're completely new to open-source projects, GitHub, or ExecuTorch, please see our [New Contributor Guide](docs/source/new-contributor-guide.md) for a step-by-step walkthrough on making your first contribution. Otherwise, read on.

1. [Claim an issue](#claiming-issues), if present, before starting work. If an issue doesn't cover the work you plan to do, consider creating one to provide context about it, and to build consensus about the scope and solution.
1. Create your new branch from `main` in your forked repo, with a name describing the work you're completing; e.g., `add-feature-x`.
1. If you've added code that should be tested, add tests. Ensure all tests pass. See the [testing section](#testing) for more information.
1. If you've changed APIs or added a new tool or feature, [update the documentation](#updating-documentation).
1. If you added an experimental API or deprecated an existing API, follow the [API Life Cycle and Deprecation Policy](docs/source/api-life-cycle.md).
1. Make sure your code follows the [style guides](#coding-style) and passes the [lint checks](#lintrunner).
1. If you haven't already, complete the [Contributor License Agreement ("CLA")](#contributor-license-agreement-cla).
1. Create a pull request in the `pytorch/executorch` GitHub repo using the [instructions below](#pull-requests).

## Issues

### Creating Issues

We use GitHub issues to track public bugs and feature requests. Ensure that the issue title is clear and descriptive, and that the description contains sufficient information to reproduce the issue.

Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe disclosure of security bugs. In those cases, please go through the process outlined on that page and do not file a public issue.

### Issue Labels

#### Module/Partner Labels

[Labels beginning with `module:`](https://github.com/pytorch/executorch/labels?q=%22module%3A+%22) indicate the area that the issue relates to. The ExecuTorch oncall will typically add this label.

[Labels beginning with `partner:`](https://github.com/pytorch/executorch/labels?q=%22partner%3A+%22) indicate the ExecuTorch partner who owns the issue. The ExecuTorch oncall will typically add this label.

#### Lifecycle Labels

The ExecuTorch oncall will triage new issues. If an issue requires more information from its author, the oncall will add the `need-user-input` label and wait for the author to respond.

Once the issue contains enough information, the oncall will:
- Ensure that the title is descriptive
- Add one of the labels:
  - `bug`: The issue describes an unexpected problem
  - `feature`: The issue describes a request for new functionality
  - `rfc`: The issue describes a proposed change to functionality
- Add one `module:` label or one `partner:` label, as described above
- Add the `triaged` label

After this point, the oncall has finished the triage process, and the module owner or partner is responsible for resolving the issue. (See https://github.com/pytorch/executorch/issues/7679 for the mapping of labels to owners.)

### Claiming Issues

We'd love your help closing out [open issues](https://github.com/pytorch/executorch/issues?q=sort%3Aupdated-desc+is%3Aissue+is%3Aopen) in the GitHub repo.

1. Find an issue with the [`actionable`](https://github.com/pytorch/executorch/issues?q=sort%3Aupdated-desc+is%3Aissue+is%3Aopen+label%3Aactionable) or [`good first issue`](https://github.com/pytorch/executorch/issues?q=sort%3Aupdated-desc+is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) label that is not currently assigned to anyone.
   - If you'd like to work on an issue that is assigned but hasn't been updated in a while, discuss a hand-off with the current assignee in the issue comments.
   - If you'd like to work on an issue that isn't marked `actionable`, please comment on the issue to ask about its status and wait for a response.
1. Set yourself as the assignee of the issue.
1. If you decide not to finish the issue, update it with information to help the next person, then remove yourself from the assignee list.
1. When creating pull requests (PRs), mention the issue number like `#1234` in the PR description details (the first comment in the PR conversation thread).
1. When the final PR has merged and resolves the issue, close the issue with the button at the bottom of the issue's page.

## Coding Style

### lintrunner

We use [`lintrunner`](https://pypi.org/project/lintrunner/) to help make sure the code follows our standards. Set it up with:

```
./install_requirements.sh # (automatically run by install_executorch.sh)
lintrunner init
```

Then run `lintrunner` from the root of the repo to see its suggestions, or run `lintrunner -a` to automatically apply the suggestions.

### Git Hooks

A pre-commit hook runs lintrunner automatically on every commit. Install it with:

```
git config core.hooksPath .githooks
```

This is also done automatically by `./install_executorch.sh`. If lintrunner auto-fixes files, the commit will be blocked so you can review the changes with `git diff` before re-committing.

### Python Style

ExecuTorch Python code follows the style used by the PyTorch core project.

### C++ Style

ExecuTorch code uses the [Google C++ Style](https://google.github.io/styleguide/cppguide.html), with modifications.

Rationale: Google style is close to the C++ style used by PyTorch core, although PyTorch core does not explicitly document its C++ style. Google style is well documented and has exceptional tooling support.

**Modifications** to the Google C++ style, to make it closer to the code in PyTorch core:
- Function and method names should use `lower_snake_case()`. This follows the convention that PyTorch core inherited from its namesake Python, and is the biggest modification to the Google C++ style.
- File names should use `lower_snake_case.cpp` (not `.cc`, and not `PascalCase.cpp`). This follows the most common pattern in PyTorch core.
- Headers should use `#pragma once` instead of manual include guards. This follows the most common pattern in PyTorch core.
- All includes should use `<angle brackets>`, not `"double quotes"`.
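Taken together, the C++ style modifications above might look like the following sketch. The file name, function, and namespace below are hypothetical examples chosen for illustration, not real ExecuTorch APIs:

```cpp
// Hypothetical header: extension/example_util/clamp_index.h
// File name uses lower_snake_case, per the modifications above.
#pragma once  // instead of a manual include guard

#include <cstdint>  // includes use <angle brackets>, not "double quotes"

namespace executorch {
namespace extension {

// Function names use lower_snake_case(), following PyTorch core.
// Clamps an index into the valid range [0, size).
inline int64_t clamp_index(int64_t index, int64_t size) {
  if (index < 0) {
    return 0;
  }
  return index < size ? index : size - 1;
}

} // namespace extension
} // namespace executorch
```

When in doubt about anything not covered here, follow the Google C++ Style Guide linked above and match the surrounding code.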