# Portable C++ Programming

NOTE: This document covers the code that needs to build for and execute in
target hardware environments. This applies to the core execution runtime, as
well as kernel and backend implementations in this repo. These rules do not
necessarily apply to code that only runs on the development host, like authoring
or build tools.

The ExecuTorch runtime code is intended to be portable, and should build for a
wide variety of systems, from servers to mobile phones to DSPs, from POSIX to

allocation, the code may not use:
- `malloc()`, `free()`
- `new`, `delete`
- Most `stdlibc++` types; especially container types that manage their own
  memory like `string` and `vector`, or memory-management wrapper types like
  `unique_ptr` and `shared_ptr`.

And to help reduce complexity, the code may not use any external
dependencies except:
- `flatbuffers` (for `.pte` file deserialization)
- `flatcc` (for event trace serialization)
- Core PyTorch (only for ATen mode)

## Platform Abstraction Layer (PAL)

## Memory Allocation

Instead of using `malloc()` or `new`, the runtime code should allocate memory
using the `MemoryManager` (`//executorch/runtime/executor/memory_manager.h`)
provided by the client.

## File Loading

Instead of loading files directly, clients should provide buffers with the data
already loaded, or wrapped in types like `DataLoader`.

## Integer Types

value to the lean mode type, like:
```
ET_CHECK_MSG(
    input.dim() == output.dim(),
    "input.dim() %zd not equal to output.dim() %zd",
    (ssize_t)input.dim(),
    (ssize_t)output.dim());
```
In this case, `Tensor::dim()` returns `ssize_t` in lean mode, while
`at::Tensor::dim()` returns `int64_t` in ATen mode. Since they both conceptually