@@ -9,7 +9,7 @@ erlang_python automatically detects the optimal execution mode based on your Pyt
 ```erlang
 %% Check current execution mode
 py:execution_mode().
-%% => free_threaded | subinterp | multi_executor
+%% => free_threaded | worker | owngil | multi_executor
 
 %% Check number of executor threads
 py:num_executors().
@@ -21,9 +21,9 @@ py:num_executors().
 | Mode | Python Version | Parallelism | GIL Behavior | Best For |
 |------|----------------|-------------|--------------|----------|
 | **free_threaded** | 3.13+ (nogil build) | True N-way | None | Maximum throughput |
-| **owngil** | 3.12+ | True N-way | Per-interpreter (dedicated thread) | CPU-bound parallel |
-| **subinterp** | 3.12+ | None (shared GIL) | Shared GIL (pool) | High call frequency |
-| **multi_executor** | Any | GIL contention | Shared, round-robin | I/O-bound, compatibility |
+| **owngil** | 3.14+ | True N-way | Per-interpreter (dedicated thread) | CPU-bound parallel |
+| **worker** | 3.12+ | GIL contention | Shared GIL | Default, compatibility |
+| **multi_executor** | < 3.12 | GIL contention | Shared, round-robin | I/O-bound, legacy |
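+
+A minimal sketch of branching on the detected mode; only `py:execution_mode/0` shown earlier is assumed, and what each branch returns here is purely illustrative:
+
+```erlang
+%% Dispatch on the detected execution mode. The atoms match the table
+%% above; the branch results are illustrative, not part of the API.
+case py:execution_mode() of
+    free_threaded  -> parallel;     %% true N-way, shared state
+    owngil         -> parallel;     %% true N-way, isolated interpreters
+    worker         -> serialized;   %% shared GIL, maximum compatibility
+    multi_executor -> serialized    %% shared GIL, round-robin executors
+end.
+```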
 
 ### Free-Threaded Mode (Python 3.13+)
 
@@ -101,13 +101,13 @@ tensorflow) always run on the same OS thread, preventing segfaults and state cor
 
 ### Mode Comparison
 
-| Aspect | Free-Threaded | Subinterpreter | Multi-Executor |
-|--------|---------------|----------------|----------------|
-| **Parallelism** | True N-way | True N-way | GIL contention |
-| **State Isolation** | Shared | Isolated | Shared |
-| **Memory Overhead** | Low | Higher (per-interp) | Low |
-| **Module Compatibility** | Limited | Most modules | All modules |
-| **Python Version** | 3.13+ (nogil) | 3.12+ | Any |
+| Aspect | Free-Threaded | OWN_GIL | Worker | Multi-Executor |
+|--------|---------------|---------|--------|----------------|
+| **Parallelism** | True N-way | True N-way | GIL contention | GIL contention |
+| **State Isolation** | Shared | Isolated | Shared | Shared |
+| **Memory Overhead** | Low | Higher (per-interp) | Low | Low |
+| **Module Compatibility** | Limited | Most modules | All modules | All modules |
+| **Python Version** | 3.13+ (nogil) | 3.14+ | 3.12+ | < 3.12 |
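+
+In the modes with true N-way parallelism, work can be sized to the executor pool. A sketch using only `py:num_executors/0`; the `chunk/2` helper below is a plain Erlang function written for illustration, not part of the erlang_python API:
+
+```erlang
+%% Split Jobs into one list per executor thread, round-robin.
+chunk(Jobs, N) ->
+    Tagged = lists:zip(lists:seq(0, length(Jobs) - 1), Jobs),
+    [[J || {I, J} <- Tagged, I rem N =:= G] || G <- lists:seq(0, N - 1)].
+
+%% Usage: chunk(Jobs, max(py:num_executors(), 1)).
+```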
 
 ### When to Use Each Mode
 
@@ -117,41 +117,48 @@ tensorflow) always run on the same OS thread, preventing segfaults and state cor
 - You're running CPU-bound workloads
 - Memory efficiency is important
 
-**Use OWN_GIL (Python 3.12+) when:**
+**Use OWN_GIL (Python 3.14+) when:**
 - You need true CPU parallelism across Python contexts
 - Running long computations (ML inference, data processing)
 - Workload benefits from multiple independent Python interpreters
 - You can tolerate higher per-call latency for better throughput
 
-**Use Subinterpreters/Shared-GIL (Python 3.12+) when:**
+**Use Worker (Python 3.12+, default) when:**
 - You need high call frequency with low latency
-- Individual operations are short
-- You want namespace isolation without thread overhead
-- Memory efficiency is important (shared interpreter pool)
+- Maximum module compatibility is required
+- Shared state between contexts is needed
+- Running libraries that don't support subinterpreters (torch, etc.)
 
 **Use Multi-Executor (Python < 3.12) when:**
 - Running on older Python versions
 - Your workload is I/O-bound (GIL released during I/O)
-- You need compatibility with all Python modules
-- Shared state between workers is required
+- Thread affinity for numpy/torch is needed
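+
+One way to act on this guidance is to fail fast at startup when a deployment expects true parallelism. A sketch using only `py:execution_mode/0` (the function name `assert_parallel/0` is illustrative):
+
+```erlang
+%% Crash early if the runtime fell back to a shared-GIL mode.
+assert_parallel() ->
+    case py:execution_mode() of
+        free_threaded -> ok;
+        owngil        -> ok;
+        Mode          -> error({no_true_parallelism, Mode})
+    end.
+```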
 
 ### Pros and Cons
 
-**Subinterpreter Mode Pros:**
+**Worker Mode Pros:**
+- Maximum module compatibility (all C extensions work)
+- Low memory overhead (single interpreter)
+- Shared state between contexts
+- Default mode for Python 3.12+
+
+**Worker Mode Cons:**
+- GIL contention limits parallelism
+- No isolation between contexts
+
+**OWN_GIL Mode Pros:**
 - True parallelism without GIL contention
 - Complete isolation (crashes don't affect other contexts)
 - Each context has clean namespace (no state bleed)
-- 25-30% faster cast operations vs worker mode
 
-**Subinterpreter Mode Cons:**
+**OWN_GIL Mode Cons:**
 - Higher memory usage (each interpreter loads modules separately)
 - Some C extensions don't support subinterpreters
-- No shared state between contexts (use Shared State API)
-- asyncio event loop integration requires main interpreter
+- Requires Python 3.14+
 
 **Free-Threaded Mode Pros:**
 - True parallelism with shared state
-- Lower memory overhead than subinterpreters
+- Lower memory overhead than OWN_GIL
 - Simplest mental model (like regular threading)
 
 **Free-Threaded Mode Cons:**