Commit 8884c40 (parent: 408141c)

docs: add migration tables for NumPy and PyTorch

PR-URL: #1004
Reviewed-by: Athan Reines <kgryte@gmail.com>
Reviewed-by: Ralf Gommers <ralf.gommers@gmail.com>

1 file changed: 98 additions & 0 deletions

spec/draft/migration_guide.md
@@ -237,3 +237,101 @@ offers a set of useful utility functions, such as:
For now, the migration from a specific library (e.g., NumPy) to a standard-compatible setup requires manual intervention for each failing API call, but, in the future, we hope to provide tools for automating the migration process.

## Migration patterns for selected libraries

Below, you can find a non-exhaustive list of API calls that are present in NumPy and PyTorch but are not supported by the Array API Standard. For each of them, we provide the recommended alternative from the standard, along with some notes on how to use it.
### NumPy

Please note that `xp` is a convention for the array-namespace variable; all the alternatives provided in the tables below can be used with the original `np` name as well.

```py
import numpy as np

xp = np
```
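To make the convention concrete, here is a minimal sketch of code written against the `xp` alias. It deliberately uses only names that exist both in the Array API Standard and in every NumPy version (`asarray`, `abs`, `matmul`), so it runs with plain NumPy; with NumPy >= 2.0, standard-only names such as `xp.permute_dims` and `xp.concat` are also available directly on `np`.

```python
import numpy as np

# "xp" is just an alias; any standard-compliant namespace could be bound here.
xp = np

x = xp.asarray([[1.0, 2.0], [3.0, 4.0]])

# np.absolute(x) becomes xp.abs(x) under the standard (abs exists in both).
y = xp.abs(xp.asarray([-1.0, 2.0, -3.0]))

# matmul is spelled the same way in NumPy and in the standard.
z = xp.matmul(x, x)
```

Code written this way keeps working if `xp` is later rebound to another standard-compliant namespace.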
| NumPy API | Array API | Notes |
| --- | --- | --- |
| `np.transpose(x, axes)` | `xp.permute_dims(x, axes)` | `axes=None` is not supported |
| `np.concatenate(...)` | `xp.concat(...)` | |
| `np.power(x, y)` | `xp.pow(x, y)` | |
| `np.absolute(x)` | `xp.abs(x)` | |
| `np.invert(x)` | `xp.bitwise_invert(x)` | |
| `np.left_shift(x, n)` | `xp.bitwise_left_shift(x, n)` | |
| `np.right_shift(x, n)` | `xp.bitwise_right_shift(x, n)` | |
| `np.arcsin(x)` | `xp.asin(x)` | |
| `np.arccos(x)` | `xp.acos(x)` | |
| `np.arctan(x)` | `xp.atan(x)` | |
| `np.arctan2(y, x)` | `xp.atan2(y, x)` | |
| `np.arcsinh(x)` | `xp.asinh(x)` | |
| `np.arccosh(x)` | `xp.acosh(x)` | |
| `np.arctanh(x)` | `xp.atanh(x)` | |
| `np.bool_` | `xp.bool` | |
| `np.array(x)` | `xp.asarray(x)` | |
| `np.ascontiguousarray(x)` | `xp.asarray(x, copy=True)` | Use `copy=True` to ensure a contiguous array |
| `x.astype(dtype)` | `xp.astype(x, dtype)` | |
| `np.unique(x)` | `xp.unique_values(x)` | |
| `np.unique(x, return_counts=True)` | `xp.unique_counts(x)` | |
| `np.unique(x, return_inverse=True)` | `xp.unique_inverse(x)` | |
| `np.unique(x, return_index=True, return_inverse=True, return_counts=True)` | `xp.unique_all(x)` | |
| `np.linalg.norm(x)` | `xp.linalg.vector_norm(x)` or `xp.linalg.matrix_norm(x)` | |
| `np.dot(a, b)` | `xp.matmul(a, b)` or `xp.vecdot(a, b)` or `xp.tensordot(a, b, axes=1)` | |
| `np.vstack((a, b))` | `xp.concat((a, b), axis=0)` | |
| `np.row_stack((a, b))` | `xp.concat((a, b), axis=0)` | |
| `np.hstack((a, b))` | `xp.concat((a, b), axis=1)` | For 1-D inputs, `np.hstack` concatenates along `axis=0` |
| `np.column_stack((a, b))` | `xp.concat(...)` | Use with `xp.reshape` to ensure the inputs are 2-D |
| `np.dstack((a, b))` | `xp.concat((a, b), axis=2)` | |
| `np.trace(x)` | `xp.linalg.trace(x)` | |
| `np.diagonal(x)` | `xp.linalg.diagonal(x)` | |
| `np.cross(a, b)` | `xp.linalg.cross(a, b)` | |
| `np.outer(a, b)` | `xp.linalg.outer(a, b)` | |
| `np.matmul(a, b)` | `xp.linalg.matmul(a, b)` or `xp.matmul(a, b)` | |
| `np.ravel(x)` | `xp.reshape(x, (-1,))` | |
| `x.flatten()` | `xp.reshape(x, (-1,))` | |
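The stacking and flattening rows can be illustrated with a short sketch. It uses the pre-standard spelling `np.concatenate` so it runs on any NumPy version; under the standard, each call would read `xp.concat` instead, as the table shows.

```python
import numpy as np

a = np.asarray([[1, 2], [3, 4]])
b = np.asarray([[5, 6], [7, 8]])

# np.vstack((a, b))  ->  xp.concat((a, b), axis=0)
v = np.concatenate((a, b), axis=0)   # shape (4, 2)

# np.hstack((a, b))  ->  xp.concat((a, b), axis=1), for 2-D inputs
h = np.concatenate((a, b), axis=1)   # shape (2, 4)

# np.ravel(x) / x.flatten()  ->  xp.reshape(x, (-1,))
flat = np.reshape(a, (-1,))          # shape (4,)
```

Note that the `axis=1` form assumes 2-D inputs; for 1-D arrays, `hstack` is a plain `axis=0` concatenation.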
### PyTorch

For PyTorch, we use `array-api-compat` for the transition, so it is a required dependency for the migration process. You can import it as follows:

```py
import array_api_compat.torch as torch

xp = torch
```
| PyTorch API | Array API | Notes |
| --- | --- | --- |
| `torch.transpose(x, dim0, dim1)` | `xp.permute_dims(x, axes)` | `torch.transpose` swaps two dimensions; express the swap as a full `axes` permutation |
| `torch.permute(x, dims)` | `xp.permute_dims(x, axes)` | |
| `torch.cat(...)` | `xp.concat(...)` | |
| `torch.absolute(x)` | `xp.abs(x)` | |
| `torch.clamp(x, min, max)` | `xp.clip(x, min, max)` | |
| `torch.bitwise_not(x)` | `xp.bitwise_invert(x)` | |
| `torch.arcsin(x)` | `xp.asin(x)` | |
| `torch.arccos(x)` | `xp.acos(x)` | |
| `torch.arctan(x)` | `xp.atan(x)` | |
| `torch.arctan2(y, x)` | `xp.atan2(y, x)` | |
| `torch.arcsinh(x)` | `xp.asinh(x)` | |
| `torch.arccosh(x)` | `xp.acosh(x)` | |
| `torch.arctanh(x)` | `xp.atanh(x)` | |
| `torch.tensor(x)` | `xp.asarray(x)` | |
| `x.astype(dtype)` | `xp.astype(x, dtype)` | |
| `torch.unique(x)` | `xp.unique_values(x)` | |
| `torch.unique(x, return_counts=True)` | `xp.unique_counts(x)` | |
| `torch.unique(x, return_inverse=True)` | `xp.unique_inverse(x)` | |
| `torch.unique(x, return_inverse=True, return_counts=True)` | `xp.unique_all(x)` | `torch.unique` has no `return_index`; `xp.unique_all` additionally returns first-occurrence indices |
| `torch.linalg.norm(x)` | `xp.linalg.vector_norm(x)` or `xp.linalg.matrix_norm(x)` | |
| `torch.dot(a, b)` | `xp.matmul(a, b)` or `xp.vecdot(a, b)` or `xp.tensordot(a, b, axes=1)` | `torch.dot` accepts only 1-D tensors |
| `torch.vstack((a, b))` | `xp.concat((a, b), axis=0)` | |
| `torch.hstack((a, b))` | `xp.concat((a, b), axis=1)` | |
| `torch.dstack((a, b))` | `xp.concat((a, b), axis=2)` | |
| `torch.trace(x)` | `xp.linalg.trace(x)` | |
| `torch.diagonal(x)` | `xp.linalg.diagonal(x)` | |
| `torch.cross(a, b)` | `xp.linalg.cross(a, b)` | |
| `torch.outer(a, b)` | `xp.linalg.outer(a, b)` | |
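The payoff of both tables is that a single function body can serve NumPy and PyTorch alike. Below is a sketch of the pattern with a hypothetical `normalize` helper (not part of either library). It is shown running under plain NumPy, and it uses only names (`asarray`, `sum`, `sqrt`) that exist both in the standard and in every NumPy version; the same body would run unchanged with `xp = array_api_compat.torch`, assuming `array-api-compat` and PyTorch are installed.

```python
import numpy as np


def normalize(x, xp):
    """Scale x to unit Euclidean norm, using only array-namespace calls.

    xp can be numpy or array_api_compat.torch; the names used here
    exist in both namespaces.
    """
    x = xp.asarray(x)
    norm = xp.sqrt(xp.sum(x * x))
    return x / norm


v = normalize([3.0, 4.0], xp=np)  # Euclidean norm of (3, 4) is 5
```

The helper never imports a specific array library; the caller decides which namespace to pass, which is exactly the decoupling the migration aims for.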
