Describe the bug
I am using the tensorly library to compute a PARAFAC decomposition of a tensor. It succeeds on CPU, but fails on MPS with a PyTorch internal assert error. I've created a parallel issue in the PyTorch repo here.

Steps or Code to Reproduce
import torch
import tensorly as tl
tl.set_backend("pytorch")
from tensorly.decomposition import parafac
x = torch.ones(12, 3, 12).to("mps")  # also fails with .zeros and if multiplied by a scalar
print(x.shape)
weights, factors = parafac(x.detach(), 12, init="random", tol=1e-6)
a, m, b = factors
Expected behavior
Successful termination

Actual result
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[166], line 6
3 # x.unsqueeze_(1).repeat(1, 3, 1).contiguous()
4 print(x.shape)
----> 6 weights, factors = parafac(x.detach(), 12, init="random", tol=1e-6)
8 a, m, b = factors
File ~/Documents/Code/Research/collaborative-attention/.pixi/envs/default/lib/python3.12/site-packages/tensorly/decomposition/_cp.py:426, in parafac(tensor, rank, n_iter_max, init, svd, normalize_factors, orthogonalise, tol, random_state, verbose, return_errors, sparsity, l2_reg, mask, cvg_criterion, fixed_modes, svd_mask_repeats, linesearch, callback)
418 pseudo_inverse = (
419 tl.reshape(weights, (-1, 1))
420 * pseudo_inverse
421 * tl.reshape(weights, (1, -1))
422 )
423 mttkrp = unfolding_dot_khatri_rao(tensor, (weights, factors), mode)
425 factor = tl.transpose(
--> 426 tl.solve(tl.conj(tl.transpose(pseudo_inverse)), tl.transpose(mttkrp))
427 )
428 factors[mode] = factor
430 # Will we be performing a line search iteration
File ~/Documents/Code/Research/collaborative-attention/.pixi/envs/default/lib/python3.12/site-packages/tensorly/backend/__init__.py:202, in BackendManager.dispatch_backend_method.<locals>.wrapped_backend_method(*args, **kwargs)
198 def wrapped_backend_method(*args, **kwargs):
199 """A dynamically dispatched method
200
...
--> 202 return getattr(
203 cls._THREAD_LOCAL_DATA.__dict__.get("backend", cls._backend), name
204 )(*args, **kwargs)
RuntimeError: false INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1614, please report a bug to PyTorch. torch.linalg.solve: Argument 2 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
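The traceback bottoms out in `torch.linalg.solve`, so the failure may reproduce without tensorly at all, which would be useful for the parallel PyTorch issue. A minimal isolation sketch (assumption: the bug is device-specific rather than tensorly-specific; `try_solve` is a hypothetical helper, not part of either library):

```python
import torch

def try_solve(device: str = "cpu") -> torch.Tensor:
    # 12x12 system mirroring the pseudo_inverse / mttkrp shapes in the traceback
    A = torch.eye(12, device=device)
    B = torch.ones(12, 12, device=device)
    return torch.linalg.solve(A, B)

X = try_solve("cpu")   # succeeds on CPU
# try_solve("mps")     # hypothesised to hit the same INTERNAL ASSERT on MPS
```

If the MPS call above fails the same way, the bug can be triaged entirely on the PyTorch side.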
Versions
>>> import sys; print("Python", sys.version)
Python 3.12.11 | packaged by conda-forge | (main, Jun 4 2025, 14:38:53) [Clang 18.1.8 ]
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 2.3.3
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.16.2
>>> import tensorly; print("TensorLy", tensorly.__version__)
TensorLy 0.9.0
>>> import torch; print("PyTorch", torch.__version__)
PyTorch 2.8.0