Runtime Environment: Torch not compiled with CUDA enabled


I built an interactive container using

SUBMISSION_TRACK=fincrime CPU_or_GPU=gpu make interact-container

Then, in the container, when I run the following script:

import torch
X = torch.tensor([1]).to("cuda")

I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/envs/condaenv/lib/python3.9/site-packages/torch/cuda/__init__.py", line 211, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

I exported the condaenv to a .yml file and built a new environment directly on my local machine, and the same error occurs there. My machine does have a GPU with CUDA 11.7, and the GPU has worked previously. How can I get CUDA/the GPU working in the container? Thank you!
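For anyone hitting the same error: this message usually means the installed PyTorch wheel is the CPU-only build, not that the GPU or driver is broken. A quick way to tell the two apart is to check `torch.version.cuda` (which is `None` for CPU-only builds) alongside `torch.cuda.is_available()`. A minimal diagnostic sketch:

```python
# Diagnostic sketch: distinguish a CPU-only torch build from a
# missing/unreachable GPU. Guarded so it also runs where torch is absent.
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch is not installed in this environment")
else:
    import torch

    # None here means the wheel was compiled without CUDA support,
    # so no driver or container setting will make .to("cuda") work.
    print("torch CUDA build version:", torch.version.cuda)

    # True only if the build has CUDA support AND a usable GPU is visible.
    print("CUDA available at runtime:", torch.cuda.is_available())
```

If `torch.version.cuda` prints `None`, reinstalling a CUDA-enabled build of PyTorch (rather than changing container flags) is the fix.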

Hi @yizhewan,

Thanks for reporting this. We believe this issue should now be fixed in the latest version of the code in the repository and in the latest version of the container image on the registry. (Fixed by b059c7e)


I confirm the problem is solved. Thank you!