How to Install PyTorch with ROCm Acceleration on Ubuntu 24.04

Updated on December 2, 2024

Introduction

PyTorch is an open-source framework for building and training machine learning models, especially deep learning. ROCm is AMD's platform for GPU computing, enabling high-performance computing and machine learning on AMD GPUs. A ROCm-enabled PyTorch container is a pre-built, portable environment that includes PyTorch configured to run efficiently on AMD GPUs. By using this container, you can leverage the power of AMD GPUs for machine learning without worrying about manually setting up or configuring dependencies. It simplifies the process of running PyTorch applications on GPU hardware.

In this article, you will download and run a ROCm-supported PyTorch container, and install PyTorch using Pip for the ROCm compute platform.

Use ROCm Supported PyTorch Containers

In this section, you will download and run a ROCm-supported PyTorch container and verify GPU availability from inside the container.

Before proceeding with the steps in this section, ensure you have installed Docker with ROCm GPU support on Ubuntu 24.04.

  1. Pull the ROCm-supported container for PyTorch.

    console
    $ docker pull rocm/pytorch:latest
    
  2. Run a temporary Docker container.

    console
    $ docker run --rm -it --device=/dev/kfd --device=/dev/dri --security-opt seccomp=unconfined --shm-size 8G rocm/pytorch:latest
    

    The above command runs a temporary container with access to the GPU devices (/dev/kfd and /dev/dri) that ROCm-supported PyTorch workloads require. The --security-opt seccomp=unconfined flag relaxes syscall filtering for ROCm, and --shm-size 8G increases the shared memory available for data loading. Refer to the AMD ROCm documentation to learn more about these devices.

  3. Verify GPU availability from the container.

    console
    $ rocm-smi
    $ python3 -c 'import torch; print(torch.cuda.is_available())'
    

    The rocm-smi output should list all available AMD GPU devices along with their specifications, and the Python command should print True if PyTorch can access the GPU.
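
    The one-line check above can also be written as a short Python script that reports the detected device name. This is a minimal sketch, assuming PyTorch is installed in the environment; on ROCm builds of PyTorch, the torch.cuda API reports AMD GPUs through the HIP layer, and the script falls back gracefully when no GPU (or no PyTorch) is present.

    ```python
    import importlib.util

    def describe_gpu():
        """Return a short status string describing GPU availability to PyTorch."""
        if importlib.util.find_spec("torch") is None:
            return "PyTorch is not installed"
        import torch
        if not torch.cuda.is_available():
            return "No GPU visible to PyTorch"
        # On ROCm builds, torch.cuda.* maps to AMD GPUs via the HIP layer.
        count = torch.cuda.device_count()
        name = torch.cuda.get_device_name(0)
        return f"{count} GPU(s) visible; device 0: {name}"

    print(describe_gpu())
    ```

    Run the script inside the container to confirm that PyTorch sees the same devices that rocm-smi reports.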

  4. Exit and destroy the temporary container.

    console
    $ exit
    

Install PyTorch on Host using Pip

In this section, you will install PyTorch on the host machine using Pip and check for GPU availability.

  1. Navigate to the PyTorch Start Locally page.

  2. Make the following selections:

    • PyTorch Build: Stable
    • Your OS: Linux
    • Package: Pip
    • Language: Python
    • Compute Platform: ROCm

    Copy the command displayed on the webpage after making the selections and execute the command on the terminal.

    Note
    If you encounter a timeout error while downloading the PyTorch package, add the --timeout flag with a higher value (in seconds) to the installation command. Alternatively, use the --no-cache-dir flag to ignore any incomplete or corrupted cached downloads and fetch the packages fresh.
  3. Verify GPU availability.

    console
    $ python3 -c 'import torch; print(torch.cuda.is_available())'
    

    The above command should print True if the GPU is available.
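
    Beyond the availability check, a quick smoke test is to run a small matrix multiplication, placing the tensors on the GPU when PyTorch sees one. This is a sketch, assuming PyTorch is installed on the host; it falls back to the CPU (or skips entirely) so it can run anywhere.

    ```python
    import importlib.util

    def smoke_test():
        """Multiply two small random matrices on the GPU if one is available.

        Returns (device, result_shape), or None if PyTorch is not installed.
        """
        if importlib.util.find_spec("torch") is None:
            return None
        import torch
        # On ROCm builds, "cuda" selects the AMD GPU through the HIP layer.
        device = "cuda" if torch.cuda.is_available() else "cpu"
        a = torch.randn(256, 256, device=device)
        b = torch.randn(256, 256, device=device)
        c = a @ b
        return device, tuple(c.shape)

    print(smoke_test())
    ```

    If the installation succeeded and the GPU is visible, the reported device is cuda; otherwise the multiplication still runs on the CPU.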

Conclusion

In this article, you downloaded and ran a ROCm-supported PyTorch container with access to AMD GPU devices, and installed PyTorch on the host using Pip for the ROCm compute platform.
