How to Install TensorFlow with ROCm Acceleration on Ubuntu 24.04
Introduction
TensorFlow is an open-source framework widely used for building and training machine learning models, including deep learning. ROCm is AMD's platform for GPU computing, designed to deliver high-performance computing and machine learning capabilities on AMD GPUs. A ROCm-enabled TensorFlow container is a pre-configured, portable environment that includes TensorFlow optimized for running efficiently on AMD GPUs. Using this container allows you to utilize the power of AMD GPUs for machine learning tasks without the complexity of manually setting up or managing dependencies. It streamlines the process of deploying TensorFlow applications on GPU hardware.
In this article, you will download and run a ROCm-supported TensorFlow container, and install TensorFlow using pip for the ROCm compute platform.
Use ROCm Supported TensorFlow Containers
In this section, you will download and run a ROCm-supported TensorFlow container and verify GPU availability from within it.
Before proceeding with the steps in this section, ensure you have installed Docker with ROCm GPU support on Ubuntu 24.04.
Pull the ROCm-supported container for TensorFlow.

```console
$ docker pull rocm/tensorflow:latest
```
Run a temporary Docker container.

```console
$ docker run --rm -it --device=/dev/kfd --device=/dev/dri --security-opt seccomp=unconfined --shm-size 8G rocm/tensorflow:latest
```
The above command runs a temporary container with access to the GPU devices (`/dev/kfd` and `/dev/dri`) required for ROCm-supported TensorFlow workloads.

Verify GPU availability from the temporary container.
```console
$ rocm-smi
$ python3 -c 'import tensorflow; print(tensorflow.config.list_physical_devices())'
```
The above command's output should display all the devices along with their specifications.
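As a rough illustration of what to look for in that output, the snippet below filters GPU entries from a device list shaped like the one `tensorflow.config.list_physical_devices()` returns. The sample data is a hypothetical stand-in so the sketch runs without TensorFlow installed; inside the container, you would call the function itself instead.

```python
from collections import namedtuple

# Stand-in for TensorFlow's PhysicalDevice, so this sketch runs without
# TensorFlow installed; the name/device_type fields mirror the objects
# returned by tensorflow.config.list_physical_devices().
PhysicalDevice = namedtuple("PhysicalDevice", ["name", "device_type"])

# Hypothetical sample: one CPU plus one AMD GPU exposed through ROCm.
devices = [
    PhysicalDevice(name="/physical_device:CPU:0", device_type="CPU"),
    PhysicalDevice(name="/physical_device:GPU:0", device_type="GPU"),
]

# A working ROCm setup should show at least one GPU entry.
gpus = [d for d in devices if d.device_type == "GPU"]
print(f"GPUs visible: {len(gpus)}")  # → GPUs visible: 1
```

If the GPU list comes back empty on a real system, recheck that the container was started with both `--device` flags.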
Exit and destroy the temporary container.

```console
$ exit
```
Install TensorFlow on Host using Pip
In this section, you will install TensorFlow on the host machine using pip and verify GPU availability.
Fetch the ROCm version.
```console
$ amd-smi version
```
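If you want to capture the ROCm version programmatically rather than read it off the screen, a regular expression is one way to pull it out. The sample string below is a hypothetical rendering of `amd-smi version` output; the exact wording can differ between amd-smi releases, so verify it on your system before relying on this.

```python
import re

# Hypothetical sample of `amd-smi version` output; the exact wording may
# differ between amd-smi releases.
sample_output = "AMDSMI Tool: 24.6.1 | AMDSMI Library version: 24.6.1 | ROCm version: 6.2.4"

# Pull out the dotted version number that follows "ROCm version:".
match = re.search(r"ROCm version:\s*([\d.]+)", sample_output)
rocm_version = match.group(1) if match else None
print(rocm_version)  # → 6.2.4
```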
Navigate to the official installation page and review the support matrix to find the TensorFlow versions compatible with your ROCm version. For example, if your ROCm version is `6.2.x`, the compatible TensorFlow versions are `2.14`, `2.15`, and `2.16`.
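The compatibility lookup can be sketched as a small table keyed by the ROCm `major.minor` version. The mapping below contains only the `6.2` entry used as an example in this article; treat it as illustrative and consult the official support matrix for the authoritative table.

```python
# Illustrative subset of the support matrix; only the 6.2 entry from the
# example in this article is included. Consult the official support
# matrix for the full, authoritative mapping.
SUPPORT_MATRIX = {
    "6.2": ["2.14", "2.15", "2.16"],
}

def compatible_tensorflow(rocm_version):
    """Return TensorFlow versions compatible with a ROCm version.

    The patch component is ignored: "6.2.4" and "6.2.0" both map to "6.2".
    """
    major_minor = ".".join(rocm_version.split(".")[:2])
    return SUPPORT_MATRIX.get(major_minor, [])

print(compatible_tensorflow("6.2.4"))  # → ['2.14', '2.15', '2.16']
```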
Install TensorFlow using a pre-built wheel.

```console
$ python3 -m pip install tensorflow-rocm=={TENSORFLOW_VERSION} -f https://repo.radeon.com/rocm/manylinux/rocm-rel-{ROCM_VERSION}
```
Replace `{TENSORFLOW_VERSION}` with a suitable version according to the support matrix mentioned in the previous step. Additionally, replace `{ROCM_VERSION}` with only the first two numbers of your ROCm version.

For example, if your ROCm version is `6.2.x`, your command should look like the following.

```console
$ python3 -m pip install tensorflow-rocm==2.14 -f https://repo.radeon.com/rocm/manylinux/rocm-rel-6.2
```
Verify GPU availability.
```console
$ python3 -c 'import tensorflow; print(tensorflow.config.list_physical_devices())'
```
The above command's output should display all the devices along with their specifications.
Conclusion
In this article, you downloaded and ran a ROCm-supported TensorFlow container with access to ROCm-supported devices, and installed TensorFlow on the host using pip for the ROCm compute platform.