
How to Downgrade or Reinstall NVIDIA Drivers on Vultr Bare Metal and Passthrough GPU Instances

Updated on 17 September, 2025
Guide
Learn how to reinstall, downgrade, or configure NVIDIA GPU drivers on Vultr Bare Metal and Passthrough Instances for stable performance.

NVIDIA drivers are essential for enabling GPU acceleration on Bare Metal instances, but mismatched versions or failed updates can cause instability, performance drops, or NVML-related errors. In such cases, you may need to downgrade to a stable release or reinstall the current driver to restore proper functionality.

Note
This guide applies to both Bare Metal GPU instances and Passthrough GPU instances deployed with a GPU-enabled OS image.
  • Bare Metal: Drivers are installed directly on dedicated hardware.
  • Passthrough: Drivers are installed inside a virtual machine where a physical GPU is assigned from the host. These instances do not require Fabric Manager, because GPU initialization and communication are handled entirely by the guest driver and NVSwitch is managed at the host level.
If you are installing from a base OS, follow NVIDIA's official documentation for your specific hardware. This guide also does not apply to vGPU instances, where driver versions are managed by the host hypervisor. For more details on vGPU, see How to Manage vGPU on Vultr Cloud GPU Instances.

Follow this guide to remove, reinstall, or downgrade NVIDIA drivers on Vultr Bare Metal and Passthrough GPU instances to keep your workloads running efficiently.

Prerequisites

Before you begin, you need to:

  • Have access to a GPU-enabled Bare Metal instance or Passthrough GPU instance as a non-root user with sudo privileges.
  • Ensure that no GPU workloads are running during the driver removal or installation process.
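
If the current NVIDIA driver is still responsive, you can optionally confirm that the GPU is idle before you begin. Empty output, or only a header row, means no compute processes are using the GPU.

console
$ nvidia-smi --query-compute-apps=pid,process_name --format=csv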

Install DKMS Package

The NVIDIA driver uses the Dynamic Kernel Module Support (DKMS) framework to automatically rebuild kernel modules whenever the kernel updates. This ensures the NVIDIA driver remains functional after system upgrades.

  1. Update the package index.

    console
    $ sudo apt update
    
  2. Install the dkms package.

    console
    $ sudo apt install -y dkms
    
  3. View the dkms package version.

    console
    $ dkms --version
    

    A version number in the output verifies that DKMS is installed correctly.
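
You can also list the modules that DKMS currently manages. The output may be empty on a new system; once the NVIDIA driver is installed later in this guide, an nvidia entry appears here.

console
$ dkms status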

Remove Existing NVIDIA Drivers

Before downgrading or reinstalling, you must completely remove the existing NVIDIA drivers and related CUDA packages. This ensures there are no conflicts during the new installation.

  1. Remove CUDA, cuBLAS, and Nsight packages.

    console
    $ sudo apt-get --assume-yes --purge remove "*cublas*" "cuda*" "nsight*"
    
  2. Remove NVIDIA drivers and libraries.

    console
    $ sudo apt-get --assume-yes --purge remove "*nvidia*"
    
  3. Reboot the system to unload any remaining driver modules.

    console
    $ sudo reboot
    
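After the reboot, you can confirm that the removal was complete. Both commands below should produce little or no output if no NVIDIA kernel modules are loaded and no NVIDIA packages remain installed.

console
$ lsmod | grep nvidia
$ dpkg -l | grep -i nvidia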

Configure the Official NVIDIA Repository

To install the NVIDIA drivers, you need access to the official NVIDIA repository. Adding the repository lets you fetch specific driver versions and stay consistent with NVIDIA's distribution.

  1. Set your Ubuntu version.

    console
    $ UBUNTU_VERSION=$(lsb_release -rs | sed -e 's/\.//')
    
  2. Download the NVIDIA keyring package.

    console
    $ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/cuda-keyring_1.1-1_all.deb
    
  3. Install the keyring package.

    console
    $ sudo dpkg -i cuda-keyring_1.1-1_all.deb
    
  4. Download the repository signing key.

    console
    $ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/cuda-archive-keyring.gpg
    
  5. Move the key to the keyrings directory.

    console
    $ sudo mv cuda-archive-keyring.gpg /usr/share/keyrings/cuda-archive-keyring.gpg
    
  6. Add the CUDA repository.

    console
    $ echo "deb [signed-by=/usr/share/keyrings/cuda-archive-keyring.gpg] https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/ /" | sudo tee /etc/apt/sources.list.d/cuda-ubuntu${UBUNTU_VERSION}-x86_64.list
    
  7. Update the package index.

    console
    $ sudo apt update
    
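You can optionally list the driver branches now available from the NVIDIA repository to confirm that it was added correctly and to see which versions you can install.

console
$ apt-cache search --names-only '^nvidia-open-'
$ apt-cache search --names-only '^cuda-drivers-'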

Install Latest NVIDIA Drivers

Install the appropriate driver and toolkit packages for your GPU model. The required packages differ depending on whether you are using B200 and newer GPUs or H100/A100 GPUs that require Fabric Manager. The following steps apply equally to Bare Metal GPU instances and Passthrough GPU instances, since both provide direct access to a physical NVIDIA GPU.

B200 and Newer GPUs (no Fabric Manager required)

For B200 and newer GPUs, install the NVIDIA open drivers, CUDA toolkit, NVLink libraries, and container runtime support.

  1. Install the NVIDIA open drivers, CUDA toolkit, and NVLink libraries.

    console
    $ sudo apt install --assume-yes nvidia-open cuda-toolkit nvlink5
    
  2. Install the NVIDIA container runtime and supporting libraries.

    console
    $ sudo apt install --assume-yes nvidia-container-toolkit nvidia-container-toolkit-base libnvidia-container-tools libnvidia-container1
    
  3. Reboot the system to load the new drivers.

    console
    $ sudo reboot
    
  4. Verify the NVIDIA drivers are installed.

    console
    $ nvidia-smi
    
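Optionally, you can also verify NVLink connectivity. This command lists each GPU's NVLink links and their state; active links are reported with their link speed.

console
$ nvidia-smi nvlink --status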

H100 and Older GPUs (with Fabric Manager)

H100 and older data center GPUs, such as the A100, use NVLink and NVSwitch to enable high-bandwidth, peer-to-peer communication across multiple GPUs in the same system. To support these features, NVIDIA requires both the CUDA proprietary drivers and the Fabric Manager service. Without Fabric Manager running alongside the proper drivers, NVLink/NVSwitch interconnects will not function correctly.

  1. Install the CUDA drivers, Fabric Manager, and CUDA toolkit.

    console
    $ sudo apt install --assume-yes cuda-drivers-fabricmanager cuda-toolkit
    
  2. Install the NVIDIA container runtime and supporting libraries.

    console
    $ sudo apt install --assume-yes nvidia-container-toolkit nvidia-container-toolkit-base libnvidia-container-tools libnvidia-container1
    
  3. Reboot the system to load the new drivers.

    console
    $ sudo reboot
    
  4. Verify that the NVIDIA drivers are installed.

    console
    $ nvidia-smi
    
  5. Enable and start the Fabric Manager service.

    console
    $ sudo systemctl enable --now nvidia-fabricmanager
    
  6. Verify that Fabric Manager is running.

    console
    $ sudo systemctl status nvidia-fabricmanager
    

    Output:

    ● nvidia-fabricmanager.service - NVIDIA fabric manager service
         Loaded: loaded (/lib/systemd/system/nvidia-fabricmanager.service; enabled; vendor preset: enabled)
         Active: active (running) since Tue 2025-09-02 13:47:35 UTC; 1h 17min ago
       Main PID: 4811 (nv-fabricmanage)
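
With Fabric Manager active, you can optionally verify that the GPUs communicate over NVLink/NVSwitch by printing the GPU topology matrix. GPU pairs connected through NVLink appear as NV# entries in the matrix.

console
$ nvidia-smi topo -m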

Install Specific Versions of NVIDIA Drivers and Packages

NVIDIA maintains driver branches to simplify installation of specific versions. Each GPU generation has a minimum supported driver version:

  • NVIDIA HGX-2 and HGX A100 systems: Minimum driver version 450.xx
  • NVIDIA HGX H100 systems: Minimum driver version 525.xx
  • NVIDIA HGX B200 and HGX B100 systems: Minimum driver version 570.xx

Append the version to the package name to install a specific driver branch. For example:

  • nvidia-open-570: Installs the open driver 570 branch (recommended for B200 systems).
  • cuda-drivers-550: Installs the proprietary driver 550 branch (suitable for H100 systems).
  • cuda-12-8 and cuda-toolkit-12-8: Installs CUDA 12.8 and its toolkit.
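
Before installing, you can check which exact driver version a branch package currently resolves to in the repository, for example:

console
$ apt-cache policy nvidia-open-570 cuda-drivers-550
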
Note
NVIDIA does not provide versioned branches for the container toolkit. If you require a specific version, you must specify it explicitly, for example:
console
$ sudo apt install nvidia-container-toolkit=VERSION
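
To see which container toolkit versions are currently available from the repository, you can list them with apt-cache, for example:

console
$ apt-cache madison nvidia-container-toolkit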

Example Installation using 570 Drivers with CUDA 12.8

NVIDIA provides multiple driver branches depending on the GPU generation. Follow the steps below to install the 570 driver branch with CUDA 12.8.

B200 and Newer GPUs

For B200 and other newer GPUs, install the open 570 driver branch and the CUDA 12.8 toolkit.

  1. Install the NVIDIA drivers, CUDA toolkit, and NVLink support.

    console
    $ sudo apt install --assume-yes nvidia-open-570 cuda-toolkit-12-8 nvlink5-570
    
  2. Install the NVIDIA container runtime and supporting libraries.

    console
    $ sudo apt install --assume-yes nvidia-container-toolkit nvidia-container-toolkit-base libnvidia-container-tools libnvidia-container1
    
  3. Reboot the system to load the new drivers.

    console
    $ sudo reboot
    
  4. Verify that the NVIDIA drivers are installed.

    console
    $ nvidia-smi
    
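To confirm that the active driver comes from the 570 branch, you can also query the driver version directly; the reported version should begin with 570.

console
$ nvidia-smi --query-gpu=driver_version --format=csv,noheader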

H100 and Older GPUs

For H100 and older GPUs, install the CUDA 570 driver branch with Fabric Manager and the CUDA 12.8 toolkit. Fabric Manager is required to enable NVLink/NVSwitch functionality in multi-GPU systems.

  1. Install the CUDA drivers, Fabric Manager, and CUDA toolkit.

    console
    $ sudo apt install --assume-yes cuda-drivers-fabricmanager-570 cuda-toolkit-12-8
    
  2. Install the NVIDIA container runtime and supporting libraries.

    console
    $ sudo apt install --assume-yes nvidia-container-toolkit nvidia-container-toolkit-base libnvidia-container-tools libnvidia-container1
    
  3. Reboot the system to load the new drivers.

    console
    $ sudo reboot
    
  4. Verify that the NVIDIA drivers are installed.

    console
    $ nvidia-smi
    
  5. Enable and start the Fabric Manager service.

    console
    $ sudo systemctl enable --now nvidia-fabricmanager
    
  6. Verify that Fabric Manager is running.

    console
    $ sudo systemctl status nvidia-fabricmanager
    

    Output:

    ● nvidia-fabricmanager.service - NVIDIA fabric manager service
         Loaded: loaded (/lib/systemd/system/nvidia-fabricmanager.service; enabled; vendor preset: enabled)
         Active: active (running) since Tue 2025-09-02 13:47:35 UTC; 1h 17min ago
       Main PID: 4811 (nv-fabricmanage)
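
Fabric Manager must match the installed driver version. If the service fails to start, you can compare the installed driver and Fabric Manager package versions to confirm they come from the same branch.

console
$ dpkg -l | grep -E 'fabricmanager|cuda-drivers'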

Conclusion

You have successfully downgraded or reinstalled NVIDIA drivers on your Bare Metal or Passthrough GPU instance and verified that the correct version is active. For systems that require Fabric Manager, you ensured NVLink and NVSwitch features are properly enabled. With the drivers, CUDA toolkit, and container runtime installed, your environment is now ready for high-performance GPU workloads and containerized deployments.
