
Vultr Cloud GPU instances provide access to NVIDIA vGPU technology, enabling virtualized graphics processing for applications requiring GPU acceleration. This includes machine learning workloads, video processing, and virtual desktop infrastructure (VDI) solutions. Proper management of vGPU drivers and licensing ensures optimal performance and compliance with NVIDIA's licensing requirements.
Follow this guide to manage vGPU on your Vultr Cloud GPU instances.
Prerequisites
Before you begin, you need to:
- Have access to a Vultr Cloud GPU instance as a non-root user with sudo privileges.
Install DKMS Package
Dynamic Kernel Module Support (DKMS) is a Linux framework that simplifies the management of kernel modules by automatically building, installing, and updating them whenever the kernel changes. Kernel modules such as GPU drivers, VirtualBox drivers, or ZFS are normally compiled for a specific kernel version, which can cause compatibility issues after a system update. DKMS eliminates this problem by recompiling and reinstalling modules against the new kernel, ensuring they remain functional and consistent across upgrades without requiring manual intervention.
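As a quick sanity check, the dkms status command lists every module DKMS currently tracks. The guarded sketch below is illustrative and safe to run even on machines where DKMS is not yet installed; after the NVIDIA driver install, an nvidia record per kernel typically appears here.

```shell
#!/bin/sh
# List DKMS-managed modules, degrading gracefully when dkms is absent.
if command -v dkms >/dev/null 2>&1; then
    echo "dkms available: $(dkms status 2>/dev/null | wc -l) module record(s)"
else
    echo "dkms not installed"
fi
```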
Debian-based Distributions
console$ sudo apt install -y dkms
RPM-based Distributions
console$ sudo dnf install -y dkms
Install NVIDIA Driver
Before installing, verify that your system detects the NVIDIA GPU hardware and confirm whether the NVIDIA kernel module is already loaded. The installation script should only be run if the GPU is detected but the driver is not installed.
Verify GPU and Kernel Module
Verify that your system detects the NVIDIA GPU hardware.
console$ lspci | grep -i nvidia
This command should always return a result on GPU-enabled instances, even if drivers are not installed.
Your output should be similar to the one below:
06:00.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 80GB] (rev a1)
If no output is displayed, your server does not have an attached NVIDIA GPU, and you should not proceed with driver installation.
Check if the NVIDIA kernel module is loaded.
console$ lsmod | grep nvidia
Note: Skip the following steps if the driver was already found in the previous check.
Driver Installation Steps
If your system detects the NVIDIA GPU but the driver is not loaded, install it manually.
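The decision above (install only when the GPU is present and no driver is loaded) can be sketched as a short shell function. The messages are illustrative, and /opt/nvidia/install.sh is the path used in this guide:

```shell
#!/bin/sh
# Decide whether the NVIDIA driver install script needs to run.
decide_install() {
    if ! lspci 2>/dev/null | grep -qi nvidia; then
        echo "No NVIDIA GPU detected; skip driver installation."
    elif lsmod 2>/dev/null | grep -q '^nvidia'; then
        echo "NVIDIA driver already loaded; skip installation."
    else
        echo "GPU present, driver missing; run: sudo /opt/nvidia/install.sh"
    fi
}
decide_install
```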
Ensure the DKMS package is installed, then run the NVIDIA driver installation script.
console$ sudo /opt/nvidia/install.sh
Reboot your system to load the new NVIDIA driver and allow the nouveau blacklist to take effect.
console$ sudo reboot
After reboot, verify that the drivers are installed and the GPU is accessible.
console$ lsmod | grep nvidia
console$ nvidia-smi
Output:
nvidia_drm            122880  0
nvidia_modeset       1355776  1 nvidia_drm
...
Thu Aug 28 07:39:00 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA A40-48Q                 On  |   00000000:06:00.0 Off |                    0 |
......
Update NVIDIA Driver
Before updating, verify that your system detects the NVIDIA GPU and that the current driver is installed. The update script should only be run if the GPU is present and the driver is already loaded.
Verify that your system detects the NVIDIA GPU hardware.
console$ lspci | grep -i nvidia
If no output is displayed, your server does not have an attached NVIDIA GPU, and you should not proceed with updating the driver.
Check if the NVIDIA kernel module is loaded.
console$ lsmod | grep nvidia
Note: Perform the following steps only if the GPU is present and the driver is already loaded.
Driver Update Steps
Run the NVIDIA driver update script.
console$ sudo /opt/nvidia/update.sh
This updates the drivers while preserving your existing configuration and licensing settings.
Reboot your system to apply the update.
console$ sudo reboot
After reboot, verify that the updated drivers are active.
console$ nvidia-smi
Your output should be similar to the one below:
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07              Driver Version: 550.90.07      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
...
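To confirm the new version programmatically, for example in an automation check after the reboot, nvidia-smi supports the --query-gpu flag. This guarded sketch prints only the driver version and falls back to a placeholder where nvidia-smi is unavailable:

```shell
#!/bin/sh
# Print the active NVIDIA driver version, or a placeholder when
# nvidia-smi is not on the PATH (e.g. on non-GPU machines).
driver_version() {
    if command -v nvidia-smi >/dev/null 2>&1; then
        nvidia-smi --query-gpu=driver_version --format=csv,noheader
    else
        echo "unavailable"
    fi
}
driver_version
```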
Troubleshoot Common Issues
If your NVIDIA drivers or vGPU services are not working as expected, use the steps below to diagnose and resolve common issues. These checks help you verify whether essential services are running, licensing is valid, and drivers are properly installed. If any component is missing or inactive, reinstalling the drivers with DKMS support usually resolves the problem.
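The checks in this section can be combined into a one-shot health report. The sketch below is illustrative: each line prints OK or FAIL for one component, so you can see at a glance which part needs attention.

```shell
#!/bin/sh
# One-shot vGPU health report covering the checks in this section.
check() {
    label=$1; shift
    if "$@" >/dev/null 2>&1; then
        echo "OK   $label"
    else
        echo "FAIL $label"
    fi
}
check "GPU detected"    sh -c 'lspci | grep -qi nvidia'
check "kernel module"   sh -c 'lsmod | grep -q nvidia'
check "nvidia-smi"      nvidia-smi
check "gridd service"   sh -c 'systemctl is-active --quiet nvidia-gridd.service'
```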
Verify Driver Manager is Running
The nvidia-driver-manager.service unit file should be installed on your system as part of the NVIDIA driver package. This service is primarily used to manage GPU drivers and does not need to remain running continuously.
Check the driver manager service status.
$ sudo systemctl status nvidia-driver-manager.service
Output:
● nvidia-driver-manager.service - NVIDIA Driver Manager
Loaded: loaded (/etc/systemd/system/nvidia-driver-manager.service; enabled; preset: enabled)
Active: inactive (dead) since Tue 2025-08-12 19:15:11 UTC; 43s ago
Main PID: 1060 (code=exited, status=0/SUCCESS)
CPU: 63ms
If the service is missing, ensure your operating system's DKMS package is installed and reinstall the drivers using sudo /opt/nvidia/install.sh.
Check Licensing Status
The nvidia-gridd.service manages NVIDIA vGPU licensing to enable advanced features such as multi-instance GPU (MIG) and virtual desktop infrastructure (VDI). This service must be active and correctly configured for licensed vGPU deployments.
Verify that the service is running and a license is present.
$ sudo systemctl status nvidia-gridd.service
$ sudo nvidia-smi -q | grep License
Output:
● nvidia-gridd.service - NVIDIA Grid Daemon
Loaded: loaded (/usr/lib/systemd/system/nvidia-gridd.service; enabled; preset: enabled)
Active: active (running) since Thu 2025-08-28 07:55:11 UTC; 1h 12min ago
Main PID: 1945 (nvidia-gridd)
Tasks: 3 (limit: 144799)
Memory: 4.3M (peak: 5.0M)
CPU: 25ms
CGroup: /system.slice/nvidia-gridd.service
└─1945 /usr/bin/nvidia-gridd
......
vGPU Software Licensed Product
License Status : Licensed (Expiry: 2025-8-29 7:55:13 GMT)
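When scripting license monitoring, the expiry timestamp can be extracted from the nvidia-smi -q output. The sample line below mirrors the output above, and the sed pattern is an illustrative sketch:

```shell
#!/bin/sh
# Extract the license expiry from an nvidia-smi -q "License Status" line.
# On a live instance, replace the sample with:
#   nvidia-smi -q | grep 'License Status'
sample='    License Status                    : Licensed (Expiry: 2025-8-29 7:55:13 GMT)'
expiry=$(printf '%s\n' "$sample" | sed -n 's/.*Expiry: \([^)]*\)).*/\1/p')
echo "License expires: $expiry"
```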
If nvidia-gridd.service is missing, ensure your operating system's DKMS package is installed and reinstall the drivers using sudo /opt/nvidia/install.sh.
Check if Driver is Installed
Verify that the NVIDIA drivers are properly installed and loaded by checking both the driver status and the kernel modules.
$ sudo nvidia-smi
$ sudo lsmod | grep nvidia
Output:
Thu Aug 28 09:14:23 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA A40-48Q On | 00000000:06:00.0 Off | 0 |
......
nvidia_uvm 4927488 0
nvidia_drm 122880 0
nvidia_modeset 1355776 1 nvidia_drm
......
If either nvidia-smi fails or no NVIDIA kernel modules are listed, ensure your operating system's DKMS package is installed and reinstall the drivers using sudo /opt/nvidia/install.sh.
Unsupported Linux Distributions
For Linux distributions that are not officially supported, you can use the alternative GPU installation script. This script provides best-effort support and may not receive the same level of validation as standard drivers.
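To check which distribution you are running before deciding between the standard and alternative scripts, /etc/os-release is the conventional source. A minimal sketch:

```shell
#!/bin/sh
# Print the distribution name and version from /etc/os-release.
distro_id() {
    if [ -r /etc/os-release ]; then
        . /etc/os-release
        echo "${NAME:-unknown} ${VERSION_ID:-}"
    else
        echo "unknown distribution"
    fi
}
distro_id
```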
Install Driver
Ensure that the DKMS package is installed on your system.
Run the alternative GPU installation script.
console$ sudo bash /opt/nvidia/linux_gpu.sh
Important: Reboot your system to apply the configuration and load the drivers.
console$ sudo reboot
After reboot, verify the installation.
console$ nvidia-smi
If successful, the output confirms that the NVIDIA drivers are loaded and the GPU is accessible.
Conclusion
Managing NVIDIA vGPU on Vultr Cloud GPU instances ensures that your GPU resources are properly configured, drivers are up to date, and licensing is valid. By following this guide, you have learned how to:
- Install and update NVIDIA drivers using DKMS for kernel compatibility.
- Verify GPU detection, driver status, and kernel modules.
- Check essential services such as nvidia-gridd.service and ensure licensing is active.
- Troubleshoot common issues and confirm proper driver installation.
- Use the alternative installation script for unsupported Linux distributions when necessary.