How to Mount Vultr Block Storage Volume on Linux

Updated on November 27, 2024

Mounting a Vultr Block Storage volume on Linux provides flexible and scalable file storage for Vultr Cloud Compute instances. Linux distributions including Debian, Ubuntu, CentOS, Rocky Linux, AlmaLinux, Arch Linux, and Alpine Linux support NVMe and HDD-based Vultr Block Storage volumes.

Follow this guide to mount a Vultr Block Storage volume on Linux.

Warning
The following commands may destroy data on existing volumes. Use a new Vultr Block Storage volume to avoid data loss due to file system changes and partitioning.
  1. Attach the Vultr Block Storage volume to your Vultr Cloud Compute instance.

  2. Use the lsblk utility to list all storage disks attached to the Vultr Cloud Compute instance and verify the Vultr Block Storage volume disk name.

    console
    $ lsblk
    

    Output:

    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
    sr0     11:0    1 1024M  0 rom
    vda    254:0    0   25G  0 disk
    ├─vda1 254:1    0  512M  0 part /boot/efi
    └─vda2 254:2    0 24.5G  0 part /
    vdb    254:16   0   40G  0 disk

    The Vultr Block Storage volume attaches as /dev/vdb based on the above output. The first Vultr Block Storage volume attaches to Linux as /dev/vdb, and additional volume disk names increment in alphabetical order, such as /dev/vdc and /dev/vdd.
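
    To limit the output to the new disk, you can also pass the device name directly to lsblk, assuming the volume attached as /dev/vdb as in the output above.

    console
    $ lsblk /dev/vdb
    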

  3. Create a new disk label using the parted utility.

    console
    $ sudo parted -s /dev/vdb mklabel gpt
    
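    Optionally, print the partition table to confirm that the new GPT disk label was applied, again assuming the volume attached as /dev/vdb.

    console
    $ sudo parted -s /dev/vdb print
    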
  4. Create a primary partition that spans the entire disk.

    console
    $ sudo parted -s /dev/vdb unit mib mkpart primary 0% 100%
    
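    If the new partition does not appear in the lsblk output immediately, you can ask the kernel to re-read the partition table with partprobe, assuming the volume attached as /dev/vdb.

    console
    $ sudo partprobe /dev/vdb
    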
  5. Format the primary partition with the ext4 file system.

    console
    $ sudo mkfs.ext4 /dev/vdb1
    

    Output:

    mke2fs 1.47.0 (5-Feb-2023)
    Discarding disk blocks: done
    Creating filesystem with 10485248 4k blocks and 2621440 inodes
    Filesystem UUID: 95b1f596-e044-4dcd-beb3-a94877960e4d
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (65536 blocks): done
    Writing superblocks and filesystem accounting information: done
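
    As an alternative to the command above, mkfs.ext4 also accepts a volume label with the -L option, which lets you reference the file system by LABEL= instead of UUID= in later steps. The label name below is only an example.

    console
    $ sudo mkfs.ext4 -L blockstorage /dev/vdb1  # "blockstorage" is an example label
    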
  6. Create a new mount point directory for the Vultr Block Storage volume partition.

    console
    $ sudo mkdir /mnt/blockstorage
    
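    If the directory may already exist, the -p option makes mkdir succeed instead of reporting an error.

    console
    $ sudo mkdir -p /mnt/blockstorage
    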
  7. View detailed information about the Vultr Block Storage volume partition and note its UUID value.

    console
    $ sudo blkid /dev/vdb1
    

    Output:

    /dev/vdb1: UUID="95b1f596-e044-4dcd-beb3-a94877960e4d" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="primary" PARTUUID="a7eb098c-288d-4040-9aac-38b36d4e63e7"
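
    Alternatively, the lsblk utility can display the same UUID along with the file system type and mount points.

    console
    $ lsblk -f /dev/vdb
    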
  8. Add a new entry to the /etc/fstab file to automatically mount the Vultr Block Storage volume at boot. Replace UUID-VALUE with the actual Vultr Block Storage volume partition UUID.

    console
    $ echo "UUID=UUID-VALUE /mnt/blockstorage ext4 defaults,noatime,nofail 0 0" | sudo tee -a /etc/fstab
    
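    Verify that the new entry appears at the end of the /etc/fstab file before mounting or rebooting.

    console
    $ tail -n 1 /etc/fstab
    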
  9. Mount the Vultr Block Storage volume partition.

    console
    $ sudo mount /mnt/blockstorage
    
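    To confirm that the mount succeeded and check the available space, you can query the mount point with df.

    console
    $ df -h /mnt/blockstorage
    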
  10. View all active storage volumes on the Vultr Cloud Compute instance and verify that the new Vultr Block Storage volume is available.

    console
    $ lsblk
    

    Output:

    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
    sr0     11:0    1 1024M  0 rom  
    vda    254:0    0   25G  0 disk 
    ├─vda1 254:1    0  512M  0 part /boot/efi
    └─vda2 254:2    0 24.5G  0 part /
    vdb    254:16   0   40G  0 disk 
    └─vdb1 254:17   0   40G  0 part /mnt/blockstorage
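
    As a final check, you can write a test file to the new mount point to verify that the volume is writable. The file name below is only an example.

    console
    $ sudo touch /mnt/blockstorage/test-file  # example file name
    $ ls -l /mnt/blockstorage
    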