How to Migrate Google Cloud Persistent Disks to Vultr Block Storage

Vultr Block Storage is a high-performance, highly available, and scalable storage solution that lets you expand storage capacity independently of your compute instance. Google Cloud Platform (GCP) persistent disks are block storage volumes that you can attach to virtual machines to store data, similar to Vultr Block Storage volumes.
Follow this guide to migrate Google Cloud Platform (GCP) persistent disks to Vultr Block Storage. You will set up a migration environment, transfer your data, and verify the integrity of the transferred files using Rsync.
Prerequisites
Before you begin, you need to:
- Have access to an Ubuntu 24.04 LTS based Google Compute Engine instance as a non-root sudo user.
- Have an existing Persistent Disk that's attached to your Google Compute Engine instance.
- Have access to an Ubuntu 24.04 LTS based Vultr Cloud Compute instance as a non-root sudo user.
- Have a Vultr Block Storage volume attached to the instance.
Set Up the Migration Environment
In this section, you'll prepare the Ubuntu instances in both your Google Cloud project and your Vultr account for the data migration. To prepare your GCP environment, open an SSH session to your Compute Engine instance in your browser and follow these steps:
List the filesystem devices attached to your instance.
console$ lsblk -f
Below is an example output:
NAME    FSTYPE FSVER LABEL           UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
loop0                                                                           0   100% /snap/core20/2434
loop1                                                                           0   100% /snap/google-cloud-cli/307
loop2                                                                           0   100% /snap/snapd/23545
sda
├─sda1  ext4   1.0   cloudimg-rootfs 95c27671-daaf-425b-a74d-013408e1ed14    6.6G    24% /
├─sda14
├─sda15 vfat   FAT32 UEFI            1B50-CD0D                              98.2M     6% /boot/efi
└─sda16 ext4   1.0   BOOT            ce82e1aa-5e74-4563-8892-24ec5047a744  758.1M     7% /boot
sdb
sda contains your operating system files, and sdb is the unmounted persistent disk. GCP persistent disk device names append a letter of the alphabet to sd in sequence for each additional disk, such as sdc, sdd, sde, and so on.
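If you are unsure which device corresponds to which persistent disk, GCP typically creates name-based symlinks under /dev/disk/by-id. The disk name shown in the symlinks is only an illustration; your disk names will differ.
console$ ls -l /dev/disk/by-id/google-*
The output lists symlinks such as /dev/disk/by-id/google-persistent-disk-1 pointing to device names like /dev/sdb.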
Create the /mnt/persistentdisk directory to mount the disk.
console$ sudo mkdir /mnt/persistentdisk
Mount the /dev/sdb disk to the /mnt/persistentdisk directory.
console$ sudo mount /dev/sdb /mnt/persistentdisk
If you want to migrate data from a specific partition of your disk, replace /dev/sdb with that partition's device, as shown below.
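For example, assuming the data lives on a hypothetical first partition named /dev/sdb1, the mount command would be:
console$ sudo mount /dev/sdb1 /mnt/persistentdisk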
List the attached devices with additional details.
console$ lsblk -f
Example output:
NAME    FSTYPE LABEL           UUID                                 FSAVAIL FSUSE% MOUNTPOINT
loop0                                                                     0   100% /snap/core20/2434
loop1                                                                     0   100% /snap/google-cloud-cli/307
loop2                                                                     0   100% /snap/lxd/29619
loop3                                                                     0   100% /snap/snapd/23545
sda
├─sda1  ext4   cloudimg-rootfs ba81b8eb-e557-4f1b-a83f-8e655aa5c744    7.5G    21% /
├─sda14
└─sda15 vfat   UEFI            7CBD-1472                              98.3M     6% /boot/efi
sdb     ext4                   a9955ba0-3a92-4133-bdf4-23db945af1e1    9.2G     0% /mnt/persistentdisk
You can see that the filesystem format of the sdb persistent disk is ext4 and that it is mounted at /mnt/persistentdisk. Note this format for use with your destination Vultr Block Storage volume.
Change the ownership of the mount directory and its files to your username so that the disk and its contents are readable and writable by your user.
console$ sudo chown -R gcpuser:gcpuser /mnt/persistentdisk
Replace gcpuser with your actual username in the command above.
Navigate to the disk's mount directory.
console$ cd /mnt/persistentdisk
If your disk is empty, create some dummy files for testing purposes.
console$ touch test1.txt test2.txt
Add content to these dummy files.
console$ echo "Hi from Vultr" | tee test1.txt test2.txt
View disk usage and check the volume's usage ratio.
console$ df -h
Example output:
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       8.7G  2.3G  6.4G  27% /
tmpfs           480M     0  480M   0% /dev/shm
tmpfs           192M  968K  191M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
efivarfs         56K   24K   27K  48% /sys/firmware/efi/efivars
/dev/sda16      881M   61M  759M   8% /boot
/dev/sda15      105M  6.1M   99M   6% /boot/efi
tmpfs            96M   12K   96M   1% /run/user/1001
/dev/sdb        9.8G   32K  9.3G   1% /mnt/persistentdisk
In the above output, the /dev/sdb persistent disk, mounted on the /mnt/persistentdisk directory, shows 1% usage.
Consider the size and filesystem format of your source disk, and create an appropriately sized Vultr Block Storage volume to transfer the data to.
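To estimate how much data you will transfer, and therefore how large the destination Vultr Block Storage volume needs to be, check the used space under the mount point:
console$ du -sh /mnt/persistentdisk
The output shows the total size of the data in the /mnt/persistentdisk directory.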
To prepare your Vultr environment, access your Vultr Cloud Compute instance via SSH as a non-root sudo user and follow these steps:
List the attached devices.
console$ lsblk -f
Example output:
NAME   FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sr0
vda
├─vda1 vfat   FAT32       4873-68FE                             504.8M     1% /boot/efi
└─vda2 ext4   1.0         0056d2d8-b1b5-4079-8941-7b85c28b4c31   39.8G    18% /
vdb
vda contains your operating system files, and vdb is the unmounted Block Storage volume. Vultr Block Storage volumes follow the vd naming convention, with additional disks named vdc, vdd, vde, and so on, based on the number of attached volumes.
Format the volume with the same filesystem as the source persistent disk, such as ext4.
console$ sudo mkfs.ext4 /dev/vdb
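If your source disk uses a different filesystem, format the volume accordingly. For example, assuming the source disk uses XFS, you would format the volume with mkfs.xfs (available on Ubuntu through the xfsprogs package):
console$ sudo mkfs.xfs /dev/vdb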
Create the /mnt/blockstorage directory to mount the volume.
console$ sudo mkdir /mnt/blockstorage
Mount the /dev/vdb volume in the /mnt/blockstorage directory.
console$ sudo mount /dev/vdb /mnt/blockstorage
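The mount above does not persist across reboots. If you want the volume to mount automatically at boot, you can optionally add an fstab entry that references the volume's UUID. This is a minimal sketch assuming the ext4 filesystem and the /mnt/blockstorage mount point used in this guide:
console$ echo "UUID=$(sudo blkid -s UUID -o value /dev/vdb) /mnt/blockstorage ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
The nofail option lets the instance boot normally even if the volume is detached later.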
Change the ownership of the mount directory to ensure it's writable by your non-root user.
console$ sudo chown linuxuser:linuxuser /mnt/blockstorage
Replace linuxuser with your non-root username.
View disk usage and check that the volume is mounted.
console$ df -h
Example output:
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           197M  1.2M  196M   1% /run
efivarfs        256K   14K  238K   6% /sys/firmware/efi/efivars
/dev/vda2        52G  9.0G   40G  19% /
tmpfs           982M     0  982M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/vda1       511M  6.2M  505M   2% /boot/efi
/dev/vdb         14G   32K   13G   1% /mnt/blockstorage
tmpfs           197M   12K  197M   1% /run/user/1002
In the above output, the /dev/vdb Block Storage volume is mounted to the /mnt/blockstorage directory.
If your GCP persistent disk has multiple partitions, create matching partitions on your Vultr Block Storage volume with the same filesystem format and an equal or larger size.
Migrate the Persistent Disk to Vultr Block Storage
In this section, you will migrate the files using the Rsync tool. Rsync uses SSH to securely transfer the files over an encrypted connection. On the GCP Compute Engine instance:
Use the below command to start the transfer. Replace linuxuser with your Vultr Cloud Compute instance's non-root username and 192.0.2.1 with its IP address.
console$ rsync -avz --partial --progress --exclude="lost+found" /mnt/persistentdisk/ linuxuser@192.0.2.1:/mnt/blockstorage/
Parameters of the command above do the following:
- -a - Archive mode; preserves the file structure, permissions, and timestamps.
- -v - Generates more verbose output.
- -z - Compresses the files during data transfer.
- --progress - Displays real-time transfer progress.
- --partial - Keeps partially transferred files if the transfer is interrupted, so the transfer can resume instead of starting over.
- --exclude="lost+found" - Excludes the lost+found directory from the transfer to avoid permission errors and irrelevant data. The filesystem generates this directory automatically for recovery purposes, and it does not contain user data.
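Optionally, you can preview what Rsync would transfer before copying any data by adding the -n (--dry-run) flag to the same command:
console$ rsync -avzn --partial --progress --exclude="lost+found" /mnt/persistentdisk/ linuxuser@192.0.2.1:/mnt/blockstorage/
This prints the file list without writing anything to the destination volume.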
You will be prompted to confirm that you want to connect to the remote host. Enter yes at the prompt.
Enter the password of your Vultr Cloud Compute username at the next prompt.
Wait for the transfer to complete. The transfer may take from several minutes to hours, depending on the data size.
If the file transfer is interrupted or exits with an error, execute the rsync command again and enter the SSH password to resume the transfer.
console$ rsync -avz --partial --progress --exclude="lost+found" /mnt/persistentdisk/ linuxuser@192.0.2.1:/mnt/blockstorage/
If your Google Cloud persistent disk has multiple partitions, repeat this process for each of them.
Test the Vultr Block Storage Volume
In this section, you will verify that all files transferred without errors. You will use the Rsync file integrity check, then create lists of checksums as an extra layer of integrity verification. You will also preview the content of files to ensure they are accessible.
Verify that all files transferred without errors. Replace linuxuser with your Vultr instance's non-root username and 192.0.2.1 with the Vultr instance's IP address.
console$ rsync -avc /mnt/persistentdisk/ --exclude "lost+found" linuxuser@192.0.2.1:/mnt/blockstorage/
The -c option compares files by checksum instead of size and modification time, and Rsync re-transfers any corrupted or missing files it finds.
Enter the SSH password of the Vultr instance.
linuxuser@192.0.2.1's password:
Wait for the integrity check to complete.
Example output:
sending incremental file list

sent 265 bytes  received 14 bytes  15.94 bytes/sec
total size is 1,090,519,063  speedup is 3,908,670.48
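To confirm the transferred files are accessible, you can preview a file's content on the Vultr instance. This example reads the dummy test1.txt file created earlier; substitute a path to one of your own files:
console$ ssh linuxuser@192.0.2.1 'cat /mnt/blockstorage/test1.txt'
The command outputs the file's content, Hi from Vultr, confirming the file is readable on the destination volume.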
Navigate to your user's home directory.
console$ cd ~
Generate a list of hash checksums of all the files in the Google Cloud persistent disk.
console$ find /mnt/persistentdisk -type d -name "lost+found" -prune -o -type f -exec sha256sum {} \; > persistent_disk_checksums.txt
The command above skips computing checksums for the lost+found directory. Rsync didn't transfer this directory because the filesystem generates it automatically for recovery purposes and it does not contain user data. The command may take from minutes to hours to finish, depending on the data size. If the command finishes without printing any errors, the hash list was generated in persistent_disk_checksums.txt.
Generate a list of hash checksums of all the files on the Block Storage volume and download it. Replace linuxuser with your Vultr instance's non-root username and 192.0.2.1 with the Vultr instance's IP address.
console$ ssh linuxuser@192.0.2.1 'find /mnt/blockstorage -type d -name "lost+found" -prune -o -type f -exec sha256sum {} \;' > blockstorage_checksums.txt
Enter the SSH password of the Vultr instance.
linuxuser@192.0.2.1's password:
If the command finishes without any output, it generated the hash list without errors.
Sort the Block Storage checksum list to make the file comparable with the other checksum file.
console$ sort blockstorage_checksums.txt -o blockstorage_checksums.txt
Sort the persistent disk checksum list to make the file comparable with the other checksum file.
console$ sort persistent_disk_checksums.txt -o persistent_disk_checksums.txt
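As a quick sanity check before comparing checksums, you can optionally confirm that both lists contain the same number of entries:
console$ wc -l persistent_disk_checksums.txt blockstorage_checksums.txt
If the line counts differ, some files were not transferred, and you should run the Rsync command from the previous section again.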
Adjust paths in the persistent disk checksum file to match the Block Storage path structure.
console$ sed -i 's|/mnt/persistentdisk|/mnt/blockstorage|' persistent_disk_checksums.txt
This ensures that both checksum files are comparable in the next step.
Compare the checksums.
console$ diff persistent_disk_checksums.txt blockstorage_checksums.txt
If the command above returns no output, it means the files transferred without any corruption.
Otherwise, if the command returns output similar to the following, repeat the file transfer process in the previous section. You won't have to wait for the entire process to finish again, as Rsync only transfers the missing or corrupted files.
3c3
< e75bad1889255d4806f5e14838c81c4fa848b53ed8af29d1bdd517de07dad3b1  /mnt/blockstorage/test1.txt
---
> e15bad1889255d4806f5e14838c81c4fa848b53ed8af29d1bdd517de07dad3b1  /mnt/blockstorage/test1.txt
The output above means that the file test1.txt transferred with errors.
Cutover to Vultr Block Storage
Once the data is transferred successfully, you can cut over your application to use Vultr Block Storage.
- Update your application's DNS records to point to your Vultr Cloud Compute instance's IP.
- Delete your Google Cloud persistent disk and its Compute Engine instances if they are no longer required (see the example after this list).
- Reconfigure all applications to use the new Vultr Block Storage volume, and ensure no application uses the old persistent disk.
- Set proper file ownership and permissions on the migrated data.
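For example, assuming a disk named example-disk attached to an instance named example-instance in the us-central1-a zone (placeholder names for illustration), you could detach and then delete the persistent disk with the gcloud CLI:
console$ gcloud compute instances detach-disk example-instance --disk example-disk --zone us-central1-a
console$ gcloud compute disks delete example-disk --zone us-central1-a
Only run these commands after you have confirmed that the data on the Vultr Block Storage volume is complete.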
Conclusion
You have migrated data from a GCP Persistent Disk to a Vultr Block Storage volume. You've used the Rsync tool to transfer the data and verified its integrity and accessibility. For more information on Vultr Block Storage, visit the official documentation.