
How to Create a Private Mesh Network with Vultr and Google Cloud Platform (GCP) Using NetBird

Updated on 16 October, 2025
Guide
Set up a secure multi-cloud VPN by connecting Vultr and Google Cloud Platform using NetBird.

NetBird is an open-source peer-to-peer VPN platform that creates secure, private networks across distributed infrastructure. It uses WireGuard-based tunnels to establish encrypted mesh networks, allowing low-latency connections between nodes without manual firewall configuration or port-forwarding rules.

This guide explains how to self-host the NetBird control plane on a Vultr instance and connect peers across both Vultr and Google Cloud Platform (GCP). By the end, you will have a functional multi-cloud mesh network, with peers on Vultr and GCP communicating securely over private addresses. You will also learn how to designate an exit node for centralized egress routing.

Prerequisites

Before setting up your Vultr–GCP mesh network, make sure you have:

  • An Ubuntu-based Vultr instance that will host the NetBird control plane.
    • Example: deployed in the Delhi (DEL) region.
    • This instance needs a domain name, such as netbird.example.com, with a DNS A record pointing to its public IP address.
  • At least one additional Ubuntu-based Vultr instance to join the network as a peer.
    • Example: deployed in the Amsterdam (AMS) region.
  • A Google Cloud VM running Ubuntu, deployed in a region of your choice (for example, us-central1).

Deploy the NetBird Control Plane on Vultr

The control plane coordinates all peers in your network and must run on a publicly accessible server. In this setup, you will deploy it on a Vultr instance using Docker. The deployment includes the management service, signaling service, TURN/STUN server, and a default identity provider (Zitadel).

  1. Open firewall ports for HTTPS, signaling, management, and TURN/STUN on the Vultr control plane host.

    console
    $ sudo ufw allow 80/tcp            # HTTP (Let's Encrypt challenges and redirects)
    $ sudo ufw allow 443/tcp           # HTTPS (dashboard, APIs, identity provider)
    $ sudo ufw allow 33073/tcp         # Management service
    $ sudo ufw allow 10000/tcp         # Signal service
    $ sudo ufw allow 33080/tcp         # Relay service
    $ sudo ufw allow 3478/udp          # STUN/TURN
    $ sudo ufw allow 49152:65535/udp   # TURN relay port range
    $ sudo ufw reload
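    # Optionally, confirm the rules are active (assumes UFW is already enabled):
    $ sudo ufw status numbered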
    
  2. Install Docker Engine, Docker Compose plugin, and required utilities from Docker’s official repository.

    console
    $ sudo apt update
    $ sudo apt install ca-certificates curl gnupg lsb-release -y
    $ sudo install -m 0755 -d /etc/apt/keyrings
    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    $ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    $ sudo apt update
    $ sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin jq -y
    
  3. Enable and start the Docker service.

    console
    $ sudo systemctl enable --now docker
    
  4. Add your user to the docker group and refresh group membership so you can run Docker commands without sudo.

    console
    $ sudo usermod -aG docker $USER
    $ newgrp docker
    
  5. Verify that the Docker Compose plugin is available.

    console
    $ docker compose version
    
  6. Run the NetBird installer with NETBIRD_DOMAIN set to your domain.

    console
    $ export NETBIRD_DOMAIN=netbird.example.com
    $ curl -fsSL https://github.com/netbirdio/netbird/releases/latest/download/getting-started-with-zitadel.sh | bash
    
    Note
    Replace netbird.example.com with the domain pointing to your Vultr control plane instance.
  7. After the installer completes, the NetBird management interface is available at:

    https://netbird.example.com

    Copy the credentials displayed in your terminal to log in to the dashboard and begin adding peers to your mesh network.

    Warning
    Do not close your terminal before copying the displayed admin credentials. These values are only shown once during installation.
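
To confirm that the control plane services came up, you can list the containers started by the installer. This is a quick sanity check, assuming you run it from the directory where you executed the installer (where its docker-compose.yml was created); the exact container names vary by NetBird version.

    console
    $ docker compose ps

All listed services, such as the management, signal, relay/TURN, dashboard, and Zitadel containers, should report a running (Up) state.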

Configure GCP Firewall for NetBird Peers

Google Cloud Platform (GCP) controls network access at the VPC firewall level. By default, outbound (egress) traffic is open to all destinations. If your project uses custom firewall rules, you must confirm that outbound access to the NetBird control plane is allowed.

  1. Log in to the Google Cloud Console.

  2. In the left sidebar, click VPC Network.

  3. Select Firewall rules.

  4. Click Create firewall rule (or edit an existing one if outbound rules are restricted).

  5. Add egress rules that allow the following ports (a gcloud CLI sketch is included at the end of this section):

    • TCP: 80, 443, 33073, 10000, 33080
    • UDP: 3478, 49152–65535
  6. Verify connectivity from the GCP VM to your Vultr control plane domain.

    console
    $ curl -I https://netbird.example.com
    

    A response such as 200 OK confirms the peer can reach the control plane.
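
If you prefer the gcloud CLI over the console, an equivalent egress rule can be created with a command along the following lines. This is a sketch under assumptions: the rule name allow-netbird-egress is arbitrary, the rule targets the default VPC network, and no higher-priority deny rules override it.

    console
    $ gcloud compute firewall-rules create allow-netbird-egress \
        --direction=EGRESS \
        --action=ALLOW \
        --rules=tcp:80,tcp:443,tcp:33073,tcp:10000,tcp:33080,udp:3478,udp:49152-65535 \
        --destination-ranges=0.0.0.0/0 \
        --network=default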

Add Peers to the NetBird Network

The recommended way to connect servers from Vultr and GCP is by using setup keys. Setup keys are pre-authorized tokens that let peers join automatically without an interactive login. See the NetBird Setup Keys documentation for more details.

  1. In the Admin Panel, navigate to Setup Keys and click Create Setup Key.

    • Assign a name (for example, multi-cloud-peers).
    • Configure usage limits as needed.
    • Copy the key value.
  2. On each peer, install the NetBird client.

    console
    $ curl -fsSL https://pkgs.netbird.io/install.sh | sh
    
  3. Register the peer with your self-hosted control plane.

    console
    $ sudo netbird up --management-url https://netbird.example.com --admin-url https://netbird.example.com --setup-key <SETUP_KEY>
    

    Replace <SETUP_KEY> with the copied key.

  4. In the Admin Panel, verify the peer appears online. Rename it to something descriptive, such as vultr-ams or gcp-vm, and assign it to groups as needed. You can also confirm registration from the peer itself, as shown below.
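
You can also run a quick check on each peer. The command below assumes the NetBird client daemon is running; it reports the connection to the management service and the peer's assigned NetBird IP.

    console
    $ netbird status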

Final Verification

Once you register both Vultr and GCP peers, confirm they can communicate securely over the NetBird private network.

  1. In the Admin Panel, open the Peers tab.

    • Both peers should display as Online.
    • Each peer is assigned a mesh IP address (100.x.x.x) from NetBird's default 100.64.0.0/10 range.
  2. From one peer (for example, your Vultr AMS VM), test connectivity to the other peer’s NetBird IP (for example, the GCP VM).

    console
    $ ping 100.x.x.x  # Replace with the actual mesh IP of the remote peer
    
  3. If the ping succeeds, the NetBird mesh is working as expected.

    Your output should be similar to the one below:

    PING 100.100.1.2 (100.100.1.2) 56(84) bytes of data.
    64 bytes from 100.100.1.2: icmp_seq=1 ttl=64 time=28.6 ms
    64 bytes from 100.100.1.2: icmp_seq=2 ttl=64 time=28.7 ms
  4. Repeat the ping in the opposite direction (from GCP to Vultr) to verify two-way connectivity. For an additional end-to-end check, see the SSH sketch below.
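
Beyond ICMP, you can verify that ordinary services are reachable across the mesh. As a minimal sketch, assuming an SSH server is running on the remote peer and 100.100.1.2 is its mesh IP, connect over the NetBird address:

    console
    $ ssh <user>@100.100.1.2

Replace <user> with a valid account on the remote peer.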

Route Traffic Through an Exit Node

You can configure one peer as an exit node so others route their internet traffic through it. In this example, the Vultr AMS instance will act as the exit node, and the GCP VM will route through it.

Designate the Vultr AMS Instance as Exit Node

  1. In the Peers tab of the Admin Panel, select the peer named vultr-ams.

  2. Scroll down and click Set Up Exit Node.

    Peer detail view highlighting the Set up exit node button for vultr-ams

  3. Assign an identifier, such as ams-exit.

    Exit node configuration form showing identifier field filled with ams-exit

  4. In the Distribution Groups dropdown, select or create a group that will include the GCP peer (for example, gcp-nodes).

    Exit node configuration modal with gcp-nodes group selected

  5. Click Save Changes.

    The vultr-ams instance is now configured as an exit node.

Assign the GCP Peer to the Distribution Group

  1. In the Peers view, click the GCP VM (for example, gcp-vm).
  2. Assign it to the gcp-nodes group.
  3. Confirm that it now appears in the list of peers using vultr-ams as an exit node.

Verify Routing

To confirm that traffic from the GCP VM is routed through the Vultr AMS exit node, run the following steps on the AMS-region Vultr peer, then check the egress IP from the GCP VM as shown after the list.

  1. Enable IP forwarding.

    console
    $ sudo sysctl -w net.ipv4.ip_forward=1
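    # Optional: persist forwarding across reboots (assumption; the file name below is arbitrary).
    $ echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
    $ sudo sysctl --system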
    
  2. Add a MASQUERADE rule to NAT outbound traffic via the correct network interface.

    console
    $ sudo iptables -t nat -A POSTROUTING -o $(ip route get 1.1.1.1 | awk '{print $5}') -j MASQUERADE
    
  3. Confirm IP forwarding is enabled.

    console
    $ sudo sysctl net.ipv4.ip_forward
    
  4. Monitor traffic routed through this peer.

    console
    $ sudo watch -n1 "iptables -t nat -v -L POSTROUTING"
    

    The output shows a live counter of packets hitting the MASQUERADE rule. You should see the packet and byte counts increase as traffic from the GCP VM flows through the Vultr AMS exit node.

    Chain POSTROUTING (policy ACCEPT 120 packets, 9850 bytes)
    pkts bytes target     prot opt in     out     source               destination
    25  2100 MASQUERADE  all  --  any    enp1s0  anywhere             anywhere
    Note
    If the packet count does not increase, confirm that the GCP peer is assigned to use vultr-ams as its exit node, and verify that both VM firewalls allow outbound connections.
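
As a final check from the GCP VM, compare its apparent public IP with the public IP of the Vultr AMS instance. This sketch assumes the GCP peer is in the gcp-nodes distribution group and that the third-party service ifconfig.me is reachable:

    console
    $ curl https://ifconfig.me

If exit-node routing is working, the address returned matches the public IP of the Vultr AMS exit node rather than the GCP VM's own external IP.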

NetBird Use Cases and Components

Common Multi-Cloud Use Cases

  • Hybrid Cloud Networking: Build a single private network that spans Vultr and GCP workloads.
  • Centralized Egress: Route Google Cloud traffic through a Vultr exit node for consistent geo-IP and monitoring.
  • Secure Application Mesh: Link Kubernetes clusters, VMs, or bare metal across providers without exposing them publicly.

Key Components

  • Management Service: Registers peers and applies network policies.
  • Signal & TURN Server: Helps peers connect behind NAT and relays traffic when direct tunnels aren’t possible.
  • Peer Agent: The client software that establishes encrypted WireGuard tunnels.
  • Setup Keys: Tokens that let headless servers or VMs join without manual login.
  • SSO Integration: Optional integration with identity providers like Google or GitHub.

Conclusion

In this guide, you deployed the NetBird control plane on a Vultr instance and connected peers across Vultr and Google Cloud. You registered peers using setup keys, verified private connectivity, and configured an exit node for centralized traffic routing. With this configuration, you can securely extend applications and workloads across providers, making Vultr–GCP deployments more flexible, resilient, and easier to manage.
