
NetBird is an open-source peer-to-peer VPN platform that creates secure, private networks across distributed infrastructure. It uses WireGuard-based tunnels to establish encrypted mesh networks, allowing low-latency connections between nodes without manual firewall configuration or port forwarding.
This guide explains how to self-host the NetBird control plane on a Vultr instance and connect peers across both Vultr and Google Cloud Platform (GCP). By the end, you will have a functional multi-cloud mesh network, with peers in Vultr and GCP communicating securely over private addresses. You will also learn how to designate an exit node for centralized egress routing.
Prerequisites
Before setting up your Vultr–GCP mesh network, make sure you have:
- An Ubuntu-based Vultr instance that will host the NetBird control plane.
  - Example: deployed in the Delhi (DEL) region.
  - This instance needs a domain name with its DNS A record pointing to the public IP, such as netbird.example.com (see the check after this list).
- At least one additional Ubuntu-based Vultr instance to join the network as a peer.
  - Example: deployed in the Amsterdam (AMS) region.
- A Google Cloud VM running Ubuntu, deployed in a region of your choice (for example, us-central1).
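To confirm the DNS record before you begin, a quick lookup from any machine should return the control plane's public IP. This assumes the record has already propagated; nslookup works as well if dig is not installed.
console
$ dig +short netbird.example.com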
Deploy the NetBird Control Plane on Vultr
The control plane coordinates all peers in your network and must run on a publicly accessible server. In this setup, you will deploy it on a Vultr instance using Docker. The deployment includes the management service, signaling service, TURN/STUN server, and a default identity provider (Zitadel).
Open firewall ports for HTTPS, signaling, management, and TURN/STUN on the Vultr control plane host.
console
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw allow 33073/tcp
$ sudo ufw allow 10000/tcp
$ sudo ufw allow 33080/tcp
$ sudo ufw allow 3478/udp
$ sudo ufw allow 49152:65535/udp
$ sudo ufw reload
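Optionally confirm the rules are in place before continuing. If UFW reports that it is inactive, enable it with sudo ufw enable, but make sure your SSH port is allowed first so you do not lock yourself out.
console
$ sudo ufw status verbose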
Install Docker Engine, Docker Compose plugin, and required utilities from Docker’s official repository.
console
$ sudo apt update
$ sudo apt install ca-certificates curl gnupg lsb-release -y
$ sudo install -m 0755 -d /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt update
$ sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin jq -y
Enable and start the Docker service.
console$ sudo systemctl enable --now docker
Add your user to the Docker group and refresh group membership.
console
$ sudo usermod -aG docker $USER
$ newgrp docker
Verify that the Docker Compose plugin is available.
console$ docker compose version
Run the NetBird installer with NETBIRD_DOMAIN set to your domain.
console
$ export NETBIRD_DOMAIN=netbird.example.com
$ curl -fsSL https://github.com/netbirdio/netbird/releases/latest/download/getting-started-with-zitadel.sh | bash
Note: Replace netbird.example.com with the domain pointing to your Vultr control plane instance.
After the installer completes, the NetBird management interface is available at:
https://netbird.example.com
Copy the credentials displayed in your terminal to log in to the dashboard and begin adding peers to your mesh network.
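Before logging in, you can optionally confirm that the NetBird stack is running. The exact container names depend on the installer version, but you should see the management, signal, dashboard, Zitadel, TURN, and Caddy containers in an Up state.
console
$ docker ps --format "table {{.Names}}\t{{.Status}}"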
Configure GCP Firewall for NetBird Peers
Google Cloud Platform (GCP) controls network access at the VPC firewall level. By default, outbound (egress) traffic is open to all destinations. If your project uses custom firewall rules, you must confirm that outbound access to the NetBird control plane is allowed.
Log in to the Google Cloud Console.
In the left sidebar, click VPC Network.
Select Firewall rules.
Click Create firewall rule (or edit an existing one if outbound rules are restricted).
Add egress rules that allow the following ports (an equivalent gcloud command follows this list):
- TCP: 80, 443, 33073, 10000, 33080
- UDP: 3478, 49152–65535
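If you prefer the gcloud CLI to the console, a rule similar to the following sketch covers the ports above. The rule name allow-netbird-egress and the default network are placeholders; adjust them to match your project. Because GCP allows all egress by default, this rule is only needed when custom deny rules restrict outbound traffic.
console
$ gcloud compute firewall-rules create allow-netbird-egress \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443,tcp:33073,tcp:10000,tcp:33080,udp:3478,udp:49152-65535 \
    --destination-ranges=0.0.0.0/0 \
    --network=default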
Verify connectivity from the GCP VM to your Vultr control plane domain.
console$ curl -I https://netbird.example.com
A response such as 200 OK confirms the peer can reach the control plane.
Add Peers to the NetBird Network
The recommended way to connect servers from Vultr and GCP is by using setup keys. Setup keys are pre-authorized tokens that let peers join automatically without an interactive login. See the NetBird Setup Keys documentation for more details.
In the Admin Panel, navigate to Setup Keys and click Create Setup Key.
- Assign a name (for example, multi-cloud-peers).
- Configure usage limits as needed.
- Copy the key value.
On each peer, install the NetBird client.
console$ curl -fsSL https://pkgs.netbird.io/install.sh | sh
Register the peer with your self-hosted control plane.
console$ sudo netbird up --management-url https://netbird.example.com --admin-url https://netbird.example.com --setup-key <SETUP_KEY>
Replace <SETUP_KEY> with the copied key.
In the Admin Panel, verify the peer appears online. Rename it to something descriptive, such as vultr-ams or gcp-vm, and assign it to groups as needed.
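You can also confirm the registration from the peer itself. The netbird status command reports whether the agent is connected to the management and signal services.
console
$ sudo netbird status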
Final Verification
Once you register both Vultr and GCP peers, confirm they can communicate securely over the NetBird private network.
In the Admin Panel, open the Peers tab.
- Both peers should display as Online.
- Each peer will have a 100.x.x.x mesh IP address assigned (see the check after this list).
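You can also read the mesh IP directly on a peer. netbird status prints the assigned NetBird IP, and the WireGuard interface created by the agent (commonly named wt0, though the name can vary) carries the 100.x.x.x address.
console
$ sudo netbird status
$ ip -4 addr show wt0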
From one peer (for example, your Vultr AMS VM), test connectivity to the other peer’s NetBird IP (for example, the GCP VM).
console$ ping 100.x.x.x # Replace with the actual mesh IP of the remote peer
If the ping succeeds, the NetBird mesh is working as expected.
Your output should be similar to the one below:
PING 100.100.1.2 (100.100.1.2) 56(84) bytes of data.
64 bytes from 100.100.1.2: icmp_seq=1 ttl=64 time=28.6 ms
64 bytes from 100.100.1.2: icmp_seq=2 ttl=64 time=28.7 ms
Repeat the ping in the opposite direction (from GCP to Vultr) to verify two-way connectivity.
Route Traffic Through an Exit Node
You can configure one peer as an exit node so others route their internet traffic through it. In this example, the Vultr AMS instance will act as the exit node, and the GCP VM will route through it.
Designate the Vultr AMS Instance as Exit Node
In the Peers tab of the Admin Panel, select the peer named vultr-ams.
Scroll down and click Set Up Exit Node.
Assign an identifier, such as ams-exit.
In the Distribution Groups dropdown, select or create a group that will include the GCP peer (for example, gcp-nodes).
Click Save Changes.
The vultr-ams instance is now configured as an exit node.
Assign the GCP Peer to the Distribution Group
- In the Peers view, click the GCP VM (for example, gcp-vm).
- Assign it to the gcp-nodes group.
- Confirm that it now appears in the list of peers using vultr-ams as an exit node.
Verify Routing
To confirm that traffic from the GCP VM is routed through the Vultr AMS exit node, run the following commands on the AMS-region Vultr peer. The first two steps enable IP forwarding and NAT so the exit node can forward traffic; the remaining steps verify that packets are flowing.
Enable IP forwarding.
console$ sudo sysctl -w net.ipv4.ip_forward=1
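The sysctl -w change lasts only until the next reboot. To make forwarding persistent, write the setting to a drop-in file under /etc/sysctl.d/ (the file name below is arbitrary) and reload.
console
$ echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-ip-forward.conf
$ sudo sysctl --system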
Add a MASQUERADE rule to NAT outbound traffic via the correct network interface.
console$ sudo iptables -t nat -A POSTROUTING -o $(ip route get 1.1.1.1 | awk '{print $5}') -j MASQUERADE
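This rule is also not persistent across reboots. One common approach on Ubuntu is the iptables-persistent package, which saves the current rules and restores them at boot.
console
$ sudo apt install iptables-persistent -y
$ sudo netfilter-persistent save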
Confirm IP forwarding is enabled.
console$ sudo sysctl net.ipv4.ip_forward
Monitor traffic routed through this peer.
console$ sudo watch -n1 "iptables -t nat -v -L POSTROUTING"
The output shows a live counter of packets hitting the MASQUERADE rule. You should see the packet and byte counts increase as traffic from the GCP VM flows through the Vultr AMS exit node.
Chain POSTROUTING (policy ACCEPT 120 packets, 9850 bytes)
 pkts bytes target     prot opt in   out     source      destination
   25  2100 MASQUERADE all  --  any  enp1s0  anywhere    anywhere
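To double-check centralized egress from the client side, compare the public IP the GCP VM reports with the public IP of the Vultr AMS instance. The example below uses a third-party IP echo service; substitute any service you trust. If the reported address matches the AMS instance's public IP, internet traffic from the GCP VM is leaving through the exit node.
console
$ curl -4 https://ifconfig.me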
Note: If the counters do not increase, confirm that the GCP peer is assigned vultr-ams as its exit node, and verify that both VM firewalls allow outbound connections.
NetBird Use Cases and Components
Common Multi-Cloud Use Cases
- Hybrid Cloud Networking: Build a single private network that spans Vultr and GCP workloads.
- Centralized Egress: Route Google Cloud traffic through a Vultr exit node for consistent geo-IP and monitoring.
- Secure Application Mesh: Link Kubernetes clusters, VMs, or bare metal across providers without exposing them publicly.
Key Components
- Management Service: Registers peers and applies network policies.
- Signal & TURN Server: Helps peers connect behind NAT and relays traffic when direct tunnels aren’t possible.
- Peer Agent: The client software that establishes encrypted WireGuard tunnels.
- Setup Keys: Tokens that let headless servers or VMs join without manual login.
- SSO Integration: Optional integration with identity providers like Google or GitHub.
Conclusion
In this guide, you deployed the NetBird control plane on a Vultr instance and connected peers across Vultr and Google Cloud. You registered peers using setup keys, verified private connectivity, and configured an exit node for centralized traffic routing. With this configuration, you can securely extend applications and workloads across providers, making Vultr–GCP deployments more flexible, resilient, and easier to manage.