How to Provision Cloud Infrastructure on Vultr using Terraform
Introduction
Terraform is an infrastructure as code tool that uses a declarative configuration language to create, update, and delete infrastructure resources. Instead of writing step-by-step instructions to create resources, Terraform lets you define the resources directly in a configuration file. If the resources do not exist yet, Terraform creates them. If they exist but their current state does not match the desired state, Terraform modifies them. This allows you to manage infrastructure resources efficiently, especially when the infrastructure is large and complicated.
Terraform consists of two main components, the core and provider:
- The core component reads the configuration files, stores the state for the resources, creates an execution plan, and applies it.
- The provider component implements the methods used to interact with a platform's API, such as authentication and authorization, and retrieving or manipulating resources. Every supported platform has its own provider.
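For example, a single resource block describes the end state of a server rather than the steps to build it. The block below is a minimal sketch using the Vultr provider covered later in this article; the label, region, and plan values are placeholders:
# Desired state: one cloud server with this label, plan, and region.
# Terraform compares this block with the real infrastructure and creates,
# updates, or leaves the server untouched so the two match.
resource "vultr_instance" "example" {
  label  = "declared-server"
  region = "sgp"
  plan   = "vc2-1c-1gb"
}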
This article explains how to provision Vultr Cloud Infrastructure using Terraform. You will provision multiple resources such as cloud instances, Kubernetes clusters, and databases using your Vultr account API key.
Prerequisites
Before you begin:
Deploy an Ubuntu server to use as a management machine
Activate and copy your Vultr API Key from the Vultr Customer Portal Settings Page
When API access is enabled, add your management machine IP to the allowed IP subnets list
Using SSH, access the server
Create a non-root user with sudo privileges
Switch to the new sudo user account
# su sysadmin
> This article uses the example user sysadmin. Replace the username with your actual system user account.
To run the curl commands in this article, export your Vultr API Key as an environment variable
$ export VULTR_API_KEY="your-vultr-api-key"
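Optionally, verify that the key and your allowed IP settings work before using them with Terraform by querying the Vultr API account endpoint:
$ curl "https://api.vultr.com/v2/account" \
    -X GET \
    -H "Authorization: Bearer ${VULTR_API_KEY}"
A successful request returns your account details in JSON. An authentication error usually means the key or the allowed IP subnets list needs review.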
Install Terraform
Add the Terraform GPG key to your server
$ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
Add the official Terraform repository to your APT sources
$ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com focal main"
Update the server packages
$ sudo apt update
Install Terraform on the server
$ sudo apt install terraform
Install the Vultr Terraform Provider
The Vultr Terraform provider allows you to create infrastructure resources using your Vultr account API key. Install the provider with the correct information as described in the steps below.
Navigate to your user home directory
$ cd
Create a Terraform workspace to store your resource files
$ mkdir vultr-terraform
Switch to the new directory
$ cd vultr-terraform
Using a text editor such as Nano, create a new file provider.tf to store the Vultr provider information
$ nano provider.tf
Add the following contents to the file
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      version = "2.15.1"
    }
  }
}

provider "vultr" {
  api_key = var.VULTR_API_KEY
}

variable "VULTR_API_KEY" {}
Save and close the file.
The above configuration instructs Terraform to use Vultr as the provider with the source value vultr/vultr and version 2.15.1. To find the latest version, visit the Vultr Provider GitHub repository.
Create a new file named terraform.tfvars to define your Vultr API key
$ nano terraform.tfvars
Add the following directive to the file. Replace your_vultr_api_key with your actual Vultr API key
VULTR_API_KEY = "your_vultr_api_key"
Save and close the file.
Initialize Terraform to install the Vultr Terraform provider
$ terraform init
When successful, your output should look like the one below:
Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
You have activated and authenticated Terraform to work with your Vultr account. You can define resources and deploy them to your account as you would through the graphical Vultr Customer Portal.
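Optionally, confirm the provider installation and check your configuration files before planning any changes:
$ terraform providers
$ terraform validate
terraform providers lists the providers the workspace requires, and terraform validate reports syntax or configuration errors in your .tf files.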
Deploy Vultr Cloud Instances
To deploy Vultr Cloud Instances using Terraform, choose your desired server name, specifications, and region. Then, define the Terraform instance details and apply changes to your Vultr account as described below.
Create a new Terraform resource file vultr_instance.tf
$ nano vultr_instance.tf
Add the following contents to the file
resource "vultr_instance" "my_instance" { label = "sample-server" plan = "vc2-1c-1gb" region = "sgp" os_id = "387" enable_ipv6 = true }
Save and close the file.
Below is what the above resource file defines:
- vultr_instance: Sets the Vultr resource type you intend to deploy. vultr_instance declares a server instance. Replace the alias my_instance with your desired value to distinguish the instance.
- label: Specifies the instance label. Replace sample-server with your desired instance name to uniquely identify the resource in your Vultr account.
- plan: Sets your desired instance specification. The vc2-1c-1gb plan matches a Vultr instance of type vc2 with 1 vCPU core and 1 GB RAM. To view a full list of all available plans, visit the Vultr Plans documentation.
- region: Specifies your desired Vultr region to deploy the instance. sgp deploys the instance to the Singapore Vultr location. To view a list of all available regions, visit the Vultr Datacenter Locations page and use the short form of a location. For example, NJ translates to New Jersey. Use the following command to list the available regions by ID:
$ curl "https://api.vultr.com/v2/regions" \
    -X GET \
    -H "Authorization: Bearer ${VULTR_API_KEY}"
- os_id: Sets the instance Operating System (OS) by ID. The value 387 represents Ubuntu 20.04. To view a list of all available operating system IDs, run the following command:
$ curl "https://api.vultr.com/v2/os" \
    -X GET \
    -H "Authorization: Bearer ${VULTR_API_KEY}"
- enable_ipv6: Enables a public IPv6 address on the Vultr instance
Preview the changes you are about to apply
$ terraform plan
Output:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # vultr_instance.my_instance will be created
  + resource "vultr_instance" "my_instance" {
      + allowed_bandwidth = (known after apply)
      + app_id            = (known after apply)
      + backups           = "disabled"
      + date_created      = (known after apply)
      + ddos_protection   = false
      + default_password  = (sensitive value)
      + disk              = (known after apply)
Create the Vultr instance
$ terraform apply
When prompted, enter yes to confirm that you want to apply the changes
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
When successful, your output should look like the one below:
vultr_instance.my_instance: Creating...
vultr_instance.my_instance: Still creating... [10s elapsed]
...
vultr_instance.my_instance: Creation complete after 1m22s [id=e8914416-4900-42bc-a5d0-80772240a29a]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
You have deployed an Ubuntu 20.04 instance with 1 vCPU and 1 GB RAM in the Singapore Vultr region. To view your instance details, either access your Vultr account dashboard or use the Vultr CLI to reveal login information and usage statistics within your terminal session.
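To view the new server's public IP address without opening the dashboard, you can add a Terraform output. The sketch below assumes the provider exposes the instance's public IPv4 address as the main_ip attribute; verify the attribute name against the Vultr provider documentation for your version. For example, create an outputs.tf file with the following contents:
output "instance_main_ip" {
  description = "Public IPv4 address of the sample-server instance"
  value       = vultr_instance.my_instance.main_ip
}
After the next terraform apply, display the value with terraform output instance_main_ip.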
Deploy Kubernetes Clusters
Create a new Kubernetes resource file kubernetes_cluster.tf
$ nano kubernetes_cluster.tf
Add the following contents to the file
resource "vultr_kubernetes" "first_kubernetes_cluster" { region = "sgp" label = "my-cluster" version = "v1.27.2+1" node_pools { node_quantity = 3 plan = "vc2-2c-4gb" label = "my-app-nodes" auto_scaler = true min_nodes = 1 max_nodes = 4 } }
Save and close the file.
Below are the resource definitions in the above file:
- vultr_kubernetes: Sets Vultr Kubernetes Engine (VKE) as the resource type. first_kubernetes_cluster is the alias that differentiates the Terraform resource. Replace it with your desired value.
- region: Defines your target Vultr datacenter region. sgp deploys your VKE cluster in the Vultr Singapore region.
- label: Sets your Kubernetes cluster label. Replace my-cluster with your desired label that describes your cluster.
- version: Specifies your target Kubernetes version. To view the available VKE versions, run the following command:
$ curl https://api.vultr.com/v2/kubernetes/versions
The available versions display in your output like the one below:
{"versions":["v1.27.2+1","v1.26.5+1","v1.25.10+1"]}
- node_pools: Defines the VKE node pool specifications
- node_quantity: Sets the number of VKE nodes to add to your cluster
- plan: Sets the node specifications plan. vc2-2c-4gb defines regular compute nodes with 2 vCPU cores and 4 GB RAM. View the Vultr Plans list to set your desired VKE node specifications.
- label: Defines the descriptive label of your VKE nodes. Replace my-app-nodes with your desired node label.
- auto_scaler: true enables auto-scaling on your VKE nodes, false disables auto-scaling
- min_nodes: Sets the minimum number of nodes in the pool
- max_nodes: Sets the maximum number of nodes in the node pool
To create the VKE cluster, verify that your node pool has at least 1 defined node. The above resource file creates a 3-node VKE cluster.
View the Terraform changes you are about to apply
$ terraform plan
Create the Kubernetes cluster
$ terraform apply
When prompted, enter yes to apply the changes and create the VKE cluster. Wait until the cluster creation completes with the following output and note the generated cluster ID:
vultr_kubernetes.first_kubernetes_cluster: Still creating... [2m40s elapsed]
vultr_kubernetes.first_kubernetes_cluster: Still creating... [2m50s elapsed]
vultr_kubernetes.first_kubernetes_cluster: Still creating... [3m0s elapsed]
vultr_kubernetes.first_kubernetes_cluster: Creation complete after 3m6s [id=e565d8a5-480b-47f0-930e-a974d2767fef]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
You have created a 3-node Vultr Kubernetes Engine Cluster with auto-scaling to up to 4 nodes. To view your cluster information, access your Vultr account dashboard or use the Vultr CLI for detailed output on the cluster statistics.
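To work with the cluster using kubectl, you can export its kubeconfig through a Terraform output. This is a sketch that assumes the provider exposes a base64-encoded kube_config attribute on the vultr_kubernetes resource; confirm the attribute name and encoding in the Vultr provider documentation for your version.
output "vke_kube_config" {
  description = "Base64-encoded kubeconfig for the VKE cluster"
  value       = vultr_kubernetes.first_kubernetes_cluster.kube_config
  sensitive   = true
}
Apply the change, then decode the value into a kubeconfig file:
$ terraform output -raw vke_kube_config | base64 --decode > vke-config.yaml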
Add Node Pools to a Vultr Kubernetes Engine (VKE) Cluster
To add a new node pool and scale your VKE cluster, edit the cluster Terraform resource file and define new nodes as described below.
Edit the kubernetes_cluster.tf file
$ nano kubernetes_cluster.tf
Add the following configurations at the end of the file. The cluster_id value references the ID of the VKE cluster you created earlier; alternatively, replace it with the cluster ID displayed in your cluster creation output.
resource "vultr_kubernetes_node_pools" "additional_node_pools" {
  cluster_id    = "${vultr_kubernetes.first_kubernetes_cluster.id}"
  node_quantity = 1
  plan          = "vc2-4c-8gb"
  label         = "additional-node-pool"
  tag           = "additional-node-pool"
  auto_scaler   = true
  min_nodes     = 1
  max_nodes     = 2
}
Save and close the file.
Below is what the node resource definitions represent:
- vultr_kubernetes_node_pools: Defines the Vultr Kubernetes node pools resource type. additional_node_pools sets an alias for your Terraform resource name.
- cluster_id: Sets the target VKE cluster to scale with additional nodes. The ${vultr_kubernetes.first_kubernetes_cluster.id} value references the cluster with the alias name first_kubernetes_cluster.
- node_quantity: Defines the total number of additional nodes
- plan: Sets the node server specifications. vc2-4c-8gb defines nodes with 4 vCPU cores and 8 GB RAM.
- label: Defines your custom descriptive node label
- tag: Sets an optional tag to identify the node pool
- auto_scaler: Activates or deactivates auto-scaling of the cluster nodes
- min_nodes: Sets the minimum number of nodes in the node pool
- max_nodes: Sets the maximum number of nodes in the node pool
Create the additional node pool and attach it to your existing Kubernetes cluster
$ terraform apply
Your output should look like the one below:
vultr_kubernetes_node_pools.additional_node_pools: Still creating... [1m10s elapsed]
vultr_kubernetes_node_pools.additional_node_pools: Still creating... [1m20s elapsed]
vultr_kubernetes_node_pools.additional_node_pools: Still creating... [1m30s elapsed]
vultr_kubernetes_node_pools.additional_node_pools: Creation complete after 1m37s [id=5567557a-7165-5153-524c-4b7878483847]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
You have scaled your Kubernetes Cluster to include new additional nodes. To verify the change, visit your cluster dashboard or use Vultr CLI to view the nodes attached to your VKE cluster.
Deploy Object and Block Storage Volumes
You can deploy S3-compatible Vultr Object Storage or attach Block Storage volumes to your Vultr instances. Depending on your storage type of choice, define Terraform resource configurations to deploy storage on your Vultr account as described in the steps below.
Deploy Vultr Object Storage
Create a new object_storage.tf file
$ nano object_storage.tf
Add the following contents to the file
resource "vultr_object_storage" "example_object_storage" {
  cluster_id = 4
  label      = "Example Object Storage"
}
Save and close the file
The above configuration deploys Vultr Object Storage to the Singapore Vultr region with the following declarations:
- cluster_id: Sets your Vultr Object Storage deployment region. The value 4 deploys Object Storage to the Vultr Singapore region. To view the available region IDs, run the following command:
$ curl "https://api.vultr.com/v2/object-storage/clusters" \
    -X GET \
    -H "Authorization: Bearer ${VULTR_API_KEY}"
- label: Defines your descriptive Vultr Object Storage label for identification
Preview the Terraform object storage changes you are about to apply
$ terraform plan
Output:
  # vultr_object_storage.example_object_storage will be created
  + resource "vultr_object_storage" "example_object_storage" {
      + cluster_id   = 4
      + date_created = (known after apply)
      + id           = (known after apply)
      + label        = "Example Object Storage"
Apply the Vultr Object Storage volume to your account
$ terraform apply
Output:
vultr_object_storage.example_object_storage: Still creating... [50s elapsed]
vultr_object_storage.example_object_storage: Still creating... [1m0s elapsed]
vultr_object_storage.example_object_storage: Creation complete after 1m2s [id=397d9828-1b8b-4b7e-85c2-76b2e2529cc7]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
In the Vultr Customer Portal, navigate to the Object Storage dashboard to view your deployed Vultr Object Storage
To create buckets, install the s3cmd CLI tool on your server and use it to manage your Vultr Object Storage
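To pass the bucket credentials to s3cmd, you can expose them as Terraform outputs. The sketch below assumes the provider exports the s3_hostname, s3_access_key, and s3_secret_key attributes on the vultr_object_storage resource; verify the attribute names against the Vultr provider documentation for your version.
output "object_storage_hostname" {
  value = vultr_object_storage.example_object_storage.s3_hostname
}

output "object_storage_access_key" {
  value     = vultr_object_storage.example_object_storage.s3_access_key
  sensitive = true
}

output "object_storage_secret_key" {
  value     = vultr_object_storage.example_object_storage.s3_secret_key
  sensitive = true
}
After the next terraform apply, view the sensitive values with terraform output object_storage_access_key.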
Deploy Vultr Block Storage
Create a new block_storage.tf file
$ nano block_storage.tf
Add the following contents to the file
resource "vultr_block_storage" "example_block_storage" {
  size_gb = 10
  region  = "sgp"
  label   = "New Block Storage"
}
Save and close the file
Below is what the resource configurations represent:
- vultr_block_storage: Defines Vultr Block Storage as the resource type
- example_block_storage: Sets the resource alias name
- size_gb: Defines the Vultr Block Storage volume size. 10 creates a volume with 10 GB of space. Supported values range from 10 to 40000 GB.
- region: Sets your target Vultr Block Storage deployment region
- label: Defines your descriptive Vultr Block Storage label
Preview the Terraform block storage changes you are about to apply
$ terraform plan
Apply the Vultr Block Storage volume to your account
$ terraform apply
Output:
vultr_block_storage.example_block_storage: Still creating... [20s elapsed]
vultr_block_storage.example_block_storage: Creation complete after 22s [id=4c916f24-1e99-415c-9ddb-cf16f67f3f76]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
To verify that your Vultr Block Storage volume deploys correctly, visit your Vultr account dashboard or fetch the volume details using the Vultr CLI
Attach Block Storage to a Vultr Cloud Instance
Edit the block_storage.tf file
$ nano block_storage.tf
Update the file to include the attached_to_instance declaration with your Vultr instance resource alias ID
resource "vultr_block_storage" "example_block_storage" {
  size_gb              = 10
  region               = "sgp"
  attached_to_instance = "${vultr_instance.my_instance.id}"
}
Save and close the file.
The attached_to_instance declaration instructs Terraform to attach the Vultr Block Storage volume to the target instance. Replace my_instance with the alias of the Vultr server instance you created earlier.
Apply changes to add block storage to your Vultr instance
$ terraform apply
Output:
vultr_block_storage.example_block_storage: Modifying... [id=168db64d-c2a1-435c-9f01-8af7279a8fcb]
vultr_block_storage.example_block_storage: Still modifying... [id=168db64d-c2a1-435c-9f01-8af7279a8fcb, 10s elapsed]
vultr_block_storage.example_block_storage: Modifications complete after 14s [id=168db64d-c2a1-435c-9f01-8af7279a8fcb]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
In your Vultr Block Storage dashboard, verify that the volume is attached to your server. To use the Vultr Block Storage volume on your server, mount it to a server directory such as /mnt for access by all system users.
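As a quick sketch of the mounting step, the commands below assume the new volume appears on the instance as /dev/vdb and is unformatted; confirm the actual device name with lsblk before formatting, because mkfs erases any existing data on the device.
$ lsblk
$ sudo mkfs.ext4 /dev/vdb
$ sudo mount /dev/vdb /mnt
To keep the volume mounted across reboots, add a matching entry to /etc/fstab.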
Deploy Managed Databases
In this section, declare Terraform resource definitions to:
- Create a managed database
- Add a user to the Managed Database
- View the deployed managed database
Create a Vultr Managed Database
Create a new database resource file database.tf
$ nano database.tf
Add the following declarations to the file
resource "vultr_database" "prod_redis_database" {
  database_engine         = "redis"
  database_engine_version = "7"
  region                  = "sgp"
  plan                    = "vultr-dbaas-startup-occ-mo-2-26-16"
  label                   = "production-database"
}
Save and close the file.
The above Vultr Managed Database for Caching resource configurations represent the following values:
- vultr_database: Defines a Vultr Managed Database resource
- database_engine: Sets the Vultr Managed Database engine. redis defines a Redis® database. Supported values are redis, mysql, and pg.
- database_engine_version: Sets the Vultr Managed Database version. 7 deploys a Redis® 7.0 Vultr database. To view the available versions, visit the Create Database Vultr API page.
- region: Defines your Vultr Managed Database region. sgp deploys the database to the Singapore Vultr region.
- plan: Sets the Vultr Managed Database plan ID. vultr-dbaas-startup-occ-mo-2-26-16 defines a database with a backend server with 2 vCPUs and 16 GB RAM. To view a list of available plans, run the following command:
$ curl "https://api.vultr.com/v2/databases/plans" -X GET -H "Authorization: Bearer ${VULTR_API_KEY}" > export.txt
Find your desired plan and verify the supported engines in your output like the one below:
{"id":"vultr-dbaas-startup-occ-mo-2-26-16","number_of_nodes":1,"type":"occ_mo","vcpu_count":2,"ram":16384,"disk":42,"monthly_cost":160,"supported_engines":{"mysql":false,"pg":false,"redis":true},
- label: Defines your custom descriptive label for the Vultr Managed Database. Replace production-database with your desired value.
Preview the changes you are about to apply
$ terraform plan
You should see a similar output in the console:
Terraform will perform the following actions:

  # vultr_database.prod_redis_database will be created
  + resource "vultr_database" "prod_redis_database" {
      + database_engine         = "redis"
      + database_engine_version = "7"
      + label                   = "production-database"
      + plan                    = "vultr-dbaas-startup-occ-mo-2-26-16"
      + region                  = "sgp"
    }

Plan: 1 to add, 0 to change, 0 to destroy.
Apply changes to deploy the Vultr Managed Database
$ terraform apply
Output:
vultr_database.prod_redis_database: Creating...
vultr_database.prod_redis_database: Still creating... [10s elapsed]
...
vultr_database.prod_redis_database: Still creating... [4m50s elapsed]
vultr_database.prod_redis_database: Creation complete after 4m56s [id=3339b4b9-55db-4b1e-9ce9-130ea1bc686f]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
You have deployed a Vultr Managed Database. Visit your Vultr account dashboard or use the Vultr CLI to view the database details.
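To retrieve the database connection details from Terraform instead of the dashboard, you can add outputs for the managed database. This sketch assumes the vultr_database resource exports host and port attributes; verify the attribute names in the Vultr provider documentation for your version.
output "database_host" {
  description = "Managed database hostname"
  value       = vultr_database.prod_redis_database.host
}

output "database_port" {
  description = "Managed database connection port"
  value       = vultr_database.prod_redis_database.port
}
After the next terraform apply, run terraform output database_host to print the hostname.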
Create a New Vultr Managed Database User
Edit the database.tf file
$ nano database.tf
Add the following declarations at the end of the file
resource "vultr_database_user" "new_redis_database_user" {
  database_id = "${vultr_database.prod_redis_database.id}"
  username    = "redisUser"
  password    = "redisPassword"
}
Save and close the file.
- vultr_database_user: Defines a new Vultr Managed Database user resource
- new_redis_database_user: Sets the resource alias name
- database_id: Sets the target database resource to apply the new user to. prod_redis_database.id points to the Vultr Managed Database resource you deployed earlier.
- username: Defines the new database username
- password: Sets the new database user password
Apply changes to create the new Vultr Managed Database user
$ terraform apply
Output:
vultr_database.prod_redis_database: Modifying... [id=4f87efba-d514-4ed0-876c-669b04ae3e78]
vultr_database.prod_redis_database: Modifications complete after 6s [id=4f87efba-d514-4ed0-876c-669b04ae3e78]
vultr_database_user.new_redis_database_user: Creating...
vultr_database_user.new_redis_database_user: Creation complete after 7s [id=redisUser]

Apply complete! Resources: 1 added, 1 changed, 0 destroyed.
You have added a new user to your Vultr Managed Database. To verify the new user, visit the Managed Database dashboard in your Vultr account or use the Vultr CLI.
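As a quick connection sketch, assuming redis-cli version 6 or later with TLS support is installed on your management machine, your machine's IP is allowed in the database's trusted sources, and you substitute the hostname and port from your database connection details:
$ redis-cli -u "rediss://redisUser:redisPassword@<database-hostname>:<port>" PING
A PONG reply confirms that the new user authenticates against the managed database.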
Create a Vultr Virtual Private Cloud (VPC)
A Vultr Virtual Private Cloud (VPC) is an isolated private network that connects multiple Vultr instances on the same subnet. Vultr offers both VPC and VPC 2.0 products to interconnect Vultr instances. In this section, deploy a VPC and attach it to a Vultr instance as described below.
Deploy a Vultr VPC Network
Create a new VPC resource file vpc.tf
$ nano vpc.tf
Add the following contents to the file
resource "vultr_vpc" "prod_vpc" {
  description    = "production servers vpc"
  region         = "sgp"
  v4_subnet      = "192.168.0.0"
  v4_subnet_mask = 24
}
Save and close the file.
Below is what the above resource definitions represent:
- vultr_vpc: Defines the Vultr VPC resource type. prod_vpc sets the resource alias name for identification.
- description: Defines the Vultr VPC descriptive label
- region: Sets the Vultr region to deploy the VPC
- v4_subnet: Defines the Vultr VPC private IPv4 address subnet. You can use any of the following RFC 1918 private address ranges:
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
- v4_subnet_mask: Sets the private IP address subnet mask. 24 sets a 255.255.255.0 subnet mask for the VPC IP block.
Apply changes to create the Vultr VPC
$ terraform apply
Output:
vultr_vpc.prod_vpc: Creating...
vultr_vpc.prod_vpc: Creation complete after 7s [id=61591879-383c-44fa-a8ee-cbbbe3f500cb]
You have created a Vultr VPC resource. To verify the private network, visit your Vultr account VPC Networks page
Attach a Vultr VPC to a Vultr Cloud Instance
Edit your target Vultr Cloud Instance file
$ nano vultr_instance.tf
Update the file to include the vpc_ids declaration with the Vultr VPC ID generated during deployment
resource "vultr_instance" "my_instance" {
  label       = "sample-server"
  plan        = "vc2-1c-1gb"
  region      = "sgp"
  os_id       = "387"
  enable_ipv6 = true
  vpc_ids     = ["61591879-383c-44fa-a8ee-cbbbe3f500cb"]
}
Save and close the file
Apply changes to your Vultr account
$ terraform apply
Output:
vultr_instance.my_instance: Still modifying... [id=db294f28-e98e-4eb5-9d16-192c6210f528, 10s elapsed]
vultr_instance.my_instance: Modifications complete after 18s [id=db294f28-e98e-4eb5-9d16-192c6210f528]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
In your Vultr Customer Portal, visit your Vultr instance settings page and verify that it's attached to the Vultr VPC network.
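Instead of hard-coding the VPC ID, you can reference the VPC resource directly so Terraform resolves the ID at apply time. This is a sketch that assumes the vultr_vpc.prod_vpc resource lives in the same workspace as the instance:
resource "vultr_instance" "my_instance" {
  label       = "sample-server"
  plan        = "vc2-1c-1gb"
  region      = "sgp"
  os_id       = "387"
  enable_ipv6 = true
  # Reference the VPC resource so its ID is resolved automatically
  vpc_ids     = [vultr_vpc.prod_vpc.id]
}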
Destroy Terraform Infrastructure Resources
To destroy deployed Terraform infrastructure resources from your Vultr account, define your target resources using the following syntax
$ terraform destroy -target=resource_type.resource_name
For example, to destroy the Vultr Cloud Instance you deployed earlier, run the following command
$ terraform destroy -target=vultr_instance.my_instance
When prompted, verify the resources to destroy, enter yes, and press Enter to destroy the Vultr instance as displayed in the following output:
  # vultr_block_storage.example_block_storage will be destroyed
  - resource "vultr_block_storage" "example_block_storage" {
      - attached_to_instance = "db294f28-e98e-4eb5-9d16-192c6210f528" -> null
      - block_type           = "high_perf" -> null
    }

  # vultr_instance.my_instance will be destroyed
  - resource "vultr_instance" "my_instance" {
      - allowed_bandwidth = 3 -> null
      - app_id            = 0 -> null

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:
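To look up the exact resource addresses Terraform tracks, which is useful when composing -target arguments, list the entries in your state file:
$ terraform state list
The command prints one address per resource, such as vultr_instance.my_instance or vultr_vpc.prod_vpc, matching the resource type and alias defined in your .tf files.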
To delete all cloud infrastructure resources deployed using Terraform, run the following command:
$ terraform destroy
> Running the above command is not recommended. When using it, verify that you are destroying the correct Vultr resources deployed using Terraform.
Verify the infrastructure resources you are about to destroy, then enter yes to destroy all resources. To cancel, press Ctrl + C. When successful, your output should look like the one below:
Plan: 0 to add, 0 to change, 7 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

vultr_vpc.prod_vpc: Destroying... [id=61591879-383c-44fa-a8ee-cbbbe3f500cb]
vultr_database_user.new_redis_database_user: Destroying... [id=redisUser]
vultr_object_storage.example_object_storage: Destroying... [id=397d9828-1b8b-4b7e-85c2-76b2e2529cc7]
vultr_block_storage.example_block_storage: Destroying... [id=168db64d-c2a1-435c-9f01-8af7279a8fcb]
vultr_object_storage.example_object_storage: Destruction complete after 4s
Terraform Commands
To correctly run Terraform when deploying Vultr Cloud Infrastructure resources, use the operation commands below to initialize, validate, and apply resource configurations to your Vultr account.
- init: Initializes the Terraform working directory and installs the defined provider plugins
- refresh: Reads the state of cloud infrastructure resources and updates the Terraform state file to match the resource status
- validate: Checks the Terraform resource files for syntax errors, formatting errors, and invalid configurations
- plan: Lists the Terraform changes you are about to apply to your Vultr account
- apply: Applies the defined Terraform resource configurations to your Vultr account
- destroy: Deletes the Vultr Cloud Infrastructure Terraform resources tracked in your working project directory
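As a quick reference, a typical run through a workspace such as vultr-terraform follows this order:
$ terraform init       # install the Vultr provider plugin
$ terraform validate   # check the .tf files for errors
$ terraform plan       # preview the changes
$ terraform apply      # create or update the resources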
For more information about the Terraform commands, visit the official CLI documentation
Conclusion
You have installed Terraform and the Vultr Terraform provider to provision cloud resources using your Vultr API key. You can create multiple project directories to store different resource definitions in separate locations and avoid overriding changes with destructive commands such as terraform destroy. For more information on how to use Terraform, visit the Vultr Terraform registry.