Terraform
Reference
Install
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
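Verify the install (terraform version reports the installed binary version and platform):
terraform version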
List which versions of a package are available
apt list --all-versions terraform
Install a specific version of a package
sudo apt install terraform=1.7.5-1
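To stop apt from replacing a pinned version on the next upgrade, the package can be held (plain apt behaviour, not Terraform-specific):
sudo apt-mark hold terraform
# release the hold later with:
sudo apt-mark unhold terraform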
Commands
Terraform init
terraform init -reconfigure -get=true -backend-config=access_key=$SPACES_ACCESS_TOKEN -backend-config=secret_key=$SPACES_SECRET_KEY
terraform init -upgrade -reconfigure -get=true -backend-config=access_key=$SPACES_ACCESS_TOKEN -backend-config=secret_key=$SPACES_SECRET_KEY
terraform init -upgrade
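The -backend-config=key=value pairs above keep the Spaces credentials out of the committed backend block; terraform init also accepts a file path for -backend-config, so the same values can live in an uncommitted file (the file name and contents below are only an illustration):
# backend.hcl -- kept out of version control
access_key = "<spaces_access_key>"
secret_key = "<spaces_secret_key>"

terraform init -reconfigure -backend-config=backend.hcl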
Terraform taint (mark a resource to be recreated on the next apply)
terraform taint digitalocean_droplet.project-droplet
terraform taint digitalocean_droplet.project-droplet[0]
terraform taint digitalocean_database_db.database-name
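On recent Terraform (>= 0.15.2) the same forced recreation can be requested without tainting, using the -replace planning option (resource address reused from the examples above):
terraform plan -replace="digitalocean_droplet.project-droplet[0]" -var "do_token=${DIGITALOCEAN_ACCESS_TOKEN}" -out=tfplan -input=false
terraform apply -input=false tfplan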
Terraform plan
terraform plan -var "do_token=${DIGITALOCEAN_ACCESS_TOKEN}" -out=tfplan -input=false
terraform plan -var "do_token=${DIGITALOCEAN_ACCESS_TOKEN}" -var "spaces_token=${SPACES_ACCESS_TOKEN}" -var "spaces_secret=${SPACES_SECRET_KEY}" -out=tfplan -input=false
Terraform apply
terraform apply -input=false tfplan
View outputs
terraform output
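Single outputs can also be read directly, which is handy in scripts; -raw and -json are standard terraform output flags (the output name below matches the tag example further down this page):
terraform output -raw env_dev_id
terraform output -json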
List provider dependencies
terraform providers
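The listed dependencies come from the required_providers block of the configuration; for the DigitalOcean provider used throughout this page it looks roughly like this (the version constraint is only an example):
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}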
Configure S3 backend
terraform {
  backend "s3" {
    endpoint = "https://<s3_endpoint>"
    bucket   = "<s3_bucket_name>"

    # whatever directory/file name (*.tfstate) structure desired
    key = "terraform/global/tags/terraform.tfstate"

    skip_credentials_validation = true         # Needed for non AWS S3
    skip_metadata_api_check     = true         # Needed for non AWS S3
    region                      = "eu-west-2"  # Needed for non AWS S3. Basically this gets ignored, but the field is required
  }
}
Or, for recent Terraform versions (>= v1.6.6):
terraform {
  backend "s3" {
    endpoints = {
      s3 = "https://<s3_endpoint>"
    }

    bucket = "<s3_bucket_name>"

    # whatever directory/file name (*.tfstate) structure desired
    key = "terraform/global/tags/terraform.tfstate"

    skip_credentials_validation = true         # Needed for non AWS S3
    skip_requesting_account_id  = true         # Needed for non AWS S3
    skip_metadata_api_check     = true         # Needed for non AWS S3
    skip_s3_checksum            = true         # Needed for non AWS S3 [https://github.com/hashicorp/terraform/issues/34086]
    region                      = "eu-west-2"  # Needed for non AWS S3. Basically this gets ignored, but the field is required
  }
}
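Instead of passing access_key/secret_key at init time, either backend block can also read an AWS-style credentials profile, matching the shared_credentials_file/profile settings used in the remote-state examples below (profile name and file path are taken from those examples; the key names are the standard AWS ones):
# /etc/boto.cfg (or ~/.aws/credentials)
[digitalocean]
aws_access_key_id     = <spaces_access_key>
aws_secret_access_key = <spaces_secret_key>

# referenced from the backend block with:
#   profile                  = "digitalocean"
#   shared_credentials_files = ["/etc/boto.cfg"]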
Access S3 remote state
- In the exporting working directory:
Configure the S3 backend for the respective working directory;
Configure the resource and output the desired attribute to be used in the other workflow:
resource "digitalocean_tag" "env_dev" {
  name = "development"
}

output "env_dev_id" {
  value       = "${digitalocean_tag.env_dev.id}"
  description = "Tag ENVIRONMENT Development"
}
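After terraform apply runs in the exporting directory, the output value is recorded in that directory's remote state file, which is exactly what the importing side reads; it can be checked with:
terraform output env_dev_id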
- In the importing working directory:
Configure the S3 backend for the respective working directory;
- Requirements:
A configured profile with access_key and secret;
Configure a "terraform_remote_state" data source to access the remote working directory's state, then reference its exported output field in the resource:
data "terraform_remote_state" "tags" {
  backend = "s3"
  config = {
    endpoint = "https://<s3_endpoint>"
    bucket   = "<s3_bucket_name>"
    # Path to the file we want to retrieve data from
    key = "terraform/global/tags/terraform.tfstate"

    shared_credentials_file = "/etc/boto.cfg"
    profile                 = "digitalocean"

    skip_credentials_validation = true         # Needed for non AWS S3
    skip_metadata_api_check     = true         # Needed for non AWS S3
    region                      = "eu-west-2"  # Needed for non AWS S3. Basically this gets ignored, but the field is required
  }
}

resource "digitalocean_droplet" "project-droplet" {
  ...
  tags = ["${data.terraform_remote_state.tags.outputs.env_dev_id}"]
}
Or, for recent Terraform versions (>= v1.6.6):
data "terraform_remote_state" "tags" {
  backend = "s3"
  config = {
    endpoints = {
      s3 = lookup(var.s3, "ENDPOINT")
    }

    bucket = lookup(var.s3, "BUCKET")

    # Path to the file we want to retrieve data from
    key = "terraform/eco-b24/webb24/terraform.tfstate"

    shared_credentials_files = [lookup(var.s3, "CRED_FILE")]
    profile                  = lookup(var.s3, "PROFILE")

    skip_credentials_validation = true         # Needed for non AWS S3
    skip_requesting_account_id  = true         # Needed for non AWS S3
    skip_metadata_api_check     = true         # Needed for non AWS S3
    region                      = "eu-west-2"  # Needed for non AWS S3. Basically this gets ignored, but the field is required
  }
}
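The lookup(var.s3, ...) calls assume a map-typed variable along these lines (the variable name and keys come from the example above; the values are placeholders):
variable "s3" {
  type = map(string)
  default = {
    ENDPOINT  = "https://<s3_endpoint>"
    BUCKET    = "<s3_bucket_name>"
    CRED_FILE = "/etc/boto.cfg"
    PROFILE   = "digitalocean"
  }
}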
Import Resources
The current implementation of Terraform import can only import resources into the state. It does not generate configuration. A future version of Terraform will also generate configuration.
Because of this, prior to running terraform import it is necessary to manually write a resource configuration block for the resource to which the imported object will be mapped.
Manually create a resource:
resource "digitalocean_domain" "default" {
  #
}
The name “default” here is local to the module where it is declared and is chosen by the configuration author. This is distinct from any ID issued by the remote system, which may change over time while the resource name remains constant.
2a. Terraform import (domain example):
terraform import -var "do_token=${DIGITALOCEAN_ACCESS_TOKEN}" digitalocean_domain.default sportmultimedia.pt
2b. Terraform import (firewall example):
Firewall ID obtained via:
doctl compute firewall list
terraform import -var "do_token=${DIGITALOCEAN_ACCESS_TOKEN}" digitalocean_firewall.project-firewall 9b3c63d3-86bb-4187-b9f9-d777b80f4674
2c. Terraform import (droplet example):
Droplet ID obtained via:
doctl compute droplet list | grep <droplet name>
terraform import -var "do_token=${DIGITALOCEAN_ACCESS_TOKEN}" digitalocean_droplet.project-droplet 116576246
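After importing, the attributes now recorded in state can be inspected with terraform state show (works for any of the addresses above); this helps when writing the matching configuration in the next step:
terraform state show digitalocean_domain.default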
Edit resource to match:
resource "digitalocean_domain" "default" {
  name = "sportmultimedia.pt"
}
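Since Terraform v1.5 there is also a declarative alternative to the steps above: an import block plus -generate-config-out lets Terraform write the configuration skeleton itself (the "future version" mentioned earlier). A minimal sketch for the domain example:
import {
  to = digitalocean_domain.default
  id = "sportmultimedia.pt"
}

terraform plan -generate-config-out=generated.tf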