P3: Resolving Terraform "state" problems

We rely on Terraform to manage a significant slice of our infrastructure, following the IaC approach. The IaC code is kept in a dedicated Git project hosted on our self-hosted GitLab instance; the same project also keeps track of the "current" infrastructure state (the Terraform 'state').
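
For reference, the GitLab-managed Terraform state is wired to the project through Terraform's standard http backend. The real backend block isn't reproduced here, but it looks roughly like the following sketch (the address matches the state API URL that shows up later in this article; credentials are supplied at terraform init time, or via the TF_HTTP_USERNAME/TF_HTTP_PASSWORD environment variables):

terraform {
  backend "http" {
    address        = "https://gitlab.garrlab.it/api/v4/projects/91/terraform/state/IAC-GARRISTINI"
    lock_address   = "https://gitlab.garrlab.it/api/v4/projects/91/terraform/state/IAC-GARRISTINI/lock"
    unlock_address = "https://gitlab.garrlab.it/api/v4/projects/91/terraform/state/IAC-GARRISTINI/lock"
    lock_method    = "POST"
    unlock_method  = "DELETE"
  }
}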

Today I discovered a mismatch between the state and the real infrastructure. If not addressed and solved, such a mismatch could cause serious problems to the infrastructure itself (dropping and re-creating three OpenStack instances… and related services!)

In this article, I’m going to detail how we addressed the problem.

1 - Context

I needed to add a new VM/instance to our OpenStack-based infrastructure. To do this, I was going to add a specific new .tf file but…​ just before beginning, I checked that everything was in place.

First I loaded the environment settings, so that I could authenticate against both OpenStack and GitLab:

[verzulli@XPSGarr ~]$ cd git/garrlab/IaC_GARRistini/
[verzulli@XPSGarr IaC_GARRistini]$ source 00_secrets.env
[verzulli@XPSGarr IaC_GARRistini]$ source 01_loadenv.env
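
The two .env files are obviously not published; roughly speaking, they export the usual OpenStack OS_* variables plus the GitLab credentials the http backend needs, something like this (illustrative only):

# 00_secrets.env (illustrative)
export OS_PASSWORD='<openstack-password>'
export TF_HTTP_USERNAME='<gitlab-username>'
export TF_HTTP_PASSWORD='<gitlab-personal-access-token>'

# 01_loadenv.env (illustrative)
export OS_AUTH_URL='https://<keystone-endpoint>:5000/v3'
export OS_USERNAME='<openstack-user>'
export OS_PROJECT_NAME='<openstack-project>'
export OS_USER_DOMAIN_NAME='Default'
export OS_IDENTITY_API_VERSION=3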

Then I checked the state:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state list
data.openstack_compute_keypair_v2.DV_SSH_KEY
data.openstack_networking_network_v2.NET_GARRISTINI
data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL
data.openstack_networking_subnet_v2.SUBNET_GARRISTINI
openstack_compute_instance_v2.JAMES_INFRA
openstack_compute_instance_v2.LOGBITER_INFRA
openstack_compute_instance_v2.sandbox
openstack_networking_port_v2.JAMES_INFRA_PORT
openstack_networking_port_v2.LOGBITER_INFRA_PORT
openstack_networking_port_v2.sandbox-netlink
[verzulli@XPSGarr IaC_GARRistini]$

Then I ran terraform plan to check that such state properly reflected the real infrastructure. Unfortunately, I got a surprise at the end of the output:

[...]
Plan: 5 to add, 0 to change, 3 to destroy.
[...]

That was a problem, as it meant that 3 resources were going to be DESTROYED.

As no known modification had been applied, I decided to investigate.

From the terraform plan output, these are the relevant lines:

[verzulli@XPSGarr IaC_GARRistini]$ terraform plan
[...]
# openstack_compute_instance_v2.JAMES_INFRA must be replaced
-/+ resource "openstack_compute_instance_v2" "JAMES_INFRA" {
    [...]
    ~ image_name          = "Image not found" -> "Ubuntu 20.04 - GARR" # forces replacement
      name                = "james-infra"
    [...]
    }
# openstack_compute_instance_v2.LOGBITER_INFRA must be replaced
-/+ resource "openstack_compute_instance_v2" "LOGBITER_INFRA" {
    [...]
    ~ image_name          = "Image not found" -> "Ubuntu 20.04 - GARR" # forces replacement
      name                = "logbiter-infra"
    [...]
    }
# openstack_compute_instance_v2.sandbox must be replaced
-/+ resource "openstack_compute_instance_v2" "sandbox" {
    [...]
    ~ image_name          = "Image not found" -> "Ubuntu 20.04 - GARR" # forces replacement
      name                = "sandbox"
    [...]
    }
[...]

So the reference to the OpenStack image used to ship the three existing VMs (james-infra, logbiter-infra, sandbox) did not match the one declared in the .tf files (Ubuntu 20.04 - GARR) and, as such, Terraform decided it needed to DESTROY and RECREATE those three VMs.
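
To make the comparison concrete: in the .tf files the image is referenced by name, more or less like this (other attributes are abridged/illustrative):

resource "openstack_compute_instance_v2" "JAMES_INFRA" {
  name        = "james-infra"
  image_name  = "Ubuntu 20.04 - GARR"   # this is what the plan compares against the state
  flavor_name = "c1.large"
  key_pair    = data.openstack_compute_keypair_v2.DV_SSH_KEY.name
  [...]

  network {
    port = openstack_networking_port_v2.JAMES_INFRA_PORT.id
  }
}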

As a start, I checked whether that image was included in the current set provided by OpenStack:

[verzulli@XPSGarr IaC_GARRistini]$ openstack image list
+--------------------------------------+----------------------------+--------+
| ID                                   | Name                       | Status |
+--------------------------------------+----------------------------+--------+
[...]
| a1b1155b-6844-4356-b77e-ce9f21087643 | Ubuntu 20.04 - GARR        | active |
[...]
+--------------------------------------+----------------------------+--------+

So, what’s the problem?

I asked Terraform for a bit more detail about the state, with terraform state pull:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state pull | grep -A 1 "image_id"
            "image_id": "00ece84f-c723-4ea0-a4fb-5555bea06c8d",
            "image_name": "Ubuntu 20.04 - GARR",
--
            "image_id": "00ece84f-c723-4ea0-a4fb-5555bea06c8d",
            "image_name": "Ubuntu 20.04 - GARR",
--
            "image_id": "00ece84f-c723-4ea0-a4fb-5555bea06c8d",
            "image_name": "Ubuntu 20.04 - GARR",
[verzulli@XPSGarr IaC_GARRistini]$

Here you can see that the image_id stored within the state is 00ece84f-c723-4ea0-a4fb-5555bea06c8d: a different ID from the one associated with the current OpenStack image of the same name:

  • Image in terraform state: 00ece84f-c723-4ea0-a4fb-5555bea06c8d

  • Image in OpenStack image set: a1b1155b-6844-4356-b77e-ce9f21087643

I also checked whether the old ID was still present and associated with some image, and…

[verzulli@XPSGarr IaC_GARRistini]$ openstack image show a1b1155b-6844-4356-b77e-ce9f21087643 | grep 'updated_at' | cut -c 1-50
| updated_at       | 2023-02-08T10:01:18Z
[verzulli@XPSGarr IaC_GARRistini]$ openstack image show 00ece84f-c723-4ea0-a4fb-5555bea06c8d | grep 'updated_at' | cut -c 1-50
No Image found for 00ece84f-c723-4ea0-a4fb-5555bea06c8d
[verzulli@XPSGarr IaC_GARRistini]$

So, now it’s clear that the image used to build the three VMs was gone, replaced by a different one with the same name.

Ok. Now that the problem is clear…​ how can we fix it?

The idea was to proceed with:

  1. DELETE the references to the three instances from the state;

  2. RE-IMPORT the three instances into the state, from the current OpenStack status.

Let’s try…​

As terraform state rm provides a -dry-run option, I first checked:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state rm -dry-run openstack_compute_instance_v2.JAMES_INFRA openstack_compute_instance_v2.LOGBITER_INFRA openstack_compute_instance_v2.sandbox
Acquiring state lock. This may take a few moments...
Would remove openstack_compute_instance_v2.JAMES_INFRA
Would remove openstack_compute_instance_v2.LOGBITER_INFRA
Would remove openstack_compute_instance_v2.sandbox
Releasing state lock. This may take a few moments...
[verzulli@XPSGarr IaC_GARRistini]$

and it seemed OK. So I just relaunched the command WITHOUT the -dry-run:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state rm openstack_compute_instance_v2.JAMES_INFRA openstack_compute_instance_v2.LOGBITER_INFRA openstack_compute_instance_v2.sandbox
Acquiring state lock. This may take a few moments...
Removed openstack_compute_instance_v2.JAMES_INFRA
Removed openstack_compute_instance_v2.LOGBITER_INFRA
Removed openstack_compute_instance_v2.sandbox
Successfully removed 3 resource instance(s).
Releasing state lock. This may take a few moments...
[verzulli@XPSGarr IaC_GARRistini]$

Done. Now the current state should no longer contain those three instances. Let’s check:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state list
data.openstack_compute_keypair_v2.DV_SSH_KEY
data.openstack_networking_network_v2.NET_GARRISTINI
data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL
data.openstack_networking_subnet_v2.SUBNET_GARRISTINI
openstack_networking_port_v2.JAMES_INFRA_PORT
openstack_networking_port_v2.LOGBITER_INFRA_PORT
openstack_networking_port_v2.sandbox-netlink
[verzulli@XPSGarr IaC_GARRistini]$

Exactly.

Now let’s import the data back. The Terraform OpenStack provider documentation clearly states that: "Importing instances can be tricky, since the nova api does not offer all information provided at creation time for later retrieval. Network interface attachment order, and number and sizes of ephemeral disks are examples of this.". Anyway, we’re lucky, as each of our three running instances had exactly one disk and exactly one network interface.

I double-checked the instance names, as reported by OpenStack:

[verzulli@XPSGarr IaC_GARRistini]$ openstack server list
+--------------------------------------+----------------+--------+------------------------------------+-----------+-----------+
| ID                                   | Name           | Status | Networks                           | Image     | Flavor    |
+--------------------------------------+----------------+--------+------------------------------------+-----------+-----------+
[...]
| 9da4363c-bd77-448f-b53c-3f557b94336e | sandbox        | ACTIVE | net_shared_garristini=172.30.0.225 |           | c1.large  |
| bfab06c7-123a-444b-adff-33a1f49ff007 | james-infra    | ACTIVE | net_shared_garristini=172.30.0.211 |           | c1.large  |
| e3fec953-28ef-4571-8c44-c5a785b9af34 | logbiter-infra | ACTIVE | net_shared_garristini=172.30.0.237 |           | m1.xlarge |
[...]
+--------------------------------------+----------------+--------+------------------------------------+-----------+-----------+

As reported in the official page, an ID is required: specifically, the instance ID as known by OpenStack. So:

[verzulli@XPSGarr IaC_GARRistini]$ terraform import openstack_compute_instance_v2.sandbox 9da4363c-bd77-448f-b53c-3f557b94336e
Acquiring state lock. This may take a few moments...
data.openstack_networking_network_v2.NET_GARRISTINI: Reading...
data.openstack_compute_keypair_v2.DV_SSH_KEY: Reading...
data.openstack_networking_subnet_v2.SUBNET_GARRISTINI: Reading...
data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL: Reading...
data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL: Read complete after 3s [id=d637293d-fc0f-4345-b5fb-6626959a9265]
data.openstack_networking_subnet_v2.SUBNET_GARRISTINI: Read complete after 3s [id=9bc888c1-570a-4f4e-8d76-e34f2ff26f65]
data.openstack_compute_keypair_v2.DV_SSH_KEY: Read complete after 4s [id=DV_XPS]
data.openstack_networking_network_v2.NET_GARRISTINI: Read complete after 9s [id=1c5a73ca-581a-4bfe-84e1-e2d1f56144b4]
openstack_compute_instance_v2.sandbox: Importing from ID "9da4363c-bd77-448f-b53c-3f557b94336e"...
openstack_compute_instance_v2.sandbox: Import prepared!
Prepared openstack_compute_instance_v2 for import
openstack_compute_instance_v2.sandbox: Refreshing state... [id=9da4363c-bd77-448f-b53c-3f557b94336e]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
Releasing state lock. This may take a few moments...
[verzulli@XPSGarr IaC_GARRistini]$

It worked (it seems!). Let’s proceed with the other two instances:

[verzulli@XPSGarr IaC_GARRistini]$ terraform import openstack_compute_instance_v2.JAMES_INFRA bfab06c7-123a-444b-adff-33a1f49ff007
Acquiring state lock. This may take a few moments...
data.openstack_networking_subnet_v2.SUBNET_GARRISTINI: Reading...
data.openstack_compute_keypair_v2.DV_SSH_KEY: Reading...
data.openstack_networking_network_v2.NET_GARRISTINI: Reading...
data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL: Reading...
data.openstack_networking_subnet_v2.SUBNET_GARRISTINI: Read complete after 4s [id=9bc888c1-570a-4f4e-8d76-e34f2ff26f65]
data.openstack_compute_keypair_v2.DV_SSH_KEY: Read complete after 5s [id=DV_XPS]
data.openstack_networking_network_v2.NET_GARRISTINI: Read complete after 5s [id=1c5a73ca-581a-4bfe-84e1-e2d1f56144b4]
data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL: Read complete after 7s [id=d637293d-fc0f-4345-b5fb-6626959a9265]
openstack_compute_instance_v2.JAMES_INFRA: Importing from ID "bfab06c7-123a-444b-adff-33a1f49ff007"...
openstack_compute_instance_v2.JAMES_INFRA: Import prepared!
Prepared openstack_compute_instance_v2 for import
openstack_compute_instance_v2.JAMES_INFRA: Refreshing state... [id=bfab06c7-123a-444b-adff-33a1f49ff007]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
Releasing state lock. This may take a few moments...
[verzulli@XPSGarr IaC_GARRistini]$

and the last one:

[verzulli@XPSGarr IaC_GARRistini]$ terraform import openstack_compute_instance_v2.LOGBITER_INFRA e3fec953-28ef-4571-8c44-c5a785b9af34
Acquiring state lock. This may take a few moments...
data.openstack_networking_network_v2.NET_GARRISTINI: Reading...
data.openstack_compute_keypair_v2.DV_SSH_KEY: Reading...
data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL: Reading...
data.openstack_networking_subnet_v2.SUBNET_GARRISTINI: Reading...
data.openstack_networking_network_v2.NET_GARRISTINI: Read complete after 3s [id=1c5a73ca-581a-4bfe-84e1-e2d1f56144b4]
data.openstack_networking_subnet_v2.SUBNET_GARRISTINI: Read complete after 3s [id=9bc888c1-570a-4f4e-8d76-e34f2ff26f65]
data.openstack_compute_keypair_v2.DV_SSH_KEY: Read complete after 4s [id=DV_XPS]
data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL: Read complete after 6s [id=d637293d-fc0f-4345-b5fb-6626959a9265]
openstack_compute_instance_v2.LOGBITER_INFRA: Importing from ID "e3fec953-28ef-4571-8c44-c5a785b9af34"...
openstack_compute_instance_v2.LOGBITER_INFRA: Import prepared!
Prepared openstack_compute_instance_v2 for import
openstack_compute_instance_v2.LOGBITER_INFRA: Refreshing state... [id=e3fec953-28ef-4571-8c44-c5a785b9af34]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
[verzulli@XPSGarr IaC_GARRistini]$

So, now, let’s look at what’s inside the current state:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state list
data.openstack_compute_keypair_v2.DV_SSH_KEY
data.openstack_networking_network_v2.NET_GARRISTINI
data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL
data.openstack_networking_subnet_v2.SUBNET_GARRISTINI
openstack_compute_instance_v2.JAMES_INFRA
openstack_compute_instance_v2.LOGBITER_INFRA
openstack_compute_instance_v2.sandbox
openstack_networking_port_v2.JAMES_INFRA_PORT
openstack_networking_port_v2.LOGBITER_INFRA_PORT
openstack_networking_port_v2.sandbox-netlink
[verzulli@XPSGarr IaC_GARRistini]$

Good! We have the three instances back! Let’s double-check with a final terraform plan…

[verzulli@XPSGarr IaC_GARRistini]$ terraform plan
[...]
# openstack_compute_instance_v2.JAMES_INFRA must be replaced
-/+ resource "openstack_compute_instance_v2" "JAMES_INFRA" {
    [...]
    ~ image_name          = "Image not found" -> "Ubuntu 20.04 - GARR" # forces replacement
    + user_data           = "cdce685ee54f5c99fa76740a2dfa7d20b78f3019" # forces replacement
    ~ network {
        [...]
        + port           = "975ebcaf-bf42-46c8-9d63-f312d9319816" # forces replacement
        }
    }
# openstack_compute_instance_v2.LOGBITER_INFRA must be replaced
-/+ resource "openstack_compute_instance_v2" "LOGBITER_INFRA" {
    [...]
    ~ image_name          = "Image not found" -> "Ubuntu 20.04 - GARR" # forces replacement
    + user_data           = "cdce685ee54f5c99fa76740a2dfa7d20b78f3019" # forces replacement
    ~ network {
        [...]
        + port           = "41be1396-7735-4052-8075-5a3c1c56e84d" # forces replacement
        }
    }
# openstack_compute_instance_v2.sandbox must be replaced
-/+ resource "openstack_compute_instance_v2" "sandbox" {
    [...]
    ~ image_name          = "Image not found" -> "Ubuntu 20.04 - GARR" # forces replacement
    + user_data           = "a5a3b4761d7512bb9748074018b9034ab32e89b1" # forces replacement
    ~ network {
        [...]
        + port           = "d998e54f-c669-46c2-a57f-768ab612acd3" # forces replacement
        }
    }
[...]
Plan: 5 to add, 1 to change, 3 to destroy.

What?

It seems things are WORSE than before…​ and we still have exactly the same problem!

Actually, the import process re-imported exactly the same bad image reference, 00ece84f-c723-4ea0-a4fb-5555bea06c8d:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state pull | grep -A 1 "image_id"
            "image_id": "00ece84f-c723-4ea0-a4fb-5555bea06c8d",
            "image_name": "Image not found",
--
            "image_id": "00ece84f-c723-4ea0-a4fb-5555bea06c8d",
            "image_name": "Image not found",
--
            "image_id": "00ece84f-c723-4ea0-a4fb-5555bea06c8d",
            "image_name": "Image not found",

After some thinking… the behaviour above looked very reasonable: the bad image reference is most probably recorded within OpenStack itself, so from the Terraform point of view, after a delete/import cycle, the imported image_id is still the wrong one. Let’s check on OpenStack:

[verzulli@XPSGarr IaC_GARRistini]$ openstack server show james-infra
+-----------------------------+----------------------------------------------------------+
| Field                       | Value                                                    |
+-----------------------------+----------------------------------------------------------+
[...]
| id                          | bfab06c7-123a-444b-adff-33a1f49ff007                     |
| image                       | 00ece84f-c723-4ea0-a4fb-5555bea06c8d                     |
+-----------------------------+----------------------------------------------------------+

Bingo! The bad image ID (00ece84f-c723-4ea0-a4fb-5555bea06c8d) is tightly associated with the VM at the OpenStack level. The same problem applies to the other two VMs:

[verzulli@XPSGarr IaC_GARRistini]$ openstack server show logbiter-infra
+-----------------------------+----------------------------------------------------------+
| Field                       | Value                                                    |
+-----------------------------+----------------------------------------------------------+
[...]
| id                          | e3fec953-28ef-4571-8c44-c5a785b9af34                     |
| image                       | 00ece84f-c723-4ea0-a4fb-5555bea06c8d                     |
+-----------------------------+----------------------------------------------------------+
[verzulli@XPSGarr IaC_GARRistini]$ openstack server show sandbox
+-----------------------------+----------------------------------------------------------+
| Field                       | Value                                                    |
+-----------------------------+----------------------------------------------------------+
[...]
| id                          | 9da4363c-bd77-448f-b53c-3f557b94336e                     |
| image                       | 00ece84f-c723-4ea0-a4fb-5555bea06c8d                     |
+-----------------------------+----------------------------------------------------------+

So, at this point, it’s clear the problem is not related to Terraform: it’s due to a stale image ID that, on the OpenStack side, is recorded in the three instances.

Hence, I realized that up to now… I had simply wasted a lot of time and, furthermore, messed up the Terraform state.

As I suspected that fixing the problem on the OpenStack side was close to impossible (I thought that changing the image reference of an existing instance should not be possible!), I decided to simply tell Terraform to ignore the image_name attribute, by using the ignore_changes option in the lifecycle stanza.

But since the delete/import cycle had messed up the state on the Terraform side… first I checked whether there was some way to "rollback" the state to exactly what it was before my manual changes.

In general, the state is not version-controlled but… in our case, the state is kept inside our GitLab instance, in a dedicated project. And I quickly discovered that GitLab keeps track of every version. Let’s check:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state pull | grep serial
"serial": 77,

So the current one is serial 77. Let’s try to fetch serial 76:

[verzulli@XPSGarr tmp]$ export TOKEN="<redacted>"
[verzulli@XPSGarr tmp]$ curl -s --header "Private-Token: $TOKEN" "https://gitlab.garrlab.it/api/v4/projects/91/terraform/state/IAC-GARRISTINI/versions/76" | grep serial
    "serial": 76,

Good! We have the old states! After some research, I found that serial 68 was the last one from before I started messing things up. So I fetched it:

[verzulli@XPSGarr tmp]$ curl -s --header "Private-Token: $TOKEN" "https://gitlab.garrlab.it/api/v4/projects/91/terraform/state/IAC-GARRISTINI/versions/68" > 68_ok.json
[verzulli@XPSGarr tmp]$ grep serial 68_ok.json
"serial": 68,

and tried to push it:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state push /tmp/68_ok.json
Acquiring state lock. This may take a few moments...
Failed to write state: cannot import state with serial 68 over newer state with serial 77
Releasing state lock. This may take a few moments...
[verzulli@XPSGarr IaC_GARRistini]$

That sounds reasonable: I needed to manually bump the serial to 78, i.e. above the current 77, and retry.
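
I did the edit by hand in a text editor; an equivalent jq one-liner (assuming jq is available) would be:

jq '.serial = 78' /tmp/68_ok.json > /tmp/68_bumped.json && mv /tmp/68_bumped.json /tmp/68_ok.json

Either way, here’s the check and the new push attempt: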

[verzulli@XPSGarr IaC_GARRistini]$ grep serial /tmp/68_ok.json
"serial": 78,
[verzulli@XPSGarr IaC_GARRistini]$ terraform state push /tmp/68_ok.json
Acquiring state lock. This may take a few moments...
[verzulli@XPSGarr IaC_GARRistini]$

Good! Should have worked. Let’s check:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state pull | grep serial
"serial": 79,
[verzulli@XPSGarr IaC_GARRistini]$

and let’s see what’s inside, from Terraform’s point of view:

[verzulli@XPSGarr IaC_GARRistini]$ terraform state list
data.openstack_compute_keypair_v2.DV_SSH_KEY
data.openstack_networking_network_v2.NET_GARRISTINI
data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL
data.openstack_networking_subnet_v2.SUBNET_GARRISTINI
openstack_compute_instance_v2.JAMES_INFRA
openstack_compute_instance_v2.LOGBITER_INFRA
openstack_compute_instance_v2.sandbox
openstack_networking_port_v2.JAMES_INFRA_PORT
openstack_networking_port_v2.LOGBITER_INFRA_PORT
openstack_networking_port_v2.sandbox-netlink
[verzulli@XPSGarr IaC_GARRistini]$

Sounds ok. Let’s see what happens with a terraform plan:

[verzulli@XPSGarr IaC_GARRistini]$ terraform plan
Acquiring state lock. This may take a few moments...
[...]
# openstack_compute_instance_v2.JAMES_INFRA must be replaced
-/+ resource "openstack_compute_instance_v2" "JAMES_INFRA" {
    ~ image_name          = "Image not found" -> "Ubuntu 20.04 - GARR" # forces replacement
    [...]
    }
# openstack_compute_instance_v2.LOGBITER_INFRA must be replaced
-/+ resource "openstack_compute_instance_v2" "LOGBITER_INFRA" {
    ~ image_name          = "Image not found" -> "Ubuntu 20.04 - GARR" # forces replacement
    [...]
    }
# openstack_compute_instance_v2.sandbox must be replaced
-/+ resource "openstack_compute_instance_v2" "sandbox" {
    ~ image_name          = "Image not found" -> "Ubuntu 20.04 - GARR" # forces replacement
    [...]
    }
# openstack_compute_instance_v2.sonarqube will be created
+ resource "openstack_compute_instance_v2" "sonarqube" {
    [...]
    }
# openstack_networking_port_v2.sonarqube-netlink will be created
+ resource "openstack_networking_port_v2" "sonarqube-netlink" {
    [...]
    }
Plan: 5 to add, 0 to change, 3 to destroy.
[...]
[verzulli@XPSGarr IaC_GARRistini]$

That’s EXACTLY the status we had initially! Good!

So, now, I added the following lifecycle stanza to the .tf declarations of the three instances:

[...]
lifecycle {
    ignore_changes = [ image_name ]
}
[...]

and retried a terraform plan:

[verzulli@XPSGarr IaC_GARRistini]$ terraform plan
Acquiring state lock. This may take a few moments...
[...]
Terraform will perform the following actions:
# openstack_compute_instance_v2.sandbox will be updated in-place
~ resource "openstack_compute_instance_v2" "sandbox" {
        id                  = "9da4363c-bd77-448f-b53c-3f557b94336e"
        name                = "sandbox"
    ~ security_groups     = [
        + "ALL_ALL",
        - "default",
        ]
        tags                = []
        # (13 unchanged attributes hidden)
    # (1 unchanged block hidden)
}
# openstack_compute_instance_v2.sonarqube will be created
+ resource "openstack_compute_instance_v2" "sonarqube" {
    [...]
    }
# openstack_networking_port_v2.sonarqube-netlink will be created
+ resource "openstack_networking_port_v2" "sonarqube-netlink" {
    [...]
    }
Plan: 2 to add, 1 to change, 0 to destroy.

Very, very good! We’re not going to destroy our three pre-existing instances, and we can proceed with the ordinary task (adding a new instance and properly connecting it to the network).
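
For completeness, the new sonarqube .tf (whose resources show up as "to be created" in the plans above) looks roughly like this; the flavor is illustrative, and the local-exec provisioner that appends the new IP to _IPs.txt is omitted:

resource "openstack_networking_port_v2" "sonarqube-netlink" {
  name               = "sonarqube-netlink"
  network_id         = data.openstack_networking_network_v2.NET_GARRISTINI.id
  security_group_ids = [data.openstack_networking_secgroup_v2.SECGROUP_ALL_ALL.id]

  fixed_ip {
    subnet_id = data.openstack_networking_subnet_v2.SUBNET_GARRISTINI.id
  }
}

resource "openstack_compute_instance_v2" "sonarqube" {
  name        = "sonarqube"
  image_name  = "Ubuntu 20.04 - GARR"
  flavor_name = "c1.large"             # illustrative: the real flavor isn't shown above
  key_pair    = data.openstack_compute_keypair_v2.DV_SSH_KEY.name

  network {
    port = openstack_networking_port_v2.sonarqube-netlink.id
  }
}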

So, let’s fire it!

[verzulli@XPSGarr IaC_GARRistini]$ terraform apply
Acquiring state lock. This may take a few moments...
[...]
Plan: 2 to add, 1 to change, 0 to destroy.
Changes to Outputs:
+ sonarqube_ip = (known after apply)
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
openstack_networking_port_v2.sonarqube-netlink: Creating...
openstack_compute_instance_v2.sandbox: Modifying... [id=9da4363c-bd77-448f-b53c-3f557b94336e]
openstack_networking_port_v2.sonarqube-netlink: Still creating... [10s elapsed]
openstack_compute_instance_v2.sandbox: Still modifying... [id=9da4363c-bd77-448f-b53c-3f557b94336e, 10s elapsed]
openstack_networking_port_v2.sonarqube-netlink: Provisioning with 'local-exec'...
openstack_networking_port_v2.sonarqube-netlink (local-exec): Executing: ["/bin/sh" "-c" "echo -n 'sonarqube ' >> _IPs.txt; echo '172.30.0.196' >> _IPs.txt"]
openstack_networking_port_v2.sonarqube-netlink: Creation complete after 13s [id=249882c9-db48-4e08-aaf0-44fc42fbd395]
openstack_compute_instance_v2.sonarqube: Creating...
openstack_compute_instance_v2.sandbox: Modifications complete after 17s [id=9da4363c-bd77-448f-b53c-3f557b94336e]
openstack_compute_instance_v2.sonarqube: Still creating... [10s elapsed]
openstack_compute_instance_v2.sonarqube: Still creating... [20s elapsed]
openstack_compute_instance_v2.sonarqube: Still creating... [30s elapsed]
openstack_compute_instance_v2.sonarqube: Creation complete after 39s [id=be4004cd-c30e-40b9-bde7-0e9ac45ab8f4]
Apply complete! Resources: 2 added, 1 changed, 0 destroyed.
Outputs:
infra_vm_ip_addr = tolist([
"172.30.0.237",
])
james_vm_ip_addr = tolist([
"172.30.0.211",
])
sandbox_ip = tolist([
"172.30.0.225",
])
sonarqube_ip = tolist([
"172.30.0.196",
])

Done! Mission complete!

Now the Terraform state perfectly reflects the infrastructure and… we can step forward, playing with our infrastructure!

Thank you very much for your time and, should you have comments or suggestions, reach us at info [at] garrlab [dot] it

If you’re a student and you’d like to put your hands on the above infrastructure, feel free to join us: GARRLab is "open" to you :-) Drop us an e-mail, and we’ll tell you how to proceed. Please, don’t hesitate!