Running Terraform Destroy with Ansible Before Deleting an EC2 Instance
Series: [Disposable Cloud Workstation]
Since I’m making a “throwaway” workstation and Terraform server, I need to be able to create and destroy EC2 instances on demand. I’m experimenting with Terraform, and I want to be able to destroy any infrastructure created with Terraform before the EC2 instance itself is deleted.
My terraform.yml looks like this:
---
- name: Manage EC2 instance
  hosts: localhost
  roles:
    - role: ec2
  vars_files:
    - vars/tf_workstation.yml
    - vars/aws_auth_vars.yml
  tasks:
    - name: Refresh inventory
      ansible.builtin.meta: refresh_inventory
      tags: ['install']

- name: Run roles on install
  tags: ['install']
  hosts: terraform
  become: yes
  remote_user: ubuntu
  pre_tasks:
    - name: Wait for SSH connection
      ansible.builtin.wait_for_connection:
        timeout: 300
  roles:
    - role: aws_auth
  vars_files:
    - vars/aws_auth_vars.yml
    - vars/tf_workstation.yml

- name: Other roles
  hosts: terraform
  become: yes
  remote_user: ubuntu
  roles:
    - role: common
    - role: auto_shutdown
  vars_files:
    - vars/tf_workstation.yml

- name: Manage Terraform
  hosts: terraform
  become: yes
  remote_user: ubuntu
  roles:
    - role: terraform
  vars_files:
    - vars/aws_auth_vars.yml
    - vars/tf_workstation.yml
In this playbook, the first play creates and/or updates an EC2 instance. The very last play manages Terraform: it installs Terraform, runs terraform init, or runs terraform destroy.
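For hosts: terraform to match anything after a fresh instance comes up, the inventory has to be dynamic; that’s what the refresh_inventory task re-reads. My inventory config isn’t shown in this post, but a minimal sketch using the amazon.aws.aws_ec2 inventory plugin could look like this (the file name, region, and tag scheme here are assumptions, not my actual setup):

# inventory/aws_ec2.yml (hypothetical)
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  instance-state-name: running
groups:
  # Put instances tagged role=terraform into the "terraform" group
  terraform: "tags.get('role', '') == 'terraform'"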
I’m using tags (e.g. install, update, destroy, create, apply-infra, destroy-infra) to control which tasks are run, but even if I use tags to tear down a server, that won’t get rid of my terraformed infrastructure, since the Terraform play would run after the EC2 instance is deleted.
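Concretely: the first invocation below builds everything, while the second would delete the instance and orphan whatever Terraform created:

ansible-playbook terraform.yml --tags=install
ansible-playbook terraform.yml --tags=destroy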
One of the biggest problems with this is that I’ll lose all my state and need to delete AWS resources by hand in the AWS console. Not fun.
I’ll eventually save state in some third-party location, but I’m not there yet.
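When I do, the community.general.terraform module can hand backend settings to terraform init via its backend_config parameter. A minimal sketch, assuming the project already declares an S3 backend block, and using made-up bucket and key names:

- name: Apply with remote state in S3
  community.general.terraform:
    project_path: /home/ubuntu/proj
    state: present
    force_init: true  # rerun terraform init so the backend config is picked up
    backend_config:
      bucket: my-tf-state-bucket            # hypothetical bucket
      key: workstation/terraform.tfstate    # hypothetical key
      region: "{{ aws_region }}"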
I thought of a couple ways to solve this, and decided to create another playbook that runs my Terraform role first, then my EC2 role. This seemed like a good way to prevent (my) human error and keep things reasonably organized.
Here’s how I set it up.
The Setup
In order to tear down terraformed infrastructure, then delete my EC2 instance, I run:
ansible-playbook terraform_teardown.yml --tags=destroy,destroy-infra
terraform_teardown.yml looks like this:
---
- name: Manage Terraform
  hosts: terraform
  become: yes
  remote_user: ubuntu
  roles:
    - role: terraform
  vars_files:
    - vars/aws_auth_vars.yml
    - vars/tf_workstation.yml
  vars:
    destroy: yes

- name: Delete EC2 instance
  hosts: localhost
  roles:
    - role: ec2
  vars_files:
    - vars/tf_workstation.yml
    - vars/aws_auth_vars.yml
First the terraform role runs, destroying the terraformed infrastructure; then the ec2 role deletes the instance.
One notable thing is that I’m setting a var, “destroy”, in the first play. More on that later. The tags specified on the command line apply to tags in my roles.
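I could get the same effect from the command line with an extra var:

ansible-playbook terraform_teardown.yml --tags=destroy,destroy-infra -e destroy=true

but baking it into the teardown playbook means there’s nothing for me to forget.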
My terraform role looks like this:
---
# Reminder to set and use destroy in the correct playbook.
- name: Fail if destroy is undefined
  ansible.builtin.fail:
    msg: "The variable 'destroy' must be set to destroy infrastructure. (Are you using the correct playbook?)"
  when: destroy is undefined
  tags: ['never', 'destroy-infra']

- name: Tear down infrastructure
  block:
    - name: Remove service
      community.general.terraform:
        project_path: /home/ubuntu/proj
        state: absent
      become: true
      become_user: ubuntu
      environment:
        AWS_ACCESS_KEY_ID: "{{ aws_access_key_id }}"
        AWS_SECRET_ACCESS_KEY: "{{ aws_secret_access_key }}"
        AWS_REGION: "{{ aws_region }}"
      register: terraform

    - name: Show Terraform output
      ansible.builtin.debug:
        var: terraform
  when: destroy
  tags: ['never', 'destroy-infra']

- name: Install Terraform
  block:
    ....
terraform destroy is the second task that runs in this role. The first task checks that the “destroy” variable is set before anything else happens; otherwise, it fails with the message above. I use this primarily to prevent myself from accidentally running terraform.yml (which doesn’t set “destroy”) and deleting an instance without destroying its infrastructure.
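So if I absent-mindedly reach for the wrong playbook:

ansible-playbook terraform.yml --tags=destroy-infra

the guard task fails immediately with that message, because terraform.yml never defines “destroy”, and nothing gets torn down.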