If I store Terraform state locally, then delete the local .tfstate files, Terraform won't know what resources I've created in AWS or other providers. I actually experienced this while setting up my Terraform workstation with Ansible: I hadn't set up a terraform destroy step before deleting an instance, and I ended up needing to delete some VPCs in the AWS console, which is a bit of a slog!
I wanted to manage my state somewhere else, so I decided to try an account on HashiCorp. This is a culmination of the posts I've made up until this point.
Code for this post is here:
- EC2 Terraform Remote Workstation
- EC2 SSH helper scripts

What I Wanted

I wanted to set up a throwaway server so that I could test and learn some new things, particularly devops. My requirements were:
- Easy to start.
- Easy to discard.
- Easy SSH management.
- Cheap to run.

So, I created the above repos to help me build that type of environment.

Since I'm making a "throwaway" workstation and Terraform server, I need to be able to create and destroy EC2 instances on demand. I'm experimenting with Terraform, and I want to be able to destroy infrastructure created with Terraform before my EC2 instance is deleted.
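In other words, the teardown has to happen in a particular order. A sketch of what I mean (the ec2 helper's terminate subcommand and the host name are assumptions for illustration; only start appears later in this post):

```bash
# Tear down Terraform-managed resources first, then the instance itself.
terraform destroy -auto-approve   # remove everything Terraform created
ec2 terminate tf_workstation      # hypothetical helper subcommand and host name
```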
My terraform.yml looks like this:
```yaml
---
- name: Manage EC2 instance
  hosts: localhost
  roles:
    - role: ec2
  vars_files:
    - vars/tf_workstation.yml
    - vars/aws_auth_vars.yml
  tasks:
    - name: Refresh inventory
      ansible.builtin.meta: refresh_inventory
```

See the repo for this script here.
I like efficiency and minimizing keystrokes.
For my EC2 management script, I needed to specify the host’s name like this:
ec2 start some_host_name

I didn't really want to type out the host name every time, so I threw together an autocompletion script.
This way, I can type this:
ec2 start so<tab>

and have it autocomplete to this:
ec2 start some_host_name

I don't write completions all the time, so a good guide is An Introduction to Bash Completion. See the repo for this script here.
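The core of a completion like this looks something like the sketch below (the host-list source is an assumption; pulling Host entries out of ~/.ssh/config is one easy option):

```bash
# Complete the 'ec2' command's host argument from SSH config Host aliases.
_ec2_completions() {
    local cur hosts
    cur="${COMP_WORDS[COMP_CWORD]}"
    # Grab Host aliases from the SSH config, skipping wildcard patterns.
    hosts=$(awk '$1 == "Host" && $2 !~ /[*?]/ {print $2}' ~/.ssh/config)
    COMPREPLY=( $(compgen -W "$hosts" -- "$cur") )
}
complete -F _ec2_completions ec2
```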
In the last post, I created an Ansible role to configure an EC2 instance to shut down when no active SSH connection was detected.
I still had a couple of problems, though:
Problems

First, when an EC2 instance reboots, its public IP address will change (if it's configured to have one). This meant that my local ~/.ssh/config would be out of date. I'm connecting to an IP in the Hostname setting, so I won't connect if the IP is wrong!

I was looking for a way to shut down my EC2 instances when not in use. The scenario: when I'm done for the day and forget to shut down, I want it done for me.
Forgetting to shut down a t2.micro isn't a big deal, but larger instances cost more, and it just doesn't seem efficient to run something when you're not using it.
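Concretely, the behavior I wanted looks something like this rough sketch (assuming a cron job or timer runs the check; see the options below):

```bash
# Power off when no interactive SSH sessions are attached.
if ! who | grep -q 'pts/'; then   # pts entries correspond to SSH/pty sessions
    sudo shutdown -h now "No active SSH sessions detected; shutting down"
fi
```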
The Options

On investigation, I found two options:

Turns out I was a little rusty on how Ansible's become works, and how I should apply it to the Ubuntu image I was using on EC2 instances.
In this post, I was using this command:
```bash
sudo awk '/root\s+ALL=\(ALL:ALL\) ALL/ {print; print "ubuntu ALL=(ALL) NOPASSWD:ALL"; next}1' /etc/sudoers | sudo tee /etc/sudoers
```

to give myself passwordless sudo through Ansible.
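As an aside, piping awk's output back into the same file through tee is risky: tee can truncate /etc/sudoers before awk has finished reading it. A drop-in file under /etc/sudoers.d, checked with visudo, is a safer pattern. A sketch (the filename is illustrative):

```bash
# Write a drop-in and syntax-check it, rather than rewriting /etc/sudoers in place.
echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/90-ubuntu-nopasswd
sudo visudo -cf /etc/sudoers.d/90-ubuntu-nopasswd   # validate before relying on it
```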
Notable things:

- I was running this using the User Data setting, which runs scripts on an instance's first boot.

I wanted to add my AWS access keys to my instance without manually copying them over. I'm using Ansible to manually provision a Terraform instance, so it'll need the keys to interact with AWS services.
Of course, you can't just add the keys to your repo. Anyone with access to the repo can see your keys, which is especially bad if your repo is semi-public or fully public.
The solution is ansible-vault.
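The basic flow looks something like this sketch (the file names match the playbook above):

```bash
# Encrypt the vars file that holds the AWS keys, then supply the vault
# password when running the playbook.
ansible-vault encrypt vars/aws_auth_vars.yml     # prompts for a new vault password
ansible-vault view vars/aws_auth_vars.yml        # decrypts to stdout to verify
ansible-playbook terraform.yml --ask-vault-pass  # prompts for the password at run time
```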
I started with the EC2 instance that I built here.

I'm building some things out in AWS, so I decided to automate the creation of a Terraform server to handle building infrastructure.
I’ll start out with something rudimentary and build it up. The end goal is the ability to create and provision the server from my laptop, pull project code, do some Terraform tasks, then terminate the instance.
This is probably not very typical, but it's good practice for devops.

I've been sort of relearning how to use Ansible and trying to improve my automations. To that end, I'm building out a project from the ground up to try to fill in holes in my knowledge and maybe learn some new tricks.
In this article, I’ll create an Ansible playbook that:
- creates an EC2 instance
- creates SSH keys for that instance
- configures my laptop to be able to SSH in immediately
- destroys the instance and keys, and removes the corresponding config from my laptop

This will be a pretty simple playbook, and security is not a priority yet, so there will be a few security issues and drawbacks that I'll talk about at the end.

I typically run Ansible in a virtualenv instead of using my Linux distro's packaged version.
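That setup looks roughly like this (paths are illustrative):

```bash
python3 -m venv ~/venvs/ansible       # create the virtualenv
source ~/venvs/ansible/bin/activate   # activate it for this shell
pip install ansible                   # install Ansible into the venv
```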
I started having some trouble running AWS playbooks, getting this error message:
"Failed to import the required Python library (botocore and boto3) on x270's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter" Here are the conditions: