I was looking for a way to shut down my EC2 instances when they're not in use. The scenario is that when I'm done for the day and forget to shut down, I want it done for me.
Forgetting to shut down a t2.micro isn't a big deal, but larger instances cost more, and it just doesn't seem efficient to run something when you're not using it.
The Options
On investigation, I found two options:
Use CloudWatch
CloudWatch is AWS's monitoring service. They actually recommend it for the use case of shutting down servers, saying:
If you (or your developers) are forgetful, you can detect unused EC2 instances and shut them down.
The CloudWatch method relies on monitoring a metric like network traffic or CPU load. I think this is fine, and it has the benefit of being easy to set up in the AWS console.
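For reference, an equivalent alarm can be created from the CLI. This is just a sketch, not part of my setup; the instance ID, region, and thresholds are placeholders:

# Stop the instance when average CPU stays below 5% for 30 minutes.
# arn:aws:automate:<region>:ec2:stop is CloudWatch's built-in stop action for EC2.
aws cloudwatch put-metric-alarm \
  --alarm-name "stop-when-idle" \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 6 \
  --threshold 5 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:stop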
Use an SSH-Based Script
I wanted to be able to use other providers, like Linode or DigitalOcean, so I was looking for a more generic solution based on SSH connections.
This post on StackExchange by Fernando Correia was just what I was looking for.
Note that the script requires netstat from net-tools, which is installed by the role tasks below. This is the script:
#!/bin/bash
#
# Shuts down the host on inactivity.
#
# Designed to be executed as root from a cron job.
# It will power off on the 2nd consecutive run without an active ssh session.
# That prevents an undesirable shutdown when the machine was just started, or on a brief disconnect.
#
# To enable, add this entry to /etc/crontab:
# */5 * * * * root /home/ubuntu/dotfiles/bin/shutdown-if-inactive
#
set -o nounset -o errexit -o pipefail
MARKER_FILE="/tmp/ssh-inactivity-flag"
STATUS=$(netstat | grep ssh | grep ESTABLISHED &>/dev/null && echo active || echo inactive)
if [ "$STATUS" == "inactive" ]; then
if [ -f "$MARKER_FILE" ]; then
echo "Powering off due to ssh inactivity."
poweroff # See https://unix.stackexchange.com/a/196014/56711
else
# Create a marker file so that it will shut down if still inactive on the next time this script runs.
touch "$MARKER_FILE"
fi
else
# Delete marker file if it exists
rm --force "$MARKER_FILE"
fi
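Before wiring up the cron job, it's worth sanity-checking the detection logic by hand from an active SSH session. The script path here is just wherever you've saved it:

# With an SSH session open, the check the script relies on should match something:
netstat | grep ssh | grep ESTABLISHED

# Running the script once by hand is safe while you're connected: an active
# session just clears any stale marker file, and only two consecutive
# "inactive" runs trigger a poweroff.
sudo bash ./shutdown-if-inactive
ls /tmp/ssh-inactivity-flag  # should report "No such file or directory"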
To add this to my EC2 instances, I created a new role, auto_shutdown. The directory looks like this:
auto_shutdown/
├── defaults
│   └── main.yml
├── files
│   └── shutdown-if-inactive
└── tasks
    └── main.yml
defaults/main.yml
---
auto_shutdown_cron_minutes_entry: "*/30"
files/shutdown-if-inactive
Contains the above script.
tasks/main.yml
---
- name: Install net-tools for netstat
  ansible.builtin.apt:
    pkg: net-tools
    state: present

- name: Create scripts directory
  ansible.builtin.file:
    path: /home/ubuntu/.scripts
    state: directory
    mode: "0755"
    owner: ubuntu
    group: ubuntu

- name: Copy shutdown-if-inactive script
  ansible.builtin.copy:
    src: ./files/shutdown-if-inactive
    dest: /home/ubuntu/.scripts/shutdown-if-inactive
    owner: ubuntu
    group: ubuntu
    mode: "0744"

- name: Create cronjob to call shutdown-if-inactive
  become: yes
  become_user: root
  ansible.builtin.cron:
    name: shutdown-if-inactive
    user: root
    minute: "{{ auto_shutdown_cron_minutes_entry }}"
    job: "/home/ubuntu/.scripts/shutdown-if-inactive"
The task that creates the scripts directory doesn't really belong here, since another role might need the same directory. If that happens, I'll pull it out into its own task or role.
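A couple of quick checks after running the role; site.yml and myhost are placeholders for your playbook and SSH host alias:

# Override the cron schedule at run time if */30 doesn't suit
ansible-playbook site.yml -e "auto_shutdown_cron_minutes_entry=*/15"

# Confirm the entry landed in root's crontab on the instance
ssh myhost "sudo crontab -l -u root"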
Disconnect SSH on Inactivity
One last thing I wanted to do is disconnect SSH automatically after a time limit on my local machine. This step is optional. I tend to leave terminal windows open, but if I leave my computer for some time, I'm probably not coming back for a while.
The Typical Setup
Normal SSH behavior is for you to be logged out when your SSH connection is idle. I had altered my ~/.ssh/config globally like this:
Host *
  TCPKeepAlive yes
  ServerAliveInterval 60
  ....
This sends messages to the server to prevent a logout. I’ve also configured /etc/ssh/sshd_config with something like:
ClientAliveInterval 60
ClientAliveCountMax 5
These settings are the opposite of what I want for throwaway test servers that might incur some large costs! I figured I just needed to make some adjustments.
However, I couldn't seem to get any combination of settings (I tried some others as well, such as the ~/.ssh/config setting ServerAliveCountMax) to work correctly! Running ssh myhost -vvv told me that I was still getting repeated keep-alive messages. After some searching, I came across this AskUbuntu post by Doug Smythies. It turns out the behavior changed from OpenSSH 7-ish to 8-ish.
So, a simpler approach, and what I chose to implement, is setting TMOUT=300 in /etc/profile on the remote server. TMOUT tells the shell to exit after that many seconds of inactivity, which in turn ends the SSH session. I thought this solution was particularly good because it's easy to set.
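To see the mechanism in isolation before automating it, you can set the variable in a throwaway interactive shell on the server; the 300 here is just an example value:

# In an interactive bash session on the remote server:
export TMOUT=300
# After 300 seconds with no input at the prompt, bash exits with an
# auto-logout message, which also closes the SSH session.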
There are some caveats to using this (for one, it only covers interactive shells), but it does what I need. Here's the task I added:
- name: Log out of server on idle
  tags: ['never', 'create', 'update']
  block:
    - name: Update /etc/profile
      ansible.builtin.lineinfile:
        path: /etc/profile
        line: "export TMOUT={{ bash_profile_logout_seconds }}"
        state: present
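After the playbook runs, it's easy to confirm the line made it onto the server; myhost is a placeholder for the host alias, and 300 assumes that's what bash_profile_logout_seconds is set to:

# The line should now be present in /etc/profile:
ssh myhost grep TMOUT /etc/profile

# And a fresh interactive login session should report the value:
#   $ echo $TMOUT
#   300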
Success!
I was happy that this simple setup works perfectly! My instance shut off while I was writing this article, which led me to also lengthen the shutoff time and put it in a variable as seen in the above role.
And One New Problem
When you shut down and then boot an EC2 instance, its public IP address changes (unless you've attached an Elastic IP).
My EC2 role adds an entry to my local ~/.ssh/config for seamless SSH-ing to the EC2 instance. With my shutdown script operating, that means my ~/.ssh/config will be out of date every time the instance is stopped and started again.
My next step will be creating a script to look up the current IP address of my instance based on its instance ID and update my ~/.ssh/config automatically.
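The lookup itself will probably look something like this; just a sketch for now, with a placeholder instance ID:

# Fetch the current public IP address of the instance by its ID
aws ec2 describe-instances \
  --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].PublicIpAddress' \
  --output text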