See the code for this project here: hugo-jsonify-data. One of the features that makes Hugo so versatile is how it converts data into content. While this is an “easy” task for many developers, Hugo also builds that content into an easily managed and hosted static website - not “easy” for a typical CMS. You can easily plunk your Hugo site into an S3 bucket, a Linux server running a web server, or a static-specific hosting service like Netlify, Cloudflare Pages, or Vercel.
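To make the data-to-content idea concrete, here’s a minimal sketch (not this project’s code) that assumes a hypothetical data/books.yaml file containing a list of entries and a partial that renders it:

```html
{{/* layouts/partials/books.html — hypothetical sketch: render every
     entry in data/books.yaml as a list item */}}
<ul>
  {{ range .Site.Data.books }}
    <li>{{ .title }} by {{ .author }}</li>
  {{ end }}
</ul>
```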
Building static sites with Hugo is fast and fun! I’ve built out a few sites and am constantly delighted by the new features that are introduced into Hugo. One of these features is “mounts”. What’s a Mount and What Does it Do? Think of a mount as part of Hugo modules. Modules plug into your project to replace or add components in the standard Hugo directories: static, content, layouts, data, assets, i18n, and archetypes. As part of this, a Hugo mount simply lets you mount directories that are external to your project into your project.
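A minimal sketch of what that looks like in the site config, assuming TOML and a hypothetical external directory:

```toml
# hugo.toml — mount a directory outside the project as the site's data dir
[module]
  [[module.mounts]]
    source = "../shared-data"   # placeholder external directory
    target = "data"
    # note: once a target has an explicit mount, its default mount is ignored
```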
This is a brief description of my local testing of the Ansible role that I created here. That role installs Caddy, then manages /etc/hosts and the Caddyfile so that local domains can be used for sites in local development (instead of localhost:8080, for instance). Reverse Proxying Docker Services I thought maybe Docker would do some funny stuff that would prevent me from reverse proxying services running in Docker containers, but this is not the case.
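As a rough illustration of the kind of entry the role ends up writing (domain and port are placeholders, assuming Caddy v2):

```
# Caddyfile — proxy a hypothetical local domain to a container
# publishing port 8080 on the host
somesite.lcl {
    tls internal
    reverse_proxy localhost:8080
}
```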
Files for this project are located at https://github.com/branhamt/proxy-local-dev. It’s pretty handy to set up a domain on your local machine that points to a local server/port combo for local development. Instead of typing localhost:8080 in your address bar, you can type somesite.lcl, or some other domain name you’ve chosen. This is really useful when you don’t want to have to remember port numbers for all the things running on your machine. Since it’s a multi-part process, it can be a bit of a chore to set up.
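The first part of that process is just a hosts entry; a sketch (the domain is a placeholder):

```
# /etc/hosts — point the chosen local domain at the loopback interface;
# a reverse proxy then maps the domain to the right port
127.0.0.1   somesite.lcl
```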
I decided to go for my Certified Kubernetes Administrator (CKA) certification, provided through The Linux Foundation. It was arduous! My Score I scored an 85 on the exam - my first try (you get one free retake). I didn’t find any of the questions particularly difficult, although I did find the wording on some of them to be either vague or weird, leaving me to wonder whether I had actually fulfilled a question’s requirements.
I wanted to set up some Python virtualenvs on a laptop, and wanted to improve the existing Ansible automation. Goals were: Install multiple virtualenvs. Install each virtualenv according to a requirements file. Don’t install if the virtualenv exists. Don’t install if the requirements.txt doesn’t exist. Here is our vars file:

```yaml
virtualenvs:
  - { reqs: "/some/dir/project1", venv: "proj1" }
  - { reqs: "/some/dir/project2", venv: "proj2" }
  - { reqs: "/some/dir/project3", venv: "proj3" }
```

And here is a tasks file (debug is included if you want to have some output):
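The original tasks file isn’t reproduced in this summary, but a minimal sketch of the idea, assuming ansible.builtin.stat and ansible.builtin.pip and a hypothetical ~/.virtualenvs location, could look like this:

```yaml
# Sketch only — not the actual role. Check for requirements.txt per project,
# then build the virtualenv from it (the pip module is idempotent here).
- name: Check whether each requirements file exists
  ansible.builtin.stat:
    path: "{{ item.reqs }}/requirements.txt"
  register: reqs_files
  loop: "{{ virtualenvs }}"

- name: Show which projects have a requirements file
  ansible.builtin.debug:
    msg: "{{ item.item.venv }}: {{ item.stat.exists }}"
  loop: "{{ reqs_files.results }}"

- name: Install virtualenv from requirements
  ansible.builtin.pip:
    requirements: "{{ item.item.reqs }}/requirements.txt"
    virtualenv: "{{ ansible_env.HOME }}/.virtualenvs/{{ item.item.venv }}"  # assumed location
  loop: "{{ reqs_files.results }}"
  when: item.stat.exists
```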
If I store Terraform state locally and then delete the local .tfstate files, Terraform won’t know what resources I’ve created in AWS or other providers. I actually experienced this while I was setting up my Terraform workstation with Ansible. I hadn’t set up a terraform destroy step before deleting an instance and ended up needing to delete some VPCs in the AWS console, which is a bit of a slog! I wanted to manage my state somewhere else, so I decided to try an account on HashiCorp.
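With remote state in HashiCorp’s Terraform Cloud, the backend config looks roughly like this (organization and workspace names are placeholders, assuming a recent Terraform version with the cloud block):

```hcl
terraform {
  cloud {
    organization = "my-org"               # placeholder
    workspaces {
      name = "throwaway-workstation"      # placeholder
    }
  }
}
```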
I’ve been experimenting with keyboards and layouts lately, but I find a lot of the typical Linux keyboard config programs don’t work reliably. I always seem to have some trouble tweaking them or actually getting them to work. Enter KMonad. KMonad lets you create a config file, per keyboard, with a particular layout. It supports layers, key combos, custom key mappings - pretty much whatever you want to do with your keyboard.
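A minimal sketch of a KMonad config, just to show the shape (defcfg, defsrc, deflayer); the device path is a placeholder and the only remap here is caps lock to escape:

```
(defcfg
  input  (device-file "/dev/input/by-id/usb-Example_Keyboard-event-kbd") ;; placeholder
  output (uinput-sink "KMonad example")
  fallthrough true)  ;; keys not handled below pass through untouched

;; the physical keys we care about
(defsrc
  caps a s d f)

;; the base layer: caps lock becomes escape, the rest stay the same
(deflayer base
  esc  a s d f)
```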
This is a culmination of posts that I’ve made up until this point. Code for this post is here: EC2 Terraform Remote Workstation and EC2 SSH helper scripts. What I Wanted I wanted to set up a throwaway server so that I could test and learn some new things, particularly devops. My requirements were: Easy to start. Easy to discard. Easy SSH management. Cheap to run. So, I created the above repos to help me create that type of environment.
Since I’m making a “throwaway” workstation and Terraform server, I need to be able to create and destroy EC2 instances on demand. I am experimenting with Terraform, and want to be able to destroy infrastructure created with Terraform before my EC2 instance is deleted. My terraform.yml looks like this:

```yaml
---
- name: Manage EC2 instance
  hosts: localhost
  roles:
    - role: ec2
  vars_files:
    - vars/tf_workstation.yml
    - vars/aws_auth_vars.yml
  tasks:
    - name: Refresh inventory
      ansible.
```
See the repo for this script here. I like efficiency and minimizing keystrokes. For my EC2 management script, I needed to specify the host’s name like this: ec2 start some_host_name I didn’t really want to type out the host name every time, so I threw together an autocompletion script. This way, I can type ec2 start so<tab> and have it autocomplete to ec2 start some_host_name. I don’t write completions all the time, so a good guide is An Introduction to Bash Completion.
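A stripped-down sketch of the idea (the host list is hard-coded here as a placeholder; the real script would read it from somewhere sensible):

```bash
# Complete the host-name argument of the `ec2` command
_ec2_complete() {
  local cur=${COMP_WORDS[COMP_CWORD]}
  local hosts="some_host_name another_host"   # placeholder list
  COMPREPLY=( $(compgen -W "$hosts" -- "$cur") )
}
complete -F _ec2_complete ec2
```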
See the repo for this script here. In the last post, I created an Ansible role to configure an EC2 instance to shut down when no active SSH connection was detected. I still had a couple of problems, though: Problems First, when an EC2 instance reboots, its public IP address will change (if it’s configured to have one). This meant that my local ~/.ssh/config would be out of date. I’m connecting to an IP in the Hostname setting, so I won’t connect if the IP is wrong!
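For reference, the kind of entry that goes stale (host name, IP, and key path are placeholders):

```
# ~/.ssh/config
Host some_host_name
    Hostname 203.0.113.10        # the public IP that changes on reboot
    User ubuntu
    IdentityFile ~/.ssh/some_host_name.pem
```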
I was looking for a way to shut down my EC2 instances when not in use. The scenario is that when I’m done for the day and I forget to shut down, I want it to be done for me. Forgetting to shut down a t2.micro isn’t a big deal, but larger instances cost more, and it just doesn’t seem efficient to run something when you’re not using it. The Options On investigation, I found two options:
It turns out I was a little rusty on how Ansible become works, and how I should apply it to the Ubuntu image I was using on EC2 instances. In this post, I was using this command:

```bash
sudo awk '/root\s+ALL=\(ALL:ALL\) ALL/ {print; print "ubuntu ALL=(ALL) NOPASSWD:ALL"; next}1' /etc/sudoers | sudo tee /etc/sudoers
```

to give myself passwordless sudo through Ansible. Notable things: I was running this using the User Data setting, which runs scripts on an instance’s first boot.
I wanted to add my AWS access keys to my instance without manually copying them over. I’m using Ansible to manually provision a Terraform instance, so it’ll need the keys to interact with AWS services. Of course, you can’t just add the keys to your repo: anyone with access to the repo can see your keys, which is especially bad if your repo is semi-public or fully public. The solution is ansible-vault.
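The basic workflow looks like this (the file and playbook names mirror the earlier terraform.yml example, but treat them as placeholders):

```bash
# Encrypt the vars file that holds the AWS keys
ansible-vault encrypt vars/aws_auth_vars.yml

# Ansible decrypts it at run time when given the vault password
ansible-playbook terraform.yml --ask-vault-pass
```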
I started with the EC2 instance that I built here. I’m building some things out in AWS, so I decided to automate the creation of a Terraform server to handle building infrastructure. I’ll start out with something rudimentary and build it up. The end goal is the ability to create and provision the server from my laptop, pull project code, do some Terraform tasks, then terminate the instance. This is probably not very typical, but it’s good practice for devops.
Normally, when you SSH to a new server, you’ll get a message that looks like this:

```
The authenticity of host ***** can't be established.
RSA key fingerprint is *****.
Are you sure you want to continue connecting (yes/no)?
```

This becomes annoying when you’re testing new server setups and you’re quickly building and tearing down new servers. This message is mostly controlled by the StrictHostKeyChecking setting in your ~/.ssh/config file. There are many recommendations on the web to simply disable StrictHostKeyChecking like this:
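The pattern those recommendations describe is roughly this (a sketch, not an endorsement):

```
# ~/.ssh/config — disables host key checking everywhere; convenient but risky
Host *
    StrictHostKeyChecking no
```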
I’ve been sort of relearning how to use Ansible and trying to improve my automations. To that end, I’m building out a project from the ground up to try to fill in holes in my knowledge and maybe learn some new tricks. In this article, I’ll create an Ansible playbook that: creates an EC2 instance, creates SSH keys for that instance, configures my laptop to be able to SSH in immediately, and destroys the instance and keys and removes the corresponding config from my laptop. This will be a pretty simple playbook, and security is not a priority yet, so there will be a few security issues and drawbacks that I’ll talk about at the end.
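A rough sketch of the creation side, assuming the amazon.aws collection (the key name, AMI ID, and paths are placeholders, and this isn’t the article’s actual playbook):

```yaml
- name: Create a throwaway EC2 instance
  hosts: localhost
  tasks:
    - name: Create an SSH key pair in AWS
      amazon.aws.ec2_key:
        name: throwaway-key                 # placeholder
      register: keypair

    - name: Save the private key locally
      ansible.builtin.copy:
        content: "{{ keypair.key.private_key }}"
        dest: ~/.ssh/throwaway-key.pem
        mode: "0600"
      when: keypair.changed                 # private key is only returned on creation

    - name: Launch the instance
      amazon.aws.ec2_instance:
        name: throwaway
        key_name: throwaway-key
        instance_type: t2.micro
        image_id: ami-0123456789abcdef0     # placeholder AMI
        state: running
```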
I had a problem with SSH today that I don’t think I’ve experienced before:

```
Received disconnect from <ip address> port 22:2: Too many authentication failures
```

I am using keys created in AWS via Ansible, with the private identity key saved to my local machine. I’ve been tweaking some settings to play with things and was repeatedly destroying instances, security groups, and AWS-created keys. I run Ubuntu 22.04 with ssh-agent, if you want a couple of clues to the solution.
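One common cause with ssh-agent is that it offers every loaded key before the right one and the server cuts you off after a few attempts; a quick way to test that theory (the key path is a placeholder, and this may or may not be where the post lands):

```bash
# Offer only the key named on the command line, not everything in the agent
ssh -o IdentitiesOnly=yes -i ~/.ssh/aws-created-key.pem ubuntu@<ip address>
```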
I typically run Ansible in a virtualenv instead of using my Linux distro’s packaged version. I started having some trouble running AWS playbooks, getting this error message on a playbook:

```
Failed to import the required Python library (botocore and boto3) on x270's Python /usr/bin/python3.
Please read the module documentation and install it in the appropriate location.
If the required library is installed, but Ansible is using the wrong Python interpreter,
please consult the documentation on ansible_python_interpreter
```

Here are the conditions: