Ansible and Terraform
Ansible is great for configuration management and does a good job of creating an environment that is (mostly) free of configuration drift. Using AWX, I’ve created a fair number of playbooks and roles which are executed regularly to keep things up to date and humming along.
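As a rough illustration of the kind of thing those scheduled jobs do (the playbook name, targets, and task here are placeholders, not my actual roles), a recurring AWX job might run something along these lines:

```yaml
# Hypothetical example playbook; hosts and tasks are placeholders,
# not the actual roles executed from AWX.
- name: Keep base packages up to date
  hosts: all
  become: true
  tasks:
    - name: Upgrade all packages (Debian/Ubuntu hosts)
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true
```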
Terraform is a tool designed to enable Infrastructure-as-Code (IaC), meaning that the infrastructure is declaratively defined in code which can be managed by code versioning tools such as git.
Cluster Storage
Now that the new host is up and functioning in the cluster, I need to start thinking about allocating the storage.
The new host is showing the following storage:
local
local-lvm
qnap-iso
qnap-zfs

The original Proxmox host has local and local-lvm storage as well, but I confirmed that these are not the same (which should be obvious). They are local to that host and reside on the disk that was selected for the Proxmox installation (a 500GB SSD).
Redundancy Redundancy
This particular saga and the ones that will follow began with a simple sale on meh.com. A few months ago they had as their daily deal a motherboard for $35 USD, so I snagged it, not knowing what I would do with it. I began putting together a list of components I wanted to purchase with it, again not knowing exactly what I wanted to do with it.
I knew I wanted to add some redundancy to my self-hosted/homelab setup, which is built around a QNAP NAS and a single Proxmox host.
AWX Update
I recently updated one of my old posts about AWX because I discovered that the nfs-client storage provisioner, which automatically creates persistent volumes on an existing NFS mount, had stopped working. Not only had it stopped working, it had been deprecated, and I had to find a replacement that worked.
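For context, the point of a provisioner like that is dynamic provisioning: a PersistentVolumeClaim referencing its StorageClass gets a directory carved out on the NFS export automatically. A minimal sketch of such a claim (the storage class name and size are assumptions, not my actual configuration):

```yaml
# Hypothetical claim; the storage class name and size are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  storageClassName: nfs-client   # served by the NFS provisioner
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```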
In the process, I noticed that I had never updated that post to cover the newer way AWX is installed and updated on Kubernetes.
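These days AWX is deployed on Kubernetes through the AWX Operator, where the installation is described by an AWX custom resource; roughly something like the sketch below (the name and service type are placeholders, not my actual deployment):

```yaml
# Hypothetical AWX resource managed by the AWX Operator;
# the name and service_type are placeholders.
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  service_type: nodeport
```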
The Rabbit Hole
I’ve just recently passed the one-year anniversary of setting up my home Kubernetes cluster, in which I used VMs running RancherOS on a Proxmox hypervisor to quickly spin up nodes. I then used Rancher server to initiate the cluster, which also provided a convenient GUI to get some workloads up and going without needing to learn all of the concepts at once. It was a good strategy.
Even as I’ve been setting up new workloads and automating changes with Ansible, along with more traditional workload definitions in YAML, I continue to manage some of the early workloads, such as Nextcloud, directly through the Rancher GUI.
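Moving one of those GUI-managed workloads over to YAML would just mean writing out the same definition declaratively; a bare-bones sketch of what that could look like for Nextcloud (the image tag, replica count, and port are illustrative assumptions, not the definition currently living in Rancher):

```yaml
# Minimal illustrative Deployment; not the actual Nextcloud
# workload currently managed through the Rancher GUI.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: nextcloud:stable
          ports:
            - containerPort: 80
```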