
Building a Homelab with Rancher K3S

A Kubernetes homelab cluster using K3S and ARM64 SBCs.

  • devops
  • gitlab
  • kubernetes

Back in 2020, I decided to learn Kubernetes and dove in by attempting to set up a cluster as a day project. Big mistake. The learning curve felt exponential, and I ran into incompatibilities with the ARM devices I was using as a platform. I retreated to HashiCorp Nomad and Consul, which, honestly, were very enjoyable to work with. But I didn’t like to admit defeat, so in late 2021 I revisited Kubernetes.

I discovered Rancher K3S, a lightweight Kubernetes distribution that could run on my ARM hardware and had the added benefit of a small resource footprint. I used Ansible to manage the configuration of the devices involved.

This project was undertaken as a personal learning exercise to gain a deeper understanding of how Kubernetes works and how to apply it to the consumer hardware I had available. While it was developed purely out of curiosity, the source code is available for reference at GitLab/Industrial-Banana.

Hardware

Five single-board computers running Ubuntu Server 20.04/21.04 formed the cluster, supported by network and local storage:

  • 3× Raspberry Pi (4–8 GB RAM, ARM64)
  • 1× RockPro64 (4 GB RAM, ARM64)
  • 1× LattePanda Delta (4 GB RAM, x86)
  • NAS running UnRAID with 9 TB of platter and SSD storage
  • SSD and NVMe storage devices attached to each node

The NAS acted as an NFS storage provider and hosted the cluster’s PostgreSQL database, which K3S uses as an external datastore in place of etcd. The SSD and NVMe drives attached to each device formed distributed storage for the cluster using Rancher Longhorn, automating the process of creating volume claims for pods and deployments.
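As a sketch of how Longhorn-backed storage is typically consumed (the claim name and size here are illustrative, not taken from the actual repo), a pod requests a volume with a PersistentVolumeClaim against the `longhorn` StorageClass, and Longhorn provisions and replicates the volume across the nodes’ attached drives:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn    # Longhorn's default StorageClass
  resources:
    requests:
      storage: 2Gi              # illustrative size
```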

Software

Rancher K3S

K3S is a lightweight Kubernetes distribution created by Rancher Labs. It is ideal for devices with constrained resources, such as SBCs and edge devices, and supports ARM chipsets.
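For context, a K3S install against an external PostgreSQL datastore is a one-liner; the flags my Ansible roles actually pass may differ, and the hostnames and credentials below are placeholders:

```shell
# Install K3S in server mode, pointing it at an external PostgreSQL
# database instead of the embedded datastore (placeholder endpoint).
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="postgres://k3s:password@nas.local:5432/k3s"

# Worker nodes join using the server URL and its node token
# (placeholder host; the token lives on the server at
# /var/lib/rancher/k3s/server/node-token).
curl -sfL https://get.k3s.io | K3S_URL=https://server.local:6443 \
  K3S_TOKEN=<node-token> sh -s - agent
```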

Ansible

Ansible was used to provision the devices and maintain their configuration via several roles:

  • Bootstrap and provision devices
  • Configure attached storage
  • Install K3S to all cluster nodes
  • Configure server and worker nodes
  • Install Helm, set up secrets encryption, and deploy Argo CD to the cluster
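Roles like these are typically driven from a single playbook run; the playbook and inventory names below are illustrative, not the actual file names from the repo:

```shell
# Run the full provisioning pass against every node in the inventory.
# 'site.yml' and the inventory path are hypothetical names.
ansible-playbook -i inventory/hosts.ini site.yml

# Re-run only the K3S install role against the server nodes,
# assuming the roles are tagged accordingly.
ansible-playbook -i inventory/hosts.ini site.yml --limit servers --tags k3s
```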

Argo CD

One of the Ansible roles deploys Argo CD to the cluster. When first deployed, Argo is configured to connect to a git repo I defined containing Kubernetes manifest files built with Kustomize. These manifests define the configuration for the services listed below and kick off an ordered sequence of deployments that installs:

  • Rancher Longhorn: Distributed persistent storage management for the cluster.
  • Cert-Manager: Automated TLS generation for ingress endpoints. Uses Let’s Encrypt and CloudFlare’s DNS API.
  • Prometheus Operator: Generates Kubernetes ServiceMonitors for services in the cluster; monitors nodes and cluster services.
  • AlertManager: Routes alerts from Prometheus. Sends alerts and status updates to a private Telegram channel.
  • MetalLB: Load balancer and IP address pool for services.
  • Traefik: Ingress manager. Services that expose HTTP endpoints receive an HTTPS subdomain URL.
  • Loki: Centralized logging from cluster nodes.
  • Kured: Controlled cluster reboots.
  • Kubernetes Dashboard
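The GitOps hookup behind that sequence can be sketched as an Argo CD `Application` per service, pointing at a Kustomize directory in the manifest repo; the repo URL, names, and paths below are placeholders, and the `sync-wave` annotation is one way Argo CD orders deployments:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: longhorn                  # hypothetical application name
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "0"   # lower waves deploy first
spec:
  project: default
  source:
    repoURL: https://gitlab.com/example/cluster-manifests.git  # placeholder
    targetRevision: main
    path: longhorn                # Kustomize directory for this service
  destination:
    server: https://kubernetes.default.svc
    namespace: longhorn-system
  syncPolicy:
    automated:
      prune: true                 # remove resources deleted from git
      selfHeal: true              # revert out-of-band changes
```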

Argo continues to monitor these manifests. New commits trigger Argo to pull the updated manifests and apply them to the cluster.
