What is this project about?


The scope of this project is to create a Kubernetes cluster at home using Raspberry Pis, and to automate its deployment and configuration using Ansible.

This is an educational project to explore Kubernetes cluster configurations on ARM architecture and their automation with Ansible.

As part of the project, the goal is to use a lightweight Kubernetes distribution, K3S, and to deploy the following basic cluster services:

  • Distributed block storage for PODs' persistent volumes: Longhorn.
  • Backup/restore solution for the cluster: Velero and Restic.
  • Service mesh architecture: Linkerd.
  • Observability platform: metrics monitoring with Prometheus, logging and analytics with the EFK+LG stack (Elasticsearch-Fluentd/Fluentbit-Kibana + Loki-Grafana), and distributed tracing with Tempo.

The following picture shows the set of open-source solutions used to build this cluster:


Design Principles

  • Use a 64-bit ARM operating system, enabling the use of Raspberry Pi 4B nodes with 8 GB of RAM. Currently, only Ubuntu provides a 64-bit ARM distribution for the Raspberry Pi.
  • Use a lightweight Kubernetes distribution, K3S. Its smaller memory footprint makes it ideal for running on Raspberry Pis.
  • Use distributed block storage technology for pod persistent storage, instead of a centralized NFS system. Kubernetes distributed block storage solutions, such as Rook/Ceph or Longhorn, have added 64-bit ARM support in their latest versions.
  • Use open-source projects under the Cloud Native Computing Foundation (CNCF) umbrella.
  • Use the latest version of each open-source project, in order to test the latest Kubernetes capabilities.
  • Use Ansible to automate the configuration of the cluster, and cloud-init to automate the initial installation of the Raspberry Pis.
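The last principle can be illustrated with a minimal cloud-init user-data sketch that prepares a node for Ansible to take over after first boot. The values below (hostname, user name, key) are hypothetical; the real configuration files live in the pi-cluster repository.

```yaml
#cloud-config
# Hypothetical user-data sketch: set the hostname and create an "ansible"
# user with passwordless sudo and an SSH key, so Ansible playbooks can
# configure the node after the first boot.
hostname: node1
users:
  - name: ansible
    gecos: Ansible user
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...example ansible@pimaster
```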

What I have built so far

From a hardware perspective, I have built two different versions of the cluster:

  • Release 1.0: basic version using a dedicated USB flash drive for each node and a centralized SAN as additional storage.


  • Release 2.0: adding a dedicated SSD disk to each node of the cluster, greatly improving overall cluster performance.


What I have developed so far

From a software perspective, I have developed the following Ansible playbooks and roles:

  1. cloud-init config files and Ansible playbooks/roles for automating the installation and deployment of the Pi Cluster.

    All source code can be found in the following GitHub repository:

    | Repo | Description |
    |------|-------------|
    | pi-cluster | PI Cluster Ansible |
  2. Additionally, several Ansible roles have been developed to automate different configuration tasks on Ubuntu-based servers, so that they can be reused in other projects. These roles are used by the Pi Cluster Ansible playbooks.

    The source code of each Ansible role can be found in its dedicated GitHub repository, and each role is published on Ansible Galaxy to facilitate its installation with the ansible-galaxy command.

    | Ansible role | Description |
    |--------------|-------------|
    | ricsanfre.security | Automate SSH hardening configuration tasks |
    | ricsanfre.ntp | Chrony NTP service configuration |
    | ricsanfre.firewall | NFtables firewall configuration |
    | ricsanfre.dnsmasq | Dnsmasq configuration |
    | ricsanfre.storage | Configure LVM |
    | ricsanfre.iscsi_target | Configure iSCSI target |
    | ricsanfre.iscsi_initiator | Configure iSCSI initiator |
    | ricsanfre.k8s_cli | Install kubectl and Helm utilities |
    | ricsanfre.fluentbit | Configure Fluentbit |
    | ricsanfre.minio | Configure Minio S3 server |
    | ricsanfre.backup | Configure Restic |
  3. This documentation website, picluster.ricsanfre.com, hosted on GitHub Pages.

    Static website generated with Jekyll.

    Source code can be found in the pi-cluster repository under the docs directory.
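Since the roles above are published on Ansible Galaxy, a project reusing them could declare them in a requirements file. This is only a sketch with a subset of the roles; pin versions as needed.

```yaml
# requirements.yml - pull the reusable roles from Ansible Galaxy
# (role names from the table above; versions omitted to fetch the latest)
roles:
  - name: ricsanfre.security
  - name: ricsanfre.ntp
  - name: ricsanfre.firewall
```

The roles can then be installed with `ansible-galaxy install -r requirements.yml`.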

Software used and latest version tested

The software used and the latest version tested of each component:

| Type | Software | Latest version tested | Notes |
|------|----------|-----------------------|-------|
| OS | Ubuntu | 20.04.3 | OS needs to be tweaked for Raspberry Pi when booting from external USB |
| Control | Ansible | 2.12.1 | |
| Control | cloud-init | 21.4 | Version pre-integrated into Ubuntu 20.04 |
| Kubernetes | K3S | v1.24.7 | K3S version |
| Kubernetes | Helm | v3.6.3 | |
| Metrics | Kubernetes Metrics Server | v0.6.1 | Version pre-integrated into K3S |
| Computing | containerd | v1.6.8-k3s1 | Version pre-integrated into K3S |
| Networking | Flannel | v0.19.2 | Version pre-integrated into K3S |
| Networking | CoreDNS | v1.9.1 | Version pre-integrated into K3S |
| Networking | MetalLB | v0.13.7 | Helm chart version: 0.13.7 |
| Service Mesh | Linkerd | v2.12.2 | Helm chart version: linkerd-control-plane-1.9.4 |
| Service Proxy | Traefik | v2.9.1 | Helm chart version: 18.1.0 |
| Storage | Longhorn | v1.3.2 | Helm chart version: 1.3.2 |
| SSL Certificates | Cert-manager | v1.10.0 | Helm chart version: v1.10.0 |
| Logging | ECK Operator | 2.4.0 | Helm chart version: 2.4.0 |
| Logging | Elasticsearch | 8.1.2 | Deployed with ECK Operator |
| Logging | Kibana | 8.1.2 | Deployed with ECK Operator |
| Logging | Fluentbit | 2.0.4 | Helm chart version: 0.21.0 |
| Logging | Fluentd | 1.15.2 | Helm chart version: 0.3.9. Custom Docker image based on official v1.15.2 |
| Logging | Loki | 2.6.1 | Helm chart grafana/loki version: 3.3.0 |
| Monitoring | Kube Prometheus Stack | 0.60.1 | Helm chart version: 41.6.1 |
| Monitoring | Prometheus Operator | 0.60.1 | Installed by Kube Prometheus Stack. Helm chart version: 41.6.1 |
| Monitoring | Prometheus | 2.39.1 | Installed by Kube Prometheus Stack. Helm chart version: 41.6.1 |
| Monitoring | AlertManager | 0.24.0 | Installed by Kube Prometheus Stack. Helm chart version: 41.6.1 |
| Monitoring | Grafana | 9.2.1 | Helm chart version: grafana-6.43.0. Installed as dependency of Kube Prometheus Stack chart (version 41.6.1) |
| Monitoring | Prometheus Node Exporter | 1.3.1 | Helm chart version: prometheus-node-exporter-4.3.1. Installed as dependency of Kube Prometheus Stack chart (version 41.6.1) |
| Monitoring | Prometheus Elasticsearch Exporter | 1.5.0 | Helm chart version: prometheus-elasticsearch-exporter-4.15.1 |
| Backup | Minio | RELEASE.2022-09-22T18-57-27Z | |
| Backup | Restic | 0.12.1 | |
| Backup | Velero | 1.9.3 | Helm chart version: 2.32.1 |

Last Update: Oct 30, 2022