Answer By Grok3
Hardware and VM Configuration
Physical Machines
- Machine 1: Quad-core CPU, 16GB RAM, 500GB storage (SSD or HDD).
- Machine 2: Quad-core CPU, 16GB RAM, 500GB storage (SSD or HDD).
These specifications are modest and suitable for a home lab. Adjust based on your specific hardware and application needs.
Virtual Machines
- Create four VMs in total:
- Physical Machine 1: VM1 and VM2.
- Physical Machine 2: VM3 and VM4.
- VM Specifications: Allocate 2 vCPUs and 4GB RAM per VM. This provides sufficient resources for k3s and applications while maintaining efficiency. Adjust as needed (e.g., Nextcloud may require more RAM).
- Operating System: Install Alpine Linux on each VM for its lightweight and efficient nature, ideal for containers and VMs.
This configuration distributes the workload across both physical machines, offering resilience if one machine fails.
Kubernetes Cluster Setup with k3s
k3s is a lightweight Kubernetes distribution, perfect for home labs due to its minimal setup requirements.
Cluster Roles
- Master Node: Designate VM1 (on Physical Machine 1) as the master node. In k3s, the master manages the control plane and can also run workloads.
- Worker Nodes: Use VM2 (Physical Machine 1), VM3, and VM4 (Physical Machine 2) as worker nodes to run application pods.
A single-master setup is used for simplicity. High availability (HA) with multiple server nodes is possible but adds complexity (k3s HA requires either embedded etcd across at least three servers or an external datastore such as MySQL or PostgreSQL), which is usually unnecessary for a home lab unless uptime is critical.
Installation Steps
1. Prepare Each VM: Update Alpine Linux and install the packages k3s and the NFS client need:

```bash
apk update
apk add iptables ip6tables conntrack-tools nfs-utils
```

2. Install k3s on the Master Node (VM1): Install and start the k3s server:

```bash
curl -sfL https://get.k3s.io | sh -
```

Retrieve the node token needed to join worker nodes:

```bash
cat /var/lib/rancher/k3s/server/node-token
```

Save this token; it is referenced below as <token>.

3. Install k3s on Worker Nodes (VM2, VM3, VM4): On each worker VM, join the cluster using the master's IP and token:

```bash
curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<token> sh -
```

Replace <master-ip> with VM1's IP address and <token> with the saved token.

4. Verify the Cluster: On the master node, check the cluster status (k3s bundles kubectl on the server, so no separate install is needed):

```bash
kubectl get nodes
```

All four VMs should appear with a Ready status.
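The output should look roughly like this (node names, ages, and the k3s version will differ in your cluster):

```
NAME   STATUS   ROLES                  AGE   VERSION
vm1    Ready    control-plane,master   5m    v1.29.4+k3s1
vm2    Ready    <none>                 3m    v1.29.4+k3s1
vm3    Ready    <none>                 3m    v1.29.4+k3s1
vm4    Ready    <none>                 2m    v1.29.4+k3s1
```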
Networking
- Ensure all VMs are on the same local network (e.g., using bridged networking in your virtualization software).
- k3s uses Flannel by default for pod networking, which should work out of the box.
- Open port 6443/tcp on the master node for the API server, and make sure firewall rules allow traffic between the VMs on 8472/udp (Flannel VXLAN) and 10250/tcp (kubelet metrics), or disable firewalls entirely for simplicity in a home lab.
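If you keep a firewall on, a minimal sketch using the iptables package installed earlier could look like the following; the 192.168.1.0/24 subnet is a placeholder for your LAN:

```bash
# Allow the k3s API server (needed on the master node only).
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 6443 -j ACCEPT
# Allow Flannel VXLAN overlay traffic between nodes (all VMs).
iptables -A INPUT -p udp -s 192.168.1.0/24 --dport 8472 -j ACCEPT
# Allow kubelet metrics traffic between nodes (all VMs).
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 10250 -j ACCEPT
```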
Persistent Storage Setup
Applications like Nextcloud and Vaultwarden require persistent storage for data such as files and databases. NFS is used here for its simplicity and flexibility.
NFS Servers
- Set up an NFS server on each physical machine:
- Physical Machine 1: Export /srv/nfs1 (e.g., 100GB).
- Physical Machine 2: Export /srv/nfs2 (e.g., 100GB).
- Configure NFS to allow access from all VM IP addresses on your local network.
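As a rough sketch, assuming a systemd-based host OS on the physical machines and a 192.168.1.0/24 LAN (adjust both to your environment):

```bash
# On Physical Machine 1; repeat on Machine 2 with /srv/nfs2.
mkdir -p /srv/nfs1

# Export the directory to the local subnet with read-write access.
echo "/srv/nfs1 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports

exportfs -ra                        # apply the export list
systemctl enable --now nfs-server   # start the NFS server and enable it at boot
```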
PersistentVolumes in Kubernetes
Create two PersistentVolumes (PVs) in k3s to utilize these NFS shares:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <IP of Physical Machine 1>
    path: /srv/nfs1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs2
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <IP of Physical Machine 2>
    path: /srv/nfs2
```

Use ReadWriteMany for applications like Nextcloud, where multiple pods may need shared access. For databases (e.g., MySQL for Bookstack), ReadWriteOnce is sufficient; if you request it, also list ReadWriteOnce in the PV's accessModes, since a claim only binds to a PV that offers every mode it requests.
PersistentVolumeClaims (PVCs)
- Each application creates a PVC to request storage from these PVs. Because k3s ships with the local-path StorageClass as the cluster default, set storageClassName: "" in each PVC so it binds to the static NFS PVs instead of triggering local-path provisioning. The nfs-utils package installed earlier lets the kubelet on any node mount the shares.
Why NFS?
- NFS allows any node to access storage over the network, providing flexibility in pod scheduling. While local storage (e.g., hostPath) is simpler, it ties data to specific nodes, reducing resilience. In a home lab, NFS performance over a local network should be adequate.
Deploying Applications
Deploy your applications as pods in the k3s cluster using deployments, services, and ingresses.
Container Images
- Use official or community Docker images:
- Nextcloud: nextcloud:latest
- Vaultwarden: vaultwarden/server:latest
- Paperless-ngx: ghcr.io/paperless-ngx/paperless-ngx:latest
- Bookstack: lscr.io/linuxserver/bookstack:latest
- Refer to each application’s documentation for specific requirements (e.g., environment variables, ports).
Deployment Example (Nextcloud)
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # bind to the static NFS PVs, not k3s's default local-path class
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: nextcloud:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /var/www/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nextcloud-data
---
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-service
spec:
  selector:
    app: nextcloud
  ports:
    - port: 80
      targetPort: 80
```

Create similar manifests for other applications, adjusting storage and database configurations as needed.
Databases
- Applications like Bookstack (MySQL) and Paperless-ngx (PostgreSQL) require separate database pods:
- Deploy a MySQL pod with a PVC for Bookstack.
- Deploy a PostgreSQL pod with a PVC for Paperless-ngx.
- Use NFS-backed PVs for simplicity, though local storage could improve database performance if pods are pinned to specific nodes.
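For example, a Bookstack database might look like the sketch below. The claim name, database name, and credentials are placeholders (use Kubernetes Secrets for real credentials); the environment variables are those documented for the official mysql image:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bookstack-db-data
spec:
  accessModes:
    - ReadWriteMany      # matches the NFS PVs above; safe with a single replica
  storageClassName: ""   # bind to the static NFS PVs
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bookstack-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bookstack-db
  template:
    metadata:
      labels:
        app: bookstack-db
    spec:
      containers:
        - name: mysql
          image: mysql:8
          env:
            # Placeholder credentials for illustration only.
            - name: MYSQL_ROOT_PASSWORD
              value: changeme-root
            - name: MYSQL_DATABASE
              value: bookstack
            - name: MYSQL_USER
              value: bookstack
            - name: MYSQL_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: db-data
              mountPath: /var/lib/mysql
      volumes:
        - name: db-data
          persistentVolumeClaim:
            claimName: bookstack-db-data
```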
Simplifying with Helm
- Use Helm charts for easier deployment:
- Nextcloud: helm install nextcloud nextcloud/nextcloud
- Bookstack: Available via community charts built on the LinuxServer image.
- Install Helm on your cluster and search for charts on Artifact Hub.
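A minimal sketch for the Nextcloud case, assuming the chart repo the Nextcloud project publishes at https://nextcloud.github.io/helm/:

```bash
# Install Helm via its official install script.
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Point Helm at the k3s kubeconfig.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Add the Nextcloud chart repository and install the chart.
helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update
helm install nextcloud nextcloud/nextcloud
```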
Accessing Applications
- Use k3s’s built-in Traefik ingress controller to expose applications:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
spec:
  rules:
    - host: nextcloud.mydomain.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud-service
                port:
                  number: 80
```

- Edit your local /etc/hosts file to map nextcloud.mydomain.local to a node's IP (see the example after this list), or set up a local DNS server.
- Optionally, enable HTTPS with Let’s Encrypt via Traefik for added security.
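For example, an /etc/hosts entry on your workstation could look like this (192.168.1.101 is a placeholder for any cluster node's IP):

```
192.168.1.101 nextcloud.mydomain.local
```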
Network Diagram
The following diagram illustrates the setup, showing the physical machines, VMs, NFS servers, and the Kubernetes cluster:
```
+--------------------------------+     +--------------------------------+
|       Physical Machine 1       |     |       Physical Machine 2       |
|                                |     |                                |
|  +---------+    +---------+    |     |  +---------+    +---------+    |
|  |   VM1   |    |   VM2   |    |     |  |   VM3   |    |   VM4   |    |
|  | (Master)|    | (Worker)|    |     |  | (Worker)|    | (Worker)|    |
|  +---------+    +---------+    |     |  +---------+    +---------+    |
|        NFS1 (/srv/nfs1)        |     |        NFS2 (/srv/nfs2)        |
+---------|----------------------+     +---------|----------------------+
          |                                      |
          +------------------+-------------------+
                             |
                             | Local Network
                             |
                  +-------------------+
                  | Kubernetes Cluster|
                  | (k3s)             |
                  | - Master: VM1     |
                  | - Workers: VM2,   |
                  |   VM3, VM4        |
                  +-------------------+
                             |
                  +-------------------+
                  | Applications      |
                  | - Nextcloud       |
                  | - Vaultwarden     |
                  | - Paperless-ngx   |
                  | - Bookstack       |
                  | (Deployed on      |
                  |  worker nodes)    |
                  +-------------------+
```

- Physical Machines: Host the VMs and NFS servers.
- VMs: Run Alpine Linux and form the Kubernetes cluster.
- NFS Servers: Provide persistent storage for applications.
- Kubernetes Cluster: Manages application deployments and storage access.
Additional Considerations
Resource Monitoring
- Start with 2 vCPUs and 4GB RAM per VM. Use kubectl top to monitor resource usage and adjust if applications like Nextcloud require more resources.
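k3s deploys metrics-server by default, so these commands work out of the box:

```bash
kubectl top nodes      # per-node CPU and memory usage
kubectl top pods -A    # per-pod usage across all namespaces
```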
Resilience
- With a single master node, if Physical Machine 1 fails, the cluster loses management capabilities; pods already running on VM3 and VM4 keep serving, but nothing can be rescheduled or changed until the master returns. For critical setups, consider backing up the master node configuration (/var/lib/rancher/k3s) to restore elsewhere.
Backups
- Regularly back up NFS directories (e.g., /srv/nfs1, /srv/nfs2) to an external drive or another machine to protect application data.
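For instance, a simple rsync job, run on each physical machine for its own export (/mnt/backup is a placeholder for an external drive or remote mount):

```bash
# On Physical Machine 1; repeat on Machine 2 with /srv/nfs2.
rsync -a --delete /srv/nfs1/ /mnt/backup/nfs1/
```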
Documentation
- Record VM IPs, NFS configurations, and Kubernetes manifests for easy recovery or scaling.
Summary
This setup provides a lightweight, functional home lab Kubernetes cluster using k3s:
- Hardware: Two physical machines (quad-core, 16GB RAM, 500GB storage each).
- VMs: Four VMs (two per machine) running Alpine Linux, with 2 vCPUs and 4GB RAM each.
- Cluster: One master node (VM1) and three worker nodes (VM2, VM3, VM4).
- Storage: NFS servers on both physical machines, with PersistentVolumes for application data.
- Applications: Deploy Nextcloud, Vaultwarden, Paperless-ngx, Bookstack, etc., via Helm or YAML, using Traefik for access.

This configuration is simple to manage, leverages Alpine's efficiency, and meets home lab needs while leaving room for future experimentation and scaling!