Answer by Grok 3
Hardware and VM Configuration
Physical Machines
- Machine 1: Quad-core CPU, 16GB RAM, 500GB storage (SSD or HDD).
- Machine 2: Quad-core CPU, 16GB RAM, 500GB storage (SSD or HDD).
These specs are modest and suitable for a home lab. Adjust based on your hardware and application requirements.
Virtual Machines
- Create four VMs in total:
- Physical Machine 1: VM1 and VM2.
- Physical Machine 2: VM3 and VM4.
- VM Specifications: Allocate 2 vCPUs and 4GB RAM per VM. This ensures sufficient resources for k3s and applications while keeping usage low. Increase as needed (e.g., Nextcloud may need more RAM).
- Operating System: Install Alpine Linux on each VM for its lightweight design, ideal for containers and VMs.
This setup distributes workloads across both physical machines, providing some resilience if one fails.
Kubernetes Cluster Setup with k3s
k3s is a lightweight Kubernetes distribution, perfect for home labs due to its simplicity.
Cluster Roles
- Master Node: Designate VM1 (on Physical Machine 1) as the master node. In k3s, the master handles the control plane and can run workloads.
- Worker Nodes: Use VM2 (Physical Machine 1), VM3, and VM4 (Physical Machine 2) as worker nodes for application pods.
A single-master setup is chosen for simplicity. High availability (HA) with multiple masters adds complexity (k3s HA requires either embedded etcd or an external datastore such as MySQL or PostgreSQL), which may be overkill for a home lab unless uptime is critical.
Installation Steps
1. Prepare Each VM:

   Update Alpine Linux and install the packages k3s and the NFS client need:

   ```bash
   apk update
   apk add iptables conntrack-tools nfs-utils
   ```
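   k3s on Alpine also needs cgroups enabled at boot. A sketch based on the k3s Alpine Linux notes, assuming the default extlinux bootloader (GRUB-based installs differ):

   ```bash
   # /etc/update-extlinux.conf – append to the existing default_kernel_opts line:
   #   default_kernel_opts="... cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
   update-extlinux   # regenerate the bootloader config
   reboot
   ```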
2. Install k3s on the Master Node (VM1):

   Install and start the k3s server (k3s networking is IPv4-only by default, so no extra flag is needed):

   ```bash
   curl -sfL https://get.k3s.io | sh -
   ```

   Retrieve the node token for the workers:

   ```bash
   cat /var/lib/rancher/k3s/server/node-token
   ```

   Save this token (e.g., <token>).
3. Install k3s on the Worker Nodes (VM2, VM3, VM4):

   Join each worker to the cluster using the master's IP and token:

   ```bash
   curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<token> sh -
   ```

   Replace <master-ip> with VM1's IPv4 address and <token> with the saved node token.
4. Verify the Cluster:

   k3s bundles kubectl, so on the master run:

   ```bash
   kubectl get nodes
   ```

   All four VMs should show as Ready.
Networking
- Ensure VMs are on the same local IPv4 network (e.g., bridged networking in your virtualization software).
- k3s uses Flannel for pod networking by default, which operates over IPv4 in this setup.
- Open port 6443 on the master for the API server and allow traffic between the VMs (or disable firewalls for simplicity); an example rule follows below.
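For instance, a minimal iptables rule on the master, assuming the 192.168.1.0/24 LAN used elsewhere in this guide:

```bash
# Allow Kubernetes API traffic from the local network to the k3s server
iptables -A INPUT -p tcp --dport 6443 -s 192.168.1.0/24 -j ACCEPT
```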
Persistent Storage Setup
Applications like Nextcloud and Vaultwarden need persistent storage. This setup uses an existing NFS server already running on Physical Machine 1.
NFS Server (Pre-Existing)
- Assumption: An NFS server is already running on Physical Machine 1 with an exported directory (e.g., /srv/nfs) accessible over IPv4.
- Verification:
  - Confirm the NFS export is active and visible from all VMs. On any VM, test with:

    ```bash
    showmount -e <IP of Physical Machine 1>
    ```

    This should list the exported directory (e.g., /srv/nfs).
  - Ensure the export allows access from all VM IPv4 addresses (e.g., 192.168.1.0/24). Check /etc/exports on Physical Machine 1 for a line like:

    ```
    /srv/nfs 192.168.1.0/24(rw,sync,no_subtree_check)
    ```
  - Verify mountability from a VM, then unmount:

    ```bash
    mkdir /mnt/test
    mount -t nfs <IP of Physical Machine 1>:/srv/nfs /mnt/test
    umount /mnt/test
    ```
- Storage Capacity: Assume /srv/nfs has sufficient space (e.g., 200GB) for all applications; adjust the size in the PV if different. A quick check is shown below.
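To check the available space on the export, run this on Physical Machine 1:

```bash
df -h /srv/nfs
```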
PersistentVolumes in Kubernetes
- Create a PersistentVolume (PV) in k3s that points at the existing NFS share:

  ```yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-nfs
  spec:
    capacity:
      storage: 200Gi
    accessModes:
      - ReadWriteMany
    storageClassName: nfs
    nfs:
      server: <IP of Physical Machine 1>
      path: /srv/nfs
  ```
- Use ReadWriteMany for apps like Nextcloud that need shared access. For databases (e.g., MySQL), ReadWriteOnce is fine if pods are isolated. Replace <IP of Physical Machine 1> with the actual IPv4 address and adjust path if the export differs (e.g., /exports/data).
- The storageClassName: nfs value is an arbitrary label, but it must match the PVCs below; without it, a PVC would fall through to k3s's default local-path StorageClass and receive dynamically provisioned local storage instead of this PV. Also note that a PV can be bound by only one PVC, so plan one PV per application, typically one per subdirectory of the export.
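For example, a second PV for Vaultwarden could point at its own subdirectory of the same export (a sketch; the /srv/nfs/vaultwarden subdirectory is assumed to already exist on the NFS server):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-vaultwarden
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    server: <IP of Physical Machine 1>
    path: /srv/nfs/vaultwarden   # subdirectory dedicated to this app
```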
PersistentVolumeClaims (PVCs)
Applications request storage through PVCs, which bind to matching PVs; Kubernetes then mounts the NFS share into the pods automatically (this relies on the nfs-utils package installed on each VM earlier).
Example PVC for Nextcloud:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 50Gi
```
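After applying the PV and PVC, confirm they bind (a usage sketch; the filenames are arbitrary):

```bash
kubectl apply -f pv-nfs.yaml -f nextcloud-pvc.yaml
kubectl get pv,pvc   # STATUS should show Bound
```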
Notes on Using Existing NFS
- Since the NFS server is pre-existing, no setup steps are provided here. Ensure it's configured to handle the load of all VMs and applications.
- If the export path or permissions differ from /srv/nfs, update the PV configuration accordingly.
Deploying Applications
Deploy applications as pods using deployments, services, and ingresses.
Container Images
- Use these Docker images:
  - Nextcloud: nextcloud:latest
  - Vaultwarden: vaultwarden/server:latest
  - Paperless-ngx: ghcr.io/paperless-ngx/paperless-ngx:latest
  - Bookstack: lscr.io/linuxserver/bookstack:latest
- Check each project's documentation for specific configuration (e.g., ports, environment variables).
Deployment Example (Nextcloud)
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data        # same PVC as above; create it only once
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: nextcloud:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /var/www/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nextcloud-data
---
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-service
spec:
  selector:
    app: nextcloud
  ports:
    - port: 80
      targetPort: 80
```
- Adapt this pattern for the other apps, adjusting storage and database needs.
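To roll it out, apply the manifest and watch the pod start (the filename is arbitrary):

```bash
kubectl apply -f nextcloud.yaml
kubectl rollout status deployment/nextcloud
kubectl get pods -l app=nextcloud
```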
Databases
- Deploy database pods alongside the apps:
  - MySQL with its own PVC for Bookstack.
  - PostgreSQL with its own PVC for Paperless-ngx.
- Back these with NFS PVs as above; local storage (k3s's built-in local-path provisioner) could improve database performance if the pod is pinned to a node. A MySQL sketch follows below.
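A minimal MySQL sketch for Bookstack, assuming an nfs-class PV with ReadWriteOnce in its accessModes exists for it; the credentials are placeholders (use a Secret in practice), and the environment variables are the official mysql image's standard ones:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bookstack-db
spec:
  accessModes:
    - ReadWriteOnce        # the backing PV must also list ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bookstack-mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bookstack-mysql
  template:
    metadata:
      labels:
        app: bookstack-mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD   # placeholder credentials; change these
              value: changeme-root
            - name: MYSQL_DATABASE
              value: bookstack
            - name: MYSQL_USER
              value: bookstack
            - name: MYSQL_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: db-data
              mountPath: /var/lib/mysql
      volumes:
        - name: db-data
          persistentVolumeClaim:
            claimName: bookstack-db
---
apiVersion: v1
kind: Service
metadata:
  name: bookstack-db
spec:
  selector:
    app: bookstack-mysql
  ports:
    - port: 3306
      targetPort: 3306
```

Bookstack would then reach the database at bookstack-db:3306 via the Service's DNS name. Note that MySQL over NFS can be slow or lock-prone; the local-path option mentioned above avoids that at the cost of pinning the pod to one node.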
Simplifying with Helm
- Use community Helm charts where available:
  - Nextcloud: the nextcloud/nextcloud chart (example commands below).
  - Bookstack: search community charts (e.g., LinuxServer-based ones).
- Install Helm and explore Artifact Hub to find charts.
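For example, the Nextcloud chart installs from its own repository; the persistence values here are assumptions, so check the chart's values.yaml for the exact keys in your chart version:

```bash
helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update
helm install nextcloud nextcloud/nextcloud \
  --set persistence.enabled=true \
  --set persistence.existingClaim=nextcloud-data
```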
Accessing Applications
- Use k3s’s Traefik ingress controller:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
spec:
  rules:
    - host: nextcloud.mydomain.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud-service
                port:
                  number: 80
```
- Map nextcloud.mydomain.local to a node's IPv4 address in /etc/hosts or via a local DNS server. For HTTPS, note that Let's Encrypt only issues certificates for publicly resolvable domains, so an internal-only .local hostname needs a self-signed or private-CA certificate with Traefik instead.
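A hosts-file entry on a client machine would look like this (the IP is an example; use one of your node IPs):

```
# /etc/hosts
192.168.1.101  nextcloud.mydomain.local
```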
Additional Considerations
Resource Monitoring
- Start with 2 vCPUs and 4GB RAM per VM. Use kubectl top, backed by the metrics-server that k3s bundles, to monitor usage and adjust as needed; examples below.
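Typical checks:

```bash
kubectl top nodes      # per-node CPU and memory usage
kubectl top pods -A    # per-pod usage across all namespaces
```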
Resilience
- Single Master: If Physical Machine 1 fails, the master (VM1) and worker VM2 go down and cluster management stops. Pods already running on VM3 and VM4 keep running, but nothing can be rescheduled until the master returns.
- Single NFS Server: If Physical Machine 1 fails, the NFS server becomes unavailable, halting all application data access. Mitigate by:
  - Backing up /var/lib/rancher/k3s (master state) and /srv/nfs (data) to an external drive or Physical Machine 2.
  - Planning to restore the NFS server on Physical Machine 2 if needed (assuming the existing setup allows for this).
Backups
- Back up the NFS directory (e.g., /srv/nfs) and k3s config (/var/lib/rancher/k3s) regularly to an external drive or Physical Machine 2, depending on your existing backup strategy for the NFS server.
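A minimal rsync-over-SSH sketch (the host, user, and destination paths are assumptions; schedule it with cron, and note that /var/lib/rancher/k3s is copied most safely while k3s is stopped):

```bash
# Sync application data from the NFS export to Physical Machine 2
rsync -a --delete /srv/nfs/ backup@<machine-2-ip>:/backups/nfs/

# Snapshot the k3s server state (run on VM1)
rsync -a /var/lib/rancher/k3s/ backup@<machine-2-ip>:/backups/k3s/
```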
Documentation
- Record VM IPv4 addresses, the NFS export path (e.g., /srv/nfs), and manifests for reference. Note the existing NFS server’s configuration details (IP, export path, permissions).
Summary
- Hardware: Two physical machines (quad-core, 16GB RAM, 500GB storage each).
- VMs: Four VMs (two per machine) with Alpine Linux, 2 vCPUs, 4GB RAM each.
- Cluster: One master (VM1), three workers (VM2, VM3, VM4), IPv4-only networking.
- Storage: Pre-existing NFS server on Physical Machine 1 exposed through NFS PersistentVolumes.
- Apps: Deploy Nextcloud, Vaultwarden, Paperless-ngx, Bookstack via Helm/YAML with Traefik.
This setup integrates seamlessly with your existing NFS server, keeping the design lightweight and efficient for an IPv4-only home lab. Ensure the NFS server’s reliability, as it’s a single point of failure!