2 VMs for worker nodes
Here’s the updated network diagram with an additional worker node (VM3) on Physical Machine 1:
Network Diagram
+---------------------------------------------------------------------------------------------------+
| Home Lab Network |
| (IPv4-only, e.g., 192.168.1.0/24) |
| |
| +-----------------------------+ +-----------------------------+ |
| | Physical Machine 1 | | Physical Machine 2 | |
| | +-------------------------+ | | +-------------------------+ | |
| | | NFS Server | | | | | | |
| | | - IP: 192.168.1.5 | | | | VM2 (Worker Node) | | |
| | | - Exports: /mnt/nfs | | | | - IP: 192.168.1.11 | | |
| | | (NFS v4, RW access) | | | | - k3s Agent | | |
| | +------------|-------------+ | | | - Apps: | | |
| | | | | | - Vaultwarden | | |
| | +------------|-------------+ | | | - Paperless-ngx | | |
| | | VM1 (Master Node) | | | +------------|-------------+ | |
| | | - IP: 192.168.1.10 | | +---------------|--------------+ |
| | | - k3s Control Plane | | | |
| | | - kube-apiserver | | | |
| | | - etcd (embedded) | | | |
| | | - MetalLB Controller | | | |
| | | - Nginx Ingress | | | |
| | +------------|-------------+ | | |
| | | | | |
| | +------------|-------------+ | | |
| | | VM3 (Worker Node) | | | |
| | | - IP: 192.168.1.12 | | | |
| | | - k3s Agent | | | |
| | | - Apps: | | | |
| | | - Nextcloud | | | |
| | | - Bookstack | | | |
| | +------------|-------------+ | | |
| +--------------|---------------+ | |
| | | |
| | Router/Gateway | |
| | (e.g., 192.168.1.1) | |
| +---------------------+-------------------------------+ |
+---------------------------------------------------------------------------------------------------+

Key Changes
- Physical Machine 1:
  - Now hosts VM3 (`192.168.1.12`) as an additional worker node.
  - Shares the NFS Server (`192.168.1.5`) with all nodes.
  - Traffic between VM1 (master) and VM3 (worker) stays local to the physical machine, reducing latency.
- App Distribution:
  - VM2 (Worker on Physical Machine 2): Runs `Vaultwarden` and `Paperless-ngx`.
  - VM3 (Worker on Physical Machine 1): Runs `Nextcloud` and `Bookstack`.
- Networking:
  - All workers (VM2 and VM3) connect to the master (VM1) via port `6443` (k3s API server) and Flannel VXLAN on UDP `8472`.
  - NFS Traffic: VM3 accesses the NFS server locally (within Physical Machine 1), improving storage performance.
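For reference, the share on the NFS server could be published via `/etc/exports` like this (a sketch; the export options shown are typical assumptions, not taken from the lab):

```
# /etc/exports on 192.168.1.5 — options are assumptions for a RW NFSv4 share
/mnt/nfs 192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing, re-export with `exportfs -ra` on the server.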
Steps to Add VM3 as a Worker
1. Provision VM3:
   - Install Alpine Linux, disable IPv6, and assign static IP `192.168.1.12`.
   - Install dependencies:
     ```sh
     apk update
     apk add openrc nfs-utils cni-plugins containerd
     rc-update add containerd
     rc-service containerd start
     ```
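   The static-IP and IPv6 settings might look like this on Alpine (a sketch; the interface name `eth0` and the gateway are assumptions):

     ```
     # /etc/network/interfaces — static IPv4 for VM3 (eth0 is an assumption)
     auto eth0
     iface eth0 inet static
         address 192.168.1.12
         netmask 255.255.255.0
         gateway 192.168.1.1

     # /etc/sysctl.d/99-disable-ipv6.conf — disable IPv6 (IPv4-only lab)
     net.ipv6.conf.all.disable_ipv6 = 1
     net.ipv6.conf.default.disable_ipv6 = 1
     ```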
2. Join the k3s Cluster:
   On VM1, read the join token from `/var/lib/rancher/k3s/server/node-token`, then on VM3:
   ```sh
   curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<MASTER_NODE_TOKEN> sh -
   ```
3. Verify Node Registration:
On VM1 (Master):
   ```sh
   kubectl get nodes
   # Output should include:
   # NAME   STATUS   ROLES                  AGE   VERSION
   # vm1    Ready    control-plane,master   5h    v1.27.4+k3s1
   # vm2    Ready    <none>                 3h    v1.27.4+k3s1
   # vm3    Ready    <none>                 1m    v1.27.4+k3s1
   ```

Example App Deployment on VM3
Bookstack Deployment:

```yaml
# bookstack.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bookstack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bookstack
  template:
    metadata:
      labels:
        app: bookstack
    spec:
      nodeSelector:
        kubernetes.io/hostname: vm3  # Schedule on VM3
      containers:
        - name: bookstack
          image: linuxserver/bookstack:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nfs-storage
              mountPath: /config
      volumes:
        - name: nfs-storage
          persistentVolumeClaim:
            claimName: nfs-pvc
```

Traffic Flow with VM3
+----------------+     +----------------+     +----------------+     +----------------+     +----------------+
| External User  | --> |  MetalLB VIP   | --> | Nginx Ingress  | --> | Bookstack Pod  | --> |  NFS Storage   |
|  (Bookstack)   |     | (192.168.1.200)|     |     (VM1)      |     |     (VM3)      |     | (192.168.1.5)  |
+----------------+     +----------------+     +----------------+     +----------------+     +----------------+

Benefits of Adding VM3
- Load Balancing: Distribute app workloads across multiple workers.
- Fault Tolerance: If Physical Machine 2 fails, VM3 can take over critical apps.
- Local NFS Access: VM3 accesses NFS storage on the same physical machine, reducing network latency.
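To realize the traffic flow above, the Bookstack Deployment also needs a Service and an Ingress routed through the Nginx controller. A minimal sketch (the hostname `bookstack.home.lab` and the manifest name are assumptions):

```yaml
# bookstack-service-ingress.yaml — sketch; hostname is an assumption
apiVersion: v1
kind: Service
metadata:
  name: bookstack
spec:
  selector:
    app: bookstack        # Matches the Deployment's pod label
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bookstack
spec:
  ingressClassName: nginx
  rules:
    - host: bookstack.home.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: bookstack
                port:
                  number: 80
```

With MetalLB exposing the ingress controller on `192.168.1.200`, pointing DNS (or `/etc/hosts`) for the chosen hostname at that VIP completes the path.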
Security Adjustments
- Update firewall rules on Physical Machine 1 to allow traffic between VM1 and VM3:
  ```sh
  iptables -A INPUT -s 192.168.1.12 -j ACCEPT  # Allow VM3 → VM1
  ```
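The rule above allows all traffic from VM3. A tighter alternative is to open only the ports k3s actually uses (`6443/tcp` for the API server, `8472/udp` for Flannel VXLAN). The sketch below prints the rules for review instead of applying them; run the output as root on VM1's host:

```shell
# Print per-port allow rules for each worker IP; review, then run as root.
for ip in 192.168.1.11 192.168.1.12; do
  echo "iptables -A INPUT -s ${ip} -p tcp --dport 6443 -j ACCEPT"  # k3s API server
  echo "iptables -A INPUT -s ${ip} -p udp --dport 8472 -j ACCEPT"  # Flannel VXLAN
done
```

Printing first makes it easy to audit the rules before touching a live firewall.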
Troubleshooting Tips
- If VM3 fails to join:
  - Check network connectivity: `ping 192.168.1.10` from VM3.
  - Verify the token in `/var/lib/rancher/k3s/server/node-token` on VM1.
- If NFS mounts fail on VM3:
  - Confirm NFS exports: `showmount -e 192.168.1.5`.
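Relatedly, the Bookstack manifest above mounts `nfs-pvc`, which is not defined anywhere in this setup. A minimal static PersistentVolume/PersistentVolumeClaim pair backed by the NFS export, as a sketch (names and capacity are assumptions):

```yaml
# nfs-pv.yaml — sketch; names and storage size are assumptions
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany      # NFS supports shared RW access across nodes
  nfs:
    server: 192.168.1.5
    path: /mnt/nfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # Empty class so the claim binds to the static PV
  resources:
    requests:
      storage: 10Gi
```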
Let me know if you need further refinements or app-specific manifests!