Here's a step-by-step guide to set up your k3s home lab:
1. Host Machine Preparation
Physical Machines:
- Ensure virtualization is enabled in the BIOS/UEFI (Intel VT-x/AMD-V)
- Install Alpine Linux on both physical machines
- Configure bridge networking for VMs (a minimal sketch follows this list)
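A minimal /etc/network/interfaces sketch for the host bridge, assuming the host NIC is eth0 and the bridge package is installed (apk add bridge); adjust interface names to your hardware:
# /etc/network/interfaces on each physical host
auto br0
iface br0 inet dhcp
    bridge-ports eth0
    bridge-stp 0
Attach each VM's virtual NIC to br0 so the VMs receive addresses directly on your LAN.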
2. Virtual Machine Setup (Alpine Linux)
Common Configuration for All VMs:
# Disable IPv6
echo -e "net.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
sysctl -p
# Add to /etc/modprobe.d/aliases.conf
echo "alias net-pf-10 off" >> /etc/modprobe.d/aliases.conf
echo "alias ipv6 off" >> /etc/modprobe.d/aliases.conf
# Install dependencies
apk add curl nfs-utils cni-plugins openrc iptables
# Enable cgroups
echo "cgroup /sys/fs/cgroup cgroup defaults 0 0" >> /etc/fstab
echo "cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory" >> /etc/modules
reboot3. k3s Cluster Setup
Master Node (VM1):
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable servicelb --node-ip <VM1_IP> --flannel-iface eth0" sh -
# Get node token
cat /var/lib/rancher/k3s/server/node-token
Worker Nodes (VM2 & VM3):
curl -sfL https://get.k3s.io | K3S_URL=https://<VM1_IP>:6443 \
K3S_TOKEN=<NODE_TOKEN> \
INSTALL_K3S_EXEC="--node-ip <VM_NODE_IP> --flannel-iface eth0" sh -4. NFS Configuration
4. NFS Configuration
On Physical Machine 1 (NFS Server):
# Edit exports file
echo "/mnt/nfs-share <VM1_IP>/24(rw,sync,no_subtree_check)" >> /etc/exports
echo "/mnt/nfs-share <VM3_IP>/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -rav
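The share directory must exist and the NFS server must be running for the export to take effect; on Alpine that looks roughly like this (the nfs OpenRC service ships with nfs-utils; the name may vary by release):
mkdir -p /mnt/nfs-share
rc-update add nfs
rc-service nfs start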
On All Cluster Nodes:
apk add nfs-utils
mkdir -p /mnt/k8s-nfs
echo "<NFS_SERVER_IP>:/mnt/nfs-share /mnt/k8s-nfs nfs defaults 0 0" >> /etc/fstab
mount -a
5. Storage Class Configuration
Create nfs-storage.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-provisioner
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      containers:
      - name: nfs-provisioner
        image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: <NFS_SERVER_IP>
        - name: NFS_PATH
          value: /mnt/nfs-share
      volumes:
      - name: nfs-client-root
        nfs:
          server: <NFS_SERVER_IP>
          path: /mnt/nfs-share
Apply with kubectl apply -f nfs-storage.yaml
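Note that the provisioner also needs a ServiceAccount bound to the RBAC rules from the nfs-subdir-external-provisioner repository (deploy/rbac.yaml), referenced via serviceAccountName in the Deployment above; without it, PVCs will sit in Pending. A quick sanity check once it is running:
kubectl get pods -l app=nfs-provisioner
kubectl get storageclass   # the "nfs" class should be listed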
6. Application Deployment Example (Nextcloud)
nextcloud.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-pvc
spec:
  storageClassName: nfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
      - name: nextcloud
        image: nextcloud:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nextcloud-data
          mountPath: /var/www/html
      volumes:
      - name: nextcloud-data
        persistentVolumeClaim:
          claimName: nextcloud-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-service
spec:
  selector:
    app: nextcloud
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
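Apply with kubectl apply -f nextcloud.yaml. The Service is ClusterIP, so it is only reachable inside the cluster; for a quick test from a machine with kubectl access, port-forward it (service name as defined above):
kubectl port-forward svc/nextcloud-service 8080:80
# then browse to http://localhost:8080
For LAN access, switch the Service to type: NodePort or request a MetalLB LoadBalancer IP (section 7).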
7. Networking Considerations
- Ensure firewall rules allow:
- 6443/tcp (Kubernetes API)
- 8472/udp (Flannel VXLAN)
- 10250/tcp (kubelet metrics)
- 2379-2380/tcp (etcd, only for HA setups with embedded etcd)
- Configure MetalLB for LoadBalancer services if needed (sketch below)
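A minimal MetalLB Layer 2 sketch: the address range below is an assumption (pick unused addresses on your LAN), and the pinned manifest version is only an example, so check the MetalLB docs for the current one:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # assumed free range on the home LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - homelab-pool
EOF
This pairs with the --disable servicelb flag from step 3, which keeps k3s's built-in load balancer from competing with MetalLB.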
8. Monitoring & Maintenance
Install kube-prometheus-stack:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack
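With the release named prometheus, the Grafana service comes up as prometheus-grafana; a quick way in (the admin password lives in a secret created by the chart):
kubectl get secret prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 -d
kubectl port-forward svc/prometheus-grafana 3000:80
# then browse to http://localhost:3000 and log in as admin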
Backup Strategy:
- Regular snapshots of the k3s datastore (example below)
- Velero for application backups
- Cron jobs for NFS data backups
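If you run k3s with embedded etcd, snapshots are built in; a sketch (on the default single-server SQLite datastore, back up /var/lib/rancher/k3s/server/db instead):
# Run on the master; snapshots land in /var/lib/rancher/k3s/server/db/snapshots/ by default
k3s etcd-snapshot save --name manual-backup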
9. Security Recommendations
- Install Rancher's CIS Benchmark operator:
kubectl apply -f https://github.com/rancher/cis-operator/releases/download/v1.0.5/cis-operator.yaml
- Enforce Pod Security Standards via Pod Security Admission (PodSecurityPolicies were removed in Kubernetes v1.25)
- Use Network Policies (k3s's embedded policy controller enforces them even with the default flannel CNI; see the sketch below). For richer policy features, replace flannel with Calico, which requires installing k3s with --flannel-backend=none:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
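A minimal default-deny-ingress sketch that works with the embedded controller (the default namespace here is an assumption; apply per namespace as needed):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF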
10. Troubleshooting Tips
- Check node status:
kubectl get nodes -o wide
- View pod logs:
kubectl logs -f <pod-name>
- Access worker nodes via SSH and check the agent log (Alpine runs OpenRC rather than systemd, so journalctl is unavailable; the install script logs to a file):
tail -f /var/log/k3s-agent.log
- Verify NFS exports from a client:
showmount -e <nfs-server-ip>
This setup provides a robust foundation for your self-hosted applications. Adjust resource limits and storage sizes based on your specific needs. Consider adding Longhorn for distributed block storage if you need redundancy across physical machines.