Answer by DeepSeek

Here’s a recommended setup for a k3s-based Kubernetes cluster across two VMs on separate physical machines, optimized for simplicity, reliability, and scalability in a home lab environment:


1. Pre-Installation Requirements

VM Specifications

  • OS: Ubuntu Server 22.04 LTS (or Alpine Linux if preferred).
  • Resource Allocation:
    • Control Plane (Master) Node: 2 vCPUs, 4GB RAM, 20GB Disk.
    • Worker Node: 2 vCPUs, 2GB RAM, 20GB Disk.
  • Networking:
    • Static IP addresses for both VMs (e.g., 192.168.1.10 for master, 192.168.1.11 for worker).
    • Ensure the required ports are open between the VMs (see the ufw sketch after this list):
      • Control Plane (inbound): 6443/TCP (Kubernetes API), 8472/UDP (Flannel VXLAN), 10250/TCP (kubelet metrics).
      • Worker (inbound): 8472/UDP and 10250/TCP; the worker only needs outbound access to the master on 6443. Ports 2379-2380/TCP are only used between server nodes if you later add embedded etcd for HA.
    • Disable swap: sudo swapoff -a && sudo sed -i '/swap/d' /etc/fstab.
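
If you use ufw on Ubuntu, a minimal firewall sketch (assuming the 192.168.1.0/24 addresses above and the default Flannel VXLAN backend) could look like this; run it on each node and tighten the source range to taste:

# Allow cluster traffic from the lab subnet
sudo ufw allow from 192.168.1.0/24 to any port 6443 proto tcp    # Kubernetes API (master)
sudo ufw allow from 192.168.1.0/24 to any port 10250 proto tcp   # kubelet metrics
sudo ufw allow from 192.168.1.0/24 to any port 8472 proto udp    # Flannel VXLAN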

2. Cluster Setup

Install k3s on the Control Plane Node

Run on the master VM:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable servicelb --disable traefik" sh -s -
  • Flags:
    • --disable servicelb: Disables the built-in ServiceLB load balancer (install MetalLB later if needed).
    • --disable traefik: Optional; skip the bundled Traefik if you prefer a different ingress controller (e.g., ingress-nginx or Istio). The same options can also be set via a config file, as sketched below.
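
Alternatively, k3s reads options from /etc/rancher/k3s/config.yaml at startup. A minimal sketch equivalent to the flags above (the file contents here are an assumption, not part of the original answer):

sudo mkdir -p /etc/rancher/k3s
sudo tee /etc/rancher/k3s/config.yaml >/dev/null <<'EOF'
# Equivalent to --disable servicelb --disable traefik
disable:
  - servicelb
  - traefik
EOF
curl -sfL https://get.k3s.io | sh -s -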

Retrieve the Join Token

sudo cat /var/lib/rancher/k3s/server/node-token

Copy the token (e.g., K10abc12345::abcdef...).

Join the Worker Node

Run on the worker VM:

curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER_IP>:6443 K3S_TOKEN=<JOIN_TOKEN> sh -

Replace <MASTER_IP> and <JOIN_TOKEN> with your master node’s IP and token.
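
As a quick sanity check on the worker (assuming the default systemd-based install, where the agent runs as the k3s-agent unit):

sudo systemctl status k3s-agent --no-pager
sudo journalctl -u k3s-agent -f    # follow the logs if the node never appears on the master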


3. Post-Installation Steps

Verify the Cluster

On the master node:

sudo kubectl get nodes
# Expected output:
# NAME        STATUS   ROLES                  AGE   VERSION
# master-vm   Ready    control-plane,master   5m    v1.27.4+k3s1
# worker-vm   Ready    <none>                 2m    v1.27.4+k3s1
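
It can also help to confirm the bundled system components (CoreDNS, metrics-server, local-path provisioner) are running:

sudo kubectl get pods -n kube-system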

Configure kubectl Access

On the master, export the kubeconfig and point it at the master's reachable IP (the file refers to 127.0.0.1 by default):

mkdir -p ~/.kube
sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config && chmod 600 ~/.kube/config
sed -i "s/127.0.0.1/<MASTER_IP>/g" ~/.kube/config

You can then copy ~/.kube/config to your workstation (e.g., with scp) to run kubectl remotely.

4. Networking & Storage

CNI (Container Network Interface)

  • k3s uses Flannel by default and includes a basic NetworkPolicy controller. For advanced use cases (e.g., richer Calico policies), you can replace Flannel with Calico, but the server must first be installed with the built-in CNI and policy controller disabled (see the sketch below) before applying the Calico manifest:
    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
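
A sketch of the server install needed before swapping in Calico; --flannel-backend=none and --disable-network-policy turn off the built-in CNI and policy controller (the other disables mirror section 2):

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy --disable servicelb --disable traefik" sh -s -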

Persistent Storage

  • Deploy Longhorn (lightweight distributed replicated storage; node prerequisites are sketched below):
    helm repo add longhorn https://charts.longhorn.io
    helm install longhorn longhorn/longhorn -n longhorn-system --create-namespace
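
Note that k3s already ships the local-path provisioner as the default StorageClass; Longhorn adds replicated volumes across both nodes. It expects open-iscsi on every node, and you can optionally make it the default StorageClass. A sketch of those extra steps:

# On each Ubuntu node (Longhorn attaches volumes over iSCSI)
sudo apt-get install -y open-iscsi
# Optionally mark Longhorn as the default StorageClass
kubectl patch storageclass longhorn \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'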

5. Optional Add-Ons

MetalLB (Load Balancer)

For exposing services externally in a bare-metal/home-lab environment:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml

Since v0.13, MetalLB is configured with IPAddressPool and L2Advertisement custom resources rather than a ConfigMap; define a pool such as 192.168.1.100-192.168.1.150 (see the sketch below).
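
A minimal L2-mode sketch using the pool range above; the resource names (homelab-pool, homelab-l2) are examples of my own, not from the original answer:

kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.100-192.168.1.150
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
EOF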

Monitoring & Logging

  • Prometheus + Grafana (kube-prometheus-stack chart):
    helm install prometheus prometheus-community/kube-prometheus-stack
  • Loki + Promtail (logs):
    helm install loki grafana/loki-stack
  • Both commands assume the prometheus-community and grafana Helm repositories have already been added (see the sketch below).
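
A sketch of the Helm repository setup assumed by the commands above:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update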

6. High Availability (Optional)

For a fault-tolerant control plane you need either three or more server nodes (embedded etcd) or an external datastore:

  1. Embedded etcd: add --cluster-init to the first server's install command, then join additional servers (an odd number in total).
  2. External datastore: alternatively, point every server at MySQL, PostgreSQL, or external etcd with --datastore-endpoint (no --cluster-init needed in this case).
  3. Join additional control-plane nodes using the server subcommand with --server https://<MASTER_IP>:6443 --token <JOIN_TOKEN> (see the sketch below).
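
A sketch of the embedded-etcd path, reusing the join token from section 2 (add your --disable flags as needed):

# First server: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Additional servers: join the existing control plane
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://<MASTER_IP>:6443 \
  --token <JOIN_TOKEN>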

7. Security Best Practices

  • Restrict SSH Access: Use key-based authentication.
  • Enable Firewall: Allow only necessary ports (e.g., ufw on Ubuntu).
  • Pod Security Standards: PodSecurityPolicy was removed in Kubernetes 1.25, so on this k3s version use the built-in Pod Security Admission controller by labeling namespaces (see the sketch below).
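
A minimal sketch of enforcing the baseline profile on a namespace via Pod Security Admission (the namespace and levels are examples only):

kubectl label namespace default \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted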

Troubleshooting Tips

  • Check k3s logs (the unit is k3s.service on the server and k3s-agent.service on workers):
    journalctl -u k3s.service
  • Validate node connectivity (from the worker, confirm it can reach the API server on the master):
    ping <MASTER_IP> && nc -zv <MASTER_IP> 6443

Summary

  • Control Plane: Handles cluster management and scheduling.
  • Worker Node: Runs workloads (pods/deployments).
  • Networking: Flannel/MetalLB for internal/external traffic.
  • Storage: Longhorn for persistent volumes.

This setup provides a lightweight, production-like Kubernetes environment for testing apps, CI/CD pipelines, or home lab experimentation. Adjust resource allocations and add-ons to match your workloads.

