Gemini

Absolutely! Let's break down how to set up your K3s home lab with Alpine Linux, IPv6 disabled, and persistent storage via NFS, while accommodating your application needs.

1. Virtual Machine Setup (Alpine Linux):

  • Download Alpine Linux: Get the latest stable ISO from the Alpine Linux website. Choose the "standard" or "virtual" image.
  • Create VMs:
    • Create three VMs: k3s-master, k3s-worker1, k3s-worker2.
    • k3s-master and k3s-worker1 on physical machine 1.
    • k3s-worker2 on physical machine 2.
    • Allocate sufficient resources:
      • k3s-master: 2+ vCPUs, 4+ GB RAM, 20+ GB storage.
      • k3s-worker1 & k3s-worker2: 2+ vCPUs, 4+ GB RAM, 20+ GB storage.
    • Use bridged networking to ensure VMs can communicate across physical machines.
  • Install Alpine Linux:
    • Boot each VM from the Alpine ISO.
    • Follow the Alpine setup instructions:
      • Set the root password.
      • Configure networking (static IP addresses within your home network).
      • Install essential packages (e.g., openssh, sudo).
      • Enable ssh.
      • Install qemu-guest-agent so the hypervisor can report VM status correctly.
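
  The package and service steps above can be sketched as follows (run as root on each freshly installed VM; service names assume Alpine's standard OpenRC scripts):

      # Install the essentials
      apk update
      apk add openssh sudo qemu-guest-agent

      # Start sshd and the guest agent now, and enable them at boot
      rc-service sshd start
      rc-update add sshd default
      rc-service qemu-guest-agent start
      rc-update add qemu-guest-agent default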

2. Disable IPv6:

  • Edit /etc/sysctl.conf on each VM:
      echo "net.ipv6.conf.all.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
      echo "net.ipv6.conf.default.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
      echo "net.ipv6.conf.lo.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
      sudo sysctl -p
  • To disable IPv6 at boot as well, add ipv6.disable=1 to the kernel command line. Alpine's default bootloader is extlinux, not GRUB: append ipv6.disable=1 to the default_kernel_opts line in /etc/update-extlinux.conf, run sudo update-extlinux, and reboot. (If your VM happens to use GRUB, add the flag to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and run sudo update-grub instead.)
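
  A quick way to verify IPv6 is actually off after applying the settings above:

      # Should print 1 on each VM
      sysctl -n net.ipv6.conf.all.disable_ipv6

      # After a reboot, no interface should list an inet6 address
      ip -6 addr show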

3. Install and Configure K3s:

  • On k3s-master:
      curl -sfL https://get.k3s.io | sh -s - --disable traefik --disable servicelb --disable local-storage
      sudo rc-update add k3s default
      sudo rc-service k3s start
      mkdir -p ~/.kube
      sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
      chmod 600 ~/.kube/config
      export KUBECONFIG=~/.kube/config
    • Alpine uses OpenRC rather than systemd, so the K3s installer registers an OpenRC service; manage it with rc-service and rc-update, not systemctl.
    • --disable traefik: We'll use a more flexible ingress controller (e.g., Nginx Ingress).
    • --disable servicelb: We will install MetalLB later if load balancers are needed.
    • --disable local-storage: We want to use our NFS server for persistent storage.
  • On k3s-worker1 and k3s-worker2:
    • Get the join token from k3s-master: sudo cat /var/lib/rancher/k3s/server/node-token
    • Install K3s:
        curl -sfL https://get.k3s.io | K3S_URL="https://<k3s-master-ip>:6443" K3S_TOKEN="<your-node-token>" sh -
        sudo rc-update add k3s-agent default
        sudo rc-service k3s-agent start
    • The --disable flags are server-only options, so they are omitted on the agents. As above, use OpenRC commands rather than systemctl on Alpine.
    • Replace <k3s-master-ip> and <your-node-token> with the appropriate values.
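
  Once both agents have joined, a quick sanity check from the master confirms the cluster is healthy:

      # On k3s-master: all three nodes should appear with STATUS Ready
      kubectl get nodes -o wide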

4. NFS Setup (Physical Machine 1):

  • Ensure NFS server is installed and configured on physical machine 1.
  • Create a shared directory (e.g., /mnt/nfs-share).
  • Export the directory in /etc/exports (for better security, restrict the client range to your LAN subnet instead of *):
      /mnt/nfs-share *(rw,sync,no_subtree_check)
  • Re-read the exports and restart the NFS server (assuming a Debian/Ubuntu host): sudo exportfs -ra && sudo systemctl restart nfs-kernel-server
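
  The server-side steps above can be sketched as follows (this assumes physical machine 1 runs a Debian-family distribution; adjust the package and service names for your distro):

      # Install and configure the NFS server on physical machine 1
      sudo apt install nfs-kernel-server
      sudo mkdir -p /mnt/nfs-share
      sudo chown nobody:nogroup /mnt/nfs-share
      echo '/mnt/nfs-share *(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
      sudo exportfs -ra   # re-read /etc/exports
      sudo exportfs -v    # verify the share is exported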

5. NFS Client Setup (All VMs):

  • Install NFS client packages on all VMs: sudo apk add nfs-utils
  • Create a mount point for testing connectivity (optional for Kubernetes itself, since kubelet mounts NFS volumes directly): sudo mkdir -p /mnt/k8s-nfs
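
  Before wiring NFS into Kubernetes, it's worth doing a one-off test mount from a VM to confirm the export is reachable (replace <physical-machine-1-ip> as before):

      sudo mount -t nfs <physical-machine-1-ip>:/mnt/nfs-share /mnt/k8s-nfs
      touch /mnt/k8s-nfs/test-file && ls -l /mnt/k8s-nfs
      sudo umount /mnt/k8s-nfs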

6. Kubernetes NFS Persistent Volumes:

  • Create a PersistentVolume (PV) definition:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: nfs-pv
      spec:
        storageClassName: "" # empty string forces static binding (no dynamic provisioner)
        capacity:
          storage: 10Gi # Adjust as needed
        accessModes:
          - ReadWriteMany
        nfs:
          server: <physical-machine-1-ip>
          path: "/mnt/nfs-share"
    • Replace <physical-machine-1-ip> with the IP address of your physical machine 1.
  • Create a PersistentVolumeClaim (PVC) definition:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: nfs-pvc
      spec:
        storageClassName: "" # must match the PV's storageClassName
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 10Gi # Must not exceed the PV's capacity
  • Apply these definitions: kubectl apply -f pv.yaml and kubectl apply -f pvc.yaml
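
  After applying both manifests, verify that the claim bound to the volume:

      kubectl get pv nfs-pv
      kubectl get pvc nfs-pvc
      # Both should show STATUS Bound once the claim matches the volume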

7. Ingress Controller (Nginx Ingress):

  • Install Nginx Ingress Controller:
      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.0/deploy/static/provider/baremetal/deploy.yaml
  • If you need load balancing, install MetalLB.
    • Install MetalLB : kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml
    • Configure MetalLB: since v0.13 this is done with IPAddressPool and L2Advertisement custom resources rather than a ConfigMap.
    • Configure your DNS to point at the MetalLB-assigned IP addresses.
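
  A minimal MetalLB layer-2 configuration looks like the following. The pool name and address range are assumptions — pick a range of unused IPs on your LAN outside your DHCP scope:

      kubectl apply -f - <<'EOF'
      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: homelab-pool
        namespace: metallb-system
      spec:
        addresses:
          - 192.168.1.240-192.168.1.250
      EOF
      kubectl apply -f - <<'EOF'
      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: homelab-l2
        namespace: metallb-system
      spec:
        ipAddressPools:
          - homelab-pool
      EOF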

8. Deploying Applications:

  • Use Helm charts for easy application deployment (e.g., Nextcloud, Vaultwarden, Paperless-ngx, Bookstack).
  • Example (Nextcloud):
      helm repo add nextcloud https://nextcloud.github.io/helm/
      helm install nextcloud nextcloud/nextcloud --set persistence.enabled=true --set persistence.existingClaim=nfs-pvc
  • Configure ingress rules for each application to expose them via your domain.
  • For each application, reference the nfs-pvc PersistentVolumeClaim in the chart's persistence settings. Note that all applications sharing one claim share one directory; in practice you will usually create a separate PV/PVC pair per application.
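
  An ingress rule for the Nextcloud example might look like the following sketch. The hostname nextcloud.home.example is a placeholder for your own domain, and the service name and port assume the chart's defaults:

      kubectl apply -f - <<'EOF'
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: nextcloud
        annotations:
          nginx.ingress.kubernetes.io/proxy-body-size: "0"  # allow large uploads
      spec:
        ingressClassName: nginx
        rules:
          - host: nextcloud.home.example
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: nextcloud
                      port:
                        number: 8080
      EOF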

Important Considerations:

  • Security: Secure your K3s cluster and applications with appropriate network policies, RBAC, and secrets management.
  • Backups: Regularly back up your NFS share and Kubernetes configuration.
  • Monitoring: Set up monitoring tools (e.g., Prometheus, Grafana) to track cluster health and application performance.
  • DNS: Configure your home network's DNS server or use a service like Cloudflare DNS to point domain names to your ingress controller.
  • Updates: Keep your VMs, K3s, and applications up to date.
  • Resource allocation: Monitor your resource usage, and adjust the VM resources accordingly.
  • Firewall: Configure your firewall to allow the required traffic (e.g., 6443/tcp for the K3s API, 10250/tcp for the kubelet, 8472/udp for Flannel VXLAN, 2049/tcp for NFS).

This comprehensive guide should help you set up your K3s home lab effectively. Remember to adapt the configurations to your specific needs and environment.


This page was last edited on 2025-03-08 03:21

Powered by Wiki|Docs
