Setting Up a Home Lab Kubernetes Cluster with k3s

This guide outlines how to set up a Kubernetes cluster using k3s with three virtual machines (VMs) spread across two physical machines. The setup uses Alpine Linux on all VMs, disables IPv6, reuses an existing NFS share on Physical Machine 1 for persistent storage, and deploys applications such as Nextcloud, Vaultwarden, Paperless-ngx, and Bookstack.

Overview

  • Physical Machine 1:
    • VM 1: Master node running k3s.
    • VM 2: Worker node running k3s.
  • Physical Machine 2:
    • VM 3: Worker node running k3s.
  • Operating System: Alpine Linux on all VMs.
  • Network: Bridged networking for cross-machine communication.
  • IPv6: Disabled on all VMs.
  • Storage: Existing NFS share on Physical Machine 1, used via PersistentVolumes (PVs).
  • Applications: Nextcloud, Vaultwarden, Paperless-ngx, Bookstack, etc., deployed as pods.

Step-by-Step Setup Instructions

1. Set Up the Virtual Machines

  • Create the VMs:

    • On Physical Machine 1:
      • VM 1 (Master Node): 2 CPUs, 4GB RAM.
      • VM 2 (Worker Node): 1 CPU, 2GB RAM.
    • On Physical Machine 2:
      • VM 3 (Worker Node): 1 CPU, 2GB RAM.
    • Install Alpine Linux on each VM using the latest stable ISO from the Alpine Linux website.
  • Configure Networking:

    • Use bridged networking in your virtualization software (e.g., VirtualBox, VMware) so all VMs are on the same subnet (e.g., 192.168.1.x).
    • Assign static IPs or ensure DHCP consistency (an example static configuration is sketched at the end of this step).
    • Set hostnames:
      • VM 1: k3s-master
      • VM 2: k3s-worker1
      • VM 3: k3s-worker2
    • Verify connectivity, e.g., from VM 2, ping 192.168.1.100 (VM 1).
  • Install Prerequisites:

    • Update and install packages on each VM:

      apk update && apk add curl nfs-utils
      • curl: For k3s installation.
      • nfs-utils: For NFS client support.
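  • Example Static Network Configuration:

    • A minimal sketch for Alpine, assuming the interface is named eth0 and using VM 1’s address; adjust the address, gateway, and hostname for each VM:

      # Contents of /etc/network/interfaces (eth0 is an assumption)
      auto eth0
      iface eth0 inet static
          address 192.168.1.100
          netmask 255.255.255.0
          gateway 192.168.1.1
    • Then set the hostname and apply the settings:

      echo "k3s-master" > /etc/hostname
      hostname -F /etc/hostname
      rc-service networking restart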

2. Disable IPv6 on All VMs

  • Disable IPv6 on each VM:

    • Create a configuration file:

      echo -e "net.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1\nnet.ipv6.conf.lo.disable_ipv6 = 1" > /etc/sysctl.d/99-disable-ipv6.conf
    • Apply changes:

      sysctl -p /etc/sysctl.d/99-disable-ipv6.conf
    • Verify:

      ip a | grep inet6

      (No IPv6 addresses should appear.)
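  • Persist the Setting Across Reboots:

    • On Alpine, files under /etc/sysctl.d are applied at boot by the sysctl OpenRC service; a quick sketch to make sure it is enabled (it usually already is):

      rc-update add sysctl boot
      rc-status boot | grep sysctl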

3. Install k3s on the Master Node (VM 1)

  • On VM 1 (k3s-master), install k3s:

    curl -sfL https://get.k3s.io | sh -
  • This starts the k3s server with SQLite as the default database.

  • Check node status:

    kubectl get nodes
    • Should list k3s-master as Ready.
  • Retrieve the node token:

    cat /var/lib/rancher/k3s/server/node-token
    • Save the token (e.g., K10...::server:...) for worker nodes.
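  • Verify the k3s Service (optional):

    • A sketch to confirm the install: on Alpine, the script registers k3s as an OpenRC service and writes the admin kubeconfig to its default path.

    rc-service k3s status
    ls -l /etc/rancher/k3s/k3s.yaml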

4. Join Worker Nodes to the Cluster (VM 2 and VM 3)

  • On VM 2 (k3s-worker1) and VM 3 (k3s-worker2), join the cluster:

    curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<node-token> sh -
    • Replace <master-ip> with VM 1’s IP (e.g., 192.168.1.100).

    • Replace <node-token> with the token retrieved on VM 1.

  • Verify the cluster from VM 1:

    kubectl get nodes
    • Should list all three nodes as Ready.
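  • Label the Worker Nodes (optional):

    • The workers show no ROLES value in kubectl get nodes by default; a purely cosmetic sketch to label them:

    kubectl label node k3s-worker1 node-role.kubernetes.io/worker=worker
    kubectl label node k3s-worker2 node-role.kubernetes.io/worker=worker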

5. Configure NFS Storage

  • Ensure NFS Accessibility:

    • Confirm the NFS share on Physical Machine 1 (e.g., /path/to/share) is exported:

      /path/to/share 192.168.1.0/24(rw,sync,no_subtree_check)

      (Edit /etc/exports on Physical Machine 1.)

    • Restart NFS:

      exportfs -ra && systemctl restart nfs-server
      • (If Physical Machine 1 does not use systemd, adjust the restart command to its init system; exportfs -ra alone re-reads /etc/exports.)
    • Test mounting (optional):

      mount -t nfs <nfs-server-ip>:/path/to/share /mnt
  • Create PersistentVolumes (PVs):

    • Example for Nextcloud:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: nextcloud-pv
      spec:
        capacity:
          storage: 10Gi
        accessModes:
          - ReadWriteMany
        persistentVolumeReclaimPolicy: Retain
        nfs:
          server: <nfs-server-ip>  # e.g., 192.168.1.10
          path: /path/to/share/nextcloud
    • Create additional PVs for other apps (e.g., vaultwarden-pv, paperless-ngx-pv).

  • Create PersistentVolumeClaims (PVCs):

    • Example for Nextcloud:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: nextcloud-pvc
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        volumeName: nextcloud-pv
    • Apply:

      kubectl apply -f pv-nextcloud.yaml
      kubectl apply -f pvc-nextcloud.yaml
    • Repeat for other apps.
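  • Verify Binding:

    • A quick sketch to confirm each claim bound to its volume (the STATUS column should read Bound):

      kubectl get pv,pvc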

6. Deploy Applications

  • Example: Deploy Nextcloud:

    • Save as nextcloud.yaml:

       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: nextcloud
       spec:
         replicas: 1
         selector:
           matchLabels:
             app: nextcloud
         template:
           metadata:
             labels:
               app: nextcloud
           spec:
             containers:
             - name: nextcloud
               image: nextcloud:latest
               ports:
               - containerPort: 80
               volumeMounts:
               - name: data
                 mountPath: /var/www/html
             volumes:
             - name: data
               persistentVolumeClaim:
                 claimName: nextcloud-pvc
       ---
       apiVersion: v1
       kind: Service
       metadata:
         name: nextcloud-service
       spec:
         selector:
           app: nextcloud
         ports:
         - port: 80
           targetPort: 80
       ---
       apiVersion: networking.k8s.io/v1
       kind: Ingress
       metadata:
         name: nextcloud-ingress
       spec:
         rules:
         - host: nextcloud.mylab.local
           http:
             paths:
             - path: /
               pathType: Prefix
               backend:
                 service:
                   name: nextcloud-service
                   port:
                     number: 80
  • Apply:

    kubectl apply -f nextcloud.yaml
  • Repeat for Other Applications:

    • Use similar manifests for:

      • Vaultwarden: vaultwarden/server:latest

      • Paperless-ngx: ghcr.io/paperless-ngx/paperless-ngx:latest

      • Bookstack: lscr.io/linuxserver/bookstack:latest

    • Adjust images, ports, and PVCs as needed.

    • Add database pods (e.g., MariaDB) if required.
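    • Example database manifest (sketch): a minimal MariaDB Deployment and Service that Nextcloud or Bookstack could point to. The plain-text password, database name, and the mariadb-pvc claim are illustrative assumptions; in practice, store the password in a Secret and create the PVC like the ones above.

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: mariadb
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: mariadb
        template:
          metadata:
            labels:
              app: mariadb
          spec:
            containers:
            - name: mariadb
              image: mariadb:11
              ports:
              - containerPort: 3306
              env:
              - name: MARIADB_ROOT_PASSWORD
                value: changeme            # use a Secret instead of a literal value
              - name: MARIADB_DATABASE
                value: nextcloud
              volumeMounts:
              - name: db-data
                mountPath: /var/lib/mysql
            volumes:
            - name: db-data
              persistentVolumeClaim:
                claimName: mariadb-pvc     # assumed PVC, created like the others above
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: mariadb
      spec:
        selector:
          app: mariadb
        ports:
        - port: 3306
          targetPort: 3306
    • Apps would then reach the database at mariadb:3306 inside the cluster. Note that running a database on the NFS share can be slow; k3s also ships a local-path StorageClass that may be a better fit for database volumes.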

7. Access the Applications

  • Ingress Configuration:

    • k3s ships Traefik as its default ingress controller, listening on ports 80/443.

    • Edit /etc/hosts on your local machine:

      <vm1-ip> nextcloud.mylab.local vaultwarden.mylab.local paperless-ngx.mylab.local bookstack.mylab.local
      • Replace <vm1-ip> with VM 1’s IP (e.g., 192.168.1.100).
  • Test Access:
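
    • Open http://nextcloud.mylab.local (and the other app hostnames) in a browser; Traefik routes each hostname to the matching Service. A quick command-line sketch as an alternative (assumes VM 1 is 192.168.1.100):

      curl -H "Host: nextcloud.mylab.local" http://192.168.1.100/
      • Any response from Nextcloud (rather than a Traefik 404) confirms the hostname is routed to the Service.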


Additional Considerations

  • Resource Allocation:

    • Monitor and adjust VM resources as needed. k3s is lightweight, but apps like Nextcloud may need more.
  • Security:

    • Use HTTPS with self-signed certificates via Traefik for added security (optional); a minimal sketch is shown after this list.
  • Database Needs:

    • Deploy MariaDB/PostgreSQL pods or use SQLite for simplicity.
  • NFS Performance:

    • Pods on VM 3 (Physical Machine 2) reach the NFS share over the network between the two physical machines, so expect slightly higher latency than on Physical Machine 1’s VMs; this is acceptable for a home lab.
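  • HTTPS Sketch (for the Security bullet above):

    • Assuming a self-signed certificate pair tls.crt/tls.key already exists, it can be stored as a TLS secret (the name nextcloud-tls is an assumption):

      kubectl create secret tls nextcloud-tls --cert=tls.crt --key=tls.key
    • The Ingress from Step 6 would then reference it by adding a tls block:

      # Added under spec: in the nextcloud-ingress manifest from Step 6
      tls:
      - hosts:
        - nextcloud.mylab.local
        secretName: nextcloud-tls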

Final Cluster Settings Summary

  • Nodes:

    • Master: VM 1 (Physical Machine 1).

    • Workers: VM 2 (Physical Machine 1), VM 3 (Physical Machine 2).

  • OS: Alpine Linux, IPv6 disabled.

  • Networking: Bridged, same subnet, NFS accessible.

  • k3s Installation: Default k3s server on VM 1 (embedded SQLite datastore); workers joined with K3S_URL and K3S_TOKEN.

  • Storage: NFS-based PVs (e.g., /path/to/share/nextcloud).

  • Applications: Deployed with PVCs and Traefik ingress.

Enjoy your k3s home lab setup!

