Grok3 Think

To set up your home lab Kubernetes cluster using k3s with the specifications you’ve provided, here’s a comprehensive guide to configure it effectively. Your setup spans three virtual machines (VMs) across two physical machines, all running Alpine Linux with IPv6 disabled, uses an existing NFS server for storage, disables Traefik in favor of another Ingress controller, and hosts applications such as Nextcloud, Vaultwarden, Paperless-ngx, and Bookstack. Below is the recommended configuration:


Cluster Overview

  • Physical Machines:

    • Physical Machine 1: 192.168.123.100 (hosts NFS server and two VMs)

    • Physical Machine 2: 192.168.123.230 (hosts one VM)

  • Virtual Machines:

    • VM 1: 192.168.123.221 (Master Node) on Physical Machine 1

    • VM 2: 192.168.123.222 (Worker Node) on Physical Machine 1

    • VM 3: 192.168.123.223 (Worker Node) on Physical Machine 2

  • Operating System: Alpine Linux

  • Kubernetes Distribution: k3s (lightweight Kubernetes)

  • Networking: IPv6 disabled, all nodes on subnet 192.168.123.0/24

  • Storage: NFS server on 192.168.123.100

  • Applications: Nextcloud, Vaultwarden, Paperless-ngx, Bookstack, etc.

  • Ingress: Traefik disabled, Nginx Ingress Controller recommended


Step-by-Step Configuration

  1. Prepare All VMs (Master and Workers)

Since you’re using Alpine Linux and need to disable IPv6 and support NFS, perform these steps on all three VMs (192.168.123.221, 192.168.123.222, 192.168.123.223):

  • Install Alpine Linux:

    • Set up each VM with Alpine Linux as the base OS.
  • Disable IPv6:

    • Edit /etc/sysctl.conf to add the following lines:

      ```
      net.ipv6.conf.all.disable_ipv6 = 1
      net.ipv6.conf.default.disable_ipv6 = 1
      ```
    • Apply the changes:

      ```bash
      sysctl -p
      ```
    • This ensures the host operates in an IPv4-only mode, which is compatible with k3s when IPv6 is not explicitly enabled.

  • Install NFS Client Support:

    • Install the nfs-utils package to enable NFS mounting capabilities (required for Kubernetes to use NFS Persistent Volumes):

      ```bash
      apk add nfs-utils
      ```
  • Verify Network Connectivity:

    • Ensure all VMs can communicate with each other and the NFS server (192.168.123.100) on the 192.168.123.0/24 subnet. Test with ping between IPs.
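
Before installing k3s, a quick sanity check on each VM can save debugging later. This is a minimal sketch; it assumes the NFS export paths defined in step 4 (e.g., /nfs/nextcloud) already exist on the server:

```bash
# Confirm IPv6 is actually off on this host
sysctl net.ipv6.conf.all.disable_ipv6    # expect "... = 1"

# Test-mount one NFS export read-write (path is an assumption; match your exports)
mkdir -p /mnt/nfstest
mount -t nfs -o nfsvers=4 192.168.123.100:/nfs/nextcloud /mnt/nfstest
touch /mnt/nfstest/.write-test && rm /mnt/nfstest/.write-test
umount /mnt/nfstest
```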

  2. Set Up the Master Node (VM 1: 192.168.123.221)

The master node will run the k3s server, managing the cluster.

  • Install k3s Server:

    • Run the following command to install k3s with Traefik disabled and the correct IP bindings:

      ```bash
      curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--bind-address=192.168.123.221 --node-ip=192.168.123.221 --disable=traefik" sh -
      ```
    • --bind-address: Sets the API server to listen on the master’s IP.

    • --node-ip: Ensures the node registers with this IP.

    • --disable=traefik: Prevents the default Traefik Ingress Controller from being installed.

  • Retrieve the Node Token:

    • After installation, obtain the token needed to join worker nodes:

      ```bash
      cat /var/lib/rancher/k3s/server/node-token
      ```
    • Save this token; you will need it when joining the worker nodes.

  • Verify Installation:

    • Check the node status:

      ```bash
      kubectl get nodes
      ```
    • You should see 192.168.123.221 listed as Ready.
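
If you prefer to run kubectl from your workstation instead of the master, k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml. A minimal sketch, assuming SSH access as root and a local ~/.kube directory:

```bash
# Fetch the kubeconfig and point it at the master's IP instead of 127.0.0.1
scp root@192.168.123.221:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s-homelab.yaml
sed -i 's/127.0.0.1/192.168.123.221/' ~/.kube/k3s-homelab.yaml
export KUBECONFIG=~/.kube/k3s-homelab.yaml
kubectl get nodes
```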


  3. Set Up the Worker Nodes

Join the worker nodes to the cluster using the master’s IP and the token.

  • Worker Node 1 (VM 2: 192.168.123.222):

    ```bash
    curl -sfL https://get.k3s.io | K3S_URL=https://192.168.123.221:6443 K3S_TOKEN=<token> INSTALL_K3S_EXEC="--node-ip=192.168.123.222" sh -
    ```
  • Worker Node 2 (VM 3: 192.168.123.223):

    ```bash
    curl -sfL https://get.k3s.io | K3S_URL=https://192.168.123.221:6443 K3S_TOKEN=<token> INSTALL_K3S_EXEC="--node-ip=192.168.123.223" sh -
    ```
  • Parameters:

    • K3S_URL: Points the agent to the master’s API server.

    • K3S_TOKEN: The token from the master node.

    • --node-ip: Ensures each worker registers with its specific IP.

  • Verify Cluster:

    • On the master node, run:

      ```bash
      kubectl get nodes
      ```
    • Confirm all three nodes (192.168.123.221, 192.168.123.222, 192.168.123.223) are listed as Ready.
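
The workers will show <none> in the ROLES column, which is normal for k3s agents. If you want readable roles, an optional cosmetic step (node names are placeholders; use the names that kubectl get nodes prints):

```bash
# Purely cosmetic: give the agents a "worker" role label
kubectl label node <worker-1-name> node-role.kubernetes.io/worker=worker
kubectl label node <worker-2-name> node-role.kubernetes.io/worker=worker
```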


  4. Configure NFS Storage

Use the existing NFS server on Physical Machine 1 (192.168.123.100) for persistent storage.

  • NFS Server Configuration:

    • On 192.168.123.100, ensure the NFS shares are exported. Edit /etc/exports:

      ```
      /nfs/nextcloud 192.168.123.0/24(rw,sync,no_subtree_check)
      /nfs/vaultwarden 192.168.123.0/24(rw,sync,no_subtree_check)
      /nfs/paperless 192.168.123.0/24(rw,sync,no_subtree_check)
      /nfs/bookstack 192.168.123.0/24(rw,sync,no_subtree_check)
      ```
  • Re-export the shares and, if needed, restart the NFS service:

      ```bash
      exportfs -ra
      # On a systemd host; on OpenRC (e.g., Alpine) use `rc-service nfs restart` instead
      systemctl restart nfs-server
      ```
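
    • Verify the exports are visible from one of the VMs (a quick check; showmount ships with the nfs-utils package installed in step 1):

      ```bash
      showmount -e 192.168.123.100
      ```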
  • Create Persistent Volumes (PVs):

    • For each application, define a PV. Example for Nextcloud:

      ```yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: nextcloud-pv
      spec:
        capacity:
          storage: 10Gi
        accessModes:
          - ReadWriteMany
        nfs:
          server: 192.168.123.100
          path: /nfs/nextcloud
      ```
    • Apply it:

      ```bash
      kubectl apply -f nextcloud-pv.yaml
      ```
    • Repeat for other applications (e.g., vaultwarden-pv, paperless-pv, bookstack-pv) with their respective NFS paths.

  • Create Persistent Volume Claims (PVCs):

    • Example for Nextcloud:

      ```yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: nextcloud-pvc
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: ""  # bind to the static NFS PV instead of k3s's default local-path class
        resources:
          requests:
            storage: 10Gi
      ```
    • Apply it:

      ```bash
      kubectl apply -f nextcloud-pvc.yaml
      ```
    • Create similar PVCs for other applications.
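
Before deploying the applications, confirm each claim bound to its intended volume:

```bash
kubectl get pv,pvc
# Every PVC should report STATUS "Bound" and reference the matching *-pv
```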


  5. Install Nginx Ingress Controller

Since Traefik is disabled, use Nginx Ingress Controller for routing to your applications.

  • Deploy the Ingress Controller:

    ```bash
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml
    ```
    • This deploys the Nginx Ingress Controller exposed through a NodePort Service; the baremetal manifest does not bind ports 80 and 443 on the host. Note that the URL on main can change over time; pinning to a released tag is more reproducible.
  • Verify Deployment:

    ```bash
    kubectl get pods -n ingress-nginx
    ```
    • Ensure the ingress-nginx-controller pod is Running. Because the Service is of type NodePort, the controller is reachable on the assigned ports via any node’s IP.
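
To see which ports the controller was given:

```bash
kubectl get svc -n ingress-nginx ingress-nginx-controller
# The PORT(S) column maps 80 and 443 to NodePorts, e.g. 80:31234/TCP,443:31967/TCP
```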

  6. Deploy Applications

Deploy each application with a Deployment, a ClusterIP Service, and an Ingress resource.

  • Example: Nextcloud

    • Deployment:

      ```yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nextcloud
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nextcloud
        template:
          metadata:
            labels:
              app: nextcloud
          spec:
            containers:
            - name: nextcloud
              image: nextcloud:latest
              ports:
              - containerPort: 80
              volumeMounts:
              - mountPath: /var/www/html
                name: storage
            volumes:
            - name: storage
              persistentVolumeClaim:
                claimName: nextcloud-pvc
      ```
    • Service:

      ```yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: nextcloud-service
      spec:
        type: ClusterIP
        ports:
        - port: 80
          targetPort: 80
        selector:
          app: nextcloud
      ```
    • Ingress:

      ```yaml
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: nextcloud-ingress
      spec:
        ingressClassName: nginx  # match the IngressClass created by the ingress-nginx manifest
        rules:
        - host: nextcloud.example.com
          http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: nextcloud-service
                  port:
                    number: 80
      ```
    • Apply all:

      ```bash
      kubectl apply -f nextcloud-deployment.yaml -f nextcloud-service.yaml -f nextcloud-ingress.yaml
      ```
  • Repeat for Other Applications:

    • Use similar manifests for Vaultwarden, Paperless-ngx, Bookstack, etc., adjusting:

      • Image names (e.g., vaultwarden/server, ghcr.io/paperless-ngx/paperless-ngx, lscr.io/linuxserver/bookstack)

      • PVC names

      • Hostnames (e.g., vaultwarden.example.com, paperless.example.com, bookstack.example.com)
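
Once everything is applied, a quick way to confirm the rollouts (the deployment names are assumptions matching the manifests above):

```bash
for app in nextcloud vaultwarden paperless bookstack; do
  kubectl rollout status deployment/"$app" --timeout=120s
done
```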


  7. Access the Applications
  • Set Up Client Access:

    • On your local machine, edit /etc/hosts to map hostnames to a node’s IP (e.g., the master node):

      ```
      192.168.123.221 nextcloud.example.com vaultwarden.example.com paperless.example.com bookstack.example.com
      ```
    • Access each app in a browser at its hostname plus the controller’s HTTP NodePort from step 5, e.g., http://nextcloud.example.com:<nodeport>.

  • Verify:

    • Ensure each application is running:

      ```bash
      kubectl get pods
      ```
    • Check Ingress routing:

      ```bash
      kubectl get ingress
      ```
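
You can also exercise the Ingress routing without editing /etc/hosts by sending the Host header directly; replace <nodeport> with the HTTP NodePort from step 5:

```bash
curl -H "Host: nextcloud.example.com" http://192.168.123.221:<nodeport>/
```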

Additional Notes

  • Master Node Workloads: In k3s, the master node can run workloads by default. For a small cluster this is fine, but to reserve it for control-plane tasks, taint it (recent Kubernetes versions use the control-plane taint key; the older master key is deprecated):

    ```bash
    kubectl taint nodes <master-node-name> node-role.kubernetes.io/control-plane:NoSchedule
    ```
  • Resource Management: Adjust CPU/memory requests and limits in deployments based on application needs and VM resources; see the sketch after this list.

  • Security: For a home lab, basic setup is sufficient, but consider enabling HTTPS on the Ingress Controller and securing app configurations later.

  • Monitoring: Optionally, add Prometheus and Grafana for cluster monitoring in the future.
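
As an illustration of the resource-management note above, requests and limits can also be set without editing the manifests; the values here are placeholders to tune per application:

```bash
# Placeholder values — size these to your VMs and each app's real usage
kubectl set resources deployment/nextcloud \
  --requests=cpu=250m,memory=256Mi \
  --limits=cpu=1,memory=1Gi
```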


This configuration provides a lightweight, functional k3s cluster tailored to your home lab requirements, with NFS storage and hostname-based access via Nginx Ingress. Enjoy your setup!

