Home Lab Kubernetes Cluster with k3s

Hardware and VM Configuration

Physical Machines

  • Machine 1: Quad-core CPU, 16GB RAM, 500GB storage (SSD or HDD).
  • Machine 2: Quad-core CPU, 16GB RAM, 500GB storage (SSD or HDD).

These specs are modest and suitable for a home lab. Adjust based on your hardware and application requirements.

Virtual Machines

  • Create four VMs in total:
    • Physical Machine 1: VM1 and VM2.
    • Physical Machine 2: VM3 and VM4.
  • VM Specifications: Allocate 2 vCPUs and 4GB RAM per VM. This ensures sufficient resources for k3s and applications while keeping usage low. Increase as needed (e.g., Nextcloud may need more RAM).
  • Operating System: Install Alpine Linux on each VM; its small footprint suits container hosts and leaves more of each VM's 4GB of RAM for k3s and the applications.

This setup distributes workloads across both physical machines, providing some resilience if one fails.


Kubernetes Cluster Setup with k3s

k3s is a lightweight Kubernetes distribution, perfect for home labs due to its simplicity.

Cluster Roles

  • Master Node: Designate VM1 (on Physical Machine 1) as the master node (the k3s server). It runs the control plane and, by default, can also run workloads.
  • Worker Nodes: Use VM2 (Physical Machine 1), VM3, and VM4 (Physical Machine 2) as worker nodes (k3s agents) for application pods.

A single-master setup is chosen for simplicity. High availability (HA) with multiple masters adds complexity (it requires embedded etcd or an external datastore such as MySQL or PostgreSQL), which may be overkill for a home lab unless uptime is critical.

Installation Steps

  1. Prepare Each VM:

    • Update Alpine Linux and install the required packages (curl is needed for the k3s install script; nfs-utils provides the NFS client used later for persistent storage):

      apk update
      apk add curl iptables ip6tables conntrack-tools nfs-utils
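    • Give each VM a unique hostname; k3s uses the hostname as the node name, so duplicates cause join problems. A minimal sketch, where the names vm1–vm4 and the 192.168.1.x addresses are placeholders for your own layout:

      setup-hostname vm1                       # Alpine helper; or edit /etc/hostname directly
      hostname -F /etc/hostname                # apply without rebooting
      echo "192.168.1.11 vm1" >> /etc/hosts    # optional: make the other nodes resolvable by name
      echo "192.168.1.12 vm2" >> /etc/hosts
      echo "192.168.1.21 vm3" >> /etc/hosts
      echo "192.168.1.22 vm4" >> /etc/hosts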
  2. Install k3s on the Master Node (VM1):

    • Install and start the k3s server:

      curl -sfL https://get.k3s.io | sh -
    • Retrieve the node token for workers:

      cat /var/lib/rancher/k3s/server/node-token

      Save this value; it is referenced below as <token>.

  3. Install k3s on Worker Nodes (VM2, VM3, VM4):

    • Join each worker to the cluster using the master’s IP and token:

      curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<token> sh -

      Replace <master-ip> with VM1’s IP address.

  4. Verify the Cluster:

    • On the master, check the cluster (the k3s binary bundles kubectl, so no separate install is needed):

      kubectl get nodes
    • All four VMs should show as Ready.

Networking

  • Ensure VMs are on the same local network (e.g., bridged networking in your virtualization software).
  • k3s uses Flannel for pod networking by default, which should work automatically.
  • Open port 6443/tcp on the master for the API server, and allow node-to-node traffic for Flannel VXLAN (8472/udp) and the kubelet (10250/tcp), or disable the VM firewalls entirely for simplicity. Example rules follow below.
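If you do keep firewalls enabled, a rough iptables sketch for the ports listed above, run on each node and assuming your VMs sit on a 192.168.1.0/24 LAN:

  iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 6443 -j ACCEPT    # k3s API server
  iptables -A INPUT -p udp -s 192.168.1.0/24 --dport 8472 -j ACCEPT    # Flannel VXLAN overlay
  iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 10250 -j ACCEPT   # kubelet metrics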

Persistent Storage Setup

Applications like Nextcloud and Vaultwarden need persistent storage for files and databases. NFS is used for its flexibility.

NFS Servers

  • Set up an NFS server on each physical machine:
    • Physical Machine 1: Export /srv/nfs1 (e.g., 100GB).
    • Physical Machine 2: Export /srv/nfs2 (e.g., 100GB).
  • Configure NFS to allow access from all VM IPs on your local network.
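A rough sketch of the export configuration on Physical Machine 1, assuming it also runs Alpine and that the VMs sit on 192.168.1.0/24 (tighten the subnet to the individual VM IPs if you prefer); repeat on Physical Machine 2 with /srv/nfs2:

  apk add nfs-utils
  mkdir -p /srv/nfs1
  echo "/srv/nfs1 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
  rc-update add nfs             # start the NFS server on boot
  rc-service nfs start
  exportfs -ra                  # re-read /etc/exports after later changes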

PersistentVolumes in Kubernetes

  • Create two PersistentVolumes (PVs) in k3s for NFS shares:
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-nfs1
  spec:
    capacity:
      storage: 100Gi
    accessModes:
      - ReadWriteMany
    nfs:
      server: <IP of Physical Machine 1>
      path: /srv/nfs1
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-nfs2
  spec:
    capacity:
      storage: 100Gi
    accessModes:
      - ReadWriteMany
    nfs:
      server: <IP of Physical Machine 2>
      path: /srv/nfs2

PersistentVolumeClaims (PVCs)

  • Each application creates a PVC to request storage from these PVs. Two caveats: k3s ships with the local-path provisioner as its default StorageClass, so claims that should bind to the NFS PVs must set storageClassName: "" (see the sketch below), and a PV can be bound by only one PVC, so plan one PV (or one exported subdirectory) per application. Once a claim is bound, the kubelet mounts the NFS share using the nfs-utils package installed on the VMs.
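A minimal claim sketch that pins itself to pv-nfs1; the claim name is illustrative, and the empty storageClassName is what stops the default local-path provisioner from creating a new volume instead:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: example-nfs-claim      # illustrative name
  spec:
    storageClassName: ""         # opt out of the default local-path StorageClass
    volumeName: pv-nfs1          # bind explicitly to the PV defined above
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 100Gi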

Why NFS?

  • NFS allows any node to access storage over the network, providing flexibility in pod scheduling. While local storage (e.g., hostPath) is simpler, it ties data to specific nodes, reducing resilience. In a home lab, NFS performance over a local network should be adequate.

Deploying Applications

Deploy your applications as pods in the k3s cluster using deployments, services, and ingresses.

Container Images

  • Use official or community Docker images:
    • Nextcloud: nextcloud:latest
    • Vaultwarden: vaultwarden/server:latest
    • Paperless-ngx: ghcr.io/paperless-ngx/paperless-ngx:latest
    • Bookstack: lscr.io/linuxserver/bookstack:latest
  • Refer to each application’s documentation for specific requirements (e.g., environment variables, ports).

Deployment Example (Nextcloud)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data
spec:
  storageClassName: ""   # bind to an NFS PV rather than the default local-path StorageClass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
      - name: nextcloud
        image: nextcloud:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /var/www/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nextcloud-data
---
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-service
spec:
  selector:
    app: nextcloud
  ports:
  - port: 80
    targetPort: 80

Create similar manifests for other applications, adjusting storage and database configurations as needed.
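As an illustration, a hedged Vaultwarden manifest follows the same pattern; the upstream image serves HTTP on port 80 by default and keeps its SQLite database and attachments under /data, while the claim size and resource names here are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vaultwarden-data
spec:
  storageClassName: ""           # bind to an NFS PV, not the default local-path class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vaultwarden
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vaultwarden
  template:
    metadata:
      labels:
        app: vaultwarden
    spec:
      containers:
      - name: vaultwarden
        image: vaultwarden/server:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /data       # Vaultwarden stores its database and attachments here
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: vaultwarden-data
---
apiVersion: v1
kind: Service
metadata:
  name: vaultwarden-service
spec:
  selector:
    app: vaultwarden
  ports:
  - port: 80
    targetPort: 80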

Databases

  • Applications like Bookstack (MySQL) and Paperless-ngx (PostgreSQL) require separate database pods:
    • Deploy a MySQL pod with a PVC for Bookstack (a sketch follows this list).
    • Deploy a PostgreSQL pod with a PVC for Paperless-ngx.
  • Use NFS-backed PVs for simplicity, though local storage could improve database performance if pods are pinned to specific nodes.
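A minimal MySQL sketch for Bookstack is shown below. The credentials are placeholders (put real ones in a Secret), and it deliberately omits storageClassName so the claim is provisioned by k3s's default local-path StorageClass, i.e. node-local storage, which is usually friendlier to MySQL than NFS; Bookstack's own container is then pointed at the bookstack-db Service through its DB_* environment variables (check the image's documentation for the exact names).

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bookstack-db-data
spec:
  accessModes:
    - ReadWriteOnce              # local-path volumes are single-node
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bookstack-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bookstack-db
  template:
    metadata:
      labels:
        app: bookstack-db
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:                     # placeholder credentials -- move to a Secret in practice
        - name: MYSQL_ROOT_PASSWORD
          value: changeme-root
        - name: MYSQL_DATABASE
          value: bookstack
        - name: MYSQL_USER
          value: bookstack
        - name: MYSQL_PASSWORD
          value: changeme
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: db
          mountPath: /var/lib/mysql
      volumes:
      - name: db
        persistentVolumeClaim:
          claimName: bookstack-db-data
---
apiVersion: v1
kind: Service
metadata:
  name: bookstack-db
spec:
  selector:
    app: bookstack-db
  ports:
  - port: 3306
    targetPort: 3306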

Simplifying with Helm

  • Use Helm charts for easier deployment:
    • Nextcloud: helm install nextcloud nextcloud/nextcloud
    • Bookstack: available via community-maintained charts on Artifact Hub (typically wrapping the LinuxServer image).
  • Install Helm on your cluster and search for charts on Artifact Hub.
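A hedged sequence for the Nextcloud chart, assuming the upstream repository lives at https://nextcloud.github.io/helm/ (verify the repository and its chart values on Artifact Hub before relying on them):

  export KUBECONFIG=/etc/rancher/k3s/k3s.yaml    # point helm at the k3s-generated kubeconfig
  apk add helm                                   # needs Alpine's community repository enabled
  helm repo add nextcloud https://nextcloud.github.io/helm/
  helm repo update
  helm show values nextcloud/nextcloud           # inspect configurable options first
  helm install nextcloud nextcloud/nextcloud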

Accessing Applications

  • Use k3s’s built-in Traefik ingress controller to expose applications:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
spec:
  rules:
  - host: nextcloud.mydomain.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nextcloud-service
            port:
              number: 80
  • Edit your local /etc/hosts file to map nextcloud.mydomain.local to a node’s IP, or set up a local DNS server.
  • Optionally, enable HTTPS: for purely local hostnames like nextcloud.mydomain.local, use a self-signed or internal CA certificate via Traefik, since Let's Encrypt can only issue certificates for publicly resolvable domains.
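For example, a hypothetical /etc/hosts entry on your workstation; 192.168.1.12 stands in for any node IP, since the bundled Traefik listens on ports 80/443 of every node through k3s's ServiceLB:

  192.168.1.12  nextcloud.mydomain.local   # substitute one of your own node IPs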

Network Diagram

The following diagram illustrates the setup, showing the physical machines, VMs, NFS servers, and the Kubernetes cluster:

+--------------------------------+         +--------------------------------+
|       Physical Machine 1       |         |       Physical Machine 2       |
|                                |         |                                |
|  +---------+   +---------+     |         |  +---------+   +---------+     |
|  |  VM1    |   |  VM2    |     |         |  |  VM3    |   |  VM4    |     |
|  | (Master)|   | (Worker)|     |         |  | (Worker)|   | (Worker)|     |
|  +---------+   +---------+     |         |  +---------+   +---------+     |
|         | NFS1 (/srv/nfs1)     |         |         | NFS2 (/srv/nfs2)     |
+---------|----------------------+         +---------|----------------------+
          |                                          |
          +------------------------------------------+
                             |
                             |  Local Network
                             |
                    +-------------------+
                    | Kubernetes Cluster|
                    |   (k3s)           |
                    |   - Master: VM1   |
                    |   - Workers: VM2, |
                    |              VM3, |
                    |              VM4  |
                    +-------------------+
                             |
                             |
                    +-------------------+
                    | Applications      |
                    | - Nextcloud       |
                    | - Vaultwarden     |
                    | - Paperless-ngx   |
                    | - Bookstack       |
                    |   (Deployed on    |
                    |    worker nodes)  |
                    +-------------------+
  • Physical Machines: Host the VMs and NFS servers.
  • VMs: Run Alpine Linux and form the Kubernetes cluster.
  • NFS Servers: Provide persistent storage for applications.
  • Kubernetes Cluster: Manages application deployments and storage access.

Additional Considerations

Resource Monitoring

  • Start with 2 vCPUs and 4GB RAM per VM. Use kubectl top to monitor resource usage and adjust if applications like Nextcloud require more resources.
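k3s bundles metrics-server, so these commands should work out of the box once metrics have been collected for a minute or two:

  kubectl top nodes         # per-node CPU and memory usage
  kubectl top pods -A       # per-pod usage across all namespaces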

Resilience

  • With a single master node, if Physical Machine 1 fails, the cluster loses its control plane (no kubectl access or rescheduling), though pods already running on VM3 and VM4 keep running. For critical setups, back up the master's configuration (/var/lib/rancher/k3s) so it can be restored elsewhere; a sketch follows below.
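With the default embedded SQLite datastore, the simplest approach is to back up the server's data directory while k3s is stopped; a rough sketch on VM1, with an assumed destination path:

  rc-service k3s stop
  tar czf /root/k3s-server-backup.tar.gz /var/lib/rancher/k3s/server
  rc-service k3s start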

Backups

  • Regularly back up NFS directories (e.g., /srv/nfs1, /srv/nfs2) to an external drive or another machine to protect application data.
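A hedged example using rsync over SSH; backup-host and the destination paths are placeholders:

  # On Physical Machine 1
  rsync -a --delete /srv/nfs1/ backup-host:/backups/nfs1/
  # On Physical Machine 2
  rsync -a --delete /srv/nfs2/ backup-host:/backups/nfs2/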

Documentation

  • Record VM IPs, NFS configurations, and Kubernetes manifests for easy recovery or scaling.

Summary

This setup provides a lightweight, functional home lab Kubernetes cluster using k3s:

  • Hardware: Two physical machines (quad-core, 16GB RAM, 500GB storage each).
  • VMs: Four VMs (two per machine) running Alpine Linux, with 2 vCPUs and 4GB RAM each.
  • Cluster: One master node (VM1) and three worker nodes (VM2, VM3, VM4).
  • Storage: NFS servers on both physical machines, with PersistentVolumes for application data.
  • Applications: Deploy Nextcloud, Vaultwarden, Paperless-ngx, Bookstack, etc., via Helm or YAML, using Traefik for access.

This configuration is simple to manage, leverages Alpine's efficiency, and meets home lab needs while allowing for future experimentation and scaling.
