Adjust VM worker nodes

Got it! You'll be setting up the cluster with:

  • Physical Machine 1: VM1 (Master Node) and VM2 (Worker Node)
  • Physical Machine 2: VM3 (Worker Node)

Here’s a revised and detailed setup guide for your Kubernetes cluster using k3s, Alpine Linux, and the specified configuration.


Cluster Topology

Physical Machine     VM Name   Role          Resources
Physical Machine 1   VM1       Master Node   1 vCPU, 4GB RAM, 50GB Storage
Physical Machine 1   VM2       Worker Node   1 vCPU, 4GB RAM, 50GB Storage
Physical Machine 2   VM3       Worker Node   1 vCPU, 4GB RAM, 50GB Storage

Step-by-Step Configuration

1. Network Configuration

General Settings

  • Ensure all VMs have a private network interface (e.g., eth1) for cluster communication. You can use a shared physical network or a virtual network on your hypervisor (e.g., VirtualBox or Proxmox).
  • Assign static IP addresses to all VMs for consistency (a sample interface configuration follows this list). For example:
    • VM1 (Master): 192.168.100.10
    • VM2 (Worker): 192.168.100.11
    • VM3 (Worker): 192.168.100.12
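
As a minimal sketch for VM1, assuming the hypervisor exposes the private NIC as eth1 and a /24 network, the static address can be set in /etc/network/interfaces on Alpine (adjust the address per VM):

auto eth1
iface eth1 inet static
    address 192.168.100.10
    netmask 255.255.255.0

Apply the change with rc-service networking restart.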

Disable IPv6

Run the following commands on all VMs to disable IPv6:

echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
sysctl -p

2. Install Alpine Linux

Install Alpine Linux 3.18 or later on all VMs. During installation:

  • Select minimal installation for a lightweight setup.
  • Ensure the VMs have access to the internet for package updates (a quick check follows this list).
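
A short sketch for confirming that each VM can reach the package mirrors and is up to date; the sed line optionally enables the community repository (useful later for extra packages such as helm):

sed -i '/community/s/^#//' /etc/apk/repositories
apk update && apk upgrade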

3. Install Docker/Containerd

k3s uses containerd as its container runtime and ships with an embedded copy, so installing a separate runtime is optional. If you want a standalone containerd on the VMs (for example, for containers managed outside of k3s), install it with:

apk update && apk add containerd

Enable and start containerd (Alpine uses OpenRC, not systemd):

rc-update add containerd default && rc-service containerd start

4. Install k3s

Master Node (VM1)

Install and initialize the k3s cluster on the master node (install curl first if it is not already present: apk add curl):

curl -sfL https://get.k3s.io | sh -

After installation, the cluster is running, and you can access it using kubectl:

kubectl get nodes

The output should show VM1 as the master node:

NAME     STATUS   ROLES           AGE   VERSION
vm1      Ready    control-plane   5s    v1.25.4+k3s5
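
Before joining the workers, read the join token that k3s generated on the master; this is the value to use for <JOIN-TOKEN> in the next step:

cat /var/lib/rancher/k3s/server/node-token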

Worker Nodes (VM2 and VM3)

On both worker nodes (VM2 and VM3), install k3s and join them to the cluster. Replace <MASTER-IP> with the master node's IP (192.168.100.10) and <JOIN-TOKEN> with the token from the master node (found in /var/lib/rancher/k3s/server/node-token):

curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER-IP>:6443 K3S_TOKEN=<JOIN-TOKEN> sh -
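
For example, with the master address used above (the token placeholder still needs to be filled in from the previous step):

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.100.10:6443 K3S_TOKEN=<JOIN-TOKEN> sh -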

From the master node, verify that all nodes have joined the cluster:

kubectl get nodes

The output should show all three nodes:

NAME     STATUS   ROLES           AGE   VERSION
vm1      Ready    control-plane   10m   v1.25.4+k3s5
vm2      Ready    <none>          2m    v1.25.4+k3s5
vm3      Ready    <none>          1m    v1.25.4+k3s5
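
The ROLES column shows <none> for the workers because k3s does not set a role label on agent nodes. If you prefer the output to read worker, add the label yourself (purely cosmetic):

kubectl label node vm2 node-role.kubernetes.io/worker=worker
kubectl label node vm3 node-role.kubernetes.io/worker=worker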

5. NFS Storage Configuration

You mentioned using an existing NFS server on Physical Machine 1. Here’s how to integrate it into your Kubernetes cluster:

Install the NFS Client on All Nodes

The kubelet mounts NFS volumes on whichever node a pod is scheduled, so every node needs the NFS client utilities, not just the master:

apk add nfs-utils

Optionally verify the export by mounting it manually on one node (and unmounting it afterwards):

mkdir -p /mnt/nfs
mount -t nfs <NFS-SERVER-IP>:/path/to/export /mnt/nfs

Create a PersistentVolume (PV)

Create a pv-nfs.yaml file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS-SERVER-IP>
    path: /path/to/export

Apply the PV:

kubectl apply -f pv-nfs.yaml

Create a PersistentVolumeClaim (PVC)

Create a pvc-nfs.yaml file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi

Apply the PVC:

kubectl apply -f pvc-nfs.yaml
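
Check that the claim binds to the NFS volume before deploying applications; both should report a Bound status:

kubectl get pv nfs-pv
kubectl get pvc nfs-pvc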

Now, any application deployed in the cluster can use this PVC for persistent storage.


6. Application Deployment

You can now deploy applications like Nextcloud, Vaultwarden, Paperless-ngx, Bookstack, and others. Here is an example for Nextcloud:

Nextcloud Deployment (Example)

Create a manifest file (nextcloud.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: nextcloud:23
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html
              name: nextcloud-storage
      volumes:
        - name: nextcloud-storage
          persistentVolumeClaim:
            claimName: nfs-pvc

Apply the deployment:

kubectl apply -f nextcloud.yaml
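
To confirm the rollout and take a first look at Nextcloud before any Ingress is set up, a quick sketch using a port-forward (local port 8080 to the container's port 80):

kubectl rollout status deployment/nextcloud
kubectl port-forward deployment/nextcloud 8080:80

Then open http://localhost:8080 on the machine where the command is running.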

7. Optional Enhancements

Monitoring

Install Prometheus and Grafana for monitoring. These commands assume Helm is installed and pointed at the cluster (see the sketch below); note that the Grafana chart lives in its own repository:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/prometheus
helm install grafana grafana/grafana
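
A minimal sketch for getting Helm working on the master node, assuming the helm package is available from Alpine's community repository (otherwise use Helm's official get-helm-3 install script) and that k3s wrote its kubeconfig to the default /etc/rancher/k3s/k3s.yaml:

apk add helm
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

Add the export to your shell profile if you want it to persist across sessions.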

Dashboard Access

k3s does not bundle the Kubernetes Dashboard, so deploy it first, for example from the upstream manifest:

k3s kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Then start a local proxy to the API server:

k3s kubectl proxy

Visit http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ in your browser. Logging in requires a bearer token from a ServiceAccount with access to the dashboard.

Ingress Controller (Optional)

k3s already ships with Traefik as its default ingress controller, so this is only needed if you prefer NGINX Ingress (in that case, consider disabling the bundled Traefik). Install it with Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
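
As a sketch of exposing Nextcloud through the ingress controller, assuming you have created a ClusterIP Service named nextcloud on port 80 for the deployment above and that the hostname nextcloud.example.lan resolves to one of your node IPs (both names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud
spec:
  ingressClassName: nginx
  rules:
    - host: nextcloud.example.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud
                port:
                  number: 80

If you stay with the bundled Traefik, set ingressClassName: traefik instead.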

8. Final Notes

  • Ensure your VMs have proper firewall rules, e.g. allow the Kubernetes API on 6443/tcp, the kubelet on 10250/tcp, flannel VXLAN on 8472/udp, and the NodePort range 30000-32767/tcp (an example rule set follows this list).
  • Use a reverse proxy (e.g., Nginx) with TLS to expose your applications securely.
  • Consider setting up backups for your Kubernetes cluster using tools like Velero.
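
An illustrative sketch with plain iptables, assuming the private cluster interface is eth1 as above; adapt it to awall or nftables if that is what your VMs use:

iptables -A INPUT -i eth1 -p tcp --dport 6443 -j ACCEPT        # Kubernetes API server
iptables -A INPUT -i eth1 -p tcp --dport 10250 -j ACCEPT       # kubelet
iptables -A INPUT -i eth1 -p udp --dport 8472 -j ACCEPT        # flannel VXLAN overlay
iptables -A INPUT -i eth1 -p tcp --dport 30000:32767 -j ACCEPT # NodePort services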

If you need help with specific application deployments or additional configurations, let me know!
