Diagram
```mermaid
graph TD
    subgraph "Physical Machine 1 (192.168.123.100)"
        B("VM1 (192.168.123.221)")
        C("VM2 (192.168.123.222)")
        I["NFS Server (/mnt/nfs-share)"]
        I --- B
        I --- C
    end
    subgraph "Physical Machine 2 (192.168.123.230)"
        E("VM3 (192.168.123.223)")
        I --- E
    end
    B --> F["k3s Master Node"]
    C --> G["k3s Worker Node 1"]
    E --> H["k3s Worker Node 2"]
    F --> G
    F --> H
    subgraph "k3s Cluster"
        F
        G
        H
        J["Nginx Ingress Controller"]
        K["Persistent Volumes (NFS)"]
        L["Persistent Volume Claims"]
        M["Pods (Nextcloud, Vaultwarden, etc.)"]
        N["MetalLB (Optional)"]
    end
    F --> J
    G --> J
    H --> J
    F --> K
    G --> K
    H --> K
    K --> L
    L --> M
    J --> M
    F --> N
    G --> N
    H --> N
    I --> K
```
Key improvements in this architecture diagram:
- k3s Cluster Subgraph:
- Encapsulates the core Kubernetes components.
- Includes essential elements:
- Master and worker nodes.
- Nginx Ingress Controller (for routing traffic).
- Persistent Volumes (NFS).
- Persistent Volume Claims.
- Pods (representing your applications).
- MetalLB (optional, for load balancing).
- Connections:
- Clear connections between the nodes and the Ingress Controller.
- Connections between the nodes and the Persistent Volumes.
- Connections between the NFS Server and the Persistent Volumes.
- Connections between the Persistent Volumes and Persistent Volume Claims.
- Connections between the Persistent Volume Claims and the Pods.
- Connections between the Ingress controller and the Pods.
- Connections between the nodes and MetalLB.
- NFS Integration:
- Explicitly shows the NFS server providing storage to the Persistent Volumes.
- Logical Flow:
- The diagram illustrates the logical flow of traffic and storage within the architecture.
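The storage chain listed above (NFS Server → Persistent Volumes → Persistent Volume Claims → Pods) can be sketched as Kubernetes manifests. The names (`nfs-pv`, `nfs-pvc`) and the 10Gi size are illustrative; the server IP and export path are taken from the diagram (the NFS server on Physical Machine 1 exporting `/mnt/nfs-share`):

```yaml
# Static NFS-backed PersistentVolume, matching the "Persistent Volumes (NFS)"
# node in the diagram. Name and capacity are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.123.100   # NFS server (Physical Machine 1)
    path: /mnt/nfs-share      # export path from the diagram
---
# PersistentVolumeClaim that binds to the PV above; a Pod (e.g. Nextcloud)
# would then reference this claim in its volumes section.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # empty string forces binding to a static PV
  resources:
    requests:
      storage: 10Gi
```

The empty `storageClassName` is what keeps the claim from being intercepted by a dynamic provisioner, so it binds to the statically defined NFS volume shown in the diagram.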
```mermaid
graph TD
    subgraph "Physical Machine 1 (192.168.123.100)"
        B("VM1 (192.168.123.221)")
        C("VM2 (192.168.123.222)")
        I["NFS Server (/mnt/nfs-share)"]
    end
    subgraph "Physical Machine 2 (192.168.123.230)"
        A("VM3 (192.168.123.223)")
    end
    B --> F["k3s Master Node"]
    C --> G["k3s Worker Node 1"]
    A --> H["k3s Worker Node 2"]
    F --> G
    F --> H
    I --> B
    I --> C
    I --> A
```
```mermaid
graph TD
    subgraph "Physical Machine 1 (192.168.123.100)"
        B("VM1 (192.168.123.221)")
        C("VM2 (192.168.123.222)")
        I["NFS Server (/mnt/nfs-share)"]
        I -- NFS Mount --> B
        I -- NFS Mount --> C
    end
    subgraph "Physical Machine 2 (192.168.123.230)"
        E("VM3 (192.168.123.223)")
        I -- NFS Mount --> E
    end
    B --> F["k3s Master Node"]
    C --> G["k3s Worker Node 1"]
    E --> H["k3s Worker Node 2"]
    F --> G
    F --> H
```
Key changes:
- The `I["NFS Server (/mnt/nfs-share)"]` block is now positioned below `B` and `C` within the "Physical Machine 1" subgraph.
- Added `I -- NFS Mount --> B`, `I -- NFS Mount --> C`, and `I -- NFS Mount --> E` to clearly show the NFS mount connections between the NFS server and the VMs.
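On the machines themselves, those `NFS Mount` edges correspond to an export on Physical Machine 1 and a mount on each VM. A minimal sketch, assuming the IPs and paths from the diagram and a `/24` subnet (the mount point on the VMs and the export options are assumptions):

```
# On the NFS server (Physical Machine 1, 192.168.123.100):
# /etc/exports — export the share to the VM subnet
/mnt/nfs-share 192.168.123.0/24(rw,sync,no_subtree_check)

# Reload the export table after editing /etc/exports
sudo exportfs -ra

# On each VM (192.168.123.221/.222/.223) — mount the share
sudo mkdir -p /mnt/nfs-share
sudo mount -t nfs 192.168.123.100:/mnt/nfs-share /mnt/nfs-share
```

For the mounts to survive reboots, each VM would also need a corresponding entry in `/etc/fstab`.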
```mermaid
graph TD
    subgraph "Physical Machine 1 (192.168.123.100)"
        B("VM1 (192.168.123.221)")
        C("VM2 (192.168.123.222)")
        I["NFS Server (/mnt/nfs-share)"]
        I --- B
        I --- C
    end
    subgraph "Physical Machine 2 (192.168.123.230)"
        E("VM3 (192.168.123.223)")
        I --- E
    end
    B --> F["k3s Master Node"]
    C --> G["k3s Worker Node 1"]
    E --> H["k3s Worker Node 2"]
    F --> G
    F --> H
```
I've simply replaced the `-->` with `---` for the NFS mount connections. This removes the arrowheads while keeping the visual connection between the NFS server and the VMs.
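For reference, the three Mermaid edge styles used across these iterations, side by side (node names here are arbitrary):

```mermaid
graph LR
    %% directed edge with an arrowhead
    A --> B
    %% open link, no arrowhead (used for the NFS mounts above)
    C --- D
    %% directed edge with a text label
    E -- NFS Mount --> F
```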