Homelab Infrastructure
Enterprise Homelab Infrastructure
Welcome to the technical deep-dive into the infrastructure hosting this website. This is a production-grade Proxmox cluster running 34 services with high availability, automated failover, and professional network segmentation—all from a homelab.
Cluster Overview
Node Status
| Node | Type | CPU Usage | Memory | Services | Status |
|---|---|---|---|---|---|
| node0 | Nested VM (R630) | | | 3 LXCs | Online |
| node1 | Bare Metal (R630) | | | 8 LXCs, 4 VMs | Online |
| node2 | Hyper-V VM | | | 12 LXCs, 3 VMs | Online |
| node3 | Hyper-V VM | | | 4 VMs | Online |
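Live node data like the table above can be pulled from the Proxmox API (`GET /api2/json/cluster/resources`). As a sketch of how per-node service counts could be derived from a response of that shape (the sample payload below is hypothetical):

```python
from collections import Counter

def summarize(resources):
    """Count running LXCs and VMs per node from a
    /cluster/resources-style list (response shape assumed)."""
    counts = Counter()
    for r in resources:
        if r["type"] in ("lxc", "qemu") and r.get("status") == "running":
            counts[(r["node"], r["type"])] += 1
    rows = {}
    for (node, kind), n in counts.items():
        rows.setdefault(node, {})[kind] = n
    return rows

# Hypothetical sample mirroring a slice of the cluster above
sample = [
    {"type": "lxc", "node": "node2", "status": "running"},
    {"type": "qemu", "node": "node2", "status": "running"},
    {"type": "lxc", "node": "node0", "status": "running"},
]
print(summarize(sample))  # e.g. {'node2': {'lxc': 1, 'qemu': 1}, 'node0': {'lxc': 1}}
```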
High Availability Services
A set of critical services is protected by Proxmox HA Manager. If a node fails, they automatically migrate to a healthy node within roughly two minutes.
Network Architecture
Dual-VLAN design with proper network segmentation:
VLAN 1: Management + Storage
- node0: 172.16.15.2
- node1: 172.16.15.10
- node2: 172.16.15.11
- node3: 172.16.15.12
- TrueNAS: 172.16.15.21
- qDevice: 172.16.15.16
VLAN 2: Application Network
- WordPress: 10.10.152.10
- Cloudflare Tunnel: 10.10.152.11
- Grafana: 10.10.152.20
- Additional 20+ services
Storage Layer
TrueNAS ZFS Storage
Physical Hardware
Dell PowerEdge R630
- CPU: 2× Xeon processors
- RAM: Enterprise ECC memory
- Role: Hosts node1 (bare metal) + node0 (nested VM)
- Notes: Most powerful node, handles compute-intensive workloads
Windows 11 PC + Hyper-V
- Hypervisor: Microsoft Hyper-V
- Role: Hosts node2, node3
- Network: Provides inter-VLAN routing and storage NFS exports
- Notes: Multi-hypervisor architecture (Proxmox + Hyper-V)
Note: This page currently displays static data. Live metrics from the Proxmox API will be integrated in a future update.
Architecture Deep Dive
Explore the five layers that make up this production homelab infrastructure, from physical hardware through to running applications.
Layer 1: Physical Hardware
Two physical hosts provide the foundation for the entire infrastructure.
The two hosts are connected over a 1 Gbps LAN link.
Dell R630 (Primary)
- Bare metal Proxmox VE host
- Nested virtualization capable
- Hardware RAID controller
- IPMI remote management
Windows 11 PC (Secondary)
- Hyper-V hypervisor
- Hosts 2 Proxmox VE VMs
- Multi-purpose workstation
- Local storage for VMs
Layer 2: Hypervisor Layer
Multi-hypervisor architecture combining Proxmox VE and Microsoft Hyper-V.
Proxmox HA Cluster "Hildreth"
qDevice (quorum arbitrator): 172.16.15.16
Proxmox VE 9.0.10
- 4-node high availability cluster
- Automatic VM/LXC migration on failure
- Shared storage via NFS
- Live migration support
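With four nodes plus the qDevice, the cluster holds five votes, so quorum is three and the cluster stays quorate even with two nodes down. A minimal sketch of the arithmetic (my own illustration, assuming one vote per node, not Proxmox code):

```python
def quorate(nodes_up: int, total_nodes: int = 4, qdevice_up: bool = True) -> bool:
    """Corosync-style majority: votes present must exceed half the total votes.
    The qDevice contributes one extra vote (one vote per node assumed)."""
    total_votes = total_nodes + 1                 # 4 nodes + qDevice = 5 votes
    votes = nodes_up + (1 if qdevice_up else 0)
    return votes > total_votes // 2               # need at least 3 of 5

print(quorate(2))                     # 2 nodes + qDevice = 3 votes -> True
print(quorate(2, qdevice_up=False))   # 2 of 5 votes -> False
```

This is why the qDevice matters: without it, losing two of the four nodes would drop the cluster below majority.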
Hyper-V Integration
- Proxmox VMs run on Hyper-V
- Full cluster participation
- Shares NFS storage from TrueNAS
- Multi-hypervisor architecture
Layer 3: Network Architecture
Dual-VLAN design provides network segmentation and security.
The uplink splits into two VLANs:
VLAN 1: Management
Subnet: 172.16.15.0/24
Purpose: Cluster communication, storage, management
- node0: 172.16.15.2
- node1: 172.16.15.10
- node2: 172.16.15.11
- node3: 172.16.15.12
- TrueNAS: 172.16.15.21
- qDevice: 172.16.15.16
VLAN 2: Application
Subnet: 10.10.152.0/24
Purpose: User services, containers, app traffic
- WordPress: 10.10.152.10
- Cloudflare Tunnel: 10.10.152.11
- Grafana: 10.10.152.20
- 20+ additional services
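The split can be sanity-checked with Python's `ipaddress` module; a quick sketch using the addresses listed above:

```python
import ipaddress

MGMT = ipaddress.ip_network("172.16.15.0/24")   # VLAN 1: management + storage
APPS = ipaddress.ip_network("10.10.152.0/24")   # VLAN 2: application traffic

hosts = {
    "node1": "172.16.15.10",
    "TrueNAS": "172.16.15.21",
    "WordPress": "10.10.152.10",
    "Grafana": "10.10.152.20",
}

for name, addr in hosts.items():
    ip = ipaddress.ip_address(addr)
    vlan = "management" if ip in MGMT else "application" if ip in APPS else "unknown"
    print(f"{name}: {vlan}")
```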
Layer 4: Storage Layer
TrueNAS provides shared ZFS storage to all cluster nodes via NFS.
NFS Export: /mnt/WD-USB/WD-ZFS
All nodes mount NFS storage
ZFS Features
- Data integrity verification
- Snapshot support
- Compression enabled
- Self-healing on read
Usage
- LXC/VM templates
- ISO images
- Backups
- Shared volumes
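In Proxmox, the TrueNAS export would typically appear in `/etc/pve/storage.cfg` as an `nfs` entry; a sketch using the server address and export path above (the storage name and content types are assumptions):

```
nfs: truenas-zfs
    server 172.16.15.21
    export /mnt/WD-USB/WD-ZFS
    path /mnt/pve/truenas-zfs
    content images,rootdir,iso,vztmpl,backup
```

Because every node mounts the same export, HA Manager can start a guest on any surviving node without copying its disk first.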
Layer 5: Application Layer
34 services running across the cluster with high availability protection.
HA Protected (6 services)
Standard Services (28 services)
17 additional LXC containers + 11 VMs running across all nodes
- Development environments
- Testing infrastructure
- Network services
- Monitoring tools
- Personal applications
End-to-End Request Flow
How a request reaches this website through the infrastructure layers.
Internet Request
Visitor accesses eddykawira.com
Cloudflare
DNS resolution, CDN caching, DDoS protection
Cloudflare Tunnel (LXC 301)
Secure ingress - no exposed ports
WordPress LXC (300)
10.10.152.10 on node2 - Apache + PHP + MariaDB
Response
HTML rendered and returned to visitor
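The tunnel hop above is typically driven by a cloudflared `config.yml` with ordered ingress rules; a sketch under the assumption that the tunnel LXC forwards directly to the WordPress container (the tunnel ID and credentials path are placeholders):

```yaml
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json

ingress:
  - hostname: eddykawira.com
    service: http://10.10.152.10:80   # WordPress LXC on VLAN 2
  - service: http_status:404          # required catch-all rule
```

Since cloudflared opens an outbound connection to Cloudflare's edge, no inbound ports need to be exposed on the homelab network.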
Interactive High Availability Demo
Click any healthy node to simulate a failure and watch Proxmox HA Manager automatically migrate services to healthy nodes. The simulation uses real Proxmox priority logic (node2 → node1 → node3 → node0).
How High Availability Works
Proxmox HA Manager orchestrates automatic service recovery across cluster nodes in four phases.
Detect
Cluster monitoring daemons continuously check node health via heartbeat signals. When a node becomes unresponsive, HA Manager immediately detects the failure.
Decide
HA Manager evaluates cluster state and selects target nodes based on priority groups (100 critical, 50 standard), resource capacity, and current load distribution.
Migrate
Services are recovered on healthy nodes. Planned migrations move VMs live via memory-state transfer (10-30s), while LXCs resynchronize their filesystems (30-90s); after an unplanned node failure, where live migration is impossible, services are restarted on the target instead. Visible service downtime: ~2-5 seconds.
Rebalance
When the failed node recovers, HA Manager can automatically rebalance services back to their preferred nodes based on priority settings and cluster optimization policies.
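The Decide phase above can be sketched as a simple scoring function: highest group priority wins, with ties broken by lower load. This is my own illustration of the idea (the node priorities below are hypothetical), not HA Manager's actual code:

```python
def pick_target(service_group, nodes):
    """Pick a failover target: highest HA group priority first,
    ties broken by lowest current load. `nodes` maps name -> dict
    with 'online', 'priority' (per HA group), and 'load' (0..1)."""
    candidates = [
        (name, info) for name, info in nodes.items()
        if info["online"] and service_group in info["priority"]
    ]
    if not candidates:
        return None
    # max over (priority, -load): higher priority, then lighter load
    best = max(candidates, key=lambda c: (c[1]["priority"][service_group], -c[1]["load"]))
    return best[0]

# Hypothetical cluster state: node2 (preferred) has just failed
nodes = {
    "node1": {"online": True,  "priority": {"critical": 90},  "load": 0.40},
    "node2": {"online": False, "priority": {"critical": 100}, "load": 0.30},
    "node3": {"online": True,  "priority": {"critical": 50},  "load": 0.20},
    "node0": {"online": True,  "priority": {"critical": 10},  "load": 0.10},
}
print(pick_target("critical", nodes))  # node2 is down -> node1
```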