
Homelab

Production-grade homelab infrastructure powering this website. 4-node Proxmox cluster with 34 services, enterprise-level high availability, and automated failover.

Homelab Infrastructure

📋 Note: The Cost Analysis and Build Your Own tabs are currently being updated with enhanced interactive content. Check back soon!

Enterprise Homelab Infrastructure

Welcome to the technical deep-dive into the infrastructure hosting this website. This is a production-grade Proxmox cluster running 34 services with high availability, automated failover, and professional network segmentation—all from a homelab.

Cluster Overview

  • 4 cluster nodes (Proxmox VE 9.0.10)
  • 34 running services (23 LXCs + 11 VMs)
  • 6 HA-protected services (automatic failover)
  • 99.8% uptime (last 90 days)

Node Status

Node    Type               CPU   Memory  Services        Status
node0   Nested VM (R630)   4%    18%     3 LXCs          Online
node1   Bare Metal (R630)  11%   79%     8 LXCs, 4 VMs   Online
node2   Hyper-V VM         10%   30%     12 LXCs, 3 VMs  Online
node3   Hyper-V VM         2%    41%     4 VMs           Online

High Availability Services

The following services are protected by Proxmox HA Manager. If a node fails, these services automatically migrate to a healthy node within ~2 minutes:

  • 🌐 WordPress (LXC 300): primary node2, HA enabled
  • 🔐 Cloudflare Tunnel (LXC 301): primary node2, HA enabled
  • 🛡️ AdGuard DNS (LXC 106): primary node1, HA enabled
  • 🔐 Twingate (LXC 124): primary node1, HA enabled
  • 🌐 Tailscale (LXC 126): primary node1, HA enabled
  • 🔍 Unbound DNS (LXC 151): primary node1, HA enabled
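In Proxmox, this protection corresponds to entries in /etc/pve/ha/resources.cfg (or equivalent `ha-manager add` commands). A sketch of what a few of this cluster's entries might look like; the group names are assumptions, and the container IDs come from the list above:

```
ct: 300
	group prefer_node2
	state started
	comment WordPress (illustrative entry)

ct: 301
	group prefer_node2
	state started
	comment Cloudflare Tunnel (illustrative entry)

ct: 106
	group prefer_node1
	state started
	comment AdGuard DNS (illustrative entry)
```

Each `group` would be an HA group pinning the service to its preferred node with fallback priorities, so the manager knows where to run it normally and where to move it on failure.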

Network Architecture

A dual-VLAN design separates cluster, storage, and management traffic from user-facing application traffic:

VLAN 1: Management + Storage

Subnet: 172.16.15.0/24
Purpose: Proxmox cluster communication, TrueNAS NFS storage, UniFi management
  • node0: 172.16.15.2
  • node1: 172.16.15.10
  • node2: 172.16.15.11
  • node3: 172.16.15.12
  • TrueNAS: 172.16.15.21
  • qDevice: 172.16.15.16

VLAN 2: Application Network

Subnet: 10.10.152.0/24
Purpose: User-facing services, LXC containers, application traffic
  • WordPress: 10.10.152.10
  • Cloudflare Tunnel: 10.10.152.11
  • Grafana: 10.10.152.20
  • Additional 20+ services
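Given the two subnets above, deciding which VLAN a service address belongs to is a few lines with Python's standard ipaddress module. A minimal sketch, using only the subnet and host values from this page:

```python
import ipaddress

# Subnets from the dual-VLAN design above
MGMT = ipaddress.ip_network("172.16.15.0/24")   # VLAN 1: management + storage
APPS = ipaddress.ip_network("10.10.152.0/24")   # VLAN 2: application network

def vlan_of(addr: str) -> str:
    """Return which VLAN an IP address belongs to."""
    ip = ipaddress.ip_address(addr)
    if ip in MGMT:
        return "VLAN 1 (management)"
    if ip in APPS:
        return "VLAN 2 (application)"
    return "unknown"

print(vlan_of("172.16.15.21"))  # TrueNAS: VLAN 1 (management)
print(vlan_of("10.10.152.10"))  # WordPress: VLAN 2 (application)
```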

Storage Layer

TrueNAS ZFS Storage

Location: Proxmox VM on node1 (Dell R630)
Address: 172.16.15.21
Filesystem: ZFS with USB-attached WD drive
Export: NFS share (WD-ZFS) mounted on all Proxmox nodes
Purpose: Shared storage for LXC templates, VM ISOs, backups, and container volumes
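On the Proxmox side, mounting this share on every node amounts to a single entry in /etc/pve/storage.cfg. A sketch of what it might look like; the server address and export path come from this page, while the storage ID, mount path, and content types are assumptions:

```
nfs: WD-ZFS
	server 172.16.15.21
	export /mnt/WD-USB/WD-ZFS
	path /mnt/pve/WD-ZFS
	content images,iso,vztmpl,backup
```

Because /etc/pve is cluster-synchronized, one entry makes the share visible to all four nodes, which is what enables HA migration to shared storage.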

Physical Hardware

Dell PowerEdge R630

  • CPU: 2× Xeon processors
  • RAM: Enterprise ECC memory
  • Role: Hosts node1 (bare metal) + node0 (nested VM)
  • Notes: Most powerful node, handles compute-intensive workloads

Windows 11 PC + Hyper-V

  • Hypervisor: Microsoft Hyper-V
  • Role: Hosts node2, node3
  • Network: Provides inter-VLAN routing and storage NFS exports
  • Notes: Multi-hypervisor architecture (Proxmox + Hyper-V)

Note: This page currently displays static data. Live metrics from the Proxmox API will be integrated in a future update.

Architecture Deep Dive

Explore the five layers that make up this production homelab infrastructure, from physical hardware through to running applications.

Layer 1: Physical Hardware

Two physical hosts provide the foundation for the entire infrastructure.

🖥️ Dell PowerEdge R630
  • 2× Xeon processors
  • Enterprise ECC RAM
  • RAID storage
💻 Windows 11 PC
  • Consumer hardware
  • Hyper-V hypervisor
  • Local storage

1Gbps LAN Connection

Dell R630 (Primary)
  • Bare metal Proxmox VE host
  • Nested virtualization capable
  • Hardware RAID controller
  • IPMI remote management
Windows 11 PC (Secondary)
  • Hyper-V hypervisor
  • Hosts 2 Proxmox VE VMs
  • Multi-purpose workstation
  • Local storage for VMs

Layer 2: Hypervisor Layer

Multi-hypervisor architecture combining Proxmox VE and Microsoft Hyper-V.

All four cluster nodes run Proxmox VE; two live on the R630 and two run as Hyper-V guest VMs:

  • 🔷 node1: Proxmox VE, bare metal on the R630 (172.16.15.10)
  • 🔷 node0: Proxmox VE, nested VM on the R630 (172.16.15.2)
  • 🔶 node2: Proxmox VE, Hyper-V VM (172.16.15.11)
  • 🔶 node3: Proxmox VE, Hyper-V VM (172.16.15.12)

Proxmox HA Cluster "Hildreth"

qDevice (quorum arbitrator): 172.16.15.16

Proxmox VE 9.0.10
  • 4-node high availability cluster
  • Automatic VM/LXC migration on failure
  • Shared storage via NFS
  • Live migration support
Hyper-V Integration
  • Proxmox VMs run on Hyper-V
  • Full cluster participation
  • Shares NFS storage from TrueNAS
  • Multi-hypervisor architecture
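With an even number of nodes, the qDevice is what keeps quorum decidable: it adds a fifth vote, so a 2/2 split cannot leave both halves below a majority. A sketch of the vote arithmetic corosync applies (majority of total votes), using this cluster's node count:

```python
# Quorum arithmetic for a 4-node cluster plus a qDevice arbiter
NODES = 4
QDEVICE_VOTES = 1                     # the qDevice at 172.16.15.16 adds one vote

total_votes = NODES + QDEVICE_VOTES   # 5 votes in total
quorum = total_votes // 2 + 1         # strict majority: 3 votes

# Without the qDevice a 2/2 split leaves each half with 2 votes (< 3),
# and the whole cluster loses quorum; with it, one half reaches 3.
print(f"total={total_votes}, quorum={quorum}")
```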

Layer 3: Network Architecture

Dual-VLAN design provides network segmentation and security.

🌐 Internet → 🔒 UniFi Gateway (routing, firewall) → splits into the two VLANs below

VLAN 1: Management

Subnet: 172.16.15.0/24

Purpose: Cluster communication, storage, management

  • node0: 172.16.15.2
  • node1: 172.16.15.10
  • node2: 172.16.15.11
  • node3: 172.16.15.12
  • TrueNAS: 172.16.15.21
  • qDevice: 172.16.15.16
VLAN 2: Application

Subnet: 10.10.152.0/24

Purpose: User services, containers, app traffic

  • WordPress: 10.10.152.10
  • Cloudflare Tunnel: 10.10.152.11
  • Grafana: 10.10.152.20
  • 20+ additional services

Layer 4: Storage Layer

TrueNAS provides shared ZFS storage to all cluster nodes via NFS.

💾 TrueNAS VM (172.16.15.21), running as a Proxmox VM on node1

NFS export: /mnt/WD-USB/WD-ZFS

All four nodes (node0–node3) mount the NFS share.

ZFS Features
  • Data integrity verification
  • Snapshot support
  • Compression enabled
  • Self-healing on read
Usage
  • LXC/VM templates
  • ISO images
  • Backups
  • Shared volumes

Layer 5: Application Layer

34 services running across the cluster with high availability protection.

HA Protected (6 services)
🌐 WordPress
🔐 Cloudflare Tunnel
🛡️ AdGuard DNS
🔒 Twingate
🔗 Tailscale
📡 Unbound DNS
Standard Services (28 services)

17 additional LXC containers + 11 VMs running across all nodes

  • Development environments
  • Testing infrastructure
  • Network services
  • Monitoring tools
  • Personal applications

End-to-End Request Flow

How a request reaches this website through the infrastructure layers.

1. Internet request: visitor accesses eddykawira.com
2. Cloudflare: DNS resolution, CDN caching, DDoS protection
3. Cloudflare Tunnel (LXC 301): secure ingress, no exposed ports
4. WordPress LXC (300): 10.10.152.10 on node2, Apache + PHP + MariaDB
5. Response: HTML rendered and returned to the visitor

Interactive High Availability Demo

Click any healthy node to simulate a failure and watch Proxmox HA Manager automatically migrate services to healthy nodes. The simulation uses real Proxmox priority logic (node2 → node1 → node3 → node0).


Architecture Overview

How High Availability Works

Proxmox HA Manager orchestrates automatic service migration across cluster nodes in four phases:

Step 1: Detect
Cluster monitoring daemons continuously check node health via heartbeat signals. When a node becomes unresponsive, HA Manager immediately detects the failure.
Detection window: ~30 seconds

Step 2: Decide
HA Manager evaluates cluster state and selects target nodes based on priority groups (100 = critical, 50 = standard), resource capacity, and current load distribution.
Decision time: ~10 seconds

Step 3: Migrate
Services are live-migrated to healthy nodes. VMs use memory-state transfer (10–30s), while LXCs perform filesystem synchronization (30–90s). Visible service downtime: ~2–5 seconds.
Migration time: 10–90 seconds

Step 4: Rebalance
When the failed node recovers, HA Manager can automatically rebalance services back to their preferred nodes based on priority settings and cluster optimization policies.
Rebalance: automatic on recovery
  • VM live migration: 10–30s (memory-state transfer)
  • LXC migration: 30–90s (filesystem sync)
  • Service impact: ~2–5s visible downtime

Priority groups: 100 = critical services, 50 = standard services
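Adding the phases up explains the "~2 minutes" figure quoted earlier: worst-case detection, plus decision, plus an LXC filesystem sync. A sketch of the arithmetic using the timings above:

```python
# End-to-end failover time from the four phases above (seconds)
DETECT = 30              # detection window
DECIDE = 10              # target-selection time
MIGRATE_VM = (10, 30)    # memory-state transfer, best/worst case
MIGRATE_LXC = (30, 90)   # filesystem sync, best/worst case

worst_case = DETECT + DECIDE + MIGRATE_LXC[1]   # 130 s, roughly 2 minutes
best_case = DETECT + DECIDE + MIGRATE_VM[0]     # 50 s
print(f"failover completes in {best_case}-{worst_case} s")
```

Note the distinction between this end-to-end migration time and the ~2–5s of visible downtime: the service keeps running on the failed node's last-known state until the final cutover.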