**Purpose:** Single source of truth for all network configurations (UDM Pro edge, Proxmox hosts, NPMplus, port forwarding)
**Recent:** Option B (RPC via Cloudflare Tunnel) active for 6 RPC hostnames. E2E: [05-network/E2E_CLOUDFLARE_DOMAINS_RUNBOOK.md](../05-network/E2E_CLOUDFLARE_DOMAINS_RUNBOOK.md); Option B: [05-network/OPTION_B_RPC_VIA_TUNNEL_RUNBOOK.md](../05-network/OPTION_B_RPC_VIA_TUNNEL_RUNBOOK.md).
**Proxmox cluster (verified 2026-04-02):** Five nodes, **quorate** (`pvecm`): **ml110** `192.168.11.10`, **r630-01** `.11`, **r630-02** `.12`, **r630-03** `.13`, **r630-04** `.14` (`r630-04.sankofa.nexus`). **r630-03**/**r630-04** remain **empty of guests**; workload stays on `.10`–`.12`.
- **Template vs live (read-only):** `bash scripts/verify/audit-proxmox-operational-template.sh` now SSHs to **all five** IPs by default (`config/ip-addresses.conf`); ML110 may skip if SSH is down or the host has been repurposed.
- **2026-04-08:** `config/proxmox-operational-template.json` + `ALL_VMIDS_ENDPOINTS.md` include Order **VMID 10000/10001/10020** (Postgres primary/replica + Redis on r630-01).
- **Package baseline (operator run):** all five nodes upgraded toward **pve-manager 9.1.7** and kernel **6.17.13-2-pve** (`apt full-upgrade`, **one node at a time**, rebooting where a new kernel was installed). **r630-03** and **r630-04** had **no-subscription** apt sources applied first (they previously hit **401** on `enterprise.proxmox.com` without a subscription).
- **Shared LVM thin storage:** `data`/`local-lvm` in `/etc/pve/storage.cfg` include **ml110, r630-01, r630-03, r630-04**. **r630-04** uses dual SSDs in VG `pve` (~467 GiB thin data) plus Ceph OSDs on four SSDs. **r630-03** uses **sda3+sdb** in VG `pve` (~1 TiB thin data); its **sdc–sdh** carry LVM thin pools **`thin1-r630-03`** … **`thin6-r630-03`** (~226 GiB each; provision script in repo).
- **Other workstations:** if SSH to **r630-04** fails with **host key changed**, run `bash scripts/verify/refresh-proxmox-host-key-r630-04.sh` (or `ssh-keygen -R 192.168.11.14`) after confirming the new key out-of-band.
| Host | FQDN | IP | Role / Storage | Status |
|------|------|----|----------------|--------|
| r630-03 | **r630-03.sankofa.nexus** | 192.168.11.13 | **Spare** (no LXCs/VMs); **pve** ~1TiB + **thin1-r630-03**…**thin6-r630-03** on 6×SSD | ✅ Active |
| r630-04 | **r630-04.sankofa.nexus** | 192.168.11.14 | **Spare** (no LXCs/VMs); **pve** thin ~467GiB + Ceph OSDs | ✅ Active |
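A minimal spot-check of quorum across the five nodes can be sketched as below. The IPs are the ones listed above; the repo's audit script reads them from `config/ip-addresses.conf` instead, so this is only an illustrative loop, gated behind `RUN_CLUSTER_CHECK=1` because it needs SSH reachability:

```shell
#!/usr/bin/env bash
# Sketch: quorum spot-check across the cluster (IPs hard-coded from this doc).
NODES=(192.168.11.10 192.168.11.11 192.168.11.12 192.168.11.13 192.168.11.14)
if [ "${RUN_CLUSTER_CHECK:-0}" = 1 ]; then
  for ip in "${NODES[@]}"; do
    # ConnectTimeout keeps a down host (e.g. a repurposed ml110) from hanging the loop
    ssh -o ConnectTimeout=5 "root@$ip" 'pvecm status | grep -E "Quorate|^Nodes:"' \
      || echo "WARN: $ip unreachable"
  done
fi
```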
**Naming:** Proxmox hypervisor **management DNS** uses **`short-hostname.sankofa.nexus`** (same label as the Host column + `.sankofa.nexus`; see `PROXMOX_FQDN_*` in `config/ip-addresses.conf`). Use the FQDN for SSH, TLS cert SANs, and docs; IPs remain the wire target on VLAN 11. **Verify / bootstrap:** `bash scripts/verify/check-proxmox-mgmt-fqdn.sh` (`--print-hosts` for `/etc/hosts`); `bash scripts/security/ensure-proxmox-ssh-access.sh` (`--fqdn` once DNS exists).
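For hosts without DNS yet, the `/etc/hosts` entries follow directly from the naming rule above. A sketch that emits them (hostnames/IPs are the ones in this doc; the repo script's exact output format may differ):

```shell
#!/usr/bin/env bash
# Sketch: emit /etc/hosts lines for the management FQDNs.
# printf recycles its format string, so each (IP, host, host) triple is one line.
emit_hosts() {
  printf '%s\t%s.sankofa.nexus %s\n' \
    192.168.11.10 ml110 ml110 \
    192.168.11.11 r630-01 r630-01 \
    192.168.11.12 r630-02 r630-02 \
    192.168.11.13 r630-03 r630-03 \
    192.168.11.14 r630-04 r630-04
}
emit_hosts
```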
**ML110 (192.168.11.10) repurposed:** The ML110 Gen9 is being converted to **OPNsense/pfSense** with 8–12 GbE ports, acting as a **WAN aggregator** between 6–10 Spectrum cable modems and the 2× UDM Pro gateways. After the repurpose, .10 is the firewall appliance (not Proxmox). See [ML110_OPNSENSE_PFSENSE_WAN_AGGREGATOR.md](ML110_OPNSENSE_PFSENSE_WAN_AGGREGATOR.md). **Before repurpose:** migrate all containers/VMs off ml110 to r630-01/r630-02 (or the other R630s). **r630-03/04** are available as migration targets (no guests; local **data**/**local-lvm** storage live as of 2026-04-02).
**ml110 LVM hygiene (2026-04-02):** Stale **thin** LVs on **ml110** named **`vm-2503-disk-0`**, **`vm-6201-disk-0`**, and **`vm-9000-*`** were **removed** after a cluster config check: the live **2503**/**6201** disks are on **r630-01**/**r630-02** (`/etc/pve/nodes/.../lxc/*.conf`), and **9000** had no **`vmlist`** entry. **ml110 `pve-guests.service`:** can stay **`activating (start)`** for days if **`startall`** wedges (historical **`cfs-lock`**/**`vzstart`** timeouts). That blocks **`apt`** during the **`pve-manager`** postinst (**`systemctl reload-or-restart pvescheduler`** waits on **`pve-guests`**). **Unblock:** `systemctl list-jobs`, then **`systemctl cancel <jobid>`** for **`pve-guests.service`** and **`pvescheduler.service`**, then **`dpkg --configure -a`** if needed. After a host reboot, confirm validators **1003**/**1004** are **running** (`pct start` if not).
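The unblock sequence can be sketched as one dry-run-by-default script. Job IDs 42/43 are placeholders (take the real ones from `systemctl list-jobs`); run with `DRY_RUN=0` on the affected host only:

```shell
#!/usr/bin/env bash
# Sketch of the pve-guests unblock sequence. DRY_RUN=1 (default) only prints.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }
run systemctl list-jobs
run systemctl cancel 42          # pve-guests.service job (placeholder ID)
run systemctl cancel 43          # pvescheduler.service job (placeholder ID)
run dpkg --configure -a          # only if apt was wedged mid-postinst
run pct start 1003               # validators: confirm running after reboot
run pct start 1004
```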
**CCIP Relay (r630-01):** Host service at `/opt/smom-dbis-138/services/relay`; relays Chain 138 → Mainnet; uses VMID 2201 (192.168.11.221) for RPC. See [07-ccip/CCIP_RELAY_DEPLOYMENT.md](../07-ccip/CCIP_RELAY_DEPLOYMENT.md).
**Four NPMplus instances (one per public IP):** 76.53.10.36, 76.53.10.37, 76.53.10.38, 76.53.10.40. See [04-configuration/NPMPLUS_FOUR_INSTANCES_MASTER.md](../04-configuration/NPMPLUS_FOUR_INSTANCES_MASTER.md).
**NPMplus #1 (76.53.10.36, LXC VMID 10233):** 192.168.11.166 (eth0) and 192.168.11.167 (eth1). Only **192.168.11.167** is used in UDM Pro port forwarding: 76.53.10.36:80 → 192.168.11.167:80, 76.53.10.36:443 → 192.168.11.167:443. Main d-bis.org, explorer, Option B RPC (6 hostnames), MIM4U, etc.
**NPMplus #3 (76.53.10.38, LXC VMID 10235):** 192.168.11.169 (single NIC). Port forwarding: 76.53.10.38:80/81/443 → 192.168.11.169:80/81/443. **Nathan's core-2 RPC, All Mainnet (Alltra), and HYBX** nodes and services route here. Designated public IP: 76.53.10.42. Public service names are intended to use the Cloudflare tunnel / proxied `CNAME` path first, with the direct edge kept as management or fallback. See [04-configuration/NPMPLUS_ALLTRA_HYBX_MASTER_PLAN.md](../04-configuration/NPMPLUS_ALLTRA_HYBX_MASTER_PLAN.md).
**Dev VM (VMID 5700):** 192.168.11.59. Shared Cursor dev environment, four users, Gitea (private GitOps). See [04-configuration/DEV_VM_GITOPS_PLAN.md](../04-configuration/DEV_VM_GITOPS_PLAN.md).
**IP reference format:** Use `IP (VMID)` or `VMID (IP)` consistently. Full registry: [02-architecture/VMID_ALLOCATION_FINAL.md](../02-architecture/VMID_ALLOCATION_FINAL.md).
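The `IP (VMID)` convention can be kept consistent with a tiny lookup helper. A sketch, seeded only with the entries mentioned in this doc (the full registry lives in VMID_ALLOCATION_FINAL.md):

```shell
#!/usr/bin/env bash
# Sketch: VMID -> "IP (VMID)" lookup; subset of the registry for illustration.
declare -A VMID_IP=(
  [10233]=192.168.11.167   # NPMplus #1 (eth1, forwarded)
  [10235]=192.168.11.169   # NPMplus #3
  [5700]=192.168.11.59     # Dev VM
  [2201]=192.168.11.221    # CCIP relay RPC target
)
ref() { echo "${VMID_IP[$1]:-unknown} ($1)"; }   # prints "IP (VMID)"
ref 5700   # -> 192.168.11.59 (5700)
```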
- **Port forwarding:** 76.53.10.36:80/443 → 192.168.11.167:80/443 (NPMplus). **Origin for public traffic** = 76.53.10.36. Verify 76.53.10.36:80 and :443 are **open from the internet** before using Fastly or direct; see [05-network/EDGE_PORT_VERIFICATION_RUNBOOK.md](../05-network/EDGE_PORT_VERIFICATION_RUNBOOK.md).
- **NPMplus Alltra/HYBX:** 76.53.10.38:80/81/443 → 192.168.11.169:80/81/443 (port forward); 76.53.10.42 designated public IP. Public DNS for Alltra/HYBX services should prefer proxied Cloudflare tunnel `CNAME`s rather than direct `A` records to the designated IP. See [04-configuration/NPMPLUS_ALLTRA_HYBX_MASTER_PLAN.md](../04-configuration/NPMPLUS_ALLTRA_HYBX_MASTER_PLAN.md).
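Before cutting DNS over to either edge, the forwards above can be spot-checked with a pure-bash TCP probe (no curl needed). A sketch, gated behind `RUN_EDGE_CHECK=1` and meant to be run from an internet-side vantage point, since NAT hairpin inside the LAN may skew results:

```shell
#!/usr/bin/env bash
# Sketch: probe the forwarded edge ports via bash's /dev/tcp redirection.
probe() {  # probe HOST PORT
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed/filtered"
  fi
}
if [ "${RUN_EDGE_CHECK:-0}" = 1 ]; then
  probe 76.53.10.36 80;  probe 76.53.10.36 443
  probe 76.53.10.38 80;  probe 76.53.10.38 81;  probe 76.53.10.38 443
fi
```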
**Primary path (web/api):** DNS (Cloudflare) → Fastly or A 76.53.10.36 → UDM Pro (76.53.10.36:80/443) → NPMplus (192.168.11.167) → internal services. **Option B (RPC):** The 6 RPC HTTP hostnames use Cloudflare Tunnel (CNAME to cfargotunnel.com); cloudflared (e.g. VMID 102) → NPMplus https://192.168.11.167:443. See [05-network/OPTION_B_RPC_VIA_TUNNEL_RUNBOOK.md](../05-network/OPTION_B_RPC_VIA_TUNNEL_RUNBOOK.md). Verify 76.53.10.36:80/443 for direct/Fastly: [05-network/EDGE_PORT_VERIFICATION_RUNBOOK.md](../05-network/EDGE_PORT_VERIFICATION_RUNBOOK.md).
```
Internet
↓
Cloudflare DNS (optional proxy) → Fastly or 76.53.10.36
↓
UDM Pro (76.53.10.36:80/443 port forward)
↓
NPMplus (VMID 10233: 192.168.11.167:443)
↓
Internal Services
```
### Internal RPC Access
```
Internal Network (192.168.11.0/24)
↓
Direct to RPC Nodes (192.168.11.211-243:8545/8546)
```
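A direct internal RPC query looks like the sketch below (`192.168.11.211` is one node from the .211–.243 range above; any of them works). Gated behind `RUN_RPC_CHECK=1` since it needs LAN reachability and curl:

```shell
#!/usr/bin/env bash
# Sketch: query chain height from an internal RPC node via standard JSON-RPC.
PAYLOAD='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
if [ "${RUN_RPC_CHECK:-0}" = 1 ]; then
  curl -s -X POST -H 'Content-Type: application/json' \
       -d "$PAYLOAD" http://192.168.11.211:8545
fi
```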
---
## Firewall Rules
### P2P Communication
- **Port:** 30303 (TCP/UDP)
- **Allowed:** Between Besu nodes
- **Status:** ✅ Enabled
### RPC Access
- **Ports:** 8545 (HTTP), 8546 (WebSocket)
- **Allowed IPs:** 0.0.0.0/0 (public access)
- **Status:** ✅ Enabled
### Metrics Scraping
- **Port:** 9545
- **Allowed:** Monitoring systems
- **Status:** ✅ Enabled
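The three rule sets above can be expressed as one host-firewall fragment. A sketch in Proxmox VE firewall rule syntax, assuming the nodes use the built-in `pve-firewall` (the UDM Pro side is configured in its UI, not a text file, so adapt there accordingly):

```
[RULES]
IN ACCEPT -p tcp -dport 30303   # Besu P2P
IN ACCEPT -p udp -dport 30303
IN ACCEPT -p tcp -dport 8545    # RPC HTTP
IN ACCEPT -p tcp -dport 8546    # RPC WebSocket
IN ACCEPT -p tcp -dport 9545    # metrics scraping
```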
---
## DNS Configuration
### Internal DNS
- **Primary:** 8.8.8.8
- **Secondary:** 8.8.4.4
- **Internal Domains:** sankofa.nexus (internal)
### Public DNS
- **Provider:** Cloudflare (retained for all public hostnames)
- **Domains:** d-bis.org, mim4u.org, defi-oracle.io, etc.
- **Public path:** Web/api: CNAME to Fastly (Option A) or A to 76.53.10.36 (Option C). **RPC (Option B):** The 6 RPC HTTP hostnames use CNAME to <tunnel-id>.cfargotunnel.com (Proxied); tunnel connector → NPMplus https://192.168.11.167:443. See [05-network/OPTION_B_RPC_VIA_TUNNEL_RUNBOOK.md](../05-network/OPTION_B_RPC_VIA_TUNNEL_RUNBOOK.md).
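The expected DNS shape per option can be captured in a small helper, handy when auditing records with `dig +short` (a sketch; the option labels and targets come from this doc):

```shell
#!/usr/bin/env bash
# Sketch: expected DNS record shape for each public path option.
expected_record() {   # expected_record OPTION
  case "$1" in
    fastly)  echo "CNAME -> Fastly service" ;;                          # Option A
    tunnel)  echo "CNAME -> <tunnel-id>.cfargotunnel.com (Proxied)" ;;  # Option B
    direct)  echo "A -> 76.53.10.36" ;;                                 # Option C
    *)       echo "unknown" ;;
  esac
}
# Compare against live data, e.g.: dig +short CNAME <rpc-hostname>
expected_record tunnel
```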
- **[IT_OPS_EDGE_DISCOVERY_IPS.md](../04-configuration/IT_OPS_EDGE_DISCOVERY_IPS.md)** - LAN discovery IPs (.23, .26 VMID 105 NPM, .2 UDM HA, workstations) for IT IPAM
- **[VLAN_FLAT_11_TO_SEGMENTED_RUNBOOK.md](../03-deployment/VLAN_FLAT_11_TO_SEGMENTED_RUNBOOK.md)** - ordered migration from flat VLAN 11 to segmented VLANs (operator checklist)