proxmox/docs/04-configuration/ALL_VMIDS_ENDPOINTS.md
2026-04-13 21:41:14 -07:00

# Complete VMID and Endpoints Reference
**Last Updated:** 2026-04-09
**Document Version:** 1.3
**Status:** Active Documentation — **Master (source of truth)** for VMID, IP, port, and domain mapping. See [MASTER_DOCUMENTATION_INDEX.md](../00-meta/MASTER_DOCUMENTATION_INDEX.md).
**Operational template (hosts, peering, deployment gates, JSON):** [../03-deployment/PROXMOX_VE_OPERATIONAL_DEPLOYMENT_TEMPLATE.md](../03-deployment/PROXMOX_VE_OPERATIONAL_DEPLOYMENT_TEMPLATE.md) · [`config/proxmox-operational-template.json`](../../config/proxmox-operational-template.json)
---
**Verification Status**: Order band **10000/10001/10020** reconciled to live `pct config` on r630-01; `scripts/verify/audit-proxmox-operational-template.sh` defaults to **five** Proxmox IPs (.10-.14 from `ip-addresses.conf`).
---
## Quick Summary
- **Total VMIDs**: 50+ (excluding deprecated Cloudflared)
- **Running**: 45+
- **Stopped**: 5
- **Infrastructure Services**: 10
- **Blockchain Nodes**: 37 (Validators: 5, Sentries: 11, RPC: 21)
- **Application Services**: 22
---
## Infrastructure Services
### Proxmox Infrastructure (r630-01)
**Host note (verified 2026-03-30):** CTs **100-105** run on **r630-01** (`192.168.11.11`), not r630-02. Older notes may say r630-02; use `pct list` on each node to confirm if you move guests.
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 100 | 192.168.11.32 | proxmox-mail-gateway | ✅ Running | SMTP: 25, 587, 465 | **Proxmox Mail Proxy** / email gateway (LAN SMTP relay); **587/465 enabled** on Postfix (`master.cf` append 2026-03-30) |
| 101 | 192.168.11.33 | proxmox-datacenter-manager | ✅ Running | Web: 8006 | Datacenter management |
| — | 192.168.11.30 | *(spare)* | — | — | **Was VMID 103 (Omada / TP-Link).** CT destroyed on r630-01 **2026-04-04**; IP available for reuse. |
| 104 | 192.168.11.31 | gitea | ✅ Running | Web: 80, 443 | Git repository |
| 105 | 192.168.11.26 | nginxproxymanager | ✅ Running | Web: 80, 81, 443 | Nginx Proxy Manager (legacy) |
| 130 | 192.168.11.27 | monitoring-1 | ✅ Running | Web: 80, 443 | Monitoring services — **Proxmox node not re-verified 2026-03-30** (confirm with `pct list` if needed). |
**Proxmox Mail Proxy (VMID 100):** On Proxmox VE this CT is the **mail proxy / gateway** for the lab (`proxmox-mail-gateway`, `192.168.11.32`). **Postfix listens on 25, 587 (STARTTLS, `smtpd_tls_security_level=may`), and 465 (SMTPS wrapper)** for `192.168.11.0/24` without SMTP AUTH; the server cert is **self-signed** (`CN=proxmox-mail-gateway`, `/etc/pmg/pmg-api.pem`). Apps should set **`SMTP_TLS_REJECT_UNAUTHORIZED=false`** on LAN (see `dbis_core/.env.example`) or install a trust anchor. Plain **25** remains available for trusted networks. Public SaaS (SES, SendGrid) is optional if you prefer not to relay internally.
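A quick way to confirm the relay's listeners is to read the SMTP greeting. This is a hedged sketch, not part of the runbook: the `smtp_banner_ok` helper and the `RUN_LIVE` guard are illustrative, and port 465 is excluded here because its SMTPS wrapper negotiates TLS before any banner (probe it with `openssl s_client -connect 192.168.11.32:465` instead).

```shell
# Hedged sketch: probe the mail proxy's plaintext-capable SMTP listeners.
# MAIL_HOST defaults to the LAN relay IP documented above.
MAIL_HOST="${MAIL_HOST:-192.168.11.32}"

# A healthy SMTP greeting starts with "220" (RFC 5321).
smtp_banner_ok() {
  case "$1" in
    220*) return 0 ;;
    *)    return 1 ;;
  esac
}

# Guarded live probe: only touches the LAN when RUN_LIVE=1 is set.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  for port in 25 587; do
    banner="$(timeout 5 bash -c "exec 3<>/dev/tcp/${MAIL_HOST}/${port}; IFS= read -r l <&3; printf '%s' \"\$l\"")" \
      || { echo "port ${port}: no connect"; continue; }
    if smtp_banner_ok "$banner"; then
      echo "port ${port}: OK"
    else
      echo "port ${port}: unexpected banner: ${banner}"
    fi
  done
fi
```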
### NPMplus (r630-01 / r630-02)
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 10233 | 192.168.11.167 | npmplus | ✅ Running | Web: 80, 81, 443 | NPMplus reverse proxy |
| 10234 | 192.168.11.168 | npmplus-secondary | ✅ Running | Web: 80, 81, 443 | NPMplus secondary (HA); restarted 2026-02-03 |
**Note**: NPMplus primary is on VLAN 11 (192.168.11.167). The secondary NPMplus instance runs on r630-02 for HA.
**Operational note (2026-03-26):** if `192.168.11.167:81` accepts TCP but hangs without returning HTTP, CT `10233` may be wedged even when networking looks healthy. Rebooting it from `r630-01` with `pct reboot 10233` restored the expected `301` on port `81` and unblocked the API updater.
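The wedge signature above (TCP accepts, HTTP never answers) can be automated. The sketch below is illustrative: `npm_action` is a hypothetical helper, and `curl -w '%{http_code}'` prints `000` when the request times out without an HTTP response, which is exactly the wedged case.

```shell
# Hedged sketch of the 2026-03-26 wedge check. Assumes curl is available and
# that `pct reboot` is run from r630-01, as in the note above.
NPM_IP="${NPM_IP:-192.168.11.167}"
NPM_CT="${NPM_CT:-10233}"

# Decide what to do given: did TCP connect (0/1)? what HTTP status came back?
npm_action() {
  tcp_ok="$1"; http_status="$2"
  if [ "$tcp_ok" != "0" ]; then
    echo "investigate-network"   # TCP refused/timed out: not the wedge case
  elif [ -z "$http_status" ] || [ "$http_status" = "000" ]; then
    echo "reboot"                # TCP accepts but HTTP hangs: reboot the CT
  else
    echo "healthy"               # e.g. the expected 301 on :81
  fi
}

if [ "${RUN_LIVE:-0}" = "1" ]; then
  timeout 5 bash -c "exec 3<>/dev/tcp/${NPM_IP}/81" 2>/dev/null && tcp=0 || tcp=1
  status="$(curl -s -o /dev/null -m 5 -w '%{http_code}' "http://${NPM_IP}:81/" || true)"
  case "$(npm_action "$tcp" "$status")" in
    reboot) pct reboot "$NPM_CT" ;;
    *)      echo "no action" ;;
  esac
fi
```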
---
## RPC Translator Supporting Services
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 106 | 192.168.11.110 | redis-rpc-translator | ✅ Running | Redis: 6379 | Distributed nonce management |
| 107 | 192.168.11.111 | web3signer-rpc-translator | ✅ Running | Web3Signer: 9000 | Transaction signing |
| 108 | 192.168.11.112 | vault-rpc-translator | ✅ Running | Vault: 8200 | Secrets management |
---
## Blockchain Nodes - Validators (ChainID 138)
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 1000 | 192.168.11.100 | besu-validator-1 | ✅ Running | P2P: 30303, Metrics: 9545 | Validator node 1 |
| 1001 | 192.168.11.101 | besu-validator-2 | ✅ Running | P2P: 30303, Metrics: 9545 | Validator node 2 |
| 1002 | 192.168.11.102 | besu-validator-3 | ✅ Running | P2P: 30303, Metrics: 9545 | Validator node 3 |
| 1003 | 192.168.11.103 | besu-validator-4 | ✅ Running | P2P: 30303, Metrics: 9545 | Validator node 4 |
| 1004 | 192.168.11.104 | besu-validator-5 | ✅ Running | P2P: 30303, Metrics: 9545 | Validator node 5 |
---
## Blockchain Nodes - Sentries (ChainID 138)
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 1500 | 192.168.11.150 | besu-sentry-1 | ✅ Running | P2P: 30303, Metrics: 9545 | Sentry node 1 |
| 1501 | 192.168.11.151 | besu-sentry-2 | ✅ Running | P2P: 30303, Metrics: 9545 | Sentry node 2 |
| 1502 | 192.168.11.152 | besu-sentry-3 | ✅ Running | P2P: 30303, Metrics: 9545 | Sentry node 3 |
| 1503 | 192.168.11.153 | besu-sentry-4 | ✅ Running | P2P: 30303, Metrics: 9545 | Sentry node 4 |
| 1504 | 192.168.11.154 | besu-sentry-ali | ✅ Running | P2P: 30303, Metrics: 9545 | Sentry node (Ali) |
| 1505 | 192.168.11.213 | besu-sentry-alltra-1 | ✅ Running | P2P: 30303, Metrics: 9545 | Sentry (Alltra 1) |
| 1506 | 192.168.11.214 | besu-sentry-alltra-2 | ✅ Running | P2P: 30303, Metrics: 9545 | Sentry (Alltra 2) |
| 1509 | 192.168.11.219 | besu-sentry-thirdweb-01 | ✅ Running | P2P: 30303, Metrics: 9545 | Thirdweb sentry 1 |
| 1510 | 192.168.11.220 | besu-sentry-thirdweb-02 | ✅ Running | P2P: 30303, Metrics: 9545 | Thirdweb sentry 2 |
**Placement note (verified 2026-04-09):** validators `1003/1004`, sentries `1503-1510`, and RPCs `2102`, `2301`, `2304`, `2400`, `2402`, `2403` are on **r630-03** (`192.168.11.13`). RPCs `2201`, `2303`, `2305-2308`, and `2401` are on **r630-02** (`192.168.11.12`). `1505-1506` moved from `.170/.171` to `.213/.214` on 2026-02-01.
---
## RPC Nodes - NEW VMID Structure (ChainID 138)
**Migration Status**: ✅ Complete (2026-01-18)
All RPC nodes have been migrated to a new VMID structure for better organization.
### Core RPC Nodes
**Core Besu tier = VMIDs 2101-2103** (HTTP/WS **8545/8546**). **2101** and **2103** are on **r630-01** (`192.168.11.11`); **2102** is on **r630-03** (`192.168.11.13`). **2201** (public) and **2301** (private/Fireblocks) are separate lanes below.
| VMID | IP Address | Hostname | Status | Block | Peers | Endpoints | Purpose |
|------|------------|----------|--------|-------|-------|-----------|---------|
| 2101 | 192.168.11.211 | besu-rpc-core-1 | ✅ Running | 3,529,172 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Core RPC **1**; default `RPC_URL_138` / `RPC_CORE_1`; **r630-01** |
| 2102 | 192.168.11.212 | besu-rpc-core-2 | ✅ Running | 3,529,172 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Core RPC **2**; `RPC_CORE_2`; **r630-03** |
| 2103 | 192.168.11.217 | besu-rpc-core-thirdweb | ✅ Running | 3,529,172 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Core RPC **Thirdweb admin**; `RPC_THIRDWEB_ADMIN_CORE`; NPM `rpc.tw-core.d-bis.org`; **r630-01** |
| **2201** | **192.168.11.221** | besu-rpc-public-1 | ✅ Running | 3,529,172 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Public RPC node **(FIXED PERMANENT)** |
| 2301 | 192.168.11.232 | besu-rpc-private-1 | ✅ Running | 3,529,172 | 25 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Private / Fireblocks RPC node |
**2026-04-02 health note:** `bash scripts/verify/check-chain138-rpc-health.sh` from this workspace received JSON-RPC responses from all sampled Chain 138 RPC nodes. Peer counts were **25-26** across the fleet, the observed head spread was **4 blocks**, and the public RPC capability probe passed for `https://rpc-http-pub.d-bis.org`.
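Independent of the repo's `check-chain138-rpc-health.sh`, the head-spread figure above can be reproduced with plain `curl` and `eth_blockNumber`. The helpers below (`hex_to_dec`, `head_spread`, the `RPC_IPS` sample) are illustrative, not the script's actual internals.

```shell
# Hedged sketch of a minimal head-spread check over a sample of the table above.
RPC_IPS="${RPC_IPS:-192.168.11.211 192.168.11.212 192.168.11.221}"

# eth_blockNumber returns hex (e.g. "0x35d9d4"); convert to decimal.
hex_to_dec() { printf '%d\n' "$(( $1 ))"; }

# Given decimal block heights on stdin (one per line), print max - min.
head_spread() {
  sort -n | awk 'NR==1{min=$1} {max=$1} END{print max-min}'
}

if [ "${RUN_LIVE:-0}" = "1" ]; then
  for ip in $RPC_IPS; do
    hex="$(curl -s -m 5 -X POST "http://${ip}:8545" \
      -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
      | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p')"
    [ -n "$hex" ] && hex_to_dec "$hex"
  done | head_spread
fi
```

For reference, `0x35d9d4` decodes to 3,529,172, matching the block column in the tables above.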
### Named RPC Nodes (Ali/Luis/Putu)
| VMID | IP Address | Hostname | Status | Block | Peers | Endpoints | Purpose |
|------|------------|----------|--------|-------|-------|-----------|---------|
| 2303 | 192.168.11.233 | besu-rpc-ali-0x8a | ✅ Running | 3,529,172 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Ali RPC (0x8a identity) |
| 2304 | 192.168.11.234 | besu-rpc-ali-0x1 | ✅ Running | 3,529,173 | 25 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Ali RPC (0x1 identity) |
| 2305 | 192.168.11.235 | besu-rpc-luis-0x8a | ✅ Running | 3,529,176 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Luis RPC (0x8a identity) |
| 2306 | 192.168.11.236 | besu-rpc-luis-0x1 | ✅ Running | 3,529,173 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Luis RPC (0x1 identity) |
| 2307 | 192.168.11.237 | besu-rpc-putu-0x8a | ✅ Running | 3,529,174 | 25 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Putu RPC (0x8a identity) |
| 2308 | 192.168.11.238 | besu-rpc-putu-0x1 | ✅ Running | 3,529,173 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Putu RPC (0x1 identity) |
### ThirdWeb RPC Nodes
| VMID | IP Address | Hostname | Status | Block | Peers | Endpoints | Purpose |
|------|------------|----------|--------|-------|-------|-----------|---------|
| 2400 | 192.168.11.240 | thirdweb-rpc-1 | ✅ Running | 3,529,172 | 25 | **Nginx: 443**, Besu: 8545/8546, P2P: 30303, Metrics: 9545, Translator: 9645/9646 | ThirdWeb permissioned/private RPC with translator (primary); no `ADMIN` |
| 2401 | 192.168.11.241 | besu-rpc-thirdweb-0x8a-1 | ✅ Running | 3,529,172 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | ThirdWeb specialized RPC 1; no `ADMIN`; HTTP `ETH/NET/WEB3/DEBUG/TRACE`, WS `ETH/NET/WEB3` |
| 2402 | 192.168.11.242 | besu-rpc-thirdweb-0x8a-2 | ✅ Running | 3,529,172 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | ThirdWeb specialized RPC 2; no `ADMIN`; HTTP `ETH/NET/WEB3/DEBUG/TRACE`, WS `ETH/NET/WEB3` |
| 2403 | 192.168.11.243 | besu-rpc-thirdweb-0x8a-3 | ✅ Running | 3,529,172 | 26 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | ThirdWeb specialized RPC 3; no `ADMIN`; HTTP `ETH/NET/WEB3/DEBUG/TRACE`, WS `ETH/NET/WEB3` |
### Older Edge RPC Nodes
| VMID | IP Address | Hostname | Status | Block | Peers | Endpoints | Purpose |
|------|------------|----------|--------|-------|-------|-----------|---------|
| 2420 | 192.168.11.172 | besu-rpc-alltra-1 | ✅ Running | 3,529,172 | 25 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Older edge Alltra RPC 1 |
| 2430 | 192.168.11.173 | besu-rpc-alltra-2 | ✅ Running | 3,529,172 | 30 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Older edge Alltra RPC 2 |
| 2440 | 192.168.11.174 | besu-rpc-alltra-3 | ✅ Running | 3,529,172 | 30 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Older edge Alltra RPC 3 |
| 2460 | 192.168.11.246 | besu-rpc-hybx-1 | ✅ Running | 3,529,172 | 30 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Older edge HYBX RPC 1 |
| 2470 | 192.168.11.247 | besu-rpc-hybx-2 | ✅ Running | 3,529,172 | 30 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Older edge HYBX RPC 2 |
| 2480 | 192.168.11.248 | besu-rpc-hybx-3 | ✅ Running | 3,529,172 | 30 | Besu: 8545/8546, P2P: 30303, Metrics: 9545 | Older edge HYBX RPC 3 |
### Static web (Chain 138 info site)
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 2410 | 192.168.11.218 | info-defi-oracle-web | ✅ Running | **HTTP: 80** (nginx) | **info.defi-oracle.io** Vite SPA (incl. `/governance`, `/ecosystem`, `/documentation`, `/disclosures`, `/agents`) + **`/token-aggregation/`** → `IP_BLOCKSCOUT`; `config/nginx/info-defi-oracle-io.site.conf`; `provision-info-defi-oracle-web-lxc.sh` + `sync-info-defi-oracle-to-vmid2400.sh`; **mev.defi-oracle.io** MEV Control GUI + `/api` → mev-admin-api; `config/nginx/mev-defi-oracle-io.site.conf.template` + `sync-mev-control-gui-defi-oracle.sh`; NPM upstream `IP_INFO_DEFI_ORACLE_WEB` (see [MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md](MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md)) |
**Note**: VMID 2400 is the primary ThirdWeb RPC with Nginx and RPC Translator — **do not** host the info SPA there. The 2026-04-02 probe showed all ThirdWeb-side nodes responding normally.
**Public Domain**: `rpc.public-0138.defi-oracle.io` → Routes to VMID 2400:443
---
## OLD RPC Node Lineage (Historical VMIDs)
**Status**: Historical lineage retained for audit trail and old-script cleanup. Treat these rows as retired `25xx` RPC allocations from the 2026-01-18 migration, not as current live placement.
The following VMIDs were the older `25xx` RPC identities before the `21xx/22xx/23xx/24xx` migration:
| VMID | Old IP Address | Old Hostname | Historical status | Replaced By |
|------|----------------|--------------|--------|-------------|
| 2500 | 192.168.11.250 | besu-rpc-1 | Retired in migration | VMID 2101 |
| 2501 | 192.168.11.251 | besu-rpc-2 | Retired in migration | VMID 2201 |
| 2502 | 192.168.11.252 | besu-rpc-3 | Retired in migration | VMID 2301 |
| 2503 | 192.168.11.253 | besu-rpc-ali-0x8a | Retired in migration | VMID 2303 |
| 2504 | 192.168.11.254 | besu-rpc-ali-0x1 | Retired in migration | VMID 2304 |
| 2505 | 192.168.11.201 | besu-rpc-luis-0x8a | Retired in migration | VMID 2305 |
| 2506 | 192.168.11.202 | besu-rpc-luis-0x1 | Retired in migration | VMID 2306 |
| 2507 | 192.168.11.203 | besu-rpc-putu-0x8a | Retired in migration | VMID 2307 |
| 2508 | 192.168.11.204 | besu-rpc-putu-0x1 | Retired in migration | VMID 2308 |
**Cleanup note (2026-04-12):** if you still see Besu-named `25xx` CT names elsewhere in the workspace, treat them as historical residue until each script/runbook is reconciled to the live VMID map above.
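The cleanup pass above can be mechanized. The helper name and the `scripts/` search path below are illustrative; the VMID range itself (`2500-2508`) comes from the lineage table.

```shell
# Hedged helper: flag the retired 25xx RPC VMIDs so scripts that still
# reference them can be reconciled to the live map above.
is_legacy_rpc_vmid() {
  case "$1" in
    250[0-8]) return 0 ;;   # 2500-2508 were retired in the 2026-01-18 migration
    *)        return 1 ;;
  esac
}

# Example sweep over a scripts tree (path is illustrative):
if [ "${RUN_LIVE:-0}" = "1" ]; then
  grep -rnoE '\b250[0-8]\b' scripts/ 2>/dev/null | sort -u
fi
```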
**Public Domains** (current live routing inventory):
- `rpc-http-prv.d-bis.org` → VMID `2101` (`192.168.11.211:8545`)
- `rpc-ws-prv.d-bis.org` → VMID `2101` (`192.168.11.211:8546`)
- `rpc-http-pub.d-bis.org` → VMID `2201` (`192.168.11.221:8545`)
- `rpc-ws-pub.d-bis.org` → VMID `2201` (`192.168.11.221:8546`)
- `rpc.public-0138.defi-oracle.io` → VMID `2400` (`192.168.11.240:443`)
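The routing inventory above can be smoke-tested from outside. This sketch is an assumption-laden example: the helper names are hypothetical, and the "acceptable codes" sets reflect that a JSON-RPC endpoint often answers a bare GET with 200, 404, or 405 depending on proxy configuration.

```shell
# Hedged external smoke test for the public-domain routing inventory.
code_acceptable() {  # code_acceptable <got> <space-separated acceptable codes>
  case " $2 " in
    *" $1 "*) return 0 ;;
    *)        return 1 ;;
  esac
}

check_domain() {     # curl the URL and compare its status to the allow-list
  url="$1"; want="$2"
  got="$(curl -s -o /dev/null -m 10 -w '%{http_code}' "$url" || true)"
  if code_acceptable "$got" "$want"; then
    echo "OK   $url ($got)"
  else
    echo "FAIL $url (got $got, want one of: $want)"
  fi
}

if [ "${RUN_LIVE:-0}" = "1" ]; then
  check_domain "https://rpc-http-pub.d-bis.org" "200 404 405"
  check_domain "https://rpc.public-0138.defi-oracle.io" "200 404 405"
fi
```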
---
## Application Services
### Blockchain Explorer
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 5000 | 192.168.11.140 | blockscout-1 | ✅ Running | Web: 80, 443; API: 4000 | Blockchain explorer |
**Public Domain**: `explorer.d-bis.org` → Routes to VMID 5000:80 (nginx serves the public UI and splits traffic across internal explorer services)
**Internal split note**: the live explorer on VMID `5000` is no longer just "web plus Blockscout API". Public traffic is currently split across nginx, Next frontend (`3000`), explorer config/API (`8081`), Blockscout (`4000`), token aggregation (`3001`), and static `/config/*` assets. See [`LIVE_DEPLOYMENT_MAP.md`](../../explorer-monorepo/deployment/LIVE_DEPLOYMENT_MAP.md) for the authoritative internal listener map.
---
### Firefly
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 6200 | 192.168.11.35 | firefly-1 | ✅ Running | Web: 80, 443, API: 5000 | Firefly DLT platform |
| 6201 | 192.168.11.57 | firefly-ali-1 | ⏸️ Stopped | None verified | FireFly standby / retired metadata only until intentionally rebuilt |
| 6202 | 192.168.11.175 | firefly-alltra-1 | ✅ Running | Web: 80 (placeholder) | ALLTRA Firefly stack (`IP_FIREFLY_ALLTRA_1`) |
| 6203 | 192.168.11.176 | firefly-alltra-2 | ✅ Running | Web: 80 (placeholder) | ALLTRA Firefly stack (`IP_FIREFLY_ALLTRA_2`) |
| 6204 | 192.168.11.249 | firefly-hybx-1 | ✅ Running | Web: 80 (placeholder) | HYBX Firefly stack (`IP_FIREFLY_HYBX_1`) |
| 6205 | 192.168.11.250 | firefly-hybx-2 | ✅ Running | Web: 80 (placeholder) | HYBX Firefly stack (`IP_FIREFLY_HYBX_2`); re-IP'd from .172 on 2026-04-09 (was a duplicate of Besu 2420/2500) |
**Note:** Firefly instances, including VMID 6200, run on r630-02. As of 2026-04-05, `firefly.service` is enabled and active again after normalizing the compose `version` field for the installed `docker-compose`. VMIDs 6202-6205 share the ALLTRA/HYBX placeholder web pattern; NPMplus maps `firefly-hybx-*.d-bis.org` to .249/.250 per `scripts/nginx-proxy-manager/update-npmplus-alltra-hybx-proxy-hosts.sh`.
---
### DBIS RTGS first-slice sidecars
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 5802 | 192.168.11.89 | rtgs-scsm-1 | ✅ Running | App: 8080, Redis: 6379 | DBIS RTGS `mifos-fineract-sidecar` / SCSM |
| 5803 | 192.168.11.90 | rtgs-funds-1 | ✅ Running | App: 8080, Redis: 6379 | DBIS RTGS `server-funds-sidecar` |
| 5804 | 192.168.11.92 | rtgs-xau-1 | ✅ Running | App: 8080, Redis: 6379 | DBIS RTGS `off-ledger-2-on-ledger-sidecar` |
**Operational note (2026-03-28/29):**
- These three sidecars are deployed internally on `r630-02` and return local actuator health.
- They can reach the live Mifos / Fineract surface on VMID `5800` at the HTTP layer.
- Canonical authenticated RTGS flow is still pending final Fineract tenant/auth freeze, so these should currently be treated as `runtime deployed, functionally partial`.
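The "local actuator health" claim above can be spot-checked per sidecar. This is a hedged sketch: the `:8080/actuator/health` path is an assumption (a Spring-style convention), and `actuator_up` is an illustrative helper.

```shell
# Hedged probe for the three RTGS sidecars listed above.
SIDECAR_IPS="${SIDECAR_IPS:-192.168.11.89 192.168.11.90 192.168.11.92}"

# Treat the service as UP when the health JSON reports "status":"UP".
actuator_up() {
  printf '%s' "$1" | grep -q '"status"[[:space:]]*:[[:space:]]*"UP"'
}

if [ "${RUN_LIVE:-0}" = "1" ]; then
  for ip in $SIDECAR_IPS; do
    body="$(curl -s -m 5 "http://${ip}:8080/actuator/health" || true)"
    if actuator_up "$body"; then echo "$ip: UP"; else echo "$ip: NOT UP"; fi
  done
fi
```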
---
### Hyperledger Fabric
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 6000 | 192.168.11.113 | fabric-1 | ✅ Running | Peer: 7051, Orderer: 7050, Ops: 9443-9445 | Hyperledger Fabric sample network |
**Note:** As of 2026-04-05, VMID 6000 is an active sample-network runtime, not a stopped placeholder. `fabric-sample-network.service` keeps the network queryable across CT restarts, and `scripts/maintenance/ensure-fabric-sample-network-via-ssh.sh` is the manual repair path.
---
### Hyperledger Indy
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 6400 | 192.168.11.64 | indy-1 | ✅ Running | Indy: 9701-9708 | Hyperledger Indy network |
---
### Hyperledger Aries / AnonCreds
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 6500 | 192.168.11.88 | aries-1 | ✅ Running | ACA-Py DIDComm: 8030, Admin API: 8031 | Hyperledger Aries / AnonCreds agent runtime |
---
### Hyperledger Caliper
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 6600 | 192.168.11.93 | caliper-1 | ✅ Running | Local CLI workspace, outbound RPC to 192.168.11.211:8545 / 8546 | Hyperledger Caliper benchmark harness |
---
### OP Stack operator landing zone
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 5751 | 192.168.11.69 | op-stack-deployer-1 | ✅ Provisioned | SSH: 22 | `op-deployer` / `op-validator` / artifact workspace on `r630-02` (`thin5`) |
| 5752 | 192.168.11.70 | op-stack-ops-1 | ✅ Provisioned | SSH: 22 | OP Stack runtime/service staging (`op-node`, `op-geth`, batcher, proposer, challenger) on `r630-02` (`thin5`) |
---
### DBIS Core Services
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 10100 | 192.168.11.105 | dbis-postgres-primary | ✅ Running | PostgreSQL: 5432 | Primary database |
| 10101 | 192.168.11.106 | dbis-postgres-replica-1 | ✅ Running | PostgreSQL: 5432 | Database replica |
| 10120 | 192.168.11.125 | dbis-redis | ✅ Running | Redis: 6379 | Cache layer |
| 10130 | 192.168.11.130 | dbis-frontend | ✅ Running | Web: 80, 443 | Admin + secure **web** shell (see canonical hostnames below) |
| 10150 | 192.168.11.155 | dbis-api-primary | ✅ Running | TCP **3000** | **Live DBIS API source runtime** via `dbis-api.service`. Root `/` returns service metadata JSON; `/health` and `/v1/health` return JSON health. **Host:** r630-01. |
| 10151 | 192.168.11.156 | dbis-api-secondary | ✅ Running | TCP **3000** | Secondary **live DBIS API** node via `dbis-api.service`; mirrors 10150. |
**Canonical public hostnames (operator intent)**
| Hostname | Role | Typical NPM upstream (today) |
|----------|------|------------------------------|
| **d-bis.org** | Public institutional web | TBD — Gov Portals **DBIS** Next app or static export when cut over |
| **admin.d-bis.org** | Admin console | VMID **10130** `:80` |
| **secure.d-bis.org** | Member secure portal | VMID **10130** `:80` (path-based routing; see below) |
| **core.d-bis.org** | **DBIS Core** service root (`dbis_core`) | VMID **10150** `:3000` today; currently serves DBIS API metadata while the dedicated client UI cutover remains pending |
**Legacy:** `dbis-admin.d-bis.org` → same upstream as **admin.d-bis.org** if still in DNS.
**Public Domains (inventory)**:
- `admin.d-bis.org` → VMID 10130:80 (canonical admin)
- `dbis-admin.d-bis.org` → VMID 10130:80 (legacy alias, if configured)
- `secure.d-bis.org` → VMID 10130:80
- `dbis-api.d-bis.org` → NPM target VMID 10150:3000 (**live DBIS API**, `/health` + `/v1/health`)
- `dbis-api-2.d-bis.org` → NPM target VMID 10151:3000 (**live DBIS API**)
- `data.d-bis.org` → NPM target VMID 10150:3000 (`/v1/health` on the primary DBIS API node)
**Current DBIS API runtime:** `10150` and `10151` now expose the compiled/source **dbis_core** integration API through `dbis-api.service`. Older `192.168.11.150` / `.151` references remain historical only.
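Both live DBIS API nodes expose `/health` and `/v1/health` per the inventory above; a minimal spot-check follows. The `looks_like_json` helper is illustrative and only verifies the body starts like a JSON document, not its schema.

```shell
# Hedged sketch: confirm both DBIS API nodes answer their health paths.
DBIS_API_IPS="${DBIS_API_IPS:-192.168.11.155 192.168.11.156}"

# Minimal "looks like JSON" check on a response body.
looks_like_json() {
  case "$1" in
    \{*|\[*) return 0 ;;
    *)       return 1 ;;
  esac
}

if [ "${RUN_LIVE:-0}" = "1" ]; then
  for ip in $DBIS_API_IPS; do
    for path in /health /v1/health; do
      body="$(curl -s -m 5 "http://${ip}:3000${path}" || true)"
      if looks_like_json "$body"; then
        echo "${ip}${path}: JSON OK"
      else
        echo "${ip}${path}: unexpected response"
      fi
    done
  done
fi
```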
---
### Miracles In Motion (MIM4U)
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 7810 | 192.168.11.37 | mim-web-1 | ✅ Running | Web: 80, 443 | MIM4U web frontend |
| 7811 | 192.168.11.36 | mim-api-1 | ✅ Running | Web: 80, 443, API: Various | MIM4U service (web + API) |
**Public Domains** (NPMplus config):
- `mim4u.org` → Routes to `http://192.168.11.37:80` (VMID 7810 mim-web-1)
- `www.mim4u.org` → Routes to `http://192.168.11.37:80` (VMID 7810; optional NPMplus redirect www → apex)
- `secure.mim4u.org` → Routes to `http://192.168.11.37:80` (VMID 7810)
- `training.mim4u.org` → Routes to `http://192.168.11.37:80` (VMID 7810)
**Note**: All MIM4U domains route to VMID 7810 (mim-web-1) at 192.168.11.37. nginx on 7810 proxies `/api/` to VMID 7811 (192.168.11.36:3001).
---
### Sankofa Phoenix Services
**Status**: ✅ **DEPLOYED AND OPERATIONAL** (2026-01-20)
**Verified Deployed Services:**
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 7800 | 192.168.11.50 | sankofa-api-1 | ✅ Running | **Apollo :4000** loopback-only (`HOST=127.0.0.1`); **Tier-1 hub :8080** (`/graphql`→127.0.0.1:4000); hub `/health` | Phoenix API (Cloud Platform Portal) |
| 7801 | 192.168.11.51 | sankofa-portal-1 | ✅ Running | Web: 3000 | Hybrid cloud **client portal** (`portal.sankofa.nexus` / `admin.sankofa.nexus` when NPM routes); not the long-term corporate apex app — see `IP_SANKOFA_PUBLIC_WEB` / `sync-sankofa-public-web-to-ct.sh` |
| 7802 | 192.168.11.52 | sankofa-keycloak-1 | ✅ Running | Keycloak: 8080, Admin: /admin | Identity and Access Management |
| 7803 | 192.168.11.53 | sankofa-postgres-1 | ✅ Running | PostgreSQL: 5432 | Database Service |
| 7804 | 192.168.11.54 | (Gov Portals dev) | ✅ Running | Web: 80 | Gov Portals — DBIS, ICCC, OMNL, XOM (*.xom-dev.phoenix.sankofa.nexus) |
| 7805 | 192.168.11.72 | sankofa-studio | ✅ Running | API: 8000 | Sankofa Studio (FusionAI Creator) — studio.sankofa.nexus (IP .72; .55 = VMID 10230 order-vault) |
| 7806 | 192.168.11.63 | sankofa-public-web | ✅ Running | Web: 3000 | Corporate / marketing Next.js (Sankofa **repo root**); provision: `scripts/deployment/provision-sankofa-public-web-lxc-7806.sh`; deploy: `scripts/deployment/sync-sankofa-public-web-to-ct.sh`; NPM apex via **`IP_SANKOFA_PUBLIC_WEB`** (`.env` or override) |
**Public Domains** (NPMplus routing):
- `sankofa.nexus` / `www.sankofa.nexus` → **`IP_SANKOFA_PUBLIC_WEB`:`SANKOFA_PUBLIC_WEB_PORT`** (canonical current target: **7806** `192.168.11.63:3000`). Fleet script: `scripts/nginx-proxy-manager/update-npmplus-proxy-hosts-api.sh`. **`www`** → **301** → apex `https://sankofa.nexus` (`$request_uri`). ✅
- `portal.sankofa.nexus` / `admin.sankofa.nexus` → **`IP_SANKOFA_CLIENT_SSO`:`SANKOFA_CLIENT_SSO_PORT`** (typical: 7801 `:3000`). NextAuth / OIDC public URL: **`https://portal.sankofa.nexus`**. ✅ when NPM proxy rows exist (fleet script creates/updates them).
- `dash.sankofa.nexus` → Set **`IP_SANKOFA_DASH`** (+ `SANKOFA_DASH_PORT`) in `config/ip-addresses.conf` to enable upstream in the fleet script; IP allowlist at NPM is operator policy. 🔶 until dash app + env are set.
- `phoenix.sankofa.nexus` → NPM upstream **`http://192.168.11.50:8080`** (Tier-1 API hub on VMID **7800**; WebSocket upgrades **on**). Apollo listens on **`127.0.0.1:4000`** only (not reachable from VLAN); hub proxies to loopback. ✅ (2026-04-13 fleet + loopback bind)
- `www.phoenix.sankofa.nexus` → Same **:8080** upstream; **301** to **`https://phoenix.sankofa.nexus`**. ✅
- `the-order.sankofa.nexus` / `www.the-order.sankofa.nexus` → OSJ management portal (secure auth). App source: **the_order** at `~/projects/the_order`. NPMplus default upstream: **order-haproxy** `http://192.168.11.39:80` (VMID **10210**), which proxies to Sankofa portal `http://192.168.11.51:3000` (7801). Fallback: set `THE_ORDER_UPSTREAM_IP` / `THE_ORDER_UPSTREAM_PORT` to `.51` / `3000` if HAProxy is offline. **`www.the-order.sankofa.nexus`** → **301** **`https://the-order.sankofa.nexus`** (same as `www.sankofa` / `www.phoenix`).
- `studio.sankofa.nexus` → Routes to `http://192.168.11.72:8000` (Sankofa Studio / VMID 7805; app-owned `/` → `/studio/` redirect)
**Public verification evidence (2026-04-05):** `bash scripts/verify/verify-end-to-end-routing.sh --profile=public` passed with `DNS passed: 60`, `HTTPS passed: 44`, and `Failed: 0`; Sankofa root, portal, admin, Studio, Phoenix, The Order, and `info.defi-oracle.io` all returned healthy responses. See [verification_report.md](verification-evidence/e2e-verification-20260405_160949/verification_report.md).
**Service Details:**
- **Host:** r630-01 (192.168.11.11)
- **Network:** VLAN 11 (192.168.11.0/24)
- **Gateway:** 192.168.11.1
- **All services verified and operational**
**Note:** Sankofa services are deployed on VLAN 11 (192.168.11.x) as intended. All services are running and accessible.
---
### The Order — microservices (r630-01)
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 10000 | 192.168.11.44 | order-postgres-primary | ✅ Running | PostgreSQL: 5432 | Order DB primary (`ORDER_POSTGRES_PRIMARY`) |
| 10001 | 192.168.11.45 | order-postgres-replica | ✅ Running | PostgreSQL: 5432 | Order DB replica (`ORDER_POSTGRES_REPLICA`) |
| 10020 | 192.168.11.38 | order-redis | ✅ Running | Redis: 6379 | Order cache (`ORDER_REDIS_IP`; VMID 10020) |
| 10030 | 192.168.11.40 | order-identity | ✅ Running | API | Identity |
| 10040 | 192.168.11.41 | order-intake | ✅ Running | API | Intake |
| 10050 | 192.168.11.49 | order-finance | ✅ Running | API | Finance |
| 10060 | 192.168.11.42 | order-dataroom | ✅ Running | Web: 80 | Dataroom |
| 10070 | **192.168.11.87** | order-legal | ✅ Running | API | Legal — **use `IP_ORDER_LEGAL` (.87); not .54** |
| 10080 | 192.168.11.43 | order-eresidency | ✅ Running | API | eResidency |
| 10090 | 192.168.11.36 | order-portal-public | ✅ Running | Web | Public portal |
| 10091 | 192.168.11.35 | order-portal-internal | ✅ Running | Web | Internal portal |
| 10092 | 192.168.11.94 | order-mcp-legal | ✅ Running | API | MCP legal — moved off `.37` on 2026-03-29 to avoid MIM4U ARP conflict |
| 10200 | 192.168.11.46 | order-prometheus | ✅ Running | 9090 | Metrics (`IP_ORDER_PROMETHEUS`; not Order Redis) |
| 10201 | 192.168.11.47 | order-grafana | ✅ Running | 3000 | Dashboards |
| 10202 | 192.168.11.48 | order-opensearch | ✅ Running | 9200 | Search |
| 10203 | 192.168.11.222 | omdnl-org-web | ✅ Running | HTTP: 80 (nginx) | **omdnl.org** / **www.omdnl.org** static landing; `provision-omdnl-org-web-lxc.sh`; `sync-omdnl-org-static-to-ct.sh`; NPM `upsert-omdnl-org-proxy-host.sh`; `IP_OMDNL_ORG_WEB` |
| 10210 | 192.168.11.39 | order-haproxy | ✅ Running | 80 (HAProxy → portal :3000) | Edge for **the-order.sankofa.nexus**; HAProxy config via `config/haproxy/order-haproxy-10210.cfg.template` + `scripts/deployment/provision-order-haproxy-10210.sh` |
**Gov portals vs Order:** VMID **7804** alone uses **192.168.11.54** (`IP_GOV_PORTALS_DEV`). Order-legal must not use .54.
**MIM4U vs order-mcp-legal:** VMID **7810** alone uses **192.168.11.37** (`IP_MIM_WEB`). VMID **10092** now uses **192.168.11.94** (`IP_ORDER_MCP_LEGAL`) after the 2026-03-29 ARP conflict fix.
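Conflicts like the 2026-03-29 one above show up as two distinct MACs answering ARP for one IP. The sketch below is illustrative: the `arping` output format assumed is the iputils one (`... [AA:BB:CC:DD:EE:FF] ...`), and `arping` may need `-I <iface>` on some hosts.

```shell
# Hedged duplicate-IP check for the reserved addresses called out above.
# More than one distinct MAC answering ARP is the conflict signature.
distinct_macs() {    # stdin: arping output; prints the count of unique MACs
  grep -oE '\[[0-9A-Fa-f:]{17}\]' | sort -u | wc -l | tr -d ' '
}

if [ "${RUN_LIVE:-0}" = "1" ]; then
  for ip in 192.168.11.54 192.168.11.37; do
    n="$(arping -c 3 "$ip" 2>/dev/null | distinct_macs)"
    if [ "$n" -gt 1 ]; then
      echo "CONFLICT: $ip answered from $n MACs"
    else
      echo "$ip: ok ($n MAC)"
    fi
  done
fi
```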
---
### Phoenix Vault Cluster (8640-8642)
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 8640 | 192.168.11.200 | vault-phoenix-1 | ✅ Running | Vault: 8200 | Phoenix Vault node 1 |
| 8641 | 192.168.11.215 | vault-phoenix-2 | ✅ Running | Vault: 8200 | Phoenix Vault node 2 |
| 8642 | 192.168.11.202 | vault-phoenix-3 | ✅ Running | Vault: 8200 | Phoenix Vault node 3 |
**Note:** 8641 moved from .201 to .215 (2026-02-01) to free CCIP Execute interim range. See [IP_CONFLICTS_CCIP_RANGE_RESOLVED_20260201.md](../../reports/status/IP_CONFLICTS_CCIP_RANGE_RESOLVED_20260201.md).
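Vault's unauthenticated `/v1/sys/health` endpoint encodes node state in its HTTP status code, which makes a cluster sweep cheap. The loop below is a hedged sketch over the three addresses in this table; the state mapping follows Vault's documented health-endpoint codes.

```shell
# Hedged status sweep for the Phoenix Vault cluster via /v1/sys/health.
vault_state() {   # map a sys/health HTTP code to a human-readable state
  case "$1" in
    200) echo "active (unsealed)" ;;
    429) echo "standby" ;;
    473) echo "performance standby" ;;
    501) echo "not initialized" ;;
    503) echo "sealed" ;;
    *)   echo "unknown ($1)" ;;
  esac
}

if [ "${RUN_LIVE:-0}" = "1" ]; then
  for ip in 192.168.11.200 192.168.11.215 192.168.11.202; do
    code="$(curl -s -o /dev/null -m 5 -w '%{http_code}' "http://${ip}:8200/v1/sys/health" || true)"
    echo "${ip}: $(vault_state "$code")"
  done
fi
```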
---
### Other Services
| VMID | IP Address | Hostname | Status | Endpoints | Purpose | Notes |
|------|------------|----------|--------|-----------|---------|-------|
| 5800 | 192.168.11.85 | (Mifos) | ✅ Running | Web: 80 | Mifos X + Fineract (OMNL) | LXC on r630-02; mifos.d-bis.org; see [MIFOS_R630_02_DEPLOYMENT.md](MIFOS_R630_02_DEPLOYMENT.md) |
| 5801 | 192.168.11.58 | dapp-smom | — | Web: 80 | DApp (frontend-dapp) for Chain 138 bridge | LXC; see [DAPP_LXC_DEPLOYMENT.md](../03-deployment/DAPP_LXC_DEPLOYMENT.md); NPMplus/tunnel dapp.d-bis.org |
| 10232 | 192.168.11.56 | CT10232 | ✅ Running | Various | Container service | ✅ **IP CONFLICT RESOLVED** |
| 10234 | 192.168.11.168 | npmplus-secondary | ⏸️ Stopped | Web: 80, 81, 443 | NPMplus secondary (HA) | On r630-02 |
---
### Oracle & Monitoring
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 3500 | 192.168.11.29 | oracle-publisher-1 | ✅ Running (verify on-chain) | Oracle: Various | **r630-02** `thin5`. Reprovisioned 2026-03-28 via `scripts/deployment/provision-oracle-publisher-lxc-3500.sh` (systemd `oracle-publisher`). If `updateAnswer` txs revert, set `PRIVATE_KEY` in `/opt/oracle-publisher/.env` to an EOA **authorized on the aggregator** (may differ from deployer). Metrics: `:8000/metrics`. |
| 3501 | 192.168.11.28 | ccip-monitor-1 | ✅ Running | Monitor: Various | CCIP monitoring; **migrated 2026-03-28** to **r630-02** `thin5` via a `pvesh` migrate call (`--target-storage thin5`). |
| 5200 | 192.168.11.80 | cacti-1 | ✅ Running | Web: 80, 443; API: 4000 | Primary Cacti / Besu connector node; **host r630-02** (migrated 2026-02-15) |
| 5201 | 192.168.11.177 | cacti-alltra-1 | ✅ Running | Web: 80; API: 4000 | Public Alltra Cacti landing page and local Hyperledger Cacti API; `cacti-alltra.d-bis.org` |
| 5202 | 192.168.11.251 | cacti-hybx-1 | ✅ Running | Web: 80; API: 4000 | Public HYBX Cacti landing page and local Hyperledger Cacti API; `cacti-hybx.d-bis.org` |
---
### Legacy Monitor / RPC-Adjacent Nodes
| VMID | IP Address | Hostname | Status | Endpoints | Purpose |
|------|------------|----------|--------|-----------|---------|
| 3000 | 192.168.11.60 | ml110 | ✅ Running | No dedicated public listener verified | Legacy monitor / RPC-adjacent Ubuntu guest (`systemd-networkd` re-enabled 2026-04-05) |
| 3001 | 192.168.11.61 | ml110 | ✅ Running | No dedicated public listener verified | Legacy monitor / RPC-adjacent Ubuntu guest (`systemd-networkd` re-enabled 2026-04-05) |
| 3002 | 192.168.11.62 | ml110 | ✅ Running | No dedicated public listener verified | Legacy monitor / RPC-adjacent Ubuntu guest (`systemd-networkd` re-enabled 2026-04-05) |
| 3003 | 192.168.11.66 | ml110 | ✅ Running | No dedicated public listener verified | Legacy monitor / RPC-adjacent Ubuntu guest; readdressed from `.63` on 2026-04-05 to remove the Sankofa public-web IP conflict, and `systemd-networkd` was re-enabled the same day |
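Since these guests should expose no dedicated public listener, a quick audit is reachability plus a check that 8545 stays closed; a sketch against the IPs in the table:

```shell
# Minimal sketch: confirm the legacy ml110 guests are up but expose no RPC port.
# IPs are from the Legacy Monitor table above.
for ip in 192.168.11.60 192.168.11.61 192.168.11.62 192.168.11.66; do
  ping -c1 -W1 "$ip" >/dev/null 2>&1 && up="up" || up="down"
  timeout 1 bash -c "echo > /dev/tcp/${ip}/8545" 2>/dev/null \
    && rpc="open" || rpc="closed"
  echo "${ip} host:${up} 8545:${rpc}"
done
```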
---
## Port Reference
### Standard Besu Ports
- **8545**: HTTP JSON-RPC
- **8546**: WebSocket JSON-RPC
- **30303**: P2P networking (TCP/UDP)
- **9545**: Prometheus metrics
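These ports can be smoke-tested on any node from a LAN host; a minimal sketch (the default IP, besu-rpc-core-1 at 192.168.11.211, is taken from the RPC tables in this document and can be overridden as `$1`):

```shell
#!/usr/bin/env bash
# Minimal sketch: smoke-test the standard Besu ports on one node.
NODE="${1:-192.168.11.211}"

# 8545 HTTP JSON-RPC: a healthy node returns a JSON "result" field
curl -s -m 2 -X POST "http://${NODE}:8545" \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  | grep -q '"result"' && echo "8545 rpc ✅" || echo "8545 rpc ❌"

# 9545 Prometheus metrics: Besu metric names are prefixed with "besu_"
curl -s -m 2 "http://${NODE}:9545/metrics" | grep -q '^besu_' \
  && echo "9545 metrics ✅" || echo "9545 metrics ❌"

# 30303 P2P: TCP reachability only (UDP discovery is not probed here)
timeout 2 bash -c "echo > /dev/tcp/${NODE}/30303" 2>/dev/null \
  && echo "30303 p2p ✅" || echo "30303 p2p ❌"
```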
### Standard Application Ports
- **80**: HTTP
- **443**: HTTPS
- **3000**: Node.js API
- **5432**: PostgreSQL
- **6379**: Redis
- **9000**: Web3Signer
- **8200**: Vault
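A quick reachability loop over these application ports; the default host (cacti-1 at 192.168.11.80) is chosen purely for illustration and most guests expose only a subset:

```shell
# Minimal sketch: TCP-probe the standard application ports on one host.
# HOST defaults to cacti-1 (192.168.11.80) for illustration only.
HOST="${1:-192.168.11.80}"
PORTS="80 443 3000 5432 6379 9000 8200"

probe() {  # probe <host> <port> -> "open" | "closed"
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null && echo open || echo closed
}

for p in $PORTS; do
  printf '%s:%s %s\n' "$HOST" "$p" "$(probe "$HOST" "$p")"
done
```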
---
## Network Architecture
### Public Internet Access Flow
```
Internet
  ↓
Cloudflare (DNS + DDoS Protection)
  ↓
NPMplus (VMID 10233: 192.168.0.166:443)
  ↓
VM Nginx (443) → Backend Services
```
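The Cloudflare hop in this flow can be verified from outside the LAN; a sketch that checks whether a public hostname is actually proxied (the domain default is taken from the mapping later in this document):

```shell
# Minimal sketch: confirm a public hostname is fronted by Cloudflare.
# Cloudflare-proxied hosts normally return a "server: cloudflare" header.
DOMAIN="${1:-explorer.d-bis.org}"

hdr="$(curl -sI -m 5 "https://${DOMAIN}" | tr -d '\r')"
if printf '%s\n' "$hdr" | grep -iq '^server: cloudflare'; then
  echo "${DOMAIN}: behind Cloudflare ✅"
else
  echo "${DOMAIN}: Cloudflare header not seen ❌ (check DNS proxy status)"
fi
```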
### Internal RPC Access
```
Internal Network (192.168.11.0/24)
Direct to RPC Nodes:
- VMID 2101: 192.168.11.211:8545 (HTTP) / 8546 (WS) - Core RPC 1 (r630-01)
- VMID 2102: 192.168.11.212:8545 (HTTP) / 8546 (WS) - Core RPC 2 (r630-03)
- VMID 2103: 192.168.11.217:8545 (HTTP) / 8546 (WS) - Core RPC Thirdweb admin (r630-01)
- VMID 2201: 192.168.11.221:8545 (HTTP) / 8546 (WS) - Public RPC
- VMID 2303: 192.168.11.233:8545 (HTTP) / 8546 (WS) - Ali 0x8a
- VMID 2304: 192.168.11.234:8545 (HTTP) / 8546 (WS) - Ali 0x1
- VMID 2305: 192.168.11.235:8545 (HTTP) / 8546 (WS) - Luis 0x8a
- VMID 2306: 192.168.11.236:8545 (HTTP) / 8546 (WS) - Luis 0x1
- VMID 2307: 192.168.11.237:8545 (HTTP) / 8546 (WS) - Putu 0x8a
- VMID 2308: 192.168.11.238:8545 (HTTP) / 8546 (WS) - Putu 0x1
- VMID 2400: 192.168.11.240:8545 (HTTP) / 8546 (WS) - ThirdWeb Primary
- VMID 2401: 192.168.11.241:8545 (HTTP) / 8546 (WS) - ThirdWeb 1
- VMID 2402: 192.168.11.242:8545 (HTTP) / 8546 (WS) - ThirdWeb 2
- VMID 2403: 192.168.11.243:8545 (HTTP) / 8546 (WS) - ThirdWeb 3
```
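To confirm the internal fleet agrees on the chain, the IPs above can be polled for `eth_chainId`; a sketch over the core nodes (Chain 138 guests should report `0x8a`, while the `0x1`-named guests report a different id):

```shell
# Minimal sketch: poll eth_chainId across internal RPC nodes from the list above.
IPS="192.168.11.211 192.168.11.212 192.168.11.217 192.168.11.221"

chain_id() {  # chain_id <ip> -> hex chain id, or "unreachable"
  curl -s -m 2 -X POST "http://$1:8545" \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
    | sed -n 's/.*"result":"\([^"]*\)".*/\1/p' | grep . || echo unreachable
}

for ip in $IPS; do
  printf '%s %s\n' "$ip" "$(chain_id "$ip")"
done
```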
---
## Known Issues & Notes
### ✅ IP Address Conflicts - **RESOLVED**
**Status:** ✅ **RESOLVED** - All conflicts fixed (2026-01-20)
1. **192.168.11.50**: ✅ **RESOLVED**
- VMID 7800 (sankofa-api-1): 192.168.11.50 ✅ **UNIQUE**
- VMID 10070 (order-legal): **192.168.11.87** (`IP_ORDER_LEGAL`) — moved off .54 2026-03-25 (ARP conflict with VMID 7804 gov-portals) ✅
2. **192.168.11.51**: ✅ **RESOLVED**
- VMID 7801 (sankofa-portal-1): 192.168.11.51 ✅ **UNIQUE**
- VMID 10230 (order-vault): Reassigned to 192.168.11.55 ✅
3. **192.168.11.52**: ✅ **RESOLVED**
- VMID 7802 (sankofa-keycloak-1): 192.168.11.52 ✅ **UNIQUE**
- VMID 10232 (CT10232): Reassigned to 192.168.11.56 ✅
4. **192.168.11.55**: ✅ **IN USE** — VMID 10230 (order-vault) only. Sankofa Studio (VMID 7805) uses **192.168.11.72** to avoid conflict.
**Resolution:** All IP conflicts resolved using `scripts/resolve-ip-conflicts.sh`
**Verification:** ✅ All IPs verified unique, all services operational
**IP conflicts (canonical):** [reports/status/IP_CONFLICTS_RESOLUTION_COMPLETE.md](../../reports/status/IP_CONFLICTS_RESOLUTION_COMPLETE.md); CCIP range move: [reports/status/IP_CONFLICTS_CCIP_RANGE_RESOLVED_20260201.md](../../reports/status/IP_CONFLICTS_CCIP_RANGE_RESOLVED_20260201.md). **Script:** `scripts/resolve-ip-conflicts.sh` (uses `config/ip-addresses.conf`).
---
### Port Conflicts
1. **VMID 2400**: Port conflict resolved ✅
- **Previous**: Besu metrics (9545) conflicted with RPC Translator HTTP (9545)
- **Resolution**: Translator moved to 9645/9646 (completed)
- **Current**: Nginx routes to translator on 9645/9646
### NPMplus Routing Issues
1. **`rpc.public-0138.defi-oracle.io`**: Canonical target is **VMID 2400**
- **Canonical**: `https://192.168.11.240:443` (VMID 2400)
- **Historical note:** older inventory text referenced decommissioned VMID **2502**; remove that assumption when auditing current routing.
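Whether the public domain actually lands on VMID 2400 can be checked by comparing the domain's response with the direct upstream (the LAN hop typically has a self-signed certificate, hence `-k`; this is a sketch, not the canonical audit script):

```shell
# Minimal sketch: compare rpc.public-0138.defi-oracle.io with its canonical
# upstream (VMID 2400). Matching eth_chainId responses suggest routing is
# pointed at 192.168.11.240 as intended.
q='{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
pub="$(curl -s -m 5 -X POST https://rpc.public-0138.defi-oracle.io \
  -H 'Content-Type: application/json' -d "$q")"
direct="$(curl -sk -m 5 -X POST https://192.168.11.240:443 \
  -H 'Content-Type: application/json' -d "$q")"
echo "public: ${pub:-<no response>}"
echo "direct: ${direct:-<no response>}"
[ -n "$pub" ] && [ "$pub" = "$direct" ] && echo "match ✅" || echo "mismatch/unreachable ❌"
```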
---
## Quick Access Commands
### Test RPC Endpoints
```bash
# Public RPC (HTTP)
curl -X POST https://rpc-http-pub.d-bis.org \
-H 'Content-Type: application/json' \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
# Private RPC (HTTP) - requires JWT
curl -X POST https://rpc-http-prv.d-bis.org \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <JWT_TOKEN>' \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
# ThirdWeb RPC
curl -X POST https://rpc.public-0138.defi-oracle.io \
-H 'Content-Type: application/json' \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
### Check Container Status
```bash
# From Proxmox host
pct status <VMID>
qm status <VMID>
# Check specific service
pct exec <VMID> -- systemctl status <service-name>
```
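For a fleet-wide view, the `pct list` output can be summarized per state; a sketch to run on the PVE host itself (column 2 of `pct list` is the status):

```shell
# Minimal sketch: count containers per state on one Proxmox node.
pct list | awk 'NR>1 {n[$2]++} END {for (s in n) printf "%s: %d\n", s, n[s]}'
```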
---
## Related Documentation
- **VMID IP List**: `reports/VMID_IP_ADDRESS_LIST.md`
- **NPMplus Setup**: `docs/04-configuration/NPMPLUS_COMPLETE_SETUP_SUMMARY.md`
- **Nginx Configurations**: `docs/04-configuration/NGINX_CONFIGURATIONS_VMIDS_2400-2508.md`
- **RPC Translator**: `rpc-translator-138/VMID_ALLOCATION.md`
---
---
## NPMplus Endpoint Configuration Reference
This section lists all endpoints that should be configured in NPMplus, extracted from NPM (VMID 105) configuration files.
### Complete NPMplus Domain Mapping
| Domain | Target | Scheme | Port | WebSocket | Notes |
|--------|--------|--------|------|-----------|-------|
| **RPC Services** |
| `rpc.public-0138.defi-oracle.io` | `192.168.11.240` | `https` | `443` | ✅ Yes | ThirdWeb RPC (VMID 2400) |
| `rpc-http-pub.d-bis.org` | `192.168.11.221` | `https` | `443` | ✅ Yes | Public RPC (VMID 2201) |
| `rpc-ws-pub.d-bis.org` | `192.168.11.221` | `https` | `443` | ✅ Yes | Public WebSocket RPC (VMID 2201) |
| `rpc-http-prv.d-bis.org` | `192.168.11.211` | `https` | `443` | ✅ Yes | Private RPC with JWT (VMID 2101) |
| `rpc-ws-prv.d-bis.org` | `192.168.11.211` | `https` | `443` | ✅ Yes | Private WebSocket RPC with JWT (VMID 2101) |
| **Explorer** |
| `explorer.d-bis.org` | `192.168.11.140` | `http` | `80` | ❌ No | Explorer ingress on VMID 5000 nginx; internal services are split behind it |
| **DBIS Services** |
| `d-bis.org` | `192.168.11.54` | `http` | `3001` | ❌ No | Public apex — Gov Portals DBIS on **7804** (override `IP_DBIS_PUBLIC_APEX` / `DBIS_PUBLIC_APEX_PORT`) |
| `www.d-bis.org` | `192.168.11.54` | `http` | `3001` | ❌ No | Same upstream as apex; NPM **301** → `https://d-bis.org` when `advanced_config` set by fleet script |
| `admin.d-bis.org` | `192.168.11.130` | `http` | `80` | ❌ No | DBIS **admin** console (VMID 10130); canonical |
| `dbis-admin.d-bis.org` | `192.168.11.130` | `http` | `80` | ❌ No | Legacy alias — same upstream as **admin.d-bis.org** |
| `core.d-bis.org` | `192.168.11.155` | `http` | `3000` | ❌ No | Current DBIS Core service root on **10150**; returns service metadata JSON until the dedicated client UI is cut over |
| `dbis-api.d-bis.org` | `192.168.11.155` | `http` | `3000` | ❌ No | VMID 10150 — live DBIS API (`/health`, `/v1/health`) |
| `dbis-api-2.d-bis.org` | `192.168.11.156` | `http` | `3000` | ❌ No | VMID 10151 — live DBIS API (`/health`, `/v1/health`) |
| `secure.d-bis.org` | `192.168.11.130` | `http` | `80` | ❌ No | DBIS Secure Portal (VMID 10130) - Path-based routing |
| **MIM4U Services** |
| `mim4u.org` | `192.168.11.37` | `http` | `80` | ❌ No | MIM4U Main Site (VMID 7810 mim-web-1) |
| `www.mim4u.org` | `192.168.11.37` | `http` | `80` | ❌ No | MIM4U (VMID 7810; optional redirect www → apex) |
| `secure.mim4u.org` | `192.168.11.37` | `http` | `80` | ❌ No | MIM4U Secure Portal (VMID 7810) |
| `training.mim4u.org` | `192.168.11.37` | `http` | `80` | ❌ No | MIM4U Training Portal (VMID 7810) |
| **Sankofa Phoenix Services** |
| *(optional hub)* | **`IP_SANKOFA_WEB_HUB`** / **`IP_SANKOFA_PHOENIX_API_HUB`** (default in `config/ip-addresses.conf` = portal / Phoenix API until `.env` overrides) | `http` | per hub nginx | ❌ No | Consolidated non-chain web + path API hub — see `docs/02-architecture/SANKOFA_PHOENIX_CONSOLIDATED_FRONTEND_AND_API.md` |
| `sankofa.nexus` | **`IP_SANKOFA_PUBLIC_WEB`** (`192.168.11.63` on VMID 7806 in the current deployment) | `http` | **`SANKOFA_PUBLIC_WEB_PORT`** (`3000`) | ❌ No | Corporate apex; fleet script `update-npmplus-proxy-hosts-api.sh` |
| `www.sankofa.nexus` | same as apex | `http` | same | ❌ No | **301** → `https://sankofa.nexus` |
| `portal.sankofa.nexus` | **`IP_SANKOFA_CLIENT_SSO`** (typ. `.51` / 7801) | `http` | **`SANKOFA_CLIENT_SSO_PORT`** (`3000`) | ❌ No | Client SSO portal; `NEXTAUTH_URL=https://portal.sankofa.nexus` |
| `admin.sankofa.nexus` | same as portal | `http` | same | ❌ No | Client access admin (same upstream until split) |
| `dash.sankofa.nexus` | **`IP_SANKOFA_DASH`** (set in `ip-addresses.conf`) | `http` | **`SANKOFA_DASH_PORT`** | ❌ No | Operator dash — row omitted from fleet script until `IP_SANKOFA_DASH` set |
| `phoenix.sankofa.nexus` | `192.168.11.50` | `http` | **`8080`** (Tier-1 **API hub** nginx; `/graphql` → **127.0.0.1:4000**, `/api` → dbis_core); **WebSocket: yes** | ❌ No | NPM fleet: `SANKOFA_NPM_PHOENIX_PORT=8080`; Apollo **not** on `0.0.0.0:4000` (loopback bind); break-glass: `pct exec 7800 -- curl http://127.0.0.1:4000/health` |
| `www.phoenix.sankofa.nexus` | `192.168.11.50` | `http` | **`8080`** | ❌ No | Same; **301** → apex HTTPS |
| `the-order.sankofa.nexus`, `www.the-order.sankofa.nexus` | `192.168.11.39` (10210 HAProxy; default) or `192.168.11.51` (direct portal if env override) | `http` | `80` or `3000` | ❌ No | NPM → **.39:80** by default; HAProxy → **.51:3000** |
| `studio.sankofa.nexus` | `192.168.11.72` | `http` | `8000` | ❌ No | Sankofa Studio (FusionAI Creator) — VMID 7805 |
### Path-Based Routing Notes
Some domains use path-based routing in NPM configs:
**`secure.d-bis.org`**:
- `/admin``http://192.168.11.130:80` (DBIS Frontend)
- `/api``http://192.168.11.155:3000` (live DBIS API on 10150)
- `/graph``http://192.168.11.155:3000` (same)
- `/``http://192.168.11.130:80` (DBIS Frontend)
**`sankofa.nexus`** (intent): corporate marketing at **`IP_SANKOFA_PUBLIC_WEB`**; **`portal.sankofa.nexus`** serves the authenticated portal at **`IP_SANKOFA_CLIENT_SSO`**. Legacy path-based splits (if any) should be reconciled with [EXPECTED_WEB_CONTENT.md](../02-architecture/EXPECTED_WEB_CONTENT.md).
**Note**: NPMplus may need custom location blocks or separate proxy hosts for path-based routing.
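The path splits can be verified directly against their LAN upstreams, bypassing NPMplus entirely; a sketch (the `/health` probe path for the API rows is an assumption based on the dbis-api rows above):

```shell
# Minimal sketch: probe the secure.d-bis.org upstreams directly on the LAN.
# The /health probe path is an assumption; any code other than 000 means
# the upstream socket answered.
check() {  # check <url> -> HTTP status code ("000" on connect failure)
  curl -s -m 2 -o /dev/null -w '%{http_code}' "$1"
}

printf '/admin -> %s\n' "$(check http://192.168.11.130:80/admin)"
printf '/api   -> %s\n' "$(check http://192.168.11.155:3000/health)"
printf '/graph -> %s\n' "$(check http://192.168.11.155:3000/health)"
printf '/      -> %s\n' "$(check http://192.168.11.130:80/)"
```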
### NPMplus routing (authoritative targets)
**Use this document as the source of truth** for domain → VMID:port. Only **explorer.d-bis.org** should point to Blockscout (VMID 5000, 192.168.11.140). All other domains must point to their correct VMID and port:
| Domain | Correct target (VMID, IP:port) | Do NOT point to |
|--------|--------------------------------|-----------------|
| `explorer.d-bis.org` | 5000, 192.168.11.140:80/443 ingress; internal listeners documented in `deployment/LIVE_DEPLOYMENT_MAP.md` | — |
| `sankofa.nexus`, `www.sankofa.nexus` | **Public web:** **7806**, 192.168.11.63:3000 (`IP_SANKOFA_PUBLIC_WEB`) | 192.168.11.140 (Blockscout) |
| `portal.sankofa.nexus`, `admin.sankofa.nexus` | **7801**, 192.168.11.51:3000 (`IP_SANKOFA_CLIENT_SSO`) | 192.168.11.140 (Blockscout) |
| `dash.sankofa.nexus` | Set **`IP_SANKOFA_DASH`** when operator dash exists | 192.168.11.140 (Blockscout) |
| `phoenix.sankofa.nexus`, `www.phoenix.sankofa.nexus` | **7800**, `192.168.11.50:8080` (NPM → hub); Apollo **:4000** on same CT behind hub | 192.168.11.140 (Blockscout) |
| `the-order.sankofa.nexus`, `www.the-order.sankofa.nexus` | 10210, 192.168.11.39:80 | 192.168.11.140 (Blockscout) |
| `studio.sankofa.nexus` | 7805, 192.168.11.72:8000 | — |
If NPMplus proxy hosts for sankofa.nexus or phoenix.sankofa.nexus currently point to 192.168.11.140, update them to the correct IP:port above. See [RPC_ENDPOINTS_MASTER.md](RPC_ENDPOINTS_MASTER.md) and table "Sankofa Phoenix Services" in this document.
**Note**: All `www.*` subdomains redirect to their parent domains to reduce the number of proxy host configurations needed.
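The authoritative targets above can be bulk-checked against their LAN upstreams, independent of NPMplus; a sketch with the domain/target pairs transcribed from the table (an HTTP code other than `000` means the upstream answered):

```shell
# Minimal sketch: hit each authoritative upstream directly on the LAN.
# Pairs are transcribed from the routing table above.
while read -r domain target; do
  code="$(curl -s -m 2 -o /dev/null -w '%{http_code}' "http://${target}/")"
  printf '%-28s %-22s HTTP %s\n' "$domain" "$target" "$code"
done <<'EOF'
explorer.d-bis.org 192.168.11.140:80
sankofa.nexus 192.168.11.63:3000
portal.sankofa.nexus 192.168.11.51:3000
phoenix.sankofa.nexus 192.168.11.50:8080
the-order.sankofa.nexus 192.168.11.39:80
studio.sankofa.nexus 192.168.11.72:8000
EOF
```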
---
**Last Updated**: 2026-04-06
**Maintained By**: Infrastructure Team
---
## RPC Node Quick Reference
### Active RPC Endpoints (LAN Besu HTTP RPC fleet)
| IP Address | VMID | Name | Status |
|------------|------|------|--------|
| 192.168.11.211 | 2101 | besu-rpc-core-1 | ✅ Running |
| 192.168.11.212 | 2102 | besu-rpc-core-2 | ✅ Running |
| 192.168.11.217 | 2103 | besu-rpc-core-thirdweb | ✅ Running |
| 192.168.11.221 | 2201 | besu-rpc-public-1 | ✅ Running |
| 192.168.11.232 | 2301 | besu-rpc-private-1 | ✅ Running |
| 192.168.11.233 | 2303 | besu-rpc-ali-0x8a | ✅ Running |
| 192.168.11.234 | 2304 | besu-rpc-ali-0x1 | ✅ Running |
| 192.168.11.235 | 2305 | besu-rpc-luis-0x8a | ✅ Running |
| 192.168.11.236 | 2306 | besu-rpc-luis-0x1 | ✅ Running |
| 192.168.11.237 | 2307 | besu-rpc-putu-0x8a | ✅ Running |
| 192.168.11.238 | 2308 | besu-rpc-putu-0x1 | ✅ Running |
| 192.168.11.240 | 2400 | thirdweb-rpc-1 | ✅ Running |
| 192.168.11.241 | 2401 | besu-rpc-thirdweb-0x8a-1 | ✅ Running |
| 192.168.11.242 | 2402 | besu-rpc-thirdweb-0x8a-2 | ✅ Running |
| 192.168.11.243 | 2403 | besu-rpc-thirdweb-0x8a-3 | ✅ Running |
### Test All RPC Nodes
```bash
# Quick test all RPC nodes (full fleet: scripts/verify/check-chain138-rpc-health.sh)
for ip in 192.168.11.211 192.168.11.212 192.168.11.217 192.168.11.221 192.168.11.232 192.168.11.233 192.168.11.234 192.168.11.235 192.168.11.236 192.168.11.237 192.168.11.238 192.168.11.240 192.168.11.241 192.168.11.242 192.168.11.243; do
curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
    http://$ip:8545 | grep -q "result" && echo "✅ $ip" || echo "❌ $ip"
done
```