Finalize DBIS infra verification and runtime baselines
All checks were successful
Deploy to Phoenix / deploy (push) Successful in 6s

This commit is contained in:
defiQUG
2026-03-28 19:18:32 -07:00
parent 266a8ae30f
commit 6f53323eae
22 changed files with 1924 additions and 157 deletions


@@ -1,6 +1,6 @@
# Operator Ready Checklist — Copy-Paste Commands
**Last Updated:** 2026-03-28
**Purpose:** Single page with exact commands to complete every pending todo. Run from **repo root** on a host with **LAN** access (and `smom-dbis-138/.env` with `PRIVATE_KEY`, `NPM_PASSWORD` where noted).
**Do you have all necessary creds?** See [OPERATOR_CREDENTIALS_CHECKLIST.md](OPERATOR_CREDENTIALS_CHECKLIST.md) — per-task list of LAN, PRIVATE_KEY, NPM_PASSWORD, RPC_URL_138, SSH, LINK, gas, token balance.
@@ -276,6 +276,21 @@ This is intentionally deferred with the rest of the Wemix path. If the chain is
---
## 10. DBIS Chain 138 — phased production path (matrix-driven)
**Ref:** [dbis_chain_138_technical_master_plan.md](../../dbis_chain_138_technical_master_plan.md), [DBIS_NODE_ROLE_MATRIX.md](../02-architecture/DBIS_NODE_ROLE_MATRIX.md)
| Phase | Action |
|-------|--------|
| 1 — Reality mapping | `bash scripts/verify/run-phase1-discovery.sh` (optional: `HYPERLEDGER_PROBE=1`). Reports: `reports/phase1-discovery/`. Runbook: [PHASE1_DISCOVERY_RUNBOOK.md](../03-deployment/PHASE1_DISCOVERY_RUNBOOK.md). |
| 2 — Sovereignization roadmap | Read [DBIS_PHASE2_PROXMOX_SOVEREIGNIZATION_ROADMAP.md](../02-architecture/DBIS_PHASE2_PROXMOX_SOVEREIGNIZATION_ROADMAP.md); execute milestones (cluster expansion, Ceph, VLANs) as prioritized. |
| 3 — E2E simulation | `bash scripts/verify/run-dbis-phase3-e2e-simulation.sh` (optional: `RUN_CHAIN138_RPC_HEALTH=1`). Full flow + Indy/Fabric/CCIP manual steps: [DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md](../03-deployment/DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md). |
| Perf (Caliper) | `bash scripts/verify/print-caliper-chain138-stub.sh` — then [CALIPER_CHAIN138_PERF_HOOK.md](../03-deployment/CALIPER_CHAIN138_PERF_HOOK.md). |
**Readiness:** Resolve critical **Entity owner** / **Region** **TBD** rows in the Node Role Matrix before claiming multi-entity production governance.
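A quick way to see how many matrix rows still carry those TBDs — a sketch only, not part of the checklist tooling; the matrix path is an assumption about your checkout layout:

```shell
# Count role-matrix table rows that still contain a governance TBD.
# Works on any markdown table piped in.
count_tbd_rows() { grep -c '^|.*TBD'; }

# Real call (path assumed relative to repo root):
#   count_tbd_rows < docs/02-architecture/DBIS_NODE_ROLE_MATRIX.md
# Example against a two-row sample (second row uses hypothetical values):
printf '| 1000 | besu-validator-1 | TBD | TBD |\n| 130 | monitoring-1 | DBIS Core | HQ |\n' | count_tbd_rows
# prints 1
```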
---
## References
- [COMPLETE_REQUIRED_OPTIONAL_RECOMMENDED_INDEX.md](COMPLETE_REQUIRED_OPTIONAL_RECOMMENDED_INDEX.md) — full plan (required, optional, recommended)


@@ -0,0 +1,168 @@
# DBIS Node Role Matrix
**Last updated:** 2026-03-29 (UTC) — regenerate machine-derived rows: `bash scripts/docs/generate-dbis-node-role-matrix-md.sh`
**Status:** Active — infrastructure constitution for DBIS Chain 138 and colocated workloads.
## Purpose
This matrix assigns **node type**, **preferred host placement**, **validator/signing role** (for Besu), and **security tier** per workload. It implements the entity-placement model in [dbis_chain_138_technical_master_plan.md](../../dbis_chain_138_technical_master_plan.md) (Sections 6–7) in a form operators can maintain.
**Canonical pairs (keep in sync):**
- Human detail and status: [ALL_VMIDS_ENDPOINTS.md](../04-configuration/ALL_VMIDS_ENDPOINTS.md)
- Machine-readable services: [config/proxmox-operational-template.json](../../config/proxmox-operational-template.json)
When you change VMID, IP, hostname, or placement, update **ALL_VMIDS** and **operational-template.json** first, then regenerate the table below with this script (or edit the static sections manually).
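As a sketch of what the regeneration step consumes — the `services[]` field names (`vmid`, `hostname`, `ipv4`) are assumptions here; check the real schema in the template JSON:

```shell
# Derive markdown table rows from a template-shaped JSON file.
tmpl=$(mktemp)
cat > "$tmpl" <<'EOF'
{"services":[{"vmid":1000,"hostname":"besu-validator-1","ipv4":"192.168.11.100"}]}
EOF
if command -v jq >/dev/null 2>&1; then
  # with jq present, prints: | 1000 | besu-validator-1 | 192.168.11.100 |
  jq -r '.services[] | "| \(.vmid) | \(.hostname) | \(.ipv4) |"' "$tmpl"
else
  echo "jq not installed"
fi
```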
## Columns
| Column | Meaning |
|--------|---------|
| **Entity owner** | DBIS Core, Central Bank, IFI, Regional Operator, etc. — use **TBD** until governance assigns. |
| **Region** | Geographic or site label — **TBD** until multi-site is formalized. |
| **IP note** | Flags duplicate IPv4 entries in the planning template. A duplicate means **shared or historical mapping**, not concurrent ownership — verify live owner in ALL_VMIDS or on-cluster. |
| **Preferred host** | Preferred Proxmox node (`r630-01`, `r630-02`, `ml110`, `any`). This is a planning target, not an assertion of current placement. |
| **Validator / signing** | For Chain 138 Besu: QBFT signer, sentry (no signer), RPC-only, or N/A. |
| **Security tier** | High-level zone: validator-tier, DMZ/RPC, edge ingress, identity/DLT, application, etc. |
## Proxmox hypervisor nodes
| Hostname | MGMT IP | Cluster | Role (summary) |
|----------|---------|---------|------------------|
| ml110 | 192.168.11.10 | h — verify | legacy_cluster_member_or_wan_aggregator |
| r630-01 | 192.168.11.11 | h | primary_compute_chain138_rpc_ccip_relay_sankofa |
| r630-02 | 192.168.11.12 | h | firefly_npmplus_secondary_mim4u_mifos_support |
## Workloads (from operational template)
Machine-derived rows below come from `services[]` in `config/proxmox-operational-template.json`. Duplicate IPv4 notes are warnings that the planning template still contains alternative or legacy ownership for the same address; they must not be read as concurrent live allocations.
| VMID | Hostname | IPv4 | IP note | Node type | Entity owner | Region | Preferred host | Validator / signing | Security tier |
|------|----------|------|---------|-----------|--------------|--------|----------------|---------------------|---------------|
| — | order-redis-primary | 192.168.11.38 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 100 | proxmox-mail-gateway | 192.168.11.32 | unique in template | Infra LXC | TBD | TBD | r630-02 | N/A | management / secrets |
| 101 | proxmox-datacenter-manager | 192.168.11.33 | unique in template | Infra LXC | TBD | TBD | r630-02 | N/A | management / secrets |
| 102 | cloudflared | 192.168.11.34 | unique in template | Cloudflare tunnel | TBD | TBD | r630-01 | N/A | edge ingress |
| 103 | omada | 192.168.11.30 | unique in template | Infra LXC | TBD | TBD | r630-02 | N/A | management / secrets |
| 104 | gitea | 192.168.11.31 | unique in template | Infra LXC | TBD | TBD | r630-02 | N/A | management / secrets |
| 105 | nginxproxymanager | 192.168.11.26 | unique in template | Legacy NPM | TBD | TBD | r630-02 | N/A | standard internal |
| 130 | monitoring-1 | 192.168.11.27 | unique in template | Monitoring | TBD | TBD | r630-02 | N/A | standard internal |
| 1000 | besu-validator-1 | 192.168.11.100 | unique in template | Besu validator | TBD | TBD | r630-01 | QBFT signer | validator-tier |
| 1001 | besu-validator-2 | 192.168.11.101 | unique in template | Besu validator | TBD | TBD | r630-01 | QBFT signer | validator-tier |
| 1002 | besu-validator-3 | 192.168.11.102 | unique in template | Besu validator | TBD | TBD | r630-01 | QBFT signer | validator-tier |
| 1003 | besu-validator-4 | 192.168.11.103 | unique in template | Besu validator | TBD | TBD | r630-01 | QBFT signer | validator-tier |
| 1004 | besu-validator-5 | 192.168.11.104 | unique in template | Besu validator | TBD | TBD | r630-01 | QBFT signer | validator-tier |
| 1500 | besu-sentry-1 | 192.168.11.150 | unique in template | Besu sentry | TBD | TBD | r630-01 | Sentry (no signer) | validator-tier |
| 1501 | besu-sentry-2 | 192.168.11.151 | unique in template | Besu sentry | TBD | TBD | r630-01 | Sentry (no signer) | validator-tier |
| 1502 | besu-sentry-3 | 192.168.11.152 | unique in template | Besu sentry | TBD | TBD | r630-01 | Sentry (no signer) | validator-tier |
| 1503 | besu-sentry-4 | 192.168.11.153 | unique in template | Besu sentry | TBD | TBD | r630-01 | Sentry (no signer) | validator-tier |
| 1504 | besu-sentry-ali | 192.168.11.154 | unique in template | Besu sentry | TBD | TBD | r630-01 | Sentry (no signer) | validator-tier |
| 1505 | besu-sentry-alltra-1 | 192.168.11.213 | unique in template | Besu sentry | TBD | TBD | r630-01 | Sentry (no signer) | validator-tier |
| 1506 | besu-sentry-alltra-2 | 192.168.11.214 | unique in template | Besu sentry | TBD | TBD | r630-01 | Sentry (no signer) | validator-tier |
| 1507 | besu-sentry-hybx-1 | 192.168.11.244 | unique in template | Besu sentry | TBD | TBD | ml110 | Sentry (no signer) | validator-tier |
| 1508 | besu-sentry-hybx-2 | 192.168.11.245 | unique in template | Besu sentry | TBD | TBD | ml110 | Sentry (no signer) | validator-tier |
| 2101 | besu-rpc-core-1 | 192.168.11.211 | unique in template | Besu RPC (rpc_core) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2102 | besu-rpc-core-2 | 192.168.11.212 | unique in template | Besu RPC (rpc_core) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2103 | besu-rpc-core-thirdweb | 192.168.11.217 | unique in template | Besu RPC (rpc_core) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2201 | besu-rpc-public-1 | 192.168.11.221 | unique in template | Besu RPC (rpc_public) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2301 | besu-rpc-private-1 | 192.168.11.232 | unique in template | Besu RPC (rpc_private) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2303 | besu-rpc-ali-0x8a | 192.168.11.233 | unique in template | Besu RPC (rpc_named) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2304 | besu-rpc-ali-0x1 | 192.168.11.234 | unique in template | Besu RPC (rpc_named) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2305 | besu-rpc-luis-0x8a | 192.168.11.235 | unique in template | Besu RPC (rpc_named) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2306 | besu-rpc-luis-0x1 | 192.168.11.236 | unique in template | Besu RPC (rpc_named) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2307 | besu-rpc-putu-0x8a | 192.168.11.237 | unique in template | Besu RPC (rpc_named) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2308 | besu-rpc-putu-0x1 | 192.168.11.238 | unique in template | Besu RPC (rpc_named) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2400 | thirdweb-rpc-1 | 192.168.11.240 | unique in template | Besu RPC (rpc_thirdweb) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2401 | besu-rpc-thirdweb-0x8a-1 | 192.168.11.241 | unique in template | Besu RPC (rpc_thirdweb) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2402 | besu-rpc-thirdweb-0x8a-2 | 192.168.11.242 | unique in template | Besu RPC (rpc_thirdweb) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2403 | besu-rpc-thirdweb-0x8a-3 | 192.168.11.243 | unique in template | Besu RPC (rpc_thirdweb) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2500 | besu-rpc-alltra-1 | 192.168.11.172 | unique in template | Besu RPC (rpc_alltra_hybx) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2501 | besu-rpc-alltra-2 | 192.168.11.173 | unique in template | Besu RPC (rpc_alltra_hybx) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2502 | besu-rpc-alltra-3 | 192.168.11.174 | unique in template | Besu RPC (rpc_alltra_hybx) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2503 | besu-rpc-hybx-1 | 192.168.11.246 | unique in template | Besu RPC (rpc_alltra_hybx) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2504 | besu-rpc-hybx-2 | 192.168.11.247 | unique in template | Besu RPC (rpc_alltra_hybx) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 2505 | besu-rpc-hybx-3 | 192.168.11.248 | unique in template | Besu RPC (rpc_alltra_hybx) | TBD | TBD | r630-01 | RPC only | DMZ / RPC exposure |
| 3000 | ml-node-1 | 192.168.11.60 | unique in template | ML node | TBD | TBD | ml110 | N/A | standard internal |
| 3001 | ml-node-2 | 192.168.11.61 | unique in template | ML node | TBD | TBD | ml110 | N/A | standard internal |
| 3002 | ml-node-3 | 192.168.11.62 | unique in template | ML node | TBD | TBD | ml110 | N/A | standard internal |
| 3003 | ml-node-4 | 192.168.11.63 | unique in template | ML node | TBD | TBD | ml110 | N/A | standard internal |
| 3500 | oracle-publisher-1 | 192.168.11.29 | unique in template | Oracle publisher | TBD | TBD | r630-02 | N/A | standard internal |
| 3501 | ccip-monitor-1 | 192.168.11.28 | unique in template | CCIP monitor | TBD | TBD | r630-02 | N/A | standard internal |
| 5000 | blockscout-1 | 192.168.11.140 | unique in template | Blockscout | TBD | TBD | r630-01 | N/A | standard internal |
| 5010 | tsunamiswap | 192.168.11.91 | unique in template | DeFi | TBD | TBD | r630-01 | N/A | standard internal |
| 5200 | cacti-1 | 192.168.11.80 | unique in template | Cacti | TBD | TBD | r630-02 | N/A | standard internal |
| 5201 | cacti-alltra-1 | 192.168.11.177 | unique in template | Cacti | TBD | TBD | r630-02 | N/A | standard internal |
| 5202 | cacti-hybx-1 | 192.168.11.251 | unique in template | Cacti | TBD | TBD | r630-02 | N/A | standard internal |
| 5700 | dev-vm-gitops | 192.168.11.59 | unique in template | Dev | TBD | TBD | any | N/A | standard internal |
| 5702 | ai-inf-1 | 192.168.11.82 | unique in template | AI infra | TBD | TBD | r630-01 | N/A | standard internal |
| 5705 | ai-inf-2 | 192.168.11.86 | unique in template | AI infra | TBD | TBD | r630-01 | N/A | standard internal |
| 5800 | mifos-fineract | 192.168.11.85 | unique in template | Mifos | TBD | TBD | r630-02 | N/A | standard internal |
| 5801 | dapp-smom | 192.168.11.58 | unique in template | DApp | TBD | TBD | r630-02 | N/A | standard internal |
| 6000 | fabric-1 | 192.168.11.65 | unique in template | Fabric | TBD | TBD | r630-02 | N/A | identity / workflow DLT |
| 6001 | fabric-alltra-1 | 192.168.11.178 | unique in template | Fabric | TBD | TBD | r630-02 | N/A | identity / workflow DLT |
| 6002 | fabric-hybx-1 | 192.168.11.252 | unique in template | Fabric | TBD | TBD | r630-02 | N/A | identity / workflow DLT |
| 6200 | firefly-1 | 192.168.11.35 | shared / non-concurrent mapping — verify live owner | FireFly | TBD | TBD | r630-02 | N/A | identity / workflow DLT |
| 6201 | firefly-ali-1 | 192.168.11.57 | unique in template | FireFly | TBD | TBD | r630-02 | N/A | identity / workflow DLT |
| 6400 | indy-1 | 192.168.11.64 | unique in template | Indy | TBD | TBD | r630-02 | N/A | identity / workflow DLT |
| 6401 | indy-alltra-1 | 192.168.11.179 | unique in template | Indy | TBD | TBD | r630-02 | N/A | identity / workflow DLT |
| 6402 | indy-hybx-1 | 192.168.11.253 | unique in template | Indy | TBD | TBD | r630-02 | N/A | identity / workflow DLT |
| 7800 | sankofa-api-1 | 192.168.11.50 | unique in template | Sankofa / Phoenix | TBD | TBD | r630-01 | N/A | application |
| 7801 | sankofa-portal-1 | 192.168.11.51 | unique in template | Sankofa / Phoenix | TBD | TBD | r630-01 | N/A | application |
| 7802 | sankofa-keycloak-1 | 192.168.11.52 | unique in template | Sankofa / Phoenix | TBD | TBD | r630-01 | N/A | application |
| 7803 | sankofa-postgres-1 | 192.168.11.53 | unique in template | Sankofa / Phoenix | TBD | TBD | r630-01 | N/A | application |
| 7804 | gov-portals-dev | 192.168.11.54 | unique in template | Sankofa / Phoenix | TBD | TBD | r630-01 | N/A | application |
| 7805 | sankofa-studio | 192.168.11.72 | unique in template | Sankofa / Phoenix | TBD | TBD | r630-01 | N/A | application |
| 7810 | mim-web-1 | 192.168.11.37 | shared / non-concurrent mapping — verify live owner | MIM4U | TBD | TBD | r630-02 | N/A | standard internal |
| 7811 | mim-api-1 | 192.168.11.36 | shared / non-concurrent mapping — verify live owner | MIM4U | TBD | TBD | r630-02 | N/A | standard internal |
| 8640 | vault-phoenix-1 | 192.168.11.200 | unique in template | HashiCorp Vault | TBD | TBD | r630-01 | N/A | management / secrets |
| 8641 | vault-phoenix-2 | 192.168.11.215 | unique in template | HashiCorp Vault | TBD | TBD | r630-01 | N/A | management / secrets |
| 8642 | vault-phoenix-3 | 192.168.11.202 | unique in template | HashiCorp Vault | TBD | TBD | r630-01 | N/A | management / secrets |
| 10030 | order-identity | 192.168.11.40 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10040 | order-intake | 192.168.11.41 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10050 | order-finance | 192.168.11.49 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10060 | order-dataroom | 192.168.11.42 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10070 | order-legal | 192.168.11.87 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10080 | order-eresidency | 192.168.11.43 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10090 | order-portal-public | 192.168.11.36 | shared / non-concurrent mapping — verify live owner | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10091 | order-portal-internal | 192.168.11.35 | shared / non-concurrent mapping — verify live owner | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10092 | order-mcp-legal | 192.168.11.37 | shared / non-concurrent mapping — verify live owner | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10100 | dbis-postgres-primary | 192.168.11.105 | unique in template | DBIS stack | TBD | TBD | r630-01 | N/A | application |
| 10101 | dbis-postgres-replica-1 | 192.168.11.106 | unique in template | DBIS stack | TBD | TBD | r630-01 | N/A | application |
| 10120 | dbis-redis | 192.168.11.125 | unique in template | DBIS stack | TBD | TBD | r630-01 | N/A | application |
| 10130 | dbis-frontend | 192.168.11.130 | unique in template | DBIS stack | TBD | TBD | r630-01 | N/A | application |
| 10150 | dbis-api-primary | 192.168.11.155 | unique in template | DBIS stack | TBD | TBD | r630-01 | N/A | application |
| 10151 | dbis-api-secondary | 192.168.11.156 | unique in template | DBIS stack | TBD | TBD | r630-01 | N/A | application |
| 10200 | order-prometheus | 192.168.11.46 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10201 | order-grafana | 192.168.11.47 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10202 | order-opensearch | 192.168.11.48 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10210 | order-haproxy | 192.168.11.39 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10230 | order-vault | 192.168.11.55 | unique in template | The Order service | TBD | TBD | r630-01 | N/A | application |
| 10232 | ct10232 | 192.168.11.56 | unique in template | General CT | TBD | TBD | r630-01 | N/A | standard internal |
| 10233 | npmplus-primary | 192.168.11.167 | unique in template | NPMplus ingress | TBD | TBD | r630-01 | N/A | edge ingress |
| 10234 | npmplus-secondary | 192.168.11.168 | unique in template | NPMplus ingress | TBD | TBD | r630-02 | N/A | edge ingress |
| 10235 | npmplus-alltra-hybx | 192.168.11.169 | unique in template | NPMplus ingress | TBD | TBD | r630-02 | N/A | edge ingress |
| 10236 | npmplus-fourth-dev | 192.168.11.170 | unique in template | NPMplus ingress | TBD | TBD | r630-02 | N/A | edge ingress |
| 10237 | npmplus-mifos | 192.168.11.171 | unique in template | NPMplus ingress | TBD | TBD | r630-02 | N/A | edge ingress |
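The shared / non-concurrent flags above (e.g. `192.168.11.35` / `.36` / `.37`) can be re-derived from the template with plain grep — a sketch, assuming an `"ipv4"` field name in `services[]`:

```shell
# List IPv4 addresses claimed by more than one services[] entry.
tmpl=$(mktemp)
cat > "$tmpl" <<'EOF'
{"services":[
 {"vmid":6200,"hostname":"firefly-1","ipv4":"192.168.11.35"},
 {"vmid":10091,"hostname":"order-portal-internal","ipv4":"192.168.11.35"},
 {"vmid":104,"hostname":"gitea","ipv4":"192.168.11.31"}]}
EOF
# Extract each ipv4 value, then keep only addresses seen more than once.
grep -o '"ipv4":"[^"]*"' "$tmpl" | cut -d'"' -f4 | sort | uniq -d
# prints 192.168.11.35
```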
## Supplementary rows (not in template JSON)
These appear in [ALL_VMIDS_ENDPOINTS.md](../04-configuration/ALL_VMIDS_ENDPOINTS.md) but are not modeled as `services[]` entries in `proxmox-operational-template.json`. They are **manual supplements**, not generator-backed source of truth.
| VMID | Hostname | IPv4 | IP note | Node type | Entity owner | Region | Preferred host | Validator / signing | Security tier |
|------|----------|------|---------|-----------|--------------|--------|----------------|---------------------|---------------|
| 106 | redis-rpc-translator | 192.168.11.110 | manual supplement | RPC translator (Redis) | TBD | TBD | r630-01 (per ALL_VMIDS) | N/A | DMZ / RPC exposure |
| 107 | web3signer-rpc-translator | 192.168.11.111 | manual supplement | RPC translator (Web3Signer) | TBD | TBD | r630-01 | N/A | DMZ / RPC exposure |
| 108 | vault-rpc-translator | 192.168.11.112 | manual supplement | RPC translator (Vault) | TBD | TBD | r630-01 | N/A | management / secrets |
## Host-level services (no VMID)
| Name | Location | Node type | Notes |
|------|----------|-----------|-------|
| CCIP relay | r630-01 host `/opt/smom-dbis-138/services/relay` | Cross-chain relay | Uses RPC (e.g. VMID 2201); see [NETWORK_CONFIGURATION_MASTER.md](../11-references/NETWORK_CONFIGURATION_MASTER.md), [docs/07-ccip/](../07-ccip/). |
## Related
- [dbis_chain_138_technical_master_plan.md](../../dbis_chain_138_technical_master_plan.md)
- [CHAIN138_CANONICAL_NETWORK_ROLES_VALIDATORS_SENTRY_AND_RPC.md](CHAIN138_CANONICAL_NETWORK_ROLES_VALIDATORS_SENTRY_AND_RPC.md)
- [VMID_ALLOCATION_FINAL.md](VMID_ALLOCATION_FINAL.md)


@@ -0,0 +1,69 @@
# DBIS Phase 2 — Proxmox sovereignization roadmap
**Last updated:** 2026-03-28
**Purpose:** Close the gap between **today's** Proxmox footprint (2–3 active cluster nodes, ZFS/LVM-backed guests, VLAN 11 LAN) and the **target** in [dbis_chain_138_technical_master_plan.md](../../dbis_chain_138_technical_master_plan.md) Sections 4–5 and 8 (multi-node HA, Ceph-backed storage, stronger segmentation, standardized templates).
**Current ground truth:** [PROXMOX_VE_OPERATIONAL_DEPLOYMENT_TEMPLATE.md](../03-deployment/PROXMOX_VE_OPERATIONAL_DEPLOYMENT_TEMPLATE.md), [config/proxmox-operational-template.json](../../config/proxmox-operational-template.json), [STORAGE_GROWTH_AND_HEALTH.md](../04-configuration/STORAGE_GROWTH_AND_HEALTH.md).
---
## Current state (summary)
| Area | As deployed (typical) | Master plan target |
|------|----------------------|-------------------|
| Cluster | Corosync cluster **h** on ml110 + r630-01 + r630-02 (ml110 **may** be repurposed — verify in Phase 1) | 3+ control-oriented nodes, odd quorum, HA services |
| Storage | Local ZFS / LVM thin pools per host | Ceph OSD tier + pools for VM disks and/or RBD |
| Network | Primary **192.168.11.0/24**, VLAN 11, UDM Pro edge, NPMplus ingress | Additional VLANs: storage replication, validator-only, identity, explicit DMZ mapping |
| Workloads | Chain 138 Besu validators/RPC, Hyperledger CTs, apps — see [DBIS_NODE_ROLE_MATRIX.md](DBIS_NODE_ROLE_MATRIX.md) | Same roles, **template-standardized** provisioning |
---
## Milestone 1 — Cluster quorum and fleet expansion
- Bring **r630-03+** online per [R630_13_NODE_DOD_HA_MASTER_PLAN.md](R630_13_NODE_DOD_HA_MASTER_PLAN.md) and [11-references/13_NODE_AND_ASSETS_BRING_ONLINE_CHECKLIST.md](../11-references/13_NODE_AND_ASSETS_BRING_ONLINE_CHECKLIST.md).
- Maintain **odd** node count for Corosync quorum; use qdevice if temporarily even-count during ml110 migration ([UDM_PRO_PROXMOX_CLUSTER.md](../04-configuration/UDM_PRO_PROXMOX_CLUSTER.md)).
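The quorum steps above can be sketched as follows (run on a cluster node; the qnetd host IP is a placeholder, and the block is guarded so it is a no-op off-cluster):

```shell
# Even-count mitigation during the ml110 migration: external qdevice.
if command -v pvecm >/dev/null 2>&1; then
  pvecm status                          # confirm current votes/quorum
  apt-get install -y corosync-qdevice   # on every cluster node
  pvecm qdevice setup 192.168.11.250    # placeholder qnetd host IP
  pvecm status                          # Qdevice should now contribute a vote
else
  echo "not a Proxmox VE node; run the pvecm steps on the cluster"
fi
```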
---
## Milestone 2 — ML110 migration / WAN aggregator
- **Before** repurposing ml110 to OPNsense/pfSense ([ML110_OPNSENSE_PFSENSE_WAN_AGGREGATOR.md](../11-references/ML110_OPNSENSE_PFSENSE_WAN_AGGREGATOR.md)): migrate all remaining CT/VM to R630s ([NETWORK_CONFIGURATION_MASTER.md](../11-references/NETWORK_CONFIGURATION_MASTER.md)).
- Re-document **physical inventory** row for `.10` after cutover ([PHYSICAL_HARDWARE_INVENTORY.md](PHYSICAL_HARDWARE_INVENTORY.md)).
---
## Milestone 3 — Ceph introduction (decision + prerequisites)
- **Decision record:** whether Ceph replaces or complements ZFS/LVM for new workloads; minimum network (10G storage net, jumbo frames if used), disk layout, and JBOD attachment per [HARDWARE_INVENTORY_MASTER.md](../11-references/HARDWARE_INVENTORY_MASTER.md).
- Pilot: non-production pool → migrate one test CT → expand OSD count.
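The pilot path can be sketched as below — the pool name and data device are placeholders, and the block is guarded so it only acts on a node with Proxmox Ceph tooling installed:

```shell
# Ceph pilot sketch: install, one OSD, one non-production pool.
if command -v pveceph >/dev/null 2>&1; then
  pveceph install                        # per node: installs Ceph packages
  pveceph osd create /dev/sdX            # placeholder data device
  pveceph pool create dbis-pilot --add_storages
  ceph -s                                # wait for HEALTH_OK before test-CT migration
else
  echo "pveceph not present; run on a Proxmox VE node"
fi
```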
---
## Milestone 4 — Network segmentation (incremental)
Map master plan layers to implementable steps:
1. Dedicated **storage replication** VLAN (Ceph backhaul or ZFS sync).
2. **Validator / P2P** constraints (firewall rules between sentry and RPC tiers — align [CHAIN138_CANONICAL_NETWORK_ROLES_VALIDATORS_SENTRY_AND_RPC.md](CHAIN138_CANONICAL_NETWORK_ROLES_VALIDATORS_SENTRY_AND_RPC.md)).
3. **Identity / Indy** tier isolation when multi-entity governance requires it.
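Step 1 could start as a VLAN sub-interface on each host's bridge — a sketch only; the VLAN ID, bridge name, subnet, and MTU below are placeholders, not deployed values:

```
# /etc/network/interfaces fragment (placeholder values)
auto vmbr0.20
iface vmbr0.20 inet static
    address 10.20.0.11/24
    mtu 9000            # only if jumbo frames are adopted per Milestone 3
    # storage replication VLAN: Ceph backhaul / ZFS sync traffic only
```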
---
## Milestone 5 — VM / CT templates (Section 7 of master plan)
- Align [PROXMOX_VM_CREATION_RUNBOOK.md](../03-deployment/PROXMOX_VM_CREATION_RUNBOOK.md) with template types: Identity (Indy/Aries), Settlement (Besu), Institutional (Fabric), Workflow (FireFly), Observability (Explorer/monitoring).
- Encode **preferred_node** and sizing in [DBIS_NODE_ROLE_MATRIX.md](DBIS_NODE_ROLE_MATRIX.md) and sync [proxmox-operational-template.json](../../config/proxmox-operational-template.json).
---
## Milestone 6 — Backup and DR alignment (master plan Sections 8, 16)
- Hourly/daily snapshot policy per guest tier; cross-site replication targets (RPO/RTO) documented outside this file when available.
- Reference: existing backup scripts for NPMplus and operator checklist.
---
## Related
- [PHASE1_DISCOVERY_RUNBOOK.md](../03-deployment/PHASE1_DISCOVERY_RUNBOOK.md)
- [DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md](../03-deployment/DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md)


@@ -1,7 +1,7 @@
# Physical Hardware Inventory
**Last Updated:** 2026-03-28
**Document Version:** 1.2
**Status:** Active Documentation
---
@@ -14,12 +14,12 @@ This document is the placeholder for the physical hardware inventory (hosts, IPs
| Host | IP | Role | NICs |
|------|-----|------|------|
| ml110 | 192.168.11.10 | **Transitional:** historically Proxmox + Besu/sentry/ML workloads; **target** is OPNsense/pfSense WAN aggregator between cable modems and dual UDM Pro — see [NETWORK_CONFIGURATION_MASTER.md](../11-references/NETWORK_CONFIGURATION_MASTER.md). Confirm live role with `pvecm status` / `pct list` on `.10` (Phase 1: `scripts/verify/run-phase1-discovery.sh`). | 2× Broadcom BCM5717 1GbE |
| r630-01 | 192.168.11.11 | Infrastructure, Chain 138 RPC, Sankofa/Order, CCIP relay host | 4× Broadcom BCM5720 1GbE |
| r630-02 | 192.168.11.12 | FireFly, Fabric/Indy/Cacti, NPMplus instances, MIM4U, Mifos | 4× Broadcom BCM57800 1/10GbE |
| UDM Pro (edge) | 76.53.10.34 | Edge router | — |
**See:** [PROXMOX_HOSTS_COMPLETE_HARDWARE_CONFIG.md](PROXMOX_HOSTS_COMPLETE_HARDWARE_CONFIG.md), [NETWORK_CONFIGURATION_MASTER.md](../11-references/NETWORK_CONFIGURATION_MASTER.md), [NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md), [VMID_ALLOCATION_FINAL.md](VMID_ALLOCATION_FINAL.md), [DBIS_NODE_ROLE_MATRIX.md](DBIS_NODE_ROLE_MATRIX.md).
---


@@ -0,0 +1,23 @@
# Caliper performance hook — Chain 138 (Besu)
**Last updated:** 2026-03-28
**Purpose:** Satisfy [dbis_chain_138_technical_master_plan.md](../../dbis_chain_138_technical_master_plan.md) Section 14 without vendoring Caliper into this repository.
## Approach
1. Use upstream [Hyperledger Caliper](https://github.com/hyperledger/caliper) (npm package `@hyperledger/caliper-cli`).
2. Create a **separate** working directory (or CI job) with:
- `networkconfig.json` pointing `url` to Chain 138 HTTP RPC (prefer an isolated load-test node, not production public RPC).
- `benchmarks/` with a minimal `read` workload (`eth_blockNumber`, `eth_getBlockByNumber`) before write-heavy contracts.
3. Run: `npx caliper launch manager --caliper-workspace . --caliper-networkconfig networkconfig.json --caliper-benchconfig benchmarks/config.yaml`
4. Archive results (HTML/JSON) next to Phase 1 discovery reports if desired: `reports/phase1-discovery/` or `reports/caliper/`.
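Steps 2–3 can be sketched as a throwaway workspace. The connector fields shown are only the minimal `caliper`/`ethereum` keys and the URL is a placeholder; real runs need account and contract fields per the Caliper Ethereum connector docs:

```shell
# Scratch Caliper workspace sketch (placeholder endpoint; deliberately
# incomplete connector config — not runnable against a live chain as-is).
ws=$(mktemp -d)
mkdir -p "$ws/benchmarks"
cat > "$ws/networkconfig.json" <<'EOF'
{
  "caliper": { "blockchain": "ethereum" },
  "ethereum": { "url": "ws://load-test-rpc.example:8546" }
}
EOF
ls "$ws"
# then, from $ws:
#   npx caliper launch manager --caliper-workspace . \
#     --caliper-networkconfig networkconfig.json \
#     --caliper-benchconfig benchmarks/config.yaml
```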
## Safety
- Use **low** transaction rates first; Besu validators and RPC tier are production assets.
- Do not point Caliper at **validator** JSON-RPC ports; use **RPC tier** only.
- Align gas and chain ID with `smom-dbis-138/.env` and [DEPLOYMENT_ORDER_OF_OPERATIONS.md](DEPLOYMENT_ORDER_OF_OPERATIONS.md).
## Wrapper
`bash scripts/verify/print-caliper-chain138-stub.sh` prints this path and suggested env vars (no network I/O).


@@ -0,0 +1,66 @@
# DBIS Hyperledger Runtime Status
**Last Reviewed:** 2026-03-28
**Purpose:** Concise app-level status table for the non-Besu Hyperledger footprint currently hosted on Proxmox. This complements the VMID inventory and discovery runbooks by recording what was actually verified inside the running containers.
## Scope
This document summarizes the latest operator verification for:
- FireFly CTs: `6200`, `6201`
- Fabric CTs: `6000`, `6001`, `6002`
- Indy CTs: `6400`, `6401`, `6402`
The checks were based on:
- `pct status`
- in-container process checks
- in-container listener checks
- FireFly API / Postgres / IPFS checks where applicable
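The check layers listed above, as a sketch of one per-CT pass — run on the Proxmox host that owns the CTs, guarded so it degrades gracefully elsewhere:

```shell
# One verification pass for a CT: status, then in-container listeners.
check_ct() {
  ct="$1"
  if command -v pct >/dev/null 2>&1; then
    pct status "$ct"
    pct exec "$ct" -- ss -tlnp | head -n 20    # in-container listener check
  else
    echo "ct $ct: pct unavailable (not the PVE host)"
  fi
}
check_ct 6200
# App-level probes where listeners exist, e.g. for FireFly on 6200:
#   curl -fsS http://192.168.11.35:5000/api/v1/status
#   pct exec 6200 -- pg_isready
```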
## Current status table
| VMID | Service family | CT status | App-level status | Listening ports / probe | Notes |
|------|----------------|-----------|------------------|--------------------------|-------|
| `6200` | FireFly primary | Running | Healthy minimal local gateway | `5000/tcp` FireFly API, `5432/tcp` Postgres, `5001/tcp` IPFS | `firefly-core` restored on `ghcr.io/hyperledger/firefly:v1.2.0`; `GET /api/v1/status` returned `200`; Postgres `pg_isready` passed; IPFS version probe passed |
| `6201` | FireFly secondary | Stopped | Standby / incomplete | None verified | CT exists but rootfs is effectively empty and no valid FireFly deployment footprint was found; do not treat as active secondary |
| `6000` | Fabric primary | Running | Unproven | No Fabric listener verified | CT runs, but current app-level checks did not show active peer/orderer processes or expected listeners such as `7050` / `7051` |
| `6001` | Fabric secondary | Running | Unproven | No Fabric listener verified | Same current state as `6000` |
| `6002` | Fabric tertiary | Running | Unproven | No Fabric listener verified | Same current state as `6000` |
| `6400` | Indy primary | Running | Unproven | No Indy listener verified | CT runs, but current checks did not show Indy node listeners on expected ports such as `9701`-`9708` |
| `6401` | Indy secondary | Running | Unproven | No Indy listener verified | Same current state as `6400` |
| `6402` | Indy tertiary | Running | Unproven | No Indy listener verified | Same current state as `6400` |
## Interpretation
### Confirmed working now
- FireFly primary (`6200`) is restored enough to provide a working local FireFly API backed by Postgres and IPFS.
### Present but not currently proved as active application workloads
- Fabric CTs (`6000`-`6002`)
- Indy CTs (`6400`-`6402`)
These should be described as container footprints under validation, not as fully verified production application nodes, until app-level services and expected listeners are confirmed.
### Not currently active
- FireFly secondary (`6201`) should be treated as standby or incomplete deployment state unless it is intentionally rebuilt and verified.
## Operational follow-up
1. Keep `6200` under observation and preserve its working config/image path.
2. Do not force `6201` online unless its intended role and deployment assets are re-established.
3. For Fabric and Indy, the next verification step is app-native validation, not more CT-level checks.
4. Any governance or architecture document should distinguish:
- `deployed and app-healthy`
- `container present only`
- `planned / aspirational`
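Point 3's app-native validation could start from upstream default ports — a sketch; the port defaults (Fabric 7050/7051, Indy 9701–9708) are upstream conventions, not verified local configuration:

```shell
# Probe expected app listeners inside a CT (upstream default ports);
# guarded so it is a no-op off the PVE host.
probe_ports() {
  ct="$1"; shift
  if ! command -v pct >/dev/null 2>&1; then
    echo "ct $ct: pct unavailable"; return 0
  fi
  for port in "$@"; do
    pct exec "$ct" -- ss -tln "sport = :$port" | grep -q LISTEN \
      && echo "ct $ct: listener on $port" \
      || echo "ct $ct: NO listener on $port"
  done
}
probe_ports 6000 7050 7051   # Fabric orderer / peer defaults
probe_ports 6400 9701 9702   # Indy node / client defaults
```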
## Related artifacts
- [docs/02-architecture/DBIS_NODE_ROLE_MATRIX.md](../02-architecture/DBIS_NODE_ROLE_MATRIX.md)
- [docs/03-deployment/PHASE1_DISCOVERY_RUNBOOK.md](PHASE1_DISCOVERY_RUNBOOK.md)
- [docs/03-deployment/DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md](DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md)
- [dbis_chain_138_technical_master_plan.md](../../dbis_chain_138_technical_master_plan.md)


@@ -0,0 +1,76 @@
# DBIS Phase 3 — End-to-end production simulation
**Last updated:** 2026-03-28
**Purpose:** Operationalize [dbis_chain_138_technical_master_plan.md](../../dbis_chain_138_technical_master_plan.md) Section 18 (example flow) and Sections 14, 17 as **repeatable liveness and availability checks** — not a single product build or a full business-E2E execution harness.
**Prerequisites:** LAN access where noted; [DBIS_NODE_ROLE_MATRIX.md](../02-architecture/DBIS_NODE_ROLE_MATRIX.md) for IPs/VMIDs; operator env via `scripts/lib/load-project-env.sh` for on-chain steps.
---
## Section 18 flow → concrete checks
| Step | Master plan | Verification (repo-aligned) |
|------|-------------|-----------------------------|
| 1 | Identity issued (Indy) | Indy steward / node RPC on VMID **6400** (192.168.11.64); pool genesis tools — **manual** until automated issuer script exists. Current CTs `6400/6401/6402` are present, but app-level Indy listener verification is still pending. |
| 2 | Credential verified (Aries) | Aries agents (if colocated): confirm stack on Indy/FireFly integration path — **TBD** per deployment. |
| 3 | Workflow triggered (FireFly) | FireFly API on **6200** (currently restored as a minimal local gateway profile at `http://192.168.11.35:5000`). VMID **6201** is presently stopped / standby and should not be assumed active. |
| 4 | Settlement executed (Besu) | JSON-RPC `eth_chainId`, `eth_blockNumber`, optional test transaction via `smom-dbis-138` with `RPC_URL_138=http://192.168.11.211:8545`. PMM/oracle: [ORACLE_AND_KEEPER_CHAIN138.md](../../smom-dbis-138/docs/integration/ORACLE_AND_KEEPER_CHAIN138.md). |
| 5 | Cross-chain sync (Cacti) | Cacti = network monitoring here (VMID **5200**); **Hyperledger Cacti** interoperability is **future/optional** — track separately if deployed. **CCIP:** relay on r630-01 per [CCIP_RELAY_DEPLOYMENT.md](../07-ccip/CCIP_RELAY_DEPLOYMENT.md). |
| 6 | Compliance recorded (Fabric) | Fabric CTs `6000/6001/6002` are present, but current app-level verification has not yet proven active peer / orderer workloads inside those CTs. Treat Fabric business-flow validation as manual until that gap is closed. |
| 7 | Final settlement confirmed | Re-check Besu head on **2101** and **2201**; Blockscout **5000** for tx receipt if applicable. |
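The Besu liveness checks in steps 4 and 7 reduce to two JSON-RPC calls. A minimal curl sketch, assuming LAN reachability; the public-RPC HTTP URL here is inferred from the documented WS endpoint (`192.168.11.221`) and is an assumption, and `0x8a` is 138 in hex:

```shell
#!/usr/bin/env bash
# Minimal JSON-RPC liveness probe for the Besu settlement steps (4 and 7).
# Endpoint list is an assumption from the canonical RPC doc, not exhaustive.
set -euo pipefail

probe_rpc() {  # probe_rpc URL METHOD -> prints the JSON-RPC .result
  local url="$1" method="$2"
  curl -fsS -m 5 -H 'Content-Type: application/json' \
    --data "{\"jsonrpc\":\"2.0\",\"method\":\"${method}\",\"params\":[],\"id\":1}" \
    "$url" | jq -r '.result'
}

hex_to_dec() { printf '%d\n' "$1"; }  # 0x8a -> 138

if [ "${RUN_PROBE:-0}" = "1" ]; then
  for url in http://192.168.11.211:8545 http://192.168.11.221:8545; do
    cid="$(probe_rpc "$url" eth_chainId)"
    blk="$(probe_rpc "$url" eth_blockNumber)"
    echo "${url} chainId=$(hex_to_dec "$cid") block=$(hex_to_dec "$blk")"
  done
fi
```

Run with `RUN_PROBE=1` from a LAN host; the guard keeps the file sourceable without network access, and a chainId other than 138 means the wrong endpoint is wired in.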
---
## Automated wrapper (partial)
From repo root:
```bash
bash scripts/verify/run-dbis-phase3-e2e-simulation.sh
```
Optional:
```bash
RUN_CHAIN138_RPC_HEALTH=1 bash scripts/verify/run-dbis-phase3-e2e-simulation.sh
```
The script **does not** replace Indy/Fabric business transactions; it proves **liveness** of RPC, optional FireFly HTTP, and prints manual follow-ups. Treat it as a wrapper for infrastructure availability, not as proof that the complete seven-step business flow succeeded.
---
## Performance slice (Section 14 — Caliper)
Hyperledger Caliper is **not** vendored in this repo. To add benchmarks:
1. Install Caliper in a throwaway directory or CI image.
2. Point a Besu **SUT** at `http://192.168.11.211:8545` (deploy/core RPC only) or a dedicated load-test RPC.
3. Start with `simple` contract scenarios; record **TPS**, **latency p95**, and **error rate**.
**Suggested initial thresholds (tune per governance):**
| Metric | Initial gate (lab) |
|--------|-------------------|
| RPC error rate under steady load | less than 1% for 5 min |
| Block production | no stall > 30s (QBFT) |
| Public RPC `eth_blockNumber` lag vs core | within documented spread ([check-chain138-rpc-health.sh](../../scripts/verify/check-chain138-rpc-health.sh) defaults) |
Details: [CALIPER_CHAIN138_PERF_HOOK.md](CALIPER_CHAIN138_PERF_HOOK.md).
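Before Caliper is vendored, the error-rate gate in the table above can be approximated with a plain curl loop against the core RPC. A rough sketch, not a substitute for the Caliper hook; `/tmp/lat.txt` and the sequential-load shape are illustrative choices:

```shell
#!/usr/bin/env bash
# Rough steady-load probe: N sequential eth_blockNumber calls, then
# error rate and latency p95 computed from the recorded samples.
set -euo pipefail

p95() {  # nearest-rank 95th percentile of newline-separated numbers on stdin
  sort -n | awk '{a[NR]=$1} END {if (NR) {i=int(NR*0.95); if (i<1) i=1; print a[i]}}'
}

RPC_URL="${RPC_URL_138:-http://192.168.11.211:8545}"
N="${N:-100}"
if [ "${RUN_LOAD:-0}" = "1" ]; then
  errs=0; : > /tmp/lat.txt
  for _ in $(seq "$N"); do
    t="$(curl -fsS -m 5 -o /dev/null -w '%{time_total}' \
      -H 'Content-Type: application/json' \
      --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
      "$RPC_URL")" || { errs=$((errs+1)); continue; }
    echo "$t" >> /tmp/lat.txt
  done
  echo "errors: ${errs}/${N}  p95: $(p95 < /tmp/lat.txt)s"
fi
```

Sequential calls understate concurrency effects, so treat this only as a sanity floor before the real benchmark exists.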
---
## Production readiness certification (matrix-driven)
Use [OPERATOR_READY_CHECKLIST.md](../00-meta/OPERATOR_READY_CHECKLIST.md) section **10** plus:
- Phase 1 report timestamped under `reports/phase1-discovery/`.
- Phase 2 milestones acknowledged (Ceph/segmentation may be partial).
- Node Role Matrix: no critical **TBD** for entity-owned validators without a documented interim owner.
---
## Related
- [PHASE1_DISCOVERY_RUNBOOK.md](PHASE1_DISCOVERY_RUNBOOK.md)
- [DBIS_PHASE2_PROXMOX_SOVEREIGNIZATION_ROADMAP.md](../02-architecture/DBIS_PHASE2_PROXMOX_SOVEREIGNIZATION_ROADMAP.md)
- [verify-end-to-end-routing.sh](../../scripts/verify/verify-end-to-end-routing.sh) — public/private ingress

# Phase 1 — Reality mapping runbook
**Last updated:** 2026-03-28
**Purpose:** Operational steps for [dbis_chain_138_technical_master_plan.md](../../dbis_chain_138_technical_master_plan.md) Sections 3 and 19.1-19.3: inventory Proxmox, Besu, optional Hyperledger CTs, and record dependency context.
**Outputs:** Timestamped report under `reports/phase1-discovery/` (created by the orchestrator script).
**Pass / fail semantics:** the orchestrator still writes a full evidence report when a critical section fails, but it now exits **non-zero** and appends a final **Critical failure summary** section. Treat the markdown as evidence capture, not automatic proof of success.
---
## Prerequisites
- Repo root; `jq` recommended for template audit.
- **LAN:** SSH keys to Proxmox nodes (default `192.168.11.10`, `.11`, `.12` from `config/ip-addresses.conf`).
- Optional: `curl` for RPC probe.
---
## One-command orchestrator
```bash
bash scripts/verify/run-phase1-discovery.sh
```
Optional Hyperledger container smoke checks (SSH to r630-02, `pct exec`):
```bash
HYPERLEDGER_PROBE=1 bash scripts/verify/run-phase1-discovery.sh
```
Each run writes:
- `reports/phase1-discovery/phase1-discovery-YYYYMMDD_HHMMSS.md` — human-readable report with embedded diagram and command output.
- `reports/phase1-discovery/phase1-discovery-YYYYMMDD_HHMMSS.log` — plain-log mirror of the same content.
Critical sections for exit status:
- Proxmox template audit
- `pvecm` / `pvesm` / `pct list` / `qm list`
- Chain 138 core RPC quick probe
- `check-chain138-rpc-health.sh`
- `verify-besu-enodes-and-ips.sh`
- optional Hyperledger CT probe when `HYPERLEDGER_PROBE=1`
See also `reports/phase1-discovery/README.md`.
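The capture-everything-then-fail semantics described above can be mirrored in any wrapper: run each critical section, record failures, and exit non-zero only after the report is complete. A sketch of that pattern, with placeholder section commands rather than the orchestrator's real internals:

```shell
#!/usr/bin/env bash
# Pattern: every section runs and is logged; failure is reported at the end
# so evidence capture always completes. Section commands are placeholders.
set -uo pipefail   # deliberately no -e: a failing section must not abort

declare -a FAILED=()

run_section() {  # run_section NAME CMD... -> logs status, records failures
  local name="$1"; shift
  if "$@"; then
    echo "OK   ${name}"
  else
    echo "FAIL ${name}"
    FAILED+=("$name")
  fi
}

run_section "template-audit" true      # placeholder for the real check
run_section "rpc-quick-probe" true     # placeholder for the real check

if [ "${#FAILED[@]}" -gt 0 ]; then
  printf 'Critical failure summary:\n'
  printf ' - %s\n' "${FAILED[@]}"
  exit 1
fi
```

The key design choice is dropping `set -e`: the run must finish writing the markdown report even when an early critical section fails.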
---
## Dependency graph (logical)
Ingress → RPC/sentries/validators → explorer; the CCIP relay on r630-01 uses the public RPC; FireFly/Fabric/Indy are optional DLT components in the Section 18 flow.
```mermaid
flowchart TB
subgraph edge [EdgeIngress]
CF[Cloudflare_DNS]
NPM[NPMplus_LXC]
end
subgraph besu [Chain138_Besu]
RPCpub[RPC_public_2201]
RPCcore[RPC_core_2101]
Val[Validators_1000_1004]
Sen[Sentries_1500_1508]
end
subgraph observe [Observability]
BS[Blockscout_5000]
end
subgraph relay [CrossChain]
CCIP[CCIP_relay_r63001_host]
end
subgraph dlt [Hyperledger_optional]
FF[FireFly_6200_6201]
Fab[Fabric_6000_plus]
Indy[Indy_6400_plus]
end
CF --> NPM
NPM --> RPCpub
NPM --> RPCcore
NPM --> BS
RPCpub --> Sen
RPCcore --> Sen
Sen --> Val
CCIP --> RPCpub
FF --> Fab
FF --> Indy
```
**References:** [PROXMOX_VE_OPERATIONAL_DEPLOYMENT_TEMPLATE.md](PROXMOX_VE_OPERATIONAL_DEPLOYMENT_TEMPLATE.md), [ALL_VMIDS_ENDPOINTS.md](../04-configuration/ALL_VMIDS_ENDPOINTS.md), [NETWORK_CONFIGURATION_MASTER.md](../11-references/NETWORK_CONFIGURATION_MASTER.md).
---
## Manual follow-ups
| Task | Command / doc |
|------|----------------|
| Template vs live VMIDs | `bash scripts/verify/audit-proxmox-operational-template.sh` |
| Besu configs | `bash scripts/audit-besu-configs.sh` (review before running; LAN) |
| IP audit | `bash scripts/audit-all-vm-ips.sh` |
| Node role constitution | [DBIS_NODE_ROLE_MATRIX.md](../02-architecture/DBIS_NODE_ROLE_MATRIX.md) |
---
## ML110 documentation reconciliation
**Physical inventory** summary must match **live** role:
- If `192.168.11.10` still runs **Proxmox** and hosts guests, state that explicitly.
- If migration to **OPNsense/pfSense WAN aggregator** is in progress or complete, align with [NETWORK_CONFIGURATION_MASTER.md](../11-references/NETWORK_CONFIGURATION_MASTER.md) and [PHYSICAL_HARDWARE_INVENTORY.md](../02-architecture/PHYSICAL_HARDWARE_INVENTORY.md).
Use `pvecm status` and `pct list` on `.10` from the orchestrator output as evidence.
---
## Related
- [DBIS_NODE_ROLE_MATRIX.md](../02-architecture/DBIS_NODE_ROLE_MATRIX.md)
- [DBIS_PHASE2_PROXMOX_SOVEREIGNIZATION_ROADMAP.md](../02-architecture/DBIS_PHASE2_PROXMOX_SOVEREIGNIZATION_ROADMAP.md)
- [DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md](DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md)

This is the **authoritative source** for all RPC endpoint configurations.
- Set in `config/ip-addresses.conf` or `smom-dbis-138/.env`. In smom `.env`, **`RPC_URL`** is an accepted alias for **Core** and is normalized to `RPC_URL_138`. `CHAIN138_RPC_URL` / `CHAIN138_RPC` are derived from `RPC_URL_138`. `WS_URL_138_PUBLIC` is the WebSocket for Public (e.g. `ws://192.168.11.221:8546`).
- **Core RPC (VMID 2101) for deploy:** Use **IP and port**, not FQDN. Set `RPC_URL_138=http://192.168.11.211:8545` in `smom-dbis-138/.env` for contract deployment and gas checks. Do not use `https://rpc-core.d-bis.org` for deployment (avoids DNS/tunnel dependency; direct IP is reliable from LAN). See [TODOS_CONSOLIDATED](../00-meta/TODOS_CONSOLIDATED.md) § First (0b).
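The alias normalization described above follows a common env-canonicalization pattern; a sketch of the idea only, not the actual `scripts/lib/load-project-env.sh` contents:

```shell
#!/usr/bin/env bash
# Sketch: accept RPC_URL as a Core alias, canonicalize to RPC_URL_138,
# and derive CHAIN138_* from it. Illustrative, not the repo's loader.
set -euo pipefail

normalize_rpc_env() {
  # Alias handling: RPC_URL fills RPC_URL_138 only when the latter is unset.
  if [ -z "${RPC_URL_138:-}" ] && [ -n "${RPC_URL:-}" ]; then
    export RPC_URL_138="$RPC_URL"
  fi
  # Derived variables always track the canonical value.
  if [ -n "${RPC_URL_138:-}" ]; then
    export CHAIN138_RPC_URL="$RPC_URL_138"
    export CHAIN138_RPC="$RPC_URL_138"
  fi
}
```

With this precedence, an explicit `RPC_URL_138` in `.env` always wins over the alias, which matches the "canonical value" framing above.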
### Public RPC capability baseline
The public Chain 138 RPC tier is expected to provide the following wallet-grade baseline:
- `eth_chainId`
- `eth_blockNumber`
- `eth_syncing`
- `eth_gasPrice`
- `eth_feeHistory`
- `eth_maxPriorityFeePerGas`
- `eth_estimateGas`
- `eth_getCode`
- `trace_block`
- `trace_replayBlockTransactions`
Use [scripts/verify/check-chain138-rpc-health.sh](../../scripts/verify/check-chain138-rpc-health.sh) for the live health and capability probe.
If `eth_maxPriorityFeePerGas` is missing, the first fix path is upgrading the public node on VMID `2201`: Besu `24.7.0+` adds support for that method. Use [upgrade-public-rpc-vmid2201.sh](../../scripts/besu/upgrade-public-rpc-vmid2201.sh) to perform the targeted public-RPC upgrade.
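The baseline above can be spot-checked method by method. A sketch, assuming `jq` is available and treating JSON-RPC error code `-32601` ('method not found') as the missing signal; `RPC_URL_138_PUBLIC` is an illustrative variable name, not one defined by the repo:

```shell
#!/usr/bin/env bash
# Spot-check which baseline methods the public RPC tier exposes.
# Per the JSON-RPC 2.0 spec, "method not found" is error code -32601;
# any other error (e.g. -32602 invalid params) means the method exists.
set -euo pipefail

method_status() {  # method_status JSON_RESPONSE -> present|missing|error
  local code
  code="$(jq -r '.error.code // empty' <<<"$1")"
  if [ -z "$code" ]; then echo present
  elif [ "$code" = "-32601" ]; then echo missing
  else echo error
  fi
}

RPC_URL="${RPC_URL_138_PUBLIC:-http://192.168.11.221:8545}"
METHODS=(eth_chainId eth_blockNumber eth_syncing eth_gasPrice
         eth_feeHistory eth_maxPriorityFeePerGas eth_estimateGas
         eth_getCode trace_block trace_replayBlockTransactions)

if [ "${RUN_PROBE:-0}" = "1" ]; then
  for m in "${METHODS[@]}"; do
    resp="$(curl -fsS -m 5 -H 'Content-Type: application/json' \
      --data "{\"jsonrpc\":\"2.0\",\"method\":\"${m}\",\"params\":[],\"id\":1}" \
      "$RPC_URL")"
    printf '%-32s %s\n' "$m" "$(method_status "$resp")"
  done
fi
```

`error` rows are expected for methods that require parameters (e.g. `eth_feeHistory` with empty params); only `missing` rows point at the upgrade path above.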
| Variable / use | Canonical value | Notes |
|----------------|-----------------|--------|
| **RPC_URL_138** (Core) | `http://192.168.11.211:8545` | **Prefer IP:port for admin/deploy.** Fallback from off-LAN: `https://rpc-core.d-bis.org` |

# Documentation — Master Index
**Last Updated:** 2026-03-28
**Purpose:** Single entry point for all project documentation. Use this index to find canonical sources and avoid deprecated or duplicate content.
**Status:** Preflight and Chain 138 next steps completed (59/59 on-chain per [check-contracts-on-chain-138.sh](../../scripts/verify/check-contracts-on-chain-138.sh), 12 c* GRU-registered). **2026-03-06:** Contract check list expanded to 59 addresses (PMM, vault/reserve, CompliantFiatTokens); doc refs updated. **2026-03-04:** Celo CCIP bridges deployed; Phases A-D tracked in [03-deployment/REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md](03-deployment/REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md). Phase C: [PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md](03-deployment/PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md); Phase D: [PHASE_D_OPTIONAL_CHECKLIST.md](03-deployment/PHASE_D_OPTIONAL_CHECKLIST.md). **On-chain verification:** DODOPMMIntegration canonical cUSDT/cUSDC — [EXPLORER_TOKEN_LIST_CROSSCHECK](11-references/EXPLORER_TOKEN_LIST_CROSSCHECK.md) §8. **Remaining:** Wemix 0.4 WEMIX, LINK fund, cW* + edge pools — see [00-meta/TODOS_CONSOLIDATED.md](00-meta/TODOS_CONSOLIDATED.md).
| Area | Index / key doc |
|------|-----------------|
| **00-meta** (tasks, next steps, phases) | [00-meta/NEXT_STEPS_INDEX.md](00-meta/NEXT_STEPS_INDEX.md), [00-meta/PHASES_AND_TASKS_MASTER.md](00-meta/PHASES_AND_TASKS_MASTER.md) |
| **02-architecture** | [02-architecture/](02-architecture/) — **Public sector + Phoenix catalog baseline:** [02-architecture/PUBLIC_SECTOR_TENANCY_MARKETPLACE_AND_DEPLOYMENT_BASELINE.md](02-architecture/PUBLIC_SECTOR_TENANCY_MARKETPLACE_AND_DEPLOYMENT_BASELINE.md); **non-goals (incl. catalog vs marketing §9):** [02-architecture/NON_GOALS.md](02-architecture/NON_GOALS.md); **DBIS Chain 138:** [dbis_chain_138_technical_master_plan.md](../dbis_chain_138_technical_master_plan.md), [02-architecture/DBIS_NODE_ROLE_MATRIX.md](02-architecture/DBIS_NODE_ROLE_MATRIX.md), [02-architecture/DBIS_PHASE2_PROXMOX_SOVEREIGNIZATION_ROADMAP.md](02-architecture/DBIS_PHASE2_PROXMOX_SOVEREIGNIZATION_ROADMAP.md) |
| **03-deployment** | [03-deployment/OPERATIONAL_RUNBOOKS.md](03-deployment/OPERATIONAL_RUNBOOKS.md), [03-deployment/DEPLOYMENT_ORDER_OF_OPERATIONS.md](03-deployment/DEPLOYMENT_ORDER_OF_OPERATIONS.md), **Public sector live checklist:** [03-deployment/PUBLIC_SECTOR_LIVE_DEPLOYMENT_CHECKLIST.md](03-deployment/PUBLIC_SECTOR_LIVE_DEPLOYMENT_CHECKLIST.md), **Proxmox VE ops template:** [03-deployment/PROXMOX_VE_OPERATIONAL_DEPLOYMENT_TEMPLATE.md](03-deployment/PROXMOX_VE_OPERATIONAL_DEPLOYMENT_TEMPLATE.md) · [`config/proxmox-operational-template.json`](config/proxmox-operational-template.json); **DBIS Phases 1-3:** [03-deployment/PHASE1_DISCOVERY_RUNBOOK.md](03-deployment/PHASE1_DISCOVERY_RUNBOOK.md), [03-deployment/DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md](03-deployment/DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md), [03-deployment/CALIPER_CHAIN138_PERF_HOOK.md](03-deployment/CALIPER_CHAIN138_PERF_HOOK.md), [03-deployment/DBIS_HYPERLEDGER_RUNTIME_STATUS.md](03-deployment/DBIS_HYPERLEDGER_RUNTIME_STATUS.md) |
| **04-configuration** | [04-configuration/README.md](04-configuration/README.md), [04-configuration/ADDITIONAL_PATHS_AND_EXTENSIONS.md](04-configuration/ADDITIONAL_PATHS_AND_EXTENSIONS.md) (paths, registry, token-mapping, LiFi/Jumper); **Chain 138 wallets:** [04-configuration/CHAIN138_WALLET_CONFIG_VALIDATION.md](04-configuration/CHAIN138_WALLET_CONFIG_VALIDATION.md); **Chain 2138 testnet wallets:** [04-configuration/CHAIN2138_WALLET_CONFIG_VALIDATION.md](04-configuration/CHAIN2138_WALLET_CONFIG_VALIDATION.md); **OMNL Indonesia / HYBX-BATCH-001:** [04-configuration/mifos-omnl-central-bank/HYBX_BATCH_001_OPERATOR_CHECKLIST.md](04-configuration/mifos-omnl-central-bank/HYBX_BATCH_001_OPERATOR_CHECKLIST.md), [04-configuration/mifos-omnl-central-bank/INDONESIA_PACKAGE_4_995_EVIDENCE_STANDARD.md](04-configuration/mifos-omnl-central-bank/INDONESIA_PACKAGE_4_995_EVIDENCE_STANDARD.md) |
| **06-besu** | [06-besu/MASTER_INDEX.md](06-besu/MASTER_INDEX.md) |
| **Testnet (2138)** | [testnet/DEFI_ORACLE_META_TESTNET_2138_RUNBOOK.md](testnet/DEFI_ORACLE_META_TESTNET_2138_RUNBOOK.md), [testnet/TESTNET_DEPLOYMENT.md](testnet/TESTNET_DEPLOYMENT.md) |