DBIS Chain 138 Technical Master Plan
Purpose
This document is the governance and execution baseline for DBIS Chain 138 infrastructure. It is intentionally grounded in repo-backed and operator-verified reality, so it can be used for audits, deployment planning, and readiness decisions without conflating currently deployed, under-validation, and future-state work.
The objective is to move from architecture theory to a production-grade sovereign deployment program that is evidence-based, phased, and operationally auditable.
SECTION 1 — MASTER OBJECTIVES
Primary objectives
- Inventory currently installed stack components and host placement.
- Validate actual service readiness, not just declared architecture.
- Standardize Proxmox VE deployment topology and preferred workload placement.
- Assign infrastructure ownership across ecosystem entities once governance is finalized.
- Define production-grade deployment and verification workflows.
- Track the gap between today’s footprint and sovereign target-state architecture.
- Produce auditable artifacts that operators can regenerate and maintain.
SECTION 2 — CURRENT STACK STATUS
Deployed now
- Hyperledger Besu (QBFT, Chain 138)
- Hyperledger Fabric containers and VMIDs are allocated
- Hyperledger Indy containers and VMIDs are allocated
- Hyperledger FireFly primary container footprint exists
- Blockscout / explorer stack
- Hyperledger Caliper hook and performance guidance (documentation only)
Partially deployed / under validation
- Hyperledger FireFly:
  - primary 6200 is restored as a minimal local FireFly API footprint
  - secondary 6201 is present in inventory but currently behaves like a retired / standby shell with no valid deployment payload
- Hyperledger Fabric:
  - 6000, 6001, 6002 are present in inventory but are now intentionally stopped as reserved placeholders
  - current app-level verification did not show active Fabric peer / orderer workloads or meaningful Fabric payloads inside those CTs
- Hyperledger Indy:
  - 6400, 6401, 6402 are present in inventory but are now intentionally stopped as reserved placeholders
  - current app-level verification did not show active Indy node listeners or meaningful Indy payloads inside those CTs
Planned / aspirational
- Hyperledger Aries as a proven deployed service tier
- Hyperledger AnonCreds as an operationally verified deployed layer
- Hyperledger Ursa as a required runtime dependency
- Hyperledger Quilt
- Hyperledger Avalon
- Hyperledger Cacti as a proven live interoperability layer
- Full multi-region sovereignized Proxmox with Ceph-backed storage and segmented production VLANs
SECTION 3 — CURRENT ENVIRONMENT DISCOVERY
Canonical discovery artifacts
The source-of-truth discovery path for current state is:
- docs/02-architecture/DBIS_NODE_ROLE_MATRIX.md
- docs/03-deployment/PHASE1_DISCOVERY_RUNBOOK.md
- docs/03-deployment/DBIS_HYPERLEDGER_RUNTIME_STATUS.md
- scripts/verify/run-phase1-discovery.sh
- config/proxmox-operational-template.json
- docs/04-configuration/ALL_VMIDS_ENDPOINTS.md
- docs/02-architecture/PHYSICAL_HARDWARE_INVENTORY.md
Discovery scope
Reality mapping must validate:
- Proxmox hosts and cluster health
- VMID / CT inventory versus template JSON
- Besu validators, sentries, and RPC tiers
- Explorer and public RPC availability
- Hyperledger CT presence and app-level readiness where possible
- Storage topology and current backing stores
- Network topology and current LAN / VLAN reality
- ML110 role reality versus migration plan
Required outputs
Every discovery run should produce:
- Infrastructure inventory report
- Service state map
- Dependency context
- Critical failure summary
The markdown report is evidence capture; the script exit code is the pass/fail signal.
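The evidence/gate contract above can be sketched as a thin wrapper: capture the discovery output as the markdown evidence artifact, and gate on the script's exit code. The wrapper itself, `DISCOVERY_CMD`, and `REPORT_DIR` are illustrative assumptions; the real entrypoint is scripts/verify/run-phase1-discovery.sh.

```shell
#!/usr/bin/env sh
# Sketch only: the markdown report is evidence capture; the discovery
# script's exit code is the pass/fail signal.
set -u

DISCOVERY_CMD="${DISCOVERY_CMD:-scripts/verify/run-phase1-discovery.sh}"
REPORT_DIR="${REPORT_DIR:-reports/phase1-discovery}"

run_discovery_gate() {
  mkdir -p "$REPORT_DIR"
  report="$REPORT_DIR/discovery-$(date +%Y%m%d-%H%M%S).md"
  if "$DISCOVERY_CMD" > "$report" 2>&1; then
    echo "PASS: evidence captured at $report"
    return 0
  else
    rc=$?
    echo "FAIL (exit $rc): review the critical failure summary in $report" >&2
    return "$rc"
  fi
}
```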
SECTION 4 — PROXMOX VE DEPLOYMENT DESIGN
Current state
- Current cluster footprint is smaller than the target sovereign model.
- Current storage is primarily local ZFS / LVM-based, not Ceph-backed distributed storage.
- Current workload placement is represented as preferred host in the planning template, not guaranteed live placement.
Target model
- Multi-node Proxmox VE cluster with stable quorum
- HA-aware workload placement
- Dedicated roles for core compute, RPC exposure, identity/workflow DLT, ingress, and future storage tiers
Current interpretation rule
This plan must not describe the target sovereignized Proxmox model as already achieved. All references to HA, Ceph, dedicated storage nodes, or dedicated network nodes are roadmap items unless Phase 1 evidence proves they are already active.
SECTION 5 — NETWORK ARCHITECTURE
Current network reality
- Primary active management / services LAN is 192.168.11.0/24
- Public ingress is fronted through NPMplus / edge services
- RPC exposure is already tiered across core, public, private, named, and thirdweb-facing nodes
Target network layers
- Management network
- Storage replication network
- Blockchain validator / P2P network
- Identity / workflow DLT network
- Public access / DMZ network
- Validator-only restricted paths
Status
- Public access and RPC role separation exist in practice.
- Full sovereign segmentation with dedicated VLANs and zero-trust internal routing remains roadmap work.
SECTION 6 — ENTITY ASSIGNMENT MODEL
Governance model
The entity-assignment model remains valid as a target governance structure:
- DBIS Core Authority
- Central Banks
- International Financial Institutions
- Regional Operators
Current status
- Entity ownership for many deployed workloads is still TBD in the operational matrix.
- Until governance assigns final owners, operator documentation must keep those fields explicitly marked as TBD rather than inventing ownership.
The executable placement artifact is:
SECTION 7 — VM AND CONTAINER DESIGN
Current status by workload family
Deployed now
- Settlement / Besu VM family
- Explorer / observability family
- Ingress / proxy family
- Application and DBIS-support workloads
Partially deployed / under validation
- Workflow VM / CT family for FireFly
- Institutional VM / CT family for Fabric
- Identity VM / CT family for Indy
Planned / aspirational
- Identity VM template that includes proven Aries + AnonCreds runtime
- Interoperability VM template for true Hyperledger Cacti usage
Implementation rule
Template language in this plan must map to actual repo artifacts and actual VMIDs, not hypothetical inventory.
SECTION 8 — STORAGE ARCHITECTURE
Current state
- Current guest storage is backed by local Proxmox storage pools.
- Ceph-backed distributed storage is not yet an achieved platform baseline.
Target state
- Ceph or equivalent distributed storage tier
- Snapshot-aware backup strategy by workload class
- Archive and audit retention policy
Roadmap artifact
SECTION 9 — SECURITY ARCHITECTURE
Current baseline
- Chain 138 validator, sentry, and RPC tiering exists as an operational pattern.
- Public RPC capability validation is now script-backed.
- Explorer and wallet metadata are now explicitly documented and verifiable.
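A script-backed public RPC capability check can be as small as asserting the advertised chain id: Chain 138 is 0x8a in hex, so any endpoint claiming to serve it must return that from eth_chainId. The helper below and `RPC_URL` are illustrative assumptions, not the repo's committed script.

```shell
#!/usr/bin/env sh
# Sketch: validate an eth_chainId JSON-RPC response for Chain 138 (0x8a).
set -u

check_chain_id() {
  resp="$1"
  case "$resp" in
    *'"result":"0x8a"'*) echo "OK: Chain 138 confirmed"; return 0 ;;
    *) echo "MISMATCH: unexpected chain id in: $resp" >&2; return 1 ;;
  esac
}

# Live usage (RPC_URL is an assumption):
#   resp=$(curl -s -X POST "$RPC_URL" -H 'Content-Type: application/json' \
#     --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}')
#   check_chain_id "$resp"
```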
Target-state controls
- HSM-backed key management
- stronger secrets segregation
- certificate hierarchy and operator MFA
- formalized tier-to-tier firewall policy
Status
These remain partially implemented and must not be represented as fully complete without separate evidence.
SECTION 10 — GOVERNANCE ARCHITECTURE
Target
- validator governance across multiple entities
- admission control
- key rotation
- emergency controls
Current state
- Chain 138 validator topology exists
- final multi-entity validator governance assignment is still pending
This section remains a target architecture section, not a statement of fully executed governance.
SECTION 11 — FIREFLY WORKFLOW ARCHITECTURE
Current state
- FireFly primary footprint exists and now exposes a local API again.
- Current restored 6200 configuration is a minimal local gateway profile for stability and API availability.
- Full multiparty FireFly workflow behavior across blockchain, shared storage, and data exchange is not yet evidenced as healthy in the current container deployment.
Program objective
Use FireFly as the workflow layer only after:
- primary and secondary footprints are clearly defined
- connector/plugin model is explicit
- upstream blockchain and shared-storage dependencies are validated
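Before any of those conditions are met, the only safe automated signal is a plain API probe of the restored primary. A minimal sketch, assuming FireFly's standard `/api/v1/status` route and a hypothetical `FIREFLY_URL`; a responding API does not by itself prove healthy multiparty workflow behavior.

```shell
#!/usr/bin/env sh
# Sketch: treat a well-formed FireFly status body as "API up" only.
set -u

firefly_api_up() {
  body="$1"
  case "$body" in
    *'"node"'*|*'"namespace"'*) echo "FireFly API responding"; return 0 ;;
    *) echo "FireFly API not healthy" >&2; return 1 ;;
  esac
}

# Live usage (hypothetical URL; 5000 is FireFly's default API port):
#   firefly_api_up "$(curl -sf "${FIREFLY_URL:-http://127.0.0.1:5000}/api/v1/status")"
```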
SECTION 12 — CROSS-CHAIN INTEROPERABILITY DESIGN
Current state
- CCIP relay and Chain 138 cross-chain infrastructure exist in the broader stack.
- Hyperledger Cacti is not currently proven as the live interoperability engine for DBIS in this environment.
Planning rule
This plan must refer to Cacti as future / optional until a deployed and validated Cacti environment is evidenced in discovery artifacts.
SECTION 13 — DEVSECOPS PIPELINE
Required execution model
- Source control
- Build / validation
- Security and config review
- Service verification
- Deployment
- Monitoring and readiness evidence
Repo-backed implementation
- discovery scripts
- RPC health checks
- route / explorer verification
- operator runbooks
- submodule hygiene and deployment docs
The pipeline is partially implemented via scripts and runbooks; it is not yet a single unified CI/CD system for every DBIS workload.
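Until a unified CI/CD system exists, the existing scripts can still be chained into one fail-fast gate. A sketch of that pattern, with placeholder stage commands (the `name=command` pairs shown are illustrative, not committed tooling):

```shell
#!/usr/bin/env sh
# Sketch: run verification stages in order, stop at the first failure.
set -u

run_stages() {
  for stage in "$@"; do
    name="${stage%%=*}"
    cmd="${stage#*=}"
    if sh -c "$cmd" >/dev/null 2>&1; then
      echo "stage $name: PASS"
    else
      echo "stage $name: FAIL" >&2
      return 1
    fi
  done
  echo "all stages passed"
}

# Example (placeholder commands):
#   run_stages "discovery=scripts/verify/run-phase1-discovery.sh" \
#              "routes=scripts/verify/<route-check>.sh"
```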
SECTION 14 — PERFORMANCE VALIDATION
Current state
- Hyperledger Caliper is not vendored in this repo.
- A documented performance hook exists instead of a committed benchmark harness.
Canonical artifact
Interpretation rule
Performance benchmarking is planned and documented, but not yet a routine automated readiness gate.
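Consistent with the "hook, not harness" posture, the stub can simply print the Caliper invocation an operator would run once a workspace is vendored. The file paths below are assumptions; `caliper launch manager` and the `--caliper-*` flags are standard Caliper CLI usage.

```shell
#!/usr/bin/env sh
# Sketch of a performance hook stub: print, don't execute, the benchmark
# command, since Caliper is not vendored in this repo.
set -u

print_caliper_stub() {
  cat <<'EOF'
npx caliper launch manager \
  --caliper-workspace . \
  --caliper-benchconfig benchmarks/chain138-config.yaml \
  --caliper-networkconfig networks/chain138-besu.json
EOF
}
```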
SECTION 15 — MONITORING AND OBSERVABILITY
Deployed now
- Explorer / Blockscout
- Besu RPC health verification
- operational checks and route verification scripts
Partially deployed / under validation
- Hyperledger-side service health beyond CT status
- unified status reporting for the broader DLT stack
SECTION 16 — DISASTER RECOVERY DESIGN
Target state
- RPO / RTO by workload tier
- cross-site replication
- cold / standby recovery paths
Current state
DR remains a program requirement, not a fully evidenced completed deployment capability.
SECTION 17 — PRODUCTION DEPLOYMENT WORKFLOW
Phase 1 — Reality mapping
Canonical implementation:
Phase 2 — Sovereignization roadmap
Canonical implementation:
Phase 3 — Liveness and production-simulation wrapper
Canonical implementation:
- scripts/verify/run-dbis-phase3-e2e-simulation.sh
- docs/03-deployment/DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md
SECTION 18 — END-TO-END PRODUCTION FLOW
Reference flow
- Identity issued
- Credential verified
- Workflow triggered
- Settlement executed
- Cross-chain sync
- Compliance recorded
- Final settlement confirmed
Current interpretation
This is the target business flow. Current automation verifies only selected infrastructure slices of that flow:
- Besu liveness
- optional FireFly HTTP
- operator-guided manual follow-ups for Indy / Fabric / CCIP
It must not be represented as fully automated end-to-end execution today.
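The Besu liveness slice reduces to one check: block height must advance between two eth_blockNumber samples. A sketch under that assumption (the comparison helper and any endpoint/helper names are illustrative):

```shell
#!/usr/bin/env sh
# Sketch: pass only if the chain advanced between two hex block heights.
set -u

besu_is_advancing() {
  first="$1"
  second="$2"
  # POSIX arithmetic accepts 0x-prefixed hex, as returned by eth_blockNumber.
  [ "$((second))" -gt "$((first))" ]
}

# Live usage (rpc_block_number is a hypothetical helper wrapping curl):
#   h1=$(rpc_block_number); sleep 5; h2=$(rpc_block_number)
#   besu_is_advancing "$h1" "$h2" && echo "Besu liveness: PASS"
```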
SECTION 19 — EXECUTION DIRECTIVES
Cursor / operators should execute the following in order:
- Run Phase 1 discovery and review the critical failure summary.
- Reconcile node-role matrix conflicts, especially duplicate IP planning entries.
- Validate live Hyperledger CTs at the app layer, not only CT status.
- Track sovereignization gaps in the Phase 2 roadmap.
- Run the Phase 3 liveness wrapper and manual follow-ups.
- Produce or refresh readiness evidence.
These directives must map to repo scripts and docs, not hypothetical tooling.
SECTION 20 — EXPECTED DELIVERABLES
The executable deliverables in this repository are:
- Infrastructure inventory report
- Node role assignment map
- Phase 2 sovereignization roadmap
- Phase 3 liveness simulation runbook
- Caliper performance hook
- Operator readiness checklist
Separate security compliance and benchmark reports remain future deliverables unless explicitly generated.
SECTION 21 — CURRENT GAPS
Infrastructure gaps
- FireFly secondary 6201 is currently stopped and should be treated as retired / standby until intentionally rebuilt.
- Fabric CTs are present in inventory, but current app-level verification did not prove active Fabric peer or orderer services and did not show meaningful Fabric payloads; they are now intentionally stopped as reserved placeholders.
- Indy CTs are present in inventory, but current app-level verification did not prove active Indy validator listeners and did not show meaningful Indy payloads; they are now intentionally stopped as reserved placeholders.
- The current per-node app-level evidence table is maintained in docs/03-deployment/DBIS_HYPERLEDGER_RUNTIME_STATUS.md.
Platform gaps
- Ceph-backed distributed storage is still roadmap work.
- Full VLAN / sovereign network segmentation is still roadmap work.
- Final entity ownership assignments remain incomplete.
Planning gaps
- Future-state architecture items must remain clearly labeled as planned, not deployed.
SECTION 22 — IMPLEMENTATION ARTIFACTS
Executable counterparts in this repository:
| Deliverable | Location |
|---|---|
| Node Role Matrix | docs/02-architecture/DBIS_NODE_ROLE_MATRIX.md |
| Phase 1 discovery | scripts/verify/run-phase1-discovery.sh, docs/03-deployment/PHASE1_DISCOVERY_RUNBOOK.md, reports/phase1-discovery/ |
| Phase 2 roadmap | docs/02-architecture/DBIS_PHASE2_PROXMOX_SOVEREIGNIZATION_ROADMAP.md |
| Phase 3 liveness wrapper | scripts/verify/run-dbis-phase3-e2e-simulation.sh, docs/03-deployment/DBIS_PHASE3_E2E_PRODUCTION_SIMULATION_RUNBOOK.md |
| Production gate | docs/03-deployment/DBIS_PHASES_1_TO_3_PRODUCTION_GATE.md |
| Caliper hook | docs/03-deployment/CALIPER_CHAIN138_PERF_HOOK.md, scripts/verify/print-caliper-chain138-stub.sh |
| Operator readiness checklist | docs/00-meta/OPERATOR_READY_CHECKLIST.md section 10 |