DBIS Hyperledger Runtime Status
Last Reviewed: 2026-04-05
Purpose: Concise app-level status table for the non-Besu Hyperledger footprint currently hosted on Proxmox. This complements the VMID inventory and discovery runbooks by recording what was actually verified inside the running containers.
Scope
This document summarizes the latest operator verification for:
- Cacti CTs: 5200, 5201, 5202
- FireFly CTs: 6200, 6201
- Fabric CTs: 6000, 6001, 6002
- Indy CTs: 6400, 6401, 6402
- Aries / AnonCreds CT: 6500
- Caliper CT: 6600
The checks were based on:
- `pct status`
- in-container process checks
- in-container listener checks
- FireFly API / Postgres / IPFS checks where applicable
- ACA-Py admin API and container-inspection checks where applicable
- Caliper CLI / bind / RPC reachability checks where applicable
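The check sequence above can be sketched as a single dry-run script. This is an illustration, not one of the repository's maintenance scripts: it prints each probe command instead of executing it, so the sequence can be reviewed before running on the Proxmox host. VMID 6200 and the loopback URLs are examples taken from this document; the in-container `curl`/`pg_isready` paths via `pct exec` are assumptions to adapt per CT.

```shell
#!/bin/sh
# Dry-run sketch of the per-CT verification sequence (example: FireFly CT 6200).
# Prints each probe rather than running it; adjust VMID, ports, and tools per CT.

probe() {
  # $1 = label for the check family; remaining args = the command to review.
  label="$1"; shift
  printf '[%s] %s\n' "$label" "$*"
}

vmid=6200
probe "ct-state" pct status "$vmid"
probe "process"  pct exec "$vmid" -- docker ps
probe "listener" pct exec "$vmid" -- ss -tlnp
probe "app-api"  pct exec "$vmid" -- curl -fsS http://127.0.0.1:5000/api/v1/status
probe "postgres" pct exec "$vmid" -- pg_isready -h 127.0.0.1 -p 5432
```

Running the script yields a reviewable checklist; swapping the `probe` wrapper for `eval "$*"` (or removing it) would execute the checks for real.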
Current status table
| VMID | Service family | CT status | App-level status | Listening ports / probe | Notes |
|---|---|---|---|---|---|
| 5200 | Cacti primary | Running | Healthy Besu connector gateway | 4000/tcp Cacti API, 5000/tcp local gRPC sidecar | Reworked from the stale two-container template into a live `ghcr.io/hyperledger/cactus-connector-besu:2024-07-04-8c030ae` runtime; container health is healthy; `GET /api/v1/api-server/healthcheck` returned 200; Besu connector plugin loaded against `http://192.168.11.211:8545` / `ws://192.168.11.211:8546`. |
| 5201 | Cacti secondary | Running | Healthy public Cacti surface | 80/tcp landing page, 4000/tcp local Cacti API | `GET /` returned 200; `GET /api/v1/api-server/healthcheck` returned 200; intended public surface for `cacti-alltra.d-bis.org`, with the local Hyperledger Cacti API kept healthy behind it. |
| 5202 | Cacti tertiary | Running | Healthy public Cacti surface | 80/tcp landing page, 4000/tcp local Cacti API | Same disposition as 5201, but for `cacti-hybx.d-bis.org`; both the web landing page and the local Cacti API are healthy. |
| 6200 | FireFly primary | Running | Healthy minimal local gateway | 5000/tcp FireFly API, 5432/tcp Postgres, 5001/tcp IPFS | `firefly-core` is healthy on `ghcr.io/hyperledger/firefly:v1.2.0`; `GET /api/v1/status` returned 200; Postgres `pg_isready` passed; IPFS version probe passed. On 2026-04-05 the CT was reconciled into an idempotent mixed stack: compose now manages Postgres/IPFS, a helper-backed `firefly.service` treats the existing legacy `firefly-core` container as canonical instead of trying to recreate it, and all three containers use `restart=unless-stopped`. |
| 6201 | FireFly secondary | Stopped | Formally retired until rebuilt | None verified | CT exists in inventory, but the rootfs is effectively empty and no valid FireFly deployment footprint was found. Treat this as retired / standby metadata only until it is intentionally rebuilt as a real secondary node. |
| 6000 | Fabric primary | Running | Operational sample network | 7050/tcp orderer, 7051/tcp org1 peer, 9051/tcp org2 peer, 9443/9444/9445 operations ports | Official `fabric-samples` payload staged under `/opt/fabric`; the sample network was restored and reverified on 2026-04-05; `orderer.example.com`, `peer0.org1.example.com`, and `peer0.org2.example.com` are running again; `peer channel getinfo -c mychannel` now returns height 1. Nested LXC requires `features: nesting=1,keyctl=1`, and `fabric-sample-network.service` is now installed to re-ensure the sample network after CT restarts. |
| 6001 | Fabric secondary | Stopped | Reserved placeholder | None active | No proven Fabric application payload or listeners; now stopped and reserved only as placeholder inventory. |
| 6002 | Fabric tertiary | Stopped | Reserved placeholder | None active | Same disposition as 6001: no proven Fabric application payload or listeners; now stopped and reserved only as placeholder inventory. |
| 6400 | Indy primary | Running | Healthy four-node local validator pool | 9701-9708/tcp validator and client listeners | `hyperledgerlabs/indy-node:latest` now runs `indy-node-1` through `indy-node-4` under `/opt/indy/docker-compose.yml`; `systemctl is-active indy` returned active and `systemctl is-enabled indy` returned enabled; all expected `start_indy_node` listeners are bound on 0.0.0.0. |
| 6401 | Indy secondary | Stopped | Reserved placeholder | None active | No proven Indy application payload or listeners; now stopped and reserved only as placeholder inventory. |
| 6402 | Indy tertiary | Stopped | Reserved placeholder | None active | Same disposition as 6401: no proven Indy application payload or listeners; now stopped and reserved only as placeholder inventory. |
| 6500 | Aries / AnonCreds primary | Running | Healthy ACA-Py agent on the askar-anoncreds wallet path | 8030/tcp DIDComm endpoint, 8031/tcp admin API | `acapy-agent` is running from `ghcr.io/openwallet-foundation/acapy-agent:py3.12-1.3-lts`; `GET /status/live` returned `{"alive": true}`; `docker inspect` confirms `--wallet-type askar-anoncreds`, `--endpoint http://192.168.11.88:8030`, and a real Indy genesis file mounted from the 6400 pool artifacts. |
| 6600 | Caliper primary | Running | Operational benchmark workspace | No inbound app port required; `npx caliper --version` returned 0.6.0 | `/opt/caliper/workspace` contains an upstream Caliper CLI install; `npx caliper bind --caliper-bind-sut besu:1.4` succeeded; `npm ls` confirms `@hyperledger/caliper-cli@0.6.0` and `web3@1.3.0`; RPC reachability to `http://192.168.11.211:8545` was verified with `eth_blockNumber`. |
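The HTTP probes recorded in the table can be replayed with a small loop. This is a hedged sketch, not a verified script: it assumes each endpoint is reachable in-container on loopback via `pct exec` (the table records only the ports, not the probe path), and it prints the commands rather than executing them.

```shell
#!/bin/sh
# Sketch: replay the app-level HTTP probes behind the status table.
# Assumption: each API answers on 127.0.0.1 inside its CT; adjust if the
# probes are instead run against the CT's LAN address.

check() {
  vmid="$1"; url="$2"
  echo "pct exec $vmid -- curl -fsS $url"
}

check 5200 http://127.0.0.1:4000/api/v1/api-server/healthcheck  # Cacti primary
check 6200 http://127.0.0.1:5000/api/v1/status                  # FireFly primary
check 6500 http://127.0.0.1:8031/status/live                    # ACA-Py admin API
```

A non-200 response (or `curl` exit code 22 with `-f`) on any line would demote that row from "Healthy" pending re-verification.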
Interpretation
Confirmed working now
- Cacti primary (5200) is live as a local Cacti API with the Besu connector plugin loaded and healthy.
- Cacti public surfaces (5201 and 5202) are live as landing pages plus local Hyperledger Cacti APIs for the Alltra and HYBX edges.
- FireFly primary (6200) is restored enough to provide a working local FireFly API backed by Postgres and IPFS.
- Fabric primary (6000) now runs a verified official sample network with one orderer and two peers joined to `mychannel`.
- Indy primary (6400) now runs a verified four-node local validator pool with all expected node and client listeners active.
- Aries / AnonCreds primary (6500) now runs a verified ACA-Py agent with the `askar-anoncreds` wallet type against the local Indy genesis.
- Caliper primary (6600) now hosts a verified upstream Caliper workspace with the Besu 1.4 binding installed and Chain 138 RPC reachability confirmed.
Present only as reserved placeholders right now
- Fabric CTs (6001-6002)
- Indy CTs (6401-6402)
These should be described as reserved placeholder inventory only. The primaries 6000 and 6400 are now active application nodes, while the secondary and tertiary CTs remain inactive inventory.
Not currently active
- FireFly secondary (6201) should be treated as formally retired / standby metadata unless it is intentionally rebuilt and verified.
Operational follow-up
- Keep 5200, 5201, 5202, 6000, 6200, and 6400 under observation and preserve their working images, config paths, and nested-Docker allowances.
- Use `scripts/maintenance/ensure-fabric-sample-network-via-ssh.sh` when the Fabric sample network on 6000 drifts back to stopped peer/orderer containers or after unit-level changes.
- Use `scripts/maintenance/ensure-firefly-primary-via-ssh.sh` when 6200 shows a failed `firefly.service` state, a compose-version mismatch, or drift between the compose-managed dependencies and the existing `firefly-core` container.
- Keep 6500 and 6600 under observation as primaries for identity-agent and benchmark-harness work, and preserve the Indy genesis handoff plus the Caliper workspace state.
- Do not force 6201, 6001, 6002, 6401, or 6402 online unless their intended roles and deployment assets are re-established from scratch.
- Any governance or architecture document should distinguish three states: deployed and app-healthy, container present only, and planned / aspirational.
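The three-state distinction above can be made mechanical. The sketch below is a hypothetical helper (not part of the repository's maintenance scripts) that derives the state from two observations per CT: whether the CT exists in the VMID inventory and is running, and whether its app-level probe passed.

```shell
#!/bin/sh
# Sketch: map (inventory presence, pct state, app-probe result) to the
# three governance states named above. Inputs are plain strings so the
# logic can be tested without touching the Proxmox host.

classify() {
  exists="$1"     # "yes" if the CT appears in the VMID inventory
  ct_state="$2"   # "running" or "stopped", as reported by `pct status`
  app_ok="$3"     # "yes" if the app-level probe passed
  if [ "$exists" != "yes" ]; then
    echo "planned / aspirational"
  elif [ "$ct_state" = "running" ] && [ "$app_ok" = "yes" ]; then
    echo "deployed and app-healthy"
  else
    echo "container present only"
  fi
}

classify yes running yes   # e.g. 5200, 6000, 6200, 6400, 6500, 6600
classify yes stopped no    # e.g. 6001, 6002, 6201, 6401, 6402
```

Note that a running CT whose probe fails also lands in "container present only", which matches the document's rule that only verified app-level health counts as deployed.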