Live Deployment Map

Current production deployment map for the SolaceScan public explorer surface.

This file is the authoritative reference for the live explorer stack as of 2026-04-05. It supersedes the older monolithic deployment notes in this directory when the question is "what is running in production right now?"

Public Entry Point

  • Canonical public domain: https://blockscout.defi-oracle.io
  • Companion surface: https://explorer.d-bis.org
  • Primary container: VMID 5000 (192.168.11.140, blockscout-1)
  • Public edge: nginx on VMID 5000

VMID 5000 Internal Topology

| Surface | Internal listener | Owner | Public paths |
| --- | --- | --- | --- |
| nginx | 80, 443 | VMID 5000 | terminates public traffic |
| Next frontend | 127.0.0.1:3000 | solacescanscout-frontend.service | /, /bridge, /routes, /more, /wallet, /liquidity, /pools, /analytics, /operator, /system, /weth |
| Explorer config/API | 127.0.0.1:8081 | explorer-config-api.service | /api/config/*, /explorer-api/v1/* |
| Blockscout | 127.0.0.1:4000 | existing Blockscout stack | /api/v2/* and Blockscout-backed explorer data |
| Token aggregation | 127.0.0.1:3001 | token-aggregation service | /token-aggregation/api/v1/* |
| Static config assets | /var/www/html/config, /var/www/html/token-icons | nginx static files | /config/*, /token-icons/* |
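As an illustrative nginx fragment (a sketch only; directive details, TLS settings, and location ordering are assumptions, and the live site file on VMID 5000 is authoritative), the listener split above looks roughly like:

```nginx
# Sketch of the VMID 5000 reverse-proxy split; backends mirror the
# topology table, but this is not the live configuration.
server {
    listen 443 ssl;
    server_name blockscout.defi-oracle.io;

    location /api/config/        { proxy_pass http://127.0.0.1:8081; }
    location /explorer-api/v1/   { proxy_pass http://127.0.0.1:8081; }
    location /api/v2/            { proxy_pass http://127.0.0.1:4000; }
    location /token-aggregation/ { proxy_pass http://127.0.0.1:3001; }

    # Static assets served directly by nginx.
    location /config/      { root /var/www/html; }
    location /token-icons/ { root /var/www/html; }

    # Everything else falls through to the Next frontend.
    location / { proxy_pass http://127.0.0.1:3000; }
}
```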

Canonical Deploy Scripts

| Component | Canonical deploy path | Notes |
| --- | --- | --- |
| Next frontend | deploy-next-frontend-to-vmid5000.sh | Builds the Next standalone bundle and installs solacescanscout-frontend.service on port 3000 |
| Explorer config assets | deploy-explorer-config-to-vmid5000.sh | Publishes token list, networks, capabilities, topology, verification example, and token icons |
| Explorer config/API backend | deploy-explorer-ai-to-vmid5000.sh | Builds and installs explorer-config-api.service on port 8081 and normalizes nginx /explorer-api/v1/* routing |
| RPC/API-key edge enforcement | ACCESS_EDGE_ENFORCEMENT_RUNBOOK.md, render-rpc-access-gate-nginx.sh | Canonical nginx auth_request pattern plus renderer for 2101 / 2102 / 2103 lanes using the explorer validator |
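For context, the auth_request pattern the runbook renders follows the standard nginx shape sketched below. The validator route and lane port are assumptions for illustration; ACCESS_EDGE_ENFORCEMENT_RUNBOOK.md and the rendered output of render-rpc-access-gate-nginx.sh are authoritative.

```nginx
# Sketch of an auth_request gate in front of an RPC lane; the validator
# path and the 2101 upstream are illustrative, not the live config.
location = /_validate_access {
    internal;
    proxy_pass http://127.0.0.1:8081;  # explorer validator (assumed port)
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

location /rpc/ {
    auth_request /_validate_access;    # reject unless the validator says 2xx
    proxy_pass http://127.0.0.1:2101;  # one of the 2101 / 2102 / 2103 lanes
}
```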

Relay Topology

CCIP relay workers do not run inside VMID 5000. They run on host r630-01 and are consumed by the explorer API through relay-health probes.

| Service file | Profile | Port | Current role |
| --- | --- | --- | --- |
| ccip-relay.service | mainnet-weth | 9860 | Mainnet WETH lane, intentionally paused |
| ccip-relay-mainnet-cw.service | mainnet-cw | 9863 | Mainnet cW lane |
| ccip-relay-bsc.service | bsc | 9861 | BSC lane |
| ccip-relay-avax.service | avax | 9862 | Avalanche lane |
| ccip-relay-avax-cw.service | avax-cw | 9864 | Avalanche cW lane |
| ccip-relay-avax-to-138.service | avax-to-138 | 9865 | Reverse Avalanche to Chain 138 lane |

The explorer backend reads these through CCIP_RELAY_HEALTH_URL or CCIP_RELAY_HEALTH_URLS; see backend/api/rest/README.md.
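As a hedged sketch of how those probes expand (the comma-separated format, the /health path, and the host ports are assumptions drawn from the relay table; backend/api/rest/README.md defines the real contract):

```shell
#!/bin/sh
# Sketch only: assumes CCIP_RELAY_HEALTH_URLS is a comma-separated list of
# relay health endpoints. Example hosts/ports mirror the table above and
# are illustrative, not authoritative.
CCIP_RELAY_HEALTH_URLS="${CCIP_RELAY_HEALTH_URLS:-http://r630-01:9863/health,http://r630-01:9861/health}"

echo "$CCIP_RELAY_HEALTH_URLS" | tr ',' '\n' | while IFS= read -r url; do
  [ -n "$url" ] || continue
  echo "probing $url"
  # Uncomment to actually hit each relay health endpoint:
  # curl -fsS --max-time 5 "$url" || echo "relay probe failed: $url"
done
```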

Public Verification Points

The following endpoints currently describe the live deployment contract:

  • https://blockscout.defi-oracle.io/
  • https://blockscout.defi-oracle.io/bridge
  • https://blockscout.defi-oracle.io/routes
  • https://blockscout.defi-oracle.io/liquidity
  • https://blockscout.defi-oracle.io/api/config/capabilities
  • https://blockscout.defi-oracle.io/config/CHAIN138_RPC_CAPABILITIES.json
  • https://blockscout.defi-oracle.io/explorer-api/v1/features
  • https://blockscout.defi-oracle.io/explorer-api/v1/track1/bridge/status
  • https://blockscout.defi-oracle.io/explorer-api/v1/mission-control/stream
  • https://blockscout.defi-oracle.io/token-aggregation/api/v1/routes/matrix
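A hedged sweep over these verification points can be scripted as below. The mission-control SSE endpoint is deliberately skipped because it holds the connection open, and the actual curl call is left commented so the sketch is safe to read without hitting production:

```shell
#!/bin/sh
# Sketch: iterate the non-streaming verification endpoints listed above
# and (optionally) print each HTTP status.
BASE="https://blockscout.defi-oracle.io"
PATHS="/
/bridge
/routes
/liquidity
/api/config/capabilities
/config/CHAIN138_RPC_CAPABILITIES.json
/explorer-api/v1/features
/explorer-api/v1/track1/bridge/status
/token-aggregation/api/v1/routes/matrix"

for p in $PATHS; do
  echo "checking $BASE$p"
  # Uncomment to run against production:
  # curl -fsS -o /dev/null -w "  -> %{http_code}\n" "$BASE$p"
done
```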

When a change spans multiple explorer surfaces, use this order:

  1. Deploy static config assets with deploy-explorer-config-to-vmid5000.sh.
  2. Deploy the explorer config/API backend with deploy-explorer-ai-to-vmid5000.sh.
  3. Deploy the Next frontend with deploy-next-frontend-to-vmid5000.sh.
  4. If nginx routing changed, validate the VMID 5000 nginx site configuration (for example with nginx -t) before reloading.
  5. Run check-explorer-health.sh against the public domain.
  6. Confirm relay visibility on /explorer-api/v1/track1/bridge/status and mission-control SSE.
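The six steps above can be sketched as a dry-run wrapper. Script locations and arguments are assumptions (the commands simply echo by default); consult each deploy script's own usage before executing for real:

```shell
#!/bin/sh
# Dry-run sketch of the multi-surface rollout order. Flip run_step to
# execute commands once paths and arguments are confirmed.
set -eu

DOMAIN="https://blockscout.defi-oracle.io"

run_step() {
  echo "==> $*"
  # Uncomment to execute for real:
  # "$@"
}

run_step ./deploy-explorer-config-to-vmid5000.sh   # 1. static config assets
run_step ./deploy-explorer-ai-to-vmid5000.sh       # 2. config/API backend
run_step ./deploy-next-frontend-to-vmid5000.sh     # 3. Next frontend
run_step nginx -t                                  # 4. verify nginx site (on VMID 5000)
run_step ./check-explorer-health.sh "$DOMAIN"      # 5. public health check
run_step curl -fsS "$DOMAIN/explorer-api/v1/track1/bridge/status"  # 6. relay visibility
```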

When a change spans relays as well:

  1. Deploy or restart the relevant ccip-relay*.service unit on r630-01.
  2. Ensure the explorer backend relay probe env still matches the active host ports.
  3. Recheck /explorer-api/v1/track1/bridge/status and /explorer-api/v1/mission-control/stream.
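The relay-side follow-up can be sketched with standard systemctl calls. The ssh access path is an assumption, and the unit name is just one example from the relay table; substitute the affected ccip-relay*.service:

```shell
#!/bin/sh
# Sketch of restarting a relay lane on r630-01 and rechecking explorer
# visibility. Commands are echoed, not executed.
set -eu

RELAY_HOST="r630-01"                  # relay host from the topology section
UNIT="ccip-relay-mainnet-cw.service"  # example unit; pick the affected lane

echo "would run: ssh $RELAY_HOST sudo systemctl restart $UNIT"
# ssh "$RELAY_HOST" sudo systemctl restart "$UNIT"
# ssh "$RELAY_HOST" systemctl is-active "$UNIT"
# curl -fsS https://blockscout.defi-oracle.io/explorer-api/v1/track1/bridge/status
```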

Current Gaps And Legacy Footguns

  • Older docs in this directory still describe a retired monolithic API-plus-frontend package. That is no longer the production deployment shape.
  • ALL_VMIDS_ENDPOINTS.md is still correct at the public ingress level, but it intentionally compresses the explorer into :80/:443 and Blockscout :4000. Use this file for the detailed internal listener split.
  • There is no single one-shot script in this repo that fully deploys Blockscout, nginx, token aggregation, explorer-config-api, Next frontend, and host-side relays together. Production is currently assembled from the component deploy scripts above.
  • mainnet-weth is deployed but intentionally paused until that bridge lane is funded again.
  • Etherlink and XDC Zero remain separate bridge programs; they are not part of the current CCIP relay fleet described here.

Source Of Truth

Use these in order:

  1. This file for the live explorer deployment map.
  2. ALL_VMIDS_ENDPOINTS.md for VMID, IP, and public ingress inventory.
  3. The deploy scripts themselves for exact install behavior.
  4. check-explorer-health.sh plus check-explorer-e2e.sh for public verification.