feat: explorer API, wallet, CCIP scripts, and config refresh

- Backend REST/gateway/track routes, analytics, Blockscout proxy paths.
- Frontend wallet and liquidity surfaces; MetaMask token list alignment.
- Deployment docs, verification scripts, address inventory updates.

Check: go build ./... under backend/ (pass).
Made-with: Cursor
This commit is contained in:
defiQUG
2026-04-07 23:22:12 -07:00
parent d931be8e19
commit 6eef6b07f6
224 changed files with 19671 additions and 3291 deletions


@@ -1,183 +1,40 @@
# Deployment Summary
## Complete Deployment Package
All deployment files and scripts have been created and are ready for use.

This directory contains two different kinds of deployment material:

- current production references for the live explorer stack
- older monolithic deployment scaffolding that is still useful as background, but is no longer the authoritative description of production

## Current Production Summary

## 📁 File Structure
```
deployment/
├── DEPLOYMENT_GUIDE.md # Complete step-by-step guide (1,079 lines)
├── DEPLOYMENT_TASKS.md # Detailed 71-task checklist (561 lines)
├── DEPLOYMENT_CHECKLIST.md # Interactive checklist (204 lines)
├── DEPLOYMENT_SUMMARY.md # This file
├── QUICK_DEPLOY.md # Quick command reference
├── README.md # Documentation overview
├── ENVIRONMENT_TEMPLATE.env # Environment variables template
├── nginx/
│ └── explorer.conf # Complete Nginx configuration
├── cloudflare/
│ └── tunnel-config.yml # Cloudflare Tunnel template
├── systemd/
│ ├── explorer-indexer.service
│ ├── explorer-api.service
│ ├── explorer-frontend.service
│ └── cloudflared.service
├── fail2ban/
│ ├── nginx.conf # Nginx filter
│ └── jail.local # Jail configuration
└── scripts/
├── deploy-lxc.sh # Automated LXC setup
├── install-services.sh # Install systemd services
├── setup-nginx.sh # Setup Nginx
├── setup-cloudflare-tunnel.sh # Setup Cloudflare Tunnel
├── setup-firewall.sh # Configure firewall
├── setup-fail2ban.sh # Configure Fail2ban
├── setup-backup.sh # Setup backup system
├── setup-health-check.sh # Setup health monitoring
├── build-all.sh # Build all applications
├── verify-deployment.sh # Verify deployment
└── full-deploy.sh # Full automated deployment
```
Start with [`LIVE_DEPLOYMENT_MAP.md`](./LIVE_DEPLOYMENT_MAP.md).

The live explorer is currently assembled from separate deployment paths:

| Component | Live service | Canonical deploy path |
|---|---|---|
| Next frontend | `solacescanscout-frontend.service` | [`scripts/deploy-next-frontend-to-vmid5000.sh`](../scripts/deploy-next-frontend-to-vmid5000.sh) |
| Explorer config/API | `explorer-config-api.service` | [`scripts/deploy-explorer-ai-to-vmid5000.sh`](../scripts/deploy-explorer-ai-to-vmid5000.sh) |
| Static config assets | nginx static files under `/var/www/html` | [`scripts/deploy-explorer-config-to-vmid5000.sh`](../scripts/deploy-explorer-config-to-vmid5000.sh) |
| Relay fleet | `ccip-relay*.service` on `r630-01` | host-side `config/systemd/ccip-relay*.service` |

## Public Verification

- [`check-explorer-health.sh`](../scripts/check-explorer-health.sh)
- [`check-explorer-e2e.sh`](../../scripts/verify/check-explorer-e2e.sh)
- `https://explorer.d-bis.org/api/config/capabilities`
- `https://explorer.d-bis.org/explorer-api/v1/track1/bridge/status`
- `https://explorer.d-bis.org/explorer-api/v1/mission-control/stream`

## Legacy Material In This Directory

These files remain in the repo, but they describe an older generalized package:

## 🚀 Quick Start

### Option 1: Automated Deployment
```bash
# Run full automated deployment
sudo ./deployment/scripts/full-deploy.sh
```

### Option 2: Step-by-Step Manual
```bash
# 1. Read the guide
cat deployment/DEPLOYMENT_GUIDE.md

# 2. Follow tasks
# Use deployment/DEPLOYMENT_TASKS.md

# 3. Track progress
# Use deployment/DEPLOYMENT_CHECKLIST.md
```

## 📋 Deployment Phases
1. **LXC Container Setup** (8 tasks)
- Create container
- Configure resources
- Install base packages
2. **Application Installation** (12 tasks)
- Install Go, Node.js, Docker
- Clone repository
- Build applications
3. **Database Setup** (10 tasks)
- Install PostgreSQL + TimescaleDB
- Create database
- Run migrations
4. **Infrastructure Services** (6 tasks)
- Deploy Elasticsearch
- Deploy Redis
5. **Application Services** (10 tasks)
- Configure environment
- Create systemd services
- Start services
6. **Nginx Reverse Proxy** (9 tasks)
- Install Nginx
- Configure reverse proxy
- Set up SSL
7. **Cloudflare Configuration** (18 tasks)
- Configure DNS
- Set up SSL/TLS
- Configure Tunnel
- Set up WAF
- Configure caching
8. **Security Hardening** (12 tasks)
- Configure firewall
- Set up Fail2ban
- Configure backups
- Harden SSH
9. **Monitoring** (8 tasks)
- Set up health checks
- Configure logging
- Set up alerts
## 🔧 Available Scripts
| Script | Purpose |
|--------|---------|
| `deploy-lxc.sh` | Automated LXC container setup |
| `build-all.sh` | Build all applications |
| `install-services.sh` | Install systemd service files |
| `setup-nginx.sh` | Configure Nginx |
| `setup-cloudflare-tunnel.sh` | Setup Cloudflare Tunnel |
| `setup-firewall.sh` | Configure UFW firewall |
| `setup-fail2ban.sh` | Configure Fail2ban |
| `setup-backup.sh` | Setup backup system |
| `setup-health-check.sh` | Setup health monitoring |
| `verify-deployment.sh` | Verify deployment |
| `full-deploy.sh` | Full automated deployment |
## 📝 Configuration Files
- **Nginx**: `nginx/explorer.conf`
- **Cloudflare Tunnel**: `cloudflare/tunnel-config.yml`
- **Systemd Services**: `systemd/*.service`
- **Fail2ban**: `fail2ban/*.conf`
- **Environment Template**: `ENVIRONMENT_TEMPLATE.env`
## ✅ Verification Checklist
After deployment, verify:
- [ ] All services running
- [ ] API responding: `curl http://localhost:8080/health`
- [ ] Frontend loading: `curl http://localhost:3000`
- [ ] Nginx proxying: `curl http://localhost/api/health`
- [ ] Database accessible
- [ ] DNS resolving
- [ ] SSL working (if direct connection)
- [ ] Cloudflare Tunnel connected (if using)
- [ ] Firewall configured
- [ ] Backups running
## 🆘 Troubleshooting
See `QUICK_DEPLOY.md` for:
- Common issues
- Quick fixes
- Emergency procedures
## 📊 Statistics
- **Total Tasks**: 71
- **Documentation**: 1,844+ lines
- **Scripts**: 11 automation scripts
- **Config Files**: 8 configuration templates
- **Estimated Time**: 6-8 hours (first deployment)
## 🎯 Next Steps
1. Review `DEPLOYMENT_GUIDE.md`
2. Prepare environment (Proxmox, Cloudflare)
3. Run deployment scripts
4. Verify deployment
5. Configure monitoring
---
**All deployment files are ready!**
- `DEPLOYMENT_GUIDE.md`
- `DEPLOYMENT_TASKS.md`
- `DEPLOYMENT_CHECKLIST.md`
- `QUICK_DEPLOY.md`
- `systemd/explorer-api.service`
- `systemd/explorer-frontend.service`
Treat those as scaffold or historical reference unless they have been explicitly updated to match the live split architecture.


@@ -0,0 +1,94 @@
# Live Deployment Map
Current production deployment map for `explorer.d-bis.org`.
This file is the authoritative reference for the live explorer stack as of `2026-04-05`. It supersedes the older monolithic deployment notes in this directory when the question is "what is running in production right now?"
## Public Entry Point
- Public domain: `https://explorer.d-bis.org`
- Primary container: VMID `5000` (`192.168.11.140`, `blockscout-1`)
- Public edge: nginx on VMID `5000`
## VMID 5000 Internal Topology
| Surface | Internal listener | Owner | Public paths |
|---|---:|---|---|
| nginx | `80`, `443` | VMID `5000` | terminates public traffic |
| Next frontend | `127.0.0.1:3000` | `solacescanscout-frontend.service` | `/`, `/bridge`, `/routes`, `/more`, `/wallet`, `/liquidity`, `/pools`, `/analytics`, `/operator`, `/system`, `/weth` |
| Explorer config/API | `127.0.0.1:8081` | `explorer-config-api.service` | `/api/config/*`, `/explorer-api/v1/*` |
| Blockscout | `127.0.0.1:4000` | existing Blockscout stack | `/api/v2/*` and Blockscout-backed explorer data |
| Token aggregation | `127.0.0.1:3001` | token-aggregation service | `/token-aggregation/api/v1/*` |
| Static config assets | `/var/www/html/config`, `/var/www/html/token-icons` | nginx static files | `/config/*`, `/token-icons/*` |
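The listener split above can be captured as data for a local check script run inside VMID `5000`. This is an illustrative sketch, not an existing script; the shorthand names are made up here, and the probe paths (bare `/`) are assumptions — only the ports come from the table.

```shell
#!/usr/bin/env bash
# Internal listener split from the table above, as data a check script
# on VMID 5000 could consume. Names are shorthand; ports match the table.
declare -A SURFACES=(
  [next-frontend]=3000
  [explorer-config-api]=8081
  [blockscout]=4000
  [token-aggregation]=3001
)

for name in "${!SURFACES[@]}"; do
  printf '%-20s 127.0.0.1:%s\n' "$name" "${SURFACES[$name]}"
  # Uncomment to probe each listener from inside the container:
  # curl -fsS -m 5 "http://127.0.0.1:${SURFACES[$name]}/" >/dev/null || echo "  -> not answering"
done
```

Keeping the map in one place means a routing change only has to be updated once.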
## Canonical Deploy Scripts
| Component | Canonical deploy path | Notes |
|---|---|---|
| Next frontend | [`deploy-next-frontend-to-vmid5000.sh`](../scripts/deploy-next-frontend-to-vmid5000.sh) | Builds the Next standalone bundle and installs `solacescanscout-frontend.service` on port `3000` |
| Explorer config assets | [`deploy-explorer-config-to-vmid5000.sh`](../scripts/deploy-explorer-config-to-vmid5000.sh) | Publishes token list, networks, capabilities, topology, verification example, and token icons |
| Explorer config/API backend | [`deploy-explorer-ai-to-vmid5000.sh`](../scripts/deploy-explorer-ai-to-vmid5000.sh) | Builds and installs `explorer-config-api.service` on port `8081` and normalizes nginx `/explorer-api/v1/*` routing |
## Relay Topology
CCIP relay workers do not run inside VMID `5000`. They run on host `r630-01` and are consumed by the explorer API through relay-health probes.
| Service file | Profile | Port | Current role |
|---|---|---:|---|
| [`ccip-relay.service`](../../config/systemd/ccip-relay.service) | `mainnet-weth` | `9860` | Mainnet WETH lane, intentionally paused |
| [`ccip-relay-mainnet-cw.service`](../../config/systemd/ccip-relay-mainnet-cw.service) | `mainnet-cw` | `9863` | Mainnet cW lane |
| [`ccip-relay-bsc.service`](../../config/systemd/ccip-relay-bsc.service) | `bsc` | `9861` | BSC lane |
| [`ccip-relay-avax.service`](../../config/systemd/ccip-relay-avax.service) | `avax` | `9862` | Avalanche lane |
| [`ccip-relay-avax-cw.service`](../../config/systemd/ccip-relay-avax-cw.service) | `avax-cw` | `9864` | Avalanche cW lane |
| [`ccip-relay-avax-to-138.service`](../../config/systemd/ccip-relay-avax-to-138.service) | `avax-to-138` | `9865` | Reverse Avalanche to Chain 138 lane |
The explorer backend reads these through `CCIP_RELAY_HEALTH_URL` or `CCIP_RELAY_HEALTH_URLS`; see [`backend/api/rest/README.md`](../backend/api/rest/README.md).
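A minimal operator-side probe over that wiring might look like the following. The comma-separated shape of `CCIP_RELAY_HEALTH_URLS` follows the backend README; the example host, ports, and `/health` path are placeholders, not confirmed values.

```shell
#!/usr/bin/env bash
# Probe each relay health URL the explorer backend's relay-health checks
# imply. CCIP_RELAY_HEALTH_URLS is comma-separated per the backend README;
# the default URLs and /health path below are placeholders.
set -uo pipefail

split_urls() {
  # Comma-separated list -> one URL per line, blanks trimmed and dropped.
  tr ',' '\n' <<<"$1" | sed 's/^ *//; s/ *$//' | grep -v '^$' || true
}

URLS="${CCIP_RELAY_HEALTH_URLS:-http://r630-01:9861/health, http://r630-01:9862/health}"

while IFS= read -r url; do
  # -f: non-2xx exits non-zero; -m 5: don't hang on a dead lane.
  if curl -fsS -m 5 "$url" >/dev/null 2>&1; then
    echo "UP   $url"
  else
    echo "DOWN $url"
  fi
done < <(split_urls "$URLS")
```

A paused lane such as `mainnet-weth` may legitimately report `DOWN` here; the probe only tells you whether the health endpoint answers.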
## Public Verification Points
The following endpoints currently describe the live deployment contract:
- `https://explorer.d-bis.org/`
- `https://explorer.d-bis.org/bridge`
- `https://explorer.d-bis.org/routes`
- `https://explorer.d-bis.org/liquidity`
- `https://explorer.d-bis.org/api/config/capabilities`
- `https://explorer.d-bis.org/config/CHAIN138_RPC_CAPABILITIES.json`
- `https://explorer.d-bis.org/explorer-api/v1/features`
- `https://explorer.d-bis.org/explorer-api/v1/track1/bridge/status`
- `https://explorer.d-bis.org/explorer-api/v1/mission-control/stream`
- `https://explorer.d-bis.org/token-aggregation/api/v1/routes/matrix`
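A smoke pass over these endpoints can be scripted. This is a hedged sketch and not a replacement for `check-explorer-health.sh`, which remains the real audit; the SSE stream endpoint is deliberately omitted because it holds the connection open rather than completing.

```shell
#!/usr/bin/env bash
# Smoke-check the public verification endpoints listed above; each is
# expected to answer HTTP 200. The SSE stream route is excluded on purpose.
set -u

BASE="${EXPLORER_BASE:-https://explorer.d-bis.org}"
PATHS=(
  /
  /bridge
  /routes
  /liquidity
  /api/config/capabilities
  /config/CHAIN138_RPC_CAPABILITIES.json
  /explorer-api/v1/features
  /explorer-api/v1/track1/bridge/status
  /token-aggregation/api/v1/routes/matrix
)

smoke() {
  local fail=0 code p
  for p in "${PATHS[@]}"; do
    code=$(curl -so /dev/null -m 10 -w '%{http_code}' "$BASE$p") || code=000
    printf '%s %s%s\n' "$code" "$BASE" "$p"
    [ "$code" = 200 ] || fail=1
  done
  return "$fail"
}
```

Run `smoke` after a rollout; a non-zero return means at least one surface is not answering 200.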
## Recommended Rollout Order
When a change spans multiple explorer surfaces, use this order:
1. Deploy static config assets with [`deploy-explorer-config-to-vmid5000.sh`](../scripts/deploy-explorer-config-to-vmid5000.sh).
2. Deploy the explorer config/API backend with [`deploy-explorer-ai-to-vmid5000.sh`](../scripts/deploy-explorer-ai-to-vmid5000.sh).
3. Deploy the Next frontend with [`deploy-next-frontend-to-vmid5000.sh`](../scripts/deploy-next-frontend-to-vmid5000.sh).
4. If nginx routing changed, verify the VMID `5000` nginx site before reload.
5. Run [`check-explorer-health.sh`](../scripts/check-explorer-health.sh) against the public domain.
6. Confirm relay visibility on `/explorer-api/v1/track1/bridge/status` and mission-control SSE.
When a change spans relays as well:
1. Deploy or restart the relevant `ccip-relay*.service` unit on `r630-01`.
2. Ensure the explorer backend relay probe env still matches the active host ports.
3. Recheck `/explorer-api/v1/track1/bridge/status` and `/explorer-api/v1/mission-control/stream`.
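The explorer-surface rollout steps above can be sketched as one stop-on-failure sequence. Script paths are those documented in this file; the assumption that none of them take required arguments is just that, an assumption.

```shell
#!/usr/bin/env bash
# The multi-surface rollout order above as one ordered, stop-on-failure run.
set -euo pipefail

STEPS=(
  scripts/deploy-explorer-config-to-vmid5000.sh
  scripts/deploy-explorer-ai-to-vmid5000.sh
  scripts/deploy-next-frontend-to-vmid5000.sh
  scripts/check-explorer-health.sh
)

for step in "${STEPS[@]}"; do
  echo "==> $step"
  # "./$step"   # uncomment to execute; set -e stops the rollout on first failure
done
```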
## Current Gaps And Legacy Footguns
- Older docs in this directory still describe a monolithic `explorer-api.service` plus `explorer-frontend.service` package. That is no longer the production deployment shape.
- [`ALL_VMIDS_ENDPOINTS.md`](../../docs/04-configuration/ALL_VMIDS_ENDPOINTS.md) is still correct at the public ingress level, but it intentionally compresses the explorer into `:80/:443` and Blockscout `:4000`. Use this file for the detailed internal listener split.
- There is no single one-shot script in this repo that fully deploys Blockscout, nginx, token aggregation, explorer-config-api, Next frontend, and host-side relays together. Production is currently assembled from the component deploy scripts above.
- `mainnet-weth` is deployed but intentionally paused until that bridge lane is funded again.
- `Etherlink` and `XDC Zero` remain separate bridge programs; they are not part of the current CCIP relay fleet described here.
## Source Of Truth
Use these in order:
1. This file for the live explorer deployment map.
2. [`ALL_VMIDS_ENDPOINTS.md`](../../docs/04-configuration/ALL_VMIDS_ENDPOINTS.md) for VMID, IP, and public ingress inventory.
3. The deploy scripts themselves for exact install behavior.
4. [`check-explorer-health.sh`](../scripts/check-explorer-health.sh) plus [`check-explorer-e2e.sh`](../../scripts/verify/check-explorer-e2e.sh) for public verification.


@@ -1,118 +1,41 @@
# Deployment Documentation
Deployment docs for the Chain 138 explorer stack.

## Read This First

For the current production deployment shape, start with [`LIVE_DEPLOYMENT_MAP.md`](./LIVE_DEPLOYMENT_MAP.md).

That file reflects the live split deployment now in production:

- Next frontend on `127.0.0.1:3000` via `solacescanscout-frontend.service`
- explorer config/API on `127.0.0.1:8081` via `explorer-config-api.service`
- Blockscout on `127.0.0.1:4000`
- token aggregation on `127.0.0.1:3001`
- static config assets under `/var/www/html/config`
- CCIP relay workers on host `r630-01`, outside VMID `5000`

## Current Canonical Deployment Paths

- Frontend deploy: [`scripts/deploy-next-frontend-to-vmid5000.sh`](../scripts/deploy-next-frontend-to-vmid5000.sh)
- Config deploy: [`scripts/deploy-explorer-config-to-vmid5000.sh`](../scripts/deploy-explorer-config-to-vmid5000.sh)
- Explorer config/API deploy: [`scripts/deploy-explorer-ai-to-vmid5000.sh`](../scripts/deploy-explorer-ai-to-vmid5000.sh)
- Public health audit: [`scripts/check-explorer-health.sh`](../scripts/check-explorer-health.sh)
- Full public smoke: [`check-explorer-e2e.sh`](../../scripts/verify/check-explorer-e2e.sh)

## Legacy And Greenfield Docs

The rest of this directory is still useful, but it should be treated as legacy scaffold or greenfield reference unless it explicitly matches the live split architecture above.

- `DEPLOYMENT_GUIDE.md`: older full-stack walkthrough
- `DEPLOYMENT_TASKS.md`: older monolithic deployment checklist
- `DEPLOYMENT_CHECKLIST.md`: older tracking checklist
- `QUICK_DEPLOY.md`: command reference for the legacy package

Complete deployment documentation for the ChainID 138 Explorer Platform.

## Documentation Files

### 📘 DEPLOYMENT_GUIDE.md
**Complete step-by-step guide** with detailed instructions for:
- LXC container setup
- Application installation
- Database configuration
- Nginx reverse proxy setup
- Cloudflare DNS, SSL, and Tunnel configuration
- Security hardening
- Monitoring setup

**Use this for**: Full deployment walkthrough

### 📋 DEPLOYMENT_TASKS.md
**Detailed task checklist** with all 71 tasks organized by phase:
- Pre-deployment (5 tasks)
- Phase 1: LXC Setup (8 tasks)
- Phase 2: Application Installation (12 tasks)
- Phase 3: Database Setup (10 tasks)
- Phase 4: Infrastructure Services (6 tasks)
- Phase 5: Application Services (10 tasks)
- Phase 6: Nginx Reverse Proxy (9 tasks)
- Phase 7: Cloudflare Configuration (18 tasks)
- Phase 8: Security Hardening (12 tasks)
- Phase 9: Monitoring (8 tasks)
- Post-Deployment Verification (13 tasks)
- Optional Enhancements (8 tasks)

**Use this for**: Tracking deployment progress

### ✅ DEPLOYMENT_CHECKLIST.md
**Interactive checklist** for tracking deployment completion.

**Use this for**: Marking off completed items

### ⚡ QUICK_DEPLOY.md
**Quick reference** with essential commands and common issues.

**Use this for**: Quick command lookup during deployment
## Configuration Files

### nginx/explorer.conf
Complete Nginx configuration with:
- Rate limiting
- SSL/TLS settings
- Reverse proxy configuration
- Security headers
- Caching rules
- WebSocket support

### cloudflare/tunnel-config.yml
Cloudflare Tunnel configuration template.

### scripts/deploy-lxc.sh
Automated deployment script for initial setup.

## Deployment Architecture
```
Internet
   ↓
Cloudflare (DNS, SSL, WAF, CDN)
   ↓
Cloudflare Tunnel (optional)
   ↓
LXC Container
├── Nginx (Reverse Proxy)
│   ├── → Frontend (Port 3000)
│   └── → API (Port 8080)
├── PostgreSQL + TimescaleDB
├── Elasticsearch
├── Redis
└── Application Services
    ├── Indexer
    ├── API Server
    └── Frontend Server
```

## Quick Start
1. **Read the deployment guide**: `DEPLOYMENT_GUIDE.md`
2. **Use the task list**: `DEPLOYMENT_TASKS.md`
3. **Track progress**: `DEPLOYMENT_CHECKLIST.md`
4. **Quick reference**: `QUICK_DEPLOY.md`

## Prerequisites
- Proxmox VE with LXC support
- Cloudflare account with domain
- 16GB+ RAM, 4+ CPU cores, 100GB+ storage
- Ubuntu 22.04 LTS template
- SSH access to Proxmox host

## Estimated Time
- **First deployment**: 6-8 hours
- **Subsequent deployments**: 2-3 hours
- **Updates**: 30-60 minutes

## Support
For issues during deployment:
1. Check `QUICK_DEPLOY.md` for common issues
2. Review service logs: `journalctl -u <service-name> -f`
3. Check Nginx logs: `tail -f /var/log/nginx/explorer-error.log`
4. Verify Cloudflare tunnel: `systemctl status cloudflared`

## Version
**Version**: 1.0.0
**Last Updated**: 2024-12-23

## Practical Rule

If the question is "how do we update production today?", use:

1. [`LIVE_DEPLOYMENT_MAP.md`](./LIVE_DEPLOYMENT_MAP.md)
2. the specific deploy script for the component being changed
3. the public health scripts for verification


@@ -0,0 +1,17 @@
# Include inside the same server block as /explorer-api/ (or equivalent Go upstream).
# SSE responses must not be buffered by nginx, or clients stall until the ticker fires.
location = /explorer-api/v1/mission-control/stream {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Connection '';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_cache off;
gzip off;
proxy_read_timeout 3600s;
add_header X-Accel-Buffering no;
}
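Whether buffering is actually off end to end can be probed from a client. A hedged sketch: with buffering disabled, event lines should trickle in during the window instead of arriving in one burst when the connection closes. The 15-second window is an assumption about the server's ticker interval.

```shell
# Probe an SSE route from a client. -N disables curl's own output
# buffering; --max-time bounds the probe so it always terminates.
sse_probe() {
  local url="${1:?usage: sse_probe <stream-url> [seconds]}"
  curl -NsS --max-time "${2:-15}" -H 'Accept: text/event-stream' "$url" \
    | head -n 5
}

# Example (against the live route):
# sse_probe https://explorer.d-bis.org/explorer-api/v1/mission-control/stream
```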


@@ -0,0 +1,36 @@
# Next.js frontend proxy locations for SolaceScanScout.
# Keep the existing higher-priority locations for:
# - /api/
# - /api/config/token-list
# - /api/config/networks
# - /api/config/capabilities
# - /explorer-api/v1/
# - /token-aggregation/api/v1/
# - /snap/
# - /health
#
# Include these locations after those API/static locations and before any legacy
# catch-all that serves /var/www/html/index.html directly.
location ^~ /_next/ {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
proxy_buffering off;
proxy_hide_header Cache-Control;
add_header Cache-Control "no-store, no-cache, must-revalidate" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://cdn.jsdelivr.net https://unpkg.com https://cdnjs.cloudflare.com; style-src 'self' 'unsafe-inline' https://cdnjs.cloudflare.com; img-src 'self' data: https:; font-src 'self' https://cdnjs.cloudflare.com; connect-src 'self' https://explorer.d-bis.org wss://explorer.d-bis.org https://rpc-http-pub.d-bis.org wss://rpc-ws-pub.d-bis.org http://192.168.11.221:8545 ws://192.168.11.221:8546;" always;
}


@@ -13,6 +13,13 @@ Environment=PORT=8080
Environment=DB_HOST=localhost
Environment=DB_NAME=explorer
Environment=CHAIN_ID=138
Environment=RPC_URL=https://rpc-http-pub.d-bis.org
Environment=TOKEN_AGGREGATION_BASE_URL=http://127.0.0.1:3000
Environment=BLOCKSCOUT_INTERNAL_URL=http://127.0.0.1:4000
Environment=EXPLORER_PUBLIC_BASE=https://explorer.d-bis.org
Environment=OPERATOR_SCRIPTS_ROOT=/opt/explorer/scripts
Environment=OPERATOR_SCRIPT_ALLOWLIST=check-health.sh,check-bridges.sh
Environment=OPERATOR_SCRIPT_TIMEOUT_SEC=120
ExecStart=/opt/explorer/bin/api-server
Restart=on-failure
RestartSec=5
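The `OPERATOR_SCRIPT_ALLOWLIST` value reads as a comma-separated list of script names permitted to run under `OPERATOR_SCRIPTS_ROOT`. The gate it implies can be sketched as follows; the actual check lives in the Go backend and may differ, so this only mirrors the apparent semantics.

```shell
# Sketch of the gate OPERATOR_SCRIPT_ALLOWLIST implies: only script names
# in the comma-separated list may be launched from OPERATOR_SCRIPTS_ROOT.
# Wrapping the list in commas lets a whole-token match reject substrings.
is_allowed() {
  local name="$1" allow="${OPERATOR_SCRIPT_ALLOWLIST:-}"
  case ",$allow," in
    *",$name,"*) return 0 ;;
    *)           return 1 ;;
  esac
}

OPERATOR_SCRIPT_ALLOWLIST="check-health.sh,check-bridges.sh"
is_allowed check-health.sh && echo "allowed: check-health.sh"
is_allowed wipe-db.sh      || echo "denied:  wipe-db.sh"
```

An empty allowlist denies everything, which is the safe default for an operator-facing script runner.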


@@ -93,6 +93,9 @@ services:
- PORT=8080
- CHAIN_ID=138
- REDIS_URL=redis://redis:6379
# Optional relay health for mission-control / bridge UI (see backend CCIP_RELAY_HEALTH_URLS)
- CCIP_RELAY_HEALTH_URL=${CCIP_RELAY_HEALTH_URL:-}
- CCIP_RELAY_HEALTH_URLS=${CCIP_RELAY_HEALTH_URLS:-}
ports:
- "8080:8080"
depends_on:


@@ -0,0 +1,28 @@
[Unit]
Description=SolaceScanScout Next Frontend Service
After=network.target
Wants=network.target
[Service]
Type=simple
User=www-data
Group=www-data
WorkingDirectory=/opt/solacescanscout/frontend/current
Environment=NODE_ENV=production
Environment=HOSTNAME=127.0.0.1
Environment=PORT=3000
ExecStart=/usr/bin/node /opt/solacescanscout/frontend/current/server.js
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=solacescanscout-frontend
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/solacescanscout/frontend
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target