Add contained MEV backend CT runbook

defiQUG
2026-04-12 20:25:31 -07:00
parent 614d045842
commit 5d5d7a7db4
9 changed files with 554 additions and 5 deletions

View File: config/mev-platform/mev-platform-backend-ct.env.example

@@ -0,0 +1,23 @@
# MEV platform runtime env for the dedicated backend CT
# Copy inside CT to: /etc/mev-platform/backend.env
# Core config
MEV_CONFIG=/opt/proxmox/MEV_Bot/mev-platform/config.dev.toml
MEV_ADMIN_PORT=9090
MEV_SUPERVISOR_PORT=9091
MEV_GRAPH_URL=http://127.0.0.1:9082
MEV_SUPERVISOR_URL=http://127.0.0.1:9091
MEV_SUPERVISOR_BIN_DIR=/opt/proxmox/MEV_Bot/mev-platform/target/release
# Security
MEV_ADMIN_API_KEY=replace-me
# Safety: keep submission disabled for initial contained bring-up.
MEV_SUBMIT_DISABLED=1
# Logging
RUST_LOG=info,mev_admin_api=info,mev_supervisor=info
# Optional later:
# MEV_EXECUTOR_PRIVATE_KEY=0x...

View File: config/systemd/mev-admin-api.service.example

@@ -0,0 +1,16 @@
[Unit]
Description=MEV Admin API
Wants=network-online.target
After=network-online.target mev-infra-compose.service mev-supervisor.service
Requires=mev-infra-compose.service mev-supervisor.service
[Service]
Type=simple
WorkingDirectory=/opt/proxmox/MEV_Bot/mev-platform
EnvironmentFile=-/etc/mev-platform/backend.env
ExecStart=/opt/proxmox/MEV_Bot/mev-platform/target/release/mev-admin-api
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target

View File: config/systemd/mev-infra-compose.service.example

@@ -0,0 +1,17 @@
[Unit]
Description=MEV Platform Infra via Docker Compose
Wants=network-online.target
After=network-online.target docker.service
Requires=docker.service
[Service]
Type=oneshot
WorkingDirectory=/opt/proxmox/MEV_Bot/mev-platform
RemainAfterExit=yes
ExecStart=/usr/bin/docker compose up -d postgres redis nats
ExecStop=/usr/bin/docker compose stop postgres redis nats
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target

View File: config/systemd/mev-supervisor.service.example

@@ -0,0 +1,16 @@
[Unit]
Description=MEV Supervisor
Wants=network-online.target
After=network-online.target mev-infra-compose.service
Requires=mev-infra-compose.service
[Service]
Type=simple
WorkingDirectory=/opt/proxmox/MEV_Bot/mev-platform
EnvironmentFile=-/etc/mev-platform/backend.env
ExecStart=/opt/proxmox/MEV_Bot/mev-platform/target/release/mev-supervisor
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target

View File: MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md

@@ -11,11 +11,11 @@ This document describes how to publish the **MEV Control** web app (`MEV_Bot/mev
| Layer | Role |
|--------|------|
| **Static SPA** | Vite `dist/` on the nginx LXC (default **VMID 2410**, same CT as `info.defi-oracle.io` unless overridden). |
| **`/api/*`** | Nginx `proxy_pass` to **mev-admin-api** (Axum, default port **9090**) on a LAN host reachable from the CT. |
| **`/api/*`** | Nginx `proxy_pass` to **mev-admin-api** (Axum, default port **9090**) inside a dedicated backend CT reachable from the public edge CT. |
| **NPMplus** | TLS termination; forwards `mev.defi-oracle.io` → `http://IP_INFO_DEFI_ORACLE_WEB:80` (or `MEV_DEFI_ORACLE_UPSTREAM_*`). |
| **Cloudflare** | Optional proxied **A** or **CNAME** (tunnel) for `mev.defi-oracle.io` / `www.mev.defi-oracle.io`. |
The browser uses **same-origin** `/api` (no CORS split). Set **`MEV_ADMIN_API_HOST`** / **`MEV_ADMIN_API_PORT`** so the nginx CT can reach the machine where `cargo run -p mev-admin-api` (or your unit) listens.
The browser uses **same-origin** `/api` (no CORS split). Set **`MEV_ADMIN_API_HOST`** / **`MEV_ADMIN_API_PORT`** so the nginx CT can reach the backend CT where `mev-admin-api` listens. For the current recommended split topology, keep the public GUI on CT **2410** and point `/api` to a dedicated backend CT on **`r630-04`** (default recommendation: **`192.168.11.219:9090`**). Do not host the MEV backend services directly on a Proxmox node unless you are intentionally breaking the portability standard.
## Prerequisites
@@ -29,7 +29,11 @@ The browser uses **same-origin** `/api` (no CORS split). Set **`MEV_ADMIN_API_HO
From **proxmox** repo root (loads paths; override via env):
```bash
# Optional: export MEV_ADMIN_API_HOST=192.168.11.xx MEV_ADMIN_API_PORT=9090
# Recommended split topology:
# public GUI on CT 2410
# admin API / pipeline inside dedicated backend CT on r630-04
export MEV_ADMIN_API_HOST=192.168.11.219
export MEV_ADMIN_API_PORT=9090
bash scripts/deployment/sync-mev-control-gui-defi-oracle.sh --dry-run
bash scripts/deployment/sync-mev-control-gui-defi-oracle.sh
```
@@ -72,7 +76,7 @@ If you intentionally carry **MEV** traffic on the same Cloudflare tunnel stack a
|----------|--------------------------------------------|---------|
| `MEV_DEFI_ORACLE_WEB_VMID` | `2410` | Target LXC |
| `MEV_DEFI_ORACLE_WEB_ROOT` | `/var/www/mev.defi-oracle.io/html` | Web root |
| `MEV_ADMIN_API_HOST` | `192.168.11.11` | mev-admin-api bind host (from CT) |
| `MEV_ADMIN_API_HOST` | `192.168.11.11` | mev-admin-api bind host (from CT); shared default, override to the backend CT IP (recommended `192.168.11.219`) for the contained split topology |
| `MEV_ADMIN_API_PORT` | `9090` | mev-admin-api port |
| `MEV_DEFI_ORACLE_UPSTREAM_IP` | `IP_INFO_DEFI_ORACLE_WEB` | NPM forward target |
| `MEV_DEFI_ORACLE_UPSTREAM_PORT` | `80` | NPM forward port |
@@ -90,6 +94,7 @@ After TLS is live, open **https://mev.defi-oracle.io/intel** for in-app framing
## Related
- [MEV_CONTROL_LAN_BRINGUP_CHECKLIST.md](MEV_CONTROL_LAN_BRINGUP_CHECKLIST.md)
- [INFO_DEFI_ORACLE_IO_DEPLOYMENT.md](INFO_DEFI_ORACLE_IO_DEPLOYMENT.md)
- [MEV_Bot/README.md](../../MEV_Bot/README.md)
- [MEV_Bot/mev-platform/docs/RUNBOOK_AND_DEPLOYMENT.md](../../MEV_Bot/mev-platform/docs/RUNBOOK_AND_DEPLOYMENT.md)

View File: MEV_CONTROL_LAN_BRINGUP_CHECKLIST.md

@@ -0,0 +1,397 @@
# MEV Control LAN Bring-Up Checklist
**Last Updated:** 2026-04-12
**Status:** Operator checklist for the current Proxmox / Sankofa LAN
This runbook turns the current MEV Control deployment into a working LAN stack for **`https://mev.defi-oracle.io`**.
It is based on the repo's current assumptions:
- Public GUI static assets are served from **VMID 2410** (`192.168.11.218`) via nginx.
- The GUI proxies **same-origin** `/api/*` to **`MEV_ADMIN_API_HOST:MEV_ADMIN_API_PORT`**.
- The shared repo default upstream is still **`192.168.11.11:9090`** in [config/ip-addresses.conf](../../config/ip-addresses.conf), but for this deployment we intentionally override it to a dedicated backend CT on `r630-04`.
- The admin API expects the pipeline health ports **8080-8087** and graph API **9082** to be reachable on **localhost**.
Because of that localhost assumption, the cleanest contained split topology for this deployment is:
- **Backend Proxmox host:** `r630-04` at **`192.168.11.14`**
- **Backend CT:** **VMID `2421`** at **`192.168.11.219`**
- **Public web host:** `info-defi-oracle-web` CT **2410** at **`192.168.11.218`**
## 1. Host choice
Use the dedicated backend CT for:
- `mev-admin-api`
- `mev-supervisor`
- pipeline services
- local Docker infra (`postgres`, `redis`, `nats`)
Use **CT 2410** only for:
- static GUI files
- nginx reverse proxy `/api` to `192.168.11.219:9090`
Why this is the recommended topology:
- It preserves the already-working public edge on CT `2410` and only moves the backend.
- It keeps application services off the Proxmox hosts themselves, so the stack stays portable and migratable.
- It keeps the control plane portable with `pct migrate`.
- It puts the whole contained backend on a much less contended node.
- The admin API health handler probes `127.0.0.1:8080-8087`, so splitting services across multiple hosts will make the Overview page lie unless the code is changed first.
- The graph proxy default is `http://127.0.0.1:9082`, so colocating `liquidity-graph` with `mev-admin-api` avoids extra rewiring.
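The localhost health assumption above can be spot-checked from inside the backend CT with a quick TCP probe. This is a sketch, not part of the platform tooling; the port list (8080-8087 for pipeline services, 9082 for the graph API) comes from this runbook:

```shell
# Probe the localhost ports the admin API health handler expects.
# 8080-8087 = pipeline service health ports, 9082 = liquidity-graph API.
for port in $(seq 8080 8087) 9082; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/${port}" 2>/dev/null; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed"
  fi
done
```

All ports will report closed until the pipeline is started in section 9; the point is that they must all be local to this CT.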
## 2. CT provisioning
Provision the backend CT on `r630-04` first:
```bash
cd /home/intlc/projects/proxmox
bash scripts/deployment/provision-mev-control-backend-lxc.sh --dry-run
bash scripts/deployment/provision-mev-control-backend-lxc.sh
```
Default CT identity:
- Proxmox host: `192.168.11.14`
- VMID: `2421`
- CT IP: `192.168.11.219`
- Hostname: `mev-control-backend`
The provisioner creates an unprivileged Debian 12 CT with:
- `16` vCPU
- `32 GiB` RAM
- `8 GiB` swap
- `200 GiB` rootfs
- `nesting=1,keyctl=1`
These defaults are intentionally generous for the first contained deployment and can be tuned later.
## 3. Paths inside the backend CT
Recommended checkout path inside the CT:
```bash
/opt/proxmox
```
Relevant MEV subpath:
```bash
/opt/proxmox/MEV_Bot/mev-platform
```
Recommended env file path:
```bash
/etc/mev-platform/backend.env
```
## 4. Files to use
- Runbook: this document
- GUI edge deploy: [MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md](MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md)
- CT provisioner: [scripts/deployment/provision-mev-control-backend-lxc.sh](../../scripts/deployment/provision-mev-control-backend-lxc.sh)
- Env template: [config/mev-platform/mev-platform-backend-ct.env.example](../../config/mev-platform/mev-platform-backend-ct.env.example)
- Systemd examples:
- [config/systemd/mev-infra-compose.service.example](../../config/systemd/mev-infra-compose.service.example)
- [config/systemd/mev-supervisor.service.example](../../config/systemd/mev-supervisor.service.example)
- [config/systemd/mev-admin-api.service.example](../../config/systemd/mev-admin-api.service.example)
## 5. Preflight inside the backend CT
Open a shell into the CT:
```bash
ssh -t root@192.168.11.14 "pct enter 2421"
```
Clone or update the repo:
```bash
mkdir -p /opt
cd /opt
git clone https://github.com/Order-of-Hospitallers/proxmox-cp.git proxmox || true
cd /opt/proxmox
git pull --ff-only
git submodule update --init --recursive MEV_Bot
```
Install runtime prerequisites if missing:
```bash
apt-get update
apt-get install -y curl jq git build-essential pkg-config libssl-dev ca-certificates
```
Install Docker if missing, then confirm:
```bash
docker --version
docker compose version
```
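If Docker is absent, one common bootstrap is Docker's official convenience script at `get.docker.com`. This is a hedged sketch (review the downloaded script before running it as root), wrapped in a function so it only runs when you invoke it:

```shell
# Fetch and run Docker's documented convenience installer, then enable the daemon.
install_docker() {
  curl -fsSL https://get.docker.com -o /tmp/get-docker.sh
  sh /tmp/get-docker.sh
  systemctl enable --now docker
}

command -v docker >/dev/null 2>&1 || echo "docker not found; run install_docker"
```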
Install Rust if missing, then confirm:
```bash
cargo --version
rustc --version
```
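If the Rust toolchain is missing, rustup's standard installer covers it. A sketch in the same guarded style:

```shell
# Install the stable toolchain via rustup (official installer) when cargo is absent.
install_rust() {
  curl --proto '=https' --tlsv1.2 -fsSL https://sh.rustup.rs | sh -s -- -y
  . "$HOME/.cargo/env"  # make cargo/rustc available in the current shell
}

command -v cargo >/dev/null 2>&1 || echo "cargo not found; run install_rust"
```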
Install `sqlx-cli` once inside the CT if missing:
```bash
cargo install sqlx-cli --no-default-features --features postgres
```
## 6. Environment file
Create the runtime env directory and file:
```bash
mkdir -p /etc/mev-platform
cp /opt/proxmox/config/mev-platform/mev-platform-backend-ct.env.example /etc/mev-platform/backend.env
chmod 600 /etc/mev-platform/backend.env
```
Edit it before first boot:
- Set **`MEV_ADMIN_API_KEY`** to a real secret.
- Leave **`MEV_SUBMIT_DISABLED=1`** for the first bring-up.
- Keep **`MEV_CONFIG=/opt/proxmox/MEV_Bot/mev-platform/config.dev.toml`** unless you already have a dedicated LAN config.
Notes:
- `config.dev.toml` expects **Postgres on port `5434`**, Redis on `6379`, and NATS on `4222`.
- This is the easiest way to match the included `docker-compose.yml`.
- If you later promote this to a dedicated Chain 138 config, update `MEV_CONFIG` in the env file and restart `mev-supervisor` and `mev-admin-api`.
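A minimal sketch for putting a real key in place. It assumes the template's `MEV_ADMIN_API_KEY=replace-me` line; the `ENV_FILE` override is only for illustration:

```shell
# Replace the placeholder admin key with a random one; no-op if the file is absent.
ENV_FILE="${ENV_FILE:-/etc/mev-platform/backend.env}"
if [ -f "$ENV_FILE" ]; then
  key="$(openssl rand -hex 32)"
  sed -i "s|^MEV_ADMIN_API_KEY=.*|MEV_ADMIN_API_KEY=${key}|" "$ENV_FILE"
  grep -q '^MEV_SUBMIT_DISABLED=1' "$ENV_FILE" || echo "WARN: submission not disabled"
fi
```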
## 7. Build binaries inside the backend CT
Build the relevant release binaries once from the MEV workspace:
```bash
cd /opt/proxmox/MEV_Bot/mev-platform
cargo build --release -p mev-admin-api -p mev-supervisor \
-p mev-pool-indexer -p mev-state-ingestion -p mev-liquidity-graph \
-p mev-opportunity-search -p mev-simulation -p mev-bundle-builder \
-p mev-execution-gateway -p mev-settlement-analytics
```
Confirm the binaries exist:
```bash
ls -1 /opt/proxmox/MEV_Bot/mev-platform/target/release/mev-*
```
## 8. Install systemd units
Copy the example units into `/etc/systemd/system/`:
```bash
cp /opt/proxmox/config/systemd/mev-infra-compose.service.example /etc/systemd/system/mev-infra-compose.service
cp /opt/proxmox/config/systemd/mev-supervisor.service.example /etc/systemd/system/mev-supervisor.service
cp /opt/proxmox/config/systemd/mev-admin-api.service.example /etc/systemd/system/mev-admin-api.service
systemctl daemon-reload
```
Enable them:
```bash
systemctl enable mev-infra-compose.service
systemctl enable mev-supervisor.service
systemctl enable mev-admin-api.service
```
## 9. Exact bring-up order inside the backend CT
Bring the stack up in this order.
### Step 1: start infra
```bash
systemctl start mev-infra-compose.service
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
```
Expected containers:
- `postgres`
- `redis`
- `nats`
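A quick check that all three are actually up. Container names are assumed to contain the compose service names; depending on your compose version they may carry a project prefix like `mev-platform-postgres-1`:

```shell
# Report up/MISSING for each expected infra container from `docker ps`.
required="postgres redis nats"
running="$(docker ps --format '{{.Names}}' 2>/dev/null || true)"
for name in $required; do
  if echo "$running" | grep -q "$name"; then
    echo "$name: up"
  else
    echo "$name: MISSING"
  fi
done
```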
### Step 2: run DB migrations
Important: override `DATABASE_URL` to match `config.dev.toml` on port `5434`.
```bash
cd /opt/proxmox/MEV_Bot/mev-platform
export DATABASE_URL='postgres://postgres:postgres@127.0.0.1:5434/mev'
./scripts/run_migrations.sh
```
### Step 3: start supervisor
```bash
systemctl start mev-supervisor.service
systemctl status --no-pager mev-supervisor.service
curl -fsS http://127.0.0.1:9091/control/status | jq .
```
### Step 4: start admin API
```bash
systemctl start mev-admin-api.service
systemctl status --no-pager mev-admin-api.service
curl -fsS http://127.0.0.1:9090/api/health | jq .
curl -fsS http://127.0.0.1:9090/api/auth/check | jq .
```
At this point:
- `/api/health` should return `200`
- `/api/auth/check` should return `200`
- all services will still likely show `"down"` until you start them
### Step 5: start the pipeline in order through supervisor
Export your API key once:
```bash
export MEV_API_KEY='replace-with-your-real-key'
```
Start each service in order:
```bash
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/pool-indexer | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/state-ingestion | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/liquidity-graph | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/opportunity-search | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/simulation | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/bundle-builder | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/execution-gateway | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/settlement-analytics | jq .
```
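The same ordered sequence can be expressed as a loop. Function form so it is easy to re-run; the service names are the supervisor targets listed above:

```shell
# Start the pipeline services in dependency order via the admin API.
pipeline_services="pool-indexer state-ingestion liquidity-graph opportunity-search \
simulation bundle-builder execution-gateway settlement-analytics"

start_pipeline() {
  for svc in $pipeline_services; do
    curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" \
      "http://127.0.0.1:9090/api/control/start/${svc}" | jq .
    sleep 2
  done
}

# Run after exporting MEV_API_KEY:
# start_pipeline
```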
Then confirm:
```bash
curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/status | jq .
curl -fsS http://127.0.0.1:9090/api/health | jq .
```
## 10. Local validation inside the backend CT
Run the built-in admin API E2E:
```bash
cd /opt/proxmox/MEV_Bot/mev-platform
MEV_ADMIN_API_KEY="$MEV_API_KEY" BASE=http://127.0.0.1:9090 ./scripts/e2e_admin_api.sh
```
Useful manual probes:
```bash
curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/infra | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/config | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" "http://127.0.0.1:9090/api/pools?limit=5" | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" "http://127.0.0.1:9090/api/graph/snapshot?chain_id=1" | jq .
```
## 11. Public cutover check
Before testing the public site, re-render the MEV nginx vhost so CT `2410` points at the backend CT:
```bash
cd /home/intlc/projects/proxmox
MEV_ADMIN_API_HOST=192.168.11.219 bash scripts/deployment/sync-mev-control-gui-defi-oracle.sh
```
Once the control plane is up inside the backend CT, confirm the full public path (Cloudflare → NPMplus → nginx on CT 2410 → backend CT) works:
```bash
curl -fsS https://mev.defi-oracle.io/health
curl -fsS https://mev.defi-oracle.io/api/auth/check | jq .
```
If auth is enabled, test a protected route with the key:
```bash
curl -fsS -H "X-API-Key: $MEV_API_KEY" https://mev.defi-oracle.io/api/infra | jq .
```
Then open:
- `https://mev.defi-oracle.io/`
- `https://mev.defi-oracle.io/intel`
- `https://mev.defi-oracle.io/login`
## 12. What each screen depends on
Overview:
- needs `mev-admin-api`
- needs service health ports 8080-8087 on the same host
- infra only reports real Postgres status today; Redis and NATS are still backend TODOs
Pipeline:
- needs the same health stack as Overview
Data:
- needs Postgres up
- needs migrations applied
- needs ingest/search/execution services writing rows
Config:
- needs `MEV_CONFIG` readable by `mev-admin-api`
Safety:
- needs `mev-admin-api`
- kill switch works even before full execution is live
Operations:
- needs `mev-supervisor`
- needs `MEV_SUPERVISOR_URL=http://127.0.0.1:9091`
Analytics:
- needs executions and opportunities in Postgres
Graph:
- needs `liquidity-graph` on `127.0.0.1:9082`
## 13. Safe first-boot posture
For the first LAN bring-up, keep:
- `MEV_SUBMIT_DISABLED=1`
- `MEV_ADMIN_API_KEY` set
- `execution-gateway` running only after you confirm config, data, and graph look sane
That lets the UI come alive without immediately turning on live submission.
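The env-file side of that posture can be asserted mechanically before `execution-gateway` is ever started. A sketch; `ENV_FILE` is illustrative:

```shell
# Verify the safe first-boot posture in the env file; warn instead of failing.
ENV_FILE="${ENV_FILE:-/etc/mev-platform/backend.env}"
if [ -f "$ENV_FILE" ]; then
  grep -q '^MEV_SUBMIT_DISABLED=1' "$ENV_FILE" \
    && echo "submission disabled: OK" || echo "WARN: MEV_SUBMIT_DISABLED != 1"
  grep -q '^MEV_ADMIN_API_KEY=replace-me' "$ENV_FILE" \
    && echo "WARN: admin key is still the placeholder" || echo "admin key: set"
fi
```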
## 14. Current known gaps after bring-up
Even after the stack is running, the following are still known implementation gaps in the MEV platform itself:
- real Redis/NATS health reporting in `/api/infra`
- signer readiness/status API
- richer graph visualization
- bundle signing
- inclusion detection
- Uniswap V3 / Curve / multicall / block-subscription gaps tracked in `MEV_Bot/mev-platform/docs/REMAINING_GAPS_IMPLEMENTATION.md`

View File

@@ -35,6 +35,7 @@ This directory contains setup and configuration guides.
- **[GITEA_IP_CONFLICT_CHECK.md](GITEA_IP_CONFLICT_CHECK.md)** — Gitea IP (.31) vs other VMIDs; `IP_GITEA_INFRA` notes
- **[INFO_DEFI_ORACLE_IO_DEPLOYMENT.md](INFO_DEFI_ORACLE_IO_DEPLOYMENT.md)** - **`info.defi-oracle.io`** Chain 138 hub SPA (incl. `/governance`, `/ecosystem`, `/documentation`, `/solacenet`, `/disclosures`, agents): VMID **2410**, nginx **`/token-aggregation/`** proxy, `sync-info-defi-oracle-to-vmid2400.sh`, NPMplus, Cloudflare DNS (`set-info-defi-oracle-dns-to-vmid2400-tunnel.sh`), `purge-info-defi-oracle-cache.sh`, `pnpm run verify:info-defi-oracle-public`, CI `info-defi-oracle-138.yml` + `verify-info-defi-oracle-public.yml`, optional `pnpm run audit:info-defi-oracle-site`
- **[MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md](MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md)** — **`mev.defi-oracle.io`** MEV Control GUI (`MEV_Bot/mev-platform/gui`): `sync-mev-control-gui-defi-oracle.sh`, nginx `/api` → mev-admin-api, NPMplus + `set-mev-defi-oracle-dns.sh`
- **[MEV_CONTROL_LAN_BRINGUP_CHECKLIST.md](MEV_CONTROL_LAN_BRINGUP_CHECKLIST.md)** — concrete LAN operator checklist for the full MEV Control stack with **public GUI on CT `2410`** and a **dedicated backend CT on `r630-04`**: CT provisioning, env file, Docker infra, systemd units, migrations, supervisor, admin API, pipeline bring-up order, and public cutover verification
- **[SOLACENET_PUBLIC_HUB.md](SOLACENET_PUBLIC_HUB.md)** — Public **SolaceNet** page (`/solacenet`) on the info hub plus `dbis_core/docs/solacenet/` markdown map
- **[PROXMOX_LOAD_BALANCING_RUNBOOK.md](PROXMOX_LOAD_BALANCING_RUNBOOK.md)** - Balance Proxmox load: migrate containers from r630-01 to r630-02/ml110; candidates, script, cluster vs backup/restore
- **[PROXMOX_ADD_THIRD_FOURTH_R630_DECISION.md](PROXMOX_ADD_THIRD_FOURTH_R630_DECISION.md)** - Add 3rd/4th R630 before migration? r630-03/04 status, HA/Ceph (34 nodes), order of operations

View File

@@ -16,7 +16,7 @@
| **Agent / IDE instructions** | [AGENTS.md](../AGENTS.md) (repo root) |
| **Local green-path tests** | Root `pnpm test` → [`scripts/verify/run-repo-green-test-path.sh`](../scripts/verify/run-repo-green-test-path.sh) |
| **Git submodule hygiene + explorer remotes** | [00-meta/SUBMODULE_HYGIENE.md](00-meta/SUBMODULE_HYGIENE.md) — detached HEAD, push order, Gitea/GitHub, `submodules-clean.sh` |
| **MEV intel + public GUI (`mev.defi-oracle.io`)** | Framing: [../MEV_Bot/docs/framing/README.md](../MEV_Bot/docs/framing/README.md); deploy: [04-configuration/MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md](04-configuration/MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md); specs: [../MEV_Bot/specs/README.md](../MEV_Bot/specs/README.md) |
| **MEV intel + public GUI (`mev.defi-oracle.io`)** | Framing: [../MEV_Bot/docs/framing/README.md](../MEV_Bot/docs/framing/README.md); deploy: [04-configuration/MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md](04-configuration/MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md); LAN bring-up: [04-configuration/MEV_CONTROL_LAN_BRINGUP_CHECKLIST.md](04-configuration/MEV_CONTROL_LAN_BRINGUP_CHECKLIST.md) (dedicated backend CT on `r630-04`); specs: [../MEV_Bot/specs/README.md](../MEV_Bot/specs/README.md) |
| **What to do next** | [00-meta/NEXT_STEPS_INDEX.md](00-meta/NEXT_STEPS_INDEX.md) — ordered actions, by audience, execution plan |
| **Live verification evidence (dated)** | [00-meta/LIVE_VERIFICATION_LOG_2026-03-30.md](00-meta/LIVE_VERIFICATION_LOG_2026-03-30.md) |
| **Your personal checklist** | [00-meta/NEXT_STEPS_FOR_YOU.md](00-meta/NEXT_STEPS_FOR_YOU.md) |

View File: scripts/deployment/provision-mev-control-backend-lxc.sh

@@ -0,0 +1,74 @@
#!/usr/bin/env bash
# Provision a dedicated backend LXC for the MEV Control stack.
#
# Intended topology:
# - Public GUI/static nginx remains on CT 2410 (info-defi-oracle-web)
# - This backend CT runs mev-admin-api, mev-supervisor, pipeline services, and local infra
# - CT 2410 proxies /api/* to this backend CT
#
# Usage:
# bash scripts/deployment/provision-mev-control-backend-lxc.sh [--dry-run]
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
PROXMOX_HOST="${PROXMOX_HOST:-${PROXMOX_HOST_R630_04:-192.168.11.14}}"
VMID="${MEV_CONTROL_BACKEND_VMID:-2421}"
IP_CT="${MEV_CONTROL_BACKEND_IP:-192.168.11.219}"
HOSTNAME_CT="${MEV_CONTROL_BACKEND_HOSTNAME:-mev-control-backend}"
TEMPLATE_CT="${TEMPLATE:-local:vztmpl/debian-12-standard_12.12-1_amd64.tar.zst}"
STORAGE="${STORAGE:-local-lvm}"
NETWORK="${NETWORK:-vmbr0}"
GATEWAY="${NETWORK_GATEWAY:-192.168.11.1}"
SSH_OPTS="-o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new"
DRY_RUN=false
[[ "${1:-}" == "--dry-run" ]] && DRY_RUN=true
echo "=== Provision MEV Control backend LXC ==="
echo "Proxmox: ${PROXMOX_HOST} VMID: ${VMID} IP: ${IP_CT}"
if $DRY_RUN; then
echo "[DRY-RUN] pct create ${VMID} on ${PROXMOX_HOST} with Docker-capable unprivileged settings"
exit 0
fi
if ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct list 2>/dev/null | grep -q '^${VMID} '"; then
echo "CT ${VMID} already exists — skipping pct create"
else
echo "Creating CT ${VMID} (${HOSTNAME_CT}) @ ${IP_CT}/24..."
ssh $SSH_OPTS "root@${PROXMOX_HOST}" bash -s <<EOF
set -euo pipefail
pct create ${VMID} ${TEMPLATE_CT} \\
--hostname ${HOSTNAME_CT} \\
--memory 32768 \\
--swap 8192 \\
--cores 16 \\
--rootfs ${STORAGE}:200 \\
--net0 name=eth0,bridge=${NETWORK},ip=${IP_CT}/24,gw=${GATEWAY} \\
--nameserver ${DNS_PRIMARY:-1.1.1.1} \\
--description 'Dedicated backend LXC: MEV admin API, supervisor, pipeline, and local infra' \\
--features nesting=1,keyctl=1 \\
--onboot 1 \\
--start 1 \\
--unprivileged 1
EOF
echo "Waiting for CT to boot..."
sleep 15
fi
ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct status ${VMID}" | grep -q running || {
echo "ERROR: CT ${VMID} not running — start with: ssh root@${PROXMOX_HOST} 'pct start ${VMID}'" >&2
exit 1
}
echo "Installing baseline packages inside CT ${VMID}..."
ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct exec ${VMID} -- bash -lc \"set -euo pipefail; export DEBIAN_FRONTEND=noninteractive; apt-get update -qq; apt-get install -y -qq curl jq git ca-certificates build-essential pkg-config libssl-dev gpg lsb-release uidmap\""
echo ""
echo "✅ Backend CT ${VMID} ready at ${IP_CT}"
echo " Next: deploy the MEV stack inside the CT and point CT 2410 /api to http://${IP_CT}:9090"