# MEV Control LAN Bring-Up Checklist

**Last Updated:** 2026-04-12
**Status:** Operator checklist for the current Proxmox / Sankofa LAN

This runbook turns the current MEV Control deployment into a working LAN stack for **`https://mev.defi-oracle.io`**. It is based on the repo's current assumptions:

- Public GUI static assets are served from **VMID 2410** (`192.168.11.218`) via nginx.
- The GUI proxies **same-origin** `/api/*` to **`MEV_ADMIN_API_HOST:MEV_ADMIN_API_PORT`**.
- The shared repo default upstream is still **`192.168.11.11:9090`** in [config/ip-addresses.conf](../../config/ip-addresses.conf), but for this deployment we intentionally override it to a dedicated backend CT on `r630-04`.
- The admin API expects the pipeline health ports **8080-8087** and the graph API on **9082** to be reachable on **localhost**.

Because of that localhost assumption, the cleanest contained split topology for this deployment is:

- **Backend Proxmox host:** `r630-04` at **`192.168.11.14`**
- **Backend CT:** VMID **`2421`** at **`192.168.11.223`**
- **Public web host:** `info-defi-oracle-web` CT **2410** at **`192.168.11.218`**

## 1. Host choice

Use the dedicated backend CT for:

- `mev-admin-api`
- `mev-supervisor`
- pipeline services
- local Docker infra (`postgres`, `redis`, `nats`)

Use **CT 2410** only for:

- static GUI files
- the nginx reverse proxy from `/api` to `192.168.11.223:9090`

Why this is the recommended topology:

- It preserves the already-working public edge on CT `2410` and only moves the backend.
- It keeps application services off the Proxmox hosts themselves, so the control plane stays portable and migratable with `pct migrate`.
- It puts the whole contained backend on a much less contended node.
- The admin API health handler probes `127.0.0.1:8080-8087`, so splitting services across multiple hosts would make the Overview page lie unless the code is changed first.
- The graph proxy default is `http://127.0.0.1:9082`, so colocating `liquidity-graph` with `mev-admin-api` avoids extra rewiring.

## 2. CT provisioning

Provision the backend CT on `r630-04` first:

```bash
cd /home/intlc/projects/proxmox
bash scripts/deployment/provision-mev-control-backend-lxc.sh --dry-run
bash scripts/deployment/provision-mev-control-backend-lxc.sh
```

Default CT identity:

- Proxmox host: `192.168.11.14`
- VMID: `2421`
- CT IP: `192.168.11.223`
- Hostname: `mev-control-backend`

The provisioner creates an unprivileged Debian 12 CT with:

- `16` vCPU
- `32 GiB` RAM
- `8 GiB` swap
- `200 GiB` rootfs
- `nesting=1,keyctl=1`

These defaults are intentionally generous for the first contained deployment and can be tuned later.

## 3. Paths inside the backend CT

Recommended checkout path inside the CT:

```bash
/opt/proxmox
```

Relevant MEV subpath:

```bash
/opt/proxmox/MEV_Bot/mev-platform
```

Recommended env file path:

```bash
/etc/mev-platform/backend.env
```

## 4. Files to use

- Runbook: this document
- GUI edge deploy: [MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md](MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md)
- CT provisioner: [scripts/deployment/provision-mev-control-backend-lxc.sh](../../scripts/deployment/provision-mev-control-backend-lxc.sh)
- Env template: [config/mev-platform/mev-platform-backend-ct.env.example](../../config/mev-platform/mev-platform-backend-ct.env.example)
- Systemd examples:
  - [config/systemd/mev-infra-compose.service.example](../../config/systemd/mev-infra-compose.service.example)
  - [config/systemd/mev-supervisor.service.example](../../config/systemd/mev-supervisor.service.example)
  - [config/systemd/mev-admin-api.service.example](../../config/systemd/mev-admin-api.service.example)
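The recommended paths and files above can be sanity-checked in one pass before proceeding. The helper below is a generic sketch, not part of the repo; pass it this runbook's locations as arguments.

```shell
# check_paths: verify that each given path exists, reporting each one.
# Generic helper; pass the runbook's recommended locations as arguments.
check_paths() {
  local p missing=0
  for p in "$@"; do
    if [ -e "$p" ]; then
      echo "ok      $p"
    else
      echo "MISSING $p"
      missing=1
    fi
  done
  return "$missing"
}

# Example for this deployment (paths from section 3):
# check_paths /opt/proxmox /opt/proxmox/MEV_Bot/mev-platform /etc/mev-platform/backend.env
```

A nonzero exit means at least one path is missing, so the helper also works as a preflight gate in other scripts.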
## 5. Preflight inside the backend CT

Open a shell into the CT:

```bash
ssh root@192.168.11.14 "pct enter 2421"
```

Clone or update the repo:

```bash
mkdir -p /opt
cd /opt
git clone https://github.com/Order-of-Hospitallers/proxmox-cp.git proxmox || true
cd /opt/proxmox
git pull --ff-only
git submodule update --init --recursive MEV_Bot
```

Install runtime prerequisites if missing:

```bash
apt-get update
apt-get install -y curl jq git build-essential pkg-config libssl-dev ca-certificates
```

Install Docker if missing, then confirm:

```bash
docker --version
docker compose version
```

Install Rust if missing, then confirm:

```bash
cargo --version
rustc --version
```

Install `sqlx-cli` once inside the CT if missing:

```bash
cargo install sqlx-cli --no-default-features --features postgres
```

## 6. Environment file

Create the runtime env directory and file:

```bash
mkdir -p /etc/mev-platform
cp /opt/proxmox/config/mev-platform/mev-platform-backend-ct.env.example /etc/mev-platform/backend.env
chmod 600 /etc/mev-platform/backend.env
```

Edit it before first boot:

- Set **`MEV_ADMIN_API_KEY`** to a real secret.
- Leave **`MEV_SUBMIT_DISABLED=1`** for the first bring-up.
- Keep **`MEV_CONFIG=/opt/proxmox/MEV_Bot/mev-platform/config.dev.toml`** unless you already have a dedicated LAN config.

Notes:

- `config.dev.toml` expects **Postgres on port `5434`**, Redis on `6379`, and NATS on `4222`; this is the easiest way to match the included `docker-compose.yml`.
- If you later promote this to a dedicated Chain 138 config, update `MEV_CONFIG` in the env file and restart `mev-supervisor` and `mev-admin-api`.
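A quick sanity check of the env file before first boot catches the common misses. The helper below is a sketch of my own: the variable names come from this section, while the `stat -c %a` permission check assumes GNU coreutils (present on Debian 12).

```shell
# check_backend_env: sanity-check the backend env file before first boot.
# Variable names mirror section 6; the stat -c %a call is GNU-specific.
check_backend_env() {
  local env_file="${1:-/etc/mev-platform/backend.env}" rc=0
  grep -Eq '^MEV_SUBMIT_DISABLED=1$' "$env_file" \
    || { echo "WARN: MEV_SUBMIT_DISABLED is not 1"; rc=1; }
  grep -Eq '^MEV_ADMIN_API_KEY=.+' "$env_file" \
    || { echo "WARN: MEV_ADMIN_API_KEY looks unset"; rc=1; }
  grep -Eq '^MEV_CONFIG=/' "$env_file" \
    || { echo "WARN: MEV_CONFIG is not an absolute path"; rc=1; }
  [ "$(stat -c %a "$env_file")" = "600" ] \
    || { echo "WARN: expected mode 600"; rc=1; }
  return "$rc"
}
```

Note that this only checks the obvious first-boot posture; it does not validate that the API key is strong or that `MEV_CONFIG` points at a parseable TOML file.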
## 7. Build binaries inside the backend CT

Build the relevant release binaries once from the MEV workspace:

```bash
cd /opt/proxmox/MEV_Bot/mev-platform
cargo build --release -p mev-admin-api -p mev-supervisor \
  -p mev-pool-indexer -p mev-state-ingestion -p mev-liquidity-graph \
  -p mev-opportunity-search -p mev-simulation -p mev-bundle-builder \
  -p mev-execution-gateway -p mev-settlement-analytics
```

Confirm the binaries exist:

```bash
ls -1 /opt/proxmox/MEV_Bot/mev-platform/target/release/mev-*
```

## 8. Install systemd units

Copy the example units into `/etc/systemd/system/`:

```bash
cp /opt/proxmox/config/systemd/mev-infra-compose.service.example /etc/systemd/system/mev-infra-compose.service
cp /opt/proxmox/config/systemd/mev-supervisor.service.example /etc/systemd/system/mev-supervisor.service
cp /opt/proxmox/config/systemd/mev-admin-api.service.example /etc/systemd/system/mev-admin-api.service
systemctl daemon-reload
```

Enable them:

```bash
systemctl enable mev-infra-compose.service
systemctl enable mev-supervisor.service
systemctl enable mev-admin-api.service
```

## 9. Exact bring-up order inside the backend CT

Bring the stack up in this order.

### Step 1: start infra

```bash
systemctl start mev-infra-compose.service
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
```

Expected containers:

- `postgres`
- `redis`
- `nats`

### Step 2: run DB migrations

Important: override `DATABASE_URL` to match `config.dev.toml` on port `5434`.

```bash
cd /opt/proxmox/MEV_Bot/mev-platform
export DATABASE_URL='postgres://postgres:postgres@127.0.0.1:5434/mev'
./scripts/run_migrations.sh
```

### Step 3: start supervisor

```bash
systemctl start mev-supervisor.service
systemctl status --no-pager mev-supervisor.service
curl -fsS http://127.0.0.1:9091/control/status | jq .
```

### Step 4: start admin API

```bash
systemctl start mev-admin-api.service
systemctl status --no-pager mev-admin-api.service
curl -fsS http://127.0.0.1:9090/api/health | jq .
curl -fsS http://127.0.0.1:9090/api/auth/check | jq .
```

At this point:

- `/api/health` should return `200`
- `/api/auth/check` should return `200`
- all services will still likely show `"down"` until you start them

### Step 5: start the pipeline in order through supervisor

Export your API key once:

```bash
export MEV_API_KEY='replace-with-your-real-key'
```

Start each service in order:

```bash
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/pool-indexer | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/state-ingestion | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/liquidity-graph | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/opportunity-search | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/simulation | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/bundle-builder | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/execution-gateway | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/settlement-analytics | jq .
```

Then confirm:

```bash
curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/status | jq .
curl -fsS http://127.0.0.1:9090/api/health | jq .
```

## 10. Local validation inside the backend CT

Run the built-in admin API E2E:

```bash
cd /opt/proxmox/MEV_Bot/mev-platform
MEV_ADMIN_API_KEY="$MEV_API_KEY" BASE=http://127.0.0.1:9090 ./scripts/e2e_admin_api.sh
```

Useful manual probes:

```bash
curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/infra | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/config | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" "http://127.0.0.1:9090/api/pools?limit=5" | jq .
```
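For repeatable re-runs, the eight per-service start calls from step 5 can be wrapped in one helper. The service names, ordering, endpoint, and 2-second settle delay are taken verbatim from that sequence; the function itself is a sketch.

```shell
# start_pipeline: start each pipeline service in dependency order via
# the admin API. Names, order, and the settle delay mirror step 5.
start_pipeline() {
  local api="${1:-http://127.0.0.1:9090}" svc
  local services=(
    pool-indexer state-ingestion liquidity-graph opportunity-search
    simulation bundle-builder execution-gateway settlement-analytics
  )
  for svc in "${services[@]}"; do
    echo "starting ${svc}"
    curl -fsS -X POST -H "X-API-Key: ${MEV_API_KEY:?export MEV_API_KEY first}" \
      "${api}/api/control/start/${svc}" | jq . || return 1
    sleep 2
  done
}
```

Stopping on the first failed start keeps you from launching downstream services against a broken upstream.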
```bash
curl -fsS -H "X-API-Key: $MEV_API_KEY" "http://127.0.0.1:9090/api/graph/snapshot?chain_id=1" | jq .
```

## 11. Public cutover check

Before testing the public site, re-render the MEV nginx vhost so CT `2410` points at the backend CT:

```bash
cd /home/intlc/projects/proxmox
MEV_ADMIN_API_HOST=192.168.11.223 bash scripts/deployment/sync-mev-control-gui-defi-oracle.sh
```

Once the control plane is up inside the backend CT, confirm CT 2410 can reach it through nginx:

```bash
curl -fsS https://mev.defi-oracle.io/health
curl -fsS https://mev.defi-oracle.io/api/auth/check | jq .
```

If auth is enabled, test a protected route with the key:

```bash
curl -fsS -H "X-API-Key: $MEV_API_KEY" https://mev.defi-oracle.io/api/infra | jq .
```

Then open:

- `https://mev.defi-oracle.io/`
- `https://mev.defi-oracle.io/intel`
- `https://mev.defi-oracle.io/login`

## 12. What each screen depends on

Overview:

- needs `mev-admin-api`
- needs service health ports 8080-8087 on the same host
- infra reports live Postgres, Redis, and NATS status
- backend CT includes `mev-start-all.service` so the worker stack auto-starts after `mev-supervisor` and `mev-admin-api`

Pipeline:

- needs the same health stack as Overview

Data:

- needs Postgres up
- needs migrations applied
- needs ingest/search/execution services writing rows

Config:

- needs `MEV_CONFIG` readable by `mev-admin-api`

Safety:

- needs `mev-admin-api`
- kill switch works even before full execution is live

Operations:

- needs `mev-supervisor`
- needs `MEV_SUPERVISOR_URL=http://127.0.0.1:9091`

Analytics:

- needs executions and opportunities in Postgres

Graph:

- needs `liquidity-graph` on `127.0.0.1:9082`

## 13. Safe first-boot posture

For the first LAN bring-up, keep:

- `MEV_SUBMIT_DISABLED=1`
- `MEV_ADMIN_API_KEY` set
- `execution-gateway` running only after you confirm config, data, and graph look sane

That lets the UI come alive without immediately turning on live submission.
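The public-edge checks from section 11 can be rolled into one smoke test. The two paths are the ones probed in that section; the helper itself is a sketch and only checks reachability, not response bodies.

```shell
# smoke_test_public: hit the public endpoints checked in section 11 and
# report pass/fail for each. Paths come from this runbook; the 5-second
# timeout is an arbitrary choice.
smoke_test_public() {
  local base="${1:-https://mev.defi-oracle.io}" path rc=0
  for path in /health /api/auth/check; do
    if curl -fsS -m 5 "${base}${path}" > /dev/null; then
      echo "OK   ${base}${path}"
    else
      echo "FAIL ${base}${path}"
      rc=1
    fi
  done
  return "$rc"
}
```

Run it after the vhost re-render; a `FAIL` on `/api/auth/check` with `/health` passing usually points at the nginx `/api` upstream rather than the edge itself.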
## 14. Current known gaps after bring-up

Even after the stack is running, the following are still known implementation gaps in the MEV platform itself:

- real Redis/NATS health reporting in `/api/infra`
- signer readiness/status API
- richer graph visualization
- bundle signing
- inclusion detection
- Uniswap V3 / Curve / multicall / block-subscription gaps tracked in `MEV_Bot/mev-platform/docs/REMAINING_GAPS_IMPLEMENTATION.md`

## 15. Post-deploy cutover for CT 2421

Once the hardened executor and flash-loan provider wrapper are actually broadcast, use the dedicated cutover helper from the repo root:

```bash
bash scripts/deployment/run-mev-post-deploy-cutover-ct2421.sh \
  --artifact reports/status/mev_execution_deploy_YYYYMMDD_HHMMSS.json \
  --uniswap-v2-router 0x... \
  --sushiswap-router 0x... \
  --api-key "$MEV_API_KEY"
```

That runs in dry-run mode by default and prints:

- the exact config patch diff for `config.dev.toml`
- the exact `pct exec 2421` copy command
- the exact restart chain for `mev-supervisor`, `mev-admin-api`, and `mev-start-all`
- the exact local CT verification commands
- the exact public verification commands

When the diff and commands look correct, run the same command with `--apply`.

This helper assumes:

- the Proxmox host for the backend CT is `192.168.11.14`
- the backend CT VMID is `2421`
- the target config inside the CT is `/opt/proxmox/MEV_Bot/mev-platform/config.dev.toml`
- the backend env file inside the CT is `/etc/mev-platform/backend.env`
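Before rerunning the helper with `--apply`, the arguments can be pre-flighted locally. The checks below are generic sanity checks of my own (artifact file exists, routers look like 40-hex-char `0x` addresses), not validation copied from the helper script.

```shell
# is_eth_address: plain 0x + 40 hex chars sanity check; not validation
# taken from the cutover helper itself.
is_eth_address() {
  printf '%s' "$1" | grep -Eq '^0x[0-9a-fA-F]{40}$'
}

# check_cutover_args: generic pre-flight for the helper's arguments.
check_cutover_args() {
  local artifact="$1" v2_router="$2" sushi_router="$3"
  [ -f "$artifact" ] || { echo "missing artifact: $artifact"; return 1; }
  is_eth_address "$v2_router" || { echo "bad --uniswap-v2-router"; return 1; }
  is_eth_address "$sushi_router" || { echo "bad --sushiswap-router"; return 1; }
  echo "cutover args look sane"
}
```

This catches a stale or mistyped artifact path and truncated router addresses before the dry run, which keeps the printed config patch diff meaningful.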