proxmox/docs/04-configuration/MEV_CONTROL_LAN_BRINGUP_CHECKLIST.md

MEV Control LAN Bring-Up Checklist

Last Updated: 2026-04-12
Status: Operator checklist for the current Proxmox / Sankofa LAN

This runbook turns the current MEV Control deployment into a working LAN stack for https://mev.defi-oracle.io.

It is based on the repo's current assumptions:

  • Public GUI static assets are served from VMID 2410 (192.168.11.218) via nginx.
  • The GUI proxies same-origin /api/* to MEV_ADMIN_API_HOST:MEV_ADMIN_API_PORT.
  • The shared repo default upstream is still 192.168.11.11:9090 in config/ip-addresses.conf, but for this deployment we intentionally override it to a dedicated backend CT on r630-04.
  • The admin API expects the pipeline health ports 8080-8087 and graph API 9082 to be reachable on localhost.

Because of that localhost assumption, the cleanest contained split topology for this deployment is:

  • Backend Proxmox host: r630-04 at 192.168.11.14
  • Backend CT: VMID 2421 at 192.168.11.219
  • Public web host: info-defi-oracle-web CT 2410 at 192.168.11.218

1. Host choice

Use the dedicated backend CT for:

  • mev-admin-api
  • mev-supervisor
  • pipeline services
  • local Docker infra (postgres, redis, nats)

Use CT 2410 only for:

  • static GUI files
  • nginx reverse proxy /api to 192.168.11.219:9090

Why this is the recommended topology:

  • It preserves the already-working public edge on CT 2410 and only moves the backend.
  • It keeps application services off the Proxmox hosts themselves, so the control plane stays portable and migratable with pct migrate.
  • It puts the whole contained backend on a much less contended node.
  • The admin API health handler probes 127.0.0.1:8080-8087, so splitting services across multiple hosts would make the Overview page report healthy services as down unless the code is changed first.
  • The graph proxy default is http://127.0.0.1:9082, so colocating liquidity-graph with mev-admin-api avoids extra rewiring.
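For reference, the /api proxy on CT 2410 amounts to a location block along these lines. This is a hedged sketch only: the real vhost is rendered by the sync script in section 11, so the header set and exact directives here are assumptions, not the shipped config.

```nginx
# Sketch of the CT 2410 vhost fragment that forwards same-origin API
# calls to the backend CT (MEV_ADMIN_API_HOST:MEV_ADMIN_API_PORT).
location /api/ {
    proxy_pass http://192.168.11.219:9090;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```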

2. CT provisioning

Provision the backend CT on r630-04 first:

cd /home/intlc/projects/proxmox
bash scripts/deployment/provision-mev-control-backend-lxc.sh --dry-run
bash scripts/deployment/provision-mev-control-backend-lxc.sh

Default CT identity:

  • Proxmox host: 192.168.11.14
  • VMID: 2421
  • CT IP: 192.168.11.219
  • Hostname: mev-control-backend

The provisioner creates an unprivileged Debian 12 CT with:

  • 16 vCPU
  • 32 GiB RAM
  • 8 GiB swap
  • 200 GiB rootfs
  • nesting=1,keyctl=1

These defaults are intentionally generous for the first contained deployment and can be tuned later.

3. Paths inside the backend CT

Recommended checkout path inside the CT:

/opt/proxmox

Relevant MEV subpath:

/opt/proxmox/MEV_Bot/mev-platform

Recommended env file path:

/etc/mev-platform/backend.env

4. Files to use

All paths are relative to the repo root and are referenced in the steps below:

  • scripts/deployment/provision-mev-control-backend-lxc.sh
  • scripts/deployment/sync-mev-control-gui-defi-oracle.sh
  • config/mev-platform/mev-platform-backend-ct.env.example
  • config/systemd/mev-infra-compose.service.example
  • config/systemd/mev-supervisor.service.example
  • config/systemd/mev-admin-api.service.example

5. Preflight inside the backend CT

Open a shell into the CT:

ssh -t root@192.168.11.14 "pct enter 2421"

Clone or update the repo:

mkdir -p /opt
cd /opt
git clone https://github.com/Order-of-Hospitallers/proxmox-cp.git proxmox || true
cd /opt/proxmox
git pull --ff-only
git submodule update --init --recursive MEV_Bot

Install runtime prerequisites if missing:

apt-get update
apt-get install -y curl jq git build-essential pkg-config libssl-dev ca-certificates

Install Docker if missing, then confirm:

docker --version
docker compose version

Install Rust if missing, then confirm:

cargo --version
rustc --version

Install sqlx-cli once inside the CT if missing:

cargo install sqlx-cli --no-default-features --features postgres

6. Environment file

Create the runtime env directory and file:

mkdir -p /etc/mev-platform
cp /opt/proxmox/config/mev-platform/mev-platform-backend-ct.env.example /etc/mev-platform/backend.env
chmod 600 /etc/mev-platform/backend.env

Edit it before first boot:

  • Set MEV_ADMIN_API_KEY to a real secret.
  • Leave MEV_SUBMIT_DISABLED=1 for the first bring-up.
  • Keep MEV_CONFIG=/opt/proxmox/MEV_Bot/mev-platform/config.dev.toml unless you already have a dedicated LAN config.

Notes:

  • config.dev.toml expects Postgres on port 5434, Redis on 6379, and NATS on 4222.
  • This is the easiest way to match the included docker-compose.yml.
  • If you later promote this to a dedicated Chain 138 config, update MEV_CONFIG in the env file and restart mev-supervisor and mev-admin-api.
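Putting the notes together, a minimal backend.env might look like the sketch below. Only the three variables named in this runbook are shown; the full variable set lives in the shipped example file, so treat anything not listed here as coming from that example.

```shell
# /etc/mev-platform/backend.env -- sketch only; see
# config/mev-platform/mev-platform-backend-ct.env.example for the full set.
MEV_ADMIN_API_KEY=replace-with-your-real-key
MEV_SUBMIT_DISABLED=1
MEV_CONFIG=/opt/proxmox/MEV_Bot/mev-platform/config.dev.toml
```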

7. Build binaries inside the backend CT

Build the relevant release binaries once from the MEV workspace:

cd /opt/proxmox/MEV_Bot/mev-platform
cargo build --release -p mev-admin-api -p mev-supervisor \
  -p mev-pool-indexer -p mev-state-ingestion -p mev-liquidity-graph \
  -p mev-opportunity-search -p mev-simulation -p mev-bundle-builder \
  -p mev-execution-gateway -p mev-settlement-analytics

Confirm the binaries exist:

ls -1 /opt/proxmox/MEV_Bot/mev-platform/target/release/mev-*

8. Install systemd units

Copy the example units into /etc/systemd/system/:

cp /opt/proxmox/config/systemd/mev-infra-compose.service.example /etc/systemd/system/mev-infra-compose.service
cp /opt/proxmox/config/systemd/mev-supervisor.service.example /etc/systemd/system/mev-supervisor.service
cp /opt/proxmox/config/systemd/mev-admin-api.service.example /etc/systemd/system/mev-admin-api.service
systemctl daemon-reload

Enable them:

systemctl enable mev-infra-compose.service
systemctl enable mev-supervisor.service
systemctl enable mev-admin-api.service

9. Exact bring-up order inside the backend CT

Bring the stack up in this order.

Step 1: start infra

systemctl start mev-infra-compose.service
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'

Expected containers:

  • postgres
  • redis
  • nats

Step 2: run DB migrations

Important: override DATABASE_URL to match config.dev.toml on port 5434.

cd /opt/proxmox/MEV_Bot/mev-platform
export DATABASE_URL='postgres://postgres:postgres@127.0.0.1:5434/mev'
./scripts/run_migrations.sh

Step 3: start supervisor

systemctl start mev-supervisor.service
systemctl status --no-pager mev-supervisor.service
curl -fsS http://127.0.0.1:9091/control/status | jq .

Step 4: start admin API

systemctl start mev-admin-api.service
systemctl status --no-pager mev-admin-api.service
curl -fsS http://127.0.0.1:9090/api/health | jq .
curl -fsS http://127.0.0.1:9090/api/auth/check | jq .

At this point:

  • /api/health should return 200
  • /api/auth/check should return 200
  • the pipeline services will still show "down" until you start them in Step 5

Step 5: start the pipeline in order through supervisor

Export your API key once:

export MEV_API_KEY='replace-with-your-real-key'

Start each service in order:

curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/pool-indexer | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/state-ingestion | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/liquidity-graph | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/opportunity-search | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/simulation | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/bundle-builder | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/execution-gateway | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/settlement-analytics | jq .

Then confirm:

curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/status | jq .
curl -fsS http://127.0.0.1:9090/api/health | jq .

10. Local validation inside the backend CT

Run the built-in admin API E2E:

cd /opt/proxmox/MEV_Bot/mev-platform
MEV_ADMIN_API_KEY="$MEV_API_KEY" BASE=http://127.0.0.1:9090 ./scripts/e2e_admin_api.sh

Useful manual probes:

curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/infra | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/config | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" "http://127.0.0.1:9090/api/pools?limit=5" | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" "http://127.0.0.1:9090/api/graph/snapshot?chain_id=1" | jq .

11. Public cutover check

Before testing the public site, re-render the MEV nginx vhost so CT 2410 points at the backend CT:

cd /home/intlc/projects/proxmox
MEV_ADMIN_API_HOST=192.168.11.219 bash scripts/deployment/sync-mev-control-gui-defi-oracle.sh

Once the control plane is up inside the backend CT, confirm CT 2410 can reach it through nginx:

curl -fsS https://mev.defi-oracle.io/health
curl -fsS https://mev.defi-oracle.io/api/auth/check | jq .

If auth is enabled, test a protected route with the key:

curl -fsS -H "X-API-Key: $MEV_API_KEY" https://mev.defi-oracle.io/api/infra | jq .

Then open:

  • https://mev.defi-oracle.io/
  • https://mev.defi-oracle.io/intel
  • https://mev.defi-oracle.io/login

12. What each screen depends on

Overview:

  • needs mev-admin-api
  • needs service health ports 8080-8087 on the same host
  • infra only reports real Postgres status today; Redis and NATS are still backend TODOs

Pipeline:

  • needs the same health stack as Overview

Data:

  • needs Postgres up
  • needs migrations applied
  • needs ingest/search/execution services writing rows

Config:

  • needs MEV_CONFIG readable by mev-admin-api

Safety:

  • needs mev-admin-api
  • kill switch works even before full execution is live

Operations:

  • needs mev-supervisor
  • needs MEV_SUPERVISOR_URL=http://127.0.0.1:9091

Analytics:

  • needs executions and opportunities in Postgres

Graph:

  • needs liquidity-graph on 127.0.0.1:9082

13. Safe first-boot posture

For the first LAN bring-up, keep:

  • MEV_SUBMIT_DISABLED=1
  • MEV_ADMIN_API_KEY set
  • execution-gateway running only after you confirm config, data, and graph look sane

That lets the UI come alive without immediately turning on live submission.

14. Current known gaps after bring-up

Even after the stack is running, the following are still known implementation gaps in the MEV platform itself:

  • real Redis/NATS health reporting in /api/infra
  • signer readiness/status API
  • richer graph visualization
  • bundle signing
  • inclusion detection
  • Uniswap V3 / Curve / multicall / block-subscription gaps tracked in MEV_Bot/mev-platform/docs/REMAINING_GAPS_IMPLEMENTATION.md