MEV Control LAN Bring-Up Checklist
Last Updated: 2026-04-12
Status: Operator checklist for the current Proxmox / Sankofa LAN
This runbook turns the current MEV Control deployment into a working LAN stack for https://mev.defi-oracle.io.
It is based on the repo's current assumptions:
- Public GUI static assets are served from VMID 2410 (`192.168.11.218`) via nginx.
- The GUI proxies same-origin `/api/*` to `MEV_ADMIN_API_HOST:MEV_ADMIN_API_PORT`.
- The shared repo default upstream is still `192.168.11.11:9090` in `config/ip-addresses.conf`, but for this deployment we intentionally override it to a dedicated backend CT on `r630-04`.
- The admin API expects the pipeline health ports 8080-8087 and the graph API port 9082 to be reachable on localhost.
Because of that localhost assumption, the cleanest contained split topology for this deployment is:
- Backend Proxmox host: `r630-04` at `192.168.11.14`
- Backend CT: VMID `2421` at `192.168.11.219`
- Public web host: `info-defi-oracle-web` CT 2410 at `192.168.11.218`
1. Host choice
Use the dedicated backend CT for:
- `mev-admin-api`
- `mev-supervisor`
- pipeline services
- local Docker infra (`postgres`, `redis`, `nats`)

Use CT 2410 only for:
- static GUI files
- nginx reverse proxy of `/api` to `192.168.11.219:9090`
Why this is the recommended topology:
- It preserves the already-working public edge on CT `2410` and only moves the backend.
- It keeps application services off the Proxmox hosts themselves, so the stack stays portable and migratable.
- It keeps the control plane portable with `pct migrate`.
- It puts the whole contained backend on a much less contended node.
- The admin API health handler probes `127.0.0.1:8080-8087`, so splitting services across multiple hosts will make the Overview page lie unless the code is changed first.
- The graph proxy default is `http://127.0.0.1:9082`, so colocating `liquidity-graph` with `mev-admin-api` avoids extra rewiring.
2. CT provisioning
Provision the backend CT on r630-04 first:
cd /home/intlc/projects/proxmox
bash scripts/deployment/provision-mev-control-backend-lxc.sh --dry-run
bash scripts/deployment/provision-mev-control-backend-lxc.sh
Default CT identity:
- Proxmox host: `192.168.11.14`
- VMID: `2421`
- CT IP: `192.168.11.219`
- Hostname: `mev-control-backend`
The provisioner creates an unprivileged Debian 12 CT with:
- 16 vCPU
- 32 GiB RAM
- 8 GiB swap
- 200 GiB rootfs
- `nesting=1,keyctl=1`
These defaults are intentionally generous for the first contained deployment and can be tuned later.
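For orientation, the defaults above correspond roughly to the `pct create` invocation sketched below. The template version, storage pool (`local-lvm`), bridge (`vmbr0`), and gateway are assumptions for illustration only; the provisioning script remains authoritative.

```shell
# Hypothetical equivalent of what the provisioner configures, echoed for
# review rather than executed. Template version, storage pool, bridge, and
# gateway are assumptions -- check them against your Proxmox node first.
VMID=2421
cmd="pct create $VMID local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname mev-control-backend --unprivileged 1 \
  --cores 16 --memory 32768 --swap 8192 \
  --rootfs local-lvm:200 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.11.219/24,gw=192.168.11.1 \
  --features nesting=1,keyctl=1"
echo "$cmd"
```

Rendering the command with `echo` first makes it easy to compare against what the script actually does before touching `r630-04`.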
3. Paths inside the backend CT
Recommended checkout path inside the CT:
/opt/proxmox
Relevant MEV subpath:
/opt/proxmox/MEV_Bot/mev-platform
Recommended env file path:
/etc/mev-platform/backend.env
4. Files to use
- Runbook: this document
- GUI edge deploy: MEV_CONTROL_DEFI_ORACLE_IO_DEPLOYMENT.md
- CT provisioner: scripts/deployment/provision-mev-control-backend-lxc.sh
- Env template: config/mev-platform/mev-platform-backend-ct.env.example
- Systemd examples: the `.service.example` units under `config/systemd/` (copied in section 8)
5. Preflight inside the backend CT
Open a shell into the CT:
ssh root@192.168.11.14 "pct enter 2421"
Clone or update the repo:
mkdir -p /opt
cd /opt
git clone https://github.com/Order-of-Hospitallers/proxmox-cp.git proxmox || true
cd /opt/proxmox
git pull --ff-only
git submodule update --init --recursive MEV_Bot
Install runtime prerequisites if missing:
apt-get update
apt-get install -y curl jq git build-essential pkg-config libssl-dev ca-certificates
Install Docker if missing, then confirm:
docker --version
docker compose version
Install Rust if missing, then confirm:
cargo --version
rustc --version
Install `sqlx-cli` once inside the CT if missing:
cargo install sqlx-cli --no-default-features --features postgres
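The individual checks above can be rolled into a single pass. This is a minimal sketch; the tool list simply mirrors the prerequisites named in this section.

```shell
# Report any missing build/runtime tools in one pass before continuing.
missing=""
for tool in curl jq git docker cargo rustc sqlx; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "preflight ok"
else
  echo "install first:$missing"
fi
```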
6. Environment file
Create the runtime env directory and file:
mkdir -p /etc/mev-platform
cp /opt/proxmox/config/mev-platform/mev-platform-backend-ct.env.example /etc/mev-platform/backend.env
chmod 600 /etc/mev-platform/backend.env
Edit it before first boot:
- Set `MEV_ADMIN_API_KEY` to a real secret.
- Leave `MEV_SUBMIT_DISABLED=1` for the first bring-up.
- Keep `MEV_CONFIG=/opt/proxmox/MEV_Bot/mev-platform/config.dev.toml` unless you already have a dedicated LAN config.
Notes:
- `config.dev.toml` expects Postgres on port `5434`, Redis on `6379`, and NATS on `4222`. This is the easiest way to match the included `docker-compose.yml`.
- If you later promote this to a dedicated Chain 138 config, update `MEV_CONFIG` in the env file and restart `mev-supervisor` and `mev-admin-api`.
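Putting the edits above together, a minimal first-boot `backend.env` might look like the sketch below. Values are illustrative; the key names come from this runbook, and the checked-in example file remains the source of truth for the full set.

```shell
# Illustrative /etc/mev-platform/backend.env for first bring-up.
# Set a real secret before starting mev-admin-api.
MEV_ADMIN_API_KEY=replace-with-your-real-key
# Keep live submission off for the first boot.
MEV_SUBMIT_DISABLED=1
MEV_CONFIG=/opt/proxmox/MEV_Bot/mev-platform/config.dev.toml
MEV_SUPERVISOR_URL=http://127.0.0.1:9091
```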
7. Build binaries inside the backend CT
Build the relevant release binaries once from the MEV workspace:
cd /opt/proxmox/MEV_Bot/mev-platform
cargo build --release -p mev-admin-api -p mev-supervisor \
-p mev-pool-indexer -p mev-state-ingestion -p mev-liquidity-graph \
-p mev-opportunity-search -p mev-simulation -p mev-bundle-builder \
-p mev-execution-gateway -p mev-settlement-analytics
Confirm the binaries exist:
ls -1 /opt/proxmox/MEV_Bot/mev-platform/target/release/mev-*
8. Install systemd units
Copy the example units into /etc/systemd/system/:
cp /opt/proxmox/config/systemd/mev-infra-compose.service.example /etc/systemd/system/mev-infra-compose.service
cp /opt/proxmox/config/systemd/mev-supervisor.service.example /etc/systemd/system/mev-supervisor.service
cp /opt/proxmox/config/systemd/mev-admin-api.service.example /etc/systemd/system/mev-admin-api.service
systemctl daemon-reload
Enable them:
systemctl enable mev-infra-compose.service
systemctl enable mev-supervisor.service
systemctl enable mev-admin-api.service
9. Exact bring-up order inside the backend CT
Bring the stack up in this order.
Step 1: start infra
systemctl start mev-infra-compose.service
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
Expected containers:
- `postgres`
- `redis`
- `nats`
Step 2: run DB migrations
Important: override `DATABASE_URL` to match `config.dev.toml` (Postgres on port `5434`).
cd /opt/proxmox/MEV_Bot/mev-platform
export DATABASE_URL='postgres://postgres:postgres@127.0.0.1:5434/mev'
./scripts/run_migrations.sh
Step 3: start supervisor
systemctl start mev-supervisor.service
systemctl status --no-pager mev-supervisor.service
curl -fsS http://127.0.0.1:9091/control/status | jq .
Step 4: start admin API
systemctl start mev-admin-api.service
systemctl status --no-pager mev-admin-api.service
curl -fsS http://127.0.0.1:9090/api/health | jq .
curl -fsS http://127.0.0.1:9090/api/auth/check | jq .
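If the API needs a moment to bind, a small wait loop avoids racing the health checks. A sketch, using the same endpoint and port as the checks above; the retry budget is arbitrary.

```shell
# Poll /api/health briefly instead of failing on the first curl.
tries=0
until curl -fsS http://127.0.0.1:9090/api/health >/dev/null 2>&1; do
  tries=$((tries + 1))
  if [ "$tries" -ge 5 ]; then
    echo "admin API still not answering after $tries attempts"
    break
  fi
  sleep 1
done
```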
At this point:
- `/api/health` should return `200`
- `/api/auth/check` should return `200`
- all services will still likely show `"down"` until you start them
Step 5: start the pipeline in order through supervisor
Export your API key once:
export MEV_API_KEY='replace-with-your-real-key'
Start each service in order:
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/pool-indexer | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/state-ingestion | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/liquidity-graph | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/opportunity-search | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/simulation | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/bundle-builder | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/execution-gateway | jq .
sleep 2
curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/settlement-analytics | jq .
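The eight start calls above can also be generated from a list, which keeps the ordering in one place. A sketch that prints the commands for review rather than executing them (pipe the output to `sh` to run them); service names and order are taken verbatim from the steps above.

```shell
# Emit the ordered start commands; $MEV_API_KEY is expanded at execution
# time, not here, because the format string is single-quoted.
cmds=$(for svc in pool-indexer state-ingestion liquidity-graph \
                  opportunity-search simulation bundle-builder \
                  execution-gateway settlement-analytics; do
  printf 'curl -fsS -X POST -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/start/%s | jq .\n' "$svc"
  printf 'sleep 2\n'
done)
printf '%s\n' "$cmds"
```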
Then confirm:
curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/control/status | jq .
curl -fsS http://127.0.0.1:9090/api/health | jq .
10. Local validation inside the backend CT
Run the built-in admin API E2E:
cd /opt/proxmox/MEV_Bot/mev-platform
MEV_ADMIN_API_KEY="$MEV_API_KEY" BASE=http://127.0.0.1:9090 ./scripts/e2e_admin_api.sh
Useful manual probes:
curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/infra | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" http://127.0.0.1:9090/api/config | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" "http://127.0.0.1:9090/api/pools?limit=5" | jq .
curl -fsS -H "X-API-Key: $MEV_API_KEY" "http://127.0.0.1:9090/api/graph/snapshot?chain_id=1" | jq .
11. Public cutover check
Before testing the public site, re-render the MEV nginx vhost so CT 2410 points at the backend CT:
cd /home/intlc/projects/proxmox
MEV_ADMIN_API_HOST=192.168.11.219 bash scripts/deployment/sync-mev-control-gui-defi-oracle.sh
Once the control plane is up inside the backend CT, confirm CT 2410 can reach it through nginx:
curl -fsS https://mev.defi-oracle.io/health
curl -fsS https://mev.defi-oracle.io/api/auth/check | jq .
If auth is enabled, test a protected route with the key:
curl -fsS -H "X-API-Key: $MEV_API_KEY" https://mev.defi-oracle.io/api/infra | jq .
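The public checks can be wrapped in a small loop that reports one status per endpoint. A sketch, assuming the same hostname and paths used above; the protected `/api/infra` probe is left out since it needs the key header.

```shell
# Print an HTTP status per public endpoint; FAIL on connection errors.
base="https://mev.defi-oracle.io"
for p in /health /api/auth/check; do
  code=$(curl -fsS -o /dev/null -w '%{http_code}' "$base$p" 2>/dev/null) || code=FAIL
  echo "$base$p -> $code"
done
```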
Then open:
- https://mev.defi-oracle.io/
- https://mev.defi-oracle.io/intel
- https://mev.defi-oracle.io/login
12. What each screen depends on
Overview:
- needs `mev-admin-api`
- needs service health ports 8080-8087 on the same host
- infra only reports real Postgres status today; Redis and NATS are still backend TODOs
Pipeline:
- needs the same health stack as Overview
Data:
- needs Postgres up
- needs migrations applied
- needs ingest/search/execution services writing rows
Config:
- needs `MEV_CONFIG` readable by `mev-admin-api`
Safety:
- needs `mev-admin-api`
- kill switch works even before full execution is live
Operations:
- needs `mev-supervisor`
- needs `MEV_SUPERVISOR_URL=http://127.0.0.1:9091`
Analytics:
- needs executions and opportunities in Postgres
Graph:
- needs `liquidity-graph` on `127.0.0.1:9082`
13. Safe first-boot posture
For the first LAN bring-up, keep:
- `MEV_SUBMIT_DISABLED=1`
- `MEV_ADMIN_API_KEY` set
- `execution-gateway` running only after you confirm config, data, and graph look sane
That lets the UI come alive without immediately turning on live submission.
14. Current known gaps after bring-up
Even after the stack is running, the following are still known implementation gaps in the MEV platform itself:
- real Redis/NATS health reporting in `/api/infra`
- signer readiness/status API
- richer graph visualization
- bundle signing
- inclusion detection
- Uniswap V3 / Curve / multicall / block-subscription gaps tracked in `MEV_Bot/mev-platform/docs/REMAINING_GAPS_IMPLEMENTATION.md`