chore: update submodule references and documentation
Some checks failed
Deploy to Phoenix / deploy (push) Has been cancelled
- Marked submodules ai-mcp-pmm-controller, explorer-monorepo, and smom-dbis-138 as dirty to reflect recent changes.
- Updated documentation to clarify operator script usage, including dotenv loading and task execution instructions.
- Enhanced the README and various index files to provide clearer navigation and task-completion guidance.

Made-with: Cursor
87
docs/02-architecture/AI_AGENTS_57XX_MCP_ADDENDUM.md
Normal file
@@ -0,0 +1,87 @@
# AI Agents 57xx — MCP Addendum (Multi-Chain, Uniswap, Automation)

**Purpose:** Addendum to [AI_AGENTS_57XX_DEPLOYMENT_PLAN.md](AI_AGENTS_57XX_DEPLOYMENT_PLAN.md) and [AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md](AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md) covering multi-chain MCP, the Uniswap pool profile, dashboard/API alignment, and automation triggers. Supports the dedicated MCP/AI for Dodoex and Uniswap pool management per [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](../03-deployment/MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md).

---

## 1. Multi-chain MCP

**Option:** One MCP server that supports **multiple chains**, so a single AI can read and manage pools on Chain 138 and on public chains (cW* / HUB) without running one MCP instance per chain.

- **Allowlist shape:** Each pool entry includes a **chainId** (e.g. `"chainId": "138"`, `"chainId": "137"`). The MCP uses the appropriate RPC per chain when calling `get_pool_state`, `quote_add_liquidity`, etc.
- **RPC config:** Maintain a map `chainId → RPC_URL` (e.g. from env vars such as `RPC_URL_138` and `POLYGON_MAINNET_RPC`, or from a config file). When a tool is invoked for a pool, the MCP looks up the pool's chain and uses the corresponding RPC.
- **Implementation note:** If the current ai-mcp-pmm-controller is single-chain (`CHAIN`, `RPC_URL`), extend the allowlist schema and server to accept a `chainId` per pool and to select the RPC by `chainId`. Alternatively, run one MCP instance per chain and have the AI/orchestrator call the appropriate MCP for each chain.
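The `chainId → RPC_URL` lookup described above can be sketched as follows. This is a minimal illustration: the env-var pattern `RPC_URL_<chainId>` appears in the text, but the static fallback map and function names are assumptions, not the controller's actual configuration.

```python
import os

# Hypothetical static fallback map; a real deployment would load this
# from a config file alongside the allowlist.
DEFAULT_RPC_URLS = {
    "138": "http://chain138-rpc.internal:8545",
    "137": "https://polygon-rpc.com",
}

def resolve_rpc_url(chain_id: str, env=os.environ) -> str:
    """Resolve the RPC URL for a chain: env var RPC_URL_<chainId> wins,
    then the static map; an unknown chain is an error."""
    url = env.get(f"RPC_URL_{chain_id}") or DEFAULT_RPC_URLS.get(chain_id)
    if url is None:
        raise KeyError(f"no RPC configured for chainId {chain_id}")
    return url

def rpc_for_pool(pool_entry: dict, env=os.environ) -> str:
    """Pick the RPC for an allowlisted pool entry carrying a chainId."""
    return resolve_rpc_url(pool_entry["chainId"], env)
```

Every tool call then routes through `rpc_for_pool` before touching the chain, which is the single-lookup behavior the allowlist-shape bullet implies.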
**Reference:** [POOL_ACCESS_DASHBOARD_API_MCP.md](../11-references/POOL_ACCESS_DASHBOARD_API_MCP.md); allowlist generation from [generate-mcp-allowlist-from-deployment-status.sh](../../scripts/generate-mcp-allowlist-from-deployment-status.sh) (outputs per-chain fragments that can be merged into one multi-chain allowlist).

---

## 2. Uniswap pool profile

To allow the MCP to read **Uniswap V2** and **Uniswap V3** pool state (reserves, price, liquidity) so the same AI can manage both DODO and Uniswap pools:

- **Profile ID:** `uniswap_v2_pair` and/or `uniswap_v3_pool`.
- **Expected view methods (Uniswap V2 pair):**
  - `getReserves() → (uint112 reserve0, uint112 reserve1, uint32 blockTimestampLast)`
  - `token0() → address`
  - `token1() → address`
- **Expected view methods (Uniswap V3 pool):**
  - `slot0() → (uint160 sqrtPriceX96, int24 tick, uint16 observationIndex, ...)`
  - `liquidity() → uint128`
  - `token0() → address`, `token1() → address`
- **MCP behavior:** For allowlisted pools with profile `uniswap_v2_pair` or `uniswap_v3_pool`, the MCP calls the corresponding view methods on the pool contract and returns normalized state (e.g. reserves, derived price) so the AI can reason about liquidity and rebalancing the same way as for DODO pools.

**Config:** Add entries to `ai-mcp-pmm-controller/config/pool_profiles.json` (see below). Pools created on a chain that uses Uniswap (from the token-aggregation indexer or deployment-status) should be added to the allowlist with this profile.
**Example pool_profiles addition:**

```json
"uniswap_v2_pair": {
  "methods": {
    "get_reserves": "getReserves",
    "token0": "token0",
    "token1": "token1"
  }
},
"uniswap_v3_pool": {
  "methods": {
    "slot0": "slot0",
    "liquidity": "liquidity",
    "token0": "token0",
    "token1": "token1"
  }
}
```
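The "derived price" normalization above is standard Uniswap math: a V2 pair's spot price of token0 in token1 is `reserve1 / reserve0`, and a V3 pool's is `(sqrtPriceX96 / 2^96)^2`, both in raw token units before decimal adjustment. A minimal sketch (function names are illustrative, not the MCP's actual API):

```python
def v2_price_from_reserves(reserve0: int, reserve1: int) -> float:
    """Uniswap V2 spot price of token0 in token1, in raw token units
    (token-decimal adjustment is deliberately left out)."""
    return reserve1 / reserve0

def v3_price_from_sqrt_price_x96(sqrt_price_x96: int) -> float:
    """Uniswap V3 price of token0 in token1 from slot0's sqrtPriceX96,
    which is sqrt(price) as a Q64.96 fixed-point number."""
    return (sqrt_price_x96 / (1 << 96)) ** 2
```

Normalizing both pool families to a single price field is what lets the AI apply the same deviation/rebalance reasoning to DODO and Uniswap pools.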
---

## 3. Dashboard and API alignment

The **token-aggregation API** and the **MCP** should expose **the same set of pools** for a given chain (DODO + Uniswap once indexed):

- **Source of truth:**
  1. **Chain 138:** `DODOPMMIntegration.getAllPools()` + `poolConfigs` drives both the indexer and the MCP allowlist (via `generate-mcp-allowlist-from-chain138.sh`).
  2. **Public chains:** deployment-status.json `pmmPools` (and any Uniswap pool list) drives both the indexer config (`CHAIN_*_DODO_*`, `CHAIN_*_UNISWAP_*`) and the MCP allowlist (via `generate-mcp-allowlist-from-deployment-status.sh`).
- **Practice:** After deploying or creating pools on a chain: (1) update deployment-status.json (for Chain 138 the integration already has the pools on-chain); (2) run the appropriate allowlist generator script; (3) ensure the token-aggregation indexer has the correct factory/integration env for that chain and has been run. The custom dashboard (using the API) and the MCP/AI (using the allowlist) then see the same pools.
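A consistency check for the "same set of pools" invariant can be sketched as below. The entry fields (`chainId`, `address`) follow the allowlist shape from §1, but treating both the API payload and the allowlist as lists of such entries is an assumption:

```python
def pool_set(entries, chain_id):
    """Normalize a list of pool entries to a set of lowercased addresses
    for one chain, so 0xAB.. and 0xab.. compare equal."""
    return {
        e["address"].lower()
        for e in entries
        if str(e.get("chainId")) == str(chain_id)
    }

def check_alignment(api_pools, allowlist_pools, chain_id):
    """Return (missing_in_allowlist, missing_in_api) for one chain;
    both empty means dashboard and MCP see the same pools."""
    a = pool_set(api_pools, chain_id)
    b = pool_set(allowlist_pools, chain_id)
    return sorted(a - b), sorted(b - a)
```

Running a check like this after step (2) and (3) of the practice above would catch a pool that was indexed but never allowlisted, or vice versa.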
---

## 4. Automation triggers

How the dedicated AI (pool manager) is **triggered** to read state and optionally execute rebalance/add/remove liquidity:

| Trigger | Description |
|---------|-------------|
| **Scheduled** | A cron job or scheduler (e.g. every 5–15 min) calls the MCP/API to get pool state for all allowlisted pools; the AI (or a rule engine) decides whether to rebalance, add, or remove liquidity; if allowed by policy, the executor submits the tx. |
| **Event-driven** | The indexer or a chain watcher emits events (e.g. "reserve delta > X" or "price deviation > band"); this triggers the AI to fetch state via the MCP/API and decide on an action; the executor runs within cooldown and circuit-break rules. |
| **Manual** | An operator asks the AI (via MCP or chat) for a quote or recommendation (e.g. "quote add liquidity for pool X"); the AI returns a suggestion; the operator executes the tx manually. |

**Policy:** Document which triggers are enabled (scheduled vs event-driven vs manual), the max trade size, cooldown, and circuit-break thresholds so the AI/executor stays within guardrails. See [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](../03-deployment/MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md) and the allowlist `limits` in allowlist-138.json.
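The cooldown and circuit-break guardrails can be sketched as a small policy object that the executor consults before every tx. The thresholds and method names here are illustrative, not the documented policy values:

```python
import time

class Guardrails:
    """Minimal policy sketch: per-pool cooldown plus a circuit breaker
    that halts all execution after too many consecutive failures."""

    def __init__(self, cooldown_s=900, max_failures=3):
        self.cooldown_s = cooldown_s
        self.max_failures = max_failures
        self.last_run = {}   # pool address -> last execution timestamp
        self.failures = 0    # consecutive failed executions
        self.tripped = False

    def allow(self, pool: str, now=None) -> bool:
        """True if the executor may act on this pool right now."""
        if self.tripped:
            return False
        now = time.time() if now is None else now
        last = self.last_run.get(pool)
        return last is None or now - last >= self.cooldown_s

    def record(self, pool: str, ok: bool, now=None):
        """Record an execution outcome; trip the breaker on repeats."""
        self.last_run[pool] = time.time() if now is None else now
        self.failures = 0 if ok else self.failures + 1
        if self.failures >= self.max_failures:
            self.tripped = True  # circuit break: stop all execution
```

The same object serves both the scheduled and event-driven rows of the table: whichever path proposes an action, the executor only proceeds when `allow()` returns true.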
---

## 5. References

- [AI_AGENTS_57XX_DEPLOYMENT_PLAN.md](AI_AGENTS_57XX_DEPLOYMENT_PLAN.md)
- [AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md](AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md)
- [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](../03-deployment/MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md)
- [POOL_ACCESS_DASHBOARD_API_MCP.md](../11-references/POOL_ACCESS_DASHBOARD_API_MCP.md)
- [PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md](../03-deployment/PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md) §2.9
@@ -82,6 +82,7 @@ So: **no** new chain-specific contracts are “for the MCP” itself; the MCP on

## 5. References

- [AI_AGENTS_57XX_MCP_ADDENDUM.md](AI_AGENTS_57XX_MCP_ADDENDUM.md) — Multi-chain MCP, Uniswap pool profile, dashboard/API alignment, automation triggers
- MCP allowlist shape: `ai-mcp-pmm-controller/config/allowlist.json`
- MCP pool profile (view methods): `ai-mcp-pmm-controller/config/pool_profiles.json`
- Chain 138 tokens: `docs/11-references/CHAIN138_TOKEN_ADDRESSES.md`
@@ -16,6 +16,8 @@ This document defines the target architecture for a **13-node Dell PowerEdge R63

**Scope:** All 13 R630s as Proxmox cluster nodes; optional separate management node (e.g. ml110) or integration of management on a subset of R630s. Design assumes **hyper-converged** (Proxmox + Ceph on the same nodes) for shared storage and true HA.

**Extended inventory:** The same site includes 3× Dell R750 servers, 2× Dell Precision 7920 workstations, and 2× UniFi Dream Machine Pro (gateways). See [HARDWARE_INVENTORY_MASTER.md](../11-references/HARDWARE_INVENTORY_MASTER.md), [13_NODE_NETWORK_AND_CABLING_CHECKLIST.md](../11-references/13_NODE_NETWORK_AND_CABLING_CHECKLIST.md), and [13_NODE_AND_ASSETS_BRING_ONLINE_CHECKLIST.md](../11-references/13_NODE_AND_ASSETS_BRING_ONLINE_CHECKLIST.md) for network topology, cabling, and bring-online order.

---

## 2. Cluster Design — 13 Nodes

@@ -45,6 +47,14 @@ This document defines the target architecture for a **13-node Dell PowerEdge R63

- **VLANs:** The same VLAN-aware bridge (e.g. vmbr0) on all nodes so VMs/containers keep their IPs when failed over.
- **IP plan for 13 R630s:** Reserve 13 consecutive IPs (e.g. 192.168.11.11–192.168.11.23 for r630-01 … r630-13). Document them in `config/ip-addresses.conf` and DNS.
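The IP reservation above is mechanical enough to generate rather than hand-type; a minimal sketch using the subnet from the example (hostname format and base address are taken from the text, the helper itself is illustrative):

```python
import ipaddress

def r630_ip_plan(base="192.168.11.11", count=13):
    """Map r630-01 .. r630-NN to consecutive IPs starting at `base`,
    matching the example reservation 192.168.11.11-192.168.11.23."""
    start = ipaddress.IPv4Address(base)
    return {f"r630-{i + 1:02d}": str(start + i) for i in range(count)}
```

The output can be pasted into `config/ip-addresses.conf` and DNS so the documented plan and the generated one cannot drift.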
### 2.4 Switching (10G backbone)

**Inventory:** 2 × UniFi XG 10G 16-port switches (see [HARDWARE_INVENTORY_MASTER.md](../11-references/HARDWARE_INVENTORY_MASTER.md)).

- Use for the **Ceph cluster network** and inter-node traffic; connect all 13 R630s via 10G for storage and replication.
- **Redundancy:** Two switches allow dual-attach per node (e.g. one link per switch, or LACP) for HA.
- **Management:** Can stay on the existing 1G LAN, or use 10G for management if the NICs support it.

---

## 3. RAM Specifications — R630