Sync workspace: config, docs, scripts, CI, operator rules, and submodule pointers.

- Update dbis_core, cross-chain-pmm-lps, explorer-monorepo, metamask-integration, pr-workspace/chains
- Omit embedded publish git dirs and empty placeholders from index

Made-with: Cursor
This commit is contained in:
defiQUG
2026-04-12 06:12:20 -07:00
parent 6fb6bd3993
commit dbd517b279
2935 changed files with 327972 additions and 5533 deletions

View File

@@ -1,6 +1,6 @@
# Scripts Directory
**Last Updated:** 2026-01-31
**Last Updated:** 2026-04-06
---
@@ -165,12 +165,16 @@ export CCIP_DEST_CHAIN_SELECTOR=5009297550715157269 # Ethereum mainnet
Default bridge in `.env` is the **LINK-fee** bridge (pay fee in Chain 138 LINK). To pay fee in **native ETH**, set `CCIPWETH9_BRIDGE_CHAIN138=0x63cbeE010D64ab7F1760ad84482D6cC380435ab5`.
**Requirements:** Sender must have (1) WETH on Chain 138 (balance ≥ amount), (2) for LINK-fee bridge: LINK on Chain 138 approved for the bridge; for native-ETH bridge: sufficient ETH for fee. When using a **new** bridge address, approve both WETH and LINK to that bridge. Recipient defaults to sender address if omitted.
**Requirements:** Sender must have (1) WETH on Chain 138 (balance ≥ amount), (2) WETH approved to the Chain 138 bridge for at least the send amount, (3) for LINK-fee bridge: LINK on Chain 138 approved for the bridge fee amount; for native-ETH bridge: sufficient ETH for fee. For relay-backed first hops (Mainnet, BSC, Avalanche), the destination relay inventory must also already hold at least the amount being sent. Recipient defaults to sender address if omitted.
**If send reverts** (e.g. `0x9996b315` with fee-token address): the CCIP router on Chain 138 may not accept the bridge's fee token (LINK at `0xb772...`). See [docs/07-ccip/SEND_ETH_TO_MAINNET_REVERT_TRACE.md](../docs/07-ccip/SEND_ETH_TO_MAINNET_REVERT_TRACE.md) for the revert trace and fix options.
**Env:** `CCIP_DEST_CHAIN_SELECTOR` (default: 5009297550715157269 = Ethereum mainnet); `GAS_PRICE` (default: 1000000000); `CONFIRM_ABOVE_ETH` (optional; prompt for confirmation above this amount).
**Direct first-hop guard:** This helper now only allows proven direct first hops from Chain 138 to Mainnet, BSC, or Avalanche. It also fails fast when the source-token allowance is missing or when the destination relay inventory is smaller than the requested send amount. For Gnosis, Cronos, Celo, Polygon, Arbitrum, Optimism, and Base, use the Mainnet hub unless you intentionally override with `ALLOW_UNSUPPORTED_DIRECT_FIRST_HOP=1`.
**Source quote preflight:** `smom-dbis-138/scripts/ccip/ccip-send.sh` now fails before approvals or send attempts if `calculateFee()` already reverts on the chosen source bridge. As of 2026-04-04 UTC, the active Mainnet `WETH9` public fan-out path is quote-blocked on the tracked selectors `BSC`, `Avalanche`, `Gnosis`, `Cronos`, `Celo`, `Polygon`, `Arbitrum`, `Optimism`, and `Base`, so do not assume Mainnet hub fan-out is usable until that bridge/router path is repaired.
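The direct first-hop guard above can be sketched as a standalone function (hypothetical names, not the helper's actual code; the real logic lives in `smom-dbis-138/scripts/ccip/ccip-send.sh`):

```shell
# Hedged sketch of the direct first-hop guard described above.
# allow_first_hop is a hypothetical stand-in, not the helper's real function.
allow_first_hop() {
  local dest="$1"
  case "$dest" in
    mainnet|bsc|avalanche) return 0 ;;                             # proven direct first hops
    *) [[ "${ALLOW_UNSUPPORTED_DIRECT_FIRST_HOP:-0}" == "1" ]] ;;  # explicit override only
  esac
}
for dest in mainnet gnosis; do
  if allow_first_hop "$dest"; then echo "$dest: allowed"; else echo "$dest: blocked"; fi
done
```

Without the override, Gnosis (and the other non-proven selectors) route via the Mainnet hub.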
### 9. DBIS Frontend Deploy to Container
Deploy dbis-frontend build to Proxmox container VMID 10130. Builds locally, pushes dist, reloads nginx.
@@ -196,12 +200,17 @@ CT 2301 (besu-rpc-private-1) may fail to start with `lxc.hook.pre-start` due to
- **NPMplus backup:** `./scripts/verify/backup-npmplus.sh [--dry-run]` — requires NPM_PASSWORD in .env
- **Wave 0 from LAN:** `./scripts/run-wave0-from-lan.sh [--dry-run] [--skip-backup] [--skip-rpc-fix]` — runs NPMplus RPC fix (W0-1) and NPMplus backup (W0-3); W0-2 (sendCrossChain) run separately without `--dry-run`.
- **All waves (max parallel):** `./scripts/run-all-waves-parallel.sh [--dry-run] [--skip-wave0] [--skip-wave2] [--host HOST]` — Wave 0 via SSH, Wave 1 parallel (env, cron, SSH/firewall dry-run, shellcheck, validate), Wave 2 W2-6 (create 2506/2507/2508). See `docs/00-meta/FULL_PARALLEL_EXECUTION_ORDER.md` and `FULL_PARALLEL_RUN_LOG.md`.
- **NPMplus backup cron:** `./scripts/maintenance/schedule-npmplus-backup-cron.sh [--install|--show]` — add or print daily 03:00 cron for backup-npmplus.sh.
- **NPMplus backup cron:** `./scripts/maintenance/schedule-npmplus-backup-cron.sh [--install|--show]` — add or print daily 03:00 cron for backup-npmplus.sh. Use from a persistent host checkout, e.g. `CRON_PROJECT_ROOT=/srv/proxmox`.
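For reference, `--install` presumably ends up adding a crontab line of roughly this shape (the log path here is an illustrative assumption, not the script's literal output):

```shell
# Illustrative daily-03:00 crontab entry for the NPMplus backup described above.
# The log path is an assumption for the sketch, not the script's exact output.
CRON_PROJECT_ROOT=/srv/proxmox
printf '0 3 * * * cd %s && ./scripts/verify/backup-npmplus.sh >> /var/log/npmplus-backup.log 2>&1\n' \
  "$CRON_PROJECT_ROOT"
```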
- **Security:** `./scripts/security/secure-env-permissions.sh [--dry-run]` or `chmod 600 .env smom-dbis-138/.env dbis_core/.env` — secure env files. **Validator keys (W1-19):** On Proxmox host as root: `./scripts/secure-validator-keys.sh [--dry-run]` (VMIDs 1000-1004).
- **info.defi-oracle.io public smoke:** `./scripts/verify/check-info-defi-oracle-public.sh` — HTTPS SPA, `/llms.txt`, `/agent-hints.json`, same-origin `/token-aggregation/api/v1/networks` JSON. Optional `INFO_SITE_BASE=https://staging.example.com`. Wrappers: `pnpm run verify:info-defi-oracle-public`; scored browser audit: `pnpm run audit:info-defi-oracle-site` (Chromium via `pnpm exec playwright install chromium`). Deploy to dedicated LXC: `./scripts/deployment/sync-info-defi-oracle-to-vmid2400.sh` (VMID **2410**). Also run (non-fatal) from `./scripts/run-operator-tasks-from-lan.sh`, `./scripts/run-all-operator-tasks-from-lan.sh`, and after E2E in `./scripts/run-full-operator-completion-from-lan.sh`. Runbook: [INFO_DEFI_ORACLE_IO_DEPLOYMENT.md](../docs/04-configuration/INFO_DEFI_ORACLE_IO_DEPLOYMENT.md).
- **Explorer token-aggregation API:** Build bundle with `./scripts/deploy-token-aggregation-for-publication.sh`, rsync to explorer with `./scripts/deployment/push-token-aggregation-bundle-to-explorer.sh` (`EXPLORER_SSH`, `REMOTE_DIR`, optional `systemctl restart token-aggregation`). Verify: `pnpm run verify:token-aggregation-api` or `./scripts/verify/check-token-aggregation-chain138-api.sh`. Apex `/api/v1/*` vs `/token-aggregation/api/v1/*` and planner POST issues: `./scripts/fix-explorer-http-api-v1-proxy.sh`, `./scripts/fix-explorer-token-aggregation-api-v2-proxy.sh`. Runbook: [TOKEN_AGGREGATION_REPORT_API_RUNBOOK.md](../docs/04-configuration/TOKEN_AGGREGATION_REPORT_API_RUNBOOK.md).
### 12. Maintenance (135-139)
- **Daily/weekly checks:** `./scripts/maintenance/daily-weekly-checks.sh [daily|weekly|all]` — explorer sync (135), RPC health (136), config API (137). **Cron:** `./scripts/maintenance/schedule-daily-weekly-cron.sh [--install|--show]` (daily 08:00, weekly Sun 09:00). See [OPERATIONAL_RUNBOOKS.md](../docs/03-deployment/OPERATIONAL_RUNBOOKS.md) § Maintenance.
- **Daily/weekly checks:** `./scripts/maintenance/daily-weekly-checks.sh [daily|weekly|all]` — explorer sync (135), RPC health (136), config API (137). **Cron:** `./scripts/maintenance/schedule-daily-weekly-cron.sh [--install|--show]` (daily 08:00, weekly Sun 09:00). Use from a persistent host checkout, e.g. `CRON_PROJECT_ROOT=/srv/proxmox`. See [OPERATIONAL_RUNBOOKS.md](../docs/03-deployment/OPERATIONAL_RUNBOOKS.md) § Maintenance.
- **Ensure FireFly primary (6200):** `./scripts/maintenance/ensure-firefly-primary-via-ssh.sh [--dry-run]` — normalize the compose file expected by the installed `docker-compose`, install the idempotent helper-backed `firefly.service`, and verify `/api/v1/status` for the current mixed legacy-plus-compose stack.
- **Ensure Fabric sample network (6000):** `./scripts/maintenance/ensure-fabric-sample-network-via-ssh.sh [--dry-run]` — ensure nested-LXC features, install the boot-time `fabric-sample-network.service`, and verify `mychannel`.
- **Ensure legacy monitor networking (3000-3003):** `./scripts/maintenance/ensure-legacy-monitor-networkd-via-ssh.sh [--dry-run]` — host-side enable plus in-guest start for `systemd-networkd` on the legacy monitor/RPC-adjacent LXCs so their static LAN IPs actually come up.
- **Start firefly-ali-1 (6201):** `./scripts/maintenance/start-firefly-6201.sh [--dry-run] [--host HOST]` — start CT 6201 on r630-02 when needed (optional ongoing).
- **Config validation (pre-deploy):** `./scripts/validation/validate-config-files.sh` — set `VALIDATE_REQUIRED_FILES` for required paths. **CI / all validation:** `./scripts/verify/run-all-validation.sh [--skip-genesis]` — dependencies + config + optional genesis (no LAN/SSH).
@@ -209,11 +218,100 @@ CT 2301 (besu-rpc-private-1) may fail to start with `lxc.hook.pre-start` due to
- **Monitoring (Phase 2):** `./scripts/deployment/phase2-observability.sh [--config-only]` — writes `config/monitoring/` (prometheus.yml, alertmanager.yml).
- **Security (Phase 2):** `./scripts/security/setup-ssh-key-auth.sh [--dry-run|--apply]`, `./scripts/security/firewall-proxmox-8006.sh [--dry-run|--apply] [CIDR]`.
- **Proxmox SSH / FQDN:** `./scripts/security/ensure-proxmox-ssh-access.sh` (all five mgmt IPs; `--fqdn` for `*.sankofa.nexus`; `--copy` for `ssh-copy-id`). `./scripts/verify/check-proxmox-mgmt-fqdn.sh` (`--print-hosts` for `/etc/hosts`).
- **Backup (Phase 2):** `./scripts/backup/automated-backup.sh [--dry-run] [--with-npmplus]` — config + optional NPMplus; cron in header.
- **CCIP (Phase 3):** `./scripts/ccip/ccip-deploy-checklist.sh` — env check and deployment order from spec.
- **Sovereign tenants (Phase 4):** `./scripts/deployment/phase4-sovereign-tenants.sh [--show-steps|--dry-run]` — checklist; full runbook in OPERATIONAL_RUNBOOKS § Phase 4.
- **Full verification (6 steps):** `./scripts/verify/run-full-verification.sh` — Step 0: config validation; Steps 15: DNS, UDM Pro, NPMplus, backend VMs, E2E routing; Step 6: source-of-truth JSON. Run from project root.
### 14. Public Mainnet DODO cW swaps
Repeatable helper for the first public Mainnet DODO PMM `cW*` pools, including the USD bootstrap set and the first non-USD Wave 1 rows.
**Usage:**
```bash
# Dry-run, including quote-source detection and reserve fallback
bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh \
--pair=cwusdt-usdc \
--direction=base-to-quote \
--amount=5000 \
--dry-run
# Live CHF row proof
bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh \
--pair=cwchfc-usdc \
--direction=quote-to-base \
--amount=1000
# Live tiny swap
bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh \
--pair=cwusdt-usdc \
--direction=base-to-quote \
--amount=5000
# Dry-run the first non-USD Wave 1 Mainnet pool
bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh \
--pair=cweurc-usdc \
--direction=base-to-quote \
--amount=1000 \
--dry-run
```
**Supported pairs:** `cwusdt-usdc`, `cwusdc-usdc`, `cwusdt-usdt`, `cwusdc-usdt`, `cwusdt-cwusdc`, `cweurc-usdc`, `cwgbpc-usdc`, `cwaudc-usdc`, `cwcadc-usdc`, `cwjpyc-usdc`
**Supported directions:** `base-to-quote`, `quote-to-base`
**Important note:** the live Mainnet DODO pools can execute swaps, but the direct hosted `querySellBase` / `querySellQuote` read path may revert. This helper tries the direct pool read first and then falls back to a conservative reserve-based quote when needed, and it prints `quoteSource=pool_query` or `quoteSource=reserve_fallback` so the operator can see which path was used.
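In pseudocode, the fallback described above amounts to the following (`pool_quote` and `reserve_quote` are hypothetical stand-ins for the helper's internal steps; the amount is illustrative):

```shell
# Hedged sketch of the quote-source fallback described above.
pool_quote()    { return 1; }               # stand-in: direct querySellBase/querySellQuote reverts
reserve_quote() { echo "amountOut=4998"; }  # stand-in: conservative reserve-based quote
quote_with_fallback() {
  if pool_quote "$@"; then
    echo "quoteSource=pool_query"
  else
    reserve_quote "$@"
    echo "quoteSource=reserve_fallback"
  fi
}
quote_with_fallback
```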
**Bootstrap verifier:**
```bash
bash scripts/verify/check-mainnet-public-dodo-cw-bootstrap-pools.sh
```
This checks that the eleven recorded Mainnet DODO cW bootstrap pools (USD rails + non-USD Wave 1 + `cWUSDT/cWUSDC`) are still mapped by the integration, have non-zero reserves, and remain dry-run routable through the repeatable swap helper.
### 15. Mainnet DODO Wave 1 pool deploy helper
Repeatable helper for creating and seeding a first-tier Mainnet DODO PMM Wave 1 `cW* / USDC` pair.
**Usage:**
```bash
bash scripts/deployment/deploy-mainnet-public-dodo-wave1-pool.sh \
--pair=cweurc-usdc \
--initial-price=1151700000000000000 \
--base-amount=1250000 \
--quote-amount=1439625 \
--mint-base-amount=1300000
```
This helper creates the pool if needed, optionally mints the base `cW*` token to the deployer when the deployer still has `MINTER_ROLE`, approves the integration, and seeds the first liquidity tranche.
### 16. Mainnet cWUSDT / cWUSDC PMM (direct wrap pair)
**Compute matched seed (e.g. 50× reference depth, optional cap):**
```bash
bash scripts/deployment/compute-mainnet-cwusdt-cwusdc-seed-amounts.sh --multiplier=50 --cap-raw=<optional_max_raw>
```
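Assuming the matched-seed arithmetic is the obvious one (seed = reference depth × multiplier, clamped by the optional cap), a sketch with illustrative numbers:

```shell
# Hedged sketch of the matched-seed arithmetic assumed above.
# All values are illustrative raw 6-decimal units, not real reference depths.
ref_depth_raw=1000000      # hypothetical reference depth
multiplier=50
cap_raw=40000000           # optional --cap-raw
seed_raw=$(( ref_depth_raw * multiplier ))
(( cap_raw > 0 && seed_raw > cap_raw )) && seed_raw=$cap_raw
echo "seed_raw=$seed_raw"
```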
**Create (if missing) or top up 1:1 liquidity:**
```bash
bash scripts/deployment/deploy-mainnet-cwusdt-cwusdc-pool.sh \
--initial-price=1000000000000000000 \
--base-amount=<raw> --quote-amount=<raw> \
--dry-run
```
**Deterministic round-trip soak (default dry-run; no RNG):**
```bash
bash scripts/deployment/run-mainnet-cwusdt-cwusdc-soak-roundtrips.sh \
--amounts-raw=100000000,10000000000 \
--repeat-list=10 --dry-run
```
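The deterministic soak above presumably expands to a fixed nested loop over the amount list and repeat count, with no randomness; a sketch of that iteration shape (the real helper also does approvals and swaps):

```shell
# Hedged sketch of the deterministic (no-RNG) soak loop described above.
amounts_raw="100000000,10000000000"
repeat=2
IFS=',' read -r -a amounts <<< "$amounts_raw"
for amt in "${amounts[@]}"; do
  for ((i = 1; i <= repeat; i++)); do
    echo "round-trip ${i}/${repeat}: base->quote->base amount_raw=${amt} (dry-run)"
  done
done
```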
**Routing planner (USDT↔USDC paths including direct cW leg):** `scripts/verify/plan-mainnet-usdt-usdc-via-cw-paths.sh`
---
## Utility Modules

View File

@@ -15,7 +15,7 @@ SOURCE_PROJECT="/home/intlc/projects/smom-dbis-138"
source "$SOURCE_PROJECT/.env" 2>/dev/null || true
RPC_URL="${RPC_URL_138:-http://${RPC_ALLTRA_1:-${RPC_ALLTRA_1:-192.168.11.250}}:8545}"
RPC_URL="${RPC_URL_138:-http://${RPC_CORE_1:-192.168.11.211}:8545}"
WETH9_BRIDGE="${CCIPWETH9_BRIDGE_CHAIN138:-0x971cD9D156f193df8051E48043C476e53ECd4693}"
WETH10_BRIDGE="${CCIPWETH10_BRIDGE_CHAIN138:-0xe0E93247376aa097dB308B92e6Ba36bA015535D0}"
@@ -85,4 +85,3 @@ security_recommendations() {
check_admin_roles
check_pause_functionality
security_recommendations

View File

@@ -0,0 +1,154 @@
#!/usr/bin/env python3
"""
Mainnet cW/TRUU PMM — profitability *model* (not trading advice).

Computes break-even edges, max clip sizes vs reserves, and LP fee ceilings from
documented defaults (MAINNET_PMM_TRUU_CWUSD_PEG_AND_BOT_RUNBOOK.md, 04-bot-policy.md).

Usage:
  python3 scripts/analytics/mainnet-truu-pmm-profitability-model.py
  python3 scripts/analytics/mainnet-truu-pmm-profitability-model.py --gas-usd 3 --assume-daily-volume-usd 5000

All USD notionals for the cW leg treat 6-decimal cW as ~1 USD per 1e6 raw (numeraire wrapper).
TRUU leg is shown in raw + full tokens only; USD(TRUU) must be supplied via --truu-usd-price.
"""
from __future__ import annotations

import argparse
from dataclasses import dataclass


@dataclass(frozen=True)
class PoolState:
    name: str
    base_raw: int   # cW 6 decimals
    quote_raw: int  # TRUU 10 decimals


def fmt_usd_from_cw_raw(raw: int) -> float:
    return raw / 1_000_000.0


def max_clip_base_raw(base_reserve_raw: int, max_trade_frac: float) -> int:
    """Runbook-style cap: fraction of *base* reserve (binding leg is usually tiny USD base)."""
    return int(base_reserve_raw * max_trade_frac)


def break_even_edge_bps(*, fee_bps_sum: float, gas_usd: float, trade_notional_usd: float) -> float:
    """
    Minimum mid-price edge (in bps of notional) so that fee_bps_sum + gas/trade >= 0 on the margin.

        edge_bps >= fee_bps_sum + (gas_usd / trade_notional_usd) * 10000
    """
    if trade_notional_usd <= 0:
        return float("inf")
    return fee_bps_sum + (gas_usd / trade_notional_usd) * 10_000.0


def main() -> None:
    p = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter)
    p.add_argument("--gas-usd", type=float, default=2.0, help="All-in gas cost per tx (USD), rough")
    p.add_argument("--pmm-fee-bps", type=float, default=30.0, help="PMM pool fee (bps per leg)")
    p.add_argument(
        "--uni-fee-bps",
        type=float,
        default=5.0,
        help="Uniswap v3 pool fee tier (bps), if used (500 v3 tier = 5 bps)",
    )
    p.add_argument("--truu-usd-price", type=float, default=0.0, help="USD per 1 full TRUU (optional, for quote-leg USD)")
    p.add_argument("--assume-daily-volume-usd", type=float, default=0.0, help="Hypothetical daily volume through one pool (USD)")
    p.add_argument(
        "--pools-json",
        type=str,
        default="",
        help="Optional: path to file with last reserves; otherwise use baked-in snapshot",
    )
    args = p.parse_args()

    # Snapshot: update when reserves change (or pass --pools-json later).
    pools = [
        PoolState("cWUSDT/TRUU", 1_739_869, 431_892_240_211_824),
        PoolState("cWUSDC/TRUU", 320_478, 79_553_215_830_929),
    ]

    print("=== Mainnet TRUU PMM — maximum-profitability *inputs* (model) ===\n")
    print("1) DEPTH (binding leg is usually 6-dec cW ≈ USD numeraire)\n")
    total_cw_usd = 0.0
    for pool in pools:
        usd = fmt_usd_from_cw_raw(pool.base_raw)
        total_cw_usd += usd
        truu_full = pool.quote_raw / 1e10
        line = (
            f" {pool.name}: base_raw={pool.base_raw} (~${usd:.4f} cW) | "
            f"quote_raw={pool.quote_raw} ({truu_full:.4f} full TRUU)"
        )
        if args.truu_usd_price > 0:
            line += f" | quote≈${truu_full * args.truu_usd_price:.4f} @ {args.truu_usd_price} USD/TRUU"
        print(line)
    print(f" Sum cW legs (USD-ish): ~${total_cw_usd:.4f}\n")

    print("2) MAX ACTIVE CLIP (runbook ballpark: 0.25-0.5% of smaller reserve — here applied to *base* raw)\n")
    for frac, label in ((0.0025, "0.25%"), (0.005, "0.5%")):
        print(f" {label} of base reserve:")
        for pool in pools:
            clip = max_clip_base_raw(pool.base_raw, frac)
            print(f" {pool.name}: max_base_raw={clip} (~${fmt_usd_from_cw_raw(clip):.6f})")
        print()

    print("3) BREAK-EVEN *PRICE EDGE* (bps of notional) after fees + gas\n")
    print(f" Assumed gas_usd per tx = ${args.gas_usd:.2f}")
    print(" Fee model (linear bps, simplified): PMM round-trip ≈ 2× PMM fee; PMM+Uni round-trip ≈ PMM+Uni fee.\n")
    # Two-leg PMM round-trip (sell then buy back): 2 * pmm fee on notional (simplified linear bps model)
    pmm_round_trip_fee_bps = 2 * args.pmm_fee_bps
    # PMM + Uniswap round-trip: one PMM + one Uni (minimal 2-hop story)
    pmm_uni_fee_bps = args.pmm_fee_bps + args.uni_fee_bps
    for pool in pools:
        n_clip = fmt_usd_from_cw_raw(max_clip_base_raw(pool.base_raw, 0.005))
        n_full = fmt_usd_from_cw_raw(pool.base_raw)
        print(f" {pool.name} — runbook clip (0.5% of base): notional≈${n_clip:.6f}; entire base leg≈${n_full:.4f}")
        for label, n_usd in (("0.5% clip", n_clip), ("100% base leg", n_full)):
            if n_usd < 1e-12:
                continue
            be_pmm_2g = break_even_edge_bps(
                fee_bps_sum=pmm_round_trip_fee_bps, gas_usd=2 * args.gas_usd, trade_notional_usd=n_usd
            )
            be_x_2g = break_even_edge_bps(
                fee_bps_sum=pmm_uni_fee_bps, gas_usd=2 * args.gas_usd, trade_notional_usd=n_usd
            )
            print(f" {label} (${n_usd:.6f}): break-even edge ≥ {be_pmm_2g:.1f} bps (PMM 2×gas) | ≥ {be_x_2g:.1f} bps (PMM+Uni 2×gas)")

    print("\n Scaled notionals (if you add depth + capital — same fee/gas model):\n")
    print(f" {'N USD':>10} {'PMM 2×gas bps':>16} {'PMM+Uni 2×gas bps':>18}")
    for n_usd in (10, 50, 100, 500, 1000, 10_000):
        b1 = break_even_edge_bps(fee_bps_sum=pmm_round_trip_fee_bps, gas_usd=2 * args.gas_usd, trade_notional_usd=n_usd)
        b2 = break_even_edge_bps(fee_bps_sum=pmm_uni_fee_bps, gas_usd=2 * args.gas_usd, trade_notional_usd=n_usd)
        print(f" {n_usd:>10.0f} {b1:>16.2f} {b2:>18.2f}")
    print(" (Below ~$100 notional on L1, gas usually dominates the 60 bps round-trip fees; depth must rise to trade larger.)\n")

    print("4) LP FEE CEILING (30 bps taker fee — simplified: revenue ≈ 0.003 × volume)\n")
    f = args.pmm_fee_bps / 10_000.0
    if args.assume_daily_volume_usd > 0:
        rev = f * args.assume_daily_volume_usd
        print(f" If daily volume through *one* pool = ${args.assume_daily_volume_usd:,.2f} → LP fee ~${rev:.2f}/day (theory; split among LPs).")
    else:
        print(" Pass --assume-daily-volume-usd N to see $/day; with publicRoutingEnabled=false, N may be ~0.")
    print()

    print("5) WHAT IS *NECESSARY* FOR (close to) MAXIMUM PROFITABILITY — operational, not a guarantee\n")
    reqs = [
        "Edge: repeated |P_reference - P_PMM| large enough to clear fee_bps + gas/notional (see section 3).",
        "Size: inventory + ETH gas; clip within runbook fraction of smallest reserve (section 2).",
        "Fair pricing: keep PMM `i` aligned with Uniswap TRUU/USDT TWAP/median + caps on `i` moves.",
        "Flow: route discovery (routers/aggregators) or deliberate flow; else LP fees stay ~0.",
        "Execution: minAmountOut, nonce control, pause on oracle divergence / bridge flags (bot policy doc).",
        "Scale: deeper two-sided liquidity reduces slippage and raises max safe notional per clip.",
        "Legal/compliance: MM and public claims per internal policy (runbook).",
    ]
    for r in reqs:
        print(f"{r}")
    print("\nThis script does not fetch live gas or Uni prices; plug fresh numbers for decisions.\n")


if __name__ == "__main__":
    main()
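As a quick sanity check of the break-even formula the script implements (edge_bps ≥ fee_bps_sum + gas/notional × 10000), using the documented defaults (2 × 30 bps PMM round trip, two $2 transactions, $100 notional):

```shell
# Worked check of the break-even edge: 60 bps fees + ($4 gas / $100 notional) * 10000 bps.
awk 'BEGIN { fee_bps = 60.0; gas_usd = 4.0; notional_usd = 100.0
             printf "%.1f\n", fee_bps + (gas_usd / notional_usd) * 10000 }'
```

This lands at 460 bps, which matches the script's point that below ~$100 notional on L1 the gas term dominates the fee term.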

File diff suppressed because it is too large

View File

@@ -140,7 +140,7 @@ echo " - Firefly (6200) - Moderate resources"
echo ""
echo "🔒 Priority 3 - Keep on ml110 (infrastructure):"
echo " - Infrastructure services (100-105) - proxmox-mail-gateway, cloudflared, omada, gitea, nginx"
echo " - Infrastructure services (100-105) - proxmox-mail-gateway, cloudflared, gitea, nginx (Omada VMID 103 retired)"
echo " - Monitoring (130) - Keep on primary node"
echo " - These are core infrastructure and should remain on primary node"
echo ""

View File

@@ -2,7 +2,7 @@
# Audit Besu heap sizing (BESU_OPTS) vs container memory for RPC VMIDs.
#
# Usage:
# PROXMOX_HOST=${PROXMOX_HOST_ML110:-192.168.11.10} ./scripts/audit-proxmox-rpc-besu-heap.sh
# PROXMOX_HOST=${PROXMOX_HOST_R630_01:-192.168.11.11} ./scripts/audit-proxmox-rpc-besu-heap.sh
#
# Output highlights containers where Xmx appears larger than container memory.
@@ -15,10 +15,10 @@ source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
PROXMOX_SSH_USER="${PROXMOX_SSH_USER:-${PROXMOX_USER:-root}}"
[[ "$PROXMOX_SSH_USER" == *"@"* ]] && PROXMOX_SSH_USER="root"
PROXMOX_HOST="${PROXMOX_HOST:-${PROXMOX_HOST_ML110:-192.168.11.10}}"
PROXMOX_HOST="${PROXMOX_HOST:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
UNIT="/etc/systemd/system/besu-rpc.service"
VMIDS=(2400 2401 2402 2500 2501 2502 2503 2504 2505 2506 2507 2508)
VMIDS=(2101 2102 2103 2201 2301 2303 2304 2305 2306 2307 2308 2400 2401 2402 2403 2420 2430 2440 2460 2470 2480)
ssh_pve() {
ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 "${PROXMOX_SSH_USER}@${PROXMOX_HOST}" "$@"
@@ -74,4 +74,3 @@ NOTE:
- A safe rule of thumb: keep -Xmx <= ~60-70% of container memory to avoid swap/IO thrash during RocksDB work.
- If you see HIGH risk, reduce BESU_OPTS and restart besu-rpc during a maintenance window.
NOTE

View File

@@ -4,42 +4,67 @@
#
# Usage:
# ./scripts/audit-proxmox-rpc-storage.sh
# ./scripts/audit-proxmox-rpc-storage.sh --vmid 2301
# PROXMOX_HOST=192.168.11.10 ./scripts/audit-proxmox-rpc-storage.sh # single-host mode (legacy)
set -euo pipefail
# Load IP configuration
# Load shared project environment and VMID placement helper.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
# SSH user for shell (PROXMOX_USER in .env may be root@pam for API)
PROXMOX_SSH_USER="${PROXMOX_SSH_USER:-${PROXMOX_USER:-root}}"
[[ "$PROXMOX_SSH_USER" == *"@"* ]] && PROXMOX_SSH_USER="root"
R630_01="${PROXMOX_HOST_R630_01:-192.168.11.11}"
R630_02="${PROXMOX_HOST_R630_02:-192.168.11.12}"
ML110="${PROXMOX_HOST_ML110:-192.168.11.10}"
# VMID:host (same mapping as check-rpc-vms-health.sh) — only VMIDs that exist on these hosts
RPC_NODES=(
"2101:$R630_01"
"2201:$R630_02"
"2301:$ML110"
"2303:$R630_02"
"2304:$ML110"
"2305:$ML110"
"2306:$ML110"
"2307:$ML110"
"2308:$ML110"
"2400:$ML110"
"2401:$R630_02"
"2402:$ML110"
"2403:$ML110"
)
RPC_VMIDS=(2101 2201 2301 2303 2304 2305 2306 2307 2308 2400 2401 2402 2403)
RPC_NODES=()
for vmid in "${RPC_VMIDS[@]}"; do
RPC_NODES+=("${vmid}:$(get_host_for_vmid "$vmid")")
done
# If PROXMOX_HOST is set, run single-host legacy mode (one host only)
PROXMOX_HOST="${PROXMOX_HOST:-}"
TARGET_VMIDS=()
usage() {
  cat <<'EOF'
Usage: ./scripts/audit-proxmox-rpc-storage.sh [--vmid <N>]

Options:
  --vmid <N>   Limit the VMID section to one VMID; repeatable
EOF
}

while [[ $# -gt 0 ]]; do
  case "$1" in
    --vmid)
      [[ $# -ge 2 ]] || { usage >&2; exit 2; }
      TARGET_VMIDS+=("$2")
      shift 2
      ;;
    -h|--help)
      usage
      exit 0
      ;;
    *)
      echo "Unknown argument: $1" >&2
      usage >&2
      exit 2
      ;;
  esac
done

selected_vmid() {
  local vmid="$1"
  [[ ${#TARGET_VMIDS[@]} -eq 0 ]] && return 0
  local wanted
  for wanted in "${TARGET_VMIDS[@]}"; do
    [[ "$vmid" == "$wanted" ]] && return 0
  done
  return 1
}
ssh_pve() {
local host="$1"
@@ -67,6 +92,7 @@ if [[ -n "${PROXMOX_HOST:-}" ]]; then
# Single host: only VMIDs that belong on this node (we don't have full map in legacy mode, so list all and skip missing)
for entry in "${RPC_NODES[@]}"; do
IFS=: read -r vmid host <<< "$entry"
selected_vmid "$vmid" || continue
[[ "$host" != "$PROXMOX_HOST" ]] && continue
echo "--- VMID ${vmid} ---"
ssh_pve "$host" "pct status ${vmid} 2>&1; pct config ${vmid} 2>&1 | egrep -i 'hostname:|rootfs:|memory:|swap:|cores:|net0:'" | sed 's/^/ /'
@@ -90,7 +116,7 @@ else
done
echo "=== storage.cfg (from first host) ==="
first_host="${R630_01}"
first_host="${PROXMOX_HOST_R630_01}"
ssh_pve "$first_host" "awk '
/^dir: /{s=\$2; t=\"dir\"; nodes=\"\"}
/^lvmthin: /{s=\$2; t=\"lvmthin\"; nodes=\"\"}
@@ -102,6 +128,7 @@ else
echo "=== RPC VMID -> rootfs storage mapping (per host) ==="
for entry in "${RPC_NODES[@]}"; do
IFS=: read -r vmid host <<< "$entry"
selected_vmid "$vmid" || continue
echo "--- VMID ${vmid} (host $host) ---"
ssh_pve "$host" "pct status ${vmid} 2>&1; pct config ${vmid} 2>&1 | egrep -i 'hostname:|rootfs:|memory:|swap:|cores:|net0:'" | sed 's/^/ /'
echo ""
@@ -115,4 +142,3 @@ NOTE:
- Fix is to update /etc/pve/storage.cfg nodes=... for that storage to include the node hosting the VMID,
or migrate the VMID to an allowed node/storage.
NOTE

View File

@@ -23,14 +23,14 @@ STATIC_FILE="${PROJECT_ROOT}/config/besu-node-lists/static-nodes.json"
PERM_FILE="${PROJECT_ROOT}/config/besu-node-lists/permissions-nodes.toml"
SSH_OPTS="-o ConnectTimeout=8 -o StrictHostKeyChecking=accept-new"
# All 32 Besu VMIDs in stable order (validators, sentries, RPCs)
BESU_VMIDS=(1000 1001 1002 1003 1004 1500 1501 1502 1503 1504 1505 1506 1507 1508 2101 2102 2201 2301 2303 2304 2305 2306 2400 2401 2402 2403 2500 2501 2502 2503 2504 2505)
# All Besu VMIDs in stable order (validators, sentries, RPCs)
BESU_VMIDS=(1000 1001 1002 1003 1004 1500 1501 1502 1503 1504 1505 1506 1507 1508 2101 2102 2103 2201 2301 2303 2304 2305 2306 2400 2401 2402 2403 2500 2501 2502 2503 2504 2505)
# VMID -> Proxmox host
declare -A HOST_BY_VMID
for v in 1000 1001 1002 1500 1501 1502 2101 2500 2501 2502 2503 2504 2505; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-192.168.11.11}"; done
for v in 2201 2303 2401; do HOST_BY_VMID[$v]="${PROXMOX_R630_02:-192.168.11.12}"; done
for v in 1003 1004 1503 1504 1505 1506 1507 1508 2102 2301 2304 2305 2306 2400 2402 2403; do HOST_BY_VMID[$v]="${PROXMOX_ML110:-192.168.11.10}"; done
for v in 1000 1001 1002 1500 1501 1502 2101 2103 2500 2501 2502 2503 2504 2505; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-192.168.11.11}"; done
for v in 2201 2303 2305 2306 2307 2308 2401; do HOST_BY_VMID[$v]="${PROXMOX_R630_02:-192.168.11.12}"; done
for v in 1003 1004 1503 1504 1505 1506 1507 1508 2102 2301 2304 2400 2402 2403; do HOST_BY_VMID[$v]="${PROXMOX_R630_03:-192.168.11.13}"; done
# VMID -> canonical IP (all 32)
declare -A IP_BY_VMID
@@ -50,6 +50,7 @@ IP_BY_VMID[1507]=192.168.11.244
IP_BY_VMID[1508]=192.168.11.245
IP_BY_VMID[2101]=192.168.11.211
IP_BY_VMID[2102]=192.168.11.212
IP_BY_VMID[2103]=192.168.11.217
IP_BY_VMID[2201]=192.168.11.221
IP_BY_VMID[2301]=192.168.11.232
IP_BY_VMID[2303]=192.168.11.233
@@ -132,7 +133,7 @@ if $MISSING_ONLY; then
[[ ${#VMIDS_TO_TRY[@]} -eq 0 ]] && echo "All 32 IPs already present. Nothing to collect." && exit 0
else
VMIDS_TO_TRY=( "${BESU_VMIDS[@]}" )
echo "Collecting enodes from 32 Besu nodes..."
echo "Collecting enodes from ${#BESU_VMIDS[@]} Besu nodes..."
fi
declare -A COLLECTED_BY_VMID
@@ -188,10 +189,10 @@ for vmid in "${BESU_VMIDS[@]}"; do
fi
done
echo "Merged total: ${#FINAL_ENODES[@]} unique enodes (target 32)."
echo "Merged total: ${#FINAL_ENODES[@]} unique enodes (target ${#BESU_VMIDS[@]})."
if [[ ${#FINAL_ENODES[@]} -lt 32 && ${#MISSING_VMIDS[@]} -gt 0 ]]; then
echo "WARNING: ${#FINAL_ENODES[@]}/32. Missing VMIDs: ${MISSING_VMIDS[*]}. Re-run when those nodes are up or add enodes manually." >&2
if [[ ${#FINAL_ENODES[@]} -lt ${#BESU_VMIDS[@]} && ${#MISSING_VMIDS[@]} -gt 0 ]]; then
echo "WARNING: ${#FINAL_ENODES[@]}/${#BESU_VMIDS[@]}. Missing VMIDs: ${MISSING_VMIDS[*]}. Re-run when those nodes are up or add enodes manually." >&2
fi
if [[ ${#FINAL_ENODES[@]} -eq 0 ]]; then

View File

@@ -20,8 +20,8 @@ done
# Same host/VMID as deploy-besu-node-lists-to-all.sh
declare -A HOST_BY_VMID
for v in 1000 1001 1002 1500 1501 1502 2101 2500 2501 2502 2503 2504 2505; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"; done
for v in 2201 2303 2401; do HOST_BY_VMID[$v]="${PROXMOX_R630_02:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"; done
for v in 1003 1004 1503 1504 1505 1506 1507 1508 2102 2301 2304 2305 2306 2307 2308 2400 2402 2403; do HOST_BY_VMID[$v]="${PROXMOX_ML110:-${PROXMOX_HOST_ML110:-192.168.11.10}}"; done
for v in 2201 2303 2305 2306 2307 2308 2401; do HOST_BY_VMID[$v]="${PROXMOX_R630_02:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"; done
for v in 1003 1004 1503 1504 1505 1506 1507 1508 2102 2301 2304 2400 2402 2403; do HOST_BY_VMID[$v]="${PROXMOX_R630_03:-${PROXMOX_HOST_R630_03:-192.168.11.13}}"; done
BESU_VMIDS=(1000 1001 1002 1003 1004 1500 1501 1502 1503 1504 1505 1506 1507 1508 2101 2102 2201 2301 2303 2304 2305 2306 2307 2308 2400 2401 2402 2403 2500 2501 2502 2503 2504 2505)
STATIC="${PROJECT_ROOT}/config/besu-node-lists/static-nodes.json"

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env bash
# Install Besu permanently on nodes that don't have /opt/besu/bin/besu (1505-1508, 2501-2505).
# Install Besu permanently on nodes that don't have /opt/besu/bin/besu (1505-1508, 2420-2480).
# Uses install-besu-in-ct-standalone.sh inside each CT; deploys config, genesis, node lists; enables and starts service.
#
# Usage: bash scripts/besu/install-besu-permanent-on-missing-nodes.sh [--dry-run]
@@ -19,23 +19,23 @@ GENESIS_SRC="${PROJECT_ROOT}/smom-dbis-138-proxmox/config/genesis.json"
STATIC_SRC="${PROJECT_ROOT}/config/besu-node-lists/static-nodes.json"
PERMS_SRC="${PROJECT_ROOT}/config/besu-node-lists/permissions-nodes.toml"
# VMIDs that may lack Besu (sentries 1505-1508 on ml110; RPC 2500-2505 on r630-01)
# VMIDs that may lack Besu (sentries 1505-1508 on ml110; edge RPC 2420-2480 on r630-01)
SENTRY_VMIDS=(1505 1506 1507 1508)
RPC_VMIDS=(2500 2501 2502 2503 2504 2505)
RPC_VMIDS=(2420 2430 2440 2460 2470 2480)
declare -A HOST_BY_VMID
for v in 1505 1506 1507 1508; do HOST_BY_VMID[$v]="${PROXMOX_ML110:-192.168.11.10}"; done
for v in 2500 2501 2502 2503 2504 2505; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-192.168.11.11}"; done
for v in 2420 2430 2440 2460 2470 2480; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-192.168.11.11}"; done
declare -A IP_BY_VMID
IP_BY_VMID[1505]=192.168.11.213
IP_BY_VMID[1506]=192.168.11.214
IP_BY_VMID[1507]=192.168.11.244
IP_BY_VMID[1508]=192.168.11.245
IP_BY_VMID[2500]=192.168.11.172
IP_BY_VMID[2501]=192.168.11.173
IP_BY_VMID[2502]=192.168.11.174
IP_BY_VMID[2503]=192.168.11.246
IP_BY_VMID[2504]=192.168.11.247
IP_BY_VMID[2505]=192.168.11.248
IP_BY_VMID[2420]=192.168.11.172
IP_BY_VMID[2430]=192.168.11.173
IP_BY_VMID[2440]=192.168.11.174
IP_BY_VMID[2460]=192.168.11.246
IP_BY_VMID[2470]=192.168.11.247
IP_BY_VMID[2480]=192.168.11.248
[[ ! -f "$GENESIS_SRC" ]] && { echo "ERROR: $GENESIS_SRC not found"; exit 1; }
[[ ! -f "$STATIC_SRC" ]] && { echo "ERROR: $STATIC_SRC not found"; exit 1; }
@@ -71,7 +71,7 @@ install_rpc() {
ssh $SSH_OPTS "root@$host" "pct exec $vmid -- rm -rf /opt/besu 2>/dev/null; true"
scp -q $SSH_OPTS "${PROJECT_ROOT}/scripts/install-besu-in-ct-standalone.sh" "root@${host}:/tmp/"
ssh $SSH_OPTS "root@$host" "pct push $vmid /tmp/install-besu-in-ct-standalone.sh /tmp/install-besu-in-ct-standalone.sh && pct exec $vmid -- env NODE_TYPE=rpc BESU_VERSION=$BESU_VERSION bash /tmp/install-besu-in-ct-standalone.sh" || { echo " install failed"; return 1; }
# 2500-style: besu.service + config.toml (not besu-rpc.service). Use inline template so we don't depend on 2500 having Besu.
# Edge-RPC style: besu.service + config.toml (not besu-rpc.service).
cat << 'BESUSVC' > /tmp/besu.service
[Unit]
Description=Hyperledger Besu
@@ -134,7 +134,7 @@ EOF
ssh $SSH_OPTS "root@$host" "pct exec $vmid -- systemctl disable besu-rpc.service 2>/dev/null; pct exec $vmid -- systemctl daemon-reload; pct exec $vmid -- systemctl enable besu.service && pct exec $vmid -- systemctl start besu.service" && echo " besu.service enabled and started" || echo " start failed"
}
echo "Installing Besu permanently on nodes missing /opt/besu/bin/besu (1505-1508, 2500-2505)"
echo "Installing Besu permanently on nodes missing /opt/besu/bin/besu (1505-1508, 2420-2480)"
echo ""
for vmid in "${SENTRY_VMIDS[@]}"; do
@@ -160,4 +160,4 @@ done
echo ""
echo "Done. Verify: bash scripts/besu/restart-besu-reload-node-lists.sh (optional); then check block production on RPCs."
rm -f /tmp/config-sentry.toml /tmp/besu.service /tmp/config.toml 2>/dev/null || true
for v in 2500 2501 2502 2503 2504 2505; do rm -f /tmp/config-rpc-${v}.toml 2>/dev/null; done
for v in 2420 2430 2440 2460 2470 2480; do rm -f /tmp/config-rpc-${v}.toml 2>/dev/null; done

View File

@@ -18,11 +18,11 @@ DRY_RUN=false
# Same VMID -> host as deploy-besu-node-lists-to-all.sh
declare -A HOST_BY_VMID
for v in 1000 1001 1002 1500 1501 1502 2101; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"; done
for v in 2201 2303 2401; do HOST_BY_VMID[$v]="${PROXMOX_R630_02:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"; done
for v in 1003 1004 1503 1504 1505 1506 1507 1508 2102 2301 2304 2305 2306 2307 2308 2400 2402 2403; do HOST_BY_VMID[$v]="${PROXMOX_ML110:-${PROXMOX_HOST_ML110:-192.168.11.10}}"; done
for v in 1000 1001 1002 1500 1501 1502 2101 2103; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"; done
for v in 2201 2303 2305 2306 2307 2308 2401; do HOST_BY_VMID[$v]="${PROXMOX_R630_02:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"; done
for v in 1003 1004 1503 1504 1505 1506 1507 1508 1509 1510 2102 2301 2304 2400 2402 2403; do HOST_BY_VMID[$v]="${PROXMOX_R630_03:-${PROXMOX_HOST_R630_03:-192.168.11.13}}"; done
BESU_VMIDS=(1000 1001 1002 1003 1004 1500 1501 1502 1503 1504 1505 1506 1507 1508 2101 2102 2201 2301 2303 2304 2305 2306 2307 2308 2400 2401 2402 2403)
BESU_VMIDS=(1000 1001 1002 1003 1004 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 2101 2102 2103 2201 2301 2303 2304 2305 2306 2307 2308 2400 2401 2402 2403)
echo "Restarting Besu on all nodes (to reload static-nodes.json and permissions-nodes.toml)"
if $DRY_RUN; then echo " [dry-run]"; fi

View File

@@ -1,8 +1,8 @@
#!/usr/bin/env node
/**
* Query balances from Besu RPC nodes (VMID 2500-2502 by default)
* Query balances from Besu RPC nodes (current production fleet by default)
*
* Note: Only RPC nodes (2500-2502) expose RPC endpoints.
* Note: Only RPC nodes expose RPC endpoints.
* Validators (1000-1004) and sentries (1500-1503) don't have RPC enabled.
*
* Usage:
@@ -11,8 +11,8 @@
* WETH10_ADDRESS="0x..." \
* node scripts/besu_balances_106_117.js
*
* Or use template (defaults to RPC nodes 115-117):
* RPC_TEMPLATE="http://192.168.11.{vmid}:8545" \
* Or use template:
* RPC_TEMPLATE="http://192.168.11.{vmid}:8545" VMID_START=211 VMID_END=212 \
* node scripts/besu_balances_106_117.js
*/
@@ -20,9 +20,9 @@ import { ethers } from 'ethers';
// Configuration
const WALLET_ADDRESS = '0xa55A4B57A91561e9df5a883D4883Bd4b1a7C4882';
// RPC nodes are 2500-2502; validators (1000-1004) and sentries (1500-1503) don't expose RPC
const VMID_START = process.env.VMID_START ? parseInt(process.env.VMID_START) : 2500;
const VMID_END = process.env.VMID_END ? parseInt(process.env.VMID_END) : 2502;
// Validators and sentries do not expose RPC. Default to the current core/public trio.
const VMID_START = process.env.VMID_START ? parseInt(process.env.VMID_START) : 2101;
const VMID_END = process.env.VMID_END ? parseInt(process.env.VMID_END) : 2103;
const CONCURRENCY_LIMIT = 4;
const REQUEST_TIMEOUT = 15000; // 15 seconds
@@ -39,34 +39,52 @@ const WETH10_ADDRESS = process.env.WETH10_ADDRESS || null;
// VMID to IP mapping for better VMID detection
const VMID_TO_IP = {
'192.168.11.13': 106,
'192.168.11.14': 107,
'192.168.11.15': 108,
'192.168.11.16': 109,
'192.168.11.18': 110,
'192.168.11.19': 111,
'192.168.11.20': 112,
'192.168.11.21': 113,
'192.168.11.22': 114,
'192.168.11.23': 115,
'192.168.11.24': 116,
'192.168.11.25': 117,
'192.168.11.211': 2101,
'192.168.11.212': 2102,
'192.168.11.217': 2103,
'192.168.11.221': 2201,
'192.168.11.232': 2301,
'192.168.11.233': 2303,
'192.168.11.234': 2304,
'192.168.11.235': 2305,
'192.168.11.236': 2306,
'192.168.11.237': 2307,
'192.168.11.238': 2308,
'192.168.11.240': 2400,
'192.168.11.241': 2401,
'192.168.11.242': 2402,
'192.168.11.243': 2403,
'192.168.11.172': 2420,
'192.168.11.173': 2430,
'192.168.11.174': 2440,
'192.168.11.246': 2460,
'192.168.11.247': 2470,
'192.168.11.248': 2480,
};
// Reverse IP to VMID mapping (VMID -> IP)
const IP_TO_VMID = {
106: '192.168.11.13',
107: '192.168.11.14',
108: '192.168.11.15',
109: '192.168.11.16',
110: '192.168.11.18',
111: '192.168.11.19',
112: '192.168.11.20',
113: '192.168.11.21',
114: '192.168.11.22',
115: '192.168.11.23',
116: '192.168.11.24',
117: '192.168.11.25',
2101: '192.168.11.211',
2102: '192.168.11.212',
2103: '192.168.11.217',
2201: '192.168.11.221',
2301: '192.168.11.232',
2303: '192.168.11.233',
2304: '192.168.11.234',
2305: '192.168.11.235',
2306: '192.168.11.236',
2307: '192.168.11.237',
2308: '192.168.11.238',
2400: '192.168.11.240',
2401: '192.168.11.241',
2402: '192.168.11.242',
2403: '192.168.11.243',
2420: '192.168.11.172',
2430: '192.168.11.173',
2440: '192.168.11.174',
2460: '192.168.11.246',
2470: '192.168.11.247',
2480: '192.168.11.248',
};
// RPC endpoint configuration
@@ -304,4 +322,3 @@ main().catch(err => {
console.error('Fatal error:', err);
process.exit(1);
});
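For a quick manual cross-check of what the Node script reports per endpoint, a hex wei value (as returned by `eth_getBalance`) can be converted to ether from the shell; the sample values below are illustrative, not live balances:

```shell
# Convert a hex wei balance (an eth_getBalance result) to a decimal ether
# string, mirroring the formatEther step the script performs with ethers.
hex_wei_to_eth() {
  python3 -c 'import sys
from decimal import Decimal
wei = int(sys.argv[1], 16)
print(Decimal(wei) / Decimal(10**18))' "$1"
}

hex_wei_to_eth 0xde0b6b3a7640000   # 1 ETH in wei -> 1
hex_wei_to_eth 0x6f05b59d3b20000   # 0.5 ETH in wei -> 0.5
```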

View File

@@ -0,0 +1,339 @@
#!/usr/bin/env bash
#
# Send any configured canonical Chain 138 c* token through CWMultiTokenBridgeL1
# to a destination cW* bridge lane.
#
# Dry-run by default. Pass --execute to submit approvals and the bridge send.
#
# Examples:
# ./scripts/bridge/bridge-cstar-to-cw.sh --asset cUSDT --chain MAINNET --amount 1 --recipient 0xabc...
# ./scripts/bridge/bridge-cstar-to-cw.sh --asset cEURC --chain POLYGON --amount 250.5 --approve --execute
# ./scripts/bridge/bridge-cstar-to-cw.sh --asset cXAUC --chain AVALANCHE --raw-amount 1000000 --approve --execute
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="${PROJECT_ROOT}/smom-dbis-138"
CONFIG_FILE="${PROJECT_ROOT}/config/token-mapping-multichain.json"
CHAIN=""
ASSET=""
AMOUNT=""
RAW_AMOUNT=""
RECIPIENT=""
AUTO_APPROVE=false
EXECUTE=false
usage() {
cat <<'EOF'
Usage:
./scripts/bridge/bridge-cstar-to-cw.sh --asset <c*> --chain <CHAIN> [--amount <decimal> | --raw-amount <int>] [options]
Required:
--asset <c*> cUSDT | cUSDC | cAUSDT | cUSDW | cEURC | cEURT | cGBPC | cGBPT | cAUDC | cJPYC | cCHFC | cCADC | cXAUC | cXAUT
--chain <CHAIN> MAINNET | CRONOS | BSC | POLYGON | GNOSIS | AVALANCHE | BASE | ARBITRUM | OPTIMISM | CELO
Amount:
--amount <decimal> Human-readable amount; converted using token decimals()
--raw-amount <int> Raw integer amount in token base units
Options:
--recipient <address> Destination recipient; defaults to the sender derived from PRIVATE_KEY
--approve Auto-approve source token and ERC-20 fee token if needed
--execute Broadcast approvals and lockAndSend; default is dry-run
--help Show this message
EOF
}
while [[ $# -gt 0 ]]; do
case "$1" in
--chain) CHAIN="${2:-}"; shift 2 ;;
--asset) ASSET="${2:-}"; shift 2 ;;
--amount) AMOUNT="${2:-}"; shift 2 ;;
--raw-amount) RAW_AMOUNT="${2:-}"; shift 2 ;;
--recipient) RECIPIENT="${2:-}"; shift 2 ;;
--approve) AUTO_APPROVE=true; shift ;;
--execute) EXECUTE=true; shift ;;
--help|-h) usage; exit 0 ;;
*)
echo "Unknown argument: $1" >&2
usage >&2
exit 2
;;
esac
done
[[ -n "$CHAIN" && -n "$ASSET" ]] || { usage >&2; exit 2; }
if [[ -z "$AMOUNT" && -z "$RAW_AMOUNT" ]]; then
echo "Provide --amount or --raw-amount" >&2
exit 2
fi
if [[ -n "$AMOUNT" && -n "$RAW_AMOUNT" ]]; then
echo "Use only one of --amount or --raw-amount" >&2
exit 2
fi
if [[ -f "${SMOM_ROOT}/scripts/lib/deployment/dotenv.sh" ]]; then
# shellcheck disable=SC1090
source "${SMOM_ROOT}/scripts/lib/deployment/dotenv.sh"
load_deployment_env --repo-root "$SMOM_ROOT"
else
set -a
# shellcheck disable=SC1090
source "${SMOM_ROOT}/.env"
set +a
fi
command -v cast >/dev/null 2>&1 || { echo "cast (foundry) required" >&2; exit 1; }
command -v node >/dev/null 2>&1 || { echo "node required" >&2; exit 1; }
command -v python3 >/dev/null 2>&1 || { echo "python3 required" >&2; exit 1; }
[[ -n "${PRIVATE_KEY:-}" ]] || { echo "PRIVATE_KEY required in ${SMOM_ROOT}/.env" >&2; exit 1; }
[[ -n "${RPC_URL_138:-}" ]] || { echo "RPC_URL_138 required in ${SMOM_ROOT}/.env" >&2; exit 1; }
declare -A CHAIN_NAME_TO_ID=(
[MAINNET]=1
[CRONOS]=25
[BSC]=56
[POLYGON]=137
[GNOSIS]=100
[AVALANCHE]=43114
[BASE]=8453
[ARBITRUM]=42161
[OPTIMISM]=10
[CELO]=42220
)
declare -A CHAIN_NAME_TO_SELECTOR_VAR=(
[MAINNET]=ETH_MAINNET_SELECTOR
[CRONOS]=CRONOS_SELECTOR
[BSC]=BSC_SELECTOR
[POLYGON]=POLYGON_SELECTOR
[GNOSIS]=GNOSIS_SELECTOR
[AVALANCHE]=AVALANCHE_SELECTOR
[BASE]=BASE_SELECTOR
[ARBITRUM]=ARBITRUM_SELECTOR
[OPTIMISM]=OPTIMISM_SELECTOR
[CELO]=CELO_SELECTOR
)
declare -A ASSET_TO_MAPPING_KEY=(
[cUSDT]=Compliant_USDT_cW
[cUSDC]=Compliant_USDC_cW
[cAUSDT]=Compliant_AUSDT_cW
[cUSDW]=Compliant_USDW_cW
[cEURC]=Compliant_EURC_cW
[cEURT]=Compliant_EURT_cW
[cGBPC]=Compliant_GBPC_cW
[cGBPT]=Compliant_GBPT_cW
[cAUDC]=Compliant_AUDC_cW
[cJPYC]=Compliant_JPYC_cW
[cCHFC]=Compliant_CHFC_cW
[cCADC]=Compliant_CADC_cW
[cXAUC]=Compliant_XAUC_cW
[cXAUT]=Compliant_XAUT_cW
)
chain_id="${CHAIN_NAME_TO_ID[$CHAIN]:-}"
selector_var="${CHAIN_NAME_TO_SELECTOR_VAR[$CHAIN]:-}"
mapping_key="${ASSET_TO_MAPPING_KEY[$ASSET]:-}"
[[ -n "$chain_id" && -n "$selector_var" && -n "$mapping_key" ]] || {
echo "Unsupported chain or asset: chain=$CHAIN asset=$ASSET" >&2
exit 2
}
selector="${!selector_var:-}"
[[ -n "$selector" ]] || { echo "Missing selector env for $CHAIN ($selector_var)" >&2; exit 1; }
CHAIN138_L1_BRIDGE="${CW_L1_BRIDGE_CHAIN138:-${CHAIN138_L1_BRIDGE:-}}"
[[ -n "$CHAIN138_L1_BRIDGE" ]] || { echo "Set CW_L1_BRIDGE_CHAIN138 (or CHAIN138_L1_BRIDGE) in .env" >&2; exit 1; }
SENDER="$(cast wallet address "$PRIVATE_KEY" 2>/dev/null)" || {
echo "Failed to derive sender from PRIVATE_KEY" >&2
exit 1
}
if [[ -z "$RECIPIENT" ]]; then
RECIPIENT="$SENDER"
fi
map_info="$(
node - <<'NODE' "$CONFIG_FILE" "$chain_id" "$mapping_key"
const fs = require('fs');
const [configFile, chainIdRaw, mappingKey] = process.argv.slice(2);
const chainId = Number(chainIdRaw);
const json = JSON.parse(fs.readFileSync(configFile, 'utf8'));
const pair = (json.pairs || []).find((entry) => entry.fromChainId === 138 && entry.toChainId === chainId);
if (!pair) process.exit(10);
const token = (pair.tokens || []).find((entry) => entry.key === mappingKey);
if (!token) process.exit(11);
process.stdout.write([token.addressFrom || '', token.addressTo || '', token.name || ''].join('\n'));
NODE
)" || {
echo "Failed to resolve $ASSET -> $CHAIN from $CONFIG_FILE" >&2
exit 1
}
canonical_address="$(sed -n '1p' <<<"$map_info")"
mirrored_address="$(sed -n '2p' <<<"$map_info")"
lane_name="$(sed -n '3p' <<<"$map_info")"
[[ -n "$canonical_address" && "$canonical_address" != "0x0000000000000000000000000000000000000000" ]] || {
echo "Canonical address missing for $ASSET -> $CHAIN in $CONFIG_FILE" >&2
exit 1
}
[[ -n "$mirrored_address" && "$mirrored_address" != "0x0000000000000000000000000000000000000000" ]] || {
echo "Mirrored cW address for $ASSET -> $CHAIN is missing in $CONFIG_FILE" >&2
exit 1
}
DECIMALS="$(cast call "$canonical_address" "decimals()(uint8)" --rpc-url "$RPC_URL_138" 2>/dev/null | tr -d '[:space:]')" || {
echo "Failed to read decimals() from $canonical_address" >&2
exit 1
}
[[ "$DECIMALS" =~ ^[0-9]+$ ]] || { echo "Unexpected decimals() result: $DECIMALS" >&2; exit 1; }
if [[ -z "$RAW_AMOUNT" ]]; then
RAW_AMOUNT="$(
python3 - <<'PY' "$AMOUNT" "$DECIMALS"
from decimal import Decimal, ROUND_DOWN, getcontext
import sys
getcontext().prec = 80
amount = Decimal(sys.argv[1])
decimals = int(sys.argv[2])
scale = Decimal(10) ** decimals
raw = (amount * scale).quantize(Decimal('1'), rounding=ROUND_DOWN)
print(int(raw))
PY
)"
fi
[[ "$RAW_AMOUNT" =~ ^[0-9]+$ && "$RAW_AMOUNT" != "0" ]] || {
echo "Invalid raw amount: $RAW_AMOUNT" >&2
exit 1
}
human_amount="$(
python3 - <<'PY' "$RAW_AMOUNT" "$DECIMALS"
from decimal import Decimal, getcontext
import sys
getcontext().prec = 80
raw = Decimal(sys.argv[1])
decimals = int(sys.argv[2])
scale = Decimal(10) ** decimals
value = raw / scale
text = format(value.normalize(), 'f')
print(text.rstrip('0').rstrip('.') if '.' in text else text)
PY
)"
clean_scalar() {
local value="$1"
value="${value%%[*}"
value="${value//$'\n'/}"
value="${value//$'\r'/}"
value="${value//[[:space:]]/}"
printf '%s' "$value"
}
fee_token="$(clean_scalar "$(cast call "$CHAIN138_L1_BRIDGE" "feeToken()(address)" --rpc-url "$RPC_URL_138" 2>/dev/null || true)")"
fee_wei="$(clean_scalar "$(cast call "$CHAIN138_L1_BRIDGE" "calculateFee(address,uint64,address,uint256)(uint256)" "$canonical_address" "$selector" "$RECIPIENT" "$RAW_AMOUNT" --rpc-url "$RPC_URL_138" 2>/dev/null || true)")"
source_allowance="$(clean_scalar "$(cast call "$canonical_address" "allowance(address,address)(uint256)" "$SENDER" "$CHAIN138_L1_BRIDGE" --rpc-url "$RPC_URL_138" 2>/dev/null || true)")"
[[ -n "$source_allowance" ]] || source_allowance="0"
fee_allowance="0"
if [[ -n "$fee_token" && "${fee_token,,}" != "0x0000000000000000000000000000000000000000" && -n "$fee_wei" ]]; then
fee_allowance="$(clean_scalar "$(cast call "$fee_token" "allowance(address,address)(uint256)" "$SENDER" "$CHAIN138_L1_BRIDGE" --rpc-url "$RPC_URL_138" 2>/dev/null || true)")"
[[ -n "$fee_allowance" ]] || fee_allowance="0"
fi
dest_config="$(cast call "$CHAIN138_L1_BRIDGE" "destinations(address,uint64)((address,bool))" "$canonical_address" "$selector" --rpc-url "$RPC_URL_138" 2>/dev/null || true)"
token_supported="$(cast call "$CHAIN138_L1_BRIDGE" "supportedCanonicalToken(address)(bool)" "$canonical_address" --rpc-url "$RPC_URL_138" 2>/dev/null || true)"
send_tx() {
cast send "$CHAIN138_L1_BRIDGE" "lockAndSend(address,uint64,address,uint256)" "$canonical_address" "$selector" "$RECIPIENT" "$RAW_AMOUNT" \
--rpc-url "$RPC_URL_138" --private-key "$PRIVATE_KEY" --legacy "$@"
}
approve_tx() {
local token="$1"
local spender="$2"
local amount="$3"
cast send "$token" "approve(address,uint256)" "$spender" "$amount" \
--rpc-url "$RPC_URL_138" --private-key "$PRIVATE_KEY" --legacy
}
echo "=== Generic Chain 138 c* -> cW* send ==="
echo "Chain: $CHAIN (chainId $chain_id)"
echo "Lane: $lane_name"
echo "Sender: $SENDER"
echo "Recipient: $RECIPIENT"
echo "Canonical token: $canonical_address"
echo "Mirrored token: $mirrored_address"
echo "L1 bridge: $CHAIN138_L1_BRIDGE"
echo "Chain selector: $selector"
echo "Amount (human): $human_amount"
echo "Amount (raw): $RAW_AMOUNT"
echo "Supported on L1: ${token_supported:-unavailable}"
echo "Destination config: $(tr '\n' ' ' <<<"$dest_config" | xargs 2>/dev/null || true)"
echo "Source allowance: $source_allowance"
if [[ -n "$fee_wei" ]]; then
echo "Bridge fee: $fee_wei"
fi
if [[ -n "$fee_token" ]]; then
echo "Fee token: $fee_token"
if [[ "${fee_token,,}" != "0x0000000000000000000000000000000000000000" ]]; then
echo "Fee allowance: $fee_allowance"
else
echo "Fee mode: native"
fi
fi
echo "Mode: $([[ "$EXECUTE" == "true" ]] && echo "execute" || echo "dry-run")"
echo
if ! $EXECUTE; then
echo "Dry-run commands:"
if [[ "$source_allowance" -lt "$RAW_AMOUNT" ]]; then
echo "1. Approve source token"
printf 'cast send %q %q %q %q --rpc-url %q --private-key "$PRIVATE_KEY" --legacy\n' \
"$canonical_address" "approve(address,uint256)" "$CHAIN138_L1_BRIDGE" "$RAW_AMOUNT" "$RPC_URL_138"
fi
if [[ -n "$fee_token" && "${fee_token,,}" != "0x0000000000000000000000000000000000000000" && -n "$fee_wei" && "$fee_allowance" -lt "$fee_wei" ]]; then
echo "2. Approve ERC-20 fee token"
printf 'cast send %q %q %q %q --rpc-url %q --private-key "$PRIVATE_KEY" --legacy\n' \
"$fee_token" "approve(address,uint256)" "$CHAIN138_L1_BRIDGE" "$fee_wei" "$RPC_URL_138"
fi
echo "3. lockAndSend"
if [[ -n "$fee_token" && "${fee_token,,}" == "0x0000000000000000000000000000000000000000" && -n "$fee_wei" ]]; then
printf 'cast send %q %q %q %q %q %q --rpc-url %q --private-key "$PRIVATE_KEY" --legacy --value %q\n' \
"$CHAIN138_L1_BRIDGE" "lockAndSend(address,uint64,address,uint256)" "$canonical_address" "$selector" "$RECIPIENT" "$RAW_AMOUNT" "$RPC_URL_138" "$fee_wei"
else
printf 'cast send %q %q %q %q %q %q --rpc-url %q --private-key "$PRIVATE_KEY" --legacy\n' \
"$CHAIN138_L1_BRIDGE" "lockAndSend(address,uint64,address,uint256)" "$canonical_address" "$selector" "$RECIPIENT" "$RAW_AMOUNT" "$RPC_URL_138"
fi
echo
echo "Re-run with --execute to broadcast. Add --approve to auto-submit approvals first."
exit 0
fi
if [[ "$source_allowance" -lt "$RAW_AMOUNT" ]]; then
if ! $AUTO_APPROVE; then
echo "Source allowance is too low; re-run with --approve or approve manually first." >&2
exit 1
fi
approve_tx "$canonical_address" "$CHAIN138_L1_BRIDGE" "$RAW_AMOUNT"
fi
if [[ -n "$fee_token" && "${fee_token,,}" != "0x0000000000000000000000000000000000000000" && -n "$fee_wei" && "$fee_allowance" -lt "$fee_wei" ]]; then
if ! $AUTO_APPROVE; then
echo "Fee-token allowance is too low; re-run with --approve or approve manually first." >&2
exit 1
fi
approve_tx "$fee_token" "$CHAIN138_L1_BRIDGE" "$fee_wei"
fi
if [[ -n "$fee_token" && "${fee_token,,}" == "0x0000000000000000000000000000000000000000" && -n "$fee_wei" ]]; then
send_tx --value "$fee_wei"
else
send_tx
fi
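The human-to-raw amount conversion above scales by the token's `decimals()` and truncates toward zero; as a standalone sketch of that behavior (values are illustrative):

```shell
# Same Decimal-based conversion the script performs before lockAndSend:
# scale a human amount by 10^decimals and round down to an integer.
to_raw() {
  python3 - "$1" "$2" <<'PY'
from decimal import Decimal, ROUND_DOWN, getcontext
import sys
getcontext().prec = 80
amount = Decimal(sys.argv[1])
decimals = int(sys.argv[2])
print(int((amount * Decimal(10) ** decimals).quantize(Decimal('1'), rounding=ROUND_DOWN)))
PY
}

to_raw 250.5 6      # 6-decimal token like cUSDT -> 250500000
to_raw 1.9999999 6  # truncates, never rounds up -> 1999999
```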

View File

@@ -1,43 +1,96 @@
#!/usr/bin/env bash
# Fund the mainnet CCIPRelayBridge with WETH from the deployer wallet.
# Usage: ./scripts/bridge/fund-mainnet-relay-bridge.sh [amount_wei]
# If amount_wei is omitted, transfers the deployer's full WETH balance.
# Usage:
# ./scripts/bridge/fund-mainnet-relay-bridge.sh [amount_wei] [--dry-run]
# ./scripts/bridge/fund-mainnet-relay-bridge.sh --target-bridge-balance-wei 10000000000000000 [--dry-run]
# If amount_wei is omitted, transfers the deployer's full WETH balance.
# Env: PRIVATE_KEY, ETHEREUM_MAINNET_RPC or RPC_URL_MAINNET (use Infura/Alchemy if llamarpc rate-limits)
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
[[ -f "${PROJECT_ROOT}/smom-dbis-138/.env" ]] && source "${PROJECT_ROOT}/smom-dbis-138/.env" 2>/dev/null || true
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" >/dev/null 2>&1 || true
DEPLOYER="${DEPLOYER_ADDRESS:-0x4A666F96fC8764181194447A7dFdb7d471b301C8}"
WETH="0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2"
BRIDGE="0xF9A32F37099c582D28b4dE7Fca6eaC1e5259f939"
# Prefer RPC_URL_MAINNET so override works (e.g. RPC_URL_MAINNET=https://ethereum.publicnode.com if Infura requires secret)
WETH="${WETH9_MAINNET:-0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2}"
BRIDGE="${CCIP_RELAY_BRIDGE_MAINNET:-0xF9A32F37099c582D28b4dE7Fca6eaC1e5259f939}"
RPC="${RPC_URL_MAINNET:-${ETHEREUM_MAINNET_RPC:-https://ethereum.publicnode.com}}"
[[ -n "${PRIVATE_KEY:-}" ]] || { echo "PRIVATE_KEY required"; exit 1; }
DRY_RUN=0
TARGET_BRIDGE_BALANCE_WEI=""
AMOUNT_WEI=""
while [[ $# -gt 0 ]]; do
case "$1" in
--dry-run)
DRY_RUN=1
shift
;;
--target-bridge-balance-wei)
TARGET_BRIDGE_BALANCE_WEI="${2:-}"
shift 2
;;
*)
if [[ -z "$AMOUNT_WEI" ]]; then
AMOUNT_WEI="$1"
shift
else
echo "Usage: $0 [amount_wei] [--target-bridge-balance-wei WEI] [--dry-run]" >&2
exit 1
fi
;;
esac
done
command -v cast &>/dev/null || { echo "cast not found (install Foundry)"; exit 1; }
BALANCE=$(cast call $WETH "balanceOf(address)(uint256)" $DEPLOYER --rpc-url "$RPC" 2>/dev/null || echo "0")
echo "Deployer WETH balance (mainnet): $BALANCE wei"
echo "Bridge WETH balance (mainnet): $(cast call $WETH "balanceOf(address)(uint256)" $BRIDGE --rpc-url "$RPC" 2>/dev/null || echo "0") wei"
normalize_uint() {
awk '{print $1}'
}
if [[ "${1:-}" = "" ]]; then
AMT="$BALANCE"
echo "Transferring full balance: $AMT wei"
else
AMT="$1"
echo "Transferring: $AMT wei"
DEPLOYER_BALANCE="$( (cast call "$WETH" "balanceOf(address)(uint256)" "$DEPLOYER" --rpc-url "$RPC" 2>/dev/null || echo "0") | normalize_uint )"
BRIDGE_BALANCE="$( (cast call "$WETH" "balanceOf(address)(uint256)" "$BRIDGE" --rpc-url "$RPC" 2>/dev/null || echo "0") | normalize_uint )"
echo "Deployer WETH balance (mainnet): $DEPLOYER_BALANCE wei"
echo "Bridge WETH balance (mainnet): $BRIDGE_BALANCE wei"
if [[ -n "$TARGET_BRIDGE_BALANCE_WEI" ]]; then
AMOUNT_WEI="$(node -e 'const target=BigInt(process.argv[1]); const current=BigInt(process.argv[2]); const delta=target>current?target-current:0n; process.stdout.write(delta.toString())' "$TARGET_BRIDGE_BALANCE_WEI" "$BRIDGE_BALANCE")"
echo "Target bridge balance: $TARGET_BRIDGE_BALANCE_WEI wei"
echo "Calculated transfer: $AMOUNT_WEI wei"
fi
if [[ -z "$AMT" ]] || [[ "$AMT" = "0" ]]; then
if [[ -z "$AMOUNT_WEI" ]]; then
AMOUNT_WEI="$DEPLOYER_BALANCE"
echo "Transferring full deployer balance: $AMOUNT_WEI wei"
else
echo "Transferring: $AMOUNT_WEI wei"
fi
if [[ -z "$AMOUNT_WEI" ]] || [[ "$AMOUNT_WEI" = "0" ]]; then
echo "Nothing to transfer."
exit 0
fi
cast send $WETH "transfer(address,uint256)" $BRIDGE "$AMT" \
if [[ "$DRY_RUN" = "1" ]]; then
if node -e 'const amount=BigInt(process.argv[1]); const deployer=BigInt(process.argv[2]); process.exit(amount > deployer ? 1 : 0)' "$AMOUNT_WEI" "$DEPLOYER_BALANCE"; then
:
else
echo "[dry-run] Warning: requested transfer exceeds current deployer balance."
fi
echo "[dry-run] Would execute:"
echo "cast send $WETH \"transfer(address,uint256)\" $BRIDGE \"$AMOUNT_WEI\" --rpc-url \"$RPC\" --private-key \"\$PRIVATE_KEY\" --legacy"
exit 0
fi
[[ -n "${PRIVATE_KEY:-}" ]] || { echo "PRIVATE_KEY required unless --dry-run is used"; exit 1; }
node -e 'const amount=BigInt(process.argv[1]); const deployer=BigInt(process.argv[2]); if (amount > deployer) { console.error(`Requested transfer ${amount} exceeds deployer balance ${deployer}.`); process.exit(2); }' "$AMOUNT_WEI" "$DEPLOYER_BALANCE"
cast send "$WETH" "transfer(address,uint256)" "$BRIDGE" "$AMOUNT_WEI" \
--rpc-url "$RPC" \
--private-key "$PRIVATE_KEY" \
--legacy
echo "Done. Bridge WETH balance: $(cast call $WETH "balanceOf(address)(uint256)" $BRIDGE --rpc-url "$RPC") wei"
FINAL_BRIDGE_BALANCE="$( (cast call "$WETH" "balanceOf(address)(uint256)" "$BRIDGE" --rpc-url "$RPC" 2>/dev/null || echo "0") | normalize_uint )"
echo "Done. Bridge WETH balance: $FINAL_BRIDGE_BALANCE wei"
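The `--target-bridge-balance-wei` delta above is computed with BigInt in node so values past 2^63 stay exact; the same clamp-at-zero arithmetic, sketched here with python3 arbitrary-precision ints for illustration:

```shell
# Top-up delta: target minus current bridge balance, never negative —
# equivalent to the node BigInt one-liner the script uses.
topup_delta() {
  python3 -c 'import sys; t, c = int(sys.argv[1]), int(sys.argv[2]); print(max(t - c, 0))' "$1" "$2"
}

topup_delta 10000000000000000 2500000000000000    # shortfall -> 7500000000000000
topup_delta 10000000000000000 20000000000000000   # already above target -> 0
```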

View File

@@ -27,6 +27,7 @@ done
AMOUNT_ETH="${ARGS[0]:?Usage: $0 amount_eth [recipient] [--dry-run]}"
RECIPIENT="${ARGS[1]:-$(cast wallet address "$PRIVATE_KEY" 2>/dev/null)}"
SENDER_ADDR="$(cast wallet address "$PRIVATE_KEY" 2>/dev/null)"
DEST_SELECTOR="${CCIP_DEST_CHAIN_SELECTOR:-5009297550715157269}"
GAS_PRICE="${GAS_PRICE:-1000000000}"
@@ -39,13 +40,51 @@ extract_first_address() {
echo "$1" | grep -oE '0x[a-fA-F0-9]{40}' | sed -n '1p'
}
raw_uint() {
echo "$1" | awk '{print $1}'
}
lower() {
echo "$1" | tr '[:upper:]' '[:lower:]'
}
MAINNET_SELECTOR_VALUE="${MAINNET_SELECTOR:-5009297550715157269}"
BSC_SELECTOR_VALUE="${BSC_SELECTOR:-11344663589394136015}"
AVALANCHE_SELECTOR_VALUE="${AVALANCHE_SELECTOR:-6433500567565415381}"
assert_supported_direct_first_hop() {
case "$DEST_SELECTOR" in
"$MAINNET_SELECTOR_VALUE"|"${BSC_SELECTOR_VALUE}"|"${AVALANCHE_SELECTOR_VALUE}")
return 0
;;
esac
if [[ "${ALLOW_UNSUPPORTED_DIRECT_FIRST_HOP:-0}" = "1" ]]; then
echo "WARNING: proceeding with unsupported direct first hop for selector $DEST_SELECTOR because ALLOW_UNSUPPORTED_DIRECT_FIRST_HOP=1"
return 0
fi
cat <<EOF
ERROR: selector $DEST_SELECTOR is not a proven direct first hop from the current Chain 138 router.
The supported direct first-hop lanes today are:
- Mainnet ($MAINNET_SELECTOR_VALUE)
- BSC ($BSC_SELECTOR_VALUE)
- Avalanche ($AVALANCHE_SELECTOR_VALUE)
For Gnosis, Cronos, Celo, Polygon, Arbitrum, Optimism, and Base, use the Mainnet hub:
1. bridge 138 -> Mainnet
2. then bridge Mainnet -> destination
Set ALLOW_UNSUPPORTED_DIRECT_FIRST_HOP=1 only if you are intentionally testing an unsupported path
and accept that the source send may succeed without destination delivery.
EOF
exit 1
}
DEST_RAW="$(cast call "$BRIDGE" 'destinations(uint64)((uint64,address,bool))' "$DEST_SELECTOR" --rpc-url "$RPC" 2>/dev/null || echo "")"
DEST_ADDR="$(extract_first_address "$DEST_RAW")"
assert_supported_direct_first_hop
if [[ "$DEST_SELECTOR" == "$AVALANCHE_SELECTOR_VALUE" ]]; then
AVALANCHE_NATIVE_BRIDGE="${CCIPWETH9_BRIDGE_AVALANCHE:-}"
@@ -76,8 +115,82 @@ if [[ -n "$CONFIRM_ABOVE" ]] && awk -v a="$AMOUNT_ETH" -v b="$CONFIRM_ABOVE" 'BE
fi
AMOUNT_WEI=$(cast --to-wei "$AMOUNT_ETH" ether)
ALLOWANCE_WETH_RAW="$(cast call "$WETH9" "allowance(address,address)(uint256)" "$SENDER_ADDR" "$BRIDGE" --rpc-url "$RPC" 2>/dev/null || echo "0")"
ALLOWANCE_WETH="$(raw_uint "$ALLOWANCE_WETH_RAW")"
if [[ "$ALLOWANCE_WETH" =~ ^[0-9]+$ ]] && (( ALLOWANCE_WETH < AMOUNT_WEI )); then
cat <<EOF
ERROR: insufficient WETH allowance for source bridge pull.
Current allowance:
token: $WETH9
owner: $SENDER_ADDR
spender: $BRIDGE
allowance_raw: $ALLOWANCE_WETH
amount_raw: $AMOUNT_WEI
Approve WETH to the Chain 138 bridge first, for example:
cast send "$WETH9" "approve(address,uint256)" "$BRIDGE" "$AMOUNT_WEI" --rpc-url "$RPC" --private-key "\$PRIVATE_KEY" --gas-price "$GAS_PRICE" --legacy
EOF
exit 1
fi
FEE_WEI=$(cast call "$BRIDGE" "calculateFee(uint64,uint256)" "$DEST_SELECTOR" "$AMOUNT_WEI" --rpc-url "$RPC" 2>/dev/null | cast --to-dec || echo "0")
FEE_TOKEN=$(cast call "$BRIDGE" "feeToken()(address)" --rpc-url "$RPC" 2>/dev/null || echo "0x0")
if [[ "$FEE_TOKEN" != "0x0000000000000000000000000000000000000000" ]] && [[ -n "$FEE_WEI" ]] && [[ "$FEE_WEI" != "0" ]]; then
ALLOWANCE_FEE_RAW="$(cast call "$FEE_TOKEN" "allowance(address,address)(uint256)" "$SENDER_ADDR" "$BRIDGE" --rpc-url "$RPC" 2>/dev/null || echo "0")"
ALLOWANCE_FEE="$(raw_uint "$ALLOWANCE_FEE_RAW")"
if [[ "$ALLOWANCE_FEE" =~ ^[0-9]+$ ]] && (( ALLOWANCE_FEE < FEE_WEI )); then
cat <<EOF
ERROR: insufficient fee-token allowance for bridge fee pull.
Current allowance:
token: $FEE_TOKEN
owner: $SENDER_ADDR
spender: $BRIDGE
allowance_raw: $ALLOWANCE_FEE
required_fee_raw: $FEE_WEI
EOF
exit 1
fi
fi
DEST_RPC=""
DEST_WETH=""
case "$DEST_SELECTOR" in
"$MAINNET_SELECTOR_VALUE")
DEST_RPC="${ETHEREUM_MAINNET_RPC:-}"
DEST_WETH="${WETH9_MAINNET:-}"
;;
"$BSC_SELECTOR_VALUE")
DEST_RPC="${BSC_RPC_URL:-${BSC_MAINNET_RPC:-}}"
DEST_WETH="${WETH9_BSC:-}"
;;
"$AVALANCHE_SELECTOR_VALUE")
DEST_RPC="${AVALANCHE_RPC_URL:-${AVALANCHE_MAINNET_RPC:-}}"
DEST_WETH="${WETH9_AVALANCHE:-}"
;;
esac
if [[ -n "$DEST_ADDR" && -n "$DEST_RPC" && -n "$DEST_WETH" ]]; then
DEST_BALANCE_RAW="$(cast call "$DEST_WETH" "balanceOf(address)(uint256)" "$DEST_ADDR" --rpc-url "$DEST_RPC" 2>/dev/null || echo "0")"
DEST_BALANCE="$(raw_uint "$DEST_BALANCE_RAW")"
if [[ "$DEST_BALANCE" =~ ^[0-9]+$ ]] && (( DEST_BALANCE < AMOUNT_WEI )); then
SHORTFALL=$(( AMOUNT_WEI - DEST_BALANCE ))
cat <<EOF
ERROR: insufficient destination relay inventory for this direct first hop.
Destination receiver: $DEST_ADDR
Destination token: $DEST_WETH
Destination balance: $DEST_BALANCE
Requested amount: $AMOUNT_WEI
Shortfall: $SHORTFALL
Top up the destination relay inventory first, or reduce the send amount.
EOF
exit 1
fi
fi
V=""; [[ "$FEE_TOKEN" = "0x0000000000000000000000000000000000000000" ]] && [[ -n "$FEE_WEI" ]] && [[ "$FEE_WEI" != "0" ]] && V="--value $FEE_WEI"
GAS_OPT=""; [[ -n "$GAS_LIMIT" ]] && GAS_OPT="--gas-limit $GAS_LIMIT"
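The allowance and inventory preflights above compare `cast` output numerically, which is why the `raw_uint` helper strips the bracketed scientific-notation suffix `cast` appends to large uints; in isolation:

```shell
# raw_uint as defined in the script: keep only the first whitespace-separated
# field of a cast uint result such as "250000000000000000 [2.5e17]".
raw_uint() {
  echo "$1" | awk '{print $1}'
}

raw_uint "250000000000000000 [2.5e17]"   # -> 250000000000000000
```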

View File

@@ -28,7 +28,7 @@ logging.basicConfig(
logger = logging.getLogger('ccip-monitor')
# Load configuration from environment
RPC_URL = os.getenv('RPC_URL_138', 'http://192.168.11.250:8545')
RPC_URL = os.getenv('RPC_URL_138', 'http://192.168.11.211:8545')
CCIP_ROUTER_ADDRESS = os.getenv('CCIP_ROUTER_ADDRESS', '')
CCIP_SENDER_ADDRESS = os.getenv('CCIP_SENDER_ADDRESS', '')
LINK_TOKEN_ADDRESS = os.getenv('LINK_TOKEN_ADDRESS', '')
@@ -303,4 +303,3 @@ def main():
if __name__ == '__main__':
main()

View File

@@ -36,6 +36,8 @@ EXPECTED_CHAIN_ID_HEX="0x8a"
EXPECTED_CHAIN_ID_DEC=138
BLOCK_WAIT_SEC=${BLOCK_WAIT_SEC:-6}
RPC_TIMEOUT=${RPC_TIMEOUT:-15}
RPC_LAST_CURL_EXIT=0
RPC_LAST_ERROR=""
# Build list of WebSocket RPC URLs from chainlist
get_chainlist_ws_rpcs() {
@@ -75,9 +77,31 @@ rpc_call() {
local url=$1
local method=$2
local params=${3:-[]}
curl -s -m "$RPC_TIMEOUT" -X POST "$url" \
local body_file err_file rc
body_file=$(mktemp)
err_file=$(mktemp)
if curl -sS -m "$RPC_TIMEOUT" -X POST "$url" \
-H "Content-Type: application/json" \
-d "{\"jsonrpc\":\"2.0\",\"method\":\"$method\",\"params\":$params,\"id\":1}" 2>/dev/null || echo '{"error":"request failed"}'
-d "{\"jsonrpc\":\"2.0\",\"method\":\"$method\",\"params\":$params,\"id\":1}" \
-o "$body_file" 2>"$err_file"; then
rc=0
else
rc=$?
fi
RPC_LAST_CURL_EXIT=$rc
RPC_LAST_ERROR="$(tr '\n' ' ' < "$err_file" | sed 's/  */ /g' | sed 's/^ //; s/ $//')"
if [ "$rc" -eq 0 ]; then
cat "$body_file"
else
printf '{"error":"request failed","curl_exit":%s,"curl_error":%s}\n' \
"$rc" "$(jq -Rn --arg msg "$RPC_LAST_ERROR" '$msg')"
fi
rm -f "$body_file" "$err_file"
}
is_optional_external_rpc() {
local url="$1"
[[ "$url" == *"thirdweb.com/"* ]]
}
# Check block production: get block, wait, get again; at least one RPC must show advancing blocks
@@ -124,23 +148,38 @@ e2e_one() {
local chain_ok=false
local block_ok=false
local err_msg=""
local chain_id=$(echo "$(rpc_call "$url" "eth_chainId")" | jq -r '.result // empty')
local chain_resp chain_id block_resp block_hex rpc_error
chain_resp="$(rpc_call "$url" "eth_chainId")"
chain_id=$(echo "$chain_resp" | jq -r '.result // empty')
rpc_error=$(echo "$chain_resp" | jq -r '.error.message // .curl_error // empty')
if [ "$chain_id" = "$EXPECTED_CHAIN_ID_HEX" ] || [ "$chain_id" = "$EXPECTED_CHAIN_ID_DEC" ]; then
chain_ok=true
else
err_msg="chainId=$chain_id (expected $EXPECTED_CHAIN_ID_HEX)"
if [ -n "$rpc_error" ]; then
err_msg="$rpc_error"
else
err_msg="chainId=$chain_id (expected $EXPECTED_CHAIN_ID_HEX)"
fi
fi
local block_resp=$(rpc_call "$url" "eth_blockNumber")
local block_hex=$(echo "$block_resp" | jq -r '.result // empty')
block_resp="$(rpc_call "$url" "eth_blockNumber")"
block_hex=$(echo "$block_resp" | jq -r '.result // empty')
if [ -n "$block_hex" ]; then
block_ok=true
else
rpc_error=$(echo "$block_resp" | jq -r '.error.message // .curl_error // empty')
[ -n "$err_msg" ] && err_msg="$err_msg; " || true
err_msg="${err_msg}eth_blockNumber failed"
if [ -n "$rpc_error" ]; then
err_msg="${err_msg}eth_blockNumber failed (${rpc_error})"
else
err_msg="${err_msg}eth_blockNumber failed"
fi
fi
if [ "$chain_ok" = true ] && [ "$block_ok" = true ]; then
log_success "$label — chainId $chain_id, block $block_hex"
return 0
elif is_optional_external_rpc "$url"; then
log_warn "$label — optional external provider issue: $err_msg"
return 2
else
log_error "$label — $err_msg"
return 1
@@ -182,6 +221,7 @@ main() {
log_section "E2E — All chainlist HTTP RPC endpoints (thirdweb + d-bis + defi-oracle)"
local passed=0
local failed=0
local external_warn=0
while IFS= read -r line; do
[ -z "$line" ] && continue
url=$(normalize_url "$line")
@@ -193,7 +233,12 @@ main() {
if e2e_one "$url" "$url"; then
((passed++)) || true
else
((failed++)) || true
rc=$?
if [ "$rc" -eq 2 ]; then
((external_warn++)) || true
else
((failed++)) || true
fi
fi
done < <(get_chainlist_http_rpcs)
@@ -218,6 +263,7 @@ main() {
echo "Block production: $([ "$block_ok" = true ] && echo "PASSED" || echo "FAILED")"
echo "HTTP E2E passed: $passed"
echo "HTTP E2E failed: $failed"
echo "HTTP E2E external warnings: $external_warn"
echo "WebSocket E2E passed: $ws_passed"
echo "WebSocket E2E failed: $ws_failed"
echo ""
@@ -225,6 +271,9 @@ main() {
if [ "$block_ok" = true ] && [ "$failed" -eq 0 ]; then
log_success "Result: block production + HTTP chainlist E2E passed."
if [ "$external_warn" -gt 0 ]; then
log_warn "Third-party provider warnings: $external_warn"
fi
exit 0
fi
if [ "$block_ok" = false ]; then

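On curl failure the patched `rpc_call` emits a synthetic JSON envelope so downstream `jq` parsing never sees raw shell errors; the construction can be exercised on its own (the exit code and message below are illustrative):

```shell
# Build the same error envelope rpc_call prints when curl fails;
# jq -Rn JSON-quotes the captured stderr text safely.
make_rpc_error() {
  printf '{"error":"request failed","curl_exit":%s,"curl_error":%s}\n' \
    "$1" "$(jq -Rn --arg msg "$2" '$msg')"
}

make_rpc_error 28 'Connection timed out'
# -> {"error":"request failed","curl_exit":28,"curl_error":"Connection timed out"}
```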
View File

@@ -1,6 +1,9 @@
#!/usr/bin/env bash
# Clear transaction pools on all Besu nodes (RPC and Validators)
# This script clears transaction pool databases to remove stuck transactions
# Clear transaction pools on all Besu nodes (validators, Core/Thirdweb/public RPC).
#
# Stuck txs often reappear on Core when sentries or Alltra/Hybx RPC CTs (still on the P2P mesh) were not cleared.
# To include those: CLEAR_BESU_PEER_TXPOOLS=1 bash scripts/clear-all-transaction-pools.sh
# Peer tier (15001502, 2420/2430/2440/2460/2470/2480) uses pct stop + pct mount on the PVE host (avoids hung systemctl inside flaky CTs).
#
# SSH: uses PROXMOX_SSH_USER from config/ip-addresses.conf (root). If .env sets PROXMOX_USER=root@pam for the API,
# that value is NOT used for SSH (see PROXMOX_USER= assignment below).
@@ -13,12 +16,18 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
# Peer-tier Besu CTs (sentries, Alltra/Hybx RPC) retain mempool and can re-gossip txs to Core after a clear.
# Default: skip them (faster). To flush those pools too: CLEAR_BESU_PEER_TXPOOLS=1 bash scripts/clear-all-transaction-pools.sh
CLEAR_BESU_PEER_TXPOOLS="${CLEAR_BESU_PEER_TXPOOLS:-0}"
# Shell SSH must be root@host — not root@pam@host (.env often sets PROXMOX_USER=root@pam for API).
PROXMOX_SSH_USER="${PROXMOX_SSH_USER:-root}"
[[ "$PROXMOX_SSH_USER" == *"@"* ]] && PROXMOX_SSH_USER="root"
PROXMOX_USER="$PROXMOX_SSH_USER"
PROXMOX_ML110="${PROXMOX_ML110:-${PROXMOX_HOST_ML110:-192.168.11.10}}"
PROXMOX_R630="${PROXMOX_R630:-${PROXMOX_R630_01:-${PROXMOX_HOST_R630_01:-192.168.11.11}}}"
R630_03="${PROXMOX_R630_03:-${PROXMOX_HOST_R630_03:-192.168.11.13}}"
R630_02="${PROXMOX_R630_02:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"
# Colors
RED='\033[0;31m'
@@ -46,10 +55,19 @@ clear_node_pool() {
log_info "Clearing transaction pool for $NODE_TYPE (VMID $VMID on $HOST)..."
# Stop the service
# Stop the service (timeout each stop — Besu can hang on SIGTERM during heavy I/O)
log_info " Stopping service..."
ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no "$SSH_TARGET" \
"pct exec $VMID -- systemctl stop besu-validator 2>/dev/null || pct exec $VMID -- systemctl stop besu-rpc-core 2>/dev/null || pct exec $VMID -- systemctl stop besu-rpc.service 2>/dev/null || pct exec $VMID -- systemctl stop besu-rpc 2>/dev/null || true" 2>&1 | grep -v "Configuration file" || true
"pct exec $VMID -- bash -c '
stop_besu() {
local n=\"\$1\"
timeout 90 systemctl stop \"\${n}.service\" 2>/dev/null || timeout 90 systemctl stop \"\$n\" 2>/dev/null || true
}
stop_besu besu-validator
stop_besu besu-rpc-core
stop_besu besu-rpc
stop_besu besu-sentry
'" 2>&1 | grep -v "Configuration file" || true
sleep 2
@@ -75,16 +93,30 @@ clear_node_pool() {
log_warn " Could not clear transaction pool (may not exist)"
fi
# Restart the service
# Restart the service (first successful start wins; timeout avoids indefinite hang)
log_info " Restarting service..."
ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no "$SSH_TARGET" \
"pct exec $VMID -- systemctl start besu-validator 2>/dev/null || pct exec $VMID -- systemctl start besu-rpc-core 2>/dev/null || pct exec $VMID -- systemctl start besu-rpc.service 2>/dev/null || pct exec $VMID -- systemctl start besu-rpc 2>/dev/null || true" 2>&1 | grep -v "Configuration file" || true
"pct exec $VMID -- bash -c '
timeout 120 systemctl start besu-validator.service 2>/dev/null || timeout 120 systemctl start besu-validator 2>/dev/null || \
timeout 120 systemctl start besu-rpc-core.service 2>/dev/null || timeout 120 systemctl start besu-rpc-core 2>/dev/null || \
timeout 120 systemctl start besu-rpc.service 2>/dev/null || timeout 120 systemctl start besu-rpc 2>/dev/null || \
timeout 120 systemctl start besu-sentry.service 2>/dev/null || timeout 120 systemctl start besu-sentry 2>/dev/null || true
'" 2>&1 | grep -v "Configuration file" || true
sleep 3
# Verify service is running
# Verify at least one Besu unit is active (single line — avoids inactive\\ninactive\\nactive noise)
STATUS=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "$SSH_TARGET" \
"pct exec $VMID -- systemctl is-active besu-validator 2>/dev/null || pct exec $VMID -- systemctl is-active besu-rpc-core 2>/dev/null || pct exec $VMID -- systemctl is-active besu-rpc.service 2>/dev/null || pct exec $VMID -- systemctl is-active besu-rpc 2>/dev/null || echo 'unknown'" 2>&1 | grep -v "Configuration file" || echo "unknown")
"pct exec $VMID -- bash -c '
for u in besu-validator besu-rpc-core besu-rpc besu-sentry; do
for s in \"\$u\" \"\${u}.service\"; do
st=\$(systemctl is-active \"\$s\" 2>/dev/null || true)
[ \"\$st\" = active ] && { echo active; exit 0; }
done
done
echo inactive
'" 2>&1 | grep -v "Configuration file" | tr -d '\r' | tail -1) \
|| STATUS="unknown"
if [ "$STATUS" = "active" ]; then
log_success " Service restarted and active"
@@ -95,15 +127,129 @@ clear_node_pool() {
echo ""
}
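The single-token verify step above relies on `tr -d '\r'` plus `tail -1` to collapse multi-unit probe output into one status word. A standalone sketch with canned output (imitating `systemctl is-active` probed across several units over SSH, where a pty can introduce carriage returns):

```shell
#!/usr/bin/env bash
# Canned probe output: two inactive units followed by the one that is running.
probe_output=$'inactive\r\ninactive\r\nactive\r'

# Same normalization as the verify step: strip CRs, keep only the last line.
STATUS=$(printf '%s\n' "$probe_output" | tr -d '\r' | tail -1)
echo "$STATUS"
```

Without `tail -1`, a later `[ "$STATUS" = "active" ]` comparison would silently fail against the multi-line string.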
# Clear txpool files while CT is stopped — mount rootfs from PVE host (reliable for sentries / Alltra RPC).
clear_peer_txpool_via_pct_mount() {
local VMID=$1
local HOST=$2
local LABEL=$3
local SSH_TARGET="${PROXMOX_USER}@${HOST}"
log_info "Clearing $LABEL (VMID $VMID on $HOST) via pct stop + rootfs wipe..."
# shellcheck disable=SC2087
if ssh -o ConnectTimeout=15 -o StrictHostKeyChecking=no "$SSH_TARGET" bash -s <<EOF
set +e
VMID=$VMID
echo " [pve] pct stop \$VMID (timeout 300s)"
timeout 300 pct stop "\$VMID" 2>/dev/null || true
for i in \$(seq 1 90); do
if pct status "\$VMID" 2>/dev/null | grep -qi stopped; then
echo " [pve] CT \$VMID stopped"
break
fi
sleep 2
done
if ! pct status "\$VMID" 2>/dev/null | grep -qi stopped; then
echo " [pve] WARN: CT \$VMID not stopped after wait — cannot wipe pool safely"
timeout 120 pct start "\$VMID" 2>/dev/null || true
exit 1
fi
sleep 2
# After pct stop, /var/lib/lxc/<vmid>/rootfs often exists but is empty until pct mount binds the CT disk (LVM/ZFS).
# Using the path without mount caused silent no-ops on r630-03 mesh CTs (no "cleared under" lines).
MP="/var/lib/lxc/\${VMID}/rootfs"
pct unmount "\$VMID" 2>/dev/null || true
if ! pct mount "\$VMID" 2>/dev/null; then
echo " [pve] WARN: pct mount \$VMID failed — cannot wipe pool safely"
timeout 120 pct start "\$VMID" 2>/dev/null || true
exit 1
fi
echo " [pve] pct mount bound \$MP"
if [ ! -d "\$MP" ]; then
echo " [pve] WARN: no rootfs at \$MP after mount"
pct unmount "\$VMID" 2>/dev/null || true
timeout 120 pct start "\$VMID" 2>/dev/null || true
exit 1
fi
for dd in "\$MP/data/besu" "\$MP/var/lib/besu" "\$MP/opt/besu"; do
if [ -d "\$dd" ]; then
find "\$dd" -type d -name "*pool*" -prune -exec rm -rf {} + 2>/dev/null || true
find "\$dd" -type f -name "*transaction*" -delete 2>/dev/null || true
find "\$dd" -type f -name "*txpool*" -delete 2>/dev/null || true
echo " [pve] cleared under \$dd"
fi
done
pct unmount "\$VMID" 2>/dev/null || true
sleep 2
echo " [pve] pct start \$VMID"
timeout 180 pct start "\$VMID" 2>/dev/null || true
sleep 2
exit 0
EOF
then
log_success " Peer CT $VMID — pool wipe via mount complete"
else
log_warn " Peer CT $VMID — mount path had issues; confirm: ssh ${SSH_TARGET} 'pct status $VMID'"
fi
echo ""
}
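The stop-then-poll sequence inside the heredoc above follows a generic bounded-wait pattern. A self-contained sketch, where `wait_for` is a hypothetical helper (not part of the script) and a temp-file touch stands in for the CT reaching "stopped":

```shell
#!/usr/bin/env bash
set -euo pipefail

# wait_for <tries> <sleep_seconds> <cmd...>: retry until the command succeeds or tries run out.
wait_for() {
  local tries=$1 pause=$2; shift 2
  local i
  for i in $(seq 1 "$tries"); do
    "$@" && return 0
    sleep "$pause"
  done
  return 1
}

marker=$(mktemp -u)               # path only; the file does not exist yet
( sleep 0.2; touch "$marker" ) &  # condition becomes true shortly, like a CT finishing its stop
if wait_for 20 0.1 test -f "$marker"; then
  echo "stopped"
else
  echo "timeout"
fi
rm -f "$marker"
```

The key property is the bounded total wait (tries × sleep), so a wedged container cannot hang the whole run.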
# Peer tier first (when enabled) so validators/RPC are not refilled from sentries mid-run.
if [[ "$CLEAR_BESU_PEER_TXPOOLS" == "1" ]]; then
log_section "Clearing Sentry transaction pools (15001502) — pct mount on host"
for vmid in 1500 1501 1502; do
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "${PROXMOX_USER}@${PROXMOX_R630}" \
"pct list | grep -q '^${vmid} '" 2>/dev/null; then
clear_peer_txpool_via_pct_mount "$vmid" "$PROXMOX_R630" "Sentry"
else
log_warn "Sentry VMID $vmid not found on ${PROXMOX_R630}"
fi
done
log_section "Clearing current edge RPC pools (2420/2430/2440/2460/2470/2480) — pct mount on host"
for vmid in 2420 2430 2440 2460 2470 2480; do
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "${PROXMOX_USER}@${PROXMOX_R630}" \
"pct list | grep -q '^${vmid} '" 2>/dev/null; then
clear_peer_txpool_via_pct_mount "$vmid" "$PROXMOX_R630" "Besu RPC (Alltra/Hybx)"
else
log_warn "VMID $vmid not found on ${PROXMOX_R630}"
fi
done
# r630-03: validators 10031004, sentries 15031508, Core2 2102, Fireblocks 2301/2304, ThirdWeb stack 2400/2402/2403
log_section "Clearing Besu mesh on r630-03 (15031508, 2102, 2301, 2304, 24002403)"
for vmid in 1503 1504 1505 1506 1507 1508 2102 2301 2304 2400 2402 2403; do
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "${PROXMOX_USER}@${R630_03}" \
"pct list | grep -q '^${vmid} '" 2>/dev/null; then
clear_peer_txpool_via_pct_mount "$vmid" "$R630_03" "Besu mesh r630-03 VMID $vmid"
else
log_warn "VMID $vmid not found on ${R630_03}"
fi
done
# r630-02: public 2201 + named RPC / ThirdWeb helper CTs (same P2P mesh)
log_section "Clearing Besu mesh on r630-02 (2201, 2303, 23052308, 2401)"
for vmid in 2201 2303 2305 2306 2307 2308 2401; do
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "${PROXMOX_USER}@${R630_02}" \
"pct list | grep -q '^${vmid} '" 2>/dev/null; then
clear_peer_txpool_via_pct_mount "$vmid" "$R630_02" "Besu mesh r630-02 VMID $vmid"
else
log_warn "VMID $vmid not found on ${R630_02}"
fi
done
else
log_info "Skipping sentry + Alltra/Hybx pool clear (set CLEAR_BESU_PEER_TXPOOLS=1 if stuck txs reappear on Core after a clear)."
fi
# Clear validators
log_section "Clearing Validator Transaction Pools"
# Validators: 10001002 on r630-01; 10031004 on r630-03 (see ALL_VMIDS_ENDPOINTS.md).
VALIDATORS=(
"1000:$PROXMOX_R630:Validator"
"1001:$PROXMOX_R630:Validator"
"1002:$PROXMOX_R630:Validator"
"1003:$PROXMOX_ML110:Validator"
"1004:$PROXMOX_ML110:Validator"
"1003:$R630_03:Validator"
"1004:$R630_03:Validator"
)
for validator in "${VALIDATORS[@]}"; do
@@ -124,19 +270,39 @@ else
log_warn "RPC node (2101) not found on either host"
fi
# Clear RPC Public (2201) — often used when Core is down; ensures deploy txs not stuck
log_section "Clearing RPC Public (2201)"
R630_02="${PROXMOX_R630_02:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "${PROXMOX_USER}@${R630_02}" \
"pct list | grep -q '2201'" 2>/dev/null; then
clear_node_pool 2201 "$R630_02" "RPC Public"
# Clear RPC Core Thirdweb admin (2103) — r630-01 per ALL_VMIDS_ENDPOINTS.md
log_section "Clearing RPC Core Thirdweb admin (2103)"
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "${PROXMOX_USER}@${PROXMOX_R630}" \
"pct list | grep -q '^2103 '" 2>/dev/null; then
clear_node_pool 2103 "$PROXMOX_R630" "RPC Thirdweb Core"
else
log_warn "RPC Public (2201) not found on ${R630_02}"
log_warn "RPC Thirdweb Core (2103) not found on ${PROXMOX_R630}"
fi
# 2102 (r630-03) and 2201 (r630-02) are cleared in the peer-tier pct-mount pass when CLEAR_BESU_PEER_TXPOOLS=1
if [[ "$CLEAR_BESU_PEER_TXPOOLS" != "1" ]]; then
log_section "Clearing RPC Core 2 (2102)"
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "${PROXMOX_USER}@${R630_03}" \
"pct list | grep -q '^2102 '" 2>/dev/null; then
clear_node_pool 2102 "$R630_03" "RPC Core 2"
else
log_warn "RPC Core 2 (2102) not found on ${R630_03}"
fi
log_section "Clearing RPC Public (2201)"
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "${PROXMOX_USER}@${R630_02}" \
"pct list | grep -q '^2201 '" 2>/dev/null; then
clear_node_pool 2201 "$R630_02" "RPC Public"
else
log_warn "RPC Public (2201) not found on ${R630_02}"
fi
else
log_info "Skipping duplicate 2102/2201 clear_node_pool (already wiped in peer-tier pass)."
fi
log_section "Transaction Pool Clear Complete"
echo "Next steps:"
echo " 1. Wait 30-60 seconds for nodes to fully restart"
echo " 2. Check pending transactions: bash scripts/check-pending-transactions.sh"
echo " 2. Check pending transactions: bash scripts/verify/check-pending-transactions-chain138.sh"
echo " 3. Monitor health: bash scripts/monitoring/monitor-blockchain-health.sh"

View File

@@ -38,7 +38,7 @@ fi
# All Besu nodes
VALIDATORS=(1000 1001 1002 1003 1004)
RPC_NODES=(2500 2501 2502)
RPC_NODES=(2101 2102 2103 2201 2301 2303 2304 2305 2306 2307 2308 2400 2401 2402 2403 2420 2430 2440 2460 2470 2480)
log_info "Stopping all Besu nodes..."
for vmid in "${VALIDATORS[@]}"; do
@@ -101,4 +101,3 @@ log_info "Next steps:"
log_info " 1. Wait for nodes to sync"
log_info " 2. Run: ./scripts/configure-ethereum-mainnet-final.sh"
log_info ""

View File

@@ -18,9 +18,10 @@ ZONE_ID="${CLOUDFLARE_ZONE_ID:-${CLOUDFLARE_ZONE_ID_D_BIS_ORG}}"
# Third NPMplus (192.168.11.169) — Alltra, HYBX, rpc-core-2
ORIGIN="https://192.168.11.169:443"
CNAME_TARGET="${TUNNEL_ID}.cfargotunnel.com"
INCLUDE_PLACEHOLDER_HOSTNAMES="${INCLUDE_PLACEHOLDER_HOSTNAMES:-0}"
# All hostnames on this tunnel (must match NPMplus proxy hosts on 192.168.11.169)
HOSTNAMES=(
# Active hostnames on this tunnel (must match live proxy hosts on 192.168.11.169)
ACTIVE_HOSTNAMES=(
rpc-core-2.d-bis.org
rpc-alltra.d-bis.org
rpc-alltra-2.d-bis.org
@@ -30,6 +31,10 @@ HOSTNAMES=(
rpc-hybx-3.d-bis.org
cacti-alltra.d-bis.org
cacti-hybx.d-bis.org
)
# Firefly / Fabric / Indy remain placeholders until the real web listener is deployed.
PLACEHOLDER_HOSTNAMES=(
firefly-alltra-1.d-bis.org
firefly-alltra-2.d-bis.org
firefly-hybx-1.d-bis.org
@@ -40,6 +45,11 @@ HOSTNAMES=(
indy-hybx.d-bis.org
)
HOSTNAMES=("${ACTIVE_HOSTNAMES[@]}")
if [ "$INCLUDE_PLACEHOLDER_HOSTNAMES" = "1" ]; then
HOSTNAMES+=("${PLACEHOLDER_HOSTNAMES[@]}")
fi
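The active/placeholder split above is plain bash array composition. A minimal sketch with hypothetical hostnames:

```shell
#!/usr/bin/env bash
ACTIVE=(rpc-a.example.org rpc-b.example.org)
PLACEHOLDER=(firefly-a.example.org)
INCLUDE_PLACEHOLDER=0            # set to 1 to publish placeholders too

ALL=("${ACTIVE[@]}")
if [ "$INCLUDE_PLACEHOLDER" = "1" ]; then
  ALL+=("${PLACEHOLDER[@]}")
fi
echo "${#ALL[@]}"
```

Quoting each expansion as `"${ARR[@]}"` preserves elements containing spaces; the unquoted form would re-split them.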
# Auth
if [ -n "${CLOUDFLARE_API_TOKEN:-}" ]; then
AUTH_H=(-H "Authorization: Bearer $CLOUDFLARE_API_TOKEN")
@@ -59,6 +69,9 @@ echo "━━━━━━━━━━━━━━━━━━━━━━━━
echo "Tunnel ID: $TUNNEL_ID"
echo "Origin: $ORIGIN"
echo "CNAME target: $CNAME_TARGET"
if [ "$INCLUDE_PLACEHOLDER_HOSTNAMES" != "1" ]; then
echo "Placeholder Firefly/Fabric/Indy hostnames are excluded by default. Set INCLUDE_PLACEHOLDER_HOSTNAMES=1 only after those HTTP services are actually live."
fi
echo ""
# Build ingress config from HOSTNAMES + catch-all
@@ -81,18 +94,20 @@ fi
echo ""
echo "Adding/updating DNS CNAME records..."
for h in "${HOSTNAMES[@]}"; do
EXISTING=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records?name=${h}&type=CNAME" \
EXISTING=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records?name=${h}" \
"${AUTH_H[@]}" -H "Content-Type: application/json")
RECORD_COUNT=$(echo "$EXISTING" | jq -r '.result | length')
RECORD_ID=$(echo "$EXISTING" | jq -r '.result[0].id // empty')
RECORD_TYPE=$(echo "$EXISTING" | jq -r '.result[0].type // empty')
CONTENT=$(echo "$EXISTING" | jq -r '.result[0].content // empty')
DATA=$(jq -n --arg name "$h" --arg target "$CNAME_TARGET" \
'{type:"CNAME",name:$name,content:$target,ttl:1,proxied:true}')
if [ -n "$RECORD_ID" ] && [ "$RECORD_ID" != "null" ]; then
if [ "$CONTENT" = "$CNAME_TARGET" ]; then
if [ "$RECORD_COUNT" -gt 0 ] && [ -n "$RECORD_ID" ] && [ "$RECORD_ID" != "null" ]; then
if [ "$RECORD_TYPE" = "CNAME" ] && [ "$CONTENT" = "$CNAME_TARGET" ]; then
echo " $h: OK (CNAME → $CNAME_TARGET)"
else
elif [ "$RECORD_TYPE" = "CNAME" ]; then
UPD=$(curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
"${AUTH_H[@]}" -H "Content-Type: application/json" -d "$DATA")
if echo "$UPD" | jq -e '.success == true' >/dev/null 2>&1; then
@@ -100,6 +115,20 @@ for h in "${HOSTNAMES[@]}"; do
else
echo " $h: Update failed"
fi
else
echo " $h: Replacing existing ${RECORD_TYPE} record(s) with proxied tunnel CNAME"
echo "$EXISTING" | jq -r '.result[].id' | while read -r rid; do
[ -n "$rid" ] || continue
curl -s -X DELETE "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${rid}" \
"${AUTH_H[@]}" -H "Content-Type: application/json" >/dev/null
done
CR=$(curl -s -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records" \
"${AUTH_H[@]}" -H "Content-Type: application/json" -d "$DATA")
if echo "$CR" | jq -e '.success == true' >/dev/null 2>&1; then
echo " $h: Created CNAME → $CNAME_TARGET"
else
echo " $h: Create failed ($(echo "$CR" | jq -r '.errors[0].message // "unknown"' 2>/dev/null))"
fi
fi
else
CR=$(curl -s -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records" \
@@ -112,6 +141,32 @@ for h in "${HOSTNAMES[@]}"; do
fi
done
if [ "$INCLUDE_PLACEHOLDER_HOSTNAMES" != "1" ]; then
echo ""
echo "Removing placeholder DNS records that should not be published yet..."
for h in "${PLACEHOLDER_HOSTNAMES[@]}"; do
EXISTING=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records?name=${h}" \
"${AUTH_H[@]}" -H "Content-Type: application/json")
RECORD_IDS=$(echo "$EXISTING" | jq -r '.result[]?.id // empty')
if [ -z "$RECORD_IDS" ]; then
echo " $h: OK (no DNS record)"
continue
fi
while read -r rid; do
[ -n "$rid" ] || continue
DEL=$(curl -s -X DELETE "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${rid}" \
"${AUTH_H[@]}" -H "Content-Type: application/json")
if echo "$DEL" | jq -e '.success == true' >/dev/null 2>&1; then
echo " $h: Deleted placeholder DNS record"
else
echo " $h: Delete failed ($(echo "$DEL" | jq -r '.errors[0].message // "unknown"' 2>/dev/null))"
fi
done <<< "$RECORD_IDS"
done
fi
echo ""
echo "Done. Next: Request Let's Encrypt certs in NPMplus UI for each domain."
echo "See: docs/04-configuration/UDM_PRO_NPMPLUS_ALLTRA_HYBX_PORT_FORWARD.md for UDM Pro port forward (manual)."
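The placeholder-cleanup loop above pulls every matching record id out of a single list response. The jq pattern in isolation, against canned JSON (no API call):

```shell
#!/usr/bin/env bash
# Canned Cloudflare-style list response with two records for the same name.
EXISTING='{"success":true,"result":[{"id":"rec-aaa","type":"A"},{"id":"rec-bbb","type":"CNAME"}]}'

# .result[]?.id tolerates a missing or empty result array; // empty drops nulls.
RECORD_IDS=$(printf '%s\n' "$EXISTING" | jq -r '.result[]?.id // empty')
printf '%s\n' "$RECORD_IDS"
```

Feeding the ids through `while read -r rid` (as the script does) then handles one delete per line.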

View File

@@ -0,0 +1,84 @@
#!/usr/bin/env bash
# Cloudflare DNS for omdnl.org: apex (@) and www → PUBLIC_IP (A records, proxied).
#
# Prerequisite: Zone omdnl.org exists in Cloudflare and nameservers are delegated.
# Requires .env: CLOUDFLARE_API_TOKEN (or CLOUDFLARE_EMAIL + CLOUDFLARE_API_KEY),
# CLOUDFLARE_ZONE_ID_OMDNL_ORG (or CLOUDFLARE_ZONE_ID when zone is omdnl.org)
# PUBLIC_IP (WAN IP that reaches NPMplus, same pattern as other public sites)
#
# Usage: bash scripts/cloudflare/configure-omdnl-org-dns.sh [--dry-run]
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
source config/ip-addresses.conf 2>/dev/null || true
[ -f .env ] && set +u && source .env 2>/dev/null || true && set -u
ZONE_ID="${CLOUDFLARE_ZONE_ID_OMDNL_ORG:-${CLOUDFLARE_ZONE_ID:-}}"
PUBLIC_IP="${PUBLIC_IP:-}"
ZONE_NAME="${OMDNL_ORG_DNS_ZONE:-omdnl.org}"
DRY=false
[[ "${1:-}" == "--dry-run" ]] && DRY=true
[ -n "$PUBLIC_IP" ] || { echo "Set PUBLIC_IP in .env (WAN IP for NPM)" >&2; exit 1; }
if [ -n "${CLOUDFLARE_API_TOKEN:-}" ]; then
AUTH_H=(-H "Authorization: Bearer $CLOUDFLARE_API_TOKEN")
elif [ -n "${CLOUDFLARE_API_KEY:-}" ] && [ -n "${CLOUDFLARE_EMAIL:-}" ]; then
AUTH_H=(-H "X-Auth-Email: $CLOUDFLARE_EMAIL" -H "X-Auth-Key: $CLOUDFLARE_API_KEY")
else
echo "Set CLOUDFLARE_API_TOKEN or (CLOUDFLARE_EMAIL + CLOUDFLARE_API_KEY) in .env" >&2
exit 1
fi
[ -n "$ZONE_ID" ] || { echo "Set CLOUDFLARE_ZONE_ID_OMDNL_ORG (or CLOUDFLARE_ZONE_ID) in .env" >&2; exit 1; }
upsert_a() {
local api_name="$1"
local query_name="$2"
local data
data=$(jq -n --arg name "$api_name" --arg content "$PUBLIC_IP" \
'{type:"A",name:$name,content:$content,ttl:1,proxied:true}')
if $DRY; then
echo "[dry-run] would upsert A $api_name$PUBLIC_IP"
return 0
fi
local existing record_id
existing=$(curl -s -G "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records" \
--data-urlencode "name=${query_name}" \
--data-urlencode "type=A" \
"${AUTH_H[@]}" -H "Content-Type: application/json")
record_id=$(echo "$existing" | jq -r '.result[0].id // empty')
if [ -n "$record_id" ] && [ "$record_id" != "null" ]; then
local upd
upd=$(curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${record_id}" \
"${AUTH_H[@]}" -H "Content-Type: application/json" -d "$data")
echo "$upd" | jq -e '.success == true' >/dev/null 2>&1 && echo "Updated A ${query_name}" || { echo "$upd" | jq . >&2; return 1; }
else
local cr code
cr=$(curl -s -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records" \
"${AUTH_H[@]}" -H "Content-Type: application/json" -d "$data")
if echo "$cr" | jq -e '.success == true' >/dev/null 2>&1; then
echo "Created A ${query_name}"
return 0
fi
code=$(echo "$cr" | jq -r '.errors[0].code // empty')
if [ "$code" = "81058" ]; then
echo "OK A ${query_name} (already exists with same target)"
return 0
fi
echo "$cr" | jq . >&2
return 1
fi
}
echo "omdnl.org DNS (zone id set) → ${PUBLIC_IP} (proxied)"
# Apex: API name @ ; list API returns full record name omdnl.org
upsert_a "@" "${ZONE_NAME}"
# www
upsert_a "www" "www.${ZONE_NAME}"
echo "Done. After propagation: request NPM TLS for omdnl.org and www.omdnl.org."
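The create path above treats Cloudflare error code 81058 ("an identical record already exists") as success rather than failure. The decision in isolation, with a canned error response:

```shell
#!/usr/bin/env bash
# Canned failure response as returned when an identical A record already exists.
CR='{"success":false,"errors":[{"code":81058,"message":"An identical record already exists."}]}'

if printf '%s\n' "$CR" | jq -e '.success == true' >/dev/null 2>&1; then
  echo "created"
else
  code=$(printf '%s\n' "$CR" | jq -r '.errors[0].code // empty')
  if [ "$code" = "81058" ]; then
    echo "already-exists-ok"     # idempotent rerun: not an error
  else
    echo "failed"
  fi
fi
```

This keeps the script safe to rerun: a second invocation against an unchanged zone exits 0 instead of reporting a spurious failure.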

View File

@@ -0,0 +1,65 @@
#!/usr/bin/env bash
# Create or update Cloudflare DNS for relay-mainnet-cw.d-bis.org → PUBLIC_IP (DNS-only, gray cloud).
# Matches scripts/update-all-dns-to-public-ip.sh behavior for other NPM-fronted d-bis hosts.
#
# Usage: bash scripts/cloudflare/configure-relay-mainnet-cw-dns.sh
# Requires: .env with CLOUDFLARE_API_TOKEN or (CLOUDFLARE_EMAIL + CLOUDFLARE_API_KEY),
# CLOUDFLARE_ZONE_ID_D_BIS_ORG or CLOUDFLARE_ZONE_ID, and PUBLIC_IP (WAN → NPM NAT).
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
source config/ip-addresses.conf 2>/dev/null || true
[ -f .env ] && set +u && source .env 2>/dev/null || true && set -u
ZONE_ID="${CLOUDFLARE_ZONE_ID:-${CLOUDFLARE_ZONE_ID_D_BIS_ORG:-}}"
HOSTNAME="relay-mainnet-cw.d-bis.org"
NAME_LABEL="relay-mainnet-cw"
PUBLIC_IP="${PUBLIC_IP:-76.53.10.36}"
if [ -n "${CLOUDFLARE_API_TOKEN:-}" ]; then
AUTH_H=(-H "Authorization: Bearer $CLOUDFLARE_API_TOKEN")
elif [ -n "${CLOUDFLARE_API_KEY:-}" ] && [ -n "${CLOUDFLARE_EMAIL:-}" ]; then
AUTH_H=(-H "X-Auth-Email: $CLOUDFLARE_EMAIL" -H "X-Auth-Key: $CLOUDFLARE_API_KEY")
else
echo "Set CLOUDFLARE_API_TOKEN or (CLOUDFLARE_EMAIL + CLOUDFLARE_API_KEY) in .env" >&2
exit 1
fi
[ -z "${ZONE_ID:-}" ] && { echo "Set CLOUDFLARE_ZONE_ID or CLOUDFLARE_ZONE_ID_D_BIS_ORG in .env" >&2; exit 1; }
DATA=$(jq -n --arg name "$NAME_LABEL" --arg content "$PUBLIC_IP" \
'{type:"A",name:$name,content:$content,ttl:1,proxied:false}')
echo "Relay mainnet-cw DNS: $HOSTNAME$PUBLIC_IP (DNS-only)"
EXISTING=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records?name=${HOSTNAME}" \
"${AUTH_H[@]}" -H "Content-Type: application/json")
RECORD_ID=$(echo "$EXISTING" | jq -r '.result[0].id // empty')
CURRENT_CONTENT=$(echo "$EXISTING" | jq -r '.result[0].content // empty')
CURRENT_PROXIED=$(echo "$EXISTING" | jq -r '.result[0].proxied // false')
if [ -n "$RECORD_ID" ] && [ "$RECORD_ID" != "null" ]; then
if [ "$CURRENT_CONTENT" = "$PUBLIC_IP" ] && [ "$CURRENT_PROXIED" = "false" ]; then
echo " $HOSTNAME: OK (A → $PUBLIC_IP, proxied=false)"
exit 0
fi
UPD=$(curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
"${AUTH_H[@]}" -H "Content-Type: application/json" -d "$DATA")
if echo "$UPD" | jq -e '.success == true' >/dev/null 2>&1; then
echo " $HOSTNAME: Updated A → $PUBLIC_IP (proxied=false)"
else
echo " $HOSTNAME: Update failed ($(echo "$UPD" | jq -r '.errors[0].message // "unknown"' 2>/dev/null))" >&2
exit 1
fi
else
CR=$(curl -s -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records" \
"${AUTH_H[@]}" -H "Content-Type: application/json" -d "$DATA")
if echo "$CR" | jq -e '.success == true' >/dev/null 2>&1; then
echo " $HOSTNAME: Created A → $PUBLIC_IP (proxied=false)"
else
echo " $HOSTNAME: Create failed ($(echo "$CR" | jq -r '.errors[0].message // "unknown"' 2>/dev/null))" >&2
exit 1
fi
fi

View File

@@ -0,0 +1,31 @@
#!/usr/bin/env bash
# Purge Cloudflare edge cache for static paths on info.defi-oracle.io (robots, sitemap, agent-hints).
# Use when the origin is correct but verify shows HTML fallback for /robots.txt etc.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
# shellcheck source=/dev/null
source "$PROJECT_ROOT/config/ip-addresses.conf" 2>/dev/null || true
if [[ -f "$PROJECT_ROOT/.env" ]]; then
set +u
# shellcheck source=/dev/null
source "$PROJECT_ROOT/.env" 2>/dev/null || true
set -u
fi
ZONE="${CLOUDFLARE_ZONE_ID_DEFI_ORACLE_IO:?Set CLOUDFLARE_ZONE_ID_DEFI_ORACLE_IO in .env}"
BASE="${INFO_SITE_BASE:-https://info.defi-oracle.io}"
BASE="${BASE%/}"
AUTH=()
if [[ -n "${CLOUDFLARE_API_TOKEN:-}" ]]; then
AUTH=(-H "Authorization: Bearer $CLOUDFLARE_API_TOKEN")
elif [[ -n "${CLOUDFLARE_API_KEY:-}" && -n "${CLOUDFLARE_EMAIL:-}" ]]; then
AUTH=(-H "X-Auth-Email: $CLOUDFLARE_EMAIL" -H "X-Auth-Key: $CLOUDFLARE_API_KEY")
else
echo "Need CLOUDFLARE_API_TOKEN or CLOUDFLARE_EMAIL + CLOUDFLARE_API_KEY" >&2
exit 1
fi
PAYLOAD=$(jq -n --arg b "$BASE" '{files: [$b+"/robots.txt",$b+"/sitemap.xml",$b+"/agent-hints.json",$b+"/llms.txt"]}')
curl -sS -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE}/purge_cache" "${AUTH[@]}" \
-H "Content-Type: application/json" --data "$PAYLOAD" | jq -e '.success == true' >/dev/null
echo "Purged static URLs under $BASE"
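Building the purge payload with `jq -n` (as above) keeps URL quoting and JSON escaping safe. The construction in isolation:

```shell
#!/usr/bin/env bash
BASE="https://info.defi-oracle.io"
# --arg passes BASE as a properly escaped jq string variable.
PAYLOAD=$(jq -n --arg b "$BASE" \
  '{files: [$b+"/robots.txt", $b+"/sitemap.xml", $b+"/agent-hints.json", $b+"/llms.txt"]}')
printf '%s\n' "$PAYLOAD" | jq -r '.files[0]'
```

Hand-assembling the same JSON with string interpolation would break on any base URL containing quotes or backslashes.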

View File

@@ -0,0 +1,150 @@
#!/usr/bin/env bash
# Publish info.defi-oracle.io and www.info.defi-oracle.io.
# Prefers the VMID 2400 tunnel when it exists; otherwise falls back to the public NPMplus edge.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
source "$PROJECT_ROOT/config/ip-addresses.conf" 2>/dev/null || true
if [[ -f "$PROJECT_ROOT/.env" ]]; then
set +u
source "$PROJECT_ROOT/.env" 2>/dev/null || true
set -u
fi
TUNNEL_ID="${VMID2400_TUNNEL_ID:-${CLOUDFLARE_TUNNEL_ID_VMID2400:-26138c21-db00-4a02-95db-ec75c07bda5b}}"
ZONE_ID="${CLOUDFLARE_ZONE_ID_DEFI_ORACLE_IO:-}"
TUNNEL_TARGET="${TUNNEL_ID}.cfargotunnel.com"
EDGE_MODE="${INFO_DEFI_ORACLE_EDGE_MODE:-auto}"
PUBLIC_EDGE_IP="${INFO_DEFI_ORACLE_PUBLIC_IP:-${PUBLIC_IP:-76.53.10.36}}"
ACCOUNT_ID="${CLOUDFLARE_ACCOUNT_ID:-}"
HOSTS=( "info.defi-oracle.io" "www.info.defi-oracle.io" )
if [[ -z "$ZONE_ID" ]]; then
echo "CLOUDFLARE_ZONE_ID_DEFI_ORACLE_IO is required." >&2
exit 1
fi
cf_api() {
local method="$1"
local endpoint="$2"
local data="${3:-}"
local url="https://api.cloudflare.com/client/v4/zones/${ZONE_ID}${endpoint}"
if [[ -n "${CLOUDFLARE_API_TOKEN:-}" ]]; then
if [[ -n "$data" ]]; then
curl -s -X "$method" "$url" -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" -H "Content-Type: application/json" --data "$data"
else
curl -s -X "$method" "$url" -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" -H "Content-Type: application/json"
fi
elif [[ -n "${CLOUDFLARE_EMAIL:-}" && -n "${CLOUDFLARE_API_KEY:-}" ]]; then
if [[ -n "$data" ]]; then
curl -s -X "$method" "$url" -H "X-Auth-Email: $CLOUDFLARE_EMAIL" -H "X-Auth-Key: $CLOUDFLARE_API_KEY" -H "Content-Type: application/json" --data "$data"
else
curl -s -X "$method" "$url" -H "X-Auth-Email: $CLOUDFLARE_EMAIL" -H "X-Auth-Key: $CLOUDFLARE_API_KEY" -H "Content-Type: application/json"
fi
else
echo "Cloudflare credentials are required." >&2
exit 1
fi
}
cf_global_api() {
local method="$1"
local url="$2"
if [[ -n "${CLOUDFLARE_API_TOKEN:-}" ]]; then
curl -s -X "$method" "$url" -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" -H "Content-Type: application/json"
elif [[ -n "${CLOUDFLARE_EMAIL:-}" && -n "${CLOUDFLARE_API_KEY:-}" ]]; then
curl -s -X "$method" "$url" -H "X-Auth-Email: $CLOUDFLARE_EMAIL" -H "X-Auth-Key: $CLOUDFLARE_API_KEY" -H "Content-Type: application/json"
else
echo "Cloudflare credentials are required." >&2
exit 1
fi
}
resolve_account_id() {
if [[ -n "$ACCOUNT_ID" ]]; then
printf '%s\n' "$ACCOUNT_ID"
return 0
fi
local response
response="$(cf_global_api GET "https://api.cloudflare.com/client/v4/accounts")"
ACCOUNT_ID="$(printf '%s\n' "$response" | jq -r '.result[0].id // empty')"
printf '%s\n' "$ACCOUNT_ID"
}
tunnel_exists() {
local account_id
account_id="$(resolve_account_id)"
[[ -z "$account_id" ]] && return 1
local response
response="$(cf_global_api GET "https://api.cloudflare.com/client/v4/accounts/${account_id}/cfd_tunnel/${TUNNEL_ID}")"
printf '%s\n' "$response" | jq -e '.success == true' >/dev/null 2>&1
}
delete_conflicting_records() {
local name="$1"
local existing
existing="$(cf_api GET "/dns_records?name=${name}")"
printf '%s\n' "$existing" | jq -r '.result[]? | "\(.id) \(.type)"' | while read -r record_id record_type; do
[[ -z "$record_id" ]] && continue
if [[ "$record_type" == "A" || "$record_type" == "AAAA" || "$record_type" == "CNAME" ]]; then
cf_api DELETE "/dns_records/${record_id}" >/dev/null
fi
done
}
create_cname() {
local name="$1"
local payload
payload="$(jq -n --arg name "$name" --arg content "$TUNNEL_TARGET" '{type:"CNAME",name:$name,content:$content,proxied:true,ttl:1}')"
cf_api POST "/dns_records" "$payload" | jq -e '.success == true' >/dev/null
}
create_a_record() {
local name="$1"
local payload
payload="$(jq -n --arg name "$name" --arg content "$PUBLIC_EDGE_IP" '{type:"A",name:$name,content:$content,proxied:true,ttl:1}')"
cf_api POST "/dns_records" "$payload" | jq -e '.success == true' >/dev/null
}
TARGET_MODE="$EDGE_MODE"
if [[ "$TARGET_MODE" == "auto" ]]; then
if tunnel_exists; then
TARGET_MODE="tunnel"
else
TARGET_MODE="public_ip"
fi
fi
case "$TARGET_MODE" in
tunnel)
echo "Publishing info.defi-oracle.io hostnames to tunnel ${TUNNEL_TARGET}"
;;
public_ip|npmplus)
echo "Publishing info.defi-oracle.io hostnames to public edge ${PUBLIC_EDGE_IP}"
;;
*)
echo "Unsupported INFO_DEFI_ORACLE_EDGE_MODE=${TARGET_MODE}" >&2
exit 1
;;
esac
for host in "${HOSTS[@]}"; do
delete_conflicting_records "$host"
if [[ "$TARGET_MODE" == "tunnel" ]]; then
create_cname "$host"
else
create_a_record "$host"
fi
echo " - ${host}"
done
echo "Done."
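The `auto` mode resolution above is a plain fallback chain. Sketched with a stubbed `tunnel_exists` (the real function queries the Cloudflare tunnel API):

```shell
#!/usr/bin/env bash
tunnel_exists() { return 1; }    # stub: pretend the VMID 2400 tunnel is absent

EDGE_MODE="auto"
TARGET_MODE="$EDGE_MODE"
if [ "$TARGET_MODE" = "auto" ]; then
  if tunnel_exists; then
    TARGET_MODE="tunnel"
  else
    TARGET_MODE="public_ip"
  fi
fi
echo "$TARGET_MODE"
```

Resolving the mode once up front, then branching on `TARGET_MODE` per host, avoids re-querying the tunnel API inside the loop.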

View File

@@ -143,7 +143,7 @@ if [ "$EXTERNAL_TEST" = "200" ]; then
EXTERNAL_OK=true
elif [ "$EXTERNAL_TEST" = "502" ]; then
log_warn "HTTP 502 - Cloudflare Tunnel can't reach Blockscout"
log_info "Firewall rule needed in Omada Controller"
log_info "Firewall rule needed in the active UDM / edge firewall"
EXTERNAL_OK=false
elif [ "$EXTERNAL_TEST" = "522" ]; then
log_warn "HTTP 522 - Connection timeout"
@@ -172,9 +172,8 @@ if [ "${INTERNAL_OK:-false}" = "false" ] || [ "${EXTERNAL_OK:-false}" = "false"
echo ""
if [ "${INTERNAL_OK:-false}" = "false" ]; then
echo "1. Configure Omada Firewall Rule:"
echo " - Access: https://omada.tplinkcloud.com"
echo " - Or run: bash scripts/access-omada-cloud-controller.sh"
echo "1. Configure Edge Firewall Rule (UDM / current gateway):"
echo " - Open the active firewall management UI"
echo " - Navigate: Settings → Firewall → Firewall Rules"
echo " - Create allow rule:"
echo " Source: ${NETWORK_192_168_11_0:-192.168.11.0}/24"
@@ -193,8 +192,7 @@ fi
echo "📝 Next Steps:"
echo " 1. Set password via Proxmox Web UI (if not done)"
echo " 2. Configure Omada firewall rule (if connectivity fails)"
echo " 2. Configure edge firewall rule in the active gateway UI (if connectivity fails)"
echo " 3. Verify: bash scripts/complete-blockscout-firewall-fix.sh"
echo ""
echo "════════════════════════════════════════════════════════"

View File

@@ -177,26 +177,11 @@ fi
echo ""
# Step 5: Check Omada firewall configuration (via API if possible)
echo "Step 5: Checking Omada firewall configuration..."
# Step 5: Edge firewall guidance
echo "Step 5: Edge firewall guidance..."
echo "────────────────────────────────────────────────────────"
if command -v node &> /dev/null; then
if [ -f "scripts/query-omada-firewall-blockscout-direct.js" ]; then
print_status "INFO" "Attempting to query Omada Controller API..."
if node scripts/query-omada-firewall-blockscout-direct.js 2>&1 | grep -q "Found.*firewall rules"; then
print_status "OK" "Omada Controller API accessible"
node scripts/query-omada-firewall-blockscout-direct.js 2>&1 | tail -20
else
print_status "WARN" "Omada Controller API doesn't expose firewall rules"
echo " Firewall rules must be configured via web interface"
fi
else
print_status "WARN" "Omada query script not found"
fi
else
print_status "WARN" "Node.js not available - skipping API check"
fi
print_status "INFO" "Apply any required allow rule in the active UDM / edge firewall UI."
echo " Omada-specific helper scripts are retained for historical reference only."
echo ""
@@ -216,13 +201,12 @@ if [ "${INTERNAL_CONNECTIVITY_OK:-false}" = "true" ] && [ "${EXTERNAL_CONNECTIVI
elif [ "${INTERNAL_CONNECTIVITY_OK:-false}" = "false" ]; then
print_status "FAIL" "Internal connectivity issue detected"
echo ""
echo "🔧 Action Required: Configure Omada Firewall Rules"
echo "🔧 Action Required: Configure Edge Firewall Rules"
echo ""
echo "The firewall is blocking traffic from cloudflared ($VMID_CLOUDFLARED) to Blockscout ($BLOCKSCOUT_IP:$BLOCKSCOUT_PORT)"
echo ""
echo "Steps to fix:"
echo " 1. Access Omada Cloud Controller: https://omada.tplinkcloud.com"
echo " Or run: bash scripts/access-omada-cloud-controller.sh"
echo " 1. Open the current UDM / edge firewall management UI"
echo ""
echo " 2. Navigate to: Settings → Firewall → Firewall Rules"
echo ""
@@ -257,8 +241,6 @@ fi
echo "════════════════════════════════════════════════════════"
echo "Documentation:"
echo " - docs/OMADA_CLOUD_ACCESS_SUMMARY.md"
echo " - docs/OMADA_CLOUD_CONTROLLER_FIREWALL_GUIDE.md"
echo " - scripts/access-omada-cloud-controller.sh"
echo " - Review the current gateway / firewall runbook for the active environment"
echo " - Omada-specific helper docs remain historical only"
echo "════════════════════════════════════════════════════════"


@@ -31,7 +31,6 @@ BRIDGE="vmbr0"
declare -a CONVERSIONS=(
"${PROXMOX_HOST_ML110:-192.168.11.10}:3501:${IP_DEVICE_14:-192.168.11.14}:${IP_CCIP_MONITOR:-192.168.11.28}:ccip-monitor-1:ml110"
"${PROXMOX_HOST_ML110:-192.168.11.10}:3500:${IP_SERVICE_15:-192.168.11.15}:${IP_SERVICE_29:-192.168.11.29}:oracle-publisher-1:ml110"
"${PROXMOX_HOST_R630_02:-192.168.11.12}:103:${IP_OMADA:-192.168.11.20}:${IP_SERVICE_30:-192.168.11.30}:omada:r630-02"
"${PROXMOX_HOST_R630_02:-192.168.11.12}:104:${IP_SERVICE_18:-192.168.11.18}:${IP_SERVICE_31:-192.168.11.31}:gitea:r630-02"
"${PROXMOX_HOST_R630_02:-192.168.11.12}:100:${IP_SERVICE_4:-192.168.11.4}:${IP_SERVICE_32:-192.168.11.32}:proxmox-mail-gateway:r630-02"
"${PROXMOX_HOST_R630_02:-192.168.11.12}:101:192.168.11.6:${IP_SERVICE_33:-192.168.11.33}:proxmox-datacenter-manager:r630-02"


@@ -91,8 +91,8 @@ create 2508 "besu-rpc-putu" "${RPC_PUTU_2:-${RPC_PUTU_2:-${RPC_PUTU_2:-192.168.1
create 6200 "firefly-1" "${IP_FIREFLY:-192.168.11.66}" 4 2 50 "Hyperledger Firefly Core"
create 6201 "firefly-2" "${IP_FIREFLY_2:-192.168.11.67}" 4 2 50 "Hyperledger Firefly Node - ChainID 138"
create 5200 "cacti-1" "${IP_CACTI:-192.168.11.64}" 4 2 50 "Hyperledger Cacti"
create 6000 "fabric-1" "${IP_FABRIC:-${IP_FABRIC:-192.168.11.65}}" 8 4 100 "Hyperledger Fabric"
create 6400 "indy-1" "${IP_INDY:-${IP_INDY:-192.168.11.68}}" 8 4 100 "Hyperledger Indy"
create 6000 "fabric-1" "${IP_FABRIC:-192.168.11.113}" 8 4 100 "Hyperledger Fabric"
create 6400 "indy-1" "${IP_INDY:-192.168.11.64}" 8 4 100 "Hyperledger Indy"
# Explorer
create 5000 "blockscout-1" "${IP_BLOCKSCOUT:-192.168.11.69}" 8 4 200 "Blockscout Explorer - ChainID 138"
@@ -101,4 +101,3 @@ echo ""
log "Container creation complete!"
log "Checking final status..."
ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} "pct list | grep -E '(1504|2503|2504|2505|2506|2507|2508|6200|6201|5200|6000|6400|5000)' | sort -n"


@@ -227,7 +227,7 @@ main() {
echo ""
# 11. Fabric
if create_container 6000 "fabric-1" "${IP_FABRIC:-${IP_FABRIC:-192.168.11.65}}" 8 4 100 "Hyperledger Fabric Enterprise Contracts"; then
if create_container 6000 "fabric-1" "${IP_FABRIC:-192.168.11.113}" 8 4 100 "Hyperledger Fabric Enterprise Contracts"; then
success_count=$((success_count + 1))
else
fail_count=$((fail_count + 1))
@@ -235,7 +235,7 @@ main() {
echo ""
# 12. Indy
if create_container 6400 "indy-1" "${IP_INDY:-${IP_INDY:-192.168.11.68}}" 8 4 100 "Hyperledger Indy Identity Layer"; then
if create_container 6400 "indy-1" "${IP_INDY:-192.168.11.64}" 8 4 100 "Hyperledger Indy Identity Layer"; then
success_count=$((success_count + 1))
else
fail_count=$((fail_count + 1))
@@ -280,4 +280,3 @@ main() {
}
main "$@"


@@ -29,7 +29,7 @@ log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
DOMAIN="rpc-core.d-bis.org"
NAME="rpc-core"
IP="${RPC_ALLTRA_1:-${RPC_ALLTRA_1:-192.168.11.250}}"
IP="${RPC_CORE_1:-192.168.11.211}"
# Load .env if exists (set +u so values with $ in them don't trigger unbound variable)
if [ -f "$SCRIPT_DIR/../.env" ]; then
@@ -199,7 +199,7 @@ if echo "$RESPONSE" | grep -q '"success":true'; then
echo ""
log_info "DNS record created. Wait 2-5 minutes for propagation, then run:"
log_info " pct exec 2500 -- certbot --nginx --non-interactive --agree-tos --email admin@d-bis.org -d rpc-core.d-bis.org --redirect"
log_info " pct exec 2101 -- certbot --nginx --non-interactive --agree-tos --email admin@d-bis.org -d rpc-core.d-bis.org --redirect"
else
log_error "Failed to create DNS record"
log_info "Response: $RESPONSE"


@@ -38,18 +38,18 @@ create() {
}
log "Creating remaining nodes with fixed storage/templates..."
# RPC nodes on r630-01 (use local-lvm)
create 2500 "besu-rpc-alltra-1" "${IP_SERVICE_172:-${IP_SERVICE_172:-192.168.11.172}}" 16 4 200 "Besu RPC ALLTRA 1" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2501 "besu-rpc-alltra-2" "${IP_SERVICE_173:-${IP_SERVICE_173:-192.168.11.173}}" 16 4 200 "Besu RPC ALLTRA 2" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2502 "besu-rpc-alltra-3" "${IP_SERVICE_174:-${IP_SERVICE_174:-192.168.11.174}}" 16 4 200 "Besu RPC ALLTRA 3" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2503 "besu-rpc-hybx-1" "${IP_RPC_246:-${IP_RPC_246:-${IP_RPC_246:-192.168.11.246}}}" 16 4 200 "Besu RPC HYBX 1" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2504 "besu-rpc-hybx-2" "${IP_RPC_247:-${IP_RPC_247:-${IP_RPC_247:-192.168.11.247}}}" 16 4 200 "Besu RPC HYBX 2" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2505 "besu-rpc-hybx-3" "${IP_RPC_248:-${IP_RPC_248:-${IP_RPC_248:-192.168.11.248}}}" 16 4 200 "Besu RPC HYBX 3" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2420 "besu-rpc-alltra-1" "${RPC_ALLTRA_1:-192.168.11.172}" 16 4 200 "Besu RPC ALLTRA 1" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2430 "besu-rpc-alltra-2" "${RPC_ALLTRA_2:-192.168.11.173}" 16 4 200 "Besu RPC ALLTRA 2" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2440 "besu-rpc-alltra-3" "${RPC_ALLTRA_3:-192.168.11.174}" 16 4 200 "Besu RPC ALLTRA 3" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2460 "besu-rpc-hybx-1" "${RPC_HYBX_1:-192.168.11.246}" 16 4 200 "Besu RPC HYBX 1" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2470 "besu-rpc-hybx-2" "${RPC_HYBX_2:-192.168.11.247}" 16 4 200 "Besu RPC HYBX 2" "${PROXMOX_HOST_R630_01}" "local-lvm" &
create 2480 "besu-rpc-hybx-3" "${RPC_HYBX_3:-192.168.11.248}" 16 4 200 "Besu RPC HYBX 3" "${PROXMOX_HOST_R630_01}" "local-lvm" &
wait
# Firefly nodes on r630-02 (use thin1-r630-02)
create 6202 "firefly-alltra-1" "${IP_FIREFLY_ALLTRA_1:-192.168.11.175}" 4 2 50 "Firefly ALLTRA 1" "${PROXMOX_HOST_R630_02}" "thin1-r630-02" &
create 6203 "firefly-alltra-2" "${IP_FIREFLY_ALLTRA_2:-192.168.11.176}" 4 2 50 "Firefly ALLTRA 2" "${PROXMOX_HOST_R630_02}" "thin1-r630-02" &
create 6204 "firefly-hybx-1" "${IP_SERVICE_24:-192.168.11.24}9" 4 2 50 "Firefly HYBX 1" "${PROXMOX_HOST_R630_02}" "thin1-r630-02" &
create 6205 "firefly-hybx-2" "${RPC_ALLTRA_1:-${RPC_ALLTRA_1:-192.168.11.250}}" 4 2 50 "Firefly HYBX 2" "${PROXMOX_HOST_R630_02}" "thin1-r630-02" &
create 6204 "firefly-hybx-1" "${IP_FIREFLY_HYBX_1:-192.168.11.249}" 4 2 50 "Firefly HYBX 1" "${PROXMOX_HOST_R630_02}" "thin1-r630-02" &
create 6205 "firefly-hybx-2" "${IP_FIREFLY_HYBX_2:-192.168.11.250}" 4 2 50 "Firefly HYBX 2" "${PROXMOX_HOST_R630_02}" "thin1-r630-02" &
wait
# Other services on r630-01 (use local-lvm)
create 5201 "cacti-alltra-1" "${IP_CACTI_ALLTRA:-192.168.11.177}" 4 2 50 "Cacti ALLTRA" "${PROXMOX_HOST_R630_01}" "local-lvm" &


@@ -100,12 +100,16 @@ success "Group 3 complete"
echo ""
# Group 4: New Sentry Nodes (ml110)
log "Group 4: New Sentry Nodes (4 nodes - Pending Besu Installation)"
for vmid in 1505 1506 1507 1508; do
log "Group 4: New Sentry Nodes (6 nodes - Pending Besu Installation)"
for vmid in 1505 1506 1507 1508 1509 1510; do
if [ $vmid -le 1506 ]; then
ip="${NETWORK_PREFIX:-192.168.11}.$((65 + vmid - 1505))"
else
elif [ $vmid -le 1508 ]; then
ip="${NETWORK_PREFIX:-192.168.11}.$((140 + vmid - 1507))"
elif [ $vmid -eq 1509 ]; then
ip="${IP_BESU_SENTRY_THIRDWEB_1:-192.168.11.219}"
else
ip="${IP_BESU_SENTRY_THIRDWEB_2:-192.168.11.220}"
fi
warn "Skipping VMID $vmid (Besu not installed yet)"
done
@@ -113,14 +117,14 @@ echo ""
# Group 5: New RPC Nodes (r630-01)
log "Group 5: New RPC Nodes (6 nodes - Pending Besu Installation)"
for vmid in 2500 2501 2502 2503 2504 2505; do
for vmid in 2420 2430 2440 2460 2470 2480; do
case $vmid in
2500) ip="${IP_SERVICE_172:-${IP_SERVICE_172:-192.168.11.172}}" ;;
2501) ip="${IP_SERVICE_173:-${IP_SERVICE_173:-192.168.11.173}}" ;;
2502) ip="${IP_SERVICE_174:-${IP_SERVICE_174:-192.168.11.174}}" ;;
2503) ip="${IP_RPC_246:-${IP_RPC_246:-${IP_RPC_246:-192.168.11.246}}}" ;;
2504) ip="${IP_RPC_247:-${IP_RPC_247:-${IP_RPC_247:-192.168.11.247}}}" ;;
2505) ip="${IP_RPC_248:-${IP_RPC_248:-${IP_RPC_248:-192.168.11.248}}}" ;;
2420) ip="${RPC_ALLTRA_1:-192.168.11.172}" ;;
2430) ip="${RPC_ALLTRA_2:-192.168.11.173}" ;;
2440) ip="${RPC_ALLTRA_3:-192.168.11.174}" ;;
2460) ip="${RPC_HYBX_1:-192.168.11.246}" ;;
2470) ip="${RPC_HYBX_2:-192.168.11.247}" ;;
2480) ip="${RPC_HYBX_3:-192.168.11.248}" ;;
esac
warn "Skipping VMID $vmid (Besu not installed yet)"
done
@@ -134,8 +138,7 @@ log " Non-Besu nodes: 3 nodes"
log "======================================="
echo ""
log "Next Steps:"
log " 1. Install Besu on new nodes (VMIDs: 1505-1508, 2500-2505)"
log " 1. Install Besu on new nodes (VMIDs: 1505-1510, 2420-2480)"
log " 2. Verify node lists are in /opt/besu/config/"
log " 3. Restart Besu services if needed"
log " 4. Run verification: ./scripts/verify-node-list-deployment.sh"


@@ -30,13 +30,13 @@ fi
# VMID -> Proxmox host (per BESU_VMIDS_FROM_PROXMOX / list-besu-vmids-from-proxmox.sh)
declare -A HOST_BY_VMID
# r630-01 (192.168.11.11) — 2500-2505 removed (destroyed; see ALL_VMIDS_ENDPOINTS.md)
for v in 1000 1001 1002 1500 1501 1502 2101; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"; done
for v in 1000 1001 1002 1500 1501 1502 2101 2103; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"; done
# r630-02 (192.168.11.12)
for v in 2201 2303 2401; do HOST_BY_VMID[$v]="${PROXMOX_R630_02:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"; done
# ml110 (192.168.11.10)
for v in 1003 1004 1503 1504 1505 1506 1507 1508 2102 2301 2304 2305 2306 2307 2308 2400 2402 2403; do HOST_BY_VMID[$v]="${PROXMOX_ML110:-${PROXMOX_HOST_ML110:-192.168.11.10}}"; done
for v in 2201 2303 2305 2306 2307 2308 2401; do HOST_BY_VMID[$v]="${PROXMOX_R630_02:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"; done
# r630-03 (192.168.11.13)
for v in 1003 1004 1503 1504 1505 1506 1507 1508 1509 1510 2102 2301 2304 2400 2402 2403; do HOST_BY_VMID[$v]="${PROXMOX_R630_03:-${PROXMOX_HOST_R630_03:-192.168.11.13}}"; done
BESU_VMIDS=(1000 1001 1002 1003 1004 1500 1501 1502 1503 1504 1505 1506 1507 1508 2101 2102 2201 2301 2303 2304 2305 2306 2307 2308 2400 2401 2402 2403)
BESU_VMIDS=(1000 1001 1002 1003 1004 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 2101 2102 2103 2201 2301 2303 2304 2305 2306 2307 2308 2400 2401 2402 2403)
echo "Deploying Besu node lists from config/besu-node-lists/ to all nodes"
echo " static-nodes.json -> /etc/besu/static-nodes.json"


@@ -12,17 +12,216 @@ set -euo pipefail
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
INSTALL_DIR="${1:-$REPO_ROOT/token-aggregation-build}"
BUNDLE_ROOT="$INSTALL_DIR"
SERVICE_INSTALL_DIR="$BUNDLE_ROOT/smom-dbis-138/services/token-aggregation"
SVC_DIR="$REPO_ROOT/smom-dbis-138/services/token-aggregation"
# shellcheck source=/dev/null
source "$REPO_ROOT/scripts/lib/load-project-env.sh" >/dev/null 2>&1 || true
upsert_env_var() {
local env_file="$1"
local key="$2"
local value="${3:-}"
[[ -n "$value" ]] || return 0
if grep -q "^${key}=" "$env_file"; then
sed -i "s|^${key}=.*|${key}=${value}|" "$env_file"
else
printf '%s=%s\n' "$key" "$value" >> "$env_file"
fi
}
ensure_env_key() {
local env_file="$1"
local key="$2"
grep -q "^${key}=" "$env_file" && return 0
printf '%s=\n' "$key" >> "$env_file"
}
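# The two .env helpers above can be exercised on their own; a minimal standalone
# sketch (temp file and key names are illustrative, not part of the deploy flow):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Standalone copy of the .env helpers above, run against a throwaway file.
upsert_env_var() {
  local env_file="$1" key="$2" value="${3:-}"
  [[ -n "$value" ]] || return 0
  if grep -q "^${key}=" "$env_file"; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$env_file"
  else
    printf '%s=%s\n' "$key" "$value" >> "$env_file"
  fi
}
ensure_env_key() {
  local env_file="$1" key="$2"
  grep -q "^${key}=" "$env_file" && return 0
  printf '%s=\n' "$key" >> "$env_file"
}
env_file="$(mktemp)"
printf 'FOO=1\n' > "$env_file"
upsert_env_var "$env_file" FOO 2   # rewrites FOO=1 to FOO=2
upsert_env_var "$env_file" BAR 3   # appends BAR=3
ensure_env_key "$env_file" BAZ     # appends empty BAZ=
result="$(cat "$env_file")"
rm -f "$env_file"
echo "$result"
```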
derive_cw_reserve_verifier() {
if [[ -n "${CW_RESERVE_VERIFIER_CHAIN138:-}" ]]; then
printf '%s' "$CW_RESERVE_VERIFIER_CHAIN138"
return 0
fi
local broadcast_json="$REPO_ROOT/smom-dbis-138/broadcast/DeployCWReserveVerifier.s.sol/138/run-latest.json"
[[ -f "$broadcast_json" ]] || return 0
node -e '
const fs = require("fs");
const file = process.argv[1];
try {
const data = JSON.parse(fs.readFileSync(file, "utf8"));
const tx = (data.transactions || []).find(
(entry) => entry.contractName === "CWReserveVerifier" &&
entry.transactionType === "CREATE" &&
entry.contractAddress
);
if (tx?.contractAddress) process.stdout.write(tx.contractAddress);
} catch (_) {
process.exit(0);
}
' "$broadcast_json"
}
derive_cw_asset_reserve_verifier() {
if [[ -n "${CW_ASSET_RESERVE_VERIFIER_DEPLOYED_CHAIN138:-}" ]]; then
printf '%s' "$CW_ASSET_RESERVE_VERIFIER_DEPLOYED_CHAIN138"
return 0
fi
node - <<'NODE' "$REPO_ROOT/config/smart-contracts-master.json"
const fs = require('fs');
const file = process.argv[2];
try {
const data = JSON.parse(fs.readFileSync(file, 'utf8'));
const address = data?.chains?.['138']?.contracts?.CWAssetReserveVerifier;
if (typeof address === 'string' && /^0x[a-fA-F0-9]{40}$/.test(address)) {
process.stdout.write(address);
}
} catch (_) {
process.exit(0);
}
NODE
}
derive_gru_transport_policy_amount() {
local env_key="$1"
node - <<'NODE' "$REPO_ROOT/config/gru-transport-active.json" "$env_key"
const fs = require('fs');
const file = process.argv[2];
const envKey = process.argv[3];
try {
const data = JSON.parse(fs.readFileSync(file, 'utf8'));
const familiesByKey = new Map();
for (const family of data.gasAssetFamilies || []) {
if (family && family.familyKey) familiesByKey.set(String(family.familyKey), family);
}
for (const pair of data.transportPairs || []) {
if (!pair || pair.active === false) continue;
if (pair?.maxOutstanding?.env === envKey) {
const family = familiesByKey.get(String(pair.familyKey || ''));
const cap = family?.perLaneCaps?.[String(pair.destinationChainId)];
if (typeof cap === 'string' && cap.trim()) {
process.stdout.write(cap.trim());
}
process.exit(0);
}
}
} catch (_) {
process.exit(0);
}
NODE
}
sync_gru_transport_env() {
local env_file="$1"
local chain138_l1_bridge="${CHAIN138_L1_BRIDGE:-${CW_L1_BRIDGE_CHAIN138:-}}"
local reserve_verifier=""
local asset_reserve_verifier=""
local key=""
local value=""
local derived_value=""
local refs=()
reserve_verifier="$(derive_cw_reserve_verifier || true)"
asset_reserve_verifier="$(derive_cw_asset_reserve_verifier || true)"
upsert_env_var "$env_file" "CHAIN138_L1_BRIDGE" "$chain138_l1_bridge"
upsert_env_var "$env_file" "CW_RESERVE_VERIFIER_CHAIN138" "$reserve_verifier"
upsert_env_var "$env_file" "CW_ASSET_RESERVE_VERIFIER_DEPLOYED_CHAIN138" "$asset_reserve_verifier"
refs=()
while IFS= read -r key; do
[[ -n "$key" ]] && refs+=("$key")
done < <(
node - <<'NODE' "$REPO_ROOT/config/gru-transport-active.json"
const fs = require('fs');
const file = process.argv[2];
const data = JSON.parse(fs.readFileSync(file, 'utf8'));
const refs = new Set();
function walk(value) {
if (Array.isArray(value)) {
value.forEach(walk);
return;
}
if (!value || typeof value !== 'object') return;
if (typeof value.env === 'string' && value.env.trim()) refs.add(value.env.trim());
Object.values(value).forEach(walk);
}
walk(data);
for (const key of [...refs].sort()) console.log(key);
NODE
)
for key in "${refs[@]}"; do
case "$key" in
CHAIN138_L1_BRIDGE)
value="$chain138_l1_bridge"
;;
CW_RESERVE_VERIFIER_CHAIN138)
value="${CW_RESERVE_VERIFIER_CHAIN138:-$reserve_verifier}"
;;
CW_MAX_OUTSTANDING_*)
derived_value="$(derive_gru_transport_policy_amount "$key" || true)"
value="${!key:-$derived_value}"
;;
CW_GAS_OUTSTANDING_*|CW_GAS_ESCROWED_*|CW_GAS_TREASURY_BACKED_*|CW_GAS_TREASURY_CAP_*)
value="${!key:-0}"
;;
*)
value="${!key:-}"
;;
esac
if [[ -n "$value" ]]; then
upsert_env_var "$env_file" "$key" "$value"
else
ensure_env_key "$env_file" "$key"
fi
done
}
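# The ${!key:-...} lookups in the case statement above rely on bash indirect
# expansion: the variable NAMED by $key is read, with a fallback default. A small
# sketch with hypothetical key names:

```shell
#!/usr/bin/env bash
# Indirect expansion: ${!key:-0} reads the variable whose name is stored in $key.
CW_GAS_OUTSTANDING_EXAMPLE=42   # hypothetical key, set
key=CW_GAS_OUTSTANDING_EXAMPLE
set_val="${!key:-0}"
key=CW_GAS_UNSET_EXAMPLE        # hypothetical key, unset -> falls back to 0
unset_val="${!key:-0}"
echo "$set_val $unset_val"
```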
sync_chain138_public_rpc_env() {
local env_file="$1"
local public_chain138_rpc="${TOKEN_AGG_CHAIN138_RPC_URL:-http://192.168.11.221:8545}"
# Explorer-side read services must use the public Chain 138 RPC node, not the
# operator/deploy core RPC.
upsert_env_var "$env_file" "CHAIN_138_RPC_URL" "$public_chain138_rpc"
upsert_env_var "$env_file" "RPC_URL_138" "$public_chain138_rpc"
# Explicit alias for GET /api/v1/quote PMM on-chain path (see pmm-onchain-quote.ts).
upsert_env_var "$env_file" "TOKEN_AGGREGATION_CHAIN138_RPC_URL" "$public_chain138_rpc"
# Optional operator override: set in repo .env before deploy to use core RPC for PMM eth_calls only
# while keeping publication indexing on the public node above.
if [[ -n "${TOKEN_AGGREGATION_PMM_RPC_URL:-}" ]]; then
upsert_env_var "$env_file" "TOKEN_AGGREGATION_PMM_RPC_URL" "$TOKEN_AGGREGATION_PMM_RPC_URL"
fi
if [[ -n "${TOKEN_AGGREGATION_PMM_QUERY_TRADER:-}" ]]; then
upsert_env_var "$env_file" "TOKEN_AGGREGATION_PMM_QUERY_TRADER" "$TOKEN_AGGREGATION_PMM_QUERY_TRADER"
fi
}
if [ ! -d "$SVC_DIR" ]; then
echo "Token-aggregation not found at $SVC_DIR" >&2
exit 1
fi
echo "Deploying token-aggregation to $INSTALL_DIR"
mkdir -p "$INSTALL_DIR"
cp -a "$SVC_DIR"/* "$INSTALL_DIR/"
cd "$INSTALL_DIR"
echo "Deploying token-aggregation bundle to $BUNDLE_ROOT"
mkdir -p "$BUNDLE_ROOT/config"
mkdir -p "$BUNDLE_ROOT/cross-chain-pmm-lps"
# Fresh copy: stale node_modules types (file vs dir) break cp -a on re-runs.
rm -rf "$SERVICE_INSTALL_DIR"
mkdir -p "$SERVICE_INSTALL_DIR"
cp -a "$SVC_DIR"/. "$SERVICE_INSTALL_DIR/"
cp -a "$REPO_ROOT/config"/. "$BUNDLE_ROOT/config/"
cp -a "$REPO_ROOT/cross-chain-pmm-lps/config" "$BUNDLE_ROOT/cross-chain-pmm-lps/"
cd "$SERVICE_INSTALL_DIR"
if [ ! -f .env ]; then
if [ -f .env.example ]; then
@@ -33,18 +232,25 @@ if [ ! -f .env ]; then
fi
fi
sync_gru_transport_env .env
sync_chain138_public_rpc_env .env
if command -v pnpm >/dev/null 2>&1 && [ -f "$REPO_ROOT/pnpm-lock.yaml" ]; then
(cd "$REPO_ROOT" && pnpm install --filter token-aggregation-service --no-frozen-lockfile 2>/dev/null) || true
fi
npm install --omit=dev 2>/dev/null || npm install
npm install
npm run build
npm prune --omit=dev >/dev/null 2>&1 || true
echo ""
echo "Token-aggregation built. Start with:"
echo " cd $INSTALL_DIR && node dist/index.js"
echo " cd $SERVICE_INSTALL_DIR && node dist/index.js"
echo "Or add a systemd unit. Default port in code: 3000 (match nginx TOKEN_AGG_PORT; fix-explorer-http-api-v1-proxy.sh uses 3001)."
echo ""
echo "If this is a standalone token_aggregation database (no explorer-monorepo schema), bootstrap the lightweight schema first:"
echo " cd $SERVICE_INSTALL_DIR && bash scripts/apply-lightweight-schema.sh"
echo ""
echo "Then apply nginx proxy (on same host), e.g.:"
echo " TOKEN_AGG_PORT=3001 CONFIG_FILE=/etc/nginx/sites-available/blockscout \\"
echo " bash $REPO_ROOT/scripts/fix-explorer-http-api-v1-proxy.sh"
@@ -52,4 +258,13 @@ echo " # or: explorer-monorepo/scripts/apply-nginx-token-aggregation-proxy.sh"
echo ""
echo "Verify:"
echo " pnpm run verify:token-aggregation-api"
echo "Push to explorer VM (LAN):"
echo " EXPLORER_SSH=root@192.168.11.140 bash scripts/deployment/push-token-aggregation-bundle-to-explorer.sh $BUNDLE_ROOT"
echo " SKIP_BRIDGE_ROUTES=0 bash scripts/verify/check-public-report-api.sh https://explorer.d-bis.org"
echo " ALLOW_BLOCKED=1 bash scripts/verify/check-gru-transport-preflight.sh https://explorer.d-bis.org"
echo " bash scripts/verify/check-gas-public-pool-status.sh"
echo ""
echo "Notes:"
echo " CW_ASSET_RESERVE_VERIFIER_DEPLOYED_CHAIN138 is synced as a neutral reference only."
echo " Leave CW_GAS_STRICT_ESCROW_VERIFIER_CHAIN138 and CW_GAS_HYBRID_CAP_VERIFIER_CHAIN138 unset until the live L1 bridge is explicitly wired to the generic gas verifier."
echo " GET /api/v1/quote uses on-chain PMM when RPC is set (RPC_URL_138 or TOKEN_AGGREGATION_*); optional TOKEN_AGGREGATION_PMM_RPC_URL in operator .env overrides PMM calls only."


@@ -0,0 +1,70 @@
#!/usr/bin/env bash
set -euo pipefail
# Acquire official Chain 138 USDC from the deployer's cUSDC inventory using the
# canonical DODO cUSDC/USDC mirror pool.
#
# Default amount: 250,000 USDC worth of cUSDC (250000e6).
# This unblocks upstream-native WETH/USDC seeding without disturbing the live
# router-v2 venue layer.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
RPC_URL="${CHAIN138_CUSDC_USDC_SWAP_RPC_URL:-http://192.168.11.221:8545}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
DEPLOYER="${DEPLOYER_ADDRESS:-0x4A666F96fC8764181194447A7dFdb7d471b301C8}"
POOL="${POOL_CUSDCUSDC:-0xc39B7D0F40838cbFb54649d327f49a6DAC964062}"
INTEGRATION="${DODO_PMM_INTEGRATION_ADDRESS:-${CHAIN_138_DODO_PMM_INTEGRATION:-0x86ADA6Ef91A3B450F89f2b751e93B1b7A3218895}}"
CUSDC="${COMPLIANT_USDC_ADDRESS:-0xf22258f57794CC8E06237084b353Ab30fFfa640b}"
USDC="${OFFICIAL_USDC_ADDRESS:-0x71D6687F38b93CCad569Fa6352c876eea967201b}"
AMOUNT_IN="${AMOUNT_IN:-250000000000}" # 250,000 with 6 decimals
MIN_OUT="${MIN_OUT:-249000000000}" # 249,000 with 6 decimals
GAS_PRICE_WEI="${CHAIN138_DEPLOY_GAS_PRICE_WEI:-1000}"
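# The MIN_OUT default sits 0.4% (40 bps) below AMOUNT_IN; a hypothetical helper
# (not part of this script) that reproduces the same arithmetic:

```shell
#!/usr/bin/env bash
# Derive a min-out from a raw amount and a slippage tolerance in basis points.
min_out_bps() { echo $(( $1 * (10000 - $2) / 10000 )); }
min_out_bps 250000000000 40   # 40 bps below the default AMOUNT_IN
```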
if [[ -z "${PRIVATE_KEY}" ]]; then
echo "[fail] PRIVATE_KEY is required" >&2
exit 1
fi
echo "== Acquire official USDC from cUSDC on Chain 138 =="
echo "RPC: ${RPC_URL}"
echo "Deployer: ${DEPLOYER}"
echo "Pool: ${POOL}"
echo "Integration: ${INTEGRATION}"
echo "Amount in: ${AMOUNT_IN} (cUSDC, 6 decimals)"
echo "Min out: ${MIN_OUT} (USDC, 6 decimals)"
echo
allowance="$(cast call --rpc-url "${RPC_URL}" "${CUSDC}" 'allowance(address,address)(uint256)' "${DEPLOYER}" "${INTEGRATION}")"
echo "[info] current cUSDC allowance to integration: ${allowance}"
quote="$(cast call --rpc-url "${RPC_URL}" "${POOL}" 'querySellBase(address,uint256)(uint256,uint256)' "${DEPLOYER}" "${AMOUNT_IN}")"
quoted_out="$(printf '%s\n' "${quote}" | sed -n '1p' | awk '{print $1}')"
echo "[info] quoted output via canonical mirror pool: ${quoted_out} USDC raw"
cast send "${INTEGRATION}" \
'swapExactIn(address,address,uint256,uint256)' \
"${POOL}" "${CUSDC}" "${AMOUNT_IN}" "${MIN_OUT}" \
--rpc-url "${RPC_URL}" \
--private-key "${PRIVATE_KEY}" \
--legacy \
--gas-price "${GAS_PRICE_WEI}"
echo
new_usdc="$(cast call --rpc-url "${RPC_URL}" "${USDC}" 'balanceOf(address)(uint256)' "${DEPLOYER}" | awk '{print $1}')"
echo "[ok] new official USDC balance: ${new_usdc}"
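# The raw amounts above follow the 6-decimal USDC convention; an illustrative
# conversion sketch (helper name is hypothetical):

```shell
#!/usr/bin/env bash
# Convert whole-token USDC amounts to raw 6-decimal units as used for AMOUNT_IN/MIN_OUT.
to_raw_6dp() { echo $(( $1 * 1000000 )); }
to_raw_6dp 250000   # default AMOUNT_IN
to_raw_6dp 249000   # default MIN_OUT
```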


@@ -0,0 +1,270 @@
#!/usr/bin/env bash
set -euo pipefail
# Add liquidity to an existing Mainnet public DODO cW/USD pool.
#
# Run from repo root (directory that contains scripts/):
# bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh ...
#
# Amount arguments are raw token units (6 decimals for current cW/USD suite).
#
# Examples:
# bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \
# --pair=cwusdc-usdc --base-amount=118471 --quote-amount=49527316 --dry-run
#
# bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \
# --pair=cwusdc-usdc --base-amount=118471 --quote-amount=49527316
#
# # --execute is accepted as an explicit alias for "not --dry-run" (optional).
# bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \
# --pair=cwusdc-usdc --base-amount=118471 --quote-amount=10000000 --execute
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
usage() {
cat <<'EOF'
Add liquidity to an existing Mainnet public DODO cW/USD pool.
Run from repo root (directory that contains scripts/):
bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh --pair=PAIR ...
Amounts are raw token units (6 decimals for the current cW/USD suite).
Required:
--pair=PAIR cwusdc-usdc | cwusdt-usdc | cwusdt-usdt | cwusdc-usdt | cwusdt-cwusdc
--base-amount=N base token amount (wei-like integer, 6 decimals)
--quote-amount=N quote token amount
Optional:
--dry-run plan only (no transactions)
--execute explicit live run (default is live if neither flag is set; do not combine with --dry-run)
Env:
ETHEREUM_MAINNET_RPC
PRIVATE_KEY
DODO_PMM_INTEGRATION_MAINNET
Example (plan first):
bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \
--pair=cwusdc-usdc --base-amount=118471 --quote-amount=49527316 --dry-run
EOF
}
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
PAIR=""
BASE_AMOUNT=""
QUOTE_AMOUNT=""
USE_DRY_RUN=0
EXPLICIT_EXECUTE=0
for arg in "$@"; do
case "$arg" in
--help|-h) usage; exit 0 ;;
--pair=*) PAIR="${arg#*=}" ;;
--base-amount=*) BASE_AMOUNT="${arg#*=}" ;;
--quote-amount=*) QUOTE_AMOUNT="${arg#*=}" ;;
--dry-run) USE_DRY_RUN=1 ;;
--execute) EXPLICIT_EXECUTE=1 ;;
*)
echo "[fail] unknown arg: $arg (try --help)" >&2
exit 2
;;
esac
done
if (( USE_DRY_RUN && EXPLICIT_EXECUTE )); then
echo "[fail] use either --dry-run or --execute, not both" >&2
exit 2
fi
DRY_RUN="$USE_DRY_RUN"
if [[ -z "$PAIR" || -z "$BASE_AMOUNT" || -z "$QUOTE_AMOUNT" ]]; then
echo "[fail] required args: --pair, --base-amount, --quote-amount (see --help)" >&2
exit 1
fi
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
USDC_MAINNET="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
USDT_MAINNET="0xdAC17F958D2ee523a2206206994597C13D831ec7"
if [[ -z "$RPC_URL" || -z "$PRIVATE_KEY" || -z "$INTEGRATION" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC, PRIVATE_KEY, and DODO_PMM_INTEGRATION_MAINNET are required" >&2
exit 1
fi
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
BASE_TOKEN=""
QUOTE_TOKEN=""
POOL=""
PAIR_LABEL=""
case "$PAIR" in
cwusdc-usdc)
BASE_TOKEN="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
QUOTE_TOKEN="$USDC_MAINNET"
# Default matches cross-chain-pmm-lps/config/deployment-status.json (public cWUSDC/USDC).
POOL="${POOL_CWUSDC_USDC_MAINNET:-0x69776fc607e9edA8042e320e7e43f54d06c68f0E}"
PAIR_LABEL="cWUSDC/USDC"
;;
cwusdt-usdc)
BASE_TOKEN="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
QUOTE_TOKEN="$USDC_MAINNET"
POOL="${POOL_CWUSDT_USDC_MAINNET:-0x27f3aE7EE71Be3d77bAf17d4435cF8B895DD25D2}"
PAIR_LABEL="cWUSDT/USDC"
;;
cwusdt-usdt)
BASE_TOKEN="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
QUOTE_TOKEN="$USDT_MAINNET"
POOL="${POOL_CWUSDT_USDT_MAINNET:-0x79156F6B7bf71a1B72D78189B540A89A6C13F6FC}"
PAIR_LABEL="cWUSDT/USDT"
;;
cwusdc-usdt)
BASE_TOKEN="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
QUOTE_TOKEN="$USDT_MAINNET"
POOL="${POOL_CWUSDC_USDT_MAINNET:-0xCC0fd27A40775c9AfcD2BBd3f7c902b0192c247A}"
PAIR_LABEL="cWUSDC/USDT"
;;
cwusdt-cwusdc)
BASE_TOKEN="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
QUOTE_TOKEN="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
POOL="${POOL_CWUSDT_CWUSDC_MAINNET:-0xe944b7Cb012A0820c07f54D51e92f0e1C74168DB}"
PAIR_LABEL="cWUSDT/cWUSDC"
;;
*)
echo "[fail] unsupported pair: $PAIR" >&2
exit 2
;;
esac
mapped_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$QUOTE_TOKEN" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}' || true)"
if [[ -n "$mapped_pool" && "$mapped_pool" != "0x0000000000000000000000000000000000000000" && "${mapped_pool,,}" != "${POOL,,}" ]]; then
POOL="$mapped_pool"
fi
parse_tx_hash() {
local output="$1"
local tx_hash
tx_hash="$(printf '%s\n' "$output" | grep -E '^0x[0-9a-fA-F]{64}$' | tail -n1 || true)"
if [[ -z "$tx_hash" ]]; then
tx_hash="$(printf '%s\n' "$output" | grep -E '^transactionHash[[:space:]]+0x[0-9a-fA-F]{64}$' | awk '{print $2}' | tail -n1 || true)"
fi
if [[ -z "$tx_hash" ]]; then
return 1
fi
printf '%s\n' "$tx_hash"
}
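# parse_tx_hash above accepts either a bare hash line or cast's `transactionHash`
# field line; a standalone sketch against synthetic cast-style output (the sample
# hash is fabricated for illustration):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Same extraction logic as parse_tx_hash above, exercised on synthetic output.
parse_tx_hash() {
  local output="$1" tx_hash
  tx_hash="$(printf '%s\n' "$output" | grep -E '^0x[0-9a-fA-F]{64}$' | tail -n1 || true)"
  if [[ -z "$tx_hash" ]]; then
    tx_hash="$(printf '%s\n' "$output" | grep -E '^transactionHash[[:space:]]+0x[0-9a-fA-F]{64}$' | awk '{print $2}' | tail -n1 || true)"
  fi
  [[ -n "$tx_hash" ]] || return 1
  printf '%s\n' "$tx_hash"
}
sample=$'blockNumber 123\ntransactionHash 0x1111111111111111111111111111111111111111111111111111111111111111'
parse_tx_hash "$sample"
```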
base_balance_before="$(cast call "$BASE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_balance_before="$(cast call "$QUOTE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
base_allowance_before="$(cast call "$BASE_TOKEN" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_allowance_before="$(cast call "$QUOTE_TOKEN" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
reserves_before="$(cast call "$POOL" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
base_reserve_before="$(printf '%s\n' "$reserves_before" | sed -n '1p' | awk '{print $1}')"
quote_reserve_before="$(printf '%s\n' "$reserves_before" | sed -n '2p' | awk '{print $1}')"
add_gas="$(
cast estimate "$INTEGRATION" 'addLiquidity(address,uint256,uint256)(uint256,uint256,uint256)' \
"$POOL" "$BASE_AMOUNT" "$QUOTE_AMOUNT" \
--from "$DEPLOYER" \
--rpc-url "$RPC_URL" 2>/dev/null || true
)"
if (( base_balance_before < BASE_AMOUNT )); then
echo "[fail] insufficient base balance: have=$base_balance_before need=$BASE_AMOUNT" >&2
exit 1
fi
if (( quote_balance_before < QUOTE_AMOUNT )); then
echo "[fail] insufficient quote balance: have=$quote_balance_before need=$QUOTE_AMOUNT" >&2
exit 1
fi
if (( DRY_RUN == 1 )); then
echo "pair=$PAIR_LABEL"
echo "pool=$POOL"
echo "baseToken=$BASE_TOKEN"
echo "quoteToken=$QUOTE_TOKEN"
echo "baseAmount=$BASE_AMOUNT"
echo "quoteAmount=$QUOTE_AMOUNT"
echo "addGas=${add_gas:-unknown_pre_approval}"
echo "baseReserveBefore=$base_reserve_before"
echo "quoteReserveBefore=$quote_reserve_before"
echo "baseReserveAfter=$(( base_reserve_before + BASE_AMOUNT ))"
echo "quoteReserveAfter=$(( quote_reserve_before + QUOTE_AMOUNT ))"
echo "baseBalanceBefore=$base_balance_before"
echo "quoteBalanceBefore=$quote_balance_before"
echo "baseAllowanceBefore=$base_allowance_before"
echo "quoteAllowanceBefore=$quote_allowance_before"
exit 0
fi
approve_base_tx=""
approve_quote_tx=""
if (( base_allowance_before < BASE_AMOUNT )); then
approve_base_output="$(
cast send "$BASE_TOKEN" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$BASE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
approve_base_tx="$(parse_tx_hash "$approve_base_output")"
fi
if (( quote_allowance_before < QUOTE_AMOUNT )); then
approve_quote_output="$(
cast send "$QUOTE_TOKEN" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
approve_quote_tx="$(parse_tx_hash "$approve_quote_output")"
fi
add_output="$(
cast send "$INTEGRATION" \
'addLiquidity(address,uint256,uint256)(uint256,uint256,uint256)' \
"$POOL" "$BASE_AMOUNT" "$QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
add_liquidity_tx="$(parse_tx_hash "$add_output")"
reserves_after="$(cast call "$POOL" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
base_reserve_after="$(printf '%s\n' "$reserves_after" | sed -n '1p' | awk '{print $1}')"
quote_reserve_after="$(printf '%s\n' "$reserves_after" | sed -n '2p' | awk '{print $1}')"
base_balance_after="$(cast call "$BASE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_balance_after="$(cast call "$QUOTE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
echo "pair=$PAIR_LABEL"
echo "pool=$POOL"
echo "baseToken=$BASE_TOKEN"
echo "quoteToken=$QUOTE_TOKEN"
echo "baseAmount=$BASE_AMOUNT"
echo "quoteAmount=$QUOTE_AMOUNT"
echo "approveBaseTx=${approve_base_tx:-none}"
echo "approveQuoteTx=${approve_quote_tx:-none}"
echo "addLiquidityTx=$add_liquidity_tx"
echo "baseReserveBefore=$base_reserve_before"
echo "quoteReserveBefore=$quote_reserve_before"
echo "baseReserveAfter=$base_reserve_after"
echo "quoteReserveAfter=$quote_reserve_after"
echo "baseBalanceBefore=$base_balance_before"
echo "baseBalanceAfter=$base_balance_after"
echo "quoteBalanceBefore=$quote_balance_before"
echo "quoteBalanceAfter=$quote_balance_after"

View File

@@ -0,0 +1,128 @@
#!/usr/bin/env bash
set -euo pipefail
# Add liquidity to **both** Mainnet volatile pools (cWUSDT/TRUU then cWUSDC/TRUU) in one run,
# sharing one TRUU balance. Optionally keep a fraction of each pair's ratio-matched maximum
# on the deployer for spot/arbitrage inventory (same idea as --reserve-bps on the single-pair top-up).
#
# Usage:
# bash scripts/deployment/add-mainnet-truu-pmm-fund-both-pools.sh
# bash scripts/deployment/add-mainnet-truu-pmm-fund-both-pools.sh --reserve-bps=2000 --dry-run
#
# Requires: load-env (ETHEREUM_MAINNET_RPC, PRIVATE_KEY, DODO_PMM_INTEGRATION_MAINNET, token addresses).
# See: docs/03-deployment/MAINNET_PMM_TRUU_CWUSD_PEG_AND_BOT_RUNBOOK.md
# load-env.sh overwrites SCRIPT_DIR; keep this repo's deployment dir for sibling scripts.
DEPLOYMENT_SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${DEPLOYMENT_SCRIPT_DIR}/../.." && pwd)"
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1
RESERVE_BPS=0
DRY_RUN=0
B_REF=1500000
Q_REF=372348929900893
INITIAL_I="${PMM_TRUU_INITIAL_I:-2482326199339289065}"
for arg in "$@"; do
case "$arg" in
--reserve-bps=*) RESERVE_BPS="${arg#*=}" ;;
--dry-run) DRY_RUN=1 ;;
*)
echo "[fail] unknown arg: $arg" >&2
exit 2
;;
esac
done
if [[ "$RESERVE_BPS" -lt 0 || "$RESERVE_BPS" -ge 10000 ]]; then
echo "[fail] --reserve-bps must be 0..9999" >&2
exit 2
fi
command -v cast >/dev/null 2>&1 || {
echo "[fail] cast required" >&2
exit 1
}
command -v python3 >/dev/null 2>&1 || {
echo "[fail] python3 required" >&2
exit 1
}
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
if [[ -z "$RPC_URL" || -z "$PRIVATE_KEY" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC and PRIVATE_KEY required" >&2
exit 1
fi
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
CWUSDT="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
CWUSDC="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
TRUU="${TRUU_MAINNET:-0xDAe0faFD65385E7775Cf75b1398735155EF6aCD2}"
scale_pair() {
local cb="$1"
local ct="$2"
python3 - <<PY
b_ref = $B_REF
q_ref = $Q_REF
reserve_bps = int("$RESERVE_BPS")
cb = int("$cb")
ct = int("$ct")
if cb <= 0 or ct <= 0:
print(0, 0)
raise SystemExit(0)
base_max_from_truu = ct * b_ref // q_ref
base = min(cb, base_max_from_truu)
quote = base * q_ref // b_ref
if reserve_bps > 0:
m = 10000 - reserve_bps
base = base * m // 10000
quote = quote * m // 10000
print(base, quote)
PY
}
run_one() {
local pair="$1"
local b="$2"
local q="$3"
if [[ "$b" -le 0 || "$q" -le 0 ]]; then
echo "[info] skip $pair (base=$b quote=$q)"
return 0
fi
echo "[info] $pair base=$b quote=$q"
bash "${DEPLOYMENT_SCRIPT_DIR}/deploy-mainnet-pmm-cw-truu-pool.sh" \
--pair="$pair" \
--initial-price="$INITIAL_I" \
--base-amount="$b" \
--quote-amount="$q" \
$([[ "$DRY_RUN" -eq 1 ]] && echo --dry-run)
}
CB="$(cast call "$CWUSDT" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
CC="$(cast call "$CWUSDC" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
CT="$(cast call "$TRUU" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
echo "[info] deployer=$DEPLOYER reserve_bps=$RESERVE_BPS dry_run=$DRY_RUN"
echo "[info] balances cWUSDT=$CB cWUSDC=$CC TRUU=$CT"
read -r B1 Q1 <<<"$(scale_pair "$CB" "$CT")"
CT2="$(python3 -c "print(int('$CT') - int('$Q1'))")"
read -r B2 Q2 <<<"$(scale_pair "$CC" "$CT2")"
if [[ "$RESERVE_BPS" -gt 0 ]]; then
echo "[info] planned adds after reserve: cwusdt-truu $B1 $Q1 | cwusdc-truu $B2 $Q2"
fi
run_one cwusdt-truu "$B1" "$Q1"
# Re-read cWUSDC and TRUU balances if not dry-run (the first leg spent TRUU on-chain)
if [[ "$DRY_RUN" -eq 0 ]]; then
CC="$(cast call "$CWUSDC" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
CT="$(cast call "$TRUU" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
read -r B2 Q2 <<<"$(scale_pair "$CC" "$CT")"
echo "[info] second leg after tx1: cWUSDC=$CC TRUU=$CT -> base=$B2 quote=$Q2"
fi
run_one cwusdc-truu "$B2" "$Q2"
echo "[ok] done."
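The sizing logic in `scale_pair` reduces to two integer floor divisions: base is capped by the TRUU balance at the reference ratio, and quote is back-computed from base. The standalone sketch below replays that math with the script's own reference constants; the wallet balances are hypothetical.

```shell
#!/usr/bin/env bash
# Sketch of scale_pair's core math (no reserve scaling). Balances are made up.
set -euo pipefail
B_REF=1500000            # reference cW raw (6 decimals)
Q_REF=372348929900893    # reference TRUU raw
scale_pair() {
  local cb="$1" ct="$2"
  python3 - "$cb" "$ct" "$B_REF" "$Q_REF" <<'PY'
import sys
cb, ct, b_ref, q_ref = map(int, sys.argv[1:5])
base = min(cb, ct * b_ref // q_ref)  # TRUU-limited max base
quote = base * q_ref // b_ref        # matching quote at the reference ratio
print(base, quote)
PY
}
read -r base quote <<<"$(scale_pair 3000000 "$Q_REF")"
echo "base=$base quote=$quote"
```

With a TRUU balance of exactly one reference step, only 1,500,000 raw cW fits even though the hypothetical wallet holds twice that; the excess cW stays on the deployer.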

View File

@@ -0,0 +1,142 @@
#!/usr/bin/env bash
set -euo pipefail
# Add liquidity to an existing Mainnet cWUSDT/TRUU or cWUSDC/TRUU DODO PMM pool using
# **all wallet balance that fits** both legs at the USD reference ratio from the first
# cWUSDT/TRUU seed (B_ref=1_500_000 raw cW, Q_ref=372_348_929_900_893 raw TRUU).
#
# Requires: same env as deploy-mainnet-pmm-cw-truu-pool.sh (ETHEREUM_MAINNET_RPC,
# PRIVATE_KEY, DODO_PMM_INTEGRATION_MAINNET). Pool must already exist.
#
# Usage:
# bash scripts/deployment/add-mainnet-truu-pmm-topup.sh --pair=cwusdc-truu
# bash scripts/deployment/add-mainnet-truu-pmm-topup.sh --pair=cwusdt-truu --dry-run
# bash scripts/deployment/add-mainnet-truu-pmm-topup.sh --pair=cwusdt-truu --reserve-bps=2000
# (keep 20% of the ratio-matched max on-wallet for trading; deploy 80%)
#
# Optional: PMM_TRUU_INITIAL_I (default matches live pools: 2482326199339289065).
# If TRUU or cW balance is zero, exits 0 after printing a short message (nothing to do).
#
# See: docs/03-deployment/MAINNET_PMM_TRUU_CWUSD_PEG_AND_BOT_RUNBOOK.md
# load-env.sh overwrites SCRIPT_DIR; keep proxmox/scripts/deployment for sibling deploy script.
DEPLOYMENT_SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${DEPLOYMENT_SCRIPT_DIR}/../.." && pwd)"
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1
PAIR=""
DRY_RUN=0
RESERVE_BPS=0
B_REF=1500000
Q_REF=372348929900893
INITIAL_I="${PMM_TRUU_INITIAL_I:-2482326199339289065}"
for arg in "$@"; do
case "$arg" in
--pair=*) PAIR="${arg#*=}" ;;
--dry-run) DRY_RUN=1 ;;
--reserve-bps=*) RESERVE_BPS="${arg#*=}" ;;
*)
echo "[fail] unknown arg: $arg" >&2
exit 2
;;
esac
done
if [[ "$RESERVE_BPS" -lt 0 || "$RESERVE_BPS" -ge 10000 ]]; then
echo "[fail] --reserve-bps must be 0..9999" >&2
exit 2
fi
if [[ -z "$PAIR" ]]; then
echo "[fail] required: --pair=cwusdt-truu or --pair=cwusdc-truu" >&2
exit 1
fi
command -v cast >/dev/null 2>&1 || {
echo "[fail] cast required" >&2
exit 1
}
command -v python3 >/dev/null 2>&1 || {
echo "[fail] python3 required" >&2
exit 1
}
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
if [[ -z "$RPC_URL" || -z "$PRIVATE_KEY" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC and PRIVATE_KEY required" >&2
exit 1
fi
TRUU_MAINNET="${TRUU_MAINNET:-0xDAe0faFD65385E7775Cf75b1398735155EF6aCD2}"
case "$PAIR" in
cwusdt-truu)
BASE_TOKEN="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
;;
cwusdc-truu)
BASE_TOKEN="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
;;
*)
echo "[fail] unsupported pair: $PAIR" >&2
exit 2
;;
esac
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
BASE_BAL="$(cast call "$BASE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
TRUU_BAL="$(cast call "$TRUU_MAINNET" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
read -r BASE_AMT QUOTE_AMT <<EOF
$(python3 - <<PY
b_ref = $B_REF
q_ref = $Q_REF
cb = int("$BASE_BAL")
ct = int("$TRUU_BAL")
if cb <= 0 or ct <= 0:
print(0, 0)
raise SystemExit(0)
# Max base limited by TRUU: quote = base * q_ref / b_ref <= ct
base_max_from_truu = ct * b_ref // q_ref
base = min(cb, base_max_from_truu)
quote = base * q_ref // b_ref
print(base, quote)
PY
)
EOF
if [[ -z "${BASE_AMT:-}" || -z "${QUOTE_AMT:-}" ]]; then
echo "[fail] could not compute amounts" >&2
exit 1
fi
if [[ "$BASE_AMT" -eq 0 || "$QUOTE_AMT" -eq 0 ]]; then
if [[ "$BASE_BAL" -eq 0 || "$TRUU_BAL" -eq 0 ]]; then
echo "[info] nothing to add: cW balance=$BASE_BAL TRUU balance=$TRUU_BAL (need both > 0)"
else
echo "[info] nothing to add: TRUU balance=$TRUU_BAL is below one reference-ratio step (need ≥$(( Q_REF / B_REF )) raw TRUU per 1 raw cW, 6-dec tokens; B_ref/Q_ref=$B_REF/$Q_REF). Fund more TRUU or pass explicit amounts to deploy-mainnet-pmm-cw-truu-pool.sh."
fi
exit 0
fi
if [[ "$RESERVE_BPS" -gt 0 ]]; then
DEPLOY_MULT=$((10000 - RESERVE_BPS))
BASE_AMT=$((BASE_AMT * DEPLOY_MULT / 10000))
QUOTE_AMT=$((QUOTE_AMT * DEPLOY_MULT / 10000))
echo "[info] applied --reserve-bps=$RESERVE_BPS (deploy $DEPLOY_MULT/10000 of ratio-matched max)"
fi
if [[ "$BASE_AMT" -eq 0 || "$QUOTE_AMT" -eq 0 ]]; then
echo "[info] nothing to add after reserve scaling (try smaller --reserve-bps)" >&2
exit 0
fi
echo "[info] pair=$PAIR deployer=$DEPLOYER base_raw=$BASE_AMT quote_raw=$QUOTE_AMT (ratio B_ref/Q_ref=$B_REF/$Q_REF)"
exec bash "${DEPLOYMENT_SCRIPT_DIR}/deploy-mainnet-pmm-cw-truu-pool.sh" \
--pair="$PAIR" \
--initial-price="$INITIAL_I" \
--base-amount="$BASE_AMT" \
--quote-amount="$QUOTE_AMT" \
$([[ "$DRY_RUN" -eq 1 ]] && echo --dry-run)
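`--reserve-bps` scales both legs down by the same basis-point factor after the ratio match. The isolated sketch below uses bash arithmetic, which is safe here only because these raw amounts stay below 2^63; the script itself goes through python3 for the ratio math for exactly that reason.

```shell
#!/usr/bin/env bash
# Sketch: keep reserve_bps/10000 on-wallet, deploy the rest (integer floor).
set -euo pipefail
reserve_scale() {
  local amt="$1" bps="$2"
  echo $(( amt * (10000 - bps) / 10000 ))
}
base=$(reserve_scale 1500000 2000)          # deploy 80% of the cW leg
quote=$(reserve_scale 372348929900893 2000) # deploy 80% of the TRUU leg
echo "base=$base quote=$quote"
```

Each leg is floored independently, so the deployed pair can drift from the reference ratio by at most one raw unit.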

View File

@@ -0,0 +1,170 @@
#!/usr/bin/env bash
set -euo pipefail
# Apply one on-chain liquidity tranche toward Mainnet cWUSDC/USDC 1:1 vault peg
# using the deployer wallet (PRIVATE_KEY). Wraps add-mainnet-public-dodo-cw-liquidity.sh.
#
# Requires: ETHEREUM_MAINNET_RPC, PRIVATE_KEY, DODO_PMM_INTEGRATION_MAINNET
# Amounts are chosen like plan-mainnet-cwusdc-usdc-rebalance-liquidity.sh:
# - prefer largest asymmetric peg tranche when base_reserve > quote_reserve
# - else matched 1:1 deepen (min wallet cWUSDC, wallet USDC)
#
# Usage (from repo root):
# source scripts/lib/load-project-env.sh
# bash scripts/deployment/apply-mainnet-cwusdc-usdc-peg-tranche-from-wallet.sh --dry-run
# bash scripts/deployment/apply-mainnet-cwusdc-usdc-peg-tranche-from-wallet.sh --apply
#
# Exit:
# 0 — dry-run or live add submitted
# 3 — wallet cannot fund any tranche (print funding hint)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
# shellcheck disable=SC1091
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
# shellcheck disable=SC1091
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || true
MODE="dry-run"
for a in "$@"; do
case "$a" in
--dry-run) MODE=dry-run ;;
--apply) MODE=apply ;;
--help|-h)
sed -n '1,25p' "$0"
exit 0
;;
*)
echo "[fail] unknown arg: $a (use --dry-run or --apply)" >&2
exit 2
;;
esac
done
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
require_cmd python3
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
CWUSDC="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
USDC="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
POOL="${POOL_CWUSDC_USDC_MAINNET:-0x69776fc607e9edA8042e320e7e43f54d06c68f0E}"
if [[ -z "$RPC_URL" || -z "$PRIVATE_KEY" || -z "$INTEGRATION" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC, PRIVATE_KEY, and DODO_PMM_INTEGRATION_MAINNET are required" >&2
exit 1
fi
if [[ -n "$INTEGRATION" ]]; then
mapped_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$CWUSDC" "$USDC" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}' || true)"
if [[ -n "$mapped_pool" && "$mapped_pool" != "0x0000000000000000000000000000000000000000" && "${mapped_pool,,}" != "${POOL,,}" ]]; then
POOL="$mapped_pool"
fi
fi
dep="$(cast wallet address --private-key "$PRIVATE_KEY")"
out="$(cast call "$POOL" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
base_r="$(printf '%s\n' "$out" | sed -n '1p' | awk '{print $1}')"
quote_r="$(printf '%s\n' "$out" | sed -n '2p' | awk '{print $1}')"
wb="$(cast call "$CWUSDC" 'balanceOf(address)(uint256)' "$dep" --rpc-url "$RPC_URL" | awk '{print $1}')"
wq="$(cast call "$USDC" 'balanceOf(address)(uint256)' "$dep" --rpc-url "$RPC_URL" | awk '{print $1}')"
if (( base_r > quote_r )); then
G=$(( base_r - quote_r ))
else
G=$(( quote_r - base_r ))
fi
read -r opt_b opt_q <<<"$(
python3 - "$wb" "$wq" "$base_r" "$quote_r" <<'PY'
import sys
wb, wq, br, qr = map(int, sys.argv[1:5])
if br > qr:
g = br - qr
if g <= 0:
print("0 0")
else:
cap = min(wb, wq - g) if wq > g else 0
if cap < 1:
print("0 0")
else:
print(cap, cap + g)
else:
g = qr - br
if g <= 0:
print("0 0")
else:
cap = min(wq, wb - g) if wb > g else 0
if cap < 1:
print("0 0")
else:
print(cap + g, cap)
PY
)"
BASE_AMOUNT=""
QUOTE_AMOUNT=""
if (( opt_b >= 1 && opt_q >= 1 )); then
BASE_AMOUNT="$opt_b"
QUOTE_AMOUNT="$opt_q"
else
matched="$(python3 - "$wb" "$wq" <<'PY'
import sys
print(min(int(sys.argv[1]), int(sys.argv[2])))
PY
)"
if (( matched >= 1 )); then
BASE_AMOUNT="$matched"
QUOTE_AMOUNT="$matched"
fi
fi
to_human() {
python3 - "$1" <<'PY'
import sys
print(f"{int(sys.argv[1]) / 1_000_000:.6f}")
PY
}
echo "=== apply-mainnet-cwusdc-usdc-peg-tranche-from-wallet ($MODE) ==="
echo "deployer=$dep"
echo "pool=$POOL"
echo "reserve_base=$(to_human "$base_r") cWUSDC reserve_quote=$(to_human "$quote_r") USDC"
echo "wallet_cWUSDC=$(to_human "$wb") wallet_USDC=$(to_human "$wq")"
if [[ -z "$BASE_AMOUNT" || -z "$QUOTE_AMOUNT" ]]; then
echo
echo "[stop] No affordable tranche: the asymmetric peg-close needs wallet USDC_raw > gap when the pool is base-heavy, and the matched 1:1 fallback needs both wallet legs > 0."
echo " gap_raw=$G human=$(to_human "$G") (absolute reserve imbalance between the cWUSDC and USDC sides)"
echo " Send USDC (and keep cWUSDC) to: $dep"
echo " Then: bash scripts/deployment/apply-mainnet-cwusdc-usdc-peg-tranche-from-wallet.sh --dry-run"
echo " Flash path (no wallet USDC): see scripts/deployment/plan-mainnet-cwusdc-flash-quote-push-rebalance.sh"
exit 3
fi
echo "chosen_base_amount_raw=$BASE_AMOUNT chosen_quote_amount_raw=$QUOTE_AMOUNT"
echo "human_base=$(to_human "$BASE_AMOUNT") human_quote=$(to_human "$QUOTE_AMOUNT")"
ADD_ARGS=(
"${PROJECT_ROOT}/scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh"
"--pair=cwusdc-usdc"
"--base-amount=$BASE_AMOUNT"
"--quote-amount=$QUOTE_AMOUNT"
)
if [[ "$MODE" == "dry-run" ]]; then
ADD_ARGS+=("--dry-run")
else
ADD_ARGS+=("--execute")
fi
exec bash "${ADD_ARGS[@]}"
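The tranche chooser boils down to: with gap `g` between reserves, add `(cap, cap + g)` with the extra `g` on the deficient side, where `cap = min(wallet legs after covering g)`, so both reserves land equal. A toy replay with hypothetical raw reserves and balances (this sketch also folds the matched 1:1 fallback into the `br >= qr` branch, a small simplification of the script's two-step logic):

```shell
#!/usr/bin/env bash
# Sketch of the peg-tranche math: after the add, both reserves are equal.
# All figures are hypothetical 6-decimal raw amounts.
set -euo pipefail
tranche() {
  python3 - "$@" <<'PY'
import sys
wb, wq, br, qr = map(int, sys.argv[1:5])  # wallet base/quote, reserve base/quote
if br >= qr:            # base-heavy: the quote add must exceed the base add by g
    g = br - qr
    cap = min(wb, wq - g) if wq > g else 0
    b, q = (cap, cap + g) if cap >= 1 else (0, 0)
else:                   # quote-heavy: mirror image
    g = qr - br
    cap = min(wq, wb - g) if wb > g else 0
    b, q = (cap + g, cap) if cap >= 1 else (0, 0)
print(b, q)
PY
}
read -r b q <<<"$(tranche 5000000 8000000 10000000 7000000)"
echo "add base=$b quote=$q (reserves after: $((10000000 + b)) / $((7000000 + q)))"
```

Here the pool is 3,000,000 raw base-heavy; the wallet's USDC covers the gap plus a 5,000,000 matched cap, so the add is (5,000,000, 8,000,000) and both reserves end at 15,000,000.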

View File

@@ -95,9 +95,10 @@ log "upserted IT_READ_API_URL and IT_READ_API_KEY in repo .env (gitignored)"
log "rsync minimal repo tree → root@${PROXMOX_HOST}:${REMOTE_ROOT}"
ssh "${SSH_OPTS[@]}" "root@${PROXMOX_HOST}" \
"mkdir -p '${REMOTE_ROOT}/config' '${REMOTE_ROOT}/scripts/it-ops' '${REMOTE_ROOT}/services/sankofa-it-read-api' '${REMOTE_ROOT}/docs/04-configuration' '${REMOTE_ROOT}/reports/status'"
"mkdir -p '${REMOTE_ROOT}/config' '${REMOTE_ROOT}/config/it-operations' '${REMOTE_ROOT}/scripts/it-ops' '${REMOTE_ROOT}/services/sankofa-it-read-api' '${REMOTE_ROOT}/docs/04-configuration' '${REMOTE_ROOT}/reports/status'"
cd "$PROJECT_ROOT"
rsync -az --delete -e "$RSYNC_SSH" ./config/ip-addresses.conf "root@${PROXMOX_HOST}:${REMOTE_ROOT}/config/"
rsync -az --delete -e "$RSYNC_SSH" ./config/it-operations/ "root@${PROXMOX_HOST}:${REMOTE_ROOT}/config/it-operations/"
rsync -az --delete -e "$RSYNC_SSH" ./scripts/it-ops/ "root@${PROXMOX_HOST}:${REMOTE_ROOT}/scripts/it-ops/"
rsync -az --delete -e "$RSYNC_SSH" ./services/sankofa-it-read-api/server.py "root@${PROXMOX_HOST}:${REMOTE_ROOT}/services/sankofa-it-read-api/"
rsync -az --delete -e "$RSYNC_SSH" ./docs/04-configuration/ALL_VMIDS_ENDPOINTS.md "root@${PROXMOX_HOST}:${REMOTE_ROOT}/docs/04-configuration/"

View File

@@ -0,0 +1,262 @@
#!/usr/bin/env bash
# Cancel a queued nonce range for a Chain 138 EOA by sending 0-value self-transfers
# at each nonce with a bumped gas price. Default mode is dry-run.
#
# Example:
# PRIVATE_KEY=0xabc... bash scripts/deployment/cancel-chain138-eoa-nonce-range.sh \
# --rpc http://192.168.11.217:8545 \
# --from-nonce 1
#
# PRIVATE_KEY=0xabc... bash scripts/deployment/cancel-chain138-eoa-nonce-range.sh \
# --rpc http://192.168.11.217:8545 \
# --from-nonce 1 \
# --to-nonce 8 \
# --apply
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
fi
RPC_URL="${RPC_URL_138:-http://192.168.11.211:8545}"
PRIVATE_KEY_INPUT="${PRIVATE_KEY:-${DEPLOYER_PRIVATE_KEY:-}}"
FROM_NONCE=""
TO_NONCE=""
APPLY=0
GAS_LIMIT=21000
GAS_BUMP_MULTIPLIER=2
MIN_GAS_WEI=2000
usage() {
cat <<'EOF'
Usage: cancel-chain138-eoa-nonce-range.sh [options]
Options:
--private-key <hex> Private key for the EOA owner.
--rpc <url> Chain 138 RPC URL. Default: RPC_URL_138 or http://192.168.11.211:8545
--from-nonce <n> First nonce to cancel. Required.
--to-nonce <n> Last nonce to cancel. Optional; defaults to highest pending owner nonce seen on the RPC.
--apply Actually send cancel/self-transfer txs.
--min-gas-wei <n> Floor gas price in wei. Default: 2000
--gas-multiplier <n> Multiply current eth_gasPrice by this factor. Default: 2
--help Show this help.
Behavior:
1. Derives the owner address from the provided private key.
2. Reads latest nonce and owner pending txs from txpool_besuPendingTransactions.
3. Plans a cancel range from --from-nonce to --to-nonce.
4. In --apply mode, sends 0-value self-transfers at each nonce in that range.
Notes:
- Dry-run is the default and is recommended first.
- Use this when the owner wants to invalidate old deployment attempts instead of completing them.
- Requires: cast, curl, jq
EOF
}
while [[ $# -gt 0 ]]; do
case "$1" in
--private-key)
PRIVATE_KEY_INPUT="${2:-}"
shift 2
;;
--rpc)
RPC_URL="${2:-}"
shift 2
;;
--from-nonce)
FROM_NONCE="${2:-}"
shift 2
;;
--to-nonce)
TO_NONCE="${2:-}"
shift 2
;;
--apply)
APPLY=1
shift
;;
--min-gas-wei)
MIN_GAS_WEI="${2:-}"
shift 2
;;
--gas-multiplier)
GAS_BUMP_MULTIPLIER="${2:-}"
shift 2
;;
--help|-h)
usage
exit 0
;;
*)
echo "Unknown argument: $1" >&2
usage
exit 1
;;
esac
done
for cmd in cast curl jq; do
command -v "$cmd" >/dev/null 2>&1 || {
echo "Missing required command: $cmd" >&2
exit 1
}
done
if [[ -z "$PRIVATE_KEY_INPUT" ]]; then
echo "Missing private key. Pass --private-key or set PRIVATE_KEY." >&2
exit 1
fi
if [[ -z "$FROM_NONCE" ]]; then
echo "Missing required --from-nonce." >&2
exit 1
fi
case "$FROM_NONCE" in
''|*[!0-9]*)
echo "--from-nonce must be a non-negative integer." >&2
exit 1
;;
esac
if [[ -n "$TO_NONCE" ]]; then
case "$TO_NONCE" in
''|*[!0-9]*)
echo "--to-nonce must be a non-negative integer." >&2
exit 1
;;
esac
fi
OWNER_ADDRESS="$(cast wallet address --private-key "$PRIVATE_KEY_INPUT" 2>/dev/null || true)"
if [[ -z "$OWNER_ADDRESS" ]]; then
echo "Could not derive owner address from private key." >&2
exit 1
fi
OWNER_ADDRESS_LC="$(printf '%s' "$OWNER_ADDRESS" | tr '[:upper:]' '[:lower:]')"
rpc_call() {
local method="$1"
local params_json="$2"
curl -fsS "$RPC_URL" \
-H 'content-type: application/json' \
--data "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"${method}\",\"params\":${params_json}}"
}
hex_to_dec() {
cast --to-dec "$1"
}
latest_hex="$(rpc_call eth_getTransactionCount "[\"${OWNER_ADDRESS}\",\"latest\"]" | jq -r '.result // "0x0"')"
pending_hex="$(rpc_call eth_getTransactionCount "[\"${OWNER_ADDRESS}\",\"pending\"]" | jq -r '.result // "0x0"')"
gas_hex="$(rpc_call eth_gasPrice "[]" | jq -r '.result // "0x0"')"
latest_nonce="$(hex_to_dec "$latest_hex")"
pending_nonce="$(hex_to_dec "$pending_hex")"
network_gas_wei="$(hex_to_dec "$gas_hex")"
pending_json="$(rpc_call txpool_besuPendingTransactions "[]")"
owner_pending_json="$(
printf '%s' "$pending_json" | jq --arg owner "$OWNER_ADDRESS_LC" '
[
.result[]?
| select(((.from // .sender // "") | ascii_downcase) == $owner)
| {
hash,
nonce_hex: .nonce,
to,
gas,
gasPrice,
maxFeePerGas,
maxPriorityFeePerGas,
type
}
]
'
)"
owner_pending_count="$(printf '%s' "$owner_pending_json" | jq 'length')"
highest_pending_nonce="$latest_nonce"
if [[ "$owner_pending_count" -gt 0 ]]; then
highest_pending_nonce="$(
printf '%s' "$owner_pending_json" \
| jq -r '.[].nonce_hex' \
| while read -r nonce_hex; do cast --to-dec "$nonce_hex"; done \
| sort -n \
| tail -n1
)"
fi
if [[ -z "$TO_NONCE" ]]; then
TO_NONCE="$highest_pending_nonce"
fi
if (( TO_NONCE < FROM_NONCE )); then
echo "--to-nonce must be greater than or equal to --from-nonce." >&2
exit 1
fi
effective_gas_wei="$(( network_gas_wei * GAS_BUMP_MULTIPLIER ))"
if (( effective_gas_wei < MIN_GAS_WEI )); then
effective_gas_wei="$MIN_GAS_WEI"
fi
echo "=== Chain 138 EOA nonce-range cancel ==="
echo "Owner address: $OWNER_ADDRESS"
echo "RPC URL: $RPC_URL"
echo "Latest nonce: $latest_nonce"
echo "Pending nonce view: $pending_nonce"
echo "Pending tx count: $owner_pending_count"
echo "Highest pending nonce: ${highest_pending_nonce}"
echo "Cancel from nonce: $FROM_NONCE"
echo "Cancel to nonce: $TO_NONCE"
echo "Network gas price: $network_gas_wei wei"
echo "Planned gas price: $effective_gas_wei wei"
echo ""
if [[ "$owner_pending_count" -gt 0 ]]; then
echo "--- Pending txs for owner in txpool_besuPendingTransactions ---"
printf '%s\n' "$owner_pending_json" | jq '.'
echo ""
else
echo "No owner txs found in txpool_besuPendingTransactions."
echo ""
fi
echo "--- Planned cancel range ---"
for ((nonce = FROM_NONCE; nonce <= TO_NONCE; nonce++)); do
echo "nonce $nonce -> self-transfer cancel"
done
echo ""
if [[ "$APPLY" -ne 1 ]]; then
echo "Dry-run only. Re-run with --apply to submit cancels for the full nonce range above."
exit 0
fi
echo "--- Applying cancels ---"
for ((nonce = FROM_NONCE; nonce <= TO_NONCE; nonce++)); do
echo "Sending cancel tx for nonce $nonce ..."
cast send "$OWNER_ADDRESS" \
--private-key "$PRIVATE_KEY_INPUT" \
--rpc-url "$RPC_URL" \
--nonce "$nonce" \
--value 0 \
--gas-limit "$GAS_LIMIT" \
--gas-price "$effective_gas_wei" \
>/tmp/cancel-chain138-eoa-nonce-range.out 2>/tmp/cancel-chain138-eoa-nonce-range.err || {
echo "Failed to send cancel for nonce $nonce" >&2
cat /tmp/cancel-chain138-eoa-nonce-range.err >&2 || true
exit 1
}
cat /tmp/cancel-chain138-eoa-nonce-range.out
done
echo ""
echo "Submitted cancel transactions. Wait for inclusion, then rerun in dry-run mode to verify the queue is gone."
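A replacement transaction at an already-pending nonce is only accepted by the pool if it bumps the gas price, which is why the script plans `max(network gas × multiplier, floor)`. That choice in isolation:

```shell
#!/usr/bin/env bash
# Sketch: planned cancel gas = max(network_gas_wei * multiplier, floor_wei).
set -euo pipefail
plan_gas() {
  local network_wei="$1" mult="$2" floor_wei="$3"
  local g=$(( network_wei * mult ))
  (( g < floor_wei )) && g="$floor_wei"
  echo "$g"
}
low=$(plan_gas 800 2 2000)    # 1600 < floor, so the 2000 wei floor applies
high=$(plan_gas 1500 2 2000)  # doubled network price wins
echo "low=$low high=$high"
```

The floor matters on a quiet chain where `eth_gasPrice` can report values too low for the pool's replacement threshold.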

View File

@@ -6,7 +6,7 @@
# RPC_URL_138=https://rpc-core.d-bis.org ./scripts/deployment/check-deployer-balance-chain138-and-funding-plan.sh
# # Or from smom-dbis-138: source .env then run from repo root with RPC_URL_138 set
#
# Requires: cast (foundry), jq (optional). RPC_URL_138 must be set and reachable.
# Requires: cast (foundry), python3 (uint256-safe halving), jq (optional). RPC_URL_138 must be set and reachable.
set -euo pipefail
@@ -28,13 +28,19 @@ CUSDC="0xf22258f57794CC8E06237084b353Ab30fFfa640b"
USDT_OFFICIAL="0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1"
# PMM pool addresses (from LIQUIDITY_POOLS_MASTER_MAP / ADDRESS_MATRIX)
POOL_CUSDTCUSDC="0x9fcB06Aa1FD5215DC0E91Fd098aeff4B62fEa5C8"
POOL_CUSDTUSDT="0x6fc60DEDc92a2047062294488539992710b99D71"
POOL_CUSDCUSDC="0x90bd9Bf18Daa26Af3e814ea224032d015db58Ea5"
POOL_CUSDTCUSDC="0x9e89bAe009adf128782E19e8341996c596ac40dC"
POOL_CUSDTUSDT="0x866Cb44b59303d8dc5f4F9E3E7A8e8b0bf238d66"
POOL_CUSDCUSDC="0xc39B7D0F40838cbFb54649d327f49a6DAC964062"
get_balance() {
local addr="$1"
cast call "$addr" "balanceOf(address)(uint256)" "$DEPLOYER" --rpc-url "$RPC" 2>/dev/null || echo "0"
# cast may append human-readable suffix like " [6.971e14]" — only the first field is valid for arithmetic
cast call "$addr" "balanceOf(address)(uint256)" "$DEPLOYER" --rpc-url "$RPC" 2>/dev/null | awk '{print $1; exit}' || echo "0"
}
# Integer halving for uint256-sized values (bash $(( )) overflows past ~2^63)
idiv2() {
python3 -c "print(int('$1') // 2)" 2>/dev/null || echo "0"
}
get_decimals() {
@@ -42,10 +48,10 @@ get_decimals() {
cast call "$addr" "decimals()(uint8)" --rpc-url "$RPC" 2>/dev/null | cast --to-dec 2>/dev/null || echo "18"
}
# Native balance
native_wei=$(cast balance "$DEPLOYER" --rpc-url "$RPC" 2>/dev/null || echo "0")
native_eth=$(awk "BEGIN { printf \"%.6f\", $native_wei / 1e18 }" 2>/dev/null || echo "0")
half_native_wei=$((native_wei / 2))
# Native balance (strip any cast suffix)
native_wei=$(cast balance "$DEPLOYER" --rpc-url "$RPC" 2>/dev/null | awk '{print $1; exit}' || echo "0")
native_eth=$(python3 -c "print(f'{int(\"$native_wei\")/1e18:.6f}')" 2>/dev/null || echo "0")
half_native_wei=$(idiv2 "$native_wei")
echo "============================================"
echo "Deployer wallet — ChainID 138"
@@ -63,7 +69,7 @@ HALF_WETH=0; HALF_WETH10=0; HALF_LINK=0; HALF_CUSDT=0; HALF_CUSDC=0
for entry in "WETH:$WETH:18" "WETH10:$WETH10:18" "LINK:$LINK:18" "cUSDT:$CUSDT:6" "cUSDC:$CUSDC:6"; do
sym="${entry%%:*}"; rest="${entry#*:}"; addr="${rest%%:*}"; dec="${rest#*:}"
raw=$(get_balance "$addr")
half=$((raw / 2))
half=$(idiv2 "$raw")
case "$sym" in
WETH) RAW_WETH=$raw; HALF_WETH=$half ;;
WETH10) RAW_WETH10=$raw; HALF_WETH10=$half ;;
@@ -108,6 +114,6 @@ echo "# ADD_LIQUIDITY_CUSDTUSDT_BASE=$HALF_CUSDT ADD_LIQUIDITY_CUSDTUSDT_QUOTE=
echo "# ADD_LIQUIDITY_CUSDCUSDC_BASE=$HALF_CUSDC ADD_LIQUIDITY_CUSDCUSDC_QUOTE=<deployer USDC balance / 2>"
echo ""
echo "--- Reserve ---"
echo " Keep half of native ETH for gas. Half for LP (if wrapping to WETH for a pool): $((half_native_wei / 2)) wei."
echo " Keep half of native ETH for gas. Half for LP (if wrapping to WETH for a pool): $(idiv2 "$half_native_wei") wei."
echo " WETH/LINK: half amounts above can be reserved for other use or future pools."
echo "============================================"
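The `idiv2` helper exists because bash arithmetic silently misbehaves past 2^63-1, which wei-scale balances routinely exceed. A quick demonstration (the 1e20 figure is arbitrary):

```shell
#!/usr/bin/env bash
# Sketch: bash $(( )) cannot halve a >2^63 value correctly; python3 can.
set -euo pipefail
big=100000000000000000000                        # 1e20 wei (100 ETH), > 2^63-1
bash_half=$(( big / 2 ))                         # wrong: literal overflows intmax_t
py_half=$(python3 -c "print(int('$big') // 2)")  # exact arbitrary-precision halving
echo "bash=$bash_half python=$py_half"
```

The same reasoning applies to the `awk '{print $1}'` cleanup: cast's bracketed human-readable suffix must be stripped before any value reaches arithmetic at all.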

View File

@@ -9,9 +9,18 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM="${PROJECT_ROOT}/smom-dbis-138"
[[ -f "$SMOM/.env" ]] || { echo "Missing $SMOM/.env" >&2; exit 1; }
set -a
source "$SMOM/.env"
set +a
if [[ -f "$SMOM/scripts/lib/deployment/dotenv.sh" ]]; then
# shellcheck disable=SC1090
source "$SMOM/scripts/lib/deployment/dotenv.sh"
load_deployment_env --repo-root "$SMOM"
else
set -a
source "$SMOM/.env"
set +a
if [[ -z "${PRIVATE_KEY:-}" && -n "${DEPLOYER_PRIVATE_KEY:-}" ]]; then
export PRIVATE_KEY="$DEPLOYER_PRIVATE_KEY"
fi
fi
DEPLOYER=""
if [[ -n "${PRIVATE_KEY:-}" ]]; then

View File

@@ -0,0 +1,159 @@
#!/usr/bin/env bash
set -euo pipefail
# End-to-end Mainnet flash quote-push (cWUSDC/USDC) using:
# - AaveQuotePushFlashReceiver (default: already deployed at 0x241cb416aaFC2654078b7E2376adED2bDeFbCBa2)
# - TwoHopDodoIntegrationUnwinder (cWUSDC -> cWUSDT -> USDC on registered PMM pools)
#
# Why two-hop: no Uniswap V3 cWUSDC/USDC pool; the single-hop cWUSDC/USDC PMM is the illiquid venue you are fixing.
#
# Usage (repo root, env loaded):
# source scripts/lib/load-project-env.sh
# bash scripts/deployment/complete-mainnet-flash-quote-push-path.sh --dry-run
# bash scripts/deployment/complete-mainnet-flash-quote-push-path.sh --apply
#
# Env overrides (optional):
# AAVE_QUOTE_PUSH_RECEIVER_MAINNET (default canonical broadcast below)
# FLASH_QUOTE_AMOUNT_RAW default 200000 (0.2 USDC; keep small vs thin cWUSDT/USDC pool)
# MIN_OUT_PMM / MIN_OUT_UNWIND set explicitly if forge simulation reverts on PMM leg
# UNWIND_MIN_MID_OUT_RAW default 1
#
# References: cross-chain-pmm-lps/config/deployment-status.json chains."1".pmmPools
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROXMOX_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
SMOM="${PROXMOX_ROOT}/smom-dbis-138"
# Preserve explicit caller overrides across sourced repo env files.
_qp_private_key="${PRIVATE_KEY-}"
_qp_rpc="${ETHEREUM_MAINNET_RPC-}"
_qp_receiver="${AAVE_QUOTE_PUSH_RECEIVER_MAINNET-}"
_qp_amount="${FLASH_QUOTE_AMOUNT_RAW-}"
_qp_integration="${DODO_PMM_INTEGRATION_MAINNET-}"
_qp_pool="${POOL_CWUSDC_USDC_MAINNET-}"
_qp_pool_a="${UNWIND_TWO_HOP_POOL_A-}"
_qp_pool_b="${UNWIND_TWO_HOP_POOL_B-}"
_qp_mid_token="${UNWIND_TWO_HOP_MID_TOKEN-}"
_qp_min_mid_out="${UNWIND_MIN_MID_OUT_RAW-}"
_qp_min_out_pmm="${MIN_OUT_PMM-}"
_qp_min_out_unwind="${MIN_OUT_UNWIND-}"
# shellcheck disable=SC1091
source "${PROXMOX_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
# shellcheck disable=SC1091
source "${SMOM}/scripts/load-env.sh" >/dev/null 2>&1 || true
[[ -n "$_qp_private_key" ]] && export PRIVATE_KEY="$_qp_private_key"
[[ -n "$_qp_rpc" ]] && export ETHEREUM_MAINNET_RPC="$_qp_rpc"
[[ -n "$_qp_receiver" ]] && export AAVE_QUOTE_PUSH_RECEIVER_MAINNET="$_qp_receiver"
[[ -n "$_qp_amount" ]] && export FLASH_QUOTE_AMOUNT_RAW="$_qp_amount"
[[ -n "$_qp_integration" ]] && export DODO_PMM_INTEGRATION_MAINNET="$_qp_integration"
[[ -n "$_qp_pool" ]] && export POOL_CWUSDC_USDC_MAINNET="$_qp_pool"
[[ -n "$_qp_pool_a" ]] && export UNWIND_TWO_HOP_POOL_A="$_qp_pool_a"
[[ -n "$_qp_pool_b" ]] && export UNWIND_TWO_HOP_POOL_B="$_qp_pool_b"
[[ -n "$_qp_mid_token" ]] && export UNWIND_TWO_HOP_MID_TOKEN="$_qp_mid_token"
[[ -n "$_qp_min_mid_out" ]] && export UNWIND_MIN_MID_OUT_RAW="$_qp_min_mid_out"
[[ -n "$_qp_min_out_pmm" ]] && export MIN_OUT_PMM="$_qp_min_out_pmm"
[[ -n "$_qp_min_out_unwind" ]] && export MIN_OUT_UNWIND="$_qp_min_out_unwind"
unset _qp_private_key _qp_rpc _qp_receiver _qp_amount _qp_integration _qp_pool
unset _qp_pool_a _qp_pool_b _qp_mid_token _qp_min_mid_out _qp_min_out_pmm _qp_min_out_unwind
MODE="dry-run"
for a in "$@"; do
case "$a" in
--apply) MODE=apply ;;
--dry-run) MODE=dry-run ;;
--help|-h)
sed -n '1,35p' "$0"
exit 0
;;
*)
echo "[fail] unknown arg: $a (use --dry-run or --apply)" >&2
exit 2
;;
esac
done
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd jq
if [[ -z "${ETHEREUM_MAINNET_RPC:-}" || -z "${PRIVATE_KEY:-}" || -z "${DODO_PMM_INTEGRATION_MAINNET:-}" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC, PRIVATE_KEY, DODO_PMM_INTEGRATION_MAINNET required" >&2
exit 1
fi
export AAVE_QUOTE_PUSH_RECEIVER_MAINNET="${AAVE_QUOTE_PUSH_RECEIVER_MAINNET:-0x241cb416aaFC2654078b7E2376adED2bDeFbCBa2}"
# Default the pool when neither the caller nor repo env set one.
# (the _qp_pool saved copy was unset above; referencing it here would trip `set -u`)
export POOL_CWUSDC_USDC_MAINNET="${POOL_CWUSDC_USDC_MAINNET:-0x69776fc607e9edA8042e320e7e43f54d06c68f0E}"
# Canonical Mainnet public PMM pools (deployment-status.json)
export UNWIND_MODE=4
export UNWIND_TWO_HOP_POOL_A="${UNWIND_TWO_HOP_POOL_A:-0xe944b7Cb012A0820c07f54D51e92f0e1C74168DB}"
export UNWIND_TWO_HOP_POOL_B="${UNWIND_TWO_HOP_POOL_B:-0x27f3aE7EE71Be3d77bAf17d4435cF8B895DD25D2}"
export UNWIND_TWO_HOP_MID_TOKEN="${UNWIND_TWO_HOP_MID_TOKEN:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
export UNWIND_MIN_MID_OUT_RAW="${UNWIND_MIN_MID_OUT_RAW:-1}"
export FLASH_QUOTE_AMOUNT_RAW="${FLASH_QUOTE_AMOUNT_RAW:-200000}"
# PMM pool querySellQuote often reverts off-chain for this venue; operator should set MIN_OUT_PMM explicitly for production.
if [[ -z "${MIN_OUT_PMM:-}" ]]; then
export MIN_OUT_PMM=1
echo "[warn] MIN_OUT_PMM unset — using 1 (unsafe). Set MIN_OUT_PMM for real slippage protection." >&2
fi
DEPLOY_ARGS=()
if [[ "$MODE" == "apply" ]]; then
DEPLOY_ARGS=(--apply)
else
DEPLOY_ARGS=(--dry-run)
fi
echo "=== 1/2 Deploy TwoHopDodoIntegrationUnwinder (skip Aave receiver redeploy) ==="
DEPLOY_MAINNET_FLASH_SKIP_RECEIVER=1 QUOTE_PUSH_UNWINDER_TYPE=two_hop_dodo \
bash "${PROXMOX_ROOT}/scripts/deployment/deploy-mainnet-aave-quote-push-stack.sh" "${DEPLOY_ARGS[@]}"
pick_twohop_addr() {
local j1="${SMOM}/broadcast/DeployTwoHopDodoIntegrationUnwinder.s.sol/1/run-latest.json"
if [[ ! -f "$j1" ]]; then
echo "[fail] could not find a broadcasted TwoHop deployment under smom-dbis-138/broadcast/ (forge dry-run artifacts are not reusable across scripts)" >&2
exit 1
fi
jq -r '.transactions[]? | select(.transactionType == "CREATE" and .contractName == "TwoHopDodoIntegrationUnwinder") | .contractAddress' "$j1" | tail -n1
}
export QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET="${QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET:-$(pick_twohop_addr)}"
if [[ -z "$QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET" || "$QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET" == "null" ]]; then
echo "[fail] could not parse TwoHopDodoIntegrationUnwinder address from broadcast json" >&2
exit 1
fi
echo "AAVE_QUOTE_PUSH_RECEIVER_MAINNET=$AAVE_QUOTE_PUSH_RECEIVER_MAINNET"
echo "QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET=$QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET"
echo "FLASH_QUOTE_AMOUNT_RAW=$FLASH_QUOTE_AMOUNT_RAW UNWIND_MODE=$UNWIND_MODE"
echo "=== 2/2 Run one flash quote-push ==="
RUN_ARGS=()
if [[ "$MODE" == "apply" ]]; then
RUN_ARGS=(--apply)
else
RUN_ARGS=(--dry-run)
fi
bash "${PROXMOX_ROOT}/scripts/deployment/run-mainnet-aave-cwusdc-quote-push-once.sh" "${RUN_ARGS[@]}"
echo
echo "Done. Append to operator .env (do not commit secrets):"
echo " AAVE_QUOTE_PUSH_RECEIVER_MAINNET=$AAVE_QUOTE_PUSH_RECEIVER_MAINNET"
echo " QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET=$QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET"
echo " UNWIND_MODE=4"
echo " UNWIND_TWO_HOP_POOL_A=$UNWIND_TWO_HOP_POOL_A"
echo " UNWIND_TWO_HOP_POOL_B=$UNWIND_TWO_HOP_POOL_B"
echo " UNWIND_TWO_HOP_MID_TOKEN=$UNWIND_TWO_HOP_MID_TOKEN"
echo "Then: bash scripts/verify/check-mainnet-cwusdc-usdc-reserve-peg.sh"
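The `MIN_OUT_PMM=1` fallback above is explicitly flagged as unsafe. When a usable off-chain quote is available, a minimum-out can be derived from it with a basis-point tolerance; a minimal sketch in Python (the function name and the 50 bps default are illustrative, not part of these scripts):

```python
# Hypothetical helper: conservative raw min-out from an off-chain quote.
def min_out_from_quote(quoted_out_raw, slippage_bps=50):
    if quoted_out_raw <= 0 or not 0 <= slippage_bps < 10_000:
        raise ValueError("bad quote or slippage")
    # floor division keeps the bound at or below the tolerance line
    return quoted_out_raw * (10_000 - slippage_bps) // 10_000

print(min_out_from_quote(200_000))       # 199000 (50 bps below quote)
print(min_out_from_quote(200_000, 100))  # 198000 (1% below quote)
```

An operator would export the result as `MIN_OUT_PMM` before running the quote-push step.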

View File

@@ -0,0 +1,135 @@
#!/usr/bin/env bash
set -euo pipefail
# Suggest matched 1:1 seed raw amounts (6 decimals) for a new Mainnet cWUSDT/cWUSDC PMM pool
# by scaling the smaller of the two reference rails' matched depth:
# - cWUSDT/USDT
# - cWUSDC/USDC
#
# This is a planning helper only. Review wallet balances before executing deploy/add-liquidity.
#
# Usage:
# source scripts/lib/load-project-env.sh
# bash scripts/deployment/compute-mainnet-cwusdt-cwusdc-seed-amounts.sh --multiplier=50
# bash scripts/deployment/compute-mainnet-cwusdt-cwusdc-seed-amounts.sh --multiplier=50 --cap-raw=500000000000
#
# Env overrides:
# POOL_CWUSDT_USDT_MAINNET, POOL_CWUSDC_USDC_MAINNET
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
require_cmd python3
MULTIPLIER="50"
CAP_RAW=""
for arg in "$@"; do
case "$arg" in
--multiplier=*) MULTIPLIER="${arg#*=}" ;;
--cap-raw=*) CAP_RAW="${arg#*=}" ;;
-h | --help)
sed -n '1,22p' "$0"
exit 0
;;
*)
echo "[fail] unknown arg: $arg" >&2
exit 2
;;
esac
done
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
if [[ -z "$RPC_URL" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC is required" >&2
exit 1
fi
POOL_UT="${POOL_CWUSDT_USDT_MAINNET:-0x79156F6B7bf71a1B72D78189B540A89A6C13F6FC}"
POOL_UC="${POOL_CWUSDC_USDC_MAINNET:-0x69776fc607e9edA8042e320e7e43f54d06c68f0E}"
read_reserves() {
local pool="$1"
cast call "$pool" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL"
}
min_side() {
python3 - "$1" "$2" <<'PY'
import sys
b, q = int(sys.argv[1]), int(sys.argv[2])
print(min(b, q))
PY
}
res_ut="$(read_reserves "$POOL_UT")"
res_uc="$(read_reserves "$POOL_UC")"
line1_ut="$(sed -n '1p' <<<"$res_ut" | awk '{print $1}')"
line2_ut="$(sed -n '2p' <<<"$res_ut" | awk '{print $1}')"
line1_uc="$(sed -n '1p' <<<"$res_uc" | awk '{print $1}')"
line2_uc="$(sed -n '2p' <<<"$res_uc" | awk '{print $1}')"
min_ut="$(min_side "$line1_ut" "$line2_ut")"
min_uc="$(min_side "$line1_uc" "$line2_uc")"
ref="$(python3 - "$min_ut" "$min_uc" <<'PY'
import sys
print(min(int(sys.argv[1]), int(sys.argv[2])))
PY
)"
seed="$(python3 - "$ref" "$MULTIPLIER" <<'PY'
import sys
r, m = int(sys.argv[1]), int(sys.argv[2])
print(r * m)
PY
)"
if [[ -n "$CAP_RAW" ]]; then
seed="$(python3 - "$seed" "$CAP_RAW" <<'PY'
import sys
s, c = int(sys.argv[1]), int(sys.argv[2])
print(min(s, c))
PY
)"
fi
echo "=== cWUSDT/cWUSDC matched seed suggestion (1:1 raw on both legs) ==="
echo "referencePools: cWUSDT/USDT=$POOL_UT cWUSDC/USDC=$POOL_UC"
echo "minSide_cWUSDT_USDT_raw=$min_ut"
echo "minSide_cWUSDC_USDC_raw=$min_uc"
echo "referenceMin_raw=$ref"
echo "multiplier=$MULTIPLIER"
echo "suggested_base_raw(cWUSDT)=$seed"
echo "suggested_quote_raw(cWUSDC)=$seed"
echo
echo "Dry-run deploy + seed:"
echo " bash scripts/deployment/deploy-mainnet-cwusdt-cwusdc-pool.sh \\"
echo " --initial-price=1000000000000000000 \\"
echo " --base-amount=$seed --quote-amount=$seed --dry-run"
echo
echo "Optional: deepen the two reference rails toward ~${MULTIPLIER}x their current min-side depth"
echo "(only if you intend matched adds and pool state supports equal base/quote adds; verify live reserves first):"
delta_ut="$(python3 - "$min_ut" "$MULTIPLIER" <<'PY'
import sys
m, k = int(sys.argv[1]), int(sys.argv[2])
print(max(0, m * (k - 1)))
PY
)"
delta_uc="$(python3 - "$min_uc" "$MULTIPLIER" <<'PY'
import sys
m, k = int(sys.argv[1]), int(sys.argv[2])
print(max(0, m * (k - 1)))
PY
)"
echo " suggested extra raw per side cWUSDT/USDT: base=$delta_ut quote=$delta_ut"
echo " bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh --pair=cwusdt-usdt --base-amount=$delta_ut --quote-amount=$delta_ut --dry-run"
echo " suggested extra raw per side cWUSDC/USDC: base=$delta_uc quote=$delta_uc"
echo " bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh --pair=cwusdc-usdc --base-amount=$delta_uc --quote-amount=$delta_uc --dry-run"
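The seed suggestion above reduces to: take the smaller matched depth across both reference rails, multiply, and cap. A compact restatement in Python (the script already shells out to python3 for the same big-integer arithmetic; the reserve values below are illustrative only):

```python
def suggest_seed(reserves_ut, reserves_uc, multiplier, cap_raw=None):
    # each argument is a (base_raw, quote_raw) tuple from getVaultReserve()
    ref = min(min(*reserves_ut), min(*reserves_uc))  # smaller matched depth
    seed = ref * multiplier
    return min(seed, cap_raw) if cap_raw is not None else seed

print(suggest_seed((1_200_000, 900_000), (2_000_000, 1_500_000), 50))
# 45000000  (900000 raw min-side * 50)
print(suggest_seed((1_200_000, 900_000), (2_000_000, 1_500_000), 50, cap_raw=10_000_000))
# 10000000  (capped by --cap-raw)
```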

View File

@@ -0,0 +1,78 @@
#!/usr/bin/env bash
set -euo pipefail
# Print base_raw / quote_raw for Mainnet cW/TRUU PMM adds at 1:1 USD notional per leg
# (same ratio as seeds: B_ref=1_500_000 cW raw, Q_ref=372_348_929_900_893 TRUU raw).
#
# Usage:
# bash scripts/deployment/compute-mainnet-truu-liquidity-amounts.sh --usd-per-leg=125000
# bash scripts/deployment/compute-mainnet-truu-liquidity-amounts.sh --usd-per-leg=50000 --print-cast=0
#
# After funding the deployer wallet, run (existing pool → addLiquidity only):
# bash scripts/deployment/deploy-mainnet-pmm-cw-truu-pool.sh \
# --pair=cwusdt-truu --initial-price=<I> --base-amount=<base_raw> --quote-amount=<quote_raw>
#
# See: docs/03-deployment/MAINNET_PMM_TRUU_CWUSD_PEG_AND_BOT_RUNBOOK.md section 11
USD_PER_LEG=""
PRINT_CAST=1
I_DEFAULT="${PMM_TRUU_INITIAL_I:-2482326199339289065}"
for arg in "$@"; do
case "$arg" in
--usd-per-leg=*) USD_PER_LEG="${arg#*=}" ;;
--print-cast=*) PRINT_CAST="${arg#*=}" ;;
*)
echo "[fail] unknown arg: $arg" >&2
exit 2
;;
esac
done
if [[ -z "$USD_PER_LEG" ]]; then
echo "[fail] required: --usd-per-leg=N (USD notional on cW leg = TRUU leg at reference ratio)" >&2
exit 1
fi
command -v python3 >/dev/null 2>&1 || {
echo "[fail] python3 required" >&2
exit 1
}
read -r BASE_RAW QUOTE_RAW <<EOF
$(python3 - <<PY
from decimal import Decimal
usd = Decimal("$USD_PER_LEG")
if usd <= 0:
raise SystemExit("usd must be positive")
base = int(usd * Decimal(10) ** 6)
b_ref = 1_500_000
q_ref = 372_348_929_900_893
quote = base * q_ref // b_ref
print(base, quote)
PY
)
EOF
echo "=== Mainnet cW/TRUU add — 1:1 USD legs (reference ratio) ==="
echo "usd_per_leg=$USD_PER_LEG"
echo "base_raw (cWUSDT or cWUSDC, 6 dec)=$BASE_RAW"
echo "quote_raw (TRUU, 10 dec)=$QUOTE_RAW"
truu_full="$(python3 -c "print($QUOTE_RAW / 10**10)")"
echo "TRUU full tokens ≈ $truu_full"
echo "implied USD/TRUU if legs 1:1 ≈ $(python3 -c "print(float($USD_PER_LEG) / ($QUOTE_RAW / 10**10))")"
echo
echo "Funding both pools (cWUSDT/TRUU and cWUSDC/TRUU) to the same USD size requires the base amount once in each cW token and 2x the TRUU quote amount."
echo
if [[ "$PRINT_CAST" == "1" ]]; then
echo "deploy-mainnet-pmm-cw-truu-pool.sh --pair=cwusdt-truu \\"
echo " --initial-price=$I_DEFAULT \\"
echo " --base-amount=$BASE_RAW \\"
echo " --quote-amount=$QUOTE_RAW"
echo
echo "deploy-mainnet-pmm-cw-truu-pool.sh --pair=cwusdc-truu \\"
echo " --initial-price=$I_DEFAULT \\"
echo " --base-amount=$BASE_RAW \\"
echo " --quote-amount=$QUOTE_RAW"
fi
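The quote-leg computation above is a single integer scaling against the reference seed ratio. Restated in Python (B_REF and Q_REF are the reference values from the script header; floor division matches the embedded python3 helper):

```python
B_REF = 1_500_000            # reference cW raw (6 decimals)
Q_REF = 372_348_929_900_893  # reference TRUU raw (10 decimals)

def truu_amounts(usd_per_leg):
    base = usd_per_leg * 10**6     # cW raw: 1 unit is ~1 USD
    quote = base * Q_REF // B_REF  # TRUU raw at the reference ratio
    return base, quote

print(truu_amounts(1))  # (1000000, 248232619933928)
```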

View File

@@ -0,0 +1,74 @@
#!/usr/bin/env bash
set -euo pipefail
# Compute raw token amounts for a cWUSD* / TRUU PMM seed where legs are 1:1 in USD NOTIONAL
# (same dollar value on each side at deposit). This is NOT 1:1 raw token count unless TRUU/USD = $1.
#
# Args (env or flags):
# USD_NOTIONAL_PER_LEG — USD value to place on EACH side (default: 1000)
# TRUU_USD_PRICE — USD price of one full TRUU token (10 decimals), e.g. 0.00004
#
# Output: base_amount (6-dec cW), quote_amount (10-dec TRUU) for deploy-mainnet-pmm-cw-truu-pool.sh
#
# Example:
# USD_NOTIONAL_PER_LEG=5000 TRUU_USD_PRICE=0.00004 bash scripts/deployment/compute-mainnet-truu-pmm-seed-amounts.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
USD_NOTIONAL_PER_LEG="${USD_NOTIONAL_PER_LEG:-1000}"
TRUU_USD_PRICE="${TRUU_USD_PRICE:-}"
for arg in "$@"; do
case "$arg" in
--usd-per-leg=*) USD_NOTIONAL_PER_LEG="${arg#*=}" ;;
--truu-usd-price=*) TRUU_USD_PRICE="${arg#*=}" ;;
*)
echo "[fail] unknown arg: $arg (use --usd-per-leg=N --truu-usd-price=P)" >&2
exit 2
;;
esac
done
if [[ -z "$TRUU_USD_PRICE" ]]; then
echo "[fail] TRUU_USD_PRICE required (USD per 1 full TRUU token), e.g. export TRUU_USD_PRICE=0.00004" >&2
exit 1
fi
command -v python3 >/dev/null 2>&1 || {
echo "[fail] python3 required" >&2
exit 1
}
read -r BASE_RAW QUOTE_RAW TRUU_FULL <<EOF
$(python3 - <<PY
from decimal import Decimal, getcontext
getcontext().prec = 40
usd = Decimal("$USD_NOTIONAL_PER_LEG")
p = Decimal("$TRUU_USD_PRICE")
if p <= 0:
raise SystemExit("price must be positive")
# cW leg: 6 decimals, 1 unit = 1 USD notionally
base_raw = int((usd * Decimal(10) ** 6).quantize(Decimal("1")))
# TRUU leg: 10 decimals; token count = usd / price
truu_tokens = usd / p
quote_raw = int((truu_tokens * Decimal(10) ** 10).quantize(Decimal("1")))
print(base_raw, quote_raw, format(truu_tokens.normalize(), "f"))
PY
)
EOF
echo "=== cW / TRUU seed (1:1 USD notional per leg) ==="
echo "usd_per_leg=$USD_NOTIONAL_PER_LEG"
echo "truu_usd_price=$TRUU_USD_PRICE"
echo "cW_base_amount_raw (6 dec)=$BASE_RAW (~${USD_NOTIONAL_PER_LEG} USD of cW)"
echo "truu_quote_amount_raw (10 dec)=$QUOTE_RAW (~${USD_NOTIONAL_PER_LEG} USD of TRUU ≈ $TRUU_FULL full tokens)"
echo
echo "deploy-mainnet-pmm-cw-truu-pool.sh \\"
echo "  --pair=cwusdt-truu \\"
echo "  --initial-price=<set from DODO fork> \\"
echo "  --base-amount=$BASE_RAW \\"
echo "  --quote-amount=$QUOTE_RAW \\"
echo "  --dry-run"
echo "# (repeat with --pair=cwusdc-truu for the cWUSDC pool)"
echo
echo "# After pools exist, ratio-matched top-up from wallet balances:"
echo "# bash scripts/deployment/add-mainnet-truu-pmm-topup.sh --pair=cwusdt-truu"
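The split above in one place: the cW leg is the USD notional at 6 decimals, the TRUU leg is the notional divided by price at 10 decimals. A sketch using Decimal, as the embedded python3 does (the 0.00004 price is only the example from the header, not a live price):

```python
from decimal import Decimal

def seed_amounts(usd_per_leg, truu_usd_price):
    usd, p = Decimal(usd_per_leg), Decimal(truu_usd_price)
    base_raw = int(usd * 10**6)          # cW: 6 decimals, 1 unit is ~$1
    quote_raw = int((usd / p) * 10**10)  # TRUU: 10 decimals
    return base_raw, quote_raw

print(seed_amounts("1000", "0.00004"))
# (1000000000, 250000000000000000)
```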

View File

@@ -0,0 +1,316 @@
#!/usr/bin/env bash
#
# Wire one generic Chain 138 canonical c* asset to its cW* mirror on a public chain.
# Dry-run by default; pass --execute to broadcast writes.
#
# Examples:
# ./scripts/deployment/configure-cstar-cw-bridge-pair.sh --chain MAINNET --asset cUSDT
# ./scripts/deployment/configure-cstar-cw-bridge-pair.sh --chain POLYGON --asset cWEURC --freeze --execute
# ./scripts/deployment/configure-cstar-cw-bridge-pair.sh --chain AVALANCHE --asset cUSDC --max-outstanding 1000000000000 --freeze --execute
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="${PROJECT_ROOT}/smom-dbis-138"
CONFIG_FILE="${PROJECT_ROOT}/config/token-mapping-multichain.json"
CHAIN=""
ASSET=""
MAX_OUTSTANDING=""
FREEZE=false
EXECUTE=false
usage() {
cat <<'EOF'
Usage:
./scripts/deployment/configure-cstar-cw-bridge-pair.sh --chain <CHAIN> --asset <c*> [options]
Required:
--chain <CHAIN> MAINNET | CRONOS | BSC | POLYGON | GNOSIS | AVALANCHE | BASE | ARBITRUM | OPTIMISM | CELO | ALL
--asset <c*> cUSDT | cUSDC | cAUSDT | cUSDW | cEURC | cEURT | cGBPC | cGBPT | cAUDC | cJPYC | cCHFC | cCADC | cXAUC | cXAUT
Options:
--max-outstanding <raw> Set L1 maxOutstanding(canonical, selector, rawAmount)
--freeze Freeze the L2 token pair and the 138 destination after wiring
--execute Broadcast writes; default is dry-run
--help Show this message
EOF
}
while [[ $# -gt 0 ]]; do
case "$1" in
--chain) CHAIN="${2:-}"; shift 2 ;;
--asset) ASSET="${2:-}"; shift 2 ;;
--max-outstanding) MAX_OUTSTANDING="${2:-}"; shift 2 ;;
--freeze) FREEZE=true; shift ;;
--execute) EXECUTE=true; shift ;;
--help|-h) usage; exit 0 ;;
*)
echo "Unknown argument: $1" >&2
usage >&2
exit 2
;;
esac
done
[[ -n "$CHAIN" && -n "$ASSET" ]] || { usage >&2; exit 2; }
if [[ -f "${SMOM_ROOT}/scripts/lib/deployment/dotenv.sh" ]]; then
# shellcheck disable=SC1090
source "${SMOM_ROOT}/scripts/lib/deployment/dotenv.sh"
load_deployment_env --repo-root "$SMOM_ROOT"
else
set -a
# shellcheck disable=SC1090
source "${SMOM_ROOT}/.env"
set +a
fi
command -v cast >/dev/null 2>&1 || { echo "cast (foundry) required" >&2; exit 1; }
command -v node >/dev/null 2>&1 || { echo "node required" >&2; exit 1; }
[[ -n "${PRIVATE_KEY:-}" ]] || { echo "PRIVATE_KEY required in ${SMOM_ROOT}/.env" >&2; exit 1; }
[[ -f "$CONFIG_FILE" ]] || { echo "Missing $CONFIG_FILE" >&2; exit 1; }
declare -A CHAIN_NAME_TO_ID=(
[MAINNET]=1
[CRONOS]=25
[BSC]=56
[POLYGON]=137
[GNOSIS]=100
[AVALANCHE]=43114
[BASE]=8453
[ARBITRUM]=42161
[OPTIMISM]=10
[CELO]=42220
[ALL]=651940
)
declare -A CHAIN_NAME_TO_SELECTOR_VAR=(
[MAINNET]=ETH_MAINNET_SELECTOR
[CRONOS]=CRONOS_SELECTOR
[BSC]=BSC_SELECTOR
[POLYGON]=POLYGON_SELECTOR
[GNOSIS]=GNOSIS_SELECTOR
[AVALANCHE]=AVALANCHE_SELECTOR
[BASE]=BASE_SELECTOR
[ARBITRUM]=ARBITRUM_SELECTOR
[OPTIMISM]=OPTIMISM_SELECTOR
[CELO]=CELO_SELECTOR
)
declare -A ASSET_TO_MAPPING_KEY=(
[cUSDT]=Compliant_USDT_cW
[cUSDC]=Compliant_USDC_cW
[cAUSDT]=Compliant_AUSDT_cW
[cUSDW]=Compliant_USDW_cW
[cEURC]=Compliant_EURC_cW
[cEURT]=Compliant_EURT_cW
[cGBPC]=Compliant_GBPC_cW
[cGBPT]=Compliant_GBPT_cW
[cAUDC]=Compliant_AUDC_cW
[cJPYC]=Compliant_JPYC_cW
[cCHFC]=Compliant_CHFC_cW
[cCADC]=Compliant_CADC_cW
[cXAUC]=Compliant_XAUC_cW
[cXAUT]=Compliant_XAUT_cW
)
declare -A ASSET_TO_CW_PREFIX=(
[cUSDT]=CWUSDT
[cUSDC]=CWUSDC
[cAUSDT]=CWAUSDT
[cUSDW]=CWUSDW
[cEURC]=CWEURC
[cEURT]=CWEURT
[cGBPC]=CWGBPC
[cGBPT]=CWGBPT
[cAUDC]=CWAUDC
[cJPYC]=CWJPYC
[cCHFC]=CWCHFC
[cCADC]=CWCADC
[cXAUC]=CWXAUC
[cXAUT]=CWXAUT
)
chain_id="${CHAIN_NAME_TO_ID[$CHAIN]:-}"
[[ -n "$chain_id" ]] || { echo "Unsupported chain: $CHAIN" >&2; exit 2; }
mapping_key="${ASSET_TO_MAPPING_KEY[$ASSET]:-}"
cw_prefix="${ASSET_TO_CW_PREFIX[$ASSET]:-}"
[[ -n "$mapping_key" && -n "$cw_prefix" ]] || { echo "Unsupported asset: $ASSET" >&2; exit 2; }
selector_var="${CHAIN_NAME_TO_SELECTOR_VAR[$CHAIN]:-}"
selector="${selector_var:+${!selector_var:-}}"
if [[ "$CHAIN" != "ALL" && -z "$selector" ]]; then
echo "Missing selector env for $CHAIN ($selector_var)" >&2
exit 1
fi
resolve_chain_rpc() {
case "$1" in
MAINNET) printf '%s' "${ETH_MAINNET_RPC_URL:-${ETHEREUM_MAINNET_RPC:-}}" ;;
CRONOS) printf '%s' "${CRONOS_RPC_URL:-${CRONOS_RPC:-}}" ;;
BSC) printf '%s' "${BSC_RPC_URL:-}" ;;
POLYGON) printf '%s' "${POLYGON_MAINNET_RPC:-${POLYGON_RPC_URL:-}}" ;;
GNOSIS) printf '%s' "${GNOSIS_RPC:-${GNOSIS_MAINNET_RPC:-}}" ;;
AVALANCHE) printf '%s' "${AVALANCHE_RPC_URL:-}" ;;
BASE) printf '%s' "${BASE_MAINNET_RPC:-}" ;;
ARBITRUM) printf '%s' "${ARBITRUM_MAINNET_RPC:-${ARBITRUM_MAINNET_RPC_URL:-${ARBITRUM_RPC_URL:-}}}" ;;
OPTIMISM) printf '%s' "${OPTIMISM_MAINNET_RPC:-}" ;;
CELO) printf '%s' "${CELO_RPC:-${CELO_MAINNET_RPC:-}}" ;;
ALL) printf '%s' "${ALL_MAINNET_RPC_URL:-${ALL_RPC_URL:-}}" ;;
*) printf '' ;;
esac
}
CHAIN_RPC="$(resolve_chain_rpc "$CHAIN")"
[[ -n "${RPC_URL_138:-}" ]] || { echo "RPC_URL_138 required" >&2; exit 1; }
[[ -n "$CHAIN_RPC" ]] || { echo "Missing RPC for $CHAIN" >&2; exit 1; }
CHAIN138_L1_BRIDGE="${CW_L1_BRIDGE_CHAIN138:-${CHAIN138_L1_BRIDGE:-}}"
[[ -n "$CHAIN138_L1_BRIDGE" ]] || { echo "Set CW_L1_BRIDGE_CHAIN138 (or CHAIN138_L1_BRIDGE) in .env" >&2; exit 1; }
CW_BRIDGE_VAR="CW_BRIDGE_${CHAIN}"
CHAIN_L2_BRIDGE="${!CW_BRIDGE_VAR:-}"
[[ -n "$CHAIN_L2_BRIDGE" ]] || { echo "Set ${CW_BRIDGE_VAR} in .env" >&2; exit 1; }
MIRRORED_ENV_VAR="${cw_prefix}_${CHAIN}"
MIRRORED_ADDRESS_VAR="${cw_prefix}_ADDRESS_${chain_id}"
CHAIN_MIRRORED_TOKEN="${!MIRRORED_ENV_VAR:-${!MIRRORED_ADDRESS_VAR:-}}"
map_info="$(
node - <<'NODE' "$CONFIG_FILE" "$chain_id" "$mapping_key"
const fs = require('fs');
const [configFile, chainIdRaw, mappingKey] = process.argv.slice(2);
const chainId = Number(chainIdRaw);
const json = JSON.parse(fs.readFileSync(configFile, 'utf8'));
const pair = (json.pairs || []).find((entry) => entry.fromChainId === 138 && entry.toChainId === chainId);
if (!pair) {
process.exit(10);
}
const token = (pair.tokens || []).find((entry) => entry.key === mappingKey);
if (!token) {
process.exit(11);
}
process.stdout.write([
token.addressFrom || '',
token.addressTo || '',
token.name || '',
pair.notes || ''
].join('\n'));
NODE
)" || {
echo "Failed to resolve $ASSET / $CHAIN from $CONFIG_FILE" >&2
exit 1
}
canonical_address="$(sed -n '1p' <<<"$map_info")"
mapping_mirrored_address="$(sed -n '2p' <<<"$map_info")"
mapping_name="$(sed -n '3p' <<<"$map_info")"
[[ -n "$canonical_address" && "$canonical_address" != "0x0000000000000000000000000000000000000000" ]] || {
echo "Canonical address missing for $ASSET -> $CHAIN in $CONFIG_FILE" >&2
exit 1
}
if [[ -z "$CHAIN_MIRRORED_TOKEN" || "$CHAIN_MIRRORED_TOKEN" == "0x0000000000000000000000000000000000000000" ]]; then
if [[ -n "$mapping_mirrored_address" && "$mapping_mirrored_address" != "0x0000000000000000000000000000000000000000" ]]; then
CHAIN_MIRRORED_TOKEN="$mapping_mirrored_address"
else
echo "Mirrored token for $ASSET on $CHAIN is not configured in .env or $CONFIG_FILE" >&2
exit 1
fi
fi
send_cast() {
local rpc="$1"
local to="$2"
local sig="$3"
shift 3
if $EXECUTE; then
cast send "$to" "$sig" "$@" --rpc-url "$rpc" --private-key "$PRIVATE_KEY" --legacy
else
printf 'cast send %q %q' "$to" "$sig"
for arg in "$@"; do
printf ' %q' "$arg"
done
printf ' --rpc-url %q --private-key "$PRIVATE_KEY" --legacy\n' "$rpc"
fi
}
read_struct_field() {
local value="$1"
tr '\n' ' ' <<<"$value" | xargs 2>/dev/null || true
}
pair_current="$(cast call "$CHAIN_L2_BRIDGE" "canonicalToMirrored(address)(address)" "$canonical_address" --rpc-url "$CHAIN_RPC" 2>/dev/null || true)"
l1_dest_current="$(cast call "$CHAIN138_L1_BRIDGE" "destinations(address,uint64)((address,bool))" "$canonical_address" "${selector:-0}" --rpc-url "$RPC_URL_138" 2>/dev/null || true)"
l2_dest_current="$(cast call "$CHAIN_L2_BRIDGE" "destinations(uint64)((address,bool))" 138 --rpc-url "$CHAIN_RPC" 2>/dev/null || true)"
allowlisted_current="$(cast call "$CHAIN138_L1_BRIDGE" "supportedCanonicalToken(address)(bool)" "$canonical_address" --rpc-url "$RPC_URL_138" 2>/dev/null || true)"
freeze_pair_current="$(cast call "$CHAIN_L2_BRIDGE" "tokenPairFrozen(address)(bool)" "$canonical_address" --rpc-url "$CHAIN_RPC" 2>/dev/null || true)"
freeze_dest_current="$(cast call "$CHAIN_L2_BRIDGE" "destinationFrozen(uint64)(bool)" 138 --rpc-url "$CHAIN_RPC" 2>/dev/null || true)"
echo "=== Generic c* -> cW* bridge wiring ==="
echo "Chain: $CHAIN (chainId $chain_id)"
echo "Asset: $ASSET"
echo "Mapping key: $mapping_key"
echo "Lane: $mapping_name"
echo "Canonical (138): $canonical_address"
echo "Mirrored ($CHAIN): $CHAIN_MIRRORED_TOKEN"
echo "L1 bridge: $CHAIN138_L1_BRIDGE"
echo "L2 bridge: $CHAIN_L2_BRIDGE"
if [[ -n "$selector" ]]; then
echo "Chain selector: $selector"
fi
echo "Mode: $([[ "$EXECUTE" == "true" ]] && echo "execute" || echo "dry-run")"
echo
echo "Current state:"
echo " L2 canonicalToMirrored: $(read_struct_field "$pair_current")"
echo " L1 destinations(): $(read_struct_field "$l1_dest_current")"
echo " L2 destinations(138): $(read_struct_field "$l2_dest_current")"
echo " L1 allowlisted: ${allowlisted_current:-unavailable}"
if [[ -n "$MAX_OUTSTANDING" ]]; then
current_max="$(cast call "$CHAIN138_L1_BRIDGE" "maxOutstanding(address,uint64)(uint256)" "$canonical_address" "${selector:-0}" --rpc-url "$RPC_URL_138" 2>/dev/null || true)"
echo " L1 maxOutstanding: ${current_max:-unavailable}"
fi
echo " L2 tokenPairFrozen: ${freeze_pair_current:-unavailable}"
echo " L2 destinationFrozen: ${freeze_dest_current:-unavailable}"
echo
echo "Planned actions:"
echo "1. Configure L2 token pair"
send_cast "$CHAIN_RPC" "$CHAIN_L2_BRIDGE" "configureTokenPair(address,address)" "$canonical_address" "$CHAIN_MIRRORED_TOKEN"
if [[ "$CHAIN" != "ALL" ]]; then
echo "2. Configure L1 destination to L2 bridge"
send_cast "$RPC_URL_138" "$CHAIN138_L1_BRIDGE" "configureDestination(address,uint64,address,bool)" "$canonical_address" "$selector" "$CHAIN_L2_BRIDGE" true
echo "3. Configure L2 destination back to Chain 138"
send_cast "$CHAIN_RPC" "$CHAIN_L2_BRIDGE" "configureDestination(uint64,address,bool)" 138 "$CHAIN138_L1_BRIDGE" true
else
echo "2. ALL Mainnet uses a non-CCIP adapter lane; skipping selector-driven CWMultiTokenBridge destination writes."
fi
echo "4. Allowlist canonical token on L1"
send_cast "$RPC_URL_138" "$CHAIN138_L1_BRIDGE" "configureSupportedCanonicalToken(address,bool)" "$canonical_address" true
if [[ -n "$MAX_OUTSTANDING" && "$CHAIN" != "ALL" ]]; then
echo "5. Set L1 maxOutstanding"
send_cast "$RPC_URL_138" "$CHAIN138_L1_BRIDGE" "setMaxOutstanding(address,uint64,uint256)" "$canonical_address" "$selector" "$MAX_OUTSTANDING"
fi
if $FREEZE && [[ "$CHAIN" != "ALL" ]]; then
echo "6. Freeze L2 token pair"
send_cast "$CHAIN_RPC" "$CHAIN_L2_BRIDGE" "freezeTokenPair(address)" "$canonical_address"
echo "7. Freeze L2 destination back to Chain 138"
send_cast "$CHAIN_RPC" "$CHAIN_L2_BRIDGE" "freezeDestination(uint64)" 138
fi
echo
if ! $EXECUTE; then
echo "Dry-run only. Re-run with --execute to broadcast."
fi
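The embedded node snippet's lookup against `config/token-mapping-multichain.json` can be mirrored in Python for quick offline checks. A sketch under the same schema assumptions (pairs keyed by `fromChainId`/`toChainId`, tokens keyed by `key`); the inline sample stands in for the real config:

```python
import json

def resolve_pair(cfg, chain_id, mapping_key):
    # find the 138 -> chain_id pair, then the token entry by mapping key
    pair = next((p for p in cfg.get("pairs", [])
                 if p.get("fromChainId") == 138 and p.get("toChainId") == chain_id), None)
    if pair is None:
        raise LookupError(f"no 138 -> {chain_id} pair")
    token = next((t for t in pair.get("tokens", []) if t.get("key") == mapping_key), None)
    if token is None:
        raise LookupError(f"no token entry for {mapping_key}")
    return token.get("addressFrom", ""), token.get("addressTo", "")

cfg = json.loads('{"pairs": [{"fromChainId": 138, "toChainId": 1, "tokens":'
                 ' [{"key": "Compliant_USDT_cW", "addressFrom": "0xA", "addressTo": "0xB"}]}]}')
print(resolve_pair(cfg, 1, "Compliant_USDT_cW"))  # ('0xA', '0xB')
```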

View File

@@ -26,7 +26,7 @@ set +a
RPC="${RPC_URL_138:-http://192.168.11.211:8545}"
GAS_PRICE="${GAS_PRICE_138:-${GAS_PRICE:-1000000000}}"
export DODO_PMM_INTEGRATION="${DODO_PMM_INTEGRATION_ADDRESS:-${DODO_PMM_INTEGRATION:-0x5BDc62f1ae7D630c37A8B363a1d49845356Ee72d}}"
export DODO_PMM_INTEGRATION="${DODO_PMM_INTEGRATION_ADDRESS:-${DODO_PMM_INTEGRATION:-0x86ADA6Ef91A3B450F89f2b751e93B1b7A3218895}}"
export RPC_URL_138="$RPC"
cd "$SMOM"

View File

@@ -0,0 +1,27 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
SMOM_DIR="${PROJECT_ROOT}/smom-dbis-138"
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi
RPC_URL="${RPC_URL_138:-${CHAIN138_RPC_URL:-http://192.168.11.211:8545}}"
GAS_PRICE_WEI="${CHAIN138_DEPLOY_GAS_PRICE_WEI:-1000}"
BROADCAST_ARGS=()
if [[ "${1:-}" == "--broadcast" ]]; then
BROADCAST_ARGS+=(--broadcast)
fi
cd "${SMOM_DIR}"
forge script script/bridge/trustless/DeployChain138PilotDexVenues.s.sol:DeployChain138PilotDexVenues \
--rpc-url "${RPC_URL}" \
--legacy \
--slow \
--with-gas-price "${GAS_PRICE_WEI}" \
"${BROADCAST_ARGS[@]}"

View File

@@ -0,0 +1,108 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
CORE_REPO="${CHAIN138_UNISWAP_V3_NATIVE_CORE_REPO:-/home/intlc/projects/uniswap-v3-core}"
PERIPHERY_REPO="${CHAIN138_UNISWAP_V3_NATIVE_PERIPHERY_REPO:-/home/intlc/projects/uniswap-v3-periphery}"
RPC_URL="${CHAIN138_UNISWAP_V3_UPSTREAM_RPC_URL:-http://192.168.11.221:8545}"
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd jq
require_cmd npm
require_cmd npx
if [[ -z "${PRIVATE_KEY:-}" ]]; then
echo "[fail] PRIVATE_KEY is required" >&2
exit 1
fi
echo "== Chain 138 upstream-native Uniswap v3 deployment =="
echo "RPC: ${RPC_URL}"
echo "Core repo: ${CORE_REPO}"
echo "Periphery repo: ${PERIPHERY_REPO}"
echo
echo "1. Installing dependencies..."
(
cd "${CORE_REPO}"
npm install
)
(
cd "${PERIPHERY_REPO}"
npm install
)
echo
echo "2. Compiling..."
(
cd "${CORE_REPO}"
npx hardhat compile
)
(
cd "${PERIPHERY_REPO}"
npx hardhat compile
)
echo
echo "3. Deploying factory..."
(
cd "${CORE_REPO}"
npx hardhat run scripts/chain138/deploy-factory.ts --network chain138
)
FACTORY_JSON="${CORE_REPO}/deployments/chain138/factory.json"
export CHAIN138_UNISWAP_V3_NATIVE_FACTORY
CHAIN138_UNISWAP_V3_NATIVE_FACTORY="$(jq -r '.factory' "${FACTORY_JSON}")"
echo " factory=${CHAIN138_UNISWAP_V3_NATIVE_FACTORY}"
echo
echo "4. Deploying periphery..."
(
cd "${PERIPHERY_REPO}"
npx hardhat run scripts/chain138/deploy-periphery.ts --network chain138
)
PERIPHERY_JSON="${PERIPHERY_REPO}/deployments/chain138/periphery.json"
export CHAIN138_UNISWAP_V3_NATIVE_POSITION_MANAGER
export CHAIN138_UNISWAP_V3_NATIVE_SWAP_ROUTER
export CHAIN138_UNISWAP_V3_NATIVE_QUOTER_V2
CHAIN138_UNISWAP_V3_NATIVE_POSITION_MANAGER="$(jq -r '.positionManager' "${PERIPHERY_JSON}")"
CHAIN138_UNISWAP_V3_NATIVE_SWAP_ROUTER="$(jq -r '.swapRouter' "${PERIPHERY_JSON}")"
CHAIN138_UNISWAP_V3_NATIVE_QUOTER_V2="$(jq -r '.quoterV2' "${PERIPHERY_JSON}")"
echo " positionManager=${CHAIN138_UNISWAP_V3_NATIVE_POSITION_MANAGER}"
echo " swapRouter=${CHAIN138_UNISWAP_V3_NATIVE_SWAP_ROUTER}"
echo " quoterV2=${CHAIN138_UNISWAP_V3_NATIVE_QUOTER_V2}"
echo
echo "5. Creating and seeding canonical pools..."
(
cd "${PERIPHERY_REPO}"
npx hardhat run scripts/chain138/create-seed-pools.ts --network chain138
)
POOLS_JSON="${PERIPHERY_REPO}/deployments/chain138/pools.json"
export CHAIN138_UNISWAP_V3_NATIVE_WETH_USDT_POOL
export CHAIN138_UNISWAP_V3_NATIVE_WETH_USDC_POOL
CHAIN138_UNISWAP_V3_NATIVE_WETH_USDT_POOL="$(jq -r '.wethUsdt.pool' "${POOLS_JSON}")"
CHAIN138_UNISWAP_V3_NATIVE_WETH_USDC_POOL="$(jq -r '.wethUsdc.pool' "${POOLS_JSON}")"
echo
echo "== deployed addresses =="
echo "CHAIN138_UNISWAP_V3_NATIVE_FACTORY=${CHAIN138_UNISWAP_V3_NATIVE_FACTORY}"
echo "CHAIN138_UNISWAP_V3_NATIVE_POSITION_MANAGER=${CHAIN138_UNISWAP_V3_NATIVE_POSITION_MANAGER}"
echo "CHAIN138_UNISWAP_V3_NATIVE_SWAP_ROUTER=${CHAIN138_UNISWAP_V3_NATIVE_SWAP_ROUTER}"
echo "CHAIN138_UNISWAP_V3_NATIVE_QUOTER_V2=${CHAIN138_UNISWAP_V3_NATIVE_QUOTER_V2}"
echo "CHAIN138_UNISWAP_V3_NATIVE_WETH_USDT_POOL=${CHAIN138_UNISWAP_V3_NATIVE_WETH_USDT_POOL}"
echo "CHAIN138_UNISWAP_V3_NATIVE_WETH_USDC_POOL=${CHAIN138_UNISWAP_V3_NATIVE_WETH_USDC_POOL}"
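The jq extractions above can be cross-checked in Python when auditing the hardhat deployment artifacts. A sketch with an inline sample in place of `deployments/chain138/periphery.json`:

```python
import json

ENV_KEYS = {
    "positionManager": "CHAIN138_UNISWAP_V3_NATIVE_POSITION_MANAGER",
    "swapRouter": "CHAIN138_UNISWAP_V3_NATIVE_SWAP_ROUTER",
    "quoterV2": "CHAIN138_UNISWAP_V3_NATIVE_QUOTER_V2",
}

def env_lines(periphery):
    # raises KeyError if the artifact is missing a field, rather than
    # silently emitting "null" the way `jq -r` would
    return [f"{env}={periphery[k]}" for k, env in ENV_KEYS.items()]

sample = json.loads('{"positionManager": "0x1", "swapRouter": "0x2", "quoterV2": "0x3"}')
print("\n".join(env_lines(sample)))
```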

View File

@@ -0,0 +1,76 @@
#!/usr/bin/env bash
# Deploy Chain 138 cross-chain flash infrastructure via Foundry:
# - UniversalCCIPFlashBridgeAdapter
# - CrossChainFlashRepayReceiver
# - CrossChainFlashVaultCreditReceiver
#
# Requires:
# PRIVATE_KEY
# RPC_URL_138
# UNIVERSAL_CCIP_BRIDGE or FLASH_UNIVERSAL_CCIP_BRIDGE
# CCIP_ROUTER / CCIP_ROUTER_ADDRESS / CCIP_ROUTER_CHAIN138 or FLASH_CCIP_ROUTER
#
# Optional:
# FLASH_REPAY_RECEIVER_ROUTER
# FLASH_VAULT_CREDIT_ROUTER
#
# Usage:
# source scripts/lib/load-project-env.sh
# ./scripts/deployment/deploy-cross-chain-flash-infra-chain138.sh
# ./scripts/deployment/deploy-cross-chain-flash-infra-chain138.sh --dry-run
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM="$PROJECT_ROOT/smom-dbis-138"
if [[ -f "$PROJECT_ROOT/scripts/lib/load-project-env.sh" ]]; then
# shellcheck disable=SC1091
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" 2>/dev/null || true
fi
RPC="${RPC_URL_138:-}"
if [[ -z "$RPC" ]]; then
echo "ERROR: Set RPC_URL_138 (Core Chain 138 RPC)." >&2
exit 1
fi
if [[ -z "${PRIVATE_KEY:-}" ]]; then
echo "ERROR: PRIVATE_KEY must be set." >&2
exit 1
fi
BRIDGE="${FLASH_UNIVERSAL_CCIP_BRIDGE:-${UNIVERSAL_CCIP_BRIDGE:-}}"
if [[ -z "$BRIDGE" ]]; then
echo "ERROR: Set FLASH_UNIVERSAL_CCIP_BRIDGE or UNIVERSAL_CCIP_BRIDGE." >&2
exit 1
fi
ROUTER="${FLASH_CCIP_ROUTER:-${CCIP_ROUTER:-${CCIP_ROUTER_ADDRESS:-${CCIP_ROUTER_CHAIN138:-}}}}"
if [[ -z "$ROUTER" ]]; then
echo "ERROR: Set FLASH_CCIP_ROUTER or one of CCIP_ROUTER / CCIP_ROUTER_ADDRESS / CCIP_ROUTER_CHAIN138." >&2
exit 1
fi
cd "$SMOM"
ARGS=()
if [[ "${1:-}" == "--dry-run" ]]; then
ARGS=(-vvvv)
else
ARGS=(--broadcast --with-gas-price 1000000000 --gas-estimate-multiplier 150 -vvvv)
fi
echo "RPC_URL_138=$RPC"
echo "FLASH_UNIVERSAL_CCIP_BRIDGE=$BRIDGE"
echo "FLASH_CCIP_ROUTER=$ROUTER"
if [[ -n "${FLASH_REPAY_RECEIVER_ROUTER:-}" ]]; then
echo "FLASH_REPAY_RECEIVER_ROUTER=${FLASH_REPAY_RECEIVER_ROUTER}"
fi
if [[ -n "${FLASH_VAULT_CREDIT_ROUTER:-}" ]]; then
echo "FLASH_VAULT_CREDIT_ROUTER=${FLASH_VAULT_CREDIT_ROUTER}"
fi
exec forge script script/deploy/DeployCrossChainFlashInfrastructure.s.sol:DeployCrossChainFlashInfrastructure \
--rpc-url "$RPC" "${ARGS[@]}"

View File

@@ -0,0 +1,136 @@
#!/usr/bin/env bash
# Deploy fresh source-aligned GRU c* V2 USD tokens to Chain 138 and stage them in UniversalAssetRegistry.
#
# Usage:
# bash scripts/deployment/deploy-gru-v2-chain138.sh
# bash scripts/deployment/deploy-gru-v2-chain138.sh --dry-run
#
# Defaults:
# - INITIAL_SUPPLY=0
# - FORWARD_CANONICAL=1
# - REGISTER_IN_GRU=1
# - GOVERNANCE_CONTROLLER from env or canonical Chain 138 inventory
# - per-token metadata/disclosure/reporting URIs can be exported from
# scripts/deployment/publish-gru-v2-chain138-metadata-to-ipfs.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="${PROJECT_ROOT}/smom-dbis-138"
DRY_RUN=0
for arg in "$@"; do
case "$arg" in
--dry-run) DRY_RUN=1 ;;
*)
echo "Unknown argument: $arg" >&2
exit 2
;;
esac
done
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
set +eu
# shellcheck source=../lib/load-project-env.sh
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
set -euo pipefail
fi
: "${PRIVATE_KEY:?PRIVATE_KEY is required}"
RPC_URL_138="${RPC_URL_138:-http://192.168.11.211:8545}"
UNIVERSAL_ASSET_REGISTRY="${UNIVERSAL_ASSET_REGISTRY:-0xAEE4b7fBe82E1F8295951584CBc772b8BBD68575}"
GOVERNANCE_CONTROLLER="${GOVERNANCE_CONTROLLER:-0xA6891D5229f2181a34D4FF1B515c3Aa37dd90E0e}"
INITIAL_SUPPLY="${INITIAL_SUPPLY:-0}"
FORWARD_CANONICAL="${FORWARD_CANONICAL:-1}"
REGISTER_IN_GRU="${REGISTER_IN_GRU:-1}"
DEPLOY_CUSDT_V2="${DEPLOY_CUSDT_V2:-1}"
DEPLOY_CUSDC_V2="${DEPLOY_CUSDC_V2:-1}"
GAS_PRICE_WEI="${GAS_PRICE_WEI:-1000000000}"
if ! command -v cast >/dev/null 2>&1; then
echo "cast is required for nonce preflight checks" >&2
exit 1
fi
deployer_address="$(cast wallet address --private-key "$PRIVATE_KEY" 2>/dev/null)"
latest_nonce="$(cast rpc --rpc-url "$RPC_URL_138" eth_getTransactionCount "$deployer_address" latest | tr -d '\"')"
pending_nonce="$(cast rpc --rpc-url "$RPC_URL_138" eth_getTransactionCount "$deployer_address" pending | tr -d '\"')"
if [[ "$latest_nonce" != "$pending_nonce" ]]; then
echo "Refusing GRU v2 deploy: deployer has pending transactions." >&2
echo "deployer=$deployer_address latest_nonce=$latest_nonce pending_nonce=$pending_nonce" >&2
echo "Resolve or replace the pending transaction set before retrying." >&2
exit 1
fi
CMD=(
forge script script/deploy/DeployAndStageCompliantFiatTokensV2ForChain.s.sol
--rpc-url "$RPC_URL_138"
--broadcast
--private-key "$PRIVATE_KEY"
--with-gas-price "$GAS_PRICE_WEI"
)
echo "=== GRU v2 Chain 138 deploy ==="
echo "RPC_URL_138=$RPC_URL_138"
echo "UNIVERSAL_ASSET_REGISTRY=$UNIVERSAL_ASSET_REGISTRY"
echo "GOVERNANCE_CONTROLLER=$GOVERNANCE_CONTROLLER"
echo "INITIAL_SUPPLY=$INITIAL_SUPPLY"
echo "FORWARD_CANONICAL=$FORWARD_CANONICAL"
echo "REGISTER_IN_GRU=$REGISTER_IN_GRU"
echo "DEPLOY_CUSDT_V2=$DEPLOY_CUSDT_V2"
echo "DEPLOY_CUSDC_V2=$DEPLOY_CUSDC_V2"
echo "DEPLOYER=$deployer_address"
echo "NONCE_LATEST=$latest_nonce"
echo "NONCE_PENDING=$pending_nonce"
echo "CUSDT_V2_TOKEN_URI=${CUSDT_V2_TOKEN_URI:-}"
echo "CUSDC_V2_TOKEN_URI=${CUSDC_V2_TOKEN_URI:-}"
if [[ "$DRY_RUN" == "1" ]]; then
printf '[DRY_RUN] (cd %s && ' "$SMOM_ROOT"
printf 'INITIAL_SUPPLY=%q FORWARD_CANONICAL=%q REGISTER_IN_GRU=%q GOVERNANCE_CONTROLLER=%q UNIVERSAL_ASSET_REGISTRY=%q DEPLOY_CUSDT_V2=%q DEPLOY_CUSDC_V2=%q GAS_PRICE_WEI=%q ' \
"$INITIAL_SUPPLY" "$FORWARD_CANONICAL" "$REGISTER_IN_GRU" "$GOVERNANCE_CONTROLLER" "$UNIVERSAL_ASSET_REGISTRY" "$DEPLOY_CUSDT_V2" "$DEPLOY_CUSDC_V2" "$GAS_PRICE_WEI"
printf 'CUSDT_V2_TOKEN_URI=%q CUSDT_V2_REGULATORY_DISCLOSURE_URI=%q CUSDT_V2_REPORTING_URI=%q ' \
"${CUSDT_V2_TOKEN_URI:-}" "${CUSDT_V2_REGULATORY_DISCLOSURE_URI:-}" "${CUSDT_V2_REPORTING_URI:-}"
printf 'CUSDC_V2_TOKEN_URI=%q CUSDC_V2_REGULATORY_DISCLOSURE_URI=%q CUSDC_V2_REPORTING_URI=%q ' \
"${CUSDC_V2_TOKEN_URI:-}" "${CUSDC_V2_REGULATORY_DISCLOSURE_URI:-}" "${CUSDC_V2_REPORTING_URI:-}"
printed=()
skip_next=0
for arg in "${CMD[@]}"; do
if [[ "$skip_next" == "1" ]]; then
printed+=("<hidden>")
skip_next=0
continue
fi
if [[ "$arg" == "--private-key" ]]; then
printed+=("$arg")
skip_next=1
continue
fi
printed+=("$arg")
done
printf '%q ' "${printed[@]}"
printf ')\n'
exit 0
fi
(
cd "$SMOM_ROOT"
INITIAL_SUPPLY="$INITIAL_SUPPLY" \
FORWARD_CANONICAL="$FORWARD_CANONICAL" \
REGISTER_IN_GRU="$REGISTER_IN_GRU" \
DEPLOY_CUSDT_V2="$DEPLOY_CUSDT_V2" \
DEPLOY_CUSDC_V2="$DEPLOY_CUSDC_V2" \
GOVERNANCE_CONTROLLER="$GOVERNANCE_CONTROLLER" \
UNIVERSAL_ASSET_REGISTRY="$UNIVERSAL_ASSET_REGISTRY" \
GAS_PRICE_WEI="$GAS_PRICE_WEI" \
CUSDT_V2_TOKEN_URI="${CUSDT_V2_TOKEN_URI:-}" \
CUSDT_V2_REGULATORY_DISCLOSURE_URI="${CUSDT_V2_REGULATORY_DISCLOSURE_URI:-}" \
CUSDT_V2_REPORTING_URI="${CUSDT_V2_REPORTING_URI:-}" \
CUSDC_V2_TOKEN_URI="${CUSDC_V2_TOKEN_URI:-}" \
CUSDC_V2_REGULATORY_DISCLOSURE_URI="${CUSDC_V2_REGULATORY_DISCLOSURE_URI:-}" \
CUSDC_V2_REPORTING_URI="${CUSDC_V2_REPORTING_URI:-}" \
"${CMD[@]}"
)
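The DRY_RUN branches above redact the private key before echoing the forge command. That masking step can be sketched standalone; the `CMD` contents below are illustrative only, not a real deployment command:

```shell
#!/usr/bin/env bash
# Sketch: print a command array with the value following --private-key hidden.
set -euo pipefail

CMD=(forge script Deploy.s.sol --rpc-url http://localhost:8545 --private-key 0xdeadbeef --broadcast)

printed=()
skip_next=0
for arg in "${CMD[@]}"; do
  if (( skip_next == 1 )); then
    printed+=("<hidden>")   # replace the secret itself
    skip_next=0
    continue
  fi
  if [[ "$arg" == "--private-key" ]]; then
    skip_next=1             # hide the NEXT token, keep the flag
  fi
  printed+=("$arg")
done

printf '%q ' "${printed[@]}"
printf '\n'
```

The flag itself is kept so the printed command still shows which argument was redacted.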


@@ -0,0 +1,155 @@
#!/usr/bin/env bash
# Deploy a generic GRU c* V2 asset to Chain 138 and optionally stage it in UniversalAssetRegistry.
#
# Usage:
# TOKEN_NAME="Euro Coin (Compliant V2)" TOKEN_SYMBOL="cEURC" CURRENCY_CODE=EUR \
# REGISTRY_NAME="Euro Coin (Compliant V2)" REGISTRY_SYMBOL="cEURC.v2" \
# bash scripts/deployment/deploy-gru-v2-generic-chain138.sh
# ... --dry-run
#
# Required env:
# PRIVATE_KEY
# TOKEN_NAME
# TOKEN_SYMBOL
# CURRENCY_CODE
#
# Optional env:
# TOKEN_DECIMALS=6
# VERSION_TAG=2
# INITIAL_SUPPLY=0
# FORWARD_CANONICAL=1
# REGISTER_IN_GRU=1
# GOVERNANCE_CONTROLLER, UNIVERSAL_ASSET_REGISTRY
# REGISTRY_NAME, REGISTRY_SYMBOL
# TOKEN_URI, REGULATORY_DISCLOSURE_URI, REPORTING_URI
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="${PROJECT_ROOT}/smom-dbis-138"
DRY_RUN=0
for arg in "$@"; do
case "$arg" in
--dry-run) DRY_RUN=1 ;;
*)
echo "Unknown argument: $arg" >&2
exit 2
;;
esac
done
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
set +eu
# shellcheck source=../lib/load-project-env.sh
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
set -euo pipefail
fi
: "${PRIVATE_KEY:?PRIVATE_KEY is required}"
: "${TOKEN_NAME:?TOKEN_NAME is required}"
: "${TOKEN_SYMBOL:?TOKEN_SYMBOL is required}"
: "${CURRENCY_CODE:?CURRENCY_CODE is required}"
RPC_URL_138="${RPC_URL_138:-http://192.168.11.211:8545}"
UNIVERSAL_ASSET_REGISTRY="${UNIVERSAL_ASSET_REGISTRY:-0xAEE4b7fBe82E1F8295951584CBc772b8BBD68575}"
GOVERNANCE_CONTROLLER="${GOVERNANCE_CONTROLLER:-0xA6891D5229f2181a34D4FF1B515c3Aa37dd90E0e}"
TOKEN_DECIMALS="${TOKEN_DECIMALS:-6}"
VERSION_TAG="${VERSION_TAG:-2}"
INITIAL_SUPPLY="${INITIAL_SUPPLY:-0}"
FORWARD_CANONICAL="${FORWARD_CANONICAL:-1}"
REGISTER_IN_GRU="${REGISTER_IN_GRU:-1}"
REGISTRY_NAME="${REGISTRY_NAME:-$TOKEN_NAME}"
REGISTRY_SYMBOL="${REGISTRY_SYMBOL:-${TOKEN_SYMBOL}.v${VERSION_TAG}}"
GAS_PRICE_WEI="${GAS_PRICE_WEI:-1000000000}"
if ! command -v cast >/dev/null 2>&1; then
echo "cast is required for nonce preflight checks" >&2
exit 1
fi
deployer_address="$(cast wallet address --private-key "$PRIVATE_KEY" 2>/dev/null)"
latest_nonce="$(cast rpc --rpc-url "$RPC_URL_138" eth_getTransactionCount "$deployer_address" latest | tr -d '\"')"
pending_nonce="$(cast rpc --rpc-url "$RPC_URL_138" eth_getTransactionCount "$deployer_address" pending | tr -d '\"')"
if [[ "$latest_nonce" != "$pending_nonce" ]]; then
echo "Refusing generic GRU v2 deploy: deployer has pending transactions." >&2
echo "deployer=$deployer_address latest_nonce=$latest_nonce pending_nonce=$pending_nonce" >&2
exit 1
fi
CMD=(
forge script script/deploy/DeployAndStageGenericCompliantFiatTokenV2ForChain.s.sol
--rpc-url "$RPC_URL_138"
--broadcast
--private-key "$PRIVATE_KEY"
--with-gas-price "$GAS_PRICE_WEI"
)
echo "=== Generic GRU v2 Chain 138 deploy ==="
echo "RPC_URL_138=$RPC_URL_138"
echo "TOKEN_NAME=$TOKEN_NAME"
echo "TOKEN_SYMBOL=$TOKEN_SYMBOL"
echo "CURRENCY_CODE=$CURRENCY_CODE"
echo "TOKEN_DECIMALS=$TOKEN_DECIMALS"
echo "VERSION_TAG=$VERSION_TAG"
echo "REGISTRY_NAME=$REGISTRY_NAME"
echo "REGISTRY_SYMBOL=$REGISTRY_SYMBOL"
echo "UNIVERSAL_ASSET_REGISTRY=$UNIVERSAL_ASSET_REGISTRY"
echo "GOVERNANCE_CONTROLLER=$GOVERNANCE_CONTROLLER"
echo "INITIAL_SUPPLY=$INITIAL_SUPPLY"
echo "FORWARD_CANONICAL=$FORWARD_CANONICAL"
echo "REGISTER_IN_GRU=$REGISTER_IN_GRU"
echo "DEPLOYER=$deployer_address"
echo "NONCE_LATEST=$latest_nonce"
echo "NONCE_PENDING=$pending_nonce"
if [[ "$DRY_RUN" == "1" ]]; then
printf '[DRY_RUN] (cd %s && ' "$SMOM_ROOT"
printf 'TOKEN_NAME=%q TOKEN_SYMBOL=%q CURRENCY_CODE=%q TOKEN_DECIMALS=%q VERSION_TAG=%q ' \
"$TOKEN_NAME" "$TOKEN_SYMBOL" "$CURRENCY_CODE" "$TOKEN_DECIMALS" "$VERSION_TAG"
printf 'INITIAL_SUPPLY=%q FORWARD_CANONICAL=%q REGISTER_IN_GRU=%q GOVERNANCE_CONTROLLER=%q UNIVERSAL_ASSET_REGISTRY=%q REGISTRY_NAME=%q REGISTRY_SYMBOL=%q GAS_PRICE_WEI=%q ' \
"$INITIAL_SUPPLY" "$FORWARD_CANONICAL" "$REGISTER_IN_GRU" "$GOVERNANCE_CONTROLLER" "$UNIVERSAL_ASSET_REGISTRY" "$REGISTRY_NAME" "$REGISTRY_SYMBOL" "$GAS_PRICE_WEI"
printf 'TOKEN_URI=%q REGULATORY_DISCLOSURE_URI=%q REPORTING_URI=%q ' \
"${TOKEN_URI:-}" "${REGULATORY_DISCLOSURE_URI:-}" "${REPORTING_URI:-}"
printed=()
skip_next=0
for arg in "${CMD[@]}"; do
if [[ "$skip_next" == "1" ]]; then
printed+=("<hidden>")
skip_next=0
continue
fi
if [[ "$arg" == "--private-key" ]]; then
printed+=("$arg")
skip_next=1
continue
fi
printed+=("$arg")
done
printf '%q ' "${printed[@]}"
printf ')\n'
exit 0
fi
(
cd "$SMOM_ROOT"
TOKEN_NAME="$TOKEN_NAME" \
TOKEN_SYMBOL="$TOKEN_SYMBOL" \
CURRENCY_CODE="$CURRENCY_CODE" \
TOKEN_DECIMALS="$TOKEN_DECIMALS" \
VERSION_TAG="$VERSION_TAG" \
INITIAL_SUPPLY="$INITIAL_SUPPLY" \
FORWARD_CANONICAL="$FORWARD_CANONICAL" \
REGISTER_IN_GRU="$REGISTER_IN_GRU" \
GOVERNANCE_CONTROLLER="$GOVERNANCE_CONTROLLER" \
UNIVERSAL_ASSET_REGISTRY="$UNIVERSAL_ASSET_REGISTRY" \
REGISTRY_NAME="$REGISTRY_NAME" \
REGISTRY_SYMBOL="$REGISTRY_SYMBOL" \
GAS_PRICE_WEI="$GAS_PRICE_WEI" \
TOKEN_URI="${TOKEN_URI:-}" \
REGULATORY_DISCLOSURE_URI="${REGULATORY_DISCLOSURE_URI:-}" \
REPORTING_URI="${REPORTING_URI:-}" \
"${CMD[@]}"
)
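The `REGISTRY_SYMBOL` default above (token symbol plus `.v<tag>` unless the caller overrides it) is plain bash parameter expansion; a minimal sketch with illustrative values:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Derive REGISTRY_SYMBOL from TOKEN_SYMBOL and VERSION_TAG unless the caller exported one.
TOKEN_SYMBOL="cEURC"   # illustrative values, not a real deployment
VERSION_TAG="2"
unset REGISTRY_SYMBOL
REGISTRY_SYMBOL="${REGISTRY_SYMBOL:-${TOKEN_SYMBOL}.v${VERSION_TAG}}"
echo "$REGISTRY_SYMBOL"
```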


@@ -0,0 +1,170 @@
#!/usr/bin/env bash
set -euo pipefail
# Deploy Mainnet flash quote-push stack (Aave receiver + external unwinder).
# Default: simulation only (no --broadcast). Use --apply to broadcast.
#
# Env:
# PRIVATE_KEY, ETHEREUM_MAINNET_RPC
# DODO_PMM_INTEGRATION_MAINNET required when QUOTE_PUSH_UNWINDER_TYPE=dodo
# UNISWAP_V3_SWAP_ROUTER_MAINNET optional for univ3; defaults to legacy SwapRouter `0xE592...`
# QUOTE_PUSH_UNWINDER_TYPE univ3 (default) | dodo | two_hop_dodo | dodo_univ3
#
# Usage:
# source scripts/lib/load-project-env.sh
# bash scripts/deployment/deploy-mainnet-aave-quote-push-stack.sh --dry-run
# bash scripts/deployment/deploy-mainnet-aave-quote-push-stack.sh --apply
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROXMOX_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
SMOM="${PROXMOX_ROOT}/smom-dbis-138"
# Preserve explicit caller overrides across sourced repo env files.
_qp_private_key="${PRIVATE_KEY-}"
_qp_rpc="${ETHEREUM_MAINNET_RPC-}"
_qp_unwind_type="${QUOTE_PUSH_UNWINDER_TYPE-}"
_qp_integration="${DODO_PMM_INTEGRATION_MAINNET-}"
_qp_router="${UNISWAP_V3_SWAP_ROUTER_MAINNET-}"
_qp_skip_receiver="${DEPLOY_MAINNET_FLASH_SKIP_RECEIVER-}"
# shellcheck disable=SC1091
source "${PROXMOX_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
# shellcheck disable=SC1091
source "${SMOM}/scripts/load-env.sh" >/dev/null 2>&1 || true
[[ -n "$_qp_private_key" ]] && export PRIVATE_KEY="$_qp_private_key"
[[ -n "$_qp_rpc" ]] && export ETHEREUM_MAINNET_RPC="$_qp_rpc"
[[ -n "$_qp_unwind_type" ]] && export QUOTE_PUSH_UNWINDER_TYPE="$_qp_unwind_type"
[[ -n "$_qp_integration" ]] && export DODO_PMM_INTEGRATION_MAINNET="$_qp_integration"
[[ -n "$_qp_router" ]] && export UNISWAP_V3_SWAP_ROUTER_MAINNET="$_qp_router"
[[ -n "$_qp_skip_receiver" ]] && export DEPLOY_MAINNET_FLASH_SKIP_RECEIVER="$_qp_skip_receiver"
unset _qp_private_key _qp_rpc _qp_unwind_type _qp_integration _qp_router _qp_skip_receiver
pick_latest_create_address() {
local script_name="$1"
local contract_name="$2"
local latest_json="${SMOM}/broadcast/${script_name}/1/run-latest.json"
if [[ ! -f "$latest_json" ]] || ! command -v jq >/dev/null 2>&1; then
return 1
fi
jq -r --arg contract "$contract_name" \
'.transactions[]? | select(.transactionType == "CREATE" and .contractName == $contract) | .contractAddress' \
"$latest_json" | tail -n1
}
BROADCAST=()
MODE="dry-run"
for a in "$@"; do
case "$a" in
--apply) BROADCAST=(--broadcast); MODE=apply ;;
--dry-run) BROADCAST=(); MODE=dry-run ;;
*)
echo "[fail] unknown arg: $a (use --dry-run or --apply)" >&2
exit 2
;;
esac
done
if [[ -z "${ETHEREUM_MAINNET_RPC:-}" || -z "${PRIVATE_KEY:-}" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC and PRIVATE_KEY are required" >&2
exit 1
fi
UNW="${QUOTE_PUSH_UNWINDER_TYPE:-univ3}"
echo "mode=$MODE unwinder_type=$UNW"
if [[ "${DEPLOY_MAINNET_FLASH_SKIP_RECEIVER:-}" == "1" ]]; then
echo "Step 1/2: skipped (DEPLOY_MAINNET_FLASH_SKIP_RECEIVER=1; reuse AAVE_QUOTE_PUSH_RECEIVER_MAINNET)"
else
echo "Step 1/2: DeployAaveQuotePushFlashReceiver"
(
cd "$SMOM"
forge script script/deploy/DeployAaveQuotePushFlashReceiver.s.sol:DeployAaveQuotePushFlashReceiver \
--rpc-url "$ETHEREUM_MAINNET_RPC" \
"${BROADCAST[@]}" \
-vvvv
)
fi
if [[ "$UNW" == "dodo" ]]; then
if [[ -z "${DODO_PMM_INTEGRATION_MAINNET:-}" ]]; then
echo "[fail] DODO_PMM_INTEGRATION_MAINNET required for dodo unwinder" >&2
exit 1
fi
echo "Step 2/2: DeployDODOIntegrationExternalUnwinder"
(
cd "$SMOM"
forge script script/deploy/DeployDODOIntegrationExternalUnwinder.s.sol:DeployDODOIntegrationExternalUnwinder \
--rpc-url "$ETHEREUM_MAINNET_RPC" \
"${BROADCAST[@]}" \
-vvvv
)
elif [[ "$UNW" == "two_hop_dodo" ]]; then
if [[ -z "${DODO_PMM_INTEGRATION_MAINNET:-}" ]]; then
echo "[fail] DODO_PMM_INTEGRATION_MAINNET required for two_hop_dodo unwinder" >&2
exit 1
fi
echo "Step 2/2: DeployTwoHopDodoIntegrationUnwinder"
(
cd "$SMOM"
forge script script/deploy/DeployTwoHopDodoIntegrationUnwinder.s.sol:DeployTwoHopDodoIntegrationUnwinder \
--rpc-url "$ETHEREUM_MAINNET_RPC" \
"${BROADCAST[@]}" \
-vvvv
)
elif [[ "$UNW" == "dodo_univ3" ]]; then
if [[ -z "${DODO_PMM_INTEGRATION_MAINNET:-}" ]]; then
echo "[fail] DODO_PMM_INTEGRATION_MAINNET required for dodo_univ3 unwinder" >&2
exit 1
fi
echo "Step 2/2: DeployDODOToUniswapV3MultiHopExternalUnwinder"
(
cd "$SMOM"
forge script script/deploy/DeployDODOToUniswapV3MultiHopExternalUnwinder.s.sol:DeployDODOToUniswapV3MultiHopExternalUnwinder \
--rpc-url "$ETHEREUM_MAINNET_RPC" \
"${BROADCAST[@]}" \
-vvvv
)
else
echo "Step 2/2: DeployUniswapV3ExternalUnwinder"
(
cd "$SMOM"
forge script script/deploy/DeployUniswapV3ExternalUnwinder.s.sol:DeployUniswapV3ExternalUnwinder \
--rpc-url "$ETHEREUM_MAINNET_RPC" \
"${BROADCAST[@]}" \
-vvvv
)
fi
echo
receiver_addr="$(pick_latest_create_address "DeployAaveQuotePushFlashReceiver.s.sol" "AaveQuotePushFlashReceiver" || true)"
unwinder_contract="UniswapV3ExternalUnwinder"
unwinder_script="DeployUniswapV3ExternalUnwinder.s.sol"
case "$UNW" in
dodo)
unwinder_contract="DODOIntegrationExternalUnwinder"
unwinder_script="DeployDODOIntegrationExternalUnwinder.s.sol"
;;
two_hop_dodo)
unwinder_contract="TwoHopDodoIntegrationUnwinder"
unwinder_script="DeployTwoHopDodoIntegrationUnwinder.s.sol"
;;
dodo_univ3)
unwinder_contract="DODOToUniswapV3MultiHopExternalUnwinder"
unwinder_script="DeployDODOToUniswapV3MultiHopExternalUnwinder.s.sol"
;;
esac
unwinder_addr="$(pick_latest_create_address "$unwinder_script" "$unwinder_contract" || true)"
echo "After --apply: copy deployed addresses into .env:"
echo " AAVE_QUOTE_PUSH_RECEIVER_MAINNET=${receiver_addr:-...}"
echo " QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET=${unwinder_addr:-...}"
echo "Or rerun immediately with QUOTE_PUSH_UNWINDER_TYPE=${UNW} so run-mainnet-aave-cwusdc-quote-push-once.sh can auto-pick the latest broadcast unwinder."
echo "Then set FLASH_QUOTE_AMOUNT_RAW, UNWIND_MODE, UNWIND_V3_FEE_U24 (or UNWIND_DODO_POOL / two-hop vars) and run:"
echo " bash scripts/deployment/run-mainnet-aave-cwusdc-quote-push-once.sh --dry-run"
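`pick_latest_create_address` above shells out to `jq` against forge's `broadcast/<script>/1/run-latest.json`. The same lookup can be sketched with python3's stdlib JSON parser (useful where `jq` is unavailable); the file layout below is a minimal stand-in and the contract name is illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Build a minimal stand-in for forge's run-latest.json, then pull the last
# CREATE address recorded for a named contract.
tmp="$(mktemp)"
cat > "$tmp" <<'JSON'
{"transactions":[
  {"transactionType":"CALL","contractName":"Other","contractAddress":"0x1111111111111111111111111111111111111111"},
  {"transactionType":"CREATE","contractName":"UniswapV3ExternalUnwinder","contractAddress":"0x2222222222222222222222222222222222222222"}
]}
JSON
addr="$(python3 - "$tmp" UniswapV3ExternalUnwinder <<'PY'
import json, sys
path, wanted = sys.argv[1], sys.argv[2]
txs = json.load(open(path)).get("transactions", [])
hits = [t["contractAddress"] for t in txs
        if t.get("transactionType") == "CREATE" and t.get("contractName") == wanted]
print(hits[-1] if hits else "")
PY
)"
echo "$addr"
rm -f "$tmp"
```

As in the script, only the last matching CREATE wins, so reruns of the same deploy script pick up the most recent address.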


@@ -0,0 +1,256 @@
#!/usr/bin/env bash
set -euo pipefail
# Create and seed Mainnet DODO PMM pool cWUSDT (base) / cWUSDC (quote) at 1:1 initial price.
#
# One integration pool supports swaps in both directions; there is no second pool for the reverse.
# lpFeeRate defaults to 3 bps to match other public cW*/USD rails; k=0; TWAP off (same as wave-1 public stables).
#
# initialPrice uses DODO convention (1e18 = unity peg for same-decimal USD-linked stables).
#
# Examples:
# bash scripts/deployment/compute-mainnet-cwusdt-cwusdc-seed-amounts.sh --multiplier=50 --cap-raw=...
# bash scripts/deployment/deploy-mainnet-cwusdt-cwusdc-pool.sh \
# --initial-price=1000000000000000000 \
# --base-amount=<seed> --quote-amount=<seed> \
# --dry-run
#
# Registry: cross-chain-pmm-lps/config/deployment-status.json lists the live vault
# 0xe944b7Cb012A0820c07f54D51e92f0e1C74168DB once created; this script skips
# createPool when the integration already maps the pair to a pool, and only tops up
# liquidity with the amounts you pass.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || {
echo "[fail] unable to source smom-dbis-138/scripts/load-env.sh" >&2
exit 1
}
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
INITIAL_PRICE="1000000000000000000"
BASE_AMOUNT=""
QUOTE_AMOUNT=""
MINT_BASE_AMOUNT="0"
MINT_QUOTE_AMOUNT="0"
FEE_BPS="3"
K_VALUE="0"
OPEN_TWAP="false"
DRY_RUN=0
for arg in "$@"; do
case "$arg" in
--initial-price=*) INITIAL_PRICE="${arg#*=}" ;;
--base-amount=*) BASE_AMOUNT="${arg#*=}" ;;
--quote-amount=*) QUOTE_AMOUNT="${arg#*=}" ;;
--mint-base-amount=*) MINT_BASE_AMOUNT="${arg#*=}" ;;
--mint-quote-amount=*) MINT_QUOTE_AMOUNT="${arg#*=}" ;;
--fee-bps=*) FEE_BPS="${arg#*=}" ;;
--k=*) K_VALUE="${arg#*=}" ;;
--open-twap=*) OPEN_TWAP="${arg#*=}" ;;
--dry-run) DRY_RUN=1 ;;
-h | --help)
sed -n '1,19p' "$0"
exit 0
;;
*)
echo "[fail] unknown arg: $arg" >&2
exit 2
;;
esac
done
if [[ -z "$BASE_AMOUNT" || -z "$QUOTE_AMOUNT" ]]; then
echo "[fail] required: --base-amount and --quote-amount (use matched 1:1 raw for peg seed)" >&2
exit 1
fi
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
CWUSDT="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
CWUSDC="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
if [[ -z "$RPC_URL" || -z "$PRIVATE_KEY" || -z "$INTEGRATION" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC, PRIVATE_KEY, and DODO_PMM_INTEGRATION_MAINNET are required" >&2
exit 1
fi
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
BASE_TOKEN="$CWUSDT"
QUOTE_TOKEN="$CWUSDC"
PAIR_LABEL="cWUSDT/cWUSDC"
parse_tx_hash() {
local output="$1"
local tx_hash
tx_hash="$(printf '%s\n' "$output" | grep -E '^0x[0-9a-fA-F]{64}$' | tail -n1 || true)"
if [[ -z "$tx_hash" ]]; then
tx_hash="$(printf '%s\n' "$output" | grep -E '^transactionHash[[:space:]]+0x[0-9a-fA-F]{64}$' | awk '{print $2}' | tail -n1 || true)"
fi
if [[ -z "$tx_hash" ]]; then
return 1
fi
printf '%s\n' "$tx_hash"
}
bool_to_cli() {
if [[ "$1" == "true" ]]; then
printf 'true'
else
printf 'false'
fi
}
existing_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$QUOTE_TOKEN" --rpc-url "$RPC_URL" | awk '{print $1}')"
base_balance_before="$(cast call "$BASE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_balance_before="$(cast call "$QUOTE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
base_allowance_before="$(cast call "$BASE_TOKEN" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_allowance_before="$(cast call "$QUOTE_TOKEN" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
create_gas="0"
if [[ "$existing_pool" == "0x0000000000000000000000000000000000000000" ]]; then
create_gas="$(cast estimate "$INTEGRATION" \
'createPool(address,address,uint256,uint256,uint256,bool)(address)' \
"$BASE_TOKEN" "$QUOTE_TOKEN" "$FEE_BPS" "$INITIAL_PRICE" "$K_VALUE" "$(bool_to_cli "$OPEN_TWAP")" \
--from "$DEPLOYER" \
--rpc-url "$RPC_URL")"
fi
if (( base_balance_before < BASE_AMOUNT )); then
base_after_mint=$(( base_balance_before + MINT_BASE_AMOUNT ))
if (( MINT_BASE_AMOUNT == 0 || base_after_mint < BASE_AMOUNT )); then
echo "[fail] insufficient cWUSDT: have=$base_balance_before need=$BASE_AMOUNT mint_base=$MINT_BASE_AMOUNT" >&2
exit 1
fi
fi
if (( quote_balance_before < QUOTE_AMOUNT )); then
quote_after_mint=$(( quote_balance_before + MINT_QUOTE_AMOUNT ))
if (( MINT_QUOTE_AMOUNT == 0 || quote_after_mint < QUOTE_AMOUNT )); then
echo "[fail] insufficient cWUSDC: have=$quote_balance_before need=$QUOTE_AMOUNT mint_quote=$MINT_QUOTE_AMOUNT" >&2
exit 1
fi
fi
if (( DRY_RUN == 1 )); then
echo "pair=$PAIR_LABEL"
echo "baseToken=$BASE_TOKEN"
echo "quoteToken=$QUOTE_TOKEN"
echo "integration=$INTEGRATION"
echo "existingPool=$existing_pool"
echo "initialPrice=$INITIAL_PRICE"
echo "feeBps=$FEE_BPS"
echo "k=$K_VALUE"
echo "openTwap=$OPEN_TWAP"
echo "createGas=$create_gas"
echo "baseAmount=$BASE_AMOUNT"
echo "quoteAmount=$QUOTE_AMOUNT"
echo "mintBaseAmount=$MINT_BASE_AMOUNT"
echo "mintQuoteAmount=$MINT_QUOTE_AMOUNT"
echo "baseBalanceBefore=$base_balance_before"
echo "quoteBalanceBefore=$quote_balance_before"
exit 0
fi
mint_base_tx=""
mint_quote_tx=""
create_tx=""
approve_base_tx=""
approve_quote_tx=""
add_liquidity_tx=""
if (( MINT_BASE_AMOUNT > 0 )); then
mint_base_output="$(
cast send "$BASE_TOKEN" \
'mint(address,uint256)' \
"$DEPLOYER" "$MINT_BASE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
mint_base_tx="$(parse_tx_hash "$mint_base_output")"
fi
if (( MINT_QUOTE_AMOUNT > 0 )); then
mint_quote_output="$(
cast send "$QUOTE_TOKEN" \
'mint(address,uint256)' \
"$DEPLOYER" "$MINT_QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
mint_quote_tx="$(parse_tx_hash "$mint_quote_output")"
fi
if [[ "$existing_pool" == "0x0000000000000000000000000000000000000000" ]]; then
create_output="$(
cast send "$INTEGRATION" \
'createPool(address,address,uint256,uint256,uint256,bool)(address)' \
"$BASE_TOKEN" "$QUOTE_TOKEN" "$FEE_BPS" "$INITIAL_PRICE" "$K_VALUE" "$(bool_to_cli "$OPEN_TWAP")" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
create_tx="$(parse_tx_hash "$create_output")"
existing_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$QUOTE_TOKEN" --rpc-url "$RPC_URL" | awk '{print $1}')"
fi
if [[ "$existing_pool" == "0x0000000000000000000000000000000000000000" ]]; then
echo "[fail] pool creation did not yield a pool address" >&2
exit 1
fi
if (( base_allowance_before < BASE_AMOUNT )); then
approve_base_output="$(
cast send "$BASE_TOKEN" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$BASE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
approve_base_tx="$(parse_tx_hash "$approve_base_output")"
fi
if (( quote_allowance_before < QUOTE_AMOUNT )); then
approve_quote_output="$(
cast send "$QUOTE_TOKEN" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
approve_quote_tx="$(parse_tx_hash "$approve_quote_output")"
fi
add_liquidity_output="$(
cast send "$INTEGRATION" \
'addLiquidity(address,uint256,uint256)(uint256,uint256,uint256)' \
"$existing_pool" "$BASE_AMOUNT" "$QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
add_liquidity_tx="$(parse_tx_hash "$add_liquidity_output")"
reserves="$(cast call "$existing_pool" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
base_reserve="$(printf '%s\n' "$reserves" | sed -n '1p' | awk '{print $1}')"
quote_reserve="$(printf '%s\n' "$reserves" | sed -n '2p' | awk '{print $1}')"
echo "pair=$PAIR_LABEL"
echo "pool=$existing_pool"
echo "export POOL_CWUSDT_CWUSDC_MAINNET=$existing_pool"
echo "initialPrice=$INITIAL_PRICE"
echo "feeBps=$FEE_BPS"
echo "mintBaseTx=${mint_base_tx:-none}"
echo "mintQuoteTx=${mint_quote_tx:-none}"
echo "createTx=${create_tx:-none}"
echo "approveBaseTx=${approve_base_tx:-none}"
echo "approveQuoteTx=${approve_quote_tx:-none}"
echo "addLiquidityTx=$add_liquidity_tx"
echo "baseReserve=$base_reserve"
echo "quoteReserve=$quote_reserve"
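The balance preflight in the script above (fail unless the wallet balance, or balance plus the requested mint, covers the seed amount) reduces to one integer rule. A sketch of that rule with illustrative raw amounts:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Return 0 when have + mint covers need, and mint==0 means have alone must suffice,
# mirroring the script's "insufficient unless mint fills the gap" check.
covers_seed() {
  local have="$1" need="$2" mint="$3"
  if (( have >= need )); then return 0; fi
  if (( mint == 0 || have + mint < need )); then return 1; fi
  return 0
}

covers_seed 5000000 4000000 0 && echo "ok: balance alone covers the seed"
covers_seed 3000000 4000000 1500000 && echo "ok: mint fills the gap"
covers_seed 3000000 4000000 0 || echo "fail: short and no mint requested"
```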


@@ -0,0 +1,268 @@
#!/usr/bin/env bash
set -euo pipefail
# Create and seed a Mainnet DODO PMM pool: cWUSD* (base) / TRUU (quote).
#
# Seeding "1:1" means **equal USD on each leg**, not equal raw token amounts — see
# docs/03-deployment/MAINNET_PMM_TRUU_CWUSD_PEG_AND_BOT_RUNBOOK.md section 9 and:
# bash scripts/deployment/compute-mainnet-truu-pmm-seed-amounts.sh
# Large fixed USD per leg (prints base/quote + command lines):
# bash scripts/deployment/compute-mainnet-truu-liquidity-amounts.sh --usd-per-leg=125000
#
# Base tokens are 6 decimals (cWUSDT, cWUSDC). TRUU is 10 decimals on mainnet.
# You MUST set --initial-price to the correct DODO PMM "i" for your base/quote order
# (depends on DODO DVM math — verify on a fork before mainnet size).
#
# Defaults for volatile-style pools: fee 30 bps, k = 0.5e18, TWAP off.
# For peg-only cW↔stable pools use deploy-mainnet-public-dodo-wave1-pool.sh instead.
#
# Example (dry-run):
# bash scripts/deployment/deploy-mainnet-pmm-cw-truu-pool.sh \
# --pair=cwusdt-truu \
# --initial-price=<YOUR_I_VALUE> \
# --base-amount=10000000 \
# --quote-amount=25000000000 \
# --dry-run
#
# Top up an **existing** pool (createPool skipped when integration already has base/TRUU):
# Use the same --pair and --initial-price (unused when createPool is skipped, but kept for logs), and 1:1 USD legs.
# Quote raw from base raw: quote = base * Q_ref / B_ref using any prior deposit (e.g. cWUSDT/TRUU
# seed B_ref=1500000, Q_ref=372348929900893). Require wallet TRUU >= quote and cW base >= base.
#
# After live deploy, record the pool in cross-chain-pmm-lps/config/deployment-status.json
# and set PMM_TRUU_BASE_TOKEN / PMM_TRUU_QUOTE_TOKEN for scripts/verify/check-mainnet-pmm-peg-bot-readiness.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || {
echo "[fail] unable to source smom-dbis-138/scripts/load-env.sh" >&2
exit 1
}
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
PAIR=""
INITIAL_PRICE=""
BASE_AMOUNT=""
QUOTE_AMOUNT=""
MINT_BASE_AMOUNT="0"
FEE_BPS="${PMM_TRUU_FEE_BPS:-30}"
K_VALUE="${PMM_TRUU_K:-500000000000000000}"
OPEN_TWAP="false"
DRY_RUN=0
for arg in "$@"; do
case "$arg" in
--pair=*) PAIR="${arg#*=}" ;;
--initial-price=*) INITIAL_PRICE="${arg#*=}" ;;
--base-amount=*) BASE_AMOUNT="${arg#*=}" ;;
--quote-amount=*) QUOTE_AMOUNT="${arg#*=}" ;;
--mint-base-amount=*) MINT_BASE_AMOUNT="${arg#*=}" ;;
--fee-bps=*) FEE_BPS="${arg#*=}" ;;
--k=*) K_VALUE="${arg#*=}" ;;
--open-twap=*) OPEN_TWAP="${arg#*=}" ;;
--dry-run) DRY_RUN=1 ;;
*)
echo "[fail] unknown arg: $arg" >&2
exit 2
;;
esac
done
if [[ -z "$PAIR" || -z "$INITIAL_PRICE" || -z "$BASE_AMOUNT" || -z "$QUOTE_AMOUNT" ]]; then
echo "[fail] required args: --pair, --initial-price, --base-amount, --quote-amount" >&2
exit 1
fi
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
TRUU_MAINNET="${TRUU_MAINNET:-0xDAe0faFD65385E7775Cf75b1398735155EF6aCD2}"
if [[ -z "$RPC_URL" || -z "$PRIVATE_KEY" || -z "$INTEGRATION" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC, PRIVATE_KEY, and DODO_PMM_INTEGRATION_MAINNET are required" >&2
exit 1
fi
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
BASE_SYMBOL=""
BASE_TOKEN=""
PAIR_LABEL=""
case "$PAIR" in
cwusdt-truu)
BASE_SYMBOL="cWUSDT"
BASE_TOKEN="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
PAIR_LABEL="cWUSDT/TRUU"
;;
cwusdc-truu)
BASE_SYMBOL="cWUSDC"
BASE_TOKEN="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
PAIR_LABEL="cWUSDC/TRUU"
;;
*)
echo "[fail] unsupported pair: $PAIR (use cwusdt-truu or cwusdc-truu)" >&2
exit 2
;;
esac
parse_tx_hash() {
local output="$1"
local tx_hash
tx_hash="$(printf '%s\n' "$output" | grep -E '^0x[0-9a-fA-F]{64}$' | tail -n1 || true)"
if [[ -z "$tx_hash" ]]; then
tx_hash="$(printf '%s\n' "$output" | grep -E '^transactionHash[[:space:]]+0x[0-9a-fA-F]{64}$' | awk '{print $2}' | tail -n1 || true)"
fi
if [[ -z "$tx_hash" ]]; then
return 1
fi
printf '%s\n' "$tx_hash"
}
bool_to_cli() {
if [[ "$1" == "true" ]]; then
printf 'true'
else
printf 'false'
fi
}
QUOTE_TOKEN="$TRUU_MAINNET"
existing_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$QUOTE_TOKEN" --rpc-url "$RPC_URL" | awk '{print $1}')"
base_balance_before="$(cast call "$BASE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_balance_before="$(cast call "$QUOTE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
base_allowance_before="$(cast call "$BASE_TOKEN" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_allowance_before="$(cast call "$QUOTE_TOKEN" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
create_gas="0"
if [[ "$existing_pool" == "0x0000000000000000000000000000000000000000" ]]; then
create_gas="$(cast estimate "$INTEGRATION" \
'createPool(address,address,uint256,uint256,uint256,bool)(address)' \
"$BASE_TOKEN" "$QUOTE_TOKEN" "$FEE_BPS" "$INITIAL_PRICE" "$K_VALUE" "$(bool_to_cli "$OPEN_TWAP")" \
--from "$DEPLOYER" \
--rpc-url "$RPC_URL")"
fi
if (( base_balance_before < BASE_AMOUNT )); then
base_balance_after_mint=$(( base_balance_before + MINT_BASE_AMOUNT ))
if (( MINT_BASE_AMOUNT == 0 || base_balance_after_mint < BASE_AMOUNT )); then
echo "[fail] insufficient ${BASE_SYMBOL} balance: have=$base_balance_before need=$BASE_AMOUNT mint_base_amount=$MINT_BASE_AMOUNT" >&2
exit 1
fi
fi
if (( quote_balance_before < QUOTE_AMOUNT )); then
echo "[fail] insufficient TRUU balance: have=$quote_balance_before need=$QUOTE_AMOUNT" >&2
exit 1
fi
if (( DRY_RUN == 1 )); then
echo "pair=$PAIR_LABEL"
echo "baseToken=$BASE_TOKEN"
echo "quoteToken=$QUOTE_TOKEN"
echo "integration=$INTEGRATION"
echo "existingPool=$existing_pool"
echo "initialPrice=$INITIAL_PRICE"
echo "feeBps=$FEE_BPS"
echo "k=$K_VALUE"
echo "openTwap=$OPEN_TWAP"
echo "createGas=$create_gas"
echo "baseAmount=$BASE_AMOUNT (6-decimal cW units)"
echo "quoteAmount=$QUOTE_AMOUNT (10-decimal TRUU units)"
echo "mintBaseAmount=$MINT_BASE_AMOUNT"
echo "suggestedVerifierEnv=PMM_TRUU_BASE_TOKEN=$BASE_TOKEN PMM_TRUU_QUOTE_TOKEN=$QUOTE_TOKEN"
exit 0
fi
mint_tx=""
create_tx=""
approve_base_tx=""
approve_quote_tx=""
add_liquidity_tx=""
if (( MINT_BASE_AMOUNT > 0 )); then
mint_output="$(
cast send "$BASE_TOKEN" \
'mint(address,uint256)' \
"$DEPLOYER" "$MINT_BASE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
mint_tx="$(parse_tx_hash "$mint_output")"
fi
if [[ "$existing_pool" == "0x0000000000000000000000000000000000000000" ]]; then
create_output="$(
cast send "$INTEGRATION" \
'createPool(address,address,uint256,uint256,uint256,bool)(address)' \
"$BASE_TOKEN" "$QUOTE_TOKEN" "$FEE_BPS" "$INITIAL_PRICE" "$K_VALUE" "$(bool_to_cli "$OPEN_TWAP")" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
create_tx="$(parse_tx_hash "$create_output")"
existing_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$QUOTE_TOKEN" --rpc-url "$RPC_URL" | awk '{print $1}')"
fi
if [[ "$existing_pool" == "0x0000000000000000000000000000000000000000" ]]; then
echo "[fail] pool creation did not yield a pool address" >&2
exit 1
fi
if (( base_allowance_before < BASE_AMOUNT )); then
approve_base_output="$(
cast send "$BASE_TOKEN" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$BASE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
approve_base_tx="$(parse_tx_hash "$approve_base_output")"
fi
if (( quote_allowance_before < QUOTE_AMOUNT )); then
approve_quote_output="$(
cast send "$QUOTE_TOKEN" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
approve_quote_tx="$(parse_tx_hash "$approve_quote_output")"
fi
add_liquidity_output="$(
cast send "$INTEGRATION" \
'addLiquidity(address,uint256,uint256)(uint256,uint256,uint256)' \
"$existing_pool" "$BASE_AMOUNT" "$QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
add_liquidity_tx="$(parse_tx_hash "$add_liquidity_output")"
reserves="$(cast call "$existing_pool" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
base_reserve="$(printf '%s\n' "$reserves" | sed -n '1p' | awk '{print $1}')"
quote_reserve="$(printf '%s\n' "$reserves" | sed -n '2p' | awk '{print $1}')"
echo "pair=$PAIR_LABEL"
echo "pool=$existing_pool"
echo "baseToken=$BASE_TOKEN"
echo "quoteToken=$QUOTE_TOKEN"
echo "initialPrice=$INITIAL_PRICE"
echo "feeBps=$FEE_BPS"
echo "k=$K_VALUE"
echo "mintTx=${mint_tx:-none}"
echo "createTx=${create_tx:-none}"
echo "approveBaseTx=${approve_base_tx:-none}"
echo "approveQuoteTx=${approve_quote_tx:-none}"
echo "addLiquidityTx=$add_liquidity_tx"
echo "baseReserve=$base_reserve"
echo "quoteReserve=$quote_reserve"
echo "nextSteps=Add pool row to cross-chain-pmm-lps/config/deployment-status.json; set PMM_TRUU_BASE_TOKEN and PMM_TRUU_QUOTE_TOKEN; run scripts/verify/check-mainnet-pmm-peg-bot-readiness.sh"
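The top-up rule in the header (quote = base * Q_ref / B_ref) overflows 64-bit shell arithmetic at these magnitudes, since the intermediate product exceeds 2^63. A sketch that shells out to python3 for arbitrary-precision integers; the reference amounts are copied from the header comment and the base amount is illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Prior cWUSDT/TRUU deposit from the header: B_ref=1500000 (6-dec cW), Q_ref=372348929900893 (10-dec TRUU).
B_REF=1500000
Q_REF=372348929900893
BASE_RAW=4500000   # illustrative top-up: 4.5 cWUSDT in raw units

# bash (( )) is 64-bit and BASE_RAW*Q_REF exceeds 2^63, so use python3 big ints.
QUOTE_RAW="$(python3 -c "print(($BASE_RAW * $Q_REF) // $B_REF)")"
echo "quote_raw=$QUOTE_RAW"
```

Multiplying before dividing keeps the result exact; dividing first would truncate whenever BASE_RAW is not a multiple of B_REF.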


@@ -0,0 +1,317 @@
#!/usr/bin/env bash
set -euo pipefail
# Deploy and seed a first-tier Mainnet DODO PMM Wave 1 cW*/USDC pool.
#
# Amount arguments are base units (6 decimals for the current cW* / USDC suite).
#
# Example:
# bash scripts/deployment/deploy-mainnet-public-dodo-wave1-pool.sh \
# --pair=cweurc-usdc \
# --initial-price=1151700000000000000 \
# --base-amount=1250000 \
# --quote-amount=1439625 \
# --mint-base-amount=1300000
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || {
echo "[fail] unable to source smom-dbis-138/scripts/load-env.sh" >&2
exit 1
}
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
PAIR=""
INITIAL_PRICE=""
BASE_AMOUNT=""
QUOTE_AMOUNT=""
MINT_BASE_AMOUNT="0"
FEE_BPS="10"
K_VALUE="0"
OPEN_TWAP="false"
DRY_RUN=0
for arg in "$@"; do
case "$arg" in
--pair=*) PAIR="${arg#*=}" ;;
--initial-price=*) INITIAL_PRICE="${arg#*=}" ;;
--base-amount=*) BASE_AMOUNT="${arg#*=}" ;;
--quote-amount=*) QUOTE_AMOUNT="${arg#*=}" ;;
--mint-base-amount=*) MINT_BASE_AMOUNT="${arg#*=}" ;;
--fee-bps=*) FEE_BPS="${arg#*=}" ;;
--k=*) K_VALUE="${arg#*=}" ;;
--open-twap=*) OPEN_TWAP="${arg#*=}" ;;
--dry-run) DRY_RUN=1 ;;
*)
echo "[fail] unknown arg: $arg" >&2
exit 2
;;
esac
done
if [[ -z "$PAIR" || -z "$INITIAL_PRICE" || -z "$BASE_AMOUNT" || -z "$QUOTE_AMOUNT" ]]; then
echo "[fail] required args: --pair, --initial-price, --base-amount, --quote-amount" >&2
exit 1
fi
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
USDC_MAINNET="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
if [[ -z "$RPC_URL" || -z "$PRIVATE_KEY" || -z "$INTEGRATION" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC, PRIVATE_KEY, and DODO_PMM_INTEGRATION_MAINNET are required" >&2
exit 1
fi
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
BASE_SYMBOL=""
BASE_TOKEN=""
POOL_ENV_KEY=""
PAIR_LABEL=""
case "$PAIR" in
cweurc-usdc)
BASE_SYMBOL="cWEURC"
BASE_TOKEN="${CWEURC_MAINNET:-}"
POOL_ENV_KEY="POOL_CWEURC_USDC_MAINNET"
PAIR_LABEL="cWEURC/USDC"
;;
cweurt-usdc)
BASE_SYMBOL="cWEURT"
BASE_TOKEN="${CWEURT_MAINNET:-}"
POOL_ENV_KEY="POOL_CWEURT_USDC_MAINNET"
PAIR_LABEL="cWEURT/USDC"
;;
cwgbpc-usdc)
BASE_SYMBOL="cWGBPC"
BASE_TOKEN="${CWGBPC_MAINNET:-}"
POOL_ENV_KEY="POOL_CWGBPC_USDC_MAINNET"
PAIR_LABEL="cWGBPC/USDC"
;;
cwgbpt-usdc)
BASE_SYMBOL="cWGBPT"
BASE_TOKEN="${CWGBPT_MAINNET:-}"
POOL_ENV_KEY="POOL_CWGBPT_USDC_MAINNET"
PAIR_LABEL="cWGBPT/USDC"
;;
cwaudc-usdc)
BASE_SYMBOL="cWAUDC"
BASE_TOKEN="${CWAUDC_MAINNET:-}"
POOL_ENV_KEY="POOL_CWAUDC_USDC_MAINNET"
PAIR_LABEL="cWAUDC/USDC"
;;
cwjpyc-usdc)
BASE_SYMBOL="cWJPYC"
BASE_TOKEN="${CWJPYC_MAINNET:-}"
POOL_ENV_KEY="POOL_CWJPYC_USDC_MAINNET"
PAIR_LABEL="cWJPYC/USDC"
;;
cwchfc-usdc)
BASE_SYMBOL="cWCHFC"
BASE_TOKEN="${CWCHFC_MAINNET:-}"
POOL_ENV_KEY="POOL_CWCHFC_USDC_MAINNET"
PAIR_LABEL="cWCHFC/USDC"
;;
cwcadc-usdc)
BASE_SYMBOL="cWCADC"
BASE_TOKEN="${CWCADC_MAINNET:-}"
POOL_ENV_KEY="POOL_CWCADC_USDC_MAINNET"
PAIR_LABEL="cWCADC/USDC"
;;
cwxauc-usdc)
BASE_SYMBOL="cWXAUC"
BASE_TOKEN="${CWXAUC_MAINNET:-}"
POOL_ENV_KEY="POOL_CWXAUC_USDC_MAINNET"
PAIR_LABEL="cWXAUC/USDC"
;;
cwxaut-usdc)
BASE_SYMBOL="cWXAUT"
BASE_TOKEN="${CWXAUT_MAINNET:-}"
POOL_ENV_KEY="POOL_CWXAUT_USDC_MAINNET"
PAIR_LABEL="cWXAUT/USDC"
;;
*)
echo "[fail] unsupported pair: $PAIR" >&2
exit 2
;;
esac
if [[ -z "$BASE_TOKEN" ]]; then
echo "[fail] base token address missing in .env for pair: $PAIR" >&2
exit 1
fi
parse_tx_hash() {
local output="$1"
local tx_hash
tx_hash="$(printf '%s\n' "$output" | grep -E '^0x[0-9a-fA-F]{64}$' | tail -n1 || true)"
if [[ -z "$tx_hash" ]]; then
tx_hash="$(printf '%s\n' "$output" | grep -E '^transactionHash[[:space:]]+0x[0-9a-fA-F]{64}$' | awk '{print $2}' | tail -n1 || true)"
fi
if [[ -z "$tx_hash" ]]; then
return 1
fi
printf '%s\n' "$tx_hash"
}
bool_to_cli() {
if [[ "$1" == "true" ]]; then
printf 'true'
else
printf 'false'
fi
}
existing_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$USDC_MAINNET" --rpc-url "$RPC_URL" | awk '{print $1}')"
base_balance_before="$(cast call "$BASE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_balance_before="$(cast call "$USDC_MAINNET" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
base_allowance_before="$(cast call "$BASE_TOKEN" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_allowance_before="$(cast call "$USDC_MAINNET" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
create_gas="0"
if [[ "$existing_pool" == "0x0000000000000000000000000000000000000000" ]]; then
create_gas="$(cast estimate "$INTEGRATION" \
'createPool(address,address,uint256,uint256,uint256,bool)(address)' \
"$BASE_TOKEN" "$USDC_MAINNET" "$FEE_BPS" "$INITIAL_PRICE" "$K_VALUE" "$(bool_to_cli "$OPEN_TWAP")" \
--from "$DEPLOYER" \
--rpc-url "$RPC_URL")"
fi
if (( base_balance_before < BASE_AMOUNT )); then
base_balance_after_mint=$(( base_balance_before + MINT_BASE_AMOUNT ))
if (( MINT_BASE_AMOUNT == 0 || base_balance_after_mint < BASE_AMOUNT )); then
echo "[fail] insufficient ${BASE_SYMBOL} balance: have=$base_balance_before need=$BASE_AMOUNT mint_base_amount=$MINT_BASE_AMOUNT" >&2
exit 1
fi
fi
if (( quote_balance_before < QUOTE_AMOUNT )); then
echo "[fail] insufficient USDC balance: have=$quote_balance_before need=$QUOTE_AMOUNT" >&2
exit 1
fi
if (( DRY_RUN == 1 )); then
echo "pair=$PAIR_LABEL"
echo "baseToken=$BASE_TOKEN"
echo "quoteToken=$USDC_MAINNET"
echo "integration=$INTEGRATION"
echo "existingPool=$existing_pool"
echo "poolEnvKey=$POOL_ENV_KEY"
echo "initialPrice=$INITIAL_PRICE"
echo "feeBps=$FEE_BPS"
echo "k=$K_VALUE"
echo "openTwap=$OPEN_TWAP"
echo "createGas=$create_gas"
echo "baseAmount=$BASE_AMOUNT"
echo "quoteAmount=$QUOTE_AMOUNT"
echo "mintBaseAmount=$MINT_BASE_AMOUNT"
echo "baseBalanceBefore=$base_balance_before"
echo "quoteBalanceBefore=$quote_balance_before"
echo "baseAllowanceBefore=$base_allowance_before"
echo "quoteAllowanceBefore=$quote_allowance_before"
exit 0
fi
mint_tx=""
create_tx=""
approve_base_tx=""
approve_quote_tx=""
add_liquidity_tx=""
if (( MINT_BASE_AMOUNT > 0 )); then
mint_output="$(
cast send "$BASE_TOKEN" \
'mint(address,uint256)' \
"$DEPLOYER" "$MINT_BASE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
mint_tx="$(parse_tx_hash "$mint_output")"
fi
if [[ "$existing_pool" == "0x0000000000000000000000000000000000000000" ]]; then
create_output="$(
cast send "$INTEGRATION" \
'createPool(address,address,uint256,uint256,uint256,bool)(address)' \
"$BASE_TOKEN" "$USDC_MAINNET" "$FEE_BPS" "$INITIAL_PRICE" "$K_VALUE" "$(bool_to_cli "$OPEN_TWAP")" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
create_tx="$(parse_tx_hash "$create_output")"
existing_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$USDC_MAINNET" --rpc-url "$RPC_URL" | awk '{print $1}')"
fi
if [[ "$existing_pool" == "0x0000000000000000000000000000000000000000" ]]; then
echo "[fail] pool creation did not yield a pool address" >&2
exit 1
fi
if (( base_allowance_before < BASE_AMOUNT )); then
approve_base_output="$(
cast send "$BASE_TOKEN" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$BASE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
approve_base_tx="$(parse_tx_hash "$approve_base_output")"
fi
if (( quote_allowance_before < QUOTE_AMOUNT )); then
approve_quote_output="$(
cast send "$USDC_MAINNET" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
approve_quote_tx="$(parse_tx_hash "$approve_quote_output")"
fi
add_liquidity_output="$(
cast send "$INTEGRATION" \
'addLiquidity(address,uint256,uint256)(uint256,uint256,uint256)' \
"$existing_pool" "$BASE_AMOUNT" "$QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
add_liquidity_tx="$(parse_tx_hash "$add_liquidity_output")"
reserves="$(cast call "$existing_pool" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
base_reserve="$(printf '%s\n' "$reserves" | sed -n '1p' | awk '{print $1}')"
quote_reserve="$(printf '%s\n' "$reserves" | sed -n '2p' | awk '{print $1}')"
base_balance_after="$(cast call "$BASE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_balance_after="$(cast call "$USDC_MAINNET" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
echo "pair=$PAIR_LABEL"
echo "poolEnvKey=$POOL_ENV_KEY"
echo "pool=$existing_pool"
echo "baseToken=$BASE_TOKEN"
echo "quoteToken=$USDC_MAINNET"
echo "initialPrice=$INITIAL_PRICE"
echo "feeBps=$FEE_BPS"
echo "k=$K_VALUE"
echo "openTwap=$OPEN_TWAP"
echo "baseAmount=$BASE_AMOUNT"
echo "quoteAmount=$QUOTE_AMOUNT"
echo "mintBaseAmount=$MINT_BASE_AMOUNT"
echo "mintTx=${mint_tx:-none}"
echo "createTx=${create_tx:-none}"
echo "approveBaseTx=${approve_base_tx:-none}"
echo "approveQuoteTx=${approve_quote_tx:-none}"
echo "addLiquidityTx=$add_liquidity_tx"
echo "baseReserve=$base_reserve"
echo "quoteReserve=$quote_reserve"
echo "baseBalanceBefore=$base_balance_before"
echo "baseBalanceAfter=$base_balance_after"
echo "quoteBalanceBefore=$quote_balance_before"
echo "quoteBalanceAfter=$quote_balance_after"


@@ -67,6 +67,55 @@ pct_exec() {
run_pct "exec $VMID -- $*"
}
normalize_studio_sources() {
local compose_path="$APP_DIR/docker-compose.yml"
local app_path="$APP_DIR/src/fusionai_creator/api/app.py"
pct_exec "bash -lc 'python3 - <<\"PY\"
from pathlib import Path
compose_path = Path(\"$compose_path\")
if compose_path.exists():
text = compose_path.read_text()
updated = text.replace(\"version: \\\"3.9\\\"\\n\\n\", \"\", 1)
updated = updated.replace(
\" test: [\\\"CMD\\\", \\\"curl\\\", \\\"-f\\\", \\\"http://localhost:8000/health\\\"]\",
\" test: [\\\"CMD\\\", \\\"python3\\\", \\\"-c\\\", \\\"import urllib.request; urllib.request.urlopen(\\\\\\\"http://localhost:8000/health\\\\\\\").read()\\\"]\",
1,
)
if updated != text:
compose_path.write_text(updated)
app_path = Path(\"$app_path\")
if app_path.exists():
text = app_path.read_text()
updated = text
import_line = \"from fastapi.responses import RedirectResponse\\n\"
anchor = \"from fastapi.staticfiles import StaticFiles\\n\"
if import_line not in updated and anchor in updated:
updated = updated.replace(anchor, anchor + import_line, 1)
route_block = (
\"@app.api_route(\\\"/\\\", methods=[\\\"GET\\\", \\\"HEAD\\\"], include_in_schema=False)\\n\"
\"def studio_root():\\n\"
\" return RedirectResponse(url=\\\"/studio/\\\", status_code=302)\\n\\n\\n\"
)
legacy_route = \"@app.get(\\\"/\\\", include_in_schema=False)\\ndef studio_root():\"
route_anchor = \"@app.post(\\\"/jobs\\\", dependencies=[Depends(_verify_api_key), Depends(_rate_limit)])\\n\"
if legacy_route in updated:
updated = updated.replace(
legacy_route,
\"@app.api_route(\\\\\\\"/\\\\\\\", methods=[\\\\\\\"GET\\\\\\\", \\\\\\\"HEAD\\\\\\\"], include_in_schema=False)\\ndef studio_root():\",
1,
)
elif \"def studio_root()\" not in updated and route_anchor in updated:
updated = updated.replace(route_anchor, route_block + route_anchor, 1)
if updated != text:
app_path.write_text(updated)
PY'"
}
echo "=== Sankofa Studio LXC ($VMID) — $HOSTNAME ==="
echo "URL: https://studio.sankofa.nexus → http://${IP}:8000"
echo "IP: $IP | Memory: ${MEMORY_MB}MB | Cores: $CORES | Disk: ${DISK_GB}G"
@@ -147,8 +196,13 @@ if [[ -n "$ENV_FILE" && -f "$ENV_FILE" ]]; then
run_pct "push $VMID $ENV_FILE $APP_DIR/.env"
fi
echo "Normalizing FusionAI Creator app and compose files for clean Studio routing and health checks..."
normalize_studio_sources
echo "Starting FusionAI Creator stack (docker compose up -d)..."
pct_exec "bash -c 'cd \"$APP_DIR\" && docker compose up -d'"
echo "Setting FusionAI Creator containers to restart automatically after Docker/container restarts..."
pct_exec "bash -c 'cd \"$APP_DIR\" && docker compose ps -q | xargs -r docker update --restart unless-stopped > /dev/null'"
echo ""
echo "Done. Verify: curl -s http://${IP}:8000/health"
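`normalize_studio_sources` patches the compose file and `app.py` idempotently: read, conditionally replace, and write back only when the content actually changed, so repeated runs are no-ops. That pattern, sketched standalone on a throwaway file (the compose snippet is illustrative, not the real file):

```python
from pathlib import Path
import tempfile, os

def patch_once(path: Path, old: str, new: str) -> bool:
    """Apply one replacement; write back only if the content actually changed."""
    if not path.exists():
        return False
    text = path.read_text()
    updated = text.replace(old, new, 1)
    if updated == text:
        return False
    path.write_text(updated)
    return True

# Demonstration on a temp file: first run rewrites, second run is a no-op.
fd, name = tempfile.mkstemp()
os.close(fd)
p = Path(name)
p.write_text('version: "3.9"\n\nservices: {}\n')
assert patch_once(p, 'version: "3.9"\n\n', "") is True
assert patch_once(p, 'version: "3.9"\n\n', "") is False
p.unlink()
```

Skipping the write on no change also avoids touching mtimes, so `docker compose` does not see spurious file updates.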


@@ -0,0 +1,44 @@
#!/usr/bin/env bash
# Deploy SimpleERC3156FlashVault on Chain 138 via Foundry (broadcast).
#
# Requires: forge, PRIVATE_KEY in env (repo root load-project-env loads .env + smom-dbis-138/.env).
#
# Optional env:
# FLASH_VAULT_OWNER, FLASH_VAULT_FEE_BPS (default 5), FLASH_VAULT_TOKEN, FLASH_VAULT_SEED_AMOUNT (raw units)
#
# Usage:
# source scripts/lib/load-project-env.sh
# ./scripts/deployment/deploy-simple-erc3156-flash-vault-chain138.sh
# ./scripts/deployment/deploy-simple-erc3156-flash-vault-chain138.sh --dry-run
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM="$PROJECT_ROOT/smom-dbis-138"
if [[ -f "$PROJECT_ROOT/scripts/lib/load-project-env.sh" ]]; then
# shellcheck disable=SC1091
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" 2>/dev/null || true
fi
RPC="${RPC_URL_138:-}"
if [[ -z "$RPC" ]]; then
echo "ERROR: Set RPC_URL_138 (e.g. http://192.168.11.211:8545)" >&2
exit 1
fi
if [[ -z "${PRIVATE_KEY:-}" ]]; then
echo "ERROR: PRIVATE_KEY must be set for --broadcast" >&2
exit 1
fi
cd "$SMOM"
EXTRA=()
if [[ "${1:-}" == "--dry-run" ]]; then
EXTRA=(-vvvv)
else
EXTRA=(--broadcast -vvvv)
fi
exec forge script script/deploy/DeploySimpleERC3156FlashVault.s.sol:DeploySimpleERC3156FlashVault \
--rpc-url "$RPC" "${EXTRA[@]}"
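The flash-vault deployer toggles between simulate and broadcast by swapping a single argument array: dry runs drop `--broadcast` but keep full traces. The same shape, sketched as plain Python (the script path is taken from the deployer above; the RPC URL is a placeholder):

```python
def forge_invocation(rpc_url: str, dry_run: bool) -> list[str]:
    # Mirrors the EXTRA array: dry runs omit --broadcast but keep -vvvv traces.
    extra = ["-vvvv"] if dry_run else ["--broadcast", "-vvvv"]
    return [
        "forge", "script",
        "script/deploy/DeploySimpleERC3156FlashVault.s.sol:DeploySimpleERC3156FlashVault",
        "--rpc-url", rpc_url,
    ] + extra

assert "--broadcast" not in forge_invocation("http://localhost:8545", dry_run=True)
assert "--broadcast" in forge_invocation("http://localhost:8545", dry_run=False)
```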


@@ -110,7 +110,7 @@ fi
cd "$SMOM"
export RPC_URL_138="$RPC"
export DODO_PMM_INTEGRATION="${DODO_PMM_INTEGRATION_ADDRESS:-${DODO_PMM_INTEGRATION:-0x5BDc62f1ae7D630c37A8B363a1d49845356Ee72d}}"
export DODO_PMM_INTEGRATION="${DODO_PMM_INTEGRATION_ADDRESS:-${DODO_PMM_INTEGRATION:-0x86ADA6Ef91A3B450F89f2b751e93B1b7A3218895}}"
# Skip TransactionMirror deploy if already deployed at TRANSACTION_MIRROR_ADDRESS or if --skip-mirror
MIRROR_ADDR="${TRANSACTION_MIRROR_ADDRESS:-}"


@@ -0,0 +1,131 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SECURE_DIR_DEFAULT="${HOME}/.secure-secrets"
SECRET_FILE_DEFAULT="${SECURE_DIR_DEFAULT}/chain138-keeper.env"
EXPORT_FILE_DEFAULT="${PROJECT_ROOT}/.env.keeper.local"
SECRET_FILE="${KEEPER_SECRET_FILE:-$SECRET_FILE_DEFAULT}"
EXPORT_FILE="${KEEPER_EXPORT_FILE:-$EXPORT_FILE_DEFAULT}"
FORCE=0
NO_EXPORT=0
usage() {
cat <<EOF
Usage: $(basename "$0") [--force] [--no-export-file] [--secret-file PATH] [--export-file PATH]
Generates a dedicated Chain 138 keeper signer, stores the raw private key outside the repo,
and writes a gitignored local helper that sources it.
Defaults:
secret file: $SECRET_FILE_DEFAULT
export file: $EXPORT_FILE_DEFAULT
The secret file contains:
KEEPER_PRIVATE_KEY=0x...
KEEPER_SIGNER_ADDRESS=0x...
The export file does not contain the secret. It only sources the secret file.
EOF
}
while [[ $# -gt 0 ]]; do
case "$1" in
--secret-file)
SECRET_FILE="$2"
shift 2
;;
--export-file)
EXPORT_FILE="$2"
shift 2
;;
--force)
FORCE=1
shift
;;
--no-export-file)
NO_EXPORT=1
shift
;;
-h|--help)
usage
exit 0
;;
*)
echo "Unknown argument: $1" >&2
usage >&2
exit 1
;;
esac
done
command -v openssl >/dev/null 2>&1 || { echo "openssl is required" >&2; exit 1; }
command -v cast >/dev/null 2>&1 || { echo "cast is required" >&2; exit 1; }
if [[ -f "$SECRET_FILE" && "$FORCE" -ne 1 ]]; then
echo "Refusing to overwrite existing secret file: $SECRET_FILE" >&2
echo "Re-run with --force to replace it." >&2
exit 1
fi
umask 077
mkdir -p "$(dirname "$SECRET_FILE")"
chmod 700 "$(dirname "$SECRET_FILE")" 2>/dev/null || true
KEEPER_PRIVATE_KEY=""
KEEPER_SIGNER_ADDRESS=""
for _ in $(seq 1 8); do
candidate="0x$(openssl rand -hex 32)"
if addr="$(cast wallet address --private-key "$candidate" 2>/dev/null)"; then
KEEPER_PRIVATE_KEY="$candidate"
KEEPER_SIGNER_ADDRESS="$addr"
break
fi
done
if [[ -z "$KEEPER_PRIVATE_KEY" || -z "$KEEPER_SIGNER_ADDRESS" ]]; then
echo "Failed to generate a valid keeper private key" >&2
exit 1
fi
cat >"$SECRET_FILE" <<EOF
# Generated by $(basename "$0") on $(date -Iseconds)
KEEPER_PRIVATE_KEY=$KEEPER_PRIVATE_KEY
KEEPER_SIGNER_ADDRESS=$KEEPER_SIGNER_ADDRESS
EOF
chmod 600 "$SECRET_FILE"
if [[ "$NO_EXPORT" -ne 1 ]]; then
cat >"$EXPORT_FILE" <<EOF
# Generated by $(basename "$0") on $(date -Iseconds)
# Local helper only; this file contains no secret material.
export KEEPER_SECRET_FILE="${SECRET_FILE}"
if [ -f "\${KEEPER_SECRET_FILE}" ]; then
set -a
# shellcheck source=/dev/null
source "\${KEEPER_SECRET_FILE}"
set +a
fi
EOF
chmod 600 "$EXPORT_FILE"
fi
cat <<EOF
Keeper signer generated.
Address: $KEEPER_SIGNER_ADDRESS
Secret: $SECRET_FILE
EOF
if [[ "$NO_EXPORT" -ne 1 ]]; then
cat <<EOF
Export: $EXPORT_FILE
Next:
1. source "$EXPORT_FILE"
2. grant KEEPER_ROLE / allow the signer where required
3. install KEEPER_PRIVATE_KEY on the runtime that submits performUpkeep()
EOF
fi
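The key generator above draws a candidate with `openssl rand` and accepts the first one `cast wallet address` can derive an address from, up to 8 attempts. A Python sketch of that retry loop with a stand-in validator (the real script shells out to `cast`, which additionally rejects keys outside the secp256k1 curve order):

```python
import secrets

def first_valid_key(validate, attempts: int = 8):
    """Return (private_key, address) for the first candidate the validator accepts."""
    for _ in range(attempts):
        candidate = "0x" + secrets.token_hex(32)
        address = validate(candidate)
        if address:
            return candidate, address
    raise RuntimeError("failed to generate a valid keeper private key")

# Stand-in validator for illustration only: accepts any 32-byte key and
# fabricates an address-shaped string; cast derives the real address.
def fake_validate(key: str):
    return "0x" + key[2:42] if len(key) == 66 else None

key, addr = first_valid_key(fake_validate)
assert key.startswith("0x") and len(key) == 66
assert addr.startswith("0x") and len(addr) == 42
```

Bounding the loop keeps a broken validator from spinning forever, which is why the bash version uses `seq 1 8` rather than `while true`.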


@@ -0,0 +1,198 @@
#!/usr/bin/env bash
# Grant MINTER_ROLE and BURNER_ROLE on CompliantWrappedToken (cW*) to the chain CW bridge
# when the deployer still has DEFAULT_ADMIN_ROLE and roles are not frozen.
#
# Use after DeployCWTokens or when verify shows MINTER=false BURNER=false for some symbols.
# See: docs/07-ccip/CW_DEPLOY_AND_WIRE_RUNBOOK.md
#
# Usage (repo root):
# ./scripts/deployment/grant-cw-bridge-roles-on-chain.sh AVALANCHE [--dry-run]
# ./scripts/deployment/grant-cw-bridge-roles-on-chain.sh MAINNET [--dry-run]
#
# Env: PRIVATE_KEY, CW_BRIDGE_<CHAIN>, <CHAIN>_RPC from smom-dbis-138/.env (via dotenv).
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM="${PROJECT_ROOT}/smom-dbis-138"
CHAIN="${1:-}"
DRY_RUN=false
[[ "${2:-}" == "--dry-run" ]] && DRY_RUN=true
if [[ -z "$CHAIN" ]]; then
echo "Usage: $0 <CHAIN_NAME> [--dry-run]" >&2
echo " CHAIN_NAME: MAINNET CRONOS BSC POLYGON GNOSIS AVALANCHE BASE ARBITRUM OPTIMISM CELO WEMIX" >&2
exit 2
fi
if [[ ! -f "$SMOM/.env" ]]; then
echo "Missing $SMOM/.env" >&2
exit 1
fi
if [[ -f "$SMOM/scripts/lib/deployment/dotenv.sh" ]]; then
# shellcheck disable=SC1090
source "$SMOM/scripts/lib/deployment/dotenv.sh"
load_deployment_env --repo-root "$SMOM"
else
set -a
# shellcheck disable=SC1090
source "$SMOM/.env"
set +a
fi
[[ -n "${PRIVATE_KEY:-}" ]] || { echo "PRIVATE_KEY required in smom-dbis-138/.env" >&2; exit 1; }
command -v cast >/dev/null 2>&1 || { echo "cast (foundry) required" >&2; exit 1; }
command -v python3 >/dev/null 2>&1 || { echo "python3 required (gas bump)" >&2; exit 1; }
declare -A CHAIN_NAME_TO_ID=(
[MAINNET]=1 [CRONOS]=25 [BSC]=56 [POLYGON]=137 [GNOSIS]=100
[AVALANCHE]=43114 [BASE]=8453 [ARBITRUM]=42161 [OPTIMISM]=10 [CELO]=42220 [WEMIX]=1111
)
chain_id="${CHAIN_NAME_TO_ID[$CHAIN]:-}"
if [[ -z "$chain_id" ]]; then
echo "Unknown chain: $CHAIN" >&2
exit 2
fi
bridge_var="CW_BRIDGE_${CHAIN}"
bridge="${!bridge_var:-}"
if [[ -z "$bridge" ]]; then
echo "SKIP: CW_BRIDGE_${CHAIN} is not set (nothing to grant on this chain)."
exit 0
fi
rpc=""
case "$CHAIN" in
MAINNET) rpc="${ETH_MAINNET_RPC_URL:-${ETHEREUM_MAINNET_RPC:-}}" ;;
CRONOS) rpc="${CRONOS_RPC_URL:-${CRONOS_RPC:-}}" ;;
BSC) rpc="${BSC_RPC_URL:-}" ;;
POLYGON) rpc="${POLYGON_MAINNET_RPC:-${POLYGON_RPC_URL:-}}" ;;
GNOSIS) rpc="${GNOSIS_RPC:-}" ;;
AVALANCHE) rpc="${AVALANCHE_RPC_URL:-}" ;;
BASE) rpc="${BASE_MAINNET_RPC:-}" ;;
ARBITRUM) rpc="${ARBITRUM_MAINNET_RPC:-${ARBITRUM_MAINNET_RPC_URL:-${ARBITRUM_RPC_URL:-}}}" ;;
OPTIMISM) rpc="${OPTIMISM_MAINNET_RPC:-}" ;;
CELO) rpc="${CELO_RPC:-${CELO_MAINNET_RPC:-}}" ;;
WEMIX) rpc="${WEMIX_RPC:-${WEMIX_MAINNET_RPC:-}}" ;;
esac
if [[ -z "$rpc" ]]; then
echo "No RPC URL for $CHAIN (set e.g. AVALANCHE_RPC_URL)" >&2
exit 1
fi
# Busy networks: base fee can exceed cast defaults — bump legacy gas price (percent, default 150%).
CW_GRANT_GAS_BUMP_PCT="${CW_GRANT_GAS_BUMP_PCT:-150}"
_gas_price_wei() {
local base
base="$(cast gas-price --rpc-url "$rpc" 2>/dev/null | head -1 | tr -d '[:space:]')"
[[ -z "$base" || ! "$base" =~ ^[0-9]+$ ]] && base="50000000000"
python3 -c "import sys; b=int(sys.argv[1]); p=int(sys.argv[2]); print(max(b * p // 100, b + 10**9))" "$base" "$CW_GRANT_GAS_BUMP_PCT"
}
GAS_PRICE_WEI="${CW_GRANT_GAS_PRICE_WEI:-$(_gas_price_wei)}"
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY" 2>/dev/null)" || { echo "Invalid PRIVATE_KEY" >&2; exit 1; }
MINTER_ROLE="$(cast keccak "MINTER_ROLE")"
BURNER_ROLE="$(cast keccak "BURNER_ROLE")"
# OpenZeppelin AccessControl DEFAULT_ADMIN_ROLE
DEFAULT_ADMIN_ROLE="0x0000000000000000000000000000000000000000000000000000000000000000"
CW_TOKEN_PREFIXES=(CWUSDT CWUSDC CWAUSDT CWEURC CWEURT CWGBPC CWGBPT CWAUDC CWJPYC CWCHFC CWCADC CWXAUC CWXAUT)
resolve_token() {
local prefix="$1"
local chain_var="${prefix}_${CHAIN}"
local address_var="${prefix}_ADDRESS_${chain_id}"
if [[ -n "${!chain_var:-}" ]]; then
printf '%s' "${!chain_var}"
return
fi
if [[ -n "${!address_var:-}" ]]; then
printf '%s' "${!address_var}"
return
fi
printf ''
}
echo "=== Grant cW* bridge roles: $CHAIN (chainId $chain_id) ==="
echo "RPC: $rpc"
echo "Bridge: $bridge"
echo "From: $DEPLOYER"
echo "Gas: legacy wei=$GAS_PRICE_WEI (override CW_GRANT_GAS_PRICE_WEI or bump CW_GRANT_GAS_BUMP_PCT)"
if $DRY_RUN; then echo "MODE: dry-run (no transactions)"; fi
echo ""
granted=0
skipped=0
failed=0
for prefix in "${CW_TOKEN_PREFIXES[@]}"; do
token="$(resolve_token "$prefix")"
[[ -z "$token" ]] && continue
# Some proxies revert on operationalRolesFrozen(); skip that probe and rely on grantRole + admin check.
is_admin="$(cast call "$token" "hasRole(bytes32,address)(bool)" "$DEFAULT_ADMIN_ROLE" "$DEPLOYER" --rpc-url "$rpc" 2>/dev/null || echo "false")"
if [[ "$is_admin" != "true" ]]; then
echo " SKIP $prefix $token — deployer lacks DEFAULT_ADMIN_ROLE"
skipped=$((skipped + 1))
continue
fi
hm="$(cast call "$token" "hasRole(bytes32,address)(bool)" "$MINTER_ROLE" "$bridge" --rpc-url "$rpc" 2>/dev/null || echo "false")"
hb="$(cast call "$token" "hasRole(bytes32,address)(bool)" "$BURNER_ROLE" "$bridge" --rpc-url "$rpc" 2>/dev/null || echo "false")"
if [[ "$hm" == "true" && "$hb" == "true" ]]; then
skipped=$((skipped + 1))
continue
fi
echo " FIX $prefix $token MINTER=$hm BURNER=$hb"
if $DRY_RUN; then
[[ "$hm" != "true" ]] && echo " would: grantRole(MINTER_ROLE, bridge)"
[[ "$hb" != "true" ]] && echo " would: grantRole(BURNER_ROLE, bridge)"
granted=$((granted + 1))
continue
fi
_after_minter() {
cast call "$token" "hasRole(bytes32,address)(bool)" "$MINTER_ROLE" "$bridge" --rpc-url "$rpc" 2>/dev/null | grep -q true
}
_after_burner() {
cast call "$token" "hasRole(bytes32,address)(bool)" "$BURNER_ROLE" "$bridge" --rpc-url "$rpc" 2>/dev/null | grep -q true
}
if [[ "$hm" != "true" ]]; then
cast send "$token" "grantRole(bytes32,address)" "$MINTER_ROLE" "$bridge" \
--rpc-url "$rpc" --private-key "$PRIVATE_KEY" --legacy --gas-price "$GAS_PRICE_WEI" >/dev/null 2>&1 || true
sleep 2
if _after_minter; then
echo " OK MINTER_ROLE"
else
echo " FAIL MINTER_ROLE" >&2
failed=$((failed + 1))
fi
fi
if [[ "$hb" != "true" ]]; then
cast send "$token" "grantRole(bytes32,address)" "$BURNER_ROLE" "$bridge" \
--rpc-url "$rpc" --private-key "$PRIVATE_KEY" --legacy --gas-price "$GAS_PRICE_WEI" >/dev/null 2>&1 || true
sleep 2
if _after_burner; then
echo " OK BURNER_ROLE"
else
echo " FAIL BURNER_ROLE" >&2
failed=$((failed + 1))
fi
fi
granted=$((granted + 1))
done
echo ""
echo "Summary: chain=$CHAIN fixed_tokens=$granted skipped=$skipped tx_failures=$failed"
if [[ "$failed" -gt 0 ]]; then
exit 1
fi
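The `_gas_price_wei` helper above bumps the node's quoted gas price by a percentage but never by less than 1 gwei over the base. The arithmetic from its inline `python3 -c`, extracted as plain Python:

```python
def bumped_gas_price(base_wei: int, bump_pct: int = 150) -> int:
    # Same formula as the inline python3: percentage bump, with a floor of
    # base + 1 gwei so tiny bases still get a meaningful increase.
    return max(base_wei * bump_pct // 100, base_wei + 10**9)

assert bumped_gas_price(50_000_000_000) == 75_000_000_000  # 50 gwei -> 75 gwei
assert bumped_gas_price(100) == 100 + 10**9                # tiny base: the +1 gwei floor wins
```

The floor matters on quiet test networks where `cast gas-price` can return values so small that a percentage bump alone would still underprice the transaction.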


@@ -1,6 +1,7 @@
#!/usr/bin/env bash
# Push Chain 138 PMM mesh into Proxmox LXC and enable systemd.
# Copies: pmm-mesh-6s-automation.sh, update-oracle-price.sh, smom-dbis-138/.env, and this host's cast binary.
# Copies: pmm-mesh-6s-automation.sh, update-oracle-price.sh, smom-dbis-138/.env,
# optional dedicated keeper env, and this host's cast binary.
#
# Run from repo root (LAN + SSH root@Proxmox BatchMode). Requires: cast in PATH, smom-dbis-138/.env.
#
@@ -28,6 +29,9 @@ TARGETS="${PMM_MESH_LXC_TARGETS:-192.168.11.11:3500 192.168.11.12:5700}"
CAST_SRC="$(command -v cast || true)"
[[ -x "$CAST_SRC" ]] || { echo "ERROR: cast not in PATH" >&2; exit 1; }
[[ -f "$SMOM/.env" ]] || { echo "ERROR: missing $SMOM/.env" >&2; exit 1; }
KEEPER_SECRET_SRC="${KEEPER_SECRET_FILE:-${HOME}/.secure-secrets/chain138-keeper.env}"
HAS_KEEPER_SECRET=false
[[ -f "$KEEPER_SECRET_SRC" ]] && HAS_KEEPER_SECRET=true
MESH_TGZ="$(mktemp /tmp/c138-mesh-XXXXXX.tgz)"
cleanup() { rm -f "$MESH_TGZ" 2>/dev/null || true; }
@@ -53,6 +57,9 @@ for pair in $TARGETS; do
scp -o BatchMode=yes -o ConnectTimeout=20 "$MESH_TGZ" "root@${host}:/tmp/c138-mesh-install.tgz"
scp -o BatchMode=yes -o ConnectTimeout=20 "$CAST_SRC" "root@${host}:/tmp/cast-bin-lxc"
if [[ "$HAS_KEEPER_SECRET" == true ]]; then
scp -o BatchMode=yes -o ConnectTimeout=20 "$KEEPER_SECRET_SRC" "root@${host}:/tmp/c138-keeper.env"
fi
ssh -o BatchMode=yes -o ConnectTimeout=25 "root@${host}" \
"VMID=${vmid} bash -s" <<'REMOTE'
@@ -68,6 +75,12 @@ sleep 1
pct push "$VMID" /tmp/c138-mesh-install.tgz /var/tmp/c138-mesh.tgz
pct push "$VMID" /tmp/cast-bin-lxc /var/tmp/cast-bin
if [[ -f /tmp/c138-keeper.env ]]; then
pct exec "$VMID" -- mkdir -p /root/.secure-secrets
pct push "$VMID" /tmp/c138-keeper.env /root/.secure-secrets/chain138-keeper.env
pct exec "$VMID" -- chmod 700 /root/.secure-secrets
pct exec "$VMID" -- chmod 600 /root/.secure-secrets/chain138-keeper.env
fi
# Unprivileged LXCs may have /opt and /var/lib root-owned on host as nobody: use /var/tmp (writable as CT root).
BASE=/var/tmp/chain138-mesh
pct exec "$VMID" -- mkdir -p "$BASE/bin"
@@ -128,6 +141,8 @@ Environment=ENABLE_MESH_ORACLE_TICK=1
Environment=ENABLE_MESH_KEEPER_TICK=1
Environment=ENABLE_MESH_PMM_READS=1
Environment=ENABLE_MESH_WETH_READS=1
EnvironmentFile=-/opt/oracle-publisher/.env
EnvironmentFile=-/root/.secure-secrets/chain138-keeper.env
EnvironmentFile=-/var/tmp/chain138-mesh/smom-dbis-138/.env
ExecStart=/bin/bash /var/tmp/chain138-mesh/smom-dbis-138/scripts/reserve/pmm-mesh-6s-automation.sh
Restart=always
@@ -175,7 +190,7 @@ UNIT_HOST
}
fi
rm -f /tmp/c138-mesh-install.tgz /tmp/cast-bin-lxc
rm -f /tmp/c138-mesh-install.tgz /tmp/cast-bin-lxc /tmp/c138-keeper.env
REMOTE
done
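`PMM_MESH_LXC_TARGETS` is a space-separated list of `host:vmid` pairs that the loop above splits per target. A sketch of that parsing in Python (the addresses are the script's own defaults):

```python
def parse_targets(spec: str) -> list[tuple[str, str]]:
    """Split 'host:vmid host:vmid ...' into (host, vmid) pairs, as the deploy loop does."""
    return [tuple(item.split(":", 1)) for item in spec.split()]

assert parse_targets("192.168.11.11:3500 192.168.11.12:5700") == [
    ("192.168.11.11", "3500"),
    ("192.168.11.12", "5700"),
]
```

Splitting on the first `:` only keeps the parser safe if a future target ever embeds a colon in the VMID-side payload.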


@@ -0,0 +1,175 @@
#!/usr/bin/env bash
# Create Keycloak group sankofa-it-admin (or SANKOFA_IT_ADMIN_GROUP_NAME) and map realm role
# sankofa-it-admin onto it. Run after keycloak-sankofa-ensure-it-admin-role.sh.
#
# Usage: ./scripts/deployment/keycloak-sankofa-ensure-it-admin-group.sh [--dry-run]
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
if [ -f "$PROJECT_ROOT/.env" ]; then
set +u
set -a
# shellcheck source=/dev/null
source "$PROJECT_ROOT/.env" 2>/dev/null || true
set +a
set -u
fi
PROXMOX_HOST="${PROXMOX_HOST:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
KEYCLOAK_CT_VMID="${KEYCLOAK_CT_VMID:-${SANKOFA_KEYCLOAK_VMID:-7802}}"
REALM="${KEYCLOAK_REALM:-master}"
ADMIN_USER="${KEYCLOAK_ADMIN:-admin}"
ADMIN_PASS="${KEYCLOAK_ADMIN_PASSWORD:-}"
ROLE_NAME="${SANKOFA_IT_ADMIN_ROLE_NAME:-sankofa-it-admin}"
GROUP_NAME="${SANKOFA_IT_ADMIN_GROUP_NAME:-sankofa-it-admin}"
SSH_OPTS=(-o BatchMode=yes -o StrictHostKeyChecking=accept-new -o ConnectTimeout=15)
DRY=0
[[ "${1:-}" == "--dry-run" ]] && DRY=1
if [ -z "$ADMIN_PASS" ]; then
echo "KEYCLOAK_ADMIN_PASSWORD is not set in .env" >&2
exit 1
fi
if [ "$DRY" = 1 ]; then
echo "[dry-run] Would ensure group ${GROUP_NAME} + map realm role ${ROLE_NAME} in ${REALM}"
exit 0
fi
ssh "${SSH_OPTS[@]}" "root@${PROXMOX_HOST}" \
"pct exec ${KEYCLOAK_CT_VMID} -- env KC_PASS=\"${ADMIN_PASS}\" ADMUSER=\"${ADMIN_USER}\" REALM=\"${REALM}\" ROLE_NAME=\"${ROLE_NAME}\" GROUP_NAME=\"${GROUP_NAME}\" python3 -u -" <<'PY'
import json
import os
import urllib.error
import urllib.parse
import urllib.request
base = "http://127.0.0.1:8080"
realm = os.environ["REALM"]
role_name = os.environ["ROLE_NAME"]
group_name = os.environ["GROUP_NAME"]
admin_user = os.environ["ADMUSER"]
password = os.environ["KC_PASS"]
def post_form(url: str, data: dict) -> dict:
body = urllib.parse.urlencode(data).encode()
req = urllib.request.Request(url, data=body, method="POST")
with urllib.request.urlopen(req, timeout=60) as resp:
return json.loads(resp.read().decode())
def req_json(method: str, url: str, headers: dict, data=None):
body = None
hdrs = dict(headers)
if data is not None:
body = json.dumps(data).encode()
hdrs["Content-Type"] = "application/json"
r = urllib.request.Request(url, data=body, headers=hdrs, method=method)
with urllib.request.urlopen(r, timeout=120) as resp:
return resp.read().decode()
def req_json_ignore_404(method: str, url: str, headers: dict, data=None):
body = None
hdrs = dict(headers)
if data is not None:
body = json.dumps(data).encode()
hdrs["Content-Type"] = "application/json"
r = urllib.request.Request(url, data=body, headers=hdrs, method=method)
try:
with urllib.request.urlopen(r, timeout=120) as resp:
return resp.getcode(), resp.read().decode()
except urllib.error.HTTPError as e:
return e.code, e.read().decode() if e.fp else ""
tok = post_form(
f"{base}/realms/master/protocol/openid-connect/token",
{
"grant_type": "password",
"client_id": "admin-cli",
"username": admin_user,
"password": password,
},
)
access = tok.get("access_token")
if not access:
raise SystemExit(f"token failed: {tok}")
h = {"Authorization": f"Bearer {access}"}
# Role id
role_url = f"{base}/admin/realms/{realm}/roles/{urllib.parse.quote(role_name, safe='')}"
req_r = urllib.request.Request(role_url, headers=h)
try:
with urllib.request.urlopen(req_r, timeout=60) as resp:
role = json.loads(resp.read().decode())
except urllib.error.HTTPError as e:
raise SystemExit(
f"Realm role {role_name!r} missing; run keycloak-sankofa-ensure-it-admin-role.sh first. HTTP {e.code}"
) from e
role_id = role.get("id")
if not role_id:
raise SystemExit("role JSON missing id")
# Find or create group
list_url = f"{base}/admin/realms/{realm}/groups?search={urllib.parse.quote(group_name)}"
req_g = urllib.request.Request(list_url, headers=h)
with urllib.request.urlopen(req_g, timeout=60) as resp:
groups = json.loads(resp.read().decode())
gid = None
for g in groups:
if g.get("name") == group_name:
gid = g.get("id")
break
if not gid:
try:
req_json(
"POST",
f"{base}/admin/realms/{realm}/groups",
h,
{"name": group_name},
)
except urllib.error.HTTPError as e:
if e.code != 409:
raise
with urllib.request.urlopen(
urllib.request.Request(list_url, headers=h), timeout=60
) as resp:
groups = json.loads(resp.read().decode())
for g in groups:
if g.get("name") == group_name:
gid = g.get("id")
break
if not gid:
raise SystemExit("failed to resolve group id after create")
# Map realm role to group (idempotent: POST may 204 or 409 depending on KC version)
map_url = f"{base}/admin/realms/{realm}/groups/{gid}/role-mappings/realm"
code, _body = req_json_ignore_404(
"POST",
map_url,
h,
[{"id": role_id, "name": role_name}],
)
if code in (200, 204):
print(f"Mapped realm role {role_name!r} to group {group_name!r}.", flush=True)
elif code == 409:
print(f"Role likely already mapped to group {group_name!r}.", flush=True)
else:
print(f"POST role-mappings HTTP {code}: {_body[:500]}", flush=True)
print(
f"Add IT users to group {group_name!r} in Admin Console (Groups → Members).",
flush=True,
)
PY
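The group step above is find, create on miss, then re-list on 409, and the role mapping treats both 204 and 409 as success. A sketch of that idempotent control flow against a fake admin API (all class and method names here are hypothetical stand-ins, not the Keycloak client):

```python
class FakeAdminApi:
    """Tiny stand-in for the two Keycloak admin endpoints the script uses."""
    def __init__(self):
        self._groups = {}
        self._next_id = 1

    def search_groups(self, name):
        return [{"id": gid, "name": n} for gid, n in self._groups.items() if n == name]

    def create_group(self, name):
        if name in self._groups.values():
            return 409  # conflict: another caller created it first
        self._groups[self._next_id] = name
        self._next_id += 1
        return 201

def ensure_group(api, name):
    for g in api.search_groups(name):
        return g["id"]
    api.create_group(name)  # 201 or 409 both mean the group exists afterwards
    return api.search_groups(name)[0]["id"]

api = FakeAdminApi()
gid_first = ensure_group(api, "sankofa-it-admin")
gid_again = ensure_group(api, "sankofa-it-admin")
assert gid_first == gid_again  # second call is a pure no-op
```

Swallowing the 409 and re-reading is what lets the real script be re-run safely after a partial failure.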


@@ -0,0 +1,63 @@
#!/usr/bin/env bash
# Print dry-run deployment commands for the Wave 1 GRU V2 Chain 138 plan.
#
# Usage:
# bash scripts/deployment/plan-gru-v2-wave1-chain138.sh
# bash scripts/deployment/plan-gru-v2-wave1-chain138.sh --code=EUR
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
PLAN_JSON="${PROJECT_ROOT}/config/gru-v2-chain138-wave1-v2-plan.json"
FILTER_CODE=""
for arg in "$@"; do
case "$arg" in
--code=*) FILTER_CODE="${arg#--code=}" ;;
*)
echo "Unknown argument: $arg" >&2
exit 2
;;
esac
done
if ! command -v jq >/dev/null 2>&1; then
echo "jq is required" >&2
exit 1
fi
echo "=== GRU V2 Wave 1 Chain 138 plan ==="
echo "Plan: $PLAN_JSON"
echo ""
jq -c --arg filter "$FILTER_CODE" '
.assets[]
| select(($filter == "") or (.code == $filter))
| . as $asset
| .deployments[]
| {
code: $asset.code,
tokenName,
tokenSymbol,
currencyCode,
registryName,
registrySymbol,
currentCanonicalAddress
}
' "$PLAN_JSON" | while IFS= read -r row; do
code="$(jq -r '.code' <<<"$row")"
token_name="$(jq -r '.tokenName' <<<"$row")"
token_symbol="$(jq -r '.tokenSymbol' <<<"$row")"
currency_code="$(jq -r '.currencyCode' <<<"$row")"
registry_name="$(jq -r '.registryName' <<<"$row")"
registry_symbol="$(jq -r '.registrySymbol' <<<"$row")"
current_address="$(jq -r '.currentCanonicalAddress' <<<"$row")"
echo "- ${code} :: ${token_symbol}"
echo " legacy/current canonical: ${current_address}"
printf " command: TOKEN_NAME=%q TOKEN_SYMBOL=%q CURRENCY_CODE=%q REGISTRY_NAME=%q REGISTRY_SYMBOL=%q bash %q --dry-run\n" \
"$token_name" "$token_symbol" "$currency_code" "$registry_name" "$registry_symbol" \
"${PROJECT_ROOT}/scripts/deployment/deploy-gru-v2-generic-chain138.sh"
echo ""
done
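The jq program above flattens assets across their deployments and optionally filters by `--code`. The same selection expressed in Python over a stand-in plan document (the field names match the jq; the sample values are invented):

```python
def plan_rows(plan: dict, filter_code: str = ""):
    """Yield one row per (asset, deployment), optionally filtered by asset code."""
    for asset in plan["assets"]:
        if filter_code and asset["code"] != filter_code:
            continue
        for dep in asset["deployments"]:
            yield {"code": asset["code"], **dep}

plan = {
    "assets": [
        {"code": "EUR", "deployments": [{"tokenSymbol": "cWEURC"}]},
        {"code": "GBP", "deployments": [{"tokenSymbol": "cWGBPC"}]},
    ]
}
assert [r["tokenSymbol"] for r in plan_rows(plan)] == ["cWEURC", "cWGBPC"]
assert [r["code"] for r in plan_rows(plan, "EUR")] == ["EUR"]
```

Treating the empty filter as "match everything" mirrors jq's `($filter == "") or (.code == $filter)` guard.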


@@ -0,0 +1,188 @@
#!/usr/bin/env bash
set -euo pipefail
# Build an operator-safe stabilization plan for the live Mainnet cW/USD rails.
# Reads wallet balances and current DODO PMM reserves, then prints:
# - current live state
# - suggested micro-support swap sizes
# - reserve shortfalls versus a target matched-rail depth
# - exact dry-run commands for the existing swap helper
#
# Example:
# bash scripts/deployment/plan-mainnet-cw-stabilization.sh
# TARGET_MATCHED_QUOTE_UNITS=1000 MICRO_BPS=25 bash scripts/deployment/plan-mainnet-cw-stabilization.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
require_cmd python3
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
if [[ -z "$RPC_URL" || -z "$PRIVATE_KEY" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC and PRIVATE_KEY are required" >&2
exit 1
fi
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
MICRO_BPS="${MICRO_BPS:-25}"
REBALANCE_BPS="${REBALANCE_BPS:-100}"
TARGET_MATCHED_QUOTE_UNITS="${TARGET_MATCHED_QUOTE_UNITS:-1000}"
CWUSDT_MAINNET="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
CWUSDC_MAINNET="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
USDC_MAINNET="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
USDT_MAINNET="0xdAC17F958D2ee523a2206206994597C13D831ec7"
POOL_CWUSDT_USDC_MAINNET="${POOL_CWUSDT_USDC_MAINNET:-0x27f3aE7EE71Be3d77bAf17d4435cF8B895DD25D2}"
POOL_CWUSDC_USDC_MAINNET="${POOL_CWUSDC_USDC_MAINNET:-0x69776fc607e9edA8042e320e7e43f54d06c68f0E}"
POOL_CWUSDT_USDT_MAINNET="${POOL_CWUSDT_USDT_MAINNET:-0x79156F6B7bf71a1B72D78189B540A89A6C13F6FC}"
POOL_CWUSDC_USDT_MAINNET="${POOL_CWUSDC_USDT_MAINNET:-0xCC0fd27A40775c9AfcD2BBd3f7c902b0192c247A}"
to_units() {
python3 - "$1" <<'PY'
import sys
raw = int(sys.argv[1])
print(f"{raw / 1_000_000:.6f}")
PY
}
from_units_raw() {
python3 - "$1" <<'PY'
import sys
units = float(sys.argv[1])
print(int(round(units * 1_000_000)))
PY
}
read_balance() {
cast call "$1" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}'
}
read_mapping_pool() {
# Return empty (not a fatal error) when the integration env is unset or the call fails;
# the bare pipeline would otherwise kill the script under `set -euo pipefail`.
[[ -n "${DODO_PMM_INTEGRATION_MAINNET:-}" ]] || return 0
cast call "$DODO_PMM_INTEGRATION_MAINNET" 'pools(address,address)(address)' "$1" "$2" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}' || true
}
resolve_pool() {
local configured="$1"
local base_token="$2"
local quote_token="$3"
local mapped
mapped="$(read_mapping_pool "$base_token" "$quote_token")"
if [[ -n "$mapped" && "$mapped" != "0x0000000000000000000000000000000000000000" ]]; then
printf '%s\n' "$mapped"
else
printf '%s\n' "$configured"
fi
}
read_reserves() {
local output
output="$(cast call "$1" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
printf '%s\n' "$output" | sed -n '1p' | awk '{print $1}'
printf '%s\n' "$output" | sed -n '2p' | awk '{print $1}'
}
suggest_trade_raw() {
python3 - "$1" "$2" <<'PY'
import sys
reserve = int(sys.argv[1])
bps = int(sys.argv[2])
trade = max(1, reserve * bps // 10_000)
print(trade)
PY
}
POOL_CWUSDT_USDC_MAINNET="$(resolve_pool "$POOL_CWUSDT_USDC_MAINNET" "$CWUSDT_MAINNET" "$USDC_MAINNET")"
POOL_CWUSDC_USDC_MAINNET="$(resolve_pool "$POOL_CWUSDC_USDC_MAINNET" "$CWUSDC_MAINNET" "$USDC_MAINNET")"
POOL_CWUSDT_USDT_MAINNET="$(resolve_pool "$POOL_CWUSDT_USDT_MAINNET" "$CWUSDT_MAINNET" "$USDT_MAINNET")"
POOL_CWUSDC_USDT_MAINNET="$(resolve_pool "$POOL_CWUSDC_USDT_MAINNET" "$CWUSDC_MAINNET" "$USDT_MAINNET")"
wallet_cwusdt_raw="$(read_balance "$CWUSDT_MAINNET")"
wallet_cwusdc_raw="$(read_balance "$CWUSDC_MAINNET")"
wallet_usdt_raw="$(read_balance "$USDT_MAINNET")"
wallet_usdc_raw="$(read_balance "$USDC_MAINNET")"
read -r cwusdt_usdc_base cwusdt_usdc_quote < <(read_reserves "$POOL_CWUSDT_USDC_MAINNET" | xargs)
read -r cwusdc_usdc_base cwusdc_usdc_quote < <(read_reserves "$POOL_CWUSDC_USDC_MAINNET" | xargs)
read -r cwusdt_usdt_base cwusdt_usdt_quote < <(read_reserves "$POOL_CWUSDT_USDT_MAINNET" | xargs)
read -r cwusdc_usdt_base cwusdc_usdt_quote < <(read_reserves "$POOL_CWUSDC_USDT_MAINNET" | xargs)
target_quote_raw="$(from_units_raw "$TARGET_MATCHED_QUOTE_UNITS")"
cwusdt_usdt_micro_raw="$(suggest_trade_raw "$cwusdt_usdt_quote" "$MICRO_BPS")"
cwusdc_usdc_micro_raw="$(suggest_trade_raw "$cwusdc_usdc_quote" "$MICRO_BPS")"
cwusdt_usdc_micro_raw="$(suggest_trade_raw "$cwusdt_usdc_quote" "$MICRO_BPS")"
cwusdc_usdt_micro_raw="$(suggest_trade_raw "$cwusdc_usdt_quote" "$MICRO_BPS")"
cwusdt_usdt_rebalance_raw="$(suggest_trade_raw "$cwusdt_usdt_quote" "$REBALANCE_BPS")"
cwusdc_usdc_rebalance_raw="$(suggest_trade_raw "$cwusdc_usdc_quote" "$REBALANCE_BPS")"
echo "=== Mainnet cW Stabilization Plan ==="
echo "deployer=$DEPLOYER"
echo "rpc=$RPC_URL"
echo
echo "Live wallet balances (6 decimals):"
echo "- cWUSDT=$(to_units "$wallet_cwusdt_raw")"
echo "- cWUSDC=$(to_units "$wallet_cwusdc_raw")"
echo "- USDT=$(to_units "$wallet_usdt_raw")"
echo "- USDC=$(to_units "$wallet_usdc_raw")"
echo
echo "Live matched rails (base/quote, 6 decimals):"
echo "- cWUSDT/USDT: $(to_units "$cwusdt_usdt_base") / $(to_units "$cwusdt_usdt_quote")"
echo "- cWUSDC/USDC: $(to_units "$cwusdc_usdc_base") / $(to_units "$cwusdc_usdc_quote")"
echo "- cWUSDT/USDC: $(to_units "$cwusdt_usdc_base") / $(to_units "$cwusdt_usdc_quote")"
echo "- cWUSDC/USDT: $(to_units "$cwusdc_usdt_base") / $(to_units "$cwusdc_usdt_quote")"
echo
echo "Recommended operating mode:"
if (( wallet_usdc_raw < target_quote_raw || wallet_usdt_raw < target_quote_raw )); then
echo "- meaningful matched-rail deepening is quote-constrained on mainnet today"
echo "- use direct liquidity adds only after acquiring more USDC/USDT on mainnet"
echo "- until then, prefer micro-support swaps only"
else
echo "- wallet quote inventory is sufficient for at least one ${TARGET_MATCHED_QUOTE_UNITS} unit tranche on each matched rail"
echo "- deepen matched rails first, then use micro-support swaps"
fi
echo
echo "Target matched-rail depth per quote side:"
echo "- targetQuoteUnits=${TARGET_MATCHED_QUOTE_UNITS}"
echo "- targetQuoteRaw=${target_quote_raw}"
echo "- USDT shortfall raw=$(( target_quote_raw > wallet_usdt_raw ? target_quote_raw - wallet_usdt_raw : 0 ))"
echo "- USDC shortfall raw=$(( target_quote_raw > wallet_usdc_raw ? target_quote_raw - wallet_usdc_raw : 0 ))"
echo
echo "Suggested micro-support sizes:"
echo "- cWUSDT/USDT micro raw=${cwusdt_usdt_micro_raw} units=$(to_units "$cwusdt_usdt_micro_raw")"
echo "- cWUSDC/USDC micro raw=${cwusdc_usdc_micro_raw} units=$(to_units "$cwusdc_usdc_micro_raw")"
echo "- cWUSDT/USDC cross-rail micro raw=${cwusdt_usdc_micro_raw} units=$(to_units "$cwusdt_usdc_micro_raw")"
echo "- cWUSDC/USDT cross-rail micro raw=${cwusdc_usdt_micro_raw} units=$(to_units "$cwusdc_usdt_micro_raw")"
echo
echo "Suggested matched-rail rebalance ceilings:"
echo "- cWUSDT/USDT rebalance raw=${cwusdt_usdt_rebalance_raw} units=$(to_units "$cwusdt_usdt_rebalance_raw")"
echo "- cWUSDC/USDC rebalance raw=${cwusdc_usdc_rebalance_raw} units=$(to_units "$cwusdc_usdc_rebalance_raw")"
echo
echo "Recommended sequence:"
echo "1. Acquire mainnet quote inventory (USDC/USDT) before attempting meaningful depth adds."
echo "2. Deepen matched rails first: cWUSDT/USDT and cWUSDC/USDC."
echo "3. Use cross-quote rails only for small quote-lane rebalancing."
echo "4. Keep support swaps below micro size unless reserves are materially larger."
echo
echo "Dry-run commands:"
cat <<EOF
bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdt-usdt --direction=base-to-quote --amount=${cwusdt_usdt_micro_raw} --dry-run
bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdc-usdc --direction=base-to-quote --amount=${cwusdc_usdc_micro_raw} --dry-run
bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdt-usdc --direction=base-to-quote --amount=${cwusdt_usdc_micro_raw} --dry-run
bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdc-usdt --direction=base-to-quote --amount=${cwusdc_usdt_micro_raw} --dry-run
bash scripts/verify/check-mainnet-public-dodo-cw-bootstrap-pools.sh
EOF
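The micro and rebalance sizing above is plain integer arithmetic over raw 6-decimal reserves. A standalone sketch of `suggest_trade_raw` and `to_units` (the reserve value here is hypothetical):

```python
# Integer bps sizing and 6-decimal unit conversion, mirroring the
# suggest_trade_raw / to_units helpers above. Sample numbers are made up.

def suggest_trade_raw(reserve_raw: int, bps: int) -> int:
    # Floor division, but never suggest a zero-size trade.
    return max(1, reserve_raw * bps // 10_000)

def to_units(raw: int) -> str:
    return f"{raw / 1_000_000:.6f}"

reserve = 4_250_000_000                      # 4250.000000 USDT in raw units
micro = suggest_trade_raw(reserve, 25)       # MICRO_BPS default: 25 bps
rebalance = suggest_trade_raw(reserve, 100)  # REBALANCE_BPS default: 100 bps
print(micro, to_units(micro))                # 10625000 10.625000
print(rebalance, to_units(rebalance))        # 42500000 42.500000
```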


@@ -0,0 +1,108 @@
#!/usr/bin/env bash
set -euo pipefail
# Plan rebalancing Mainnet cWUSDC/USDC when the operator wallet has **no USDC**.
#
# Direct addLiquidity (add-mainnet-public-dodo-cw-liquidity.sh) cannot run with zero
# wallet USDC: DODOPMMIntegration.addLiquidity requires baseAmount>0 AND quoteAmount>0.
#
# The repo-supported path without wallet USDC is the **flash quote-push** loop
# (Aave flashLoan USDC → PMM swap USDC→cWUSDC → external unwind cWUSDC→USDC
# → repay Aave + premium). Each successful round **raises PMM quote reserves** and
# **draws base** from the pool, moving base-heavy / quote-thin books toward peg.
#
# This script is read-only + local modeling:
# - reads live pool reserves (cast)
# - prints copy/paste node commands for pmm-flash-push-break-even.mjs
# - points to on-chain execution (deploy + call receiver), not implemented here
#
# References:
# docs/03-deployment/MAINNET_CWUSD_HYBRID_FLASH_LOOP_CALCULATION_WHITEPAPER.md
# smom-dbis-138/contracts/flash/AaveQuotePushFlashReceiver.sol
# smom-dbis-138/script/deploy/DeployAaveQuotePushFlashReceiver.s.sol
# scripts/verify/run-mainnet-cwusdc-usdc-ladder-steps-1-3.sh (preflight + dry-run ladder)
# scripts/deployment/run-mainnet-cwusdc-flash-quote-push-model-sweep.sh (multi-size dry-run loop)
#
# Usage:
# source scripts/lib/load-project-env.sh
# bash scripts/deployment/plan-mainnet-cwusdc-flash-quote-push-rebalance.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROXMOX_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
# shellcheck disable=SC1091
source "${PROXMOX_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
# shellcheck disable=SC1091
source "${PROXMOX_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || true
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
CWUSDC="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
USDC="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
POOL="${POOL_CWUSDC_USDC_MAINNET:-0x69776fc607e9edA8042e320e7e43f54d06c68f0E}"
if [[ -z "$RPC_URL" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC is required" >&2
exit 1
fi
if [[ -n "$INTEGRATION" ]]; then
mapped_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$CWUSDC" "$USDC" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}' || true)"
if [[ -n "$mapped_pool" && "$mapped_pool" != "0x0000000000000000000000000000000000000000" && "${mapped_pool,,}" != "${POOL,,}" ]]; then
POOL="$mapped_pool"
fi
fi
out="$(cast call "$POOL" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
B="$(printf '%s\n' "$out" | sed -n '1p' | awk '{print $1}')"
Q="$(printf '%s\n' "$out" | sed -n '2p' | awk '{print $1}')"
PMM_JS="${PROXMOX_ROOT}/scripts/analytics/pmm-flash-push-break-even.mjs"
DEFAULT_EXIT_CMD="printf 1.02"
EXIT_CMD="${PMM_FLASH_EXIT_PRICE_CMD:-$DEFAULT_EXIT_CMD}"
echo "=== Mainnet cWUSDC/USDC rebalance without wallet USDC (flash quote-push plan) ==="
echo "pool=$POOL"
echo "live_base_reserve_raw=$B live_quote_reserve_raw=$Q"
echo
echo "Why addLiquidity is not enough: integration requires both tokens from the wallet."
echo "Flash quote-push borrows USDC for the PMM leg; wallet only needs ETH for gas + a deployed receiver."
echo
echo "1) Model one or more flash sizes (read-only; tune -x or use model sweep script):"
echo "node \"${PMM_JS}\" \\"
echo " -B \"$B\" -Q \"$Q\" \\"
echo " --full-loop-dry-run --loop-strategy quote-push \\"
echo " -x 10000000 \\"
echo " --base-asset cWUSDC --base-decimals 6 --quote-asset USDC --quote-decimals 6 \\"
echo " --external-exit-price-cmd \"$EXIT_CMD\" \\"
echo " --flash-fee-bps 5 --lp-fee-bps 3 \\"
echo " --flash-provider-cap 1000000000000 --flash-provider-name 'Aave mainnet (cap placeholder)' \\"
echo " --gas-tx-count 3 --gas-per-tx 250000 --max-fee-gwei 40 --native-token-price 3200 \\"
echo " --max-post-trade-deviation-bps 500"
echo
echo "2) Multi-size modeling loop (still no chain writes):"
echo " FLASH_MODEL_SCAN_SIZES=5000000,10000000,25000000 \\"
echo " bash scripts/deployment/run-mainnet-cwusdc-flash-quote-push-model-sweep.sh"
echo
echo "3) On-chain stack (forge, from repo root):"
echo " bash scripts/deployment/deploy-mainnet-aave-quote-push-stack.sh --dry-run # then --apply"
echo " bash scripts/deployment/run-mainnet-aave-cwusdc-quote-push-once.sh --dry-run"
echo " FLASH_LOOP_COUNT=3 bash scripts/deployment/run-mainnet-aave-cwusdc-quote-push-loop.sh --apply"
echo " Solidity: smom-dbis-138/script/flash/RunMainnetAaveCwusdcUsdcQuotePushOnce.s.sol"
echo " Fork proofs: cd smom-dbis-138 && forge test --match-contract AaveQuotePushFlashReceiverMainnetForkTest"
echo
echo "4) Reserve peg monitor after any live work:"
echo " bash scripts/verify/check-mainnet-cwusdc-usdc-reserve-peg.sh"
echo
echo "5) Pick a valid Uniswap V3 unwind (no guessing fee tiers):"
echo " bash scripts/verify/probe-uniswap-v3-cwusdc-usdc-mainnet.sh"
echo " If no pool: UNWIND_MODE=2 — build path: bash scripts/verify/build-uniswap-v3-exact-input-path-hex.sh ... (VERIFY_POOLS=1 optional)"
echo " Calculations report (dry-run snapshots): reports/status/mainnet_cwusdc_usdc_quote_push_calculations_2026-04-12.md"
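The flash quote-push loop described in the header can be reasoned about with a first-order model. A simplified sketch (the real modeler, `pmm-flash-push-break-even.mjs`, walks the actual PMM curve; here the PMM leg is approximated at 1:1, and the exit price and gas cost are hypothetical):

```python
# First-order PnL for one flash quote-push round:
#   borrow X USDC -> PMM swap to cWUSDC (LP fee) -> external unwind at
#   exit_price -> repay X plus the flash premium; gas is a fixed cost.
# The PMM leg is approximated at a flat 1:1 price, which the real modeler
# does not assume; the fee bps match the example command above.

def round_pnl_usdc(flash_usdc: float, exit_price: float,
                   flash_fee_bps: int = 5, lp_fee_bps: int = 3,
                   gas_cost_usd: float = 30.0) -> float:
    cwusdc_out = flash_usdc * (1 - lp_fee_bps / 10_000)  # PMM leg (~1:1 approx)
    unwind_usdc = cwusdc_out * exit_price                # external exit leg
    repay = flash_usdc * (1 + flash_fee_bps / 10_000)    # Aave principal + premium
    return unwind_usdc - repay - gas_cost_usd

# Small sizes are gas-dominated; a round only clears once size covers fixed costs.
print(round_pnl_usdc(10.0, 1.02) < 0)
print(round_pnl_usdc(5_000.0, 1.02) > 0)
```

This linear model ignores PMM price impact, so it overstates profit at large sizes; treat it as a sanity bound, not a quote.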


@@ -0,0 +1,231 @@
#!/usr/bin/env bash
set -euo pipefail
# Read-only plan for maintaining a 1:1 vault reserve peg on Mainnet cWUSDC/USDC
# (both tokens use 6 decimals — target is base_reserve ≈ quote_reserve in raw units).
#
# DODOPMMIntegration.addLiquidity requires baseAmount>0 and quoteAmount>0.
# First-order correction when base > quote:
# quote_add = base_add + (base_reserve - quote_reserve)
# When quote > base:
# base_add = quote_add + (quote_reserve - base_reserve)
#
# Largest single tranche the deployer wallet can fund toward that linear model:
# base-thick: base_add = min(wallet_cWUSDC, wallet_USDC - gap)
# quote-thick: quote_add = min(wallet_USDC, wallet_cWUSDC - gap)
# (skipped when the min is < 1). Re-read on-chain reserves after each tx.
#
# When wallet USDC is insufficient for asymmetric peg-close, matched 1:1 adds still
# reduce base/quote imbalance ratio until you can fund quote inventory.
#
# Usage:
# source scripts/lib/load-project-env.sh
# bash scripts/deployment/plan-mainnet-cwusdc-usdc-rebalance-liquidity.sh
#
# Env:
# ETHEREUM_MAINNET_RPC (required)
# PRIVATE_KEY (optional — wallet balances + suggested add sizes)
# POOL_CWUSDC_USDC_MAINNET (optional)
# PEG_IMBALANCE_MAX_BPS (optional — printed vs |base-quote|/max; default 25)
# MIN_BASE_SUGGEST (optional — small example tranche when base thick; default 1e6 raw)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
# shellcheck disable=SC1091
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
# shellcheck disable=SC1091
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || true
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
require_cmd python3
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
CWUSDC="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
USDC="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
# Default public pool; env may point at a legacy rail — prefer integration mapping when it disagrees.
POOL="${POOL_CWUSDC_USDC_MAINNET:-0x69776fc607e9edA8042e320e7e43f54d06c68f0E}"
MIN_BASE_SUGGEST="${MIN_BASE_SUGGEST:-1000000}"
PEG_MAX_BPS="${PEG_IMBALANCE_MAX_BPS:-25}"
if [[ -z "$RPC_URL" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC is required" >&2
exit 1
fi
if [[ -n "$INTEGRATION" ]]; then
mapped_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$CWUSDC" "$USDC" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}' || true)"
if [[ -n "$mapped_pool" && "$mapped_pool" != "0x0000000000000000000000000000000000000000" && "${mapped_pool,,}" != "${POOL,,}" ]]; then
POOL="$mapped_pool"
fi
fi
read_reserves() {
cast call "$POOL" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL"
}
to_human() {
python3 - "$1" <<'PY'
import sys
print(f"{int(sys.argv[1]) / 1_000_000:.6f}")
PY
}
# Read reserves once so base and quote come from the same on-chain snapshot.
reserves_out="$(read_reserves)"
base_r="$(printf '%s\n' "$reserves_out" | sed -n '1p' | awk '{print $1}')"
quote_r="$(printf '%s\n' "$reserves_out" | sed -n '2p' | awk '{print $1}')"
gap="$(python3 - "$base_r" "$quote_r" <<'PY'
import sys
b, q = int(sys.argv[1]), int(sys.argv[2])
print(abs(b - q))
PY
)"
if (( base_r > quote_r )); then
thicker=base
need_asym_quote_over_base="$(( base_r - quote_r ))"
else
thicker=quote
need_asym_quote_over_base="$(( quote_r - base_r ))"
fi
imb_bps="$(python3 - "$base_r" "$quote_r" <<'PY'
import sys
b, q = int(sys.argv[1]), int(sys.argv[2])
m = max(b, q, 1)
print(abs(b - q) * 10000 // m)
PY
)"
peg_ok=1
if (( PEG_MAX_BPS <= 0 )); then
(( gap == 0 )) || peg_ok=0
else
(( imb_bps <= PEG_MAX_BPS )) || peg_ok=0
fi
echo "=== Mainnet cWUSDC/USDC 1:1 reserve peg plan ==="
echo "pool=$POOL"
echo "baseReserve_raw=$base_r human=$(to_human "$base_r") cWUSDC"
echo "quoteReserve_raw=$quote_r human=$(to_human "$quote_r") USDC"
echo "abs(base-quote)_raw=$gap human=$(to_human "$gap")"
echo "imbalance_bps=$imb_bps peg_max_bps=$PEG_MAX_BPS peg_status=$([[ "$peg_ok" == 1 ]] && echo IN_BAND || echo OUT_OF_BAND)"
echo "thicker_side=$thicker"
echo "peg_check: bash scripts/verify/check-mainnet-cwusdc-usdc-reserve-peg.sh"
echo
if [[ -n "${PRIVATE_KEY:-}" ]]; then
dep="$(cast wallet address --private-key "$PRIVATE_KEY")"
wb="$(cast call "$CWUSDC" 'balanceOf(address)(uint256)' "$dep" --rpc-url "$RPC_URL" | awk '{print $1}')"
wq="$(cast call "$USDC" 'balanceOf(address)(uint256)' "$dep" --rpc-url "$RPC_URL" | awk '{print $1}')"
echo "deployer=$dep"
echo "wallet_cWUSDC_raw=$wb human=$(to_human "$wb")"
echo "wallet_USDC_raw=$wq human=$(to_human "$wq")"
echo
matched="$(python3 - "$wb" "$wq" <<'PY'
import sys
print(min(int(sys.argv[1]), int(sys.argv[2])))
PY
)"
if (( matched > 0 )); then
echo "Suggested matched 1:1 deepen (uses min(wallet cWUSDC, wallet USDC)):"
echo " bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \\"
echo " --pair=cwusdc-usdc --base-amount=$matched --quote-amount=$matched --dry-run"
fi
if [[ "$thicker" == base ]]; then
G="$need_asym_quote_over_base"
read -r opt_b opt_q <<<"$(
python3 - "$wb" "$wq" "$G" <<'PY'
import sys
wb, wq, G = int(sys.argv[1]), int(sys.argv[2]), int(sys.argv[3])
if G <= 0:
    print("0 0")
else:
    cap = min(wb, wq - G) if wq > G else 0
    if cap < 1:
        print("0 0")
    else:
        print(cap, cap + G)
PY
)"
echo
echo "Asymmetric peg correction (base>quote): quote_add = base_add + (base_reserve - quote_reserve)"
if (( opt_b >= 1 && opt_q >= 1 )); then
echo " largest wallet-sized peg tranche (linear model): base_add=$opt_b quote_add=$opt_q"
echo " bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \\"
echo " --pair=cwusdc-usdc --base-amount=$opt_b --quote-amount=$opt_q --dry-run"
else
echo " no affordable asymmetric peg tranche (need USDC_raw > gap in wallet; gap_raw=$G)."
fi
asym_b="$MIN_BASE_SUGGEST"
asym_q="$(python3 - "$asym_b" "$need_asym_quote_over_base" <<'PY'
import sys
b, g = int(sys.argv[1]), int(sys.argv[2])
print(b + g)
PY
)"
echo
echo " fixed example MIN_BASE_SUGGEST=$MIN_BASE_SUGGEST -> base_add=$asym_b quote_add=$asym_q"
if (( wq >= asym_q && wb >= asym_b )); then
echo " wallet sufficient for example tranche — dry-run:"
echo " bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \\"
echo " --pair=cwusdc-usdc --base-amount=$asym_b --quote-amount=$asym_q --dry-run"
else
need_q_short="$(( asym_q > wq ? asym_q - wq : 0 ))"
need_b_short="$(( asym_b > wb ? asym_b - wb : 0 ))"
echo " wallet shortfall vs example: short_quote_raw=$need_q_short short_base_raw=$need_b_short"
echo " fund deployer USDC (and cWUSDC if base-thin), then dry-run before live add."
fi
else
G="$need_asym_quote_over_base"
read -r opt_b opt_q <<<"$(
python3 - "$wb" "$wq" "$G" <<'PY'
import sys
wb, wq, G = int(sys.argv[1]), int(sys.argv[2]), int(sys.argv[3])
if G <= 0:
    print("0 0")
else:
    cap = min(wq, wb - G) if wb > G else 0
    if cap < 1:
        print("0 0")
    else:
        print(cap + G, cap)
PY
)"
echo
echo "Asymmetric peg correction (quote>base): base_add = quote_add + (quote_reserve - base_reserve)"
if (( opt_b >= 1 && opt_q >= 1 )); then
echo " largest wallet-sized peg tranche (linear model): base_add=$opt_b quote_add=$opt_q"
echo " bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \\"
echo " --pair=cwusdc-usdc --base-amount=$opt_b --quote-amount=$opt_q --dry-run"
else
echo " no affordable asymmetric peg tranche (need cWUSDC_raw > gap in wallet; gap_raw=$G)."
fi
fi
else
echo "Set PRIVATE_KEY (via .env) to print wallet-based add suggestions."
fi
echo
echo "Reference: smom-dbis-138/contracts/dex/DODOPMMIntegration.sol addLiquidity (both amounts must be > 0)."
if [[ -n "${PRIVATE_KEY:-}" ]]; then
# Reuse the wallet USDC balance read above instead of re-querying the chain.
if (( wq == 0 )); then
echo
echo "Wallet USDC is 0: direct addLiquidity is impossible. Flash quote-push path:"
echo " bash scripts/deployment/plan-mainnet-cwusdc-flash-quote-push-rebalance.sh"
echo " bash scripts/deployment/run-mainnet-cwusdc-flash-quote-push-model-sweep.sh"
echo " bash scripts/deployment/deploy-mainnet-aave-quote-push-stack.sh --dry-run"
echo " bash scripts/deployment/run-mainnet-aave-cwusdc-quote-push-once.sh --dry-run"
fi
fi
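The largest-affordable asymmetric tranche from the header comments can be computed offline. A sketch for the base-thick case with hypothetical reserves and balances (the quote-thick case is symmetric; the live script re-reads on-chain reserves before each add):

```python
# Asymmetric peg-close tranche for a base-thick pool, per the linear model above:
#   quote_add = base_add + (base_reserve - quote_reserve)
#   base_add  = min(wallet_base, wallet_quote - gap), skipped when < 1.
# All amounts are raw 6-decimal integers; the sample values are hypothetical.

def peg_tranche_base_thick(base_r: int, quote_r: int,
                           wallet_base: int, wallet_quote: int) -> tuple[int, int]:
    gap = base_r - quote_r
    if gap <= 0 or wallet_quote <= gap:
        return (0, 0)
    cap = min(wallet_base, wallet_quote - gap)
    if cap < 1:
        return (0, 0)
    return (cap, cap + gap)

base_add, quote_add = peg_tranche_base_thick(
    1_200_000_000, 1_000_000_000,  # pool: 1200 base vs 1000 quote -> gap 200
    500_000_000, 300_000_000)      # wallet: 500 base, 300 quote
print(base_add, quote_add)         # 100000000 300000000
# Post-add reserves match: 1200 + 100 == 1000 + 300.
```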


@@ -0,0 +1,141 @@
#!/usr/bin/env python3
"""
Export 33 x 33 x 6 = 6534 Chain 138 soak wallet addresses from a BIP39 mnemonic.
Uses `cast wallet address` (Foundry) per index — parallel workers for speed.
Never writes the mnemonic to disk; it is read only from the PMM_SOAK_GRID_MNEMONIC env var.

Usage:
    PMM_SOAK_GRID_MNEMONIC='...' python3 scripts/deployment/pmm-soak-export-wallet-grid.py --out config/pmm-soak-wallet-grid.json

    # Smoke / CI (first N indices only):
    PMM_SOAK_GRID_MNEMONIC='...' python3 scripts/deployment/pmm-soak-export-wallet-grid.py --out /tmp/grid-smoke.json --count 10
"""
from __future__ import annotations
import argparse
import json
import os
import subprocess
import sys
from concurrent.futures import ProcessPoolExecutor, as_completed
LPBCA_COUNT = 33
BRANCH_COUNT = 33
CLASS_COUNT = 6
LINEAR_COUNT = LPBCA_COUNT * BRANCH_COUNT * CLASS_COUNT # 6534
def linear_to_coords(linear: int) -> tuple[int, int, int]:
    lpbca = linear // (BRANCH_COUNT * CLASS_COUNT)
    rem = linear % (BRANCH_COUNT * CLASS_COUNT)
    branch = rem // CLASS_COUNT
    klass = rem % CLASS_COUNT
    return lpbca, branch, klass


def derive_address(linear_index: int) -> tuple[int, str]:
    mn = os.environ.get("PMM_SOAK_GRID_MNEMONIC", "").strip()
    if not mn:
        raise RuntimeError("PMM_SOAK_GRID_MNEMONIC is not set")
    path = f"m/44'/60'/0'/0/{linear_index}"
    r = subprocess.run(
        [
            "cast",
            "wallet",
            "address",
            "--mnemonic",
            mn,
            "--mnemonic-derivation-path",
            path,
        ],
        capture_output=True,
        text=True,
        check=False,
    )
    if r.returncode != 0:
        raise RuntimeError(f"cast failed for index {linear_index}: {r.stderr.strip()}")
    addr = r.stdout.strip()
    if not addr.startswith("0x") or len(addr) != 42:
        raise RuntimeError(f"bad address for index {linear_index}: {addr!r}")
    return linear_index, addr


def main() -> int:
    ap = argparse.ArgumentParser(description="Export PMM soak 33x33x6 wallet grid addresses.")
    ap.add_argument(
        "--out",
        required=True,
        help="Output JSON path (use a gitignored file for real grids).",
    )
    ap.add_argument("--jobs", type=int, default=8, help="Parallel cast workers (default 8).")
    ap.add_argument(
        "--count",
        type=int,
        default=None,
        metavar="N",
        help=f"Export only linear indices 0..N-1 (default: full grid {LINEAR_COUNT}). For smoke tests.",
    )
    args = ap.parse_args()
    if not os.environ.get("PMM_SOAK_GRID_MNEMONIC", "").strip():
        print("ERROR: set PMM_SOAK_GRID_MNEMONIC in the environment.", file=sys.stderr)
        return 1
    n = LINEAR_COUNT if args.count is None else int(args.count)
    if n < 1 or n > LINEAR_COUNT:
        print(f"ERROR: --count must be 1..{LINEAR_COUNT}", file=sys.stderr)
        return 1
    addrs: list[str | None] = [None] * n
    jobs = max(1, min(args.jobs, 32))
    with ProcessPoolExecutor(max_workers=jobs) as ex:
        futs = {ex.submit(derive_address, i): i for i in range(n)}
        for fut in as_completed(futs):
            i, addr = fut.result()
            addrs[i] = addr
    wallets = []
    for li in range(n):
        lpbca, branch, klass = linear_to_coords(li)
        wallets.append(
            {
                "lpbca": lpbca,
                "branch": branch,
                "class": klass,
                "linearIndex": li,
                "address": addrs[li],
            }
        )
    doc: dict = {
        "version": 1,
        "dimensions": {
            "lpbcaCount": LPBCA_COUNT,
            "branchCount": BRANCH_COUNT,
            "classCount": CLASS_COUNT,
            "linearCount": LINEAR_COUNT,
            "linearCountExported": n,
        },
        "derivation": {
            "pathTemplate": "m/44'/60'/0'/0/{linearIndex}",
            "linearIndexFormula": "linear = lpbca * 198 + branch * 6 + class",
        },
        "wallets": wallets,
    }
    if n < LINEAR_COUNT:
        doc["partialExport"] = True
    out_path = args.out
    out_dir = os.path.dirname(os.path.abspath(out_path))
    if out_dir:
        os.makedirs(out_dir, exist_ok=True)
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(doc, f, indent=0)
        f.write("\n")
    print(f"Wrote {n} wallet(s) to {out_path}", file=sys.stderr)
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
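The grid indexing above is a plain mixed-radix encoding. A quick round-trip check of `linear_to_coords` against the `linearIndexFormula` recorded in the exported JSON:

```python
# Round-trip the 33 x 33 x 6 grid indexing used by the exporter above.
BRANCH_COUNT, CLASS_COUNT = 33, 6

def linear_to_coords(linear: int) -> tuple[int, int, int]:
    lpbca, rem = divmod(linear, BRANCH_COUNT * CLASS_COUNT)
    branch, klass = divmod(rem, CLASS_COUNT)
    return lpbca, branch, klass

def coords_to_linear(lpbca: int, branch: int, klass: int) -> int:
    # Matches the "linearIndexFormula" field: linear = lpbca * 198 + branch * 6 + class
    return lpbca * BRANCH_COUNT * CLASS_COUNT + branch * CLASS_COUNT + klass

assert all(coords_to_linear(*linear_to_coords(i)) == i for i in range(33 * 33 * 6))
print(linear_to_coords(6533))  # last wallet -> (32, 32, 5)
```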


@@ -0,0 +1,56 @@
#!/usr/bin/env bash
# Non-secret smoke: export a small grid (test mnemonic), dry-run fund + soak bots.
# Does not broadcast txs. Uses Foundry `cast` + LAN RPC if reachable.
#
# Usage:
# bash scripts/deployment/pmm-soak-grid-smoke-check.sh
# Env:
# RPC_URL_138 (optional; in CI defaults to public read-only RPC if unset)
# PMM_SOAK_GRID_SMOKE_RPC_URL — override public RPC URL used when CI is set
# PMM_SOAK_GRID_SMOKE_MNEMONIC — override test mnemonic (default: Anvil-style test phrase)
# PMM_SOAK_GRID_SMOKE_JSON (optional; temp grid JSON output path; default under ${TMPDIR:-/tmp})
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
cd "$PROJECT_ROOT"
SMOKE_JSON="${PMM_SOAK_GRID_SMOKE_JSON:-${TMPDIR:-/tmp}/pmm-soak-grid-smoke-$$.json}"
# Remove the temp grid even when a dry-run step fails midway.
trap 'rm -f "$SMOKE_JSON"' EXIT
MN="${PMM_SOAK_GRID_SMOKE_MNEMONIC:-test test test test test test test test test test test junk}"
# GitHub Actions (and similar) have no LAN; child scripts source .env and would default to Core LAN RPC.
# Export a read-only public endpoint unless the caller already set RPC_URL_138.
if [[ -n "${CI:-}" && -z "${RPC_URL_138:-}" ]]; then
export RPC_URL_138="${PMM_SOAK_GRID_SMOKE_RPC_URL:-https://rpc-http-pub.d-bis.org}"
fi
echo "=== PMM soak grid smoke (dry-run only) ==="
echo "Temp grid JSON: $SMOKE_JSON"
export PMM_SOAK_GRID_MNEMONIC="$MN"
python3 scripts/deployment/pmm-soak-export-wallet-grid.py --out "$SMOKE_JSON" --count 8 --jobs 4
export PMM_SOAK_GRID_JSON="$SMOKE_JSON"
echo ""
echo "[1/4] Operator fund-grid dry-run (native, linear 0-4)..."
NATIVE_AMOUNT_WEI=10000000000000000 \
bash scripts/deployment/pmm-soak-operator-fund-grid.sh --native --from-linear 0 --to-linear 4
echo ""
echo "[2/4] Operator fund-grid dry-run (cUSDT, linear 0-3)..."
TOKEN=0x93E66202A11B1772E55407B32B44e5Cd8eda7f22 AMOUNT_WEI_PER_WALLET=1000000 \
bash scripts/deployment/pmm-soak-operator-fund-grid.sh --from-linear 0 --to-linear 3
echo ""
echo "[3/4] Grid bot dry-run (2 ticks, linear 0-7 only for partial export)..."
PMM_SOAK_LINEAR_MIN=0 PMM_SOAK_LINEAR_MAX=7 \
bash scripts/deployment/chain138-pmm-soak-grid-bot.sh --dry-run --max-ticks 2 --pool-preset stable
echo ""
echo "[4/4] Single-wallet soak dry-run (1 tick)..."
bash scripts/deployment/chain138-pmm-random-soak-swaps.sh --dry-run --max-ticks 1 --pool-preset stable
rm -f "$SMOKE_JSON"
echo ""
echo "=== PMM soak grid smoke OK ==="


@@ -0,0 +1,46 @@
#!/usr/bin/env bash
set -euo pipefail
# Prepare the local upstream-native Uniswap v3 worktree used for the Chain 138
# replacement track. This keeps the live pilot venue layer untouched while
# giving operators a canonical place to build and audit upstream contracts.
CORE_REPO="${CHAIN138_UNISWAP_V3_NATIVE_CORE_REPO:-/home/intlc/projects/uniswap-v3-core}"
PERIPHERY_REPO="${CHAIN138_UNISWAP_V3_NATIVE_PERIPHERY_REPO:-/home/intlc/projects/uniswap-v3-periphery}"
CORE_URL="${CHAIN138_UNISWAP_V3_NATIVE_CORE_URL:-https://github.com/Uniswap/v3-core.git}"
PERIPHERY_URL="${CHAIN138_UNISWAP_V3_NATIVE_PERIPHERY_URL:-https://github.com/Uniswap/v3-periphery.git}"
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd git
prepare_repo() {
local label="$1"
local path="$2"
local url="$3"
if [[ ! -d "${path}/.git" ]]; then
echo "[info] cloning ${label} into ${path}"
git clone "${url}" "${path}"
else
echo "[info] ${label} already present at ${path}"
fi
git -C "${path}" fetch --tags origin >/dev/null 2>&1 || true
printf '[ok] %s %s @ %s\n' \
"${label}" \
"$(git -C "${path}" remote get-url origin 2>/dev/null || echo unknown-remote)" \
"$(git -C "${path}" rev-parse --short HEAD)"
}
echo "== Chain 138 upstream-native Uniswap v3 worktree =="
prepare_repo "v3-core" "${CORE_REPO}" "${CORE_URL}"
prepare_repo "v3-periphery" "${PERIPHERY_REPO}" "${PERIPHERY_URL}"
echo
echo "[note] These repos support the upstream-native replacement track only."
echo "[note] The live Chain 138 planner still routes through the funded pilot-compatible venue layer until native deployment passes preflight."


@@ -0,0 +1,176 @@
#!/usr/bin/env bash
# Print the exact Chain 138 CWMultiTokenBridgeL1 destination-wiring commands
# still required for the active gas-native rollout lanes.
#
# Usage:
# bash scripts/deployment/print-gas-l1-destination-wiring-commands.sh
# bash scripts/deployment/print-gas-l1-destination-wiring-commands.sh --json
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT
# shellcheck source=/dev/null
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" >/dev/null 2>&1 || true
OUTPUT_JSON=0
for arg in "$@"; do
case "$arg" in
--json) OUTPUT_JSON=1 ;;
*)
echo "Unknown argument: $arg" >&2
exit 2
;;
esac
done
command -v node >/dev/null 2>&1 || {
echo "[FAIL] Missing required command: node" >&2
exit 1
}
command -v cast >/dev/null 2>&1 || {
echo "[FAIL] Missing required command: cast" >&2
exit 1
}
OUTPUT_JSON="$OUTPUT_JSON" node <<'NODE'
const path = require('path');
const { execFileSync } = require('child_process');
const loader = require(path.join(process.env.PROJECT_ROOT, 'config/token-mapping-loader.cjs'));
const outputJson = process.env.OUTPUT_JSON === '1';
const chain138RpcUrl = process.env.RPC_URL_138 || '';
const l1BridgeAddress =
process.env.CHAIN138_L1_BRIDGE ||
process.env.CW_L1_BRIDGE_CHAIN138 ||
process.env.CW_L1_BRIDGE ||
'';
function callRead(address, rpcUrl, signature, args = []) {
if (!address || !/^0x[a-fA-F0-9]{40}$/.test(address) || !rpcUrl) {
return { ok: false, value: null, error: 'missing_rpc_or_address' };
}
try {
const out = execFileSync(
'cast',
['call', address, signature, ...args.map((arg) => String(arg)), '--rpc-url', rpcUrl],
{
encoding: 'utf8',
stdio: ['ignore', 'pipe', 'pipe'],
}
).trim();
return { ok: true, value: out, error: null };
} catch (error) {
const stderr = error && error.stderr ? String(error.stderr).trim() : '';
const stdout = error && error.stdout ? String(error.stdout).trim() : '';
return {
ok: false,
value: null,
error: stderr || stdout || (error && error.message ? String(error.message).trim() : 'cast_call_failed'),
};
}
}
function parseDestinationConfig(rawValue) {
if (typeof rawValue !== 'string') {
return { receiverBridge: null, enabled: null };
}
const match = rawValue.match(/^\((0x[a-fA-F0-9]{40}),\s*(true|false)\)$/);
if (!match) {
return { receiverBridge: null, enabled: null };
}
return {
receiverBridge: match[1],
enabled: match[2] === 'true',
};
}
const rows = loader
.getActiveTransportPairs()
.filter((pair) => pair.assetClass === 'gas_native')
.map((pair) => {
const selector = typeof pair.destinationChainSelector === 'string' ? pair.destinationChainSelector : '';
const l2BridgeAddress = pair.runtimeL2BridgeAddress || '';
const destinationProbe =
l1BridgeAddress && chain138RpcUrl && pair.canonicalAddress && selector
? callRead(l1BridgeAddress, chain138RpcUrl, 'destinations(address,uint64)((address,bool))', [
pair.canonicalAddress,
selector,
])
: { ok: false, value: null, error: 'missing_probe_inputs' };
const destinationConfig = parseDestinationConfig(destinationProbe.value);
const receiverMatches =
!!destinationConfig.receiverBridge &&
!!l2BridgeAddress &&
destinationConfig.receiverBridge.toLowerCase() === l2BridgeAddress.toLowerCase();
const needsWiring = destinationConfig.enabled !== true || !receiverMatches;
return {
key: pair.key,
chainId: pair.destinationChainId,
chainName: pair.destinationChainName || `chain-${pair.destinationChainId}`,
familyKey: pair.familyKey,
canonicalSymbol: pair.canonicalSymbol,
mirroredSymbol: pair.mirroredSymbol,
canonicalAddress: pair.canonicalAddress,
destinationChainSelector: selector || null,
l1BridgeAddress: l1BridgeAddress || null,
l2BridgeAddress: l2BridgeAddress || null,
destinationConfiguredOnL1: destinationConfig.enabled,
destinationReceiverBridgeOnL1: destinationConfig.receiverBridge,
destinationReceiverMatchesRuntimeL2: receiverMatches,
probeError: destinationProbe.ok ? null : destinationProbe.error,
needsWiring,
command:
selector && l2BridgeAddress
? `cast send "$CHAIN138_L1_BRIDGE" "configureDestination(address,uint64,address,bool)" ${pair.canonicalAddress} ${selector} ${l2BridgeAddress} true --rpc-url "$RPC_URL_138" --private-key "$PRIVATE_KEY"`
: null,
};
});
const blocked = rows.filter((row) => row.needsWiring);
const summary = {
activeGasPairs: rows.length,
pairsNeedingL1DestinationWiring: blocked.length,
pairsAlreadyWired: rows.length - blocked.length,
l1BridgeAddress: l1BridgeAddress || null,
};
if (outputJson) {
console.log(JSON.stringify({ summary, rows: blocked }, null, 2));
process.exit(0);
}
console.log('=== Gas L1 Destination Wiring Commands ===');
console.log(`Active gas pairs: ${summary.activeGasPairs}`);
console.log(`Pairs needing Chain 138 destination wiring: ${summary.pairsNeedingL1DestinationWiring}`);
console.log(`Pairs already wired: ${summary.pairsAlreadyWired}`);
console.log(`Chain 138 L1 bridge: ${summary.l1BridgeAddress || '<unset>'}`);
if (blocked.length === 0) {
console.log('All active gas lanes already have Chain 138 destination wiring.');
process.exit(0);
}
for (const row of blocked) {
console.log('');
console.log(`- ${row.chainId} ${row.chainName} | ${row.familyKey} | ${row.canonicalSymbol}->${row.mirroredSymbol}`);
console.log(` selector: ${row.destinationChainSelector || '<missing>'}`);
console.log(` destination on L1: ${row.destinationConfiguredOnL1 === true ? 'wired' : 'missing_or_mismatched'}`);
if (row.destinationReceiverBridgeOnL1) {
console.log(` current L1 receiver: ${row.destinationReceiverBridgeOnL1}`);
}
if (row.l2BridgeAddress) {
console.log(` expected L2 bridge: ${row.l2BridgeAddress}`);
}
if (row.probeError) {
console.log(` probe: ${row.probeError}`);
}
if (row.command) {
console.log(` command: ${row.command}`);
}
}
NODE
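The embedded Node helper above parses cast's tuple rendering of `destinations(address,uint64)((address,bool))` with a regex. A minimal standalone shell sketch of the same parsing (the address below is only an example value, not a live bridge):

```shell
# Standalone sketch of the tuple parsing that parseDestinationConfig
# performs on cast output like "(0x<addr>, true)". Prints the receiver
# bridge address and the enabled flag, or "null null" on a mismatch.
parse_destination_config() {
  local raw="$1"
  if [[ "$raw" =~ ^\((0x[a-fA-F0-9]{40}),[[:space:]]*(true|false)\)$ ]]; then
    printf '%s %s\n' "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}"
  else
    printf 'null null\n'
  fi
}
parse_destination_config '(0x63cbeE010D64ab7F1760ad84482D6cC380435ab5, true)'
```

Any output that is not exactly the `(address, bool)` shape falls through to the `null null` branch, mirroring the Node helper's `{ receiverBridge: null, enabled: null }` result.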

View File

@@ -0,0 +1,86 @@
#!/usr/bin/env bash
# Dedicated LXC for static web: info.defi-oracle.io (nginx + SPA root).
# Do not use VMID 2400 (ThirdWeb RPC); use this CT + sync-info-defi-oracle-to-vmid2400.sh.
#
# Defaults: VMID 2410, 192.168.11.218, Proxmox r630-01 (override PROXMOX_HOST for ml110).
#
# Usage (from dev machine with SSH to Proxmox):
# bash scripts/deployment/provision-info-defi-oracle-web-lxc.sh [--dry-run]
# Then:
# bash scripts/deployment/sync-info-defi-oracle-to-vmid2400.sh
# bash scripts/nginx-proxy-manager/update-npmplus-proxy-hosts-api.sh
# pnpm run verify:info-defi-oracle-public
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
PROXMOX_HOST="${PROXMOX_HOST:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
VMID="${INFO_DEFI_ORACLE_WEB_VMID:-${INFO_DEFI_ORACLE_VMID:-2410}}"
IP_CT="${IP_INFO_DEFI_ORACLE_WEB:-192.168.11.218}"
HOSTNAME_CT="${INFO_DEFI_ORACLE_WEB_HOSTNAME:-info-defi-oracle-web}"
APP_DIR="${INFO_DEFI_ORACLE_WEB_ROOT:-/var/www/info.defi-oracle.io/html}"
SITE_FILE="${INFO_DEFI_ORACLE_NGINX_SITE:-/etc/nginx/sites-available/info-defi-oracle}"
NGINX_TEMPLATE="${PROJECT_ROOT}/config/nginx/info-defi-oracle-io.site.conf"
TEMPLATE_CT="${TEMPLATE:-local:vztmpl/debian-12-standard_12.12-1_amd64.tar.zst}"
STORAGE="${STORAGE:-local-lvm}"
NETWORK="${NETWORK:-vmbr0}"
GATEWAY="${NETWORK_GATEWAY:-192.168.11.1}"
SSH_OPTS="-o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new"
DRY_RUN=false
[[ "${1:-}" == "--dry-run" ]] && DRY_RUN=true
if [[ ! -f "$NGINX_TEMPLATE" ]]; then
echo "ERROR: Missing $NGINX_TEMPLATE" >&2
exit 1
fi
echo "=== Provision info.defi-oracle.io web LXC ==="
echo "Proxmox: ${PROXMOX_HOST} VMID: ${VMID} IP: ${IP_CT}"
if $DRY_RUN; then
echo "[DRY-RUN] pct create ${VMID} if missing, apt nginx, install ${SITE_FILE}, enable site"
exit 0
fi
if ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct list 2>/dev/null | grep -q '^${VMID} '"; then
echo "CT ${VMID} already exists — skipping pct create"
else
echo "Creating CT ${VMID} (${HOSTNAME_CT}) @ ${IP_CT}/24..."
ssh $SSH_OPTS "root@${PROXMOX_HOST}" bash -s <<EOF
set -euo pipefail
pct create ${VMID} ${TEMPLATE_CT} \\
--hostname ${HOSTNAME_CT} \\
--memory 1024 \\
--cores 1 \\
--rootfs ${STORAGE}:8 \\
--net0 name=eth0,bridge=${NETWORK},ip=${IP_CT}/24,gw=${GATEWAY} \\
--nameserver ${DNS_PRIMARY:-1.1.1.1} \\
--description 'Dedicated nginx static host: info.defi-oracle.io (Chain 138 SPA)' \\
--start 1 \\
--onboot 1 \\
--unprivileged 1
EOF
echo "Waiting for CT to boot..."
sleep 15
fi
ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct status ${VMID}" | grep -q running || {
echo "ERROR: CT ${VMID} not running — start with: ssh root@${PROXMOX_HOST} 'pct start ${VMID}'" >&2
exit 1
}
echo "Installing nginx inside CT ${VMID}..."
ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct exec ${VMID} -- bash -lc \"set -euo pipefail; export DEBIAN_FRONTEND=noninteractive; apt-get update -qq; apt-get install -y -qq nginx ca-certificates curl; mkdir -p '${APP_DIR}'; rm -f /etc/nginx/sites-enabled/default; systemctl enable nginx\""
echo "Installing nginx site config..."
scp $SSH_OPTS "$NGINX_TEMPLATE" "root@${PROXMOX_HOST}:/tmp/info-defi-oracle-io.site.conf"
ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct push ${VMID} /tmp/info-defi-oracle-io.site.conf ${SITE_FILE} && rm -f /tmp/info-defi-oracle-io.site.conf"
ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct exec ${VMID} -- bash -lc \"ln -sf '${SITE_FILE}' /etc/nginx/sites-enabled/info-defi-oracle && nginx -t && systemctl reload nginx && sleep 1 && curl -fsS -H 'Host: info.defi-oracle.io' http://127.0.0.1/health >/dev/null\""
echo ""
echo "✅ Dedicated web LXC ${VMID} ready at ${IP_CT}:80"
echo " Next: bash scripts/deployment/sync-info-defi-oracle-to-vmid2400.sh"
echo " NPM: point info.defi-oracle.io → http://${IP_CT}:80 (fleet: update-npmplus-proxy-hosts-api.sh)"
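Both static-site provisioners finish with the same in-CT health probe. A standalone sketch of just the invocation it assembles (hostname and path are illustrative; nothing is executed against a real host):

```shell
# Standalone sketch: assemble, without executing, the curl health probe
# the provisioning scripts run inside the CT. The Host header forces
# nginx's server_name selection while hitting 127.0.0.1 directly.
build_health_probe() {
  local vhost="$1" url="$2"
  printf '%s\n' curl -fsS -H "Host: ${vhost}" "$url"
}
build_health_probe info.defi-oracle.io http://127.0.0.1/health
```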

View File

@@ -0,0 +1,85 @@
#!/usr/bin/env bash
# Dedicated LXC: static nginx site for https://omdnl.org (and www).
#
# Defaults: VMID 10203, 192.168.11.222, Proxmox r630-01 (override PROXMOX_HOST).
#
# Usage (from a host with SSH to Proxmox):
# bash scripts/deployment/provision-omdnl-org-web-lxc.sh [--dry-run]
# Then:
# bash scripts/deployment/sync-omdnl-org-static-to-ct.sh
# bash scripts/cloudflare/configure-omdnl-org-dns.sh
# bash scripts/nginx-proxy-manager/upsert-omdnl-org-proxy-host.sh
# Request TLS in NPMplus UI (or scripts/request-npmplus-certificates.sh) once DNS resolves.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
PROXMOX_HOST="${PROXMOX_HOST:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
VMID="${OMDNL_ORG_WEB_VMID:-10203}"
IP_CT="${IP_OMDNL_ORG_WEB:-192.168.11.222}"
HOSTNAME_CT="${OMDNL_ORG_WEB_HOSTNAME:-omdnl-org-web}"
APP_DIR="${OMDNL_ORG_WEB_ROOT:-/var/www/omdnl.org/html}"
SITE_FILE="${OMDNL_ORG_NGINX_SITE:-/etc/nginx/sites-available/omdnl-org}"
NGINX_TEMPLATE="${PROJECT_ROOT}/config/nginx/omdnl-org.site.conf"
TEMPLATE_CT="${TEMPLATE:-local:vztmpl/debian-12-standard_12.12-1_amd64.tar.zst}"
STORAGE="${STORAGE:-local-lvm}"
NETWORK="${NETWORK:-vmbr0}"
GATEWAY="${NETWORK_GATEWAY:-192.168.11.1}"
SSH_OPTS="-o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new"
DRY_RUN=false
[[ "${1:-}" == "--dry-run" ]] && DRY_RUN=true
if [[ ! -f "$NGINX_TEMPLATE" ]]; then
echo "ERROR: Missing $NGINX_TEMPLATE" >&2
exit 1
fi
echo "=== Provision omdnl.org web LXC ==="
echo "Proxmox: ${PROXMOX_HOST} VMID: ${VMID} IP: ${IP_CT}"
if $DRY_RUN; then
echo "[DRY-RUN] pct create ${VMID} if missing, apt nginx, install ${SITE_FILE}, enable site"
exit 0
fi
if ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct list 2>/dev/null | grep -q '^${VMID} '"; then
echo "CT ${VMID} already exists — skipping pct create"
else
echo "Creating CT ${VMID} (${HOSTNAME_CT}) @ ${IP_CT}/24..."
ssh $SSH_OPTS "root@${PROXMOX_HOST}" bash -s <<EOF
set -euo pipefail
pct create ${VMID} ${TEMPLATE_CT} \\
--hostname ${HOSTNAME_CT} \\
--memory 512 \\
--cores 1 \\
--rootfs ${STORAGE}:4 \\
--net0 name=eth0,bridge=${NETWORK},ip=${IP_CT}/24,gw=${GATEWAY} \\
--nameserver ${DNS_PRIMARY:-1.1.1.1} \\
--description 'Static nginx: omdnl.org (SMOM + Absolute Realms central bank presence)' \\
--start 1 \\
--onboot 1 \\
--unprivileged 1
EOF
echo "Waiting for CT to boot..."
sleep 15
fi
ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct status ${VMID}" | grep -q running || {
echo "ERROR: CT ${VMID} not running — start with: ssh root@${PROXMOX_HOST} 'pct start ${VMID}'" >&2
exit 1
}
echo "Installing nginx inside CT ${VMID}..."
ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct exec ${VMID} -- bash -lc \"set -euo pipefail; export DEBIAN_FRONTEND=noninteractive; apt-get update -qq; apt-get install -y -qq nginx ca-certificates curl; mkdir -p '${APP_DIR}'; rm -f /etc/nginx/sites-enabled/default; systemctl enable nginx\""
echo "Installing nginx site config..."
scp $SSH_OPTS "$NGINX_TEMPLATE" "root@${PROXMOX_HOST}:/tmp/omdnl-org.site.conf"
ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct push ${VMID} /tmp/omdnl-org.site.conf ${SITE_FILE} && rm -f /tmp/omdnl-org.site.conf"
ssh $SSH_OPTS "root@${PROXMOX_HOST}" "pct exec ${VMID} -- bash -lc \"ln -sf '${SITE_FILE}' /etc/nginx/sites-enabled/omdnl-org && nginx -t && systemctl reload nginx && sleep 1 && curl -fsS -H 'Host: omdnl.org' http://127.0.0.1/health >/dev/null\""
echo ""
echo "✅ Web LXC ${VMID} ready at ${IP_CT}:80"
echo " Next: bash scripts/deployment/sync-omdnl-org-static-to-ct.sh"

View File

@@ -0,0 +1,227 @@
#!/usr/bin/env bash
# Provision the OP Stack operator landing zone on Proxmox r630-02.
#
# Creates:
# 5751 op-stack-deployer-1 (192.168.11.69)
# 5752 op-stack-ops-1 (192.168.11.70)
#
# Installs baseline tooling, enables SSH, seeds repo OP Stack scaffolding,
# and prepares /etc/op-stack and /opt/op-stack-bootstrap inside each CT.
#
# Usage:
# bash scripts/deployment/provision-op-stack-operator-lxcs.sh [--dry-run]
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
source "$PROJECT_ROOT/.env" 2>/dev/null || true
source "$PROJECT_ROOT/config/ip-addresses.conf" 2>/dev/null || true
CT_PROXMOX_HOST="${OP_STACK_PROXMOX_HOST:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"
CT_STORAGE="${OP_STACK_PROXMOX_STORAGE:-thin5}"
CT_TEMPLATE="${OP_STACK_PROXMOX_TEMPLATE:-local:vztmpl/debian-12-standard_12.12-1_amd64.tar.zst}"
CT_NETWORK="${OP_STACK_PROXMOX_NETWORK:-vmbr0}"
CT_GATEWAY="${OP_STACK_PROXMOX_GATEWAY:-${NETWORK_GATEWAY:-192.168.11.1}}"
CT_NAMESERVER="${OP_STACK_PROXMOX_NAMESERVER:-${DNS_PRIMARY:-1.1.1.1}}"
SSH_OPTS="-o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new"
DRY_RUN=false
[[ "${1:-}" == "--dry-run" ]] && DRY_RUN=true
SSH_PUBKEY_PATH="${SSH_PUBKEY_PATH:-$HOME/.ssh/id_ed25519_proxmox.pub}"
if [[ ! -f "$SSH_PUBKEY_PATH" ]]; then
SSH_PUBKEY_PATH="${HOME}/.ssh/id_ed25519.pub"
fi
if [[ ! -f "$SSH_PUBKEY_PATH" ]]; then
  echo "ERROR: No SSH public key found at ~/.ssh/id_ed25519_proxmox.pub or ~/.ssh/id_ed25519.pub" >&2
  exit 1
fi
SSH_KEY_PATH="${SSH_PUBKEY_PATH%.pub}"
DIRECT_CT_SSH_OPTS="-o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new"
if [[ -f "$SSH_KEY_PATH" ]]; then
DIRECT_CT_SSH_OPTS="-i $SSH_KEY_PATH $DIRECT_CT_SSH_OPTS"
fi
BOOTSTRAP_TAR="$(mktemp /tmp/op-stack-bootstrap.XXXXXX.tgz)"
REMOTE_BOOTSTRAP_TAR="/root/op-stack-bootstrap.tgz"
REMOTE_PUBKEY="/root/op-stack-authorized_key.pub"
trap 'rm -f "$BOOTSTRAP_TAR"' EXIT
CTS=(
"${OP_STACK_DEPLOYER_VMID:-5751}|op-stack-deployer-1|${IP_OP_STACK_DEPLOYER_CT:-192.168.11.69}|8192|4|32|op-stack-deployer-operator-workspace"
"${OP_STACK_OPS_VMID:-5752}|op-stack-ops-1|${IP_OP_STACK_OPS_CT:-192.168.11.70}|12288|6|48|op-stack-runtime-service-staging"
)
tar -C "$PROJECT_ROOT" -czf "$BOOTSTRAP_TAR" \
config/op-stack-superchain \
config/wormhole \
scripts/op-stack \
scripts/wormhole \
docs/03-deployment/OP_STACK_STANDARD_ROLLUP_SUPERCHAIN_RUNBOOK.md \
docs/03-deployment/OP_STACK_L2_AND_BESU138_BRIDGE_NOTES.md \
docs/03-deployment/WORMHOLE_NTT_EXECUTOR_OPERATOR_RUNBOOK.md \
config/systemd/op-stack-batcher.example.service \
config/systemd/op-stack-challenger.example.service \
config/systemd/op-stack-op-node.example.service \
config/systemd/op-stack-op-reth.example.service \
config/systemd/op-stack-proposer.example.service \
config/systemd/op-stack-sequencer.example.service
echo "=== OP Stack operator landing zone ==="
echo "Proxmox host: $CT_PROXMOX_HOST"
echo "Storage: $CT_STORAGE | Template: $CT_TEMPLATE | Network: $CT_NETWORK"
echo "SSH public key: $SSH_PUBKEY_PATH"
echo
for spec in "${CTS[@]}"; do
IFS="|" read -r vmid hostname ip memory cores disk description <<<"$spec"
echo " - CT $vmid $hostname @ $ip (${memory}MB RAM, ${cores} cores, ${disk}G disk)"
done
echo
if $DRY_RUN; then
echo "[DRY-RUN] Would create/bootstrap the CTs above on $CT_PROXMOX_HOST."
exit 0
fi
scp $SSH_OPTS "$SSH_PUBKEY_PATH" "root@${CT_PROXMOX_HOST}:${REMOTE_PUBKEY}"
scp $SSH_OPTS "$BOOTSTRAP_TAR" "root@${CT_PROXMOX_HOST}:${REMOTE_BOOTSTRAP_TAR}"
for spec in "${CTS[@]}"; do
IFS="|" read -r vmid hostname ip memory cores disk description <<<"$spec"
echo "=== Provisioning CT ${vmid} (${hostname}) ==="
if ssh $SSH_OPTS "root@${CT_PROXMOX_HOST}" "pct list 2>/dev/null | grep -q '^${vmid} '"; then
echo "CT ${vmid} already exists — skipping create"
else
ssh $SSH_OPTS "root@${CT_PROXMOX_HOST}" bash -s -- \
"$vmid" "$hostname" "$memory" "$cores" "$disk" "$ip" "$description" \
"$CT_TEMPLATE" "$CT_STORAGE" "$CT_NETWORK" "$CT_GATEWAY" "$CT_NAMESERVER" <<'REMOTE_CREATE'
set -euo pipefail
vmid="$1"
hostname="$2"
memory="$3"
cores="$4"
disk="$5"
ip="$6"
description="$7"
template="$8"
storage="$9"
network="${10}"
gateway="${11}"
nameserver="${12}"
pct create "$vmid" "$template" \
--hostname "$hostname" \
--memory "$memory" \
--cores "$cores" \
--rootfs "${storage}:${disk}" \
--net0 "name=eth0,bridge=${network},ip=${ip}/24,gw=${gateway}" \
--nameserver "$nameserver" \
--description "$description" \
--start 1 \
--onboot 1 \
--unprivileged 0 \
--features nesting=1,keyctl=1
REMOTE_CREATE
echo "Waiting for CT ${vmid} to boot..."
sleep 20
fi
ssh $SSH_OPTS "root@${CT_PROXMOX_HOST}" "pct start ${vmid} >/dev/null 2>&1 || true"
ssh $SSH_OPTS "root@${CT_PROXMOX_HOST}" "pct push ${vmid} ${REMOTE_PUBKEY} /root/op-stack-authorized_key.pub >/dev/null"
ssh $SSH_OPTS "root@${CT_PROXMOX_HOST}" "pct push ${vmid} ${REMOTE_BOOTSTRAP_TAR} /root/op-stack-bootstrap.tgz >/dev/null"
ssh $SSH_OPTS "root@${CT_PROXMOX_HOST}" bash -s -- "$vmid" "$hostname" <<'REMOTE_BOOTSTRAP'
set -euo pipefail
vmid="$1"
hostname="$2"
pct exec "$vmid" -- bash -lc '
set -euo pipefail
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y -qq \
sudo openssh-server ca-certificates curl git jq rsync unzip zip \
tar gzip xz-utils make build-essential tmux htop python3 python3-pip golang-go
systemctl enable ssh >/dev/null 2>&1 || systemctl enable ssh.service >/dev/null 2>&1 || true
systemctl restart ssh >/dev/null 2>&1 || systemctl restart ssh.service >/dev/null 2>&1 || true
id -u opuser >/dev/null 2>&1 || useradd -m -s /bin/bash -G sudo opuser
install -d -m 700 /root/.ssh /home/opuser/.ssh
install -d -m 755 /opt/op-stack /etc/op-stack /etc/op-stack/systemd-examples /opt/op-stack-bootstrap
if ! grep -qxF "$(cat /root/op-stack-authorized_key.pub)" /root/.ssh/authorized_keys 2>/dev/null; then
cat /root/op-stack-authorized_key.pub >> /root/.ssh/authorized_keys
fi
if ! grep -qxF "$(cat /root/op-stack-authorized_key.pub)" /home/opuser/.ssh/authorized_keys 2>/dev/null; then
cat /root/op-stack-authorized_key.pub >> /home/opuser/.ssh/authorized_keys
fi
chown -R opuser:opuser /home/opuser/.ssh /opt/op-stack /opt/op-stack-bootstrap
chmod 600 /root/.ssh/authorized_keys /home/opuser/.ssh/authorized_keys
echo "opuser ALL=(ALL) NOPASSWD:ALL" >/etc/sudoers.d/90-opuser
chmod 440 /etc/sudoers.d/90-opuser
rm -rf /opt/op-stack-bootstrap/*
tar -xzf /root/op-stack-bootstrap.tgz -C /opt/op-stack-bootstrap
cp /opt/op-stack-bootstrap/config/op-stack-superchain/op-stack-l2.example.env /etc/op-stack/op-stack-l2.example.env
cp /opt/op-stack-bootstrap/config/systemd/op-stack-*.example.service /etc/op-stack/systemd-examples/ 2>/dev/null || true
case "'"${hostname}"'" in
op-stack-deployer-1)
bash /opt/op-stack-bootstrap/scripts/op-stack/prepare-operator-ct.sh deployer
;;
op-stack-ops-1)
bash /opt/op-stack-bootstrap/scripts/op-stack/prepare-operator-ct.sh ops
;;
esac
echo "'"${hostname}"'" >/etc/hostname
hostname "'"${hostname}"'"
'
REMOTE_BOOTSTRAP
ssh $DIRECT_CT_SSH_OPTS "opuser@${ip}" "hostname && id"
done
echo "Refreshing governance TOMLs inside deployer CT if cache is missing..."
ssh $DIRECT_CT_SSH_OPTS "opuser@${IP_OP_STACK_DEPLOYER_CT:-192.168.11.69}" '
set -euo pipefail
OP_STACK_REFRESH_TOMLS='"${OP_STACK_REFRESH_TOMLS:-0}"'
cache_dir=/opt/op-stack-bootstrap/config/op-stack-superchain/cache
required=(
standard-config-params-mainnet.toml
standard-config-roles-mainnet.toml
standard-versions-mainnet.toml
)
missing=0
for file in "${required[@]}"; do
if [[ ! -s "$cache_dir/$file" ]]; then
missing=1
fi
done
if [[ "${OP_STACK_REFRESH_TOMLS:-0}" == "1" || "$missing" == "1" ]]; then
cd /opt/op-stack-bootstrap
bash scripts/op-stack/fetch-standard-mainnet-toml.sh
else
echo "Using seeded governance TOML cache in $cache_dir"
fi
'
echo
echo "✅ OP Stack operator landing zone ready:"
echo " - 5751 op-stack-deployer-1 @ ${IP_OP_STACK_DEPLOYER_CT:-192.168.11.69}"
echo " - 5752 op-stack-ops-1 @ ${IP_OP_STACK_OPS_CT:-192.168.11.70}"
echo
echo "Next:"
echo " 1. SSH as opuser to the CTs"
echo " 2. Copy live secrets/env into /etc/op-stack/"
echo " 3. Install pinned Optimism binaries or packages"
echo " 4. Run Sepolia rehearsal from 5751, then stage runtime services from 5752"
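The `CTS` array above packs each container's settings into a single pipe-delimited spec string. A minimal sketch of the same `IFS="|" read` split, using values copied from the deployer entry:

```shell
# Sketch of the pipe-delimited CT spec format used by CTS above:
#   vmid|hostname|ip|memory|cores|disk|description
spec="5751|op-stack-deployer-1|192.168.11.69|8192|4|32|op-stack-deployer-operator-workspace"
IFS="|" read -r vmid hostname ip memory cores disk description <<<"$spec"
printf '%s %s %s\n' "$vmid" "$hostname" "$ip"
```

Setting `IFS` only on the `read` command keeps the split local to that one invocation, so the shell's normal word splitting is untouched afterward.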

View File

@@ -0,0 +1,93 @@
#!/usr/bin/env bash
# Publish the full GRU v2 Chain 138 metadata bundle to IPFS and write a symbol-keyed manifest.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SOURCE_DIR="${PROJECT_ROOT}/config/gru-v2-metadata/chain138"
IPFS_API="${IPFS_API:-http://192.168.11.35:5001}"
MANIFEST_OUT="${SOURCE_DIR}/ipfs-manifest.latest.json"
if ! command -v curl >/dev/null 2>&1; then
echo "curl is required" >&2
exit 1
fi
if ! command -v jq >/dev/null 2>&1; then
echo "jq is required" >&2
exit 1
fi
tmpdir="$(mktemp -d)"
trap 'rm -rf "$tmpdir"' EXIT
metadata_dirs=()
while IFS= read -r dir; do
metadata_dirs+=("$dir")
done < <(find "$SOURCE_DIR" -mindepth 1 -maxdepth 1 -type d | sort)
if [[ "${#metadata_dirs[@]}" -eq 0 ]]; then
echo "No metadata directories found under $SOURCE_DIR" >&2
exit 1
fi
for dir in "${metadata_dirs[@]}"; do
base="$(basename "$dir")"
mkdir -p "$tmpdir/$base"
cp "$dir/"*.json "$tmpdir/$base/"
done
curl_args=(-sS -X POST "${IPFS_API}/api/v0/add?pin=true&wrap-with-directory=true")
for dir in "${metadata_dirs[@]}"; do
  base="$(basename "$dir")"
  for file in metadata.json regulatory-disclosure.json reporting.json; do
    if [[ ! -f "${tmpdir}/${base}/${file}" ]]; then
      echo "Missing required ${file} under ${dir}" >&2
      exit 1
    fi
    curl_args+=(-F "file=@${tmpdir}/${base}/${file};filename=${base}/${file}")
  done
done
response_file="$tmpdir/ipfs-add.jsonl"
curl "${curl_args[@]}" > "$response_file"
root_cid="$(tail -n1 "$response_file" | jq -r '.Hash')"
timestamp="$(date -u +"%Y-%m-%dT%H:%M:%SZ")"
assets_json="$(
{
for dir in "${metadata_dirs[@]}"; do
base="$(basename "$dir")"
symbol="$(jq -r '.symbol' "$dir/metadata.json")"
jq -n \
--arg symbol "$symbol" \
--arg key "$base" \
--arg root "$root_cid" \
'{
($symbol): {
metadataKey: $key,
tokenURI: ("ipfs://" + $root + "/" + $key + "/metadata.json"),
regulatoryDisclosureURI: ("ipfs://" + $root + "/" + $key + "/regulatory-disclosure.json"),
reportingURI: ("ipfs://" + $root + "/" + $key + "/reporting.json")
}
}'
done
} | jq -s 'add'
)"
jq -n \
--arg publishedAt "$timestamp" \
--arg rootCID "$root_cid" \
--arg ipfsApi "$IPFS_API" \
--argjson assets "$assets_json" \
'{
chainId: 138,
publishedAt: $publishedAt,
rootCID: $rootCID,
ipfsApi: $ipfsApi,
assets: $assets
}' > "$MANIFEST_OUT"
echo "Published GRU v2 metadata bundle to IPFS."
echo "Root CID: ${root_cid}"
echo "Manifest: ${MANIFEST_OUT}"
echo
echo "Per-symbol URIs:"
jq -r '.assets | to_entries[] | "- \(.key): \(.value.tokenURI)"' "$MANIFEST_OUT"
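The per-symbol URIs in the manifest are plain string concatenations over the wrapped root CID. A sketch with placeholder values (the CID and metadata key below are made up, not real published entries):

```shell
# Sketch of the URI derivation used in the manifest above. CID and key
# are placeholders; the real values come from the IPFS add response and
# the per-asset directory names.
root_cid="bafybeicidplaceholder"
key="aeur"
token_uri="ipfs://${root_cid}/${key}/metadata.json"
disclosure_uri="ipfs://${root_cid}/${key}/regulatory-disclosure.json"
printf '%s\n%s\n' "$token_uri" "$disclosure_uri"
```

Because `wrap-with-directory=true` wraps every upload under one root directory, all three URIs per symbol share the same root CID and differ only in the trailing path.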

View File

@@ -0,0 +1,57 @@
#!/usr/bin/env bash
# Copy a local publication bundle (from deploy-token-aggregation-for-publication.sh) to the explorer host
# and optionally restart systemd. Run from repo root on LAN.
#
# Prereq:
# bash scripts/deploy-token-aggregation-for-publication.sh "$PWD/token-aggregation-build"
#
# Usage:
# EXPLORER_SSH=root@192.168.11.140 REMOTE_DIR=/opt/token-aggregation \
# bash scripts/deployment/push-token-aggregation-bundle-to-explorer.sh /path/to/token-aggregation-build
#
# Env:
# EXPLORER_SSH default root@192.168.11.140
# REMOTE_DIR default /opt/token-aggregation
# REMOTE_SERVICE systemd unit to restart (default: token-aggregation); empty to skip restart
# SYNC_REMOTE_ENV set to 1 to allow bundle .env to overwrite remote .env (default: preserve remote env)
set -euo pipefail
BUNDLE_ROOT="${1:?Usage: $0 /path/to/token-aggregation-build}"
SERVICE_SRC="$BUNDLE_ROOT/smom-dbis-138/services/token-aggregation"
EXPLORER_SSH="${EXPLORER_SSH:-root@192.168.11.140}"
REMOTE_DIR="${REMOTE_DIR:-/opt/token-aggregation}"
REMOTE_SERVICE="${REMOTE_SERVICE:-token-aggregation}"
SYNC_REMOTE_ENV="${SYNC_REMOTE_ENV:-0}"
if [[ ! -d "$SERVICE_SRC" || ! -f "$SERVICE_SRC/dist/index.js" ]]; then
echo "Expected built service at $SERVICE_SRC (run deploy-token-aggregation-for-publication.sh first)." >&2
exit 1
fi
echo "Rsync $SERVICE_SRC/ → ${EXPLORER_SSH}:${REMOTE_DIR}/"
rsync_args=(
-avz
--delete
--exclude '.git'
--exclude 'src'
--exclude '*.test.ts'
)
if [[ "$SYNC_REMOTE_ENV" != "1" ]]; then
rsync_args+=(--exclude '.env' --exclude '.env.local')
echo "Preserving remote .env files (set SYNC_REMOTE_ENV=1 to overwrite them)."
fi
RSYNC_RSH="ssh -o BatchMode=yes" rsync "${rsync_args[@]}" \
"$SERVICE_SRC/" "${EXPLORER_SSH}:${REMOTE_DIR}/"
if [[ -n "$REMOTE_SERVICE" ]]; then
echo "Restart ${REMOTE_SERVICE} on ${EXPLORER_SSH}..."
ssh -o BatchMode=yes "$EXPLORER_SSH" "systemctl restart '${REMOTE_SERVICE}'" || {
echo "systemctl restart failed (unit may differ). Start manually: cd $REMOTE_DIR && node dist/index.js" >&2
exit 1
}
fi
echo "Done. Verify: BASE_URL=https://explorer.d-bis.org pnpm run verify:token-aggregation-api"
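The `SYNC_REMOTE_ENV` gate above only changes which excludes rsync receives. A standalone sketch of that argument assembly (nothing is synced; the function just prints the args it would pass):

```shell
# Sketch of the conditional rsync argument assembly above: the .env
# excludes are appended only when remote env files should be preserved
# (i.e. when SYNC_REMOTE_ENV is not "1").
build_rsync_args() {
  local sync_remote_env="$1"
  local args=(-avz --delete --exclude '.git' --exclude 'src' --exclude '*.test.ts')
  if [[ "$sync_remote_env" != "1" ]]; then
    args+=(--exclude '.env' --exclude '.env.local')
  fi
  printf '%s\n' "${args[@]}"
}
build_rsync_args 0
```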

View File

@@ -0,0 +1,245 @@
#!/usr/bin/env bash
# Diagnose and optionally fill nonce gaps for a Chain 138 EOA whose future txs are
# stranded in Besu txpool_besuPendingTransactions.
#
# Default mode is dry-run. Use --apply to actually send gap-filler self-transfers.
#
# Example:
# PRIVATE_KEY=0xabc... bash scripts/deployment/recover-chain138-eoa-nonce-gaps.sh \
# --rpc http://192.168.11.217:8545
#
# bash scripts/deployment/recover-chain138-eoa-nonce-gaps.sh \
# --private-key 0xabc... \
# --rpc http://192.168.11.217:8545 \
# --apply
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
fi
RPC_URL="${RPC_URL_138:-http://192.168.11.211:8545}"
PRIVATE_KEY_INPUT="${PRIVATE_KEY:-${DEPLOYER_PRIVATE_KEY:-}}"
APPLY=0
CHAIN_ID=138
GAS_LIMIT=21000
GAS_BUMP_MULTIPLIER=2
MIN_GAS_WEI=2000
usage() {
cat <<'EOF'
Usage: recover-chain138-eoa-nonce-gaps.sh [options]
Options:
--private-key <hex> Private key for the EOA owner.
--rpc <url> Chain 138 RPC URL. Default: RPC_URL_138 or http://192.168.11.211:8545
--apply Actually send self-transfers for missing nonces.
--min-gas-wei <n> Floor gas price / max fee in wei. Default: 2000
--gas-multiplier <n> Multiply current eth_gasPrice by this factor. Default: 2
--help Show this help.
Behavior:
1. Derives the owner address from the provided private key.
2. Reads latest nonce and txpool_besuPendingTransactions for that address.
3. Finds any missing nonce gaps between latest nonce and the highest pending nonce.
4. In --apply mode, sends 0-value self-transfers for those missing nonces.
Notes:
- Dry-run is the default and is recommended first.
- This only fills missing nonce gaps. It does not cancel every future tx.
- Requires: cast, curl, jq
EOF
}
while [[ $# -gt 0 ]]; do
case "$1" in
--private-key)
PRIVATE_KEY_INPUT="${2:-}"
shift 2
;;
--rpc)
RPC_URL="${2:-}"
shift 2
;;
--apply)
APPLY=1
shift
;;
--min-gas-wei)
MIN_GAS_WEI="${2:-}"
shift 2
;;
--gas-multiplier)
GAS_BUMP_MULTIPLIER="${2:-}"
shift 2
;;
--help|-h)
usage
exit 0
;;
*)
echo "Unknown argument: $1" >&2
usage
exit 1
;;
esac
done
for cmd in cast curl jq; do
command -v "$cmd" >/dev/null 2>&1 || {
echo "Missing required command: $cmd" >&2
exit 1
}
done
if [[ -z "$PRIVATE_KEY_INPUT" ]]; then
echo "Missing private key. Pass --private-key or set PRIVATE_KEY." >&2
exit 1
fi
OWNER_ADDRESS="$(cast wallet address --private-key "$PRIVATE_KEY_INPUT" 2>/dev/null || true)"
if [[ -z "$OWNER_ADDRESS" ]]; then
echo "Could not derive owner address from private key." >&2
exit 1
fi
OWNER_ADDRESS_LC="$(printf '%s' "$OWNER_ADDRESS" | tr '[:upper:]' '[:lower:]')"
rpc_call() {
local method="$1"
local params_json="$2"
curl -fsS "$RPC_URL" \
-H 'content-type: application/json' \
--data "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"${method}\",\"params\":${params_json}}"
}
hex_to_dec() {
cast --to-dec "$1"
}
latest_hex="$(rpc_call eth_getTransactionCount "[\"${OWNER_ADDRESS}\",\"latest\"]" | jq -r '.result // "0x0"')"
pending_hex="$(rpc_call eth_getTransactionCount "[\"${OWNER_ADDRESS}\",\"pending\"]" | jq -r '.result // "0x0"')"
gas_hex="$(rpc_call eth_gasPrice "[]" | jq -r '.result // "0x0"')"
latest_nonce="$(hex_to_dec "$latest_hex")"
pending_nonce="$(hex_to_dec "$pending_hex")"
network_gas_wei="$(hex_to_dec "$gas_hex")"
pending_json="$(rpc_call txpool_besuPendingTransactions "[]")"
owner_pending_json="$(printf '%s' "$pending_json" | jq --arg owner "$OWNER_ADDRESS_LC" '
[
.result[]?
| select(((.from // .sender // "") | ascii_downcase) == $owner)
| {
hash,
nonce_hex: .nonce,
to,
gas,
gasPrice,
maxFeePerGas,
maxPriorityFeePerGas,
type
}
]
')"
owner_pending_json="$(
printf '%s' "$owner_pending_json" | jq '
map(. + {
nonce: (.nonce_hex | sub("^0x"; "") | if . == "" then "0" else . end)
})
'
)"
owner_pending_count="$(printf '%s' "$owner_pending_json" | jq 'length')"
highest_pending_nonce="$latest_nonce"
if [[ "$owner_pending_count" -gt 0 ]]; then
highest_pending_nonce="$(
printf '%s' "$owner_pending_json" \
| jq -r '.[].nonce' \
| while read -r nonce_hex; do cast --to-dec "0x${nonce_hex}"; done \
| sort -n \
| tail -n1
)"
fi
present_nonce_csv=","
if [[ "$owner_pending_count" -gt 0 ]]; then
while read -r nonce_dec; do
present_nonce_csv+="${nonce_dec},"
done < <(
printf '%s' "$owner_pending_json" \
| jq -r '.[].nonce' \
| while read -r nonce_hex; do cast --to-dec "0x${nonce_hex}"; done
)
fi
missing_nonces=()
for ((nonce = latest_nonce; nonce <= highest_pending_nonce; nonce++)); do
if [[ "$present_nonce_csv" != *",$nonce,"* ]]; then
missing_nonces+=("$nonce")
fi
done
effective_gas_wei="$(( network_gas_wei * GAS_BUMP_MULTIPLIER ))"
if (( effective_gas_wei < MIN_GAS_WEI )); then
effective_gas_wei="$MIN_GAS_WEI"
fi
echo "=== Chain 138 EOA nonce-gap recovery ==="
echo "Owner address: $OWNER_ADDRESS"
echo "RPC URL: $RPC_URL"
echo "Latest nonce: $latest_nonce"
echo "Pending nonce view: $pending_nonce"
echo "Network gas price: $network_gas_wei wei"
echo "Planned gas price: $effective_gas_wei wei"
echo "Pending tx count: $owner_pending_count"
echo "Highest pending nonce: ${highest_pending_nonce}"
echo ""
if [[ "$owner_pending_count" -gt 0 ]]; then
echo "--- Pending txs for owner in txpool_besuPendingTransactions ---"
printf '%s\n' "$owner_pending_json" | jq '.'
echo ""
else
echo "No owner txs found in txpool_besuPendingTransactions."
echo ""
fi
if [[ "${#missing_nonces[@]}" -eq 0 ]]; then
echo "No nonce gaps detected between latest nonce and highest pending nonce."
exit 0
fi
echo "--- Missing nonce gaps ---"
printf 'Missing nonces: %s\n' "${missing_nonces[*]}"
echo ""
if [[ "$APPLY" -ne 1 ]]; then
echo "Dry-run only. Re-run with --apply to send 0-value self-transfers for the missing nonces above."
exit 0
fi
echo "--- Applying gap fillers ---"
for nonce in "${missing_nonces[@]}"; do
echo "Sending self-transfer for nonce $nonce ..."
cast send "$OWNER_ADDRESS" \
--private-key "$PRIVATE_KEY_INPUT" \
--rpc-url "$RPC_URL" \
--nonce "$nonce" \
--value 0 \
--gas-limit "$GAS_LIMIT" \
--gas-price "$effective_gas_wei" \
>/tmp/recover-chain138-eoa-nonce-gaps.out 2>/tmp/recover-chain138-eoa-nonce-gaps.err || {
echo "Failed to send nonce $nonce" >&2
cat /tmp/recover-chain138-eoa-nonce-gaps.err >&2 || true
exit 1
}
cat /tmp/recover-chain138-eoa-nonce-gaps.out
done
echo ""
echo "Submitted gap-filler transactions. Wait for inclusion, then rerun this script in dry-run mode."
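The gap detection above reduces to a set-membership scan between the latest confirmed nonce and the highest nonce sitting in the pool. A pure-bash sketch with made-up values (in the real script the gap can also start at the latest nonce itself, as here):

```shell
# Pure-bash sketch of the nonce-gap detection above: given the next
# expected nonce and the nonces currently in the pool, list every nonce
# Besu is still waiting on. Values are illustrative only.
latest_nonce=10
pending_nonces=(11 12 15)   # nonces 10, 13, 14 are absent
present_csv=","
for n in "${pending_nonces[@]}"; do present_csv+="${n},"; done
highest="${pending_nonces[${#pending_nonces[@]}-1]}"
missing=()
for ((nonce = latest_nonce; nonce <= highest; nonce++)); do
  [[ "$present_csv" == *",$nonce,"* ]] || missing+=("$nonce")
done
printf '%s\n' "${missing[*]}"
```

The comma-delimited membership string matches the `present_nonce_csv` trick the script uses, avoiding an associative array so it also works on older bash.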

View File

@@ -0,0 +1,253 @@
#!/usr/bin/env bash
set -euo pipefail
# Replace a legacy Mainnet public DODO cW/USD pool in the integration mapping,
# create a canonical factory-backed replacement, and seed it with matched liquidity.
#
# Example:
# bash scripts/deployment/replace-mainnet-public-dodo-cw-pool.sh \
# --pair=cwusdc-usdc \
# --initial-price=1000000000000000000 \
# --base-amount=100000000 \
# --quote-amount=100000000 \
# --dry-run
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || {
  echo "[fail] could not source smom-dbis-138/scripts/load-env.sh" >&2
  exit 1
}
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
PAIR=""
INITIAL_PRICE=""
BASE_AMOUNT=""
QUOTE_AMOUNT=""
FEE_BPS="3"
K_VALUE="0"
OPEN_TWAP="false"
DRY_RUN=0
for arg in "$@"; do
case "$arg" in
--pair=*) PAIR="${arg#*=}" ;;
--initial-price=*) INITIAL_PRICE="${arg#*=}" ;;
--base-amount=*) BASE_AMOUNT="${arg#*=}" ;;
--quote-amount=*) QUOTE_AMOUNT="${arg#*=}" ;;
--fee-bps=*) FEE_BPS="${arg#*=}" ;;
--k=*) K_VALUE="${arg#*=}" ;;
--open-twap=*) OPEN_TWAP="${arg#*=}" ;;
--dry-run) DRY_RUN=1 ;;
*)
echo "[fail] unknown arg: $arg" >&2
exit 2
;;
esac
done
if [[ -z "$PAIR" || -z "$INITIAL_PRICE" || -z "$BASE_AMOUNT" || -z "$QUOTE_AMOUNT" ]]; then
echo "[fail] required args: --pair, --initial-price, --base-amount, --quote-amount" >&2
exit 1
fi
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
USDC_MAINNET="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
USDT_MAINNET="0xdAC17F958D2ee523a2206206994597C13D831ec7"
if [[ -z "$RPC_URL" || -z "$PRIVATE_KEY" || -z "$INTEGRATION" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC, PRIVATE_KEY, and DODO_PMM_INTEGRATION_MAINNET are required" >&2
exit 1
fi
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
BASE_TOKEN=""
QUOTE_TOKEN=""
LEGACY_POOL=""
PAIR_LABEL=""
ENV_KEY=""
case "$PAIR" in
cwusdc-usdc)
BASE_TOKEN="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
QUOTE_TOKEN="$USDC_MAINNET"
LEGACY_POOL="${POOL_CWUSDC_USDC_MAINNET:-0xB227dDA4e89E2EA63A1345509C9AC02644e8d526}"
PAIR_LABEL="cWUSDC/USDC"
ENV_KEY="POOL_CWUSDC_USDC_MAINNET"
;;
cwusdt-usdc)
BASE_TOKEN="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
QUOTE_TOKEN="$USDC_MAINNET"
LEGACY_POOL="${POOL_CWUSDT_USDC_MAINNET:-0x27f3aE7EE71Be3d77bAf17d4435cF8B895DD25D2}"
PAIR_LABEL="cWUSDT/USDC"
ENV_KEY="POOL_CWUSDT_USDC_MAINNET"
;;
cwusdt-usdt)
BASE_TOKEN="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
QUOTE_TOKEN="$USDT_MAINNET"
LEGACY_POOL="${POOL_CWUSDT_USDT_MAINNET:-0x79156F6B7bf71a1B72D78189B540A89A6C13F6FC}"
PAIR_LABEL="cWUSDT/USDT"
ENV_KEY="POOL_CWUSDT_USDT_MAINNET"
;;
cwusdc-usdt)
BASE_TOKEN="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
QUOTE_TOKEN="$USDT_MAINNET"
LEGACY_POOL="${POOL_CWUSDC_USDT_MAINNET:-0xCC0fd27A40775c9AfcD2BBd3f7c902b0192c247A}"
PAIR_LABEL="cWUSDC/USDT"
ENV_KEY="POOL_CWUSDC_USDT_MAINNET"
;;
*)
echo "[fail] unsupported pair: $PAIR" >&2
exit 2
;;
esac
bool_to_cli() {
if [[ "$1" == "true" ]]; then
printf 'true'
else
printf 'false'
fi
}
parse_tx_hash() {
local output="$1"
local tx_hash
tx_hash="$(printf '%s\n' "$output" | grep -E '^0x[0-9a-fA-F]{64}$' | tail -n1 || true)"
if [[ -z "$tx_hash" ]]; then
tx_hash="$(printf '%s\n' "$output" | grep -E '^transactionHash[[:space:]]+0x[0-9a-fA-F]{64}$' | awk '{print $2}' | tail -n1 || true)"
fi
if [[ -z "$tx_hash" ]]; then
return 1
fi
printf '%s\n' "$tx_hash"
}
mapped_pool_before="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$QUOTE_TOKEN" --rpc-url "$RPC_URL" | awk '{print $1}')"
base_balance_before="$(cast call "$BASE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_balance_before="$(cast call "$QUOTE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
base_allowance_before="$(cast call "$BASE_TOKEN" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_allowance_before="$(cast call "$QUOTE_TOKEN" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
if (( base_balance_before < BASE_AMOUNT )); then
echo "[fail] insufficient base balance: have=$base_balance_before need=$BASE_AMOUNT" >&2
exit 1
fi
if (( quote_balance_before < QUOTE_AMOUNT )); then
echo "[fail] insufficient quote balance: have=$quote_balance_before need=$QUOTE_AMOUNT" >&2
exit 1
fi
remove_gas="$(cast estimate --from "$DEPLOYER" "$INTEGRATION" 'removePool(address)' "$LEGACY_POOL" --rpc-url "$RPC_URL")"
create_gas="post-remove estimate not available while current mapping exists"
if (( DRY_RUN == 1 )); then
echo "pair=$PAIR_LABEL"
echo "envKey=$ENV_KEY"
echo "legacyPool=$LEGACY_POOL"
echo "mappedPoolBefore=$mapped_pool_before"
echo "baseToken=$BASE_TOKEN"
echo "quoteToken=$QUOTE_TOKEN"
echo "initialPrice=$INITIAL_PRICE"
echo "feeBps=$FEE_BPS"
echo "k=$K_VALUE"
echo "openTwap=$OPEN_TWAP"
echo "baseAmount=$BASE_AMOUNT"
echo "quoteAmount=$QUOTE_AMOUNT"
echo "removeGas=$remove_gas"
echo "createGas=$create_gas"
echo "baseBalanceBefore=$base_balance_before"
echo "quoteBalanceBefore=$quote_balance_before"
echo "baseAllowanceBefore=$base_allowance_before"
echo "quoteAllowanceBefore=$quote_allowance_before"
exit 0
fi
remove_output="$(
cast send "$INTEGRATION" \
'removePool(address)' \
"$LEGACY_POOL" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
remove_tx="$(parse_tx_hash "$remove_output")"
create_output="$(
cast send "$INTEGRATION" \
'createPool(address,address,uint256,uint256,uint256,bool)(address)' \
"$BASE_TOKEN" "$QUOTE_TOKEN" "$FEE_BPS" "$INITIAL_PRICE" "$K_VALUE" "$(bool_to_cli "$OPEN_TWAP")" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
create_tx="$(parse_tx_hash "$create_output")"
new_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$QUOTE_TOKEN" --rpc-url "$RPC_URL" | awk '{print $1}')"
if [[ "$new_pool" == "0x0000000000000000000000000000000000000000" ]]; then
echo "[fail] replacement pool mapping is empty after createPool" >&2
exit 1
fi
approve_base_tx=""
approve_quote_tx=""
if (( base_allowance_before < BASE_AMOUNT )); then
approve_base_output="$(
cast send "$BASE_TOKEN" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$BASE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
approve_base_tx="$(parse_tx_hash "$approve_base_output")"
fi
if (( quote_allowance_before < QUOTE_AMOUNT )); then
approve_quote_output="$(
cast send "$QUOTE_TOKEN" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
approve_quote_tx="$(parse_tx_hash "$approve_quote_output")"
fi
add_output="$(
cast send "$INTEGRATION" \
'addLiquidity(address,uint256,uint256)(uint256,uint256,uint256)' \
"$new_pool" "$BASE_AMOUNT" "$QUOTE_AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
add_liquidity_tx="$(parse_tx_hash "$add_output")"
reserves_after="$(cast call "$new_pool" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
base_reserve_after="$(printf '%s\n' "$reserves_after" | sed -n '1p' | awk '{print $1}')"
quote_reserve_after="$(printf '%s\n' "$reserves_after" | sed -n '2p' | awk '{print $1}')"
base_balance_after="$(cast call "$BASE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_balance_after="$(cast call "$QUOTE_TOKEN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
echo "pair=$PAIR_LABEL"
echo "envKey=$ENV_KEY"
echo "legacyPool=$LEGACY_POOL"
echo "newPool=$new_pool"
echo "removeTx=$remove_tx"
echo "createTx=$create_tx"
echo "approveBaseTx=${approve_base_tx:-none}"
echo "approveQuoteTx=${approve_quote_tx:-none}"
echo "addLiquidityTx=$add_liquidity_tx"
echo "baseReserveAfter=$base_reserve_after"
echo "quoteReserveAfter=$quote_reserve_after"
echo "baseBalanceBefore=$base_balance_before"
echo "baseBalanceAfter=$base_balance_after"
echo "quoteBalanceBefore=$quote_balance_before"
echo "quoteBalanceAfter=$quote_balance_after"


@@ -0,0 +1,62 @@
#!/usr/bin/env bash
# Clear EIP-7702 delegation on an EOA by signing an EIP-7702 authorization for the zero address.
# When the transaction sender is the same account as the authority, the authorization tuple
# nonce must equal the account nonce AFTER the outer tx nonce is consumed (pending + 1).
#
# Usage (repo root):
# ./scripts/deployment/revoke-eip7702-delegation.sh [--dry-run] [--rpc-url URL] [--chain ID]
#
# Env: PRIVATE_KEY (via scripts/lib/load-project-env.sh + smom-dbis-138/.env).
# Default RPC: ETHEREUM_MAINNET_RPC; default chain: 1.
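#
# Worked example of the nonce rule (hypothetical values): if `cast nonce $EOA`
# returns 7, the outer type-4 transaction consumes nonce 7, so the authorization
# tuple must be signed with nonce 8:
#   cast wallet sign-auth 0x0000000000000000000000000000000000000000 \
#     --rpc-url "$RPC" --chain 1 --private-key "$PK" --nonce 8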
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=../../lib/load-project-env.sh
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh"
DRY_RUN=false
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
CHAIN_ID="${CHAIN_ID:-1}"
while [[ $# -gt 0 ]]; do
case "$1" in
--dry-run) DRY_RUN=true ;;
--rpc-url=*) RPC_URL="${1#*=}" ;;
--rpc-url) RPC_URL="${2:-}"; shift ;;
--chain=*) CHAIN_ID="${1#*=}" ;;
--chain) CHAIN_ID="${2:-}"; shift ;;
*) echo "Unknown option: $1" >&2; exit 2 ;;
esac
shift
done
[[ -n "${PRIVATE_KEY:-}" ]] || { echo "ERROR: PRIVATE_KEY not set" >&2; exit 1; }
[[ -n "$RPC_URL" ]] || { echo "ERROR: Set ETHEREUM_MAINNET_RPC or pass --rpc-url" >&2; exit 1; }
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
CODE="$(cast code "$DEPLOYER" --rpc-url "$RPC_URL" 2>/dev/null || true)"
if [[ "$CODE" == "0x" || -z "$CODE" ]]; then
echo "No contract/delegation code at $DEPLOYER (already plain EOA on this RPC)."
exit 0
fi
NEXT_NONCE="$(cast nonce "$DEPLOYER" --rpc-url "$RPC_URL")"
AUTH_NONCE=$((NEXT_NONCE + 1))
echo "Deployer $DEPLOYER — pending tx nonce $NEXT_NONCE; EIP-7702 auth nonce (must be +1) $AUTH_NONCE"
if [[ "$DRY_RUN" == true ]]; then
echo "Dry-run: would sign auth for 0x0 with --nonce $AUTH_NONCE and cast send (type 4) to self."
exit 0
fi
AUTH="$(cast wallet sign-auth 0x0000000000000000000000000000000000000000 \
--rpc-url "$RPC_URL" --chain "$CHAIN_ID" --private-key "$PRIVATE_KEY" --nonce "$AUTH_NONCE")"
cast send "$DEPLOYER" \
--rpc-url "$RPC_URL" --chain "$CHAIN_ID" --private-key "$PRIVATE_KEY" \
--value 0 --auth "$AUTH" \
--gas-limit 100000
echo "Verify: cast code $DEPLOYER --rpc-url <rpc>"


@@ -7,7 +7,7 @@
# --dry-run Run deploy-cw in dry-run mode (print commands only).
# --deploy Run deploy-cw on all chains (requires RPC/PRIVATE_KEY in smom-dbis-138/.env).
# --update-mapping Update config/token-mapping-multichain.json from CWUSDT_*/CWUSDC_* in .env.
-# --verify For each chain with CWUSDT_* set, check MINTER_ROLE/BURNER_ROLE on cW* for CW_BRIDGE_*.
+# --verify For each chain with configured cW* tokens, check MINTER_ROLE/BURNER_ROLE on those tokens for CW_BRIDGE_*.
# --verify-hard-peg Check Avalanche hard-peg bridge controls for cWUSDT/cWUSDC.
# With no options, runs --dry-run then --update-mapping (if any CWUSDT_* in .env).
set -euo pipefail
@@ -39,16 +39,26 @@ if [[ ! -f "$SMOM/.env" ]]; then
echo "Missing $SMOM/.env" >&2
exit 1
fi
-set -a
-set +u
-source "$SMOM/.env"
-set -u
-set +a
+if [[ -f "$SMOM/scripts/lib/deployment/dotenv.sh" ]]; then
+# shellcheck disable=SC1090
+source "$SMOM/scripts/lib/deployment/dotenv.sh"
+load_deployment_env --repo-root "$SMOM"
+else
+set -a
+set +u
+source "$SMOM/.env"
+set -u
+set +a
+if [[ -z "${PRIVATE_KEY:-}" && -n "${DEPLOYER_PRIVATE_KEY:-}" ]]; then
+export PRIVATE_KEY="$DEPLOYER_PRIVATE_KEY"
+fi
+fi
# Chain name (env suffix) -> chainId for 138 -> chain pairs
declare -A CHAIN_NAME_TO_ID=(
[MAINNET]=1 [CRONOS]=25 [BSC]=56 [POLYGON]=137 [GNOSIS]=100
-[AVALANCHE]=43114 [BASE]=8453 [ARBITRUM]=42161 [OPTIMISM]=10 [ALL]=651940
+[AVALANCHE]=43114 [BASE]=8453 [ARBITRUM]=42161 [OPTIMISM]=10 [CELO]=42220 [WEMIX]=1111 [ALL]=651940
)
if $DO_DEPLOY; then
@@ -79,8 +89,11 @@ function loadEnv(f) {
return out;
}
const env = loadEnv('$env_file');
-const chainToId = { MAINNET: 1, CRONOS: 25, BSC: 56, POLYGON: 137, GNOSIS: 100, AVALANCHE: 43114, BASE: 8453, ARBITRUM: 42161, OPTIMISM: 10, ALL: 651940 };
-const keyToEnv = { Compliant_USDT_cW: 'CWUSDT', Compliant_USDC_cW: 'CWUSDC', Compliant_EURC_cW: 'CWEURC', Compliant_EURT_cW: 'CWEURT', Compliant_GBPC_cW: 'CWGBPC', Compliant_GBPT_cW: 'CWGBPT', Compliant_AUDC_cW: 'CWAUDC', Compliant_JPYC_cW: 'CWJPYC', Compliant_CHFC_cW: 'CWCHFC', Compliant_CADC_cW: 'CWCADC', Compliant_XAUC_cW: 'CWXAUC', Compliant_XAUT_cW: 'CWXAUT' };
+const chainToId = { MAINNET: 1, CRONOS: 25, BSC: 56, POLYGON: 137, GNOSIS: 100, AVALANCHE: 43114, BASE: 8453, ARBITRUM: 42161, OPTIMISM: 10, CELO: 42220, WEMIX: 1111, ALL: 651940 };
+const keyToEnv = { Compliant_USDT_cW: 'CWUSDT', Compliant_USDC_cW: 'CWUSDC', Compliant_AUSDT_cW: 'CWAUSDT', Compliant_EURC_cW: 'CWEURC', Compliant_EURT_cW: 'CWEURT', Compliant_GBPC_cW: 'CWGBPC', Compliant_GBPT_cW: 'CWGBPT', Compliant_AUDC_cW: 'CWAUDC', Compliant_JPYC_cW: 'CWJPYC', Compliant_CHFC_cW: 'CWCHFC', Compliant_CADC_cW: 'CWCADC', Compliant_XAUC_cW: 'CWXAUC', Compliant_XAUT_cW: 'CWXAUT' };
+function getEnvAddress(prefix, chainName, chainId) {
+return env[prefix + '_' + chainName] || env[prefix + '_ADDRESS_' + chainId] || '';
+}
const j = JSON.parse(fs.readFileSync('$config_file', 'utf8'));
let updated = 0;
for (const [name, chainId] of Object.entries(chainToId)) {
@@ -89,7 +102,7 @@ for (const [name, chainId] of Object.entries(chainToId)) {
for (const t of pair.tokens) {
const envKey = keyToEnv[t.key];
if (!envKey) continue;
-const addr = env[envKey + '_' + name];
+const addr = getEnvAddress(envKey, name, chainId);
if (addr && t.addressTo !== addr) { t.addressTo = addr; updated++; }
}
}
@@ -111,15 +124,33 @@ if $DO_VERIFY; then
echo "=== Verify MINTER/BURNER roles on cW* for each chain ==="
MINTER_ROLE=$(cast keccak "MINTER_ROLE" 2>/dev/null || echo "0x")
BURNER_ROLE=$(cast keccak "BURNER_ROLE" 2>/dev/null || echo "0x")
-for name in MAINNET CRONOS BSC POLYGON GNOSIS AVALANCHE BASE ARBITRUM OPTIMISM; do
-cwusdt_var="CWUSDT_${name}"
+CW_TOKEN_PREFIXES=(CWUSDT CWUSDC CWAUSDT CWEURC CWEURT CWGBPC CWGBPT CWAUDC CWJPYC CWCHFC CWCADC CWXAUC CWXAUT)
+resolve_cw_token_value() {
+local prefix="$1"
+local name="$2"
+local chain_id="${CHAIN_NAME_TO_ID[$name]:-}"
+local chain_var="${prefix}_${name}"
+local address_var=""
+if [[ -n "$chain_id" ]]; then
+address_var="${prefix}_ADDRESS_${chain_id}"
+fi
+if [[ -n "${!chain_var:-}" ]]; then
+printf '%s' "${!chain_var}"
+return
+fi
+if [[ -n "$address_var" && -n "${!address_var:-}" ]]; then
+printf '%s' "${!address_var}"
+return
+fi
+printf ''
+}
+for name in MAINNET CRONOS BSC POLYGON GNOSIS AVALANCHE BASE ARBITRUM OPTIMISM CELO WEMIX; do
bridge_var="CW_BRIDGE_${name}"
-cwusdt="${!cwusdt_var:-}"
bridge="${!bridge_var:-}"
rpc_var="${name}_RPC_URL"
[[ -z "${!rpc_var:-}" ]] && rpc_var="${name}_RPC"
rpc="${!rpc_var:-}"
-if [[ -z "$cwusdt" || -z "$bridge" ]]; then continue; fi
+if [[ -z "$bridge" ]]; then continue; fi
if [[ -z "$rpc" ]]; then
case "$name" in
MAINNET) rpc="${ETH_MAINNET_RPC_URL:-${ETHEREUM_MAINNET_RPC:-}}";;
@@ -131,12 +162,25 @@ if $DO_VERIFY; then
BASE) rpc="${BASE_MAINNET_RPC:-}";;
ARBITRUM) rpc="${ARBITRUM_MAINNET_RPC:-}";;
OPTIMISM) rpc="${OPTIMISM_MAINNET_RPC:-}";;
+CELO) rpc="${CELO_RPC:-${CELO_MAINNET_RPC:-}}";;
+WEMIX) rpc="${WEMIX_RPC:-${WEMIX_MAINNET_RPC:-}}";;
esac
fi
if [[ -z "$rpc" ]]; then echo " Skip $name: no RPC"; continue; fi
-m=$(cast call "$cwusdt" "hasRole(bytes32,address)(bool)" "$MINTER_ROLE" "$bridge" --rpc-url "$rpc" 2>/dev/null || echo "false")
-b=$(cast call "$cwusdt" "hasRole(bytes32,address)(bool)" "$BURNER_ROLE" "$bridge" --rpc-url "$rpc" 2>/dev/null || echo "false")
-echo " $name: MINTER=$m BURNER=$b (cWUSDT=$cwusdt bridge=$bridge)"
+found_any=false
+for prefix in "${CW_TOKEN_PREFIXES[@]}"; do
+token="$(resolve_cw_token_value "$prefix" "$name")"
+if [[ -z "$token" ]]; then
+continue
+fi
+found_any=true
+m=$(cast call "$token" "hasRole(bytes32,address)(bool)" "$MINTER_ROLE" "$bridge" --rpc-url "$rpc" 2>/dev/null || echo "false")
+b=$(cast call "$token" "hasRole(bytes32,address)(bool)" "$BURNER_ROLE" "$bridge" --rpc-url "$rpc" 2>/dev/null || echo "false")
+echo " $name $prefix: MINTER=$m BURNER=$b (token=$token bridge=$bridge)"
+done
+if ! $found_any; then
+echo " Skip $name: no cW* token addresses configured"
+fi
done
fi


@@ -0,0 +1,153 @@
#!/usr/bin/env bash
# Guarded operator runner for Chain 138 gas-lane destination wiring.
# Default mode is echo-only. Set EXECUTE_GAS_L1_DESTINATIONS=1 to broadcast.
#
# Usage:
# bash scripts/deployment/run-gas-l1-destination-wiring.sh
# EXECUTE_GAS_L1_DESTINATIONS=1 bash scripts/deployment/run-gas-l1-destination-wiring.sh
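#
# To inspect the pending lanes without running this script at all, the planner's
# JSON can be queried directly (same fields this runner consumes):
#   bash scripts/deployment/print-gas-l1-destination-wiring-commands.sh --json \
#     | jq '{pending: .summary.pairsNeedingL1DestinationWiring, keys: [.rows[].key]}'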
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT
# shellcheck source=/dev/null
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" >/dev/null 2>&1 || true
command -v bash >/dev/null 2>&1 || { echo "[FAIL] bash is required" >&2; exit 1; }
command -v cast >/dev/null 2>&1 || { echo "[FAIL] cast is required" >&2; exit 1; }
command -v jq >/dev/null 2>&1 || { echo "[FAIL] jq is required" >&2; exit 1; }
EXECUTE="${EXECUTE_GAS_L1_DESTINATIONS:-0}"
GAS_LIMIT="${GAS_LIMIT:-250000}"
GAS_PRICE="${GAS_PRICE:-}"
CONTINUE_ON_ERROR="${CONTINUE_ON_ERROR:-0}"
log() { printf '%s\n' "$*"; }
ok() { printf '[OK] %s\n' "$*"; }
warn() { printf '[WARN] %s\n' "$*"; }
fail() { printf '[FAIL] %s\n' "$*" >&2; exit 1; }
RUNNER_ADDR="$(cast wallet address --private-key "${PRIVATE_KEY:?PRIVATE_KEY is required}")"
CHAIN138_L1_BRIDGE="${CHAIN138_L1_BRIDGE:-${CW_L1_BRIDGE_CHAIN138:-${CW_L1_BRIDGE:-}}}"
RPC_URL_138="${RPC_URL_138:?RPC_URL_138 is required}"
[[ -n "$CHAIN138_L1_BRIDGE" ]] || fail "CHAIN138_L1_BRIDGE or CW_L1_BRIDGE_CHAIN138 is required"
BRIDGE_ADMIN="$(cast call "$CHAIN138_L1_BRIDGE" 'admin()(address)' --rpc-url "$RPC_URL_138")"
if [[ "${RUNNER_ADDR,,}" != "${BRIDGE_ADMIN,,}" ]]; then
fail "Loaded PRIVATE_KEY resolves to $RUNNER_ADDR, but bridge admin is $BRIDGE_ADMIN"
fi
tmp_json="$(mktemp)"
trap 'rm -f "$tmp_json"' EXIT
bash "$PROJECT_ROOT/scripts/deployment/print-gas-l1-destination-wiring-commands.sh" --json >"$tmp_json"
pending_count="$(jq -r '.summary.pairsNeedingL1DestinationWiring // 0' "$tmp_json")"
log "=== Gas L1 Destination Wiring Runner ==="
log "Mode: $([[ "$EXECUTE" == "1" ]] && echo execute || echo dry-run)"
log "Bridge: $CHAIN138_L1_BRIDGE"
log "RPC: $RPC_URL_138"
log "Runner: $RUNNER_ADDR"
log "Pending lanes: $pending_count"
if [[ "$pending_count" == "0" ]]; then
ok "All active gas lanes are already wired on Chain 138."
exit 0
fi
successes=0
failures=0
while IFS= read -r row_b64; do
row_json="$(printf '%s' "$row_b64" | base64 -d)"
key="$(jq -r '.key' <<<"$row_json")"
chain_id="$(jq -r '.chainId' <<<"$row_json")"
chain_name="$(jq -r '.chainName' <<<"$row_json")"
family_key="$(jq -r '.familyKey' <<<"$row_json")"
canonical_symbol="$(jq -r '.canonicalSymbol' <<<"$row_json")"
mirrored_symbol="$(jq -r '.mirroredSymbol' <<<"$row_json")"
canonical_address="$(jq -r '.canonicalAddress' <<<"$row_json")"
selector="$(jq -r '.destinationChainSelector' <<<"$row_json")"
l2_bridge="$(jq -r '.l2BridgeAddress' <<<"$row_json")"
log ""
log "--- $key | $chain_id $chain_name | $family_key | $canonical_symbol->$mirrored_symbol"
if [[ "$EXECUTE" != "1" ]]; then
# Use %s (not %q) for the key placeholder so the printed command keeps a literal
# "$PRIVATE_KEY" that the operator's shell expands on paste.
printf 'cast send %q %q %q %q %q %q --rpc-url %q --private-key %s --legacy --gas-limit %q' \
"$CHAIN138_L1_BRIDGE" \
"configureDestination(address,uint64,address,bool)" \
"$canonical_address" \
"$selector" \
"$l2_bridge" \
"true" \
"$RPC_URL_138" \
'"$PRIVATE_KEY"' \
"$GAS_LIMIT"
if [[ -n "$GAS_PRICE" ]]; then
printf ' --gas-price %q' "$GAS_PRICE"
fi
printf '\n'
continue
fi
cmd=(
cast send "$CHAIN138_L1_BRIDGE"
"configureDestination(address,uint64,address,bool)"
"$canonical_address"
"$selector"
"$l2_bridge"
true
--rpc-url "$RPC_URL_138"
--private-key "$PRIVATE_KEY"
--legacy
--gas-limit "$GAS_LIMIT"
)
if [[ -n "$GAS_PRICE" ]]; then
cmd+=(--gas-price "$GAS_PRICE")
fi
if ! output="$("${cmd[@]}" 2>&1)"; then
printf '%s\n' "$output"
warn "Broadcast failed for $key"
failures=$((failures + 1))
if [[ "$CONTINUE_ON_ERROR" != "1" ]]; then
fail "Stopping after first failure; set CONTINUE_ON_ERROR=1 to continue."
fi
continue
fi
printf '%s\n' "$output"
verify="$(cast call "$CHAIN138_L1_BRIDGE" 'destinations(address,uint64)((address,bool))' "$canonical_address" "$selector" --rpc-url "$RPC_URL_138")"
verify_normalized="$(printf '%s' "$verify" | tr '[:upper:]' '[:lower:]')"
if [[ "$verify_normalized" == *"${l2_bridge,,}"* && "$verify_normalized" == *"true"* ]]; then
ok "Verified $key wired on L1"
successes=$((successes + 1))
else
warn "Verification did not match expected receiver for $key: $verify"
failures=$((failures + 1))
if [[ "$CONTINUE_ON_ERROR" != "1" ]]; then
fail "Stopping after verification mismatch; set CONTINUE_ON_ERROR=1 to continue."
fi
fi
done < <(jq -r '.rows[] | @base64' "$tmp_json")
log ""
log "=== Summary ==="
log "Successful lanes: $successes"
log "Failed lanes: $failures"
if [[ "$EXECUTE" != "1" ]]; then
warn "Dry-run only. Re-run with EXECUTE_GAS_L1_DESTINATIONS=1 to broadcast."
exit 0
fi
if [[ "$failures" -gt 0 ]]; then
fail "One or more gas-lane destination writes failed."
fi
ok "All pending gas-lane destination writes completed."


@@ -0,0 +1,122 @@
#!/usr/bin/env bash
# Execute or dry-run the GRU V2 Wave 1 Chain 138 deployment plan.
#
# Usage:
# bash scripts/deployment/run-gru-v2-wave1-chain138.sh
# bash scripts/deployment/run-gru-v2-wave1-chain138.sh --code=EUR
# bash scripts/deployment/run-gru-v2-wave1-chain138.sh --apply
#
# Default mode is dry-run. Use --apply to broadcast.
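#
# Typical flow: dry-run one asset first, then broadcast just that asset:
#   bash scripts/deployment/run-gru-v2-wave1-chain138.sh --code=EUR
#   PRIVATE_KEY=0x... bash scripts/deployment/run-gru-v2-wave1-chain138.sh --code=EUR --apply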
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
PLAN_JSON="${PROJECT_ROOT}/config/gru-v2-chain138-wave1-v2-plan.json"
DEPLOY_SCRIPT="${PROJECT_ROOT}/scripts/deployment/deploy-gru-v2-generic-chain138.sh"
METADATA_MANIFEST="${PROJECT_ROOT}/config/gru-v2-metadata/chain138/ipfs-manifest.latest.json"
FILTER_CODE=""
APPLY=0
for arg in "$@"; do
case "$arg" in
--code=*) FILTER_CODE="${arg#--code=}" ;;
--apply) APPLY=1 ;;
*)
echo "Unknown argument: $arg" >&2
exit 2
;;
esac
done
if ! command -v jq >/dev/null 2>&1; then
echo "jq is required" >&2
exit 1
fi
if [[ ! -f "$PLAN_JSON" ]]; then
echo "Missing plan JSON: $PLAN_JSON" >&2
exit 1
fi
RUN_TS="$(date -u +%Y%m%dT%H%M%SZ)"
LOG_DIR="${PROJECT_ROOT}/.codex-artifacts/gru-v2-wave1"
mkdir -p "$LOG_DIR"
echo "=== GRU V2 Wave 1 Chain 138 runner ==="
echo "Plan: $PLAN_JSON"
if [[ -n "$FILTER_CODE" ]]; then
echo "Filter: $FILTER_CODE"
fi
if [[ "$APPLY" == "1" ]]; then
echo "Mode: APPLY"
: "${PRIVATE_KEY:?PRIVATE_KEY is required for --apply}"
echo "Logs: $LOG_DIR/$RUN_TS-*.log"
else
echo "Mode: DRY_RUN"
fi
echo ""
jq -c --arg filter "$FILTER_CODE" '
.assets[]
| select(($filter == "") or (.code == $filter))
| . as $asset
| .deployments[]
| {
code: $asset.code,
tokenName,
tokenSymbol,
currencyCode,
metadataKey,
registryName,
registrySymbol
}
' "$PLAN_JSON" | while IFS= read -r row; do
code="$(jq -r '.code' <<<"$row")"
token_name="$(jq -r '.tokenName' <<<"$row")"
token_symbol="$(jq -r '.tokenSymbol' <<<"$row")"
currency_code="$(jq -r '.currencyCode' <<<"$row")"
metadata_key="$(jq -r '.metadataKey' <<<"$row")"
registry_name="$(jq -r '.registryName' <<<"$row")"
registry_symbol="$(jq -r '.registrySymbol' <<<"$row")"
token_uri=""
disclosure_uri=""
reporting_uri=""
if [[ -f "$METADATA_MANIFEST" ]]; then
token_uri="$(jq -r --arg symbol "$token_symbol" '.assets[$symbol].tokenURI // empty' "$METADATA_MANIFEST")"
disclosure_uri="$(jq -r --arg symbol "$token_symbol" '.assets[$symbol].regulatoryDisclosureURI // empty' "$METADATA_MANIFEST")"
reporting_uri="$(jq -r --arg symbol "$token_symbol" '.assets[$symbol].reportingURI // empty' "$METADATA_MANIFEST")"
fi
echo "--- ${code} :: ${token_symbol} ---"
if [[ -n "$token_uri" ]]; then
echo "metadata: ${metadata_key} -> ${token_uri}"
else
echo "metadata: ${metadata_key} -> no IPFS manifest entry found; deploying without token/disclosure/reporting URIs"
fi
if [[ "$APPLY" == "1" ]]; then
TOKEN_NAME="$token_name" \
TOKEN_SYMBOL="$token_symbol" \
CURRENCY_CODE="$currency_code" \
REGISTRY_NAME="$registry_name" \
REGISTRY_SYMBOL="$registry_symbol" \
TOKEN_URI="$token_uri" \
REGULATORY_DISCLOSURE_URI="$disclosure_uri" \
REPORTING_URI="$reporting_uri" \
bash "$DEPLOY_SCRIPT" | tee "$LOG_DIR/${RUN_TS}-${token_symbol}.log"
else
TOKEN_NAME="$token_name" \
TOKEN_SYMBOL="$token_symbol" \
CURRENCY_CODE="$currency_code" \
REGISTRY_NAME="$registry_name" \
REGISTRY_SYMBOL="$registry_symbol" \
TOKEN_URI="$token_uri" \
REGULATORY_DISCLOSURE_URI="$disclosure_uri" \
REPORTING_URI="$reporting_uri" \
PRIVATE_KEY="${PRIVATE_KEY:-0x1111111111111111111111111111111111111111111111111111111111111111}" \
bash "$DEPLOY_SCRIPT" --dry-run
fi
echo ""
done


@@ -0,0 +1,38 @@
#!/usr/bin/env bash
set -euo pipefail
# Repeat run-mainnet-aave-cwusdc-quote-push-once.sh (same FLASH_QUOTE_AMOUNT_RAW) N times.
# Reserves move each round; when MIN_OUT_PMM is unset, the forge script recomputes it every run.
#
# Env:
# FLASH_LOOP_COUNT default 3
# FLASH_LOOP_SLEEP_SECS default 15 (pause between broadcasts for inclusion / indexing)
# Same env as run-mainnet-aave-cwusdc-quote-push-once.sh
#
# Usage:
# FLASH_LOOP_COUNT=5 bash scripts/deployment/run-mainnet-aave-cwusdc-quote-push-loop.sh --dry-run
# FLASH_LOOP_COUNT=5 bash scripts/deployment/run-mainnet-aave-cwusdc-quote-push-loop.sh --apply
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ONCE="${SCRIPT_DIR}/run-mainnet-aave-cwusdc-quote-push-once.sh"
COUNT="${FLASH_LOOP_COUNT:-3}"
SLEEP_SECS="${FLASH_LOOP_SLEEP_SECS:-15}"
if [[ ! -f "$ONCE" ]]; then
echo "[fail] missing $ONCE" >&2
exit 1
fi
for ((i = 1; i <= COUNT; i++)); do
echo "=== quote-push loop $i / $COUNT ==="
bash "$ONCE" "$@"
if [[ "$*" != *--apply* ]]; then
continue
fi
if ((i < COUNT)); then
echo "sleep ${SLEEP_SECS}s before next round..."
sleep "$SLEEP_SECS"
fi
done
echo "Loop finished ($COUNT rounds)."


@@ -0,0 +1,287 @@
#!/usr/bin/env bash
set -euo pipefail
# Run one Mainnet Aave flash quote-push via forge (see RunMainnetAaveCwusdcUsdcQuotePushOnce.s.sol).
# Default: simulation only. Use --apply to broadcast.
#
# Required env:
# PRIVATE_KEY, ETHEREUM_MAINNET_RPC
# DODO_PMM_INTEGRATION_MAINNET
# AAVE_QUOTE_PUSH_RECEIVER_MAINNET optional; defaults to canonical receiver or latest broadcast
# QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET optional; can be auto-picked from latest broadcast when QUOTE_PUSH_UNWINDER_TYPE is set
# FLASH_QUOTE_AMOUNT_RAW optional; defaults to 200000
# UNWIND_MODE: 0 = V3 single-hop fee; 1 = DODO pool; 2 = V3 exactInput path (UNWIND_V3_PATH_HEX);
# 4 = TwoHopDodoIntegrationUnwinder (UNWIND_TWO_HOP_*)
# 5 = DODOToUniswapV3MultiHopExternalUnwinder (UNWIND_DODO_POOL + UNWIND_INTERMEDIATE_TOKEN + UNWIND_V3_PATH_HEX)
#
# Usage:
# source scripts/lib/load-project-env.sh
# bash scripts/deployment/run-mainnet-aave-cwusdc-quote-push-once.sh --dry-run
# bash scripts/deployment/run-mainnet-aave-cwusdc-quote-push-once.sh --apply
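#
# Example (hypothetical unwinder address): pin the unwinder and mode explicitly
# instead of relying on latest-broadcast auto-pick:
#   QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET=0x... UNWIND_MODE=4 \
#     bash scripts/deployment/run-mainnet-aave-cwusdc-quote-push-once.sh --dry-run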
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROXMOX_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
SMOM="${PROXMOX_ROOT}/smom-dbis-138"
DEFAULT_AAVE_QUOTE_PUSH_RECEIVER_MAINNET="0x241cb416aaFC2654078b7E2376adED2bDeFbCBa2"
DEFAULT_POOL_CWUSDC_USDC_MAINNET="0x69776fc607e9edA8042e320e7e43f54d06c68f0E"
# Preserve explicit caller overrides across sourced repo env files.
_qp_private_key="${PRIVATE_KEY-}"
_qp_rpc="${ETHEREUM_MAINNET_RPC-}"
_qp_receiver="${AAVE_QUOTE_PUSH_RECEIVER_MAINNET-}"
_qp_unwinder="${QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET-}"
_qp_amount="${FLASH_QUOTE_AMOUNT_RAW-}"
_qp_unwind_type="${QUOTE_PUSH_UNWINDER_TYPE-}"
_qp_unwind_mode="${UNWIND_MODE-}"
_qp_pool="${POOL_CWUSDC_USDC_MAINNET-}"
_qp_integration="${DODO_PMM_INTEGRATION_MAINNET-}"
_qp_pool_a="${UNWIND_TWO_HOP_POOL_A-}"
_qp_pool_b="${UNWIND_TWO_HOP_POOL_B-}"
_qp_mid_token="${UNWIND_TWO_HOP_MID_TOKEN-}"
_qp_min_mid_out="${UNWIND_MIN_MID_OUT_RAW-}"
_qp_min_out_pmm="${MIN_OUT_PMM-}"
_qp_min_out_unwind="${MIN_OUT_UNWIND-}"
_qp_fee_u24="${UNWIND_V3_FEE_U24-}"
_qp_dodo_pool="${UNWIND_DODO_POOL-}"
_qp_v3_path="${UNWIND_V3_PATH_HEX-}"
_qp_intermediate_token="${UNWIND_INTERMEDIATE_TOKEN-}"
_qp_min_intermediate_out="${UNWIND_MIN_INTERMEDIATE_OUT_RAW-}"
# shellcheck disable=SC1091
source "${PROXMOX_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
# shellcheck disable=SC1091
source "${SMOM}/scripts/load-env.sh" >/dev/null 2>&1 || true
[[ -n "$_qp_private_key" ]] && export PRIVATE_KEY="$_qp_private_key"
[[ -n "$_qp_rpc" ]] && export ETHEREUM_MAINNET_RPC="$_qp_rpc"
[[ -n "$_qp_receiver" ]] && export AAVE_QUOTE_PUSH_RECEIVER_MAINNET="$_qp_receiver"
[[ -n "$_qp_unwinder" ]] && export QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET="$_qp_unwinder"
[[ -n "$_qp_amount" ]] && export FLASH_QUOTE_AMOUNT_RAW="$_qp_amount"
[[ -n "$_qp_unwind_type" ]] && export QUOTE_PUSH_UNWINDER_TYPE="$_qp_unwind_type"
[[ -n "$_qp_unwind_mode" ]] && export UNWIND_MODE="$_qp_unwind_mode"
[[ -n "$_qp_pool" ]] && export POOL_CWUSDC_USDC_MAINNET="$_qp_pool"
[[ -n "$_qp_integration" ]] && export DODO_PMM_INTEGRATION_MAINNET="$_qp_integration"
[[ -n "$_qp_pool_a" ]] && export UNWIND_TWO_HOP_POOL_A="$_qp_pool_a"
[[ -n "$_qp_pool_b" ]] && export UNWIND_TWO_HOP_POOL_B="$_qp_pool_b"
[[ -n "$_qp_mid_token" ]] && export UNWIND_TWO_HOP_MID_TOKEN="$_qp_mid_token"
[[ -n "$_qp_min_mid_out" ]] && export UNWIND_MIN_MID_OUT_RAW="$_qp_min_mid_out"
[[ -n "$_qp_min_out_pmm" ]] && export MIN_OUT_PMM="$_qp_min_out_pmm"
[[ -n "$_qp_min_out_unwind" ]] && export MIN_OUT_UNWIND="$_qp_min_out_unwind"
[[ -n "$_qp_fee_u24" ]] && export UNWIND_V3_FEE_U24="$_qp_fee_u24"
[[ -n "$_qp_dodo_pool" ]] && export UNWIND_DODO_POOL="$_qp_dodo_pool"
[[ -n "$_qp_v3_path" ]] && export UNWIND_V3_PATH_HEX="$_qp_v3_path"
[[ -n "$_qp_intermediate_token" ]] && export UNWIND_INTERMEDIATE_TOKEN="$_qp_intermediate_token"
[[ -n "$_qp_min_intermediate_out" ]] && export UNWIND_MIN_INTERMEDIATE_OUT_RAW="$_qp_min_intermediate_out"
unset _qp_private_key _qp_rpc _qp_receiver _qp_unwinder _qp_amount _qp_unwind_type _qp_unwind_mode
unset _qp_pool _qp_integration _qp_pool_a _qp_pool_b _qp_mid_token _qp_min_mid_out _qp_min_out_pmm
unset _qp_min_out_unwind _qp_fee_u24 _qp_dodo_pool _qp_v3_path _qp_intermediate_token _qp_min_intermediate_out
BROADCAST=()
if (($# == 0)); then
:
else
for a in "$@"; do
case "$a" in
--apply) BROADCAST=(--broadcast) ;;
--dry-run) BROADCAST=() ;;
*)
echo "[fail] unknown arg: $a" >&2
exit 2
;;
esac
done
fi
require() {
local n="$1"
if [[ -z "${!n:-}" ]]; then
if [[ "$n" == "QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET" ]]; then
cat >&2 <<EOF
[fail] missing required env: QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET
[hint] No real broadcasted unwinder was found for auto-pick.
[hint] Next step:
QUOTE_PUSH_UNWINDER_TYPE=two_hop_dodo bash scripts/deployment/deploy-mainnet-aave-quote-push-stack.sh --apply
[hint] Then export:
AAVE_QUOTE_PUSH_RECEIVER_MAINNET=${AAVE_QUOTE_PUSH_RECEIVER_MAINNET:-$DEFAULT_AAVE_QUOTE_PUSH_RECEIVER_MAINNET}
QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET=<real_apply_deployed_unwinder_address>
FLASH_QUOTE_AMOUNT_RAW=${FLASH_QUOTE_AMOUNT_RAW:-200000}
UNWIND_MODE=${UNWIND_MODE:-4}
UNWIND_TWO_HOP_POOL_A=${UNWIND_TWO_HOP_POOL_A:-0xe944b7Cb012A0820c07f54D51e92f0e1C74168DB}
UNWIND_TWO_HOP_POOL_B=${UNWIND_TWO_HOP_POOL_B:-0x27f3aE7EE71Be3d77bAf17d4435cF8B895DD25D2}
UNWIND_TWO_HOP_MID_TOKEN=${UNWIND_TWO_HOP_MID_TOKEN:-0xaF5017d0163ecb99d9B5D94e3b4D7b09Af44D8AE}
EOF
exit 1
fi
echo "[fail] missing required env: $n" >&2
exit 1
fi
}
pick_latest_create_address() {
local script_name="$1"
local contract_name="$2"
local latest_json="${SMOM}/broadcast/${script_name}/1/run-latest.json"
if [[ ! -f "$latest_json" ]]; then
return 1
fi
if ! command -v jq >/dev/null 2>&1; then
return 1
fi
jq -r --arg contract "$contract_name" \
'.transactions[]? | select(.transactionType == "CREATE" and .contractName == $contract) | .contractAddress' \
"$latest_json" | tail -n1
}
pick_default_unwinder() {
local addr=""
PICK_DEFAULT_UNWINDER_ADDR=""
PICK_DEFAULT_UNWINDER_MODE=""
addr="$(pick_latest_create_address "DeployTwoHopDodoIntegrationUnwinder.s.sol" "TwoHopDodoIntegrationUnwinder" || true)"
if [[ -n "$addr" && "$addr" != "null" ]]; then
PICK_DEFAULT_UNWINDER_ADDR="$addr"
PICK_DEFAULT_UNWINDER_MODE="4"
return 0
fi
addr="$(pick_latest_create_address "DeployDODOIntegrationExternalUnwinder.s.sol" "DODOIntegrationExternalUnwinder" || true)"
if [[ -n "$addr" && "$addr" != "null" ]]; then
PICK_DEFAULT_UNWINDER_ADDR="$addr"
PICK_DEFAULT_UNWINDER_MODE="1"
return 0
fi
addr="$(pick_latest_create_address "DeployUniswapV3ExternalUnwinder.s.sol" "UniswapV3ExternalUnwinder" || true)"
if [[ -n "$addr" && "$addr" != "null" ]]; then
PICK_DEFAULT_UNWINDER_ADDR="$addr"
PICK_DEFAULT_UNWINDER_MODE="0"
return 0
fi
addr="$(pick_latest_create_address "DeployDODOToUniswapV3MultiHopExternalUnwinder.s.sol" "DODOToUniswapV3MultiHopExternalUnwinder" || true)"
if [[ -n "$addr" && "$addr" != "null" ]]; then
PICK_DEFAULT_UNWINDER_ADDR="$addr"
PICK_DEFAULT_UNWINDER_MODE="5"
return 0
fi
return 1
}
UNW="${QUOTE_PUSH_UNWINDER_TYPE:-}"
if [[ -z "${AAVE_QUOTE_PUSH_RECEIVER_MAINNET:-}" ]]; then
inferred_receiver="$(pick_latest_create_address "DeployAaveQuotePushFlashReceiver.s.sol" "AaveQuotePushFlashReceiver" || true)"
export AAVE_QUOTE_PUSH_RECEIVER_MAINNET="${inferred_receiver:-$DEFAULT_AAVE_QUOTE_PUSH_RECEIVER_MAINNET}"
fi
# _qp_pool has already been unset above, so only fall back to the canonical
# default when neither a caller override nor a sourced env file set the pool.
if [[ -z "${POOL_CWUSDC_USDC_MAINNET:-}" ]]; then
export POOL_CWUSDC_USDC_MAINNET="$DEFAULT_POOL_CWUSDC_USDC_MAINNET"
fi
if [[ -z "${FLASH_QUOTE_AMOUNT_RAW:-}" ]]; then
export FLASH_QUOTE_AMOUNT_RAW=200000
fi
if [[ -z "${MIN_OUT_PMM:-}" ]]; then
export MIN_OUT_PMM=1
fi
if [[ -z "${QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET:-}" && -n "$UNW" ]]; then
unwind_script=""
unwind_contract=""
case "$UNW" in
univ3)
unwind_script="DeployUniswapV3ExternalUnwinder.s.sol"
unwind_contract="UniswapV3ExternalUnwinder"
export UNWIND_MODE="${UNWIND_MODE:-0}"
;;
dodo)
unwind_script="DeployDODOIntegrationExternalUnwinder.s.sol"
unwind_contract="DODOIntegrationExternalUnwinder"
export UNWIND_MODE="${UNWIND_MODE:-1}"
;;
two_hop_dodo)
unwind_script="DeployTwoHopDodoIntegrationUnwinder.s.sol"
unwind_contract="TwoHopDodoIntegrationUnwinder"
export UNWIND_MODE="${UNWIND_MODE:-4}"
;;
dodo_univ3)
unwind_script="DeployDODOToUniswapV3MultiHopExternalUnwinder.s.sol"
unwind_contract="DODOToUniswapV3MultiHopExternalUnwinder"
export UNWIND_MODE="${UNWIND_MODE:-5}"
;;
*)
echo "[fail] QUOTE_PUSH_UNWINDER_TYPE must be univ3, dodo, two_hop_dodo, or dodo_univ3 when set" >&2
exit 1
;;
esac
inferred_unwinder="$(pick_latest_create_address "$unwind_script" "$unwind_contract" || true)"
if [[ -n "$inferred_unwinder" && "$inferred_unwinder" != "null" ]]; then
export QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET="$inferred_unwinder"
fi
fi
if [[ -z "${QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET:-}" ]]; then
if pick_default_unwinder; then
if [[ -n "$PICK_DEFAULT_UNWINDER_ADDR" && "$PICK_DEFAULT_UNWINDER_ADDR" != "null" ]]; then
export QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET="$PICK_DEFAULT_UNWINDER_ADDR"
export UNWIND_MODE="$PICK_DEFAULT_UNWINDER_MODE"
fi
fi
fi
if [[ "${UNWIND_MODE:-}" == "4" ]]; then
export UNWIND_TWO_HOP_POOL_A="${UNWIND_TWO_HOP_POOL_A:-0xe944b7Cb012A0820c07f54D51e92f0e1C74168DB}"
export UNWIND_TWO_HOP_POOL_B="${UNWIND_TWO_HOP_POOL_B:-0x27f3aE7EE71Be3d77bAf17d4435cF8B895DD25D2}"
export UNWIND_TWO_HOP_MID_TOKEN="${UNWIND_TWO_HOP_MID_TOKEN:-0xaF5017d0163ecb99d9B5D94e3b4D7b09Af44D8AE}"
export UNWIND_MIN_MID_OUT_RAW="${UNWIND_MIN_MID_OUT_RAW:-1}"
elif [[ "${UNWIND_MODE:-}" == "5" ]]; then
export UNWIND_DODO_POOL="${UNWIND_DODO_POOL:-0xCC0fd27A40775c9AfcD2BBd3f7c902b0192c247A}"
export UNWIND_INTERMEDIATE_TOKEN="${UNWIND_INTERMEDIATE_TOKEN:-0xdAC17F958D2ee523a2206206994597C13D831ec7}"
export UNWIND_MIN_INTERMEDIATE_OUT_RAW="${UNWIND_MIN_INTERMEDIATE_OUT_RAW:-1}"
fi
require ETHEREUM_MAINNET_RPC
require PRIVATE_KEY
require AAVE_QUOTE_PUSH_RECEIVER_MAINNET
require QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET
require DODO_PMM_INTEGRATION_MAINNET
require FLASH_QUOTE_AMOUNT_RAW
UM="${UNWIND_MODE:-0}"
if [[ "$UM" == "0" ]]; then
require UNWIND_V3_FEE_U24
elif [[ "$UM" == "1" ]]; then
require UNWIND_DODO_POOL
elif [[ "$UM" == "2" ]]; then
require UNWIND_V3_PATH_HEX
elif [[ "$UM" == "4" ]]; then
require UNWIND_TWO_HOP_POOL_A
require UNWIND_TWO_HOP_POOL_B
require UNWIND_TWO_HOP_MID_TOKEN
elif [[ "$UM" == "5" ]]; then
require UNWIND_DODO_POOL
require UNWIND_INTERMEDIATE_TOKEN
require UNWIND_V3_PATH_HEX
else
echo "[fail] UNWIND_MODE must be 0, 1, 2, 4, or 5" >&2
exit 1
fi
echo "receiver=$AAVE_QUOTE_PUSH_RECEIVER_MAINNET"
echo "unwinder=$QUOTE_PUSH_EXTERNAL_UNWINDER_MAINNET"
echo "flash_quote_amount_raw=$FLASH_QUOTE_AMOUNT_RAW unwind_mode=$UM"
(
cd "$SMOM"
forge script script/flash/RunMainnetAaveCwusdcUsdcQuotePushOnce.s.sol:RunMainnetAaveCwusdcUsdcQuotePushOnce \
--rpc-url "$ETHEREUM_MAINNET_RPC" \
"${BROADCAST[@]}" \
-vvvv
)
echo "Done. Re-check: bash scripts/verify/check-mainnet-cwusdc-usdc-reserve-peg.sh"
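For quick reference, the `QUOTE_PUSH_UNWINDER_TYPE` values accepted above map to `UNWIND_MODE` as univ3→0, dodo→1, two_hop_dodo→4, dodo_univ3→5. A minimal standalone lookup that mirrors the case statement (a sketch, not repo code):

```shell
# Sketch only: same type -> mode mapping as the deploy helper's case statement.
unwind_mode_for() {
  case "$1" in
    univ3)        echo 0 ;;
    dodo)         echo 1 ;;
    two_hop_dodo) echo 4 ;;
    dodo_univ3)   echo 5 ;;
    *)            return 1 ;;
  esac
}

unwind_mode_for dodo_univ3   # prints 5
```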


@@ -0,0 +1,94 @@
#!/usr/bin/env bash
set -euo pipefail
# Read-only: loop **modeling** runs (no chain txs) for flash quote-push against live
# Mainnet cWUSDC/USDC reserves. Use when wallet USDC is zero but you want sized tranches.
#
# Env:
# ETHEREUM_MAINNET_RPC (required)
# FLASH_MODEL_SCAN_SIZES — comma-separated gross flash quote sizes in raw units (default 5000000,10000000,25000000)
# PMM_FLASH_EXIT_PRICE_CMD — passed to node --external-exit-price-cmd (default: printf 1.02)
# GAS_GWEI, NATIVE_PRICE — optional overrides for the economics model
# FLASH_MODEL_GAS_TX_COUNT, FLASH_MODEL_GAS_PER_TX — gas row (defaults 3 / 250000)
# FLASH_MODEL_MAX_POST_TRADE_DEV_BPS — deviation guard (default 500; raise only for stress math, not production)
#
# Usage:
# source scripts/lib/load-project-env.sh
# bash scripts/deployment/run-mainnet-cwusdc-flash-quote-push-model-sweep.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROXMOX_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
# shellcheck disable=SC1091
source "${PROXMOX_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
# shellcheck disable=SC1091
source "${PROXMOX_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || true
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
require_cmd node
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
CWUSDC="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
USDC="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
POOL="${POOL_CWUSDC_USDC_MAINNET:-0x69776fc607e9edA8042e320e7e43f54d06c68f0E}"
SCAN="${FLASH_MODEL_SCAN_SIZES:-5000000,10000000,25000000}"
EXIT_CMD="${PMM_FLASH_EXIT_PRICE_CMD:-printf 1.02}"
GAS_GWEI="${GAS_GWEI:-40}"
NATIVE_PRICE="${NATIVE_PRICE:-3200}"
GAS_TXN="${FLASH_MODEL_GAS_TX_COUNT:-3}"
GAS_PER="${FLASH_MODEL_GAS_PER_TX:-250000}"
DEV_BPS="${FLASH_MODEL_MAX_POST_TRADE_DEV_BPS:-500}"
if [[ -z "$RPC_URL" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC is required" >&2
exit 1
fi
if [[ -n "$INTEGRATION" ]]; then
mapped_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$CWUSDC" "$USDC" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}' || true)"
if [[ -n "$mapped_pool" && "$mapped_pool" != "0x0000000000000000000000000000000000000000" && "${mapped_pool,,}" != "${POOL,,}" ]]; then
POOL="$mapped_pool"
fi
fi
out="$(cast call "$POOL" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
B="$(printf '%s\n' "$out" | sed -n '1p' | awk '{print $1}')"
Q="$(printf '%s\n' "$out" | sed -n '2p' | awk '{print $1}')"
PMM_JS="${PROXMOX_ROOT}/scripts/analytics/pmm-flash-push-break-even.mjs"
echo "=== Flash quote-push model sweep (dry-run only) ==="
echo "pool=$POOL B=$B Q=$Q scan=$SCAN gas_tx=$GAS_TXN gas_per=$GAS_PER max_fee_gwei=$GAS_GWEI dev_bps=$DEV_BPS"
echo
# --full-loop-dry-run requires explicit -x; run one node invocation per scan size.
IFS=',' read -r -a sizes <<< "${SCAN// /}"
for x in "${sizes[@]}"; do
[[ -z "$x" ]] && continue
echo "----- model trade_raw=$x -----"
node "$PMM_JS" \
-B "$B" -Q "$Q" \
--full-loop-dry-run \
--loop-strategy quote-push \
-x "$x" \
--base-asset cWUSDC --base-decimals 6 \
--quote-asset USDC --quote-decimals 6 \
--external-exit-price-cmd "$EXIT_CMD" \
--flash-fee-bps 5 --lp-fee-bps 3 \
--flash-provider-cap 1000000000000 \
--flash-provider-name 'Aave mainnet (cap placeholder)' \
--gas-tx-count "$GAS_TXN" --gas-per-tx "$GAS_PER" \
--max-fee-gwei "$GAS_GWEI" \
--native-token-price "$NATIVE_PRICE" \
--max-post-trade-deviation-bps "$DEV_BPS"
echo
done
echo "Live execution: deploy-mainnet-aave-quote-push-stack.sh then run-mainnet-aave-cwusdc-quote-push-once.sh (see plan-mainnet-cwusdc-flash-quote-push-rebalance.sh)"
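`FLASH_MODEL_SCAN_SIZES` is consumed as plain integers; scientific-notation shorthand like `5e6` is not expanded by the script itself. A hedged helper sketch (`expand_scan_sizes` is hypothetical, not part of the repo) for producing the expected list:

```shell
# Hypothetical helper: expand sizes like "5e6,1e7,2.5e7" into the
# plain-integer list FLASH_MODEL_SCAN_SIZES expects.
expand_scan_sizes() {
  python3 - "$1" <<'PY'
import sys
tokens = [t for t in sys.argv[1].replace(" ", "").split(",") if t]
print(",".join(str(int(float(t))) for t in tokens))
PY
}

expand_scan_sizes "5e6,1e7,2.5e7"   # prints 5000000,10000000,25000000
```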


@@ -0,0 +1,110 @@
#!/usr/bin/env bash
set -euo pipefail
# Deterministic cWUSDT <-> cWUSDC round-trip exercise on the Mainnet PMM pair (planning / canary).
#
# - Uses fixed operator-supplied raw amounts only (no RNG). Repeating the same list is for soak
# testing, not for fabricating "organic" volume.
# - Default is dry-run only. One DODO pool already supports both swap directions; there is no
# separate "reverse" pool deployment.
#
# Amounts are 6-decimal raw token units (e.g. 100 USDC face = 100000000 raw).
#
# Usage:
# source scripts/lib/load-project-env.sh
# export POOL_CWUSDT_CWUSDC_MAINNET=0x... # required after deploy
# bash scripts/deployment/run-mainnet-cwusdt-cwusdc-soak-roundtrips.sh \
# --amounts-raw 100000000,500000000,1000000000,5000000000,10000000000 \
# --repeat-list 10
#
# bash scripts/deployment/run-mainnet-cwusdt-cwusdc-soak-roundtrips.sh --dry-run # explicit (default)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Proxmox repo root (smom-dbis-138 load-env overwrites PROJECT_ROOT to the submodule)
PROXMOX_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
# shellcheck disable=SC1091
source "${PROXMOX_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1
SWAP="${PROXMOX_ROOT}/scripts/deployment/run-mainnet-public-dodo-cw-swap.sh"
AMOUNTS_RAW=""
REPEAT_LIST="1"
DRY_RUN=1
for arg in "$@"; do
case "$arg" in
--amounts-raw=*) AMOUNTS_RAW="${arg#*=}" ;;
--repeat-list=*) REPEAT_LIST="${arg#*=}" ;;
--dry-run) DRY_RUN=1 ;;
--apply) DRY_RUN=0 ;;
-h | --help)
sed -n '1,28p' "$0"
exit 0
;;
*)
echo "[fail] unknown arg: $arg" >&2
exit 2
;;
esac
done
if [[ -z "$AMOUNTS_RAW" ]]; then
echo "[fail] required: --amounts-raw=comma-separated raw amounts (6 dp)" >&2
exit 1
fi
POOL_CWUSDT_CWUSDC_MAINNET="${POOL_CWUSDT_CWUSDC_MAINNET:-0xe944b7Cb012A0820c07f54D51e92f0e1C74168DB}"
export POOL_CWUSDT_CWUSDC_MAINNET
if ! [[ "$REPEAT_LIST" =~ ^[0-9]+$ ]] || (( REPEAT_LIST < 1 )); then
echo "[fail] --repeat-list must be a positive integer" >&2
exit 1
fi
DRY_FLAG=()
if (( DRY_RUN == 1 )); then
DRY_FLAG=(--dry-run)
fi
if (( DRY_RUN == 0 )); then
echo "[warn] --apply will broadcast mainnet txs; ensure amounts, slippage, and wallet inventory are correct." >&2
fi
IFS=',' read -r -a AMOUNTS <<<"$AMOUNTS_RAW"
run_leg() {
local direction="$1"
local amount="$2"
bash "$SWAP" --pair=cwusdt-cwusdc --direction="$direction" --amount="$amount" "${DRY_FLAG[@]}"
}
echo "=== cWUSDT/cWUSDC soak (repeat-list=$REPEAT_LIST dryRun=$DRY_RUN) ==="
echo "pool=${POOL_CWUSDT_CWUSDC_MAINNET}"
for ((c = 1; c <= REPEAT_LIST; c++)); do
echo
echo "--- list pass $c / $REPEAT_LIST ---"
for raw in "${AMOUNTS[@]}"; do
raw="$(echo "$raw" | tr -d '[:space:]')"
if ! [[ "$raw" =~ ^[0-9]+$ ]] || (( raw < 1 )); then
echo "[fail] invalid amount token: $raw" >&2
exit 1
fi
echo
echo "leg A base-to-quote amount=$raw"
leg_a_out="$(run_leg base-to-quote "$raw")"
printf '%s\n' "$leg_a_out"
est="$(printf '%s\n' "$leg_a_out" | sed -n 's/^estimatedOut=//p' | head -1)"
if [[ -z "$est" || ! "$est" =~ ^[0-9]+$ ]]; then
echo "[fail] could not parse estimatedOut from leg A output" >&2
exit 1
fi
echo "leg B quote-to-base amount=$est (from leg A estimatedOut)"
leg_b_out="$(run_leg quote-to-base "$est")"
printf '%s\n' "$leg_b_out"
done
done
echo
echo "=== done ==="
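Leg B's input is chained from leg A by scraping the `estimatedOut=` line of the swap helper's key=value output. The parse in isolation, against a sample dry-run payload:

```shell
# Sample of the key=value output emitted by run-mainnet-public-dodo-cw-swap.sh --dry-run.
leg_a_out='pair=cwusdt-cwusdc
amountIn=1000000
estimatedOut=998700
minOut=988713'

# Same extraction the soak script uses: first estimatedOut= line, value only.
est="$(printf '%s\n' "$leg_a_out" | sed -n 's/^estimatedOut=//p' | head -1)"
echo "$est"   # prints 998700
```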


@@ -0,0 +1,408 @@
#!/usr/bin/env bash
set -euo pipefail
# Repeatable swap helper for the public Mainnet DODO PMM cW bootstrap pools.
# Uses the live Mainnet integration, exact-amount approvals, and a reserve-based
# quote fallback because the direct query/read path on these pools currently
# reverts for hosted reads.
#
# Examples:
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdt-usdc --direction=base-to-quote --amount=10000
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdt-usdc --direction=quote-to-base --amount=10000
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cweurc-usdc --direction=quote-to-base --amount=1000
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwgbpc-usdc --direction=base-to-quote --amount=1000
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwaudc-usdc --direction=base-to-quote --amount=1000
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwcadc-usdc --direction=quote-to-base --amount=1000
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwjpyc-usdc --direction=base-to-quote --amount=1000
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwchfc-usdc --direction=quote-to-base --amount=1000
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdc-usdt --direction=base-to-quote --amount=10000 --min-out=9000
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdt-usdc --direction=base-to-quote --amount=5000 --dry-run
# bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdt-cwusdc --direction=base-to-quote --amount=1000000 --dry-run
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
require_cmd python3
PAIR="cwusdt-usdc"
DIRECTION="base-to-quote"
AMOUNT="10000"
MIN_OUT=""
RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
SLIPPAGE_BPS="${SLIPPAGE_BPS:-100}"
DRY_RUN=0
FORCE_DIRECT_DVM=0
ALLOW_DIRECT_DVM_FALLBACK=0
for arg in "$@"; do
case "$arg" in
--pair=*) PAIR="${arg#*=}" ;;
--direction=*) DIRECTION="${arg#*=}" ;;
--amount=*) AMOUNT="${arg#*=}" ;;
--min-out=*) MIN_OUT="${arg#*=}" ;;
--rpc-url=*) RPC_URL="${arg#*=}" ;;
--integration=*) INTEGRATION="${arg#*=}" ;;
--slippage-bps=*) SLIPPAGE_BPS="${arg#*=}" ;;
--dry-run) DRY_RUN=1 ;;
--force-direct-dvm) FORCE_DIRECT_DVM=1 ;;
--allow-direct-dvm-fallback) ALLOW_DIRECT_DVM_FALLBACK=1 ;;
*)
echo "[fail] unknown arg: $arg" >&2
exit 2
;;
esac
done
if [[ -z "$RPC_URL" || -z "$PRIVATE_KEY" || -z "$INTEGRATION" ]]; then
echo "[fail] ETHEREUM_MAINNET_RPC, PRIVATE_KEY, and DODO_PMM_INTEGRATION_MAINNET are required" >&2
exit 1
fi
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
case "$PAIR" in
cwusdt-usdc)
POOL="${POOL_CWUSDT_USDC_MAINNET:-}"
BASE_TOKEN="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
QUOTE_TOKEN="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
;;
cwusdc-usdc)
POOL="${POOL_CWUSDC_USDC_MAINNET:-}"
BASE_TOKEN="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
QUOTE_TOKEN="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
;;
cwusdt-usdt)
POOL="${POOL_CWUSDT_USDT_MAINNET:-}"
BASE_TOKEN="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
QUOTE_TOKEN="0xdAC17F958D2ee523a2206206994597C13D831ec7"
;;
cwusdc-usdt)
POOL="${POOL_CWUSDC_USDT_MAINNET:-}"
BASE_TOKEN="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
QUOTE_TOKEN="0xdAC17F958D2ee523a2206206994597C13D831ec7"
;;
cwusdt-cwusdc)
POOL="${POOL_CWUSDT_CWUSDC_MAINNET:-0xe944b7Cb012A0820c07f54D51e92f0e1C74168DB}"
BASE_TOKEN="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
QUOTE_TOKEN="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
;;
cweurc-usdc)
POOL="${POOL_CWEURC_USDC_MAINNET:-}"
BASE_TOKEN="${CWEURC_MAINNET:-0xD4aEAa8cD3fB41Dc8437FaC7639B6d91B60A5e8d}"
QUOTE_TOKEN="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
;;
cwgbpc-usdc)
POOL="${POOL_CWGBPC_USDC_MAINNET:-}"
BASE_TOKEN="${CWGBPC_MAINNET:-0xc074007dc0bfb384b1cf6426a56287ed23fe4d52}"
QUOTE_TOKEN="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
;;
cwaudc-usdc)
POOL="${POOL_CWAUDC_USDC_MAINNET:-}"
BASE_TOKEN="${CWAUDC_MAINNET:-0x5020Db641B3Fc0dAbBc0c688C845bc4E3699f35F}"
QUOTE_TOKEN="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
;;
cwcadc-usdc)
POOL="${POOL_CWCADC_USDC_MAINNET:-}"
BASE_TOKEN="${CWCADC_MAINNET:-0x209FE32fe7B541751D190ae4e50cd005DcF8EDb4}"
QUOTE_TOKEN="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
;;
cwjpyc-usdc)
POOL="${POOL_CWJPYC_USDC_MAINNET:-}"
BASE_TOKEN="${CWJPYC_MAINNET:-0x07EEd0D7dD40984e47B9D3a3bdded1c536435582}"
QUOTE_TOKEN="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
;;
cwchfc-usdc)
POOL="${POOL_CWCHFC_USDC_MAINNET:-}"
BASE_TOKEN="${CWCHFC_MAINNET:-0x0F91C5E6Ddd46403746aAC970D05d70FFe404780}"
QUOTE_TOKEN="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
;;
*)
echo "[fail] unsupported pair: $PAIR" >&2
exit 2
;;
esac
if [[ -z "$POOL" ]]; then
POOL="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$QUOTE_TOKEN" --rpc-url "$RPC_URL" | awk '{print $1}' || true)"
else
mapped_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$BASE_TOKEN" "$QUOTE_TOKEN" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}' || true)"
if [[ -n "$mapped_pool" && "$mapped_pool" != "0x0000000000000000000000000000000000000000" && "${mapped_pool,,}" != "${POOL,,}" ]]; then
POOL="$mapped_pool"
fi
fi
if [[ -z "$POOL" || "$POOL" == "0x0000000000000000000000000000000000000000" ]]; then
echo "[fail] pool missing for pair: $PAIR" >&2
exit 1
fi
case "$DIRECTION" in
base-to-quote)
TOKEN_IN="$BASE_TOKEN"
TOKEN_OUT="$QUOTE_TOKEN"
DIRECT_METHOD="sellBase(address)(uint256)"
;;
quote-to-base)
TOKEN_IN="$QUOTE_TOKEN"
TOKEN_OUT="$BASE_TOKEN"
DIRECT_METHOD="sellQuote(address)(uint256)"
;;
*)
echo "[fail] unsupported direction: $DIRECTION" >&2
exit 2
;;
esac
reserve_info="$(cast call "$POOL" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL")"
base_reserve="$(printf '%s\n' "$reserve_info" | sed -n '1p' | awk '{print $1}')"
quote_reserve="$(printf '%s\n' "$reserve_info" | sed -n '2p' | awk '{print $1}')"
fee_rate="$(cast call "$POOL" '_LP_FEE_RATE_()(uint256)' --rpc-url "$RPC_URL" | awk '{print $1}')"
mid_price="$(cast call "$POOL" 'getMidPrice()(uint256)' --rpc-url "$RPC_URL" | awk '{print $1}')"
quote_source="reserve_fallback"
query_amount_out=""
pool_surface="reserve_only"
if [[ "$DIRECTION" == "base-to-quote" ]]; then
if query_output="$(cast call "$POOL" 'querySellBase(address,uint256)(uint256,uint256)' "$DEPLOYER" "$AMOUNT" --rpc-url "$RPC_URL" 2>/dev/null)"; then
query_amount_out="$(printf '%s\n' "$query_output" | sed -n '1p' | awk '{print $1}')"
quote_source="pool_query"
fi
else
if query_output="$(cast call "$POOL" 'querySellQuote(address,uint256)(uint256,uint256)' "$DEPLOYER" "$AMOUNT" --rpc-url "$RPC_URL" 2>/dev/null)"; then
query_amount_out="$(printf '%s\n' "$query_output" | sed -n '1p' | awk '{print $1}')"
quote_source="pool_query"
fi
fi
if [[ "$quote_source" == "pool_query" ]]; then
pool_surface="full_quote_surface"
elif [[ -n "$base_reserve" && -n "$quote_reserve" && -n "$mid_price" ]]; then
# This class of mainnet pool exposes reserve/mid-price getters but reverts on querySell*.
# Treat it as integration-only unless proven otherwise.
pool_surface="partial_dodo_surface_integration_only"
fi
quote_info="$(python3 - "$AMOUNT" "$base_reserve" "$quote_reserve" "$fee_rate" "$DIRECTION" "$SLIPPAGE_BPS" "$quote_source" "$query_amount_out" <<'PY'
import sys
amount_in = int(sys.argv[1])
base_reserve = int(sys.argv[2])
quote_reserve = int(sys.argv[3])
fee_rate = int(sys.argv[4])
direction = sys.argv[5]
slippage_bps = int(sys.argv[6])
quote_source = sys.argv[7]
query_amount_out = int(sys.argv[8] or "0")
if quote_source == "pool_query":
out = query_amount_out
else:
net = amount_in * max(0, 10000 - fee_rate) // 10000
if direction == "base-to-quote":
out = (net * quote_reserve) // (base_reserve + net)
else:
out = (net * base_reserve) // (quote_reserve + net)
min_out = out * max(0, 10000 - slippage_bps) // 10000
print(out)
print(min_out)
PY
)"
estimated_out="$(printf '%s\n' "$quote_info" | sed -n '1p')"
fallback_min_out="$(printf '%s\n' "$quote_info" | sed -n '2p')"
if [[ -z "$MIN_OUT" ]]; then
MIN_OUT="$fallback_min_out"
fi
allowance="$(cast call "$TOKEN_IN" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC_URL" | awk '{print $1}')"
balance_in_before="$(cast call "$TOKEN_IN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
balance_out_before="$(cast call "$TOKEN_OUT" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
if (( balance_in_before < AMOUNT )); then
echo "[fail] insufficient input-token balance: have=$balance_in_before need=$AMOUNT" >&2
exit 1
fi
if (( DRY_RUN == 1 )); then
echo "pair=$PAIR"
echo "direction=$DIRECTION"
echo "pool=$POOL"
echo "integration=$INTEGRATION"
echo "poolSurface=$pool_surface"
echo "quoteSource=$quote_source"
echo "midPrice=$mid_price"
echo "lpFeeRate=$fee_rate"
echo "baseReserve=$base_reserve"
echo "quoteReserve=$quote_reserve"
echo "amountIn=$AMOUNT"
echo "estimatedOut=$estimated_out"
echo "minOut=$MIN_OUT"
echo "tokenIn=$TOKEN_IN"
echo "tokenOut=$TOKEN_OUT"
echo "tokenInBalanceBefore=$balance_in_before"
echo "tokenOutBalanceBefore=$balance_out_before"
echo "allowanceBefore=$allowance"
echo "approvalRequired=$(( allowance < AMOUNT ? 1 : 0 ))"
echo "dryRun=1"
exit 0
fi
LOCK_DIR="/tmp/run-mainnet-public-dodo-cw-swap.${PAIR}.${DIRECTION}.lock"
if ! mkdir "$LOCK_DIR" 2>/dev/null; then
echo "[fail] another live $PAIR $DIRECTION swap helper run appears to be in flight; rerun serially to avoid nonce collisions" >&2
exit 1
fi
cleanup_lock() {
rm -f "$LOCK_DIR/integration-estimate.err" 2>/dev/null || true
rmdir "$LOCK_DIR" 2>/dev/null || true
}
trap cleanup_lock EXIT
if (( allowance < AMOUNT )); then
cast send "$TOKEN_IN" \
'approve(address,uint256)(bool)' \
"$INTEGRATION" "$AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY" >/dev/null
fi
tx_mode="integration"
tx_output=""
integration_error=""
transfer_tx_output=""
transfer_tx_hash=""
if (( FORCE_DIRECT_DVM == 1 )); then
integration_error="forced direct DVM fallback (--force-direct-dvm)"
else
if ! cast estimate \
--from "$DEPLOYER" \
"$INTEGRATION" \
'swapExactIn(address,address,uint256,uint256)(uint256)' \
"$POOL" "$TOKEN_IN" "$AMOUNT" "$MIN_OUT" \
--rpc-url "$RPC_URL" >/dev/null 2>"$LOCK_DIR/integration-estimate.err"; then
integration_error="$(tr '\n' ' ' <"$LOCK_DIR/integration-estimate.err" | sed 's/[[:space:]]\+/ /g; s/^ //; s/ $//')"
fi
fi
if [[ -z "$integration_error" ]]; then
if ! tx_output="$(
cast send "$INTEGRATION" \
'swapExactIn(address,address,uint256,uint256)(uint256)' \
"$POOL" "$TOKEN_IN" "$AMOUNT" "$MIN_OUT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY" 2>&1
)"; then
integration_error="$(printf '%s' "$tx_output" | tr '\n' ' ' | sed 's/[[:space:]]\+/ /g; s/^ //; s/ $//')"
tx_output=""
fi
fi
if [[ -n "$integration_error" ]]; then
if [[ "$pool_surface" == "partial_dodo_surface_integration_only" ]]; then
echo "[fail] integration swap failed on a partial DODO surface / integration-only pool; refusing raw direct fallback" >&2
echo "[fail] integration error: $integration_error" >&2
echo "[fail] pool=$POOL base=$BASE_TOKEN quote=$QUOTE_TOKEN tokenIn=$TOKEN_IN direction=$DIRECTION" >&2
exit 1
fi
if (( FORCE_DIRECT_DVM == 0 && ALLOW_DIRECT_DVM_FALLBACK == 0 )); then
echo "[fail] integration swap failed; refusing direct DVM fallback unless --allow-direct-dvm-fallback or --force-direct-dvm is set" >&2
echo "[fail] integration error: $integration_error" >&2
exit 1
fi
echo "[warn] integration swap failed; falling back to direct DVM flow" >&2
echo "[warn] integration error: $integration_error" >&2
transfer_tx_output="$(
cast send "$TOKEN_IN" \
'transfer(address,uint256)(bool)' \
"$POOL" "$AMOUNT" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY"
)"
transfer_tx_hash="$(printf '%s\n' "$transfer_tx_output" | grep -E '^0x[0-9a-fA-F]{64}$' | tail -n1 || true)"
if [[ -z "$transfer_tx_hash" ]]; then
transfer_tx_hash="$(printf '%s\n' "$transfer_tx_output" | grep -E '^transactionHash[[:space:]]+0x[0-9a-fA-F]{64}$' | awk '{print $2}' | tail -n1 || true)"
fi
if [[ -z "$transfer_tx_hash" ]]; then
echo "[fail] could not parse transfer transaction hash from cast output" >&2
printf '%s\n' "$transfer_tx_output" >&2
exit 1
fi
cast receipt "$transfer_tx_hash" --rpc-url "$RPC_URL" >/dev/null
if ! tx_output="$(
cast send "$POOL" \
"$DIRECT_METHOD" \
"$DEPLOYER" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY" 2>&1
)"; then
echo "[fail] direct DVM fallback failed" >&2
printf '%s\n' "$tx_output" >&2
exit 1
fi
tx_mode="direct_dvm"
fi
tx_hash="$(printf '%s\n' "$tx_output" | grep -E '^0x[0-9a-fA-F]{64}$' | tail -n1 || true)"
if [[ -z "$tx_hash" ]]; then
tx_hash="$(printf '%s\n' "$tx_output" | grep -E '^transactionHash[[:space:]]+0x[0-9a-fA-F]{64}$' | awk '{print $2}' | tail -n1 || true)"
fi
if [[ -z "$tx_hash" ]]; then
echo "[fail] could not parse transaction hash from cast output" >&2
printf '%s\n' "$tx_output" >&2
exit 1
fi
balance_in_after="$(cast call "$TOKEN_IN" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
balance_out_after="$(cast call "$TOKEN_OUT" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC_URL" | awk '{print $1}')"
amount_out_delta=$((balance_out_after - balance_out_before))
if (( amount_out_delta < MIN_OUT )); then
echo "[fail] observed output below minOut after ${tx_mode}: got=$amount_out_delta minOut=$MIN_OUT" >&2
exit 1
fi
echo "pair=$PAIR"
echo "direction=$DIRECTION"
echo "pool=$POOL"
echo "integration=$INTEGRATION"
echo "txMode=$tx_mode"
echo "poolSurface=$pool_surface"
if [[ -n "$transfer_tx_hash" ]]; then
echo "transferTxHash=$transfer_tx_hash"
fi
echo "quoteSource=$quote_source"
echo "midPrice=$mid_price"
echo "lpFeeRate=$fee_rate"
echo "baseReserve=$base_reserve"
echo "quoteReserve=$quote_reserve"
echo "amountIn=$AMOUNT"
echo "estimatedOut=$estimated_out"
echo "minOut=$MIN_OUT"
echo "tokenIn=$TOKEN_IN"
echo "tokenOut=$TOKEN_OUT"
echo "txHash=$tx_hash"
echo "tokenInBalanceBefore=$balance_in_before"
echo "tokenInBalanceAfter=$balance_in_after"
echo "tokenOutBalanceBefore=$balance_out_before"
echo "tokenOutBalanceAfter=$balance_out_after"
echo "observedAmountOut=$amount_out_delta"
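When `querySell*` reverts, the script falls back to a fee-adjusted constant-product estimate and derives `minOut` from `SLIPPAGE_BPS`. The same integer arithmetic, isolated into a sketch (`quote_fallback` is illustrative, not repo code):

```shell
# Sketch of the reserve-fallback quote math: bps fee haircut, then
# constant-product output, then bps slippage haircut for minOut.
quote_fallback() {
  python3 - "$@" <<'PY'
import sys
# args: amount_in base_reserve quote_reserve fee_bps slippage_bps direction
amount_in, base_r, quote_r, fee_bps, slip_bps = map(int, sys.argv[1:6])
direction = sys.argv[6]
net = amount_in * max(0, 10000 - fee_bps) // 10000
if direction == "base-to-quote":
    out = net * quote_r // (base_r + net)
else:
    out = net * base_r // (quote_r + net)
print(out, out * max(0, 10000 - slip_bps) // 10000)
PY
}

# 1.0 unit in (6 dp raw), balanced 1e12 reserves, 3 bps LP fee, 100 bps slippage:
quote_fallback 1000000 1000000000000 1000000000000 3 100 base-to-quote
# prints: 999699 989702
```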


@@ -0,0 +1,164 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
require_cmd curl
require_cmd jq
require_cmd python3
RPC_URL="${RPC_URL_138:-https://rpc-http-pub.d-bis.org}"
BASE_URL="${BASE_URL:-https://explorer.d-bis.org/token-aggregation}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
if [[ -z "${PRIVATE_KEY}" ]]; then
echo "[fail] PRIVATE_KEY is required" >&2
exit 1
fi
SIGNER="$(cast wallet address --private-key "${PRIVATE_KEY}")"
USDT="${USDT_ADDRESS_138:-0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1}"
WETH="${WETH9_ADDRESS_138:-0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2}"
ROUTER="${ENHANCED_SWAP_ROUTER_V2_ADDRESS:-0xF1c93F54A5C2fc0d7766Ccb0Ad8f157DFB4C99Ce}"
APPROVE_TOTAL_RAW="${APPROVE_TOTAL_RAW:-2000000000}" # 2,000 USDT
if (( $# > 0 )); then
SIZES_USD=("$@")
else
SIZES_USD=(10 50 100 250 500 1000)
fi
bal_of() {
cast call "$1" 'balanceOf(address)(uint256)' "$SIGNER" --rpc-url "$RPC_URL" | awk '{print $1}'
}
fmt_amount() {
python3 - "$1" "$2" <<'PY'
from decimal import Decimal
import sys
raw = Decimal(sys.argv[1])
decimals = Decimal(10) ** Decimal(sys.argv[2])
print(raw / decimals)
PY
}
ensure_allowance() {
local current
current="$(cast call "$USDT" 'allowance(address,address)(uint256)' "$SIGNER" "$ROUTER" --rpc-url "$RPC_URL" | awk '{print $1}')"
if [[ "$current" =~ ^[0-9]+$ ]] && (( current >= APPROVE_TOTAL_RAW )); then
echo "[ok] router allowance already sufficient: $current"
return
fi
echo "[info] approving router for ${APPROVE_TOTAL_RAW} raw USDT"
cast send "$USDT" \
'approve(address,uint256)' \
"$ROUTER" "$APPROVE_TOTAL_RAW" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY" \
--legacy \
--json | jq -r '.transactionHash'
}
build_plan() {
local amount_in_raw="$1"
curl -fsS \
-H 'content-type: application/json' \
-X POST \
--data "{\"sourceChainId\":138,\"tokenIn\":\"${USDT}\",\"tokenOut\":\"${WETH}\",\"amountIn\":\"${amount_in_raw}\"}" \
"${BASE_URL}/api/v2/routes/internal-execution-plan"
}
run_swap() {
local usd="$1"
local amount_in_raw=$(( usd * 1000000 ))
local plan provider contract calldata estimated min_out pre_usdt pre_weth tx_hash status post_usdt post_weth actual_usdt actual_weth
echo
echo "== ${usd} USDT -> WETH =="
plan="$(build_plan "$amount_in_raw")"
provider="$(printf '%s\n' "$plan" | jq -r '.plannerResponse.legs[0].provider // .plannerResponse.routePlan.legs[0].provider // "unknown"')"
contract="$(printf '%s\n' "$plan" | jq -r '.execution.contractAddress // .execution.target')"
calldata="$(printf '%s\n' "$plan" | jq -r '.execution.encodedCalldata')"
estimated="$(printf '%s\n' "$plan" | jq -r '.plannerResponse.estimatedAmountOut')"
min_out="$(printf '%s\n' "$plan" | jq -r '.plannerResponse.minAmountOut')"
[[ -n "$contract" && "$contract" != "null" ]] || {
echo "[fail] planner did not return contract address" >&2
exit 1
}
[[ -n "$calldata" && "$calldata" != "null" ]] || {
echo "[fail] planner did not return executable calldata" >&2
exit 1
}
pre_usdt="$(bal_of "$USDT")"
pre_weth="$(bal_of "$WETH")"
tx_hash="$(
cast send "$contract" \
"$calldata" \
--rpc-url "$RPC_URL" \
--private-key "$PRIVATE_KEY" \
--legacy \
--json | jq -r '.transactionHash'
)"
status="$(cast receipt "$tx_hash" --rpc-url "$RPC_URL" --json | jq -r '.status')"
if [[ "$status" != "1" && "$status" != "0x1" ]]; then
echo "[fail] tx reverted: $tx_hash" >&2
exit 1
fi
post_usdt="$(bal_of "$USDT")"
post_weth="$(bal_of "$WETH")"
actual_usdt=$(( pre_usdt - post_usdt ))
actual_weth=$(( post_weth - pre_weth ))
jq -n \
--arg usd "$usd" \
--arg provider "$provider" \
--arg contract "$contract" \
--arg tx_hash "$tx_hash" \
--arg amount_in_raw "$amount_in_raw" \
--arg estimated "$estimated" \
--arg min_out "$min_out" \
--arg actual_usdt "$actual_usdt" \
--arg actual_weth "$actual_weth" \
--arg actual_usdt_fmt "$(fmt_amount "$actual_usdt" 6)" \
--arg actual_weth_fmt "$(fmt_amount "$actual_weth" 18)" \
'{
usdNotional: ($usd | tonumber),
provider: $provider,
contract: $contract,
txHash: $tx_hash,
amountInRaw: $amount_in_raw,
estimatedAmountOutRaw: $estimated,
minAmountOutRaw: $min_out,
actualAmountInRaw: $actual_usdt,
actualAmountOutRaw: $actual_weth,
actualAmountIn: $actual_usdt_fmt,
actualAmountOut: $actual_weth_fmt
}'
}
echo "Signer: ${SIGNER}"
echo "RPC: ${RPC_URL}"
echo "Base: ${BASE_URL}"
ensure_allowance
for usd in "${SIZES_USD[@]}"; do
run_swap "$usd"
done
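Reported fill amounts come from pre/post `balanceOf` deltas rather than from the planner's estimate, so the JSON reflects what actually moved. The delta arithmetic in isolation, with hypothetical balances:

```shell
# Hypothetical pre/post balances; the live script reads these via balanceOf.
pre_usdt=500000000            # 500 USDT face at 6 dp
post_usdt=400000000           # 400 USDT face left after the swap
pre_weth=0
post_weth=31250000000000000   # 0.03125 WETH at 18 dp

actual_usdt=$(( pre_usdt - post_usdt ))
actual_weth=$(( post_weth - pre_weth ))
echo "spent_raw=$actual_usdt received_raw=$actual_weth"
# prints: spent_raw=100000000 received_raw=31250000000000000
```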


@@ -5,7 +5,8 @@
# Addresses are from docs/11-references/CONTRACT_ADDRESSES_REFERENCE.md and TOKENS_AND_NETWORKS_MINTABLE_TO_DEPLOYER.md.
# Usage: ./scripts/deployment/set-dotenv-c-tokens-and-register-gru.sh [--no-register] [--register-v2]
# --no-register Only update .env; do not run RegisterGRUCompliantTokens.
-# --register-v2 After V1 registration, also register staged V2 cUSDT/cUSDC using RegisterGRUCompliantTokensV2.
+# --register-v2 After V1 registration, also register or recover the deployed forward-canonical V2 cUSDT/cUSDC
+# addresses using RegisterGRUCompliantTokensV2.
#
# Note: RegisterGRUCompliantTokens requires (1) broadcast account has REGISTRAR_ROLE, and (2) the
# UniversalAssetRegistry *implementation* (not just proxy) exposes registerGRUCompliantAsset.
@@ -50,10 +51,10 @@ set_env_var "COMPLIANT_USDT" "0x93E66202A11B1772E55407B32B44e5Cd8eda7f22"
set_env_var "COMPLIANT_USDC" "0xf22258f57794CC8E06237084b353Ab30fFfa640b"
set_env_var "CUSDT_ADDRESS_138" "0x93E66202A11B1772E55407B32B44e5Cd8eda7f22"
set_env_var "CUSDC_ADDRESS_138" "0xf22258f57794CC8E06237084b353Ab30fFfa640b"
-set_env_var "COMPLIANT_USDT_V2" "0x8d342d321DdEe97D0c5011DAF8ca0B59DA617D29"
-set_env_var "COMPLIANT_USDC_V2" "0x1ac3F4942a71E86A9682D91837E1E71b7BACdF99"
-set_env_var "CUSDT_V2_ADDRESS_138" "0x8d342d321DdEe97D0c5011DAF8ca0B59DA617D29"
-set_env_var "CUSDC_V2_ADDRESS_138" "0x1ac3F4942a71E86A9682D91837E1E71b7BACdF99"
+set_env_var "COMPLIANT_USDT_V2" "0x9FBfab33882Efe0038DAa608185718b772EE5660"
+set_env_var "COMPLIANT_USDC_V2" "0x219522c60e83dEe01FC5b0329d6fA8fD84b9D13d"
+set_env_var "CUSDT_V2_ADDRESS_138" "0x9FBfab33882Efe0038DAa608185718b772EE5660"
+set_env_var "CUSDC_V2_ADDRESS_138" "0x219522c60e83dEe01FC5b0329d6fA8fD84b9D13d"
# cEURC (TOKENS_AND_NETWORKS_MINTABLE_TO_DEPLOYER)
set_env_var "CEURC_ADDRESS_138" "0x8085961F9cF02b4d800A3c6d386D31da4B34266a"
@@ -77,7 +78,7 @@ set_env_var "WETH10" "0xf4BB2e28688e89fCcE3c0580D37d36A7672E8A9f"
set_env_var "LINK_TOKEN" "0xb7721dD53A8c629d9f1Ba31a5819AFe250002b03"
set_env_var "CCIP_FEE_TOKEN" "0xb7721dD53A8c629d9f1Ba31a5819AFe250002b03"
-echo "Done. Set: COMPLIANT_USDT, COMPLIANT_USDC, COMPLIANT_USDT_V2, COMPLIANT_USDC_V2, all C*_ADDRESS_138 (including staged V2 addresses), UNIVERSAL_ASSET_REGISTRY, WETH9, WETH10, LINK_TOKEN, CCIP_FEE_TOKEN."
+echo "Done. Set: COMPLIANT_USDT, COMPLIANT_USDC, COMPLIANT_USDT_V2, COMPLIANT_USDC_V2, all C*_ADDRESS_138 (including forward-canonical V2 addresses), UNIVERSAL_ASSET_REGISTRY, WETH9, WETH10, LINK_TOKEN, CCIP_FEE_TOKEN."
echo "All c* on explorer.d-bis.org/tokens must be GRU-registered. See docs/04-configuration/EXPLORER_TOKENS_GRU_POLICY.md."
if [[ "$RUN_REGISTER" -eq 0 ]]; then
@@ -97,12 +98,12 @@ export RPC_URL_138="$RPC"
if [[ "$RUN_REGISTER_V2" -eq 1 ]]; then
echo ""
-echo "=== Registering staged c* V2 inventory as GRU (RegisterGRUCompliantTokensV2) ==="
+echo "=== Registering deployed c* V2 inventory as GRU (RegisterGRUCompliantTokensV2) ==="
(cd "$SMOM" && forge script script/deploy/RegisterGRUCompliantTokensV2.s.sol \
--rpc-url "$RPC_URL_138" --broadcast --private-key "$PRIVATE_KEY" --with-gas-price 1000000000)
else
echo ""
echo "V2 note: COMPLIANT_USDT_V2 / COMPLIANT_USDC_V2 are staged in .env but not auto-registered."
echo "Use --register-v2 once downstream registry consumers are ready for version-aware symbols."
echo "V2 note: COMPLIANT_USDT_V2 / COMPLIANT_USDC_V2 are now the forward-canonical GRU V2 addresses."
echo "Use --register-v2 only for explicit recovery or if a fresh deployment has not already staged itself through DeployAndStageCompliantFiatTokensV2ForChain."
fi
echo "=== Done. ==="

@@ -27,12 +27,12 @@ echo "=== Set missing dotenv (Chain 138 / DODO PMM) ==="
echo " Target: $ENV_FILE"
echo ""
append_if_missing "DODO_PMM_PROVIDER_ADDRESS" "0x5CAe6Ce155b7f08D3a956F5Dc82fC9945f29B381"
append_if_missing "DODO_PMM_INTEGRATION_ADDRESS" "0x5BDc62f1ae7D630c37A8B363a1d49845356Ee72d"
append_if_missing "POOL_CUSDTCUSDC" "0x9fcB06Aa1FD5215DC0E91Fd098aeff4B62fEa5C8"
append_if_missing "POOL_CUSDTUSDT" "0x6fc60DEDc92a2047062294488539992710b99D71"
append_if_missing "POOL_CUSDCUSDC" "0x9f74Be42725f2Aa072a9E0CdCce0E7203C510263"
append_if_missing "CHAIN_138_DODO_PMM_INTEGRATION" "0x5BDc62f1ae7D630c37A8B363a1d49845356Ee72d"
append_if_missing "DODO_PMM_PROVIDER_ADDRESS" "0x3f729632E9553EBacCdE2e9b4c8F2B285b014F2e"
append_if_missing "DODO_PMM_INTEGRATION_ADDRESS" "0x86ADA6Ef91A3B450F89f2b751e93B1b7A3218895"
append_if_missing "POOL_CUSDTCUSDC" "0x9e89bAe009adf128782E19e8341996c596ac40dC"
append_if_missing "POOL_CUSDTUSDT" "0x866Cb44b59303d8dc5f4F9E3E7A8e8b0bf238d66"
append_if_missing "POOL_CUSDCUSDC" "0xc39B7D0F40838cbFb54649d327f49a6DAC964062"
append_if_missing "CHAIN_138_DODO_PMM_INTEGRATION" "0x86ADA6Ef91A3B450F89f2b751e93B1b7A3218895"
echo ""
echo "Done. Verify: grep -E 'DODO_PMM|POOL_' $ENV_FILE"

@@ -0,0 +1,213 @@
#!/usr/bin/env bash
# Randomized PMM swaps on Chain 138 against official DODO DVM pools (two-step pattern).
#
# Official DVM uses sellBase(address)/sellQuote(address) — excess token balance vs reserve is
# swapped and sent to `to`. Flow: ERC20.transfer(pool, amountIn) then pool.sellBase(signer)
# or sellQuote(signer). The deployed DODOPMMIntegration at 0x86ADA6… historically called the
# wrong ABI (sellBase(uint256)) until fixed in smom-dbis-138; this script talks to pools
# directly, so swaps work without waiting for an integration redeploy.
#
# Scope (default): three public stable DVM pools (cUSDT/cUSDC, cUSDT/USDT, cUSDC/USDC).
#
# Optional: TRY_XAU=1 — also iterates six funded XAU PMM pools from ADDRESS_MATRIX. On current
# chain, those addresses typically revert querySellBase/Quote (legacy / non-DVM PMM). The script
# probes each pool and skips with a clear message if quotes fail. Probe manually:
# ./scripts/verify/probe-pmm-pool-dvm-chain138.sh <pool> [RPC]
#
# After fixing DODOPMMIntegration in smom-dbis-138, redeploy and register pools:
# cd smom-dbis-138 && forge script script/dex/DeployDODOPMMIntegration.s.sol --rpc-url $RPC_URL_138 --broadcast
# Then call importExistingPool(...) per pool on the new integration (admin).
#
# Constraints: USD_CAP (default 1000) for 6-dec stable notionals; amount <= balance; random size.
# GOLD_USD_EST (default 4000) used only if TRY_XAU hits a DVM-compatible XAU pool later.
#
# Usage:
# RPC_URL_138=https://rpc-http-pub.d-bis.org ./scripts/deployment/swap-random-registered-pmm-pools-chain138.sh --dry-run
# TRY_XAU=1 ./scripts/deployment/swap-random-registered-pmm-pools-chain138.sh --dry-run
# ./scripts/deployment/swap-random-registered-pmm-pools-chain138.sh --execute
#
# Requires: cast, python3, PRIVATE_KEY in env (smom-dbis-138/.env via load-project-env).
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
_SAVED_RPC_URL_138="${RPC_URL_138:-}"
# shellcheck disable=SC1091
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh"
[[ -n "$_SAVED_RPC_URL_138" ]] && RPC_URL_138="$_SAVED_RPC_URL_138"
export RPC_URL_138
MODE="${1:-}"
if [[ "$MODE" != "--dry-run" && "$MODE" != "--execute" ]]; then
echo "Usage: $0 --dry-run | --execute" >&2
exit 1
fi
RPC="${RPC_URL_138:-https://rpc-http-pub.d-bis.org}"
USD_CAP="${USD_CAP:-1000}"
TRY_XAU="${TRY_XAU:-0}"
GOLD_USD_EST="${GOLD_USD_EST:-4000}"
MAX_STABLE_RAW=$((USD_CAP * 1000000))
if [[ -z "${PRIVATE_KEY:-}" ]]; then
if [[ "$MODE" == "--dry-run" ]]; then
FROM="${DEPLOYER_ADDRESS:-0x4A666F96fC8764181194447A7dFdb7d471b301C8}"
echo "WARN: PRIVATE_KEY unset — dry-run only; FROM=$FROM" >&2
else
echo "ERROR: PRIVATE_KEY required for --execute" >&2
exit 1
fi
else
FROM=$(cast wallet address --private-key "$PRIVATE_KEY")
fi
cast_u1() { cast "$@" 2>/dev/null | awk '{print $1; exit}'; }
bal_of() { cast_u1 call "$1" "balanceOf(address)(uint256)" "$FROM" --rpc-url "$RPC"; }
echo "Signer: $FROM"
echo "RPC: $RPC"
echo "Mode: $MODE (USD_CAP=$USD_CAP TRY_XAU=$TRY_XAU)"
echo ""
POOL_ROWS=(
"0x9e89bAe009adf128782E19e8341996c596ac40dC|0x93E66202A11B1772E55407B32B44e5Cd8eda7f22|0xf22258f57794CC8E06237084b353Ab30fFfa640b|cUSDT/cUSDC"
"0x866Cb44b59303d8dc5f4F9E3E7A8e8b0bf238d66|0x93E66202A11B1772E55407B32B44e5Cd8eda7f22|0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1|cUSDT/USDT"
"0xc39B7D0F40838cbFb54649d327f49a6DAC964062|0xf22258f57794CC8E06237084b353Ab30fFfa640b|0x71D6687F38b93CCad569Fa6352c876eea967201b|cUSDC/USDC"
)
CXAUC="0x290E52a8819A4fbD0714E517225429aA2B70EC6b"
CEURT="0xdf4b71c61E5912712C1Bdd451416B9aC26949d72"
XAU_POOL_ROWS=(
"0x1AA55E2001E5651349AfF5A63FD7A7Ae44f0F1b0|0x93E66202A11B1772E55407B32B44e5Cd8eda7f22|${CXAUC}|cUSDT/cXAUC public"
"0xEA9Ac6357CaCB42a83b9082B870610363B177cBa|0xf22258f57794CC8E06237084b353Ab30fFfa640b|${CXAUC}|cUSDC/cXAUC public"
"0xbA99bc1eAAC164569d5AcA96C806934DDaF970Cf|${CEURT}|${CXAUC}|cEURT/cXAUC public"
"0x94316511621430423a2cff0C036902BAB4aA70c2|0x93E66202A11B1772E55407B32B44e5Cd8eda7f22|${CXAUC}|cUSDT/cXAUC private"
"0x7867D58567948e5b9908F1057055Ee4440de0851|0xf22258f57794CC8E06237084b353Ab30fFfa640b|${CXAUC}|cUSDC/cXAUC private"
"0x505403093826D494983A93b43Aa0B8601078A44e|${CEURT}|${CXAUC}|cEURT/cXAUC private"
)
pick_amount() {
local max_in="$1"
local min_lo="${2:-1000000}"
python3 -c "
import random
max_in = int('$max_in')
min_lo = int('$min_lo')
if max_in < min_lo:
print(0)
else:
    # lo = min(min_lo, max(min_lo, x)) always collapses to min_lo, and
    # hi = max(min_lo, max_in) is just max_in here, so sample directly.
    print(random.randint(min_lo, max_in))
"
}
max_notional_raw() {
local token="$1"
if [[ "${token,,}" == "${CXAUC,,}" ]]; then
python3 -c "print(int(float('$USD_CAP') / float('$GOLD_USD_EST') * 10**6))"
return
fi
echo "$MAX_STABLE_RAW"
}
dvm_quotes_ok() {
local pool="$1"
cast call "$pool" "querySellBase(address,uint256)(uint256,uint256)" "$FROM" 1000000 --rpc-url "$RPC" >/dev/null 2>&1 \
|| cast call "$pool" "querySellQuote(address,uint256)(uint256,uint256)" "$FROM" 1000000 --rpc-url "$RPC" >/dev/null 2>&1
}
run_pool_swap() {
local pool="$1" base="$2" quote="$3" label="$4"
echo "----------------------------------------"
echo "Pool: $label"
echo " $pool"
if ! dvm_quotes_ok "$pool"; then
echo " SKIP: DVM quote probe failed (non-DVM pool or incompatible PMM). See scripts/verify/probe-pmm-pool-dvm-chain138.sh"
return 0
fi
local dir=$((RANDOM % 2))
local token_in dir_lab
if [[ "$dir" -eq 0 ]]; then
token_in="$base"
dir_lab="sellBase (pay base → receive quote)"
else
token_in="$quote"
dir_lab="sellQuote (pay quote → receive base)"
fi
local cap_raw min_lo
cap_raw=$(max_notional_raw "$token_in")
[[ "${token_in,,}" == "${CXAUC,,}" ]] && min_lo=10000 || min_lo=1000000
local bal
bal=$(bal_of "$token_in")
local max_in
max_in=$(python3 -c "print(min(int('$bal'), int('$cap_raw')))")
if [[ "$max_in" -lt "$min_lo" ]]; then
echo " SKIP: balance or cap too low (raw bal=$bal cap=$cap_raw min_lo=$min_lo)"
return 0
fi
local amt
amt=$(pick_amount "$max_in" "$min_lo")
if [[ -z "$amt" || "$amt" == "0" ]]; then
echo " SKIP: could not pick amount"
return 0
fi
local exp
if [[ "$dir" -eq 0 ]]; then
exp=$(cast_u1 call "$pool" "querySellBase(address,uint256)(uint256,uint256)" "$FROM" "$amt" --rpc-url "$RPC")
else
exp=$(cast_u1 call "$pool" "querySellQuote(address,uint256)(uint256,uint256)" "$FROM" "$amt" --rpc-url "$RPC")
fi
echo " $dir_lab"
echo " tokenIn: $token_in amount: $amt expectedOut: $exp"
if [[ "$MODE" == "--dry-run" ]]; then
echo " DRY-RUN: would transfer then sell"
return 0
fi
echo " tx1: transfer → pool"
cast send --rpc-url "$RPC" --private-key "$PRIVATE_KEY" "$token_in" \
"transfer(address,uint256)" "$pool" "$amt" 2>&1 | tail -n 4
if [[ "$dir" -eq 0 ]]; then
echo " tx2: sellBase($FROM)"
cast send --rpc-url "$RPC" --private-key "$PRIVATE_KEY" "$pool" \
"sellBase(address)" "$FROM" 2>&1 | tail -n 4
else
echo " tx2: sellQuote($FROM)"
cast send --rpc-url "$RPC" --private-key "$PRIVATE_KEY" "$pool" \
"sellQuote(address)" "$FROM" 2>&1 | tail -n 4
fi
}
echo "=== Stable DVM pools ==="
echo ""
for row in "${POOL_ROWS[@]}"; do
IFS='|' read -r pool base quote label <<< "$row"
run_pool_swap "$pool" "$base" "$quote" "$label"
done
if [[ "$TRY_XAU" == "1" ]]; then
echo ""
echo "=== XAU pools (TRY_XAU=1; skipped unless DVM quote probe passes) ==="
echo ""
for row in "${XAU_POOL_ROWS[@]}"; do
IFS='|' read -r pool base quote label <<< "$row"
run_pool_swap "$pool" "$base" "$quote" "$label"
done
fi
echo ""
echo "Done."
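
For a quick sanity check of the sizing above, the USD_CAP and GOLD_USD_EST caps reduce to the following raw-unit math (a standalone sketch mirroring the script defaults; the 6-decimal assumption matches the stable and cXAUC tokens used here):

```shell
# Raw-unit caps for the script defaults (assumes 6-decimal tokens).
usd_cap=1000        # USD_CAP default
gold_usd_est=4000   # GOLD_USD_EST default (rough XAU/USD estimate only)

# Stable cap: 1000 USD at 6 dp -> 1,000,000,000 raw units.
max_stable_raw=$((usd_cap * 1000000))

# XAU cap: USD_CAP / GOLD_USD_EST ounces at 6 dp (0.25 oz -> 250000 raw).
max_xau_raw=$(python3 -c "print(int($usd_cap / $gold_usd_est * 10**6))")

echo "stable_cap_raw=$max_stable_raw xau_cap_raw=$max_xau_raw"
```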

@@ -0,0 +1,99 @@
#!/usr/bin/env bash
# Build the info.defi-oracle.io SPA locally and sync it into the dedicated web LXC.
#
# Default target: VMID 2410 on r630-01 (nginx-only CT). Do not deploy to VMID 2400 — that is ThirdWeb RPC.
# Provision once: scripts/deployment/provision-info-defi-oracle-web-lxc.sh
#
# Override: PROXMOX_HOST, INFO_DEFI_ORACLE_VMID (or INFO_DEFI_ORACLE_WEB_VMID from ip-addresses.conf).
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
source "$PROJECT_ROOT/config/ip-addresses.conf" 2>/dev/null || true
PROXMOX_HOST="${PROXMOX_HOST:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
VMID="${INFO_DEFI_ORACLE_VMID:-${INFO_DEFI_ORACLE_WEB_VMID:-2410}}"
APP_DIR="${INFO_DEFI_ORACLE_WEB_ROOT:-/var/www/info.defi-oracle.io/html}"
SITE_FILE="${INFO_DEFI_ORACLE_NGINX_SITE:-/etc/nginx/sites-available/info-defi-oracle}"
NGINX_SRC="${PROJECT_ROOT}/config/nginx/info-defi-oracle-io.site.conf"
SSH_OPTS="-o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new"
DRY_RUN=false
if [[ "${1:-}" == "--dry-run" ]]; then
DRY_RUN=true
fi
if ! command -v pnpm >/dev/null 2>&1; then
echo "pnpm is required." >&2
exit 1
fi
if ! command -v tar >/dev/null 2>&1; then
echo "tar is required." >&2
exit 1
fi
if [[ ! -f "$NGINX_SRC" ]]; then
echo "ERROR: Missing $NGINX_SRC" >&2
exit 1
fi
TMP_TGZ="${TMPDIR:-/tmp}/info-defi-oracle-sync-$$.tgz"
REMOTE_TGZ="/tmp/info-defi-oracle-sync-$$.tgz"
REMOTE_NGINX="/tmp/info-defi-oracle-io-site-$$.conf"
CT_TGZ="/tmp/info-defi-oracle-sync.tgz"
cleanup() { rm -f "$TMP_TGZ"; }
trap cleanup EXIT
if $DRY_RUN; then
echo "[DRY-RUN] pnpm --filter info-defi-oracle-138 build"
echo "[DRY-RUN] tar info-defi-oracle-138/dist -> $TMP_TGZ"
echo "[DRY-RUN] scp dist tgz + nginx site -> root@${PROXMOX_HOST}"
echo "[DRY-RUN] pct push ${VMID} -> ${CT_TGZ} + site to ${SITE_FILE}"
echo "[DRY-RUN] extract to ${APP_DIR}"
exit 0
fi
echo "Building info-defi-oracle-138..."
pnpm --filter info-defi-oracle-138 build
echo "Packaging dist/..."
tar czf "$TMP_TGZ" -C "$PROJECT_ROOT/info-defi-oracle-138/dist" .
echo "Copying package and nginx site to Proxmox host ${PROXMOX_HOST}..."
scp $SSH_OPTS "$TMP_TGZ" "root@${PROXMOX_HOST}:${REMOTE_TGZ}"
scp $SSH_OPTS "$NGINX_SRC" "root@${PROXMOX_HOST}:${REMOTE_NGINX}"
echo "Installing site into VMID ${VMID}..."
ssh $SSH_OPTS "root@${PROXMOX_HOST}" bash -s "$VMID" "$REMOTE_TGZ" "$CT_TGZ" "$APP_DIR" "$SITE_FILE" "$REMOTE_NGINX" <<'REMOTE'
set -euo pipefail
VMID="$1"
REMOTE_TGZ="$2"
CT_TGZ="$3"
APP_DIR="$4"
SITE_FILE="$5"
REMOTE_NGINX="$6"
pct push "$VMID" "$REMOTE_TGZ" "$CT_TGZ"
rm -f "$REMOTE_TGZ"
pct exec "$VMID" -- bash -lc "mkdir -p '$APP_DIR' && rm -rf '$APP_DIR'/* && tar xzf '$CT_TGZ' -C '$APP_DIR' && rm -f '$CT_TGZ'"
pct push "$VMID" "$REMOTE_NGINX" "$SITE_FILE"
rm -f "$REMOTE_NGINX"
pct exec "$VMID" -- bash -lc "ln -sf '$SITE_FILE' /etc/nginx/sites-enabled/info-defi-oracle"
pct exec "$VMID" -- nginx -t
pct exec "$VMID" -- systemctl reload nginx
sleep 1
pct exec "$VMID" -- curl -fsS -H 'Host: info.defi-oracle.io' http://127.0.0.1/health >/dev/null
pct exec "$VMID" -- bash -lc "curl -fsS -H 'Host: info.defi-oracle.io' -o /tmp/info-ta-networks.json 'http://127.0.0.1/token-aggregation/api/v1/networks?refresh=1' && grep -q '\"networks\"' /tmp/info-ta-networks.json && rm -f /tmp/info-ta-networks.json"
REMOTE
echo "Done. Target: VMID ${VMID} (${IP_INFO_DEFI_ORACLE_WEB:-LAN IP from provision script}). Next steps:"
echo " 1. NPMplus: point info.defi-oracle.io → http://${IP_INFO_DEFI_ORACLE_WEB:-192.168.11.218}:80 (scripts/nginx-proxy-manager/update-npmplus-proxy-hosts-api.sh)"
echo " 2. Optional tunnel/DNS: scripts/cloudflare/set-info-defi-oracle-dns-to-vmid2400-tunnel.sh"
echo " 3. pnpm run verify:info-defi-oracle-public"
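
The packaging step above deliberately tars the contents of dist/ (`tar -C dir .`) rather than the directory itself, so extraction lands files directly in the web root instead of a nested dist/. A local round-trip of that pattern, with a hypothetical file name and no Proxmox involved:

```shell
# Round-trip of the tar -C <dir> . pack/extract pattern used above.
src="$(mktemp -d)"; dst="$(mktemp -d)"; tgz="$(mktemp)"
echo "hello" > "$src/index.html"

tar czf "$tgz" -C "$src" .   # package directory contents, not the directory
tar xzf "$tgz" -C "$dst"     # mirrors the extract inside the container

content="$(cat "$dst/index.html")"
echo "$content"
rm -rf "$src" "$dst" "$tgz"
```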

@@ -0,0 +1,3 @@
#!/usr/bin/env bash
# Alias for discoverability — syncs the info.defi-oracle.io SPA to the dedicated web LXC (default VMID 2410).
exec "$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/sync-info-defi-oracle-to-vmid2400.sh" "$@"

@@ -0,0 +1,79 @@
#!/usr/bin/env bash
# Sync sites/omdnl-org/public into the dedicated nginx LXC (default VMID 10203).
#
# Provision once: scripts/deployment/provision-omdnl-org-web-lxc.sh
#
# Usage: bash scripts/deployment/sync-omdnl-org-static-to-ct.sh [--dry-run]
# Env: PROXMOX_HOST, OMDNL_ORG_WEB_VMID, IP_OMDNL_ORG_WEB (from config/ip-addresses.conf)
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
PROXMOX_HOST="${PROXMOX_HOST:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
VMID="${OMDNL_ORG_WEB_VMID:-10203}"
APP_DIR="${OMDNL_ORG_WEB_ROOT:-/var/www/omdnl.org/html}"
SITE_FILE="${OMDNL_ORG_NGINX_SITE:-/etc/nginx/sites-available/omdnl-org}"
NGINX_SRC="${PROJECT_ROOT}/config/nginx/omdnl-org.site.conf"
STATIC_SRC="${PROJECT_ROOT}/sites/omdnl-org/public"
SSH_OPTS="-o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new"
DRY_RUN=false
[[ "${1:-}" == "--dry-run" ]] && DRY_RUN=true
if [[ ! -d "$STATIC_SRC" ]]; then
echo "ERROR: Missing static directory $STATIC_SRC" >&2
exit 1
fi
if [[ ! -f "$NGINX_SRC" ]]; then
echo "ERROR: Missing $NGINX_SRC" >&2
exit 1
fi
TMP_TGZ="${TMPDIR:-/tmp}/omdnl-org-sync-$$.tgz"
REMOTE_TGZ="/tmp/omdnl-org-sync-$$.tgz"
REMOTE_NGINX="/tmp/omdnl-org-site-$$.conf"
CT_TGZ="/tmp/omdnl-org-sync.tgz"
cleanup() { rm -f "$TMP_TGZ"; }
trap cleanup EXIT
if $DRY_RUN; then
echo "[DRY-RUN] tar $STATIC_SRC -> push VMID $VMID $APP_DIR"
exit 0
fi
echo "Packaging static files..."
tar czf "$TMP_TGZ" -C "$STATIC_SRC" .
echo "Copying to Proxmox host ${PROXMOX_HOST}..."
scp $SSH_OPTS "$TMP_TGZ" "root@${PROXMOX_HOST}:${REMOTE_TGZ}"
scp $SSH_OPTS "$NGINX_SRC" "root@${PROXMOX_HOST}:${REMOTE_NGINX}"
echo "Installing into VMID ${VMID}..."
ssh $SSH_OPTS "root@${PROXMOX_HOST}" bash -s "$VMID" "$REMOTE_TGZ" "$CT_TGZ" "$APP_DIR" "$SITE_FILE" "$REMOTE_NGINX" <<'REMOTE'
set -euo pipefail
VMID="$1"
REMOTE_TGZ="$2"
CT_TGZ="$3"
APP_DIR="$4"
SITE_FILE="$5"
REMOTE_NGINX="$6"
pct push "$VMID" "$REMOTE_TGZ" "$CT_TGZ"
rm -f "$REMOTE_TGZ"
pct exec "$VMID" -- bash -lc "mkdir -p '$APP_DIR' && rm -rf '$APP_DIR'/* && tar xzf '$CT_TGZ' -C '$APP_DIR' && rm -f '$CT_TGZ'"
pct push "$VMID" "$REMOTE_NGINX" "$SITE_FILE"
rm -f "$REMOTE_NGINX"
pct exec "$VMID" -- bash -lc "ln -sf '$SITE_FILE' /etc/nginx/sites-enabled/omdnl-org"
pct exec "$VMID" -- nginx -t
pct exec "$VMID" -- systemctl reload nginx
sleep 1
pct exec "$VMID" -- curl -fsS -H 'Host: omdnl.org' http://127.0.0.1/health >/dev/null
REMOTE
echo "Done. LAN: http://${IP_OMDNL_ORG_WEB:-192.168.11.222}/ (Host: omdnl.org)"
echo "Next: Cloudflare DNS + NPM — see scripts/cloudflare/configure-omdnl-org-dns.sh and upsert-omdnl-org-proxy-host.sh"

@@ -0,0 +1,24 @@
#!/usr/bin/env bash
# Run TestOneUSDTFlash forge script (broadcast): minimal borrower + 1 USDT flash against FLASH_VAULT.
#
# Env: PRIVATE_KEY, FLASH_VAULT, RPC_URL_138
# Optional: FLASH_VAULT_TOKEN, FLASH_TEST_AMOUNT (raw units)
#
# Usage:
# source scripts/lib/load-project-env.sh
# FLASH_VAULT=0x... ./scripts/deployment/test-one-usdt-flash-chain138.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck disable=SC1091
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" 2>/dev/null || true
RPC="${RPC_URL_138:?Set RPC_URL_138}"
: "${PRIVATE_KEY:?Set PRIVATE_KEY}"
: "${FLASH_VAULT:?Set FLASH_VAULT (deployed SimpleERC3156FlashVault)}"
cd "$PROJECT_ROOT/smom-dbis-138"
exec forge script script/flash/TestOneUSDTFlash.s.sol:TestOneUSDTFlash --rpc-url "$RPC" --broadcast -vvvv

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
# Sequential 1 / 10 / 100 / 1000 USDT flashes (6 dp) against FLASH_VAULT.
#
# Env: RPC_URL_138, PRIVATE_KEY, FLASH_VAULT
#
# shellcheck disable=SC1091
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" 2>/dev/null || true
: "${RPC_URL_138:?}" "${PRIVATE_KEY:?}" "${FLASH_VAULT:?}"
cd "$PROJECT_ROOT/smom-dbis-138"
exec forge script script/flash/TestScaledFlash.s.sol:TestScaledFlash --rpc-url "$RPC_URL_138" --broadcast -vvvv
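
For reference, the 1 / 10 / 100 / 1000 USDT ladder at 6 decimals corresponds to these raw amounts (a sketch only; the forge script derives its own values):

```shell
# 6-dp raw units for the scaled flash ladder.
ladder=""
for usdt in 1 10 100 1000; do
  ladder="$ladder $((usdt * 1000000))"
done
echo "raw:$ladder"
```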

@@ -0,0 +1,62 @@
#!/usr/bin/env bash
# Re-enable delivery on the Mainnet WETH relay lane: set RELAY_SHEDDING=0 and
# ensure RELAY_DELIVERY_ENABLED=1 in the relay profile env, then prompt for a service restart.
# Default: plan-only. Use --apply only after bridge inventory is restored.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
APPLY=0
MIN_BRIDGE_WEI="${MIN_BRIDGE_WETH_WEI:-10000000000000000}"
PROFILE_ENV="${RELAY_PROFILE_ENV_MAINNET_WETH:-${PROJECT_ROOT}/smom-dbis-138/services/relay/.env.mainnet-weth}"
SERVICE_NAME="${MAINNET_WETH_RELAY_SERVICE:-ccip-relay}"
while [[ $# -gt 0 ]]; do
case "$1" in
--apply) APPLY=1; shift ;;
--dry-run) APPLY=0; shift ;;
--min-bridge-weth-wei) MIN_BRIDGE_WEI="${2:-}"; shift 2 ;;
*)
echo "Usage: $0 [--apply|--dry-run] [--min-bridge-weth-wei WEI]" >&2
exit 1
;;
esac
done
[[ -f "$PROFILE_ENV" ]] || { echo "Profile env not found: $PROFILE_ENV" >&2; exit 1; }
if [[ "$APPLY" != "1" ]]; then
echo "Dry run only. This will NOT modify ${PROFILE_ENV}."
echo "Planned changes:"
echo " - set RELAY_SHEDDING=0"
echo " - ensure RELAY_DELIVERY_ENABLED=1"
echo " - restart service: $SERVICE_NAME"
echo
echo "Before apply, verify readiness:"
echo " bash scripts/verify/check-mainnet-weth-relay-lane.sh"
echo " bash scripts/bridge/fund-mainnet-relay-bridge.sh --target-bridge-balance-wei $MIN_BRIDGE_WEI --dry-run"
exit 0
fi
tmp_file="$(mktemp)"
trap 'rm -f "$tmp_file"' EXIT
cp "$PROFILE_ENV" "$tmp_file"
if grep -q '^RELAY_SHEDDING=' "$tmp_file"; then
sed -i 's/^RELAY_SHEDDING=.*/RELAY_SHEDDING=0/' "$tmp_file"
else
printf '\nRELAY_SHEDDING=0\n' >> "$tmp_file"
fi
if grep -q '^RELAY_DELIVERY_ENABLED=' "$tmp_file"; then
sed -i 's/^RELAY_DELIVERY_ENABLED=.*/RELAY_DELIVERY_ENABLED=1/' "$tmp_file"
else
printf '\nRELAY_DELIVERY_ENABLED=1\n' >> "$tmp_file"
fi
cp "$tmp_file" "$PROFILE_ENV"
echo "Updated $PROFILE_ENV"
echo "Next step on the relay host:"
echo " sudo systemctl restart $SERVICE_NAME"
echo "Then verify:"
echo " bash scripts/verify/check-mainnet-weth-relay-lane.sh"
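
The RELAY_SHEDDING / RELAY_DELIVERY_ENABLED edits above follow a replace-or-append pattern that generalizes to any KEY=VALUE env file. A minimal sketch (`upsert_env` is a hypothetical helper, not part of this repo; GNU `sed -i` assumed):

```shell
# upsert_env FILE KEY VALUE: rewrite KEY=... in place, or append it if missing.
upsert_env() {
  local file="$1" key="$2" value="$3"
  if grep -q "^${key}=" "$file"; then
    sed -i "s/^${key}=.*/${key}=${value}/" "$file"   # existing key: replace
  else
    printf '%s=%s\n' "$key" "$value" >> "$file"      # missing key: append
  fi
}

demo="$(mktemp)"
printf 'RELAY_SHEDDING=1\n' > "$demo"
upsert_env "$demo" RELAY_SHEDDING 0
upsert_env "$demo" RELAY_DELIVERY_ENABLED 1
result="$(cat "$demo")"
echo "$result"
rm -f "$demo"
```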

@@ -116,8 +116,9 @@ declare -A VMID_HOSTS=(
["10150"]="${PROXMOX_HOST_ML110}"
["10151"]="${PROXMOX_HOST_ML110}"
["7811"]="${PROXMOX_HOST_R630_01}"
["2501"]="${PROXMOX_HOST_ML110}"
["2502"]="${PROXMOX_HOST_ML110}"
["2101"]="${PROXMOX_HOST_R630_01}"
["2201"]="${PROXMOX_HOST_R630_01}"
["2301"]="${PROXMOX_HOST_R630_03}"
)
stopped_list=()
@@ -157,4 +158,3 @@ if [ ${#stopped_list[@]} -gt 0 ]; then
log_info " bash scripts/fix-npmplus-backend-services.sh $PROXMOX_HOST $CONTAINER_ID true"
echo ""
fi
