
Storage fix: r630-01 (72%) and r630-02 thin5 (84.6%)

Last updated: 2026-02-28

Situation

  • r630-01 data / local-lvm: ~72% used at the time of this note. Many CTs used this pool, including validators, core RPC 2101, and the then-current edge/private RPC lanes.
  • r630-02 thin5: ~84.6% used. Only VMID 5000 (Blockscout/Explorer) uses thin5.

Fix options

1. Prune only (no migration)

Frees space without moving any container:

  • thin5: Prune inside VMID 5000: journal, Docker logs/images, logrotate, backups.
  • r630-01: Prune journal and logs in all running CTs on the host.
# From project root (LAN, SSH to both hosts)
bash scripts/maintenance/fix-storage-r630-01-and-thin5.sh

Dry-run:

bash scripts/maintenance/fix-storage-r630-01-and-thin5.sh --dry-run
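The per-CT prune steps above can be sketched roughly as follows; the command names, vacuum size, and DRY_RUN guard are illustrative assumptions, not the script's actual contents:

```shell
#!/usr/bin/env bash
# Rough sketch of the per-CT prune pass; DRY_RUN=1 mirrors the script's
# --dry-run flag by printing commands instead of running them.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "DRY-RUN: $*"
  else
    "$@"
  fi
}

prune_ct() {
  # cap the systemd journal (size is an assumption)
  run journalctl --vacuum-size=100M
  # drop unused Docker containers, images, and build cache, if Docker exists
  if command -v docker >/dev/null 2>&1; then
    run docker system prune -af
  fi
  # force a logrotate pass so oversized logs are rotated and compressed
  run logrotate -f /etc/logrotate.conf
}

DRY_RUN=1 prune_ct
```

The real script runs this over SSH against each CT; the sketch only shows the shape of the work done inside one container.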

2. Prune + migrate VMID 5000 to an empty pool

To free thin5, migrate Blockscout (5000) to the emptiest pool on r630-02: thin2 (~4.8% used):

# Migrate 5000 from thin5 -> thin2 (empty pool)
bash scripts/maintenance/fix-storage-r630-01-and-thin5.sh --migrate-5000 thin2

Other options: thin6 (~14% used), thin3 (~11% used). This will: stop 5000 → vzdump to local → destroy CT → restore to target pool → start. Expect 15–45 min; Blockscout is down during backup/restore.
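The stop → vzdump → destroy → restore → start cycle can be sketched as a printed plan; the vzdump flags and archive path here are assumptions rather than the script's exact invocation, and printing keeps the sketch safe to run outside Proxmox:

```shell
#!/usr/bin/env bash
# Sketch of the migration cycle behind --migrate-5000 (flags and backup
# path assumed; the maintenance script drives the real sequence).
VMID=5000
TARGET_POOL="${TARGET_POOL:-thin2}"

migration_plan() {
  echo "pct stop $VMID"
  echo "vzdump $VMID --storage local --mode stop --compress zstd"
  echo "pct destroy $VMID"
  # restore the newest vzdump archive onto the target pool
  echo "pct restore $VMID /var/lib/vz/dump/vzdump-lxc-$VMID-*.tar.zst --storage $TARGET_POOL"
  echo "pct start $VMID"
}

migration_plan
```

The container stays down from the first `pct stop` until the final `pct start`, which is where the 15–45 min outage estimate comes from.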

3. Manual VMID 5000 prune (if script not run from repo)

On r630-02 or from a host that can SSH there:

bash scripts/maintenance/vmid5000-free-disk-and-logs.sh

Verify after fix

bash scripts/audit-proxmox-rpc-storage.sh
# or
ssh root@192.168.11.11 'pvesm status'
ssh root@192.168.11.12 'pvesm status'
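As a rough verification helper, the `pvesm status` output can be filtered for pools at or above a usage threshold; the column layout is an assumption (usage percentage in the last field):

```shell
#!/usr/bin/env bash
# Flag pools at/above a usage threshold from `pvesm status` output on stdin.
flag_full_pools() {
  local threshold="$1"
  awk -v t="$threshold" 'NR > 1 {
    pct = $NF; sub(/%$/, "", pct)   # strip a trailing "%" if present
    if (pct + 0 >= t) print $1, pct "%"
  }'
}

# Example with values from this note; real input would come from
# `ssh root@192.168.11.12 pvesm status`:
sample='Name Type Status Total Used Available %
thin2 lvmthin active 100 5 95 4.80%
thin5 lvmthin active 100 85 15 84.60%'
printf '%s\n' "$sample" | flag_full_pools 80
```

Anything the filter still reports after the prune/migration (and the reclaim delay below) warrants another look.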

LVM-thin space reclamation can take a few minutes after data is deleted inside CTs; re-run pvesm status or lvs after a short wait.

Reference

  • thin5 on r630-02: single consumer VMID 5000.
  • r630-01 data: shared by VMIDs 2101, 1000, 1001, 1002, 10100, 10101, 10120, and others on that host.
  • Existing prune script for 5000: scripts/maintenance/vmid5000-free-disk-and-logs.sh.