R630 DIMM B2 reseat procedure

Last Updated: 2026-01-31
Document Version: 1.0
Status: Active Documentation


Use when: you receive an alert or message instructing you to reseat DIMM B2 on one of the R630s.


1. Identify which R630

| Proxmox host | IP address | Role / notable workloads |
| --- | --- | --- |
| r630-01 | 192.168.11.11 | NPMplus (10233), many infra containers (69 LXC) |
| r630-02 | 192.168.11.12 | besu-rpc-public-1 (2201), Blockscout (5000), Firefly, MIM (10 LXC) |
  • If the alert shows hostname: r630-01 = .11, r630-02 = .12.
  • If the alert shows IP: 192.168.11.11 = r630-01, 192.168.11.12 = r630-02.
  • If the alert shows Dell service tag / iDRAC, match that to the physical server you have labeled as r630-01 or r630-02.
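The hostname/IP matching above can be sketched as a small helper. This is a hypothetical convenience function, not an existing script in the repo; it only encodes the table in section 1.

```shell
#!/bin/sh
# Hypothetical helper: resolve an alert identifier (hostname or IP)
# to the matching R630, per the host table above.
resolve_r630() {
  case "$1" in
    r630-01|192.168.11.11) echo "r630-01 192.168.11.11" ;;
    r630-02|192.168.11.12) echo "r630-02 192.168.11.12" ;;
    *) echo "unknown identifier: $1" >&2; return 1 ;;
  esac
}

resolve_r630 192.168.11.12   # → r630-02 192.168.11.12
```

For a Dell service tag in the alert, there is no lookup here: match the tag against the physical labels as described above.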

2. Impact of taking that host down

  • r630-01 down: NPMplus (public proxy) and many services unreachable until host is back. Plan for a short maintenance window.
  • r630-02 down: Public RPC (besu-rpc-public-1), Blockscout, Firefly, MIM API unreachable until host is back.

A DIMM reseat requires the server to be powered off; the R630 has no hot-swap memory, so there is no live reseat.


3. Prepare for the power-off

  1. Notify anyone using services on that host.
  2. Optional: Migrate or shut down VMs/containers you can move (Proxmox cluster) to reduce I/O before power off.
  3. Optional: Put the node in maintenance in Proxmox (Datacenter → select node → Maintenance) so the cluster doesn't try to start resources on it during the work.
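The optional drain steps can be sketched from the CLI. This assumes Proxmox VE 7.3 or later for HA maintenance mode; the VMID (100), CTID (10233), and target node are placeholders — substitute your own.

```shell
#!/bin/sh
# Sketch: drain r630-01 before powering it off (Proxmox VE 7.3+ assumed).
NODE=r630-01

# Put the node in HA maintenance mode so the cluster won't place resources on it
ha-manager crm-command node-maintenance enable "$NODE"

# Live-migrate a VM you can move (placeholder VMID 100)
qm migrate 100 r630-02 --online

# Cleanly stop a container you can't migrate (placeholder CTID 10233)
pct shutdown 10233
```

After the work, `ha-manager crm-command node-maintenance disable r630-01` lets the cluster use the node again (see also section 5).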

4. Reseat DIMM B2 steps

  1. Shut down the R630 (from Proxmox: shutdown the node, or from iDRAC: Power → Power Off).
  2. Power off at the PSU (or pull power) and wait ~30 seconds. Touch a grounded chassis part to discharge static.
  3. Open the chassis and locate memory. On Dell R630:
    • B2 = channel B, slot 2 (see the Dell R630 Owner's Manual / memory population rules for the exact slot position).
    • Slots are usually labeled on the board (e.g. A1–A4, B1–B4, etc.).
  4. Reseat B2:
    • Release the ejector clips at both ends of the DIMM.
    • Remove the module, then reinstall it firmly until the clips click.
    • Ensure the notch aligns and the module is fully seated.
  5. Close the chassis, restore power, and power on the server.
  6. Verify:
    • Enter BIOS/iDRAC and check System Memory (or run memory test if available).
    • Once the OS is up, from Proxmox or SSH: ssh root@<host-ip> 'dmidecode -t memory | grep -A2 "Locator: B2"' (or check total RAM with free -h) to confirm B2 is present and size is correct.
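The verification in step 6 can be sketched as one pass from another machine. This assumes root SSH access and that `dmidecode` and `ipmitool` are installed on the host; the IP is a placeholder for whichever R630 you worked on.

```shell
#!/bin/sh
# Sketch: confirm DIMM B2 is back after the reseat (placeholder host IP).
HOST=192.168.11.11

# Show B2's size/type/speed as reported by SMBIOS
ssh root@"$HOST" 'dmidecode -t memory | grep -A4 "Locator: B2"'

# Sanity-check total RAM against the expected amount
ssh root@"$HOST" 'free -h'

# Look for fresh memory errors in the system event log (needs ipmitool)
ssh root@"$HOST" 'ipmitool sel list | tail -n 20'
```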

5. After maintenance

  1. Exit maintenance mode on the node in Proxmox if you used it.
  2. Confirm pveproxy, pvedaemon, pvestatd are active and Web UI (8006) is reachable.
  3. Run a quick health check:
    ./scripts/check-all-proxmox-hosts.sh
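Steps 2–3 can be sketched as a single post-maintenance check (the IP is a placeholder; adjust per host):

```shell
#!/bin/sh
# Sketch: post-maintenance health check for one host (placeholder IP).
HOST=192.168.11.11

# All three Proxmox services should report "active"
ssh root@"$HOST" 'systemctl is-active pveproxy pvedaemon pvestatd'

# Web UI on 8006 should answer (self-signed cert, hence -k)
curl -ks "https://$HOST:8006/" >/dev/null && echo "Web UI reachable"
```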

Quick reference

  • r630-01: 192.168.11.11 — NPMplus, infra (69 LXC)
  • r630-02: 192.168.11.12 — RPC public, Blockscout, Firefly, MIM (10 LXC)
  • DIMM B2: Channel B, slot 2 — power off before reseat.
  • Health check: ./scripts/check-all-proxmox-hosts.sh