# Physical Drives and Current Configurations — Proxmox Hosts
> Modern note: This hardware/storage inventory is still useful, but host workload examples can lag behind live CT placement. Use the storage descriptions here together with current VM placement from `docs/04-configuration/ALL_VMIDS_ENDPOINTS.md` before making migration or capacity decisions.

**Last updated:** 2026-04-03

---
## ml110 (192.168.11.10)
| Device | Size | Model | Serial | Configuration |
|--------|--------|------------------------|----------|----------------|
| **sda** | 931.5G | ST1000DM003-1ER162 (HDD) | Z4YE0TMR | Partitioned: sda1 (1M), sda2 (1G vfat /boot/efi), sda3 (930.5G LVM2). VG **pve**: swap 8G, root 96G ext4 `/`, **data** thin pool 794G (CTs 1003, 1004, 1503–1508, 2102, 2301, 2304–2308, 2400, 2402, 2403). |
| **sdb** | 931.5G | ST1000DM003-1ER162 (HDD) | Z4YDLPZ3 | **In VG pve** — extended `data` thin pool (data_tdata). Pool now ~1.7 TB total. |
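The sdb extension follows the standard LVM thin-pool growth sequence. A sketch that only prints the commands rather than executing them — device and VG names (`/dev/sdb`, `pve`, `pve/data`) are taken from the table above; pipe the output to `sh` on the host to actually run it:

```shell
# Print the steps that fold a second disk into VG pve and grow the
# existing "data" thin pool into the freed space. Prints only.
extend_cmds() {
  echo "pvcreate /dev/sdb"               # label the disk as an LVM PV
  echo "vgextend pve /dev/sdb"           # add it to the existing VG
  echo "lvextend -l +100%FREE pve/data"  # grow the thin pool's data LV
}
extend_cmds
```

Note that growing a thin pool's data LV does not automatically grow its metadata LV; check `lvs -a pve` for `data_tmeta` headroom after extending.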
**RAID:** None.

**Summary:** 2× 1TB HDDs. Both in use: sda (OS + original data pool); sdb added to pve and used to extend the data thin pool (~930G added). Data/local-lvm pool now ~1.7 TB.

---
## r630-01 (192.168.11.11)
| Device | Size | Model | Serial | Configuration |
|--------|--------|-------------------|--------------|----------------|
| **sda** | 558.9G | HUC109060CSS600 (SAS HDD) | KSKUZEZF | Partitioned: sda1 (1M), sda2 (1G vfat), sda3 (557G **zfs_member**). ZFS used for Proxmox root (rpool). |
| **sdb** | 558.9G | HUC109060CSS600 (SAS HDD) | KSKM1B4F | Same layout as sda — ZFS mirror partner for root. |
| **sdc** | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE090E | Member of **md0** (RAID10). |
| **sdd** | 232.9G | CT250MX500SSD1 | 2203E5FE08F8 | Member of **md0** (RAID10). |
| **sde** | 232.9G | CT250MX500SSD1 | 2203E5FE08FA | Member of **md0** (RAID10). |
| **sdf** | 232.9G | CT250MX500SSD1 | 2203E5FE08F1 | Member of **md0** (RAID10). |
| **sdg** | 232.9G | CT250MX500SSD1 | 2203E5FE095E | Member of **md0** (RAID10). |
| **sdh** | 232.9G | CT250MX500SSD1 | 2203E5FE0901 | Member of **md0** (RAID10). |
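md0's usable size follows directly from the member count: RAID10 keeps two copies of every block, so capacity is (members ÷ 2) × member size. A quick check of the arithmetic behind the figure reported below:

```shell
# RAID10 usable capacity: half the members store data, half mirror it.
members=6
member_g=233                        # each CT250MX500SSD1 shows as 232.9G
usable=$(( members / 2 * member_g ))
echo "${usable}G"                   # ~699G raw; md0 reports ~698G after metadata
```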
**RAID:** **md0** = RAID10, 6× 233G SSDs → **~698G** usable. State: **active**, 6/6 devices [UUUUUU].

**LVM on md0:** VG **pve** (single PV `/dev/md0`). Thin pools: **pve-thin1** 208G, **pve-data** 280G. This host has carried validators, core RPC `2101`, and the edge/private RPC lanes across multiple generations. Verify the current CT set with `pct list` or `ALL_VMIDS_ENDPOINTS.md` instead of relying on the historical `2500–2505` example.

**Summary:** 2× 559G SAS HDDs (ZFS root) + 6× 233G SSDs (RAID10 → LVM data/thin1). All drives in use.

---
## r630-02 (192.168.11.12)
| Device | Size | Model | Serial | Configuration |
|--------|--------|-------------------|--------------|----------------|
| **sda** | 232.9G | CT250MX500SSD1 | 2202E5FB4CB9 | Partitioned: sda1 (1M), sda2 (1G vfat), sda3 (231G **zfs_member**). ZFS for Proxmox root. |
| **sdb** | 232.9G | CT250MX500SSD1 | 2203E5FE090D | Same — ZFS mirror for root. |
| **sdc** | 232.9G | CT250MX500SSD1 | 2203E5FE07E1 | sdc3 → LVM VG **thin2** (thin pool → VMIDs 5000, 6000, 6001, 6002). |
| **sdd** | 232.9G | CT250MX500SSD1 | 2202E5FB186E | sdd3 → LVM VG **thin3** (VMIDs 5800, 10237, 8641, 5801). |
| **sde** | 232.9G | CT250MX500SSD1 | 2203E5FE0905 | sde3 → LVM VG **thin4** (VMIDs 7810, 7811). |
| **sdf** | 232.9G | CT250MX500SSD1 | 2203E5FE0964 | sdf3 → LVM VG **thin5** (empty pool after 5000 migrated to thin2). |
| **sdg** | 232.9G | CT250MX500SSD1 | 2203E5FE0928 | sdg3 → LVM VG **thin6** (VMIDs 5700, 6400, 6401, 6402). |
| **sdh** | 232.9G | CT250MX500SSD1 | 2203E5FE0903 | sdh3 → LVM VG **thin1** (thin1-r630-02: 2201, 2303, 2401, 5200–5202, 6200, 10234). |
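The per-disk layout above (one VG and one thin pool per SSD, instead of a shared array) is mechanical enough to generate. A sketch that prints the provisioning commands for all six disks — the partition suffix `3` matches the table, but the pool LV name `pool0` is illustrative, not taken from the live host:

```shell
# Print one vgcreate + thin-pool lvcreate pair per data SSD (sdc..sdh),
# matching the thin1..thin6 naming in the table. Prints only.
gen_cmds() {
  i=1
  for disk in sdc sdd sde sdf sdg sdh; do
    echo "vgcreate thin$i /dev/${disk}3"
    echo "lvcreate -l 100%FREE --thinpool pool0 thin$i"
    i=$((i + 1))
  done
}
gen_cmds
```

One VG per disk means a single disk failure takes out only the guests on that pool, at the cost of no redundancy — the opposite trade-off from r630-01's RAID10.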
**RAID:** None (each data disk is a separate LVM PV).

**Summary:** 2× 233G SSDs (ZFS root) + 6× 233G SSDs (each its own VG: thin1–thin6). All 8 drives in use.

---
## r630-03 (192.168.11.13)
| Device | Size | Model / notes | Configuration |
|--------|--------|---------------|---------------|
| **sda** | 558.9G | AL14SEB060NY | sda3 → VG **pve** (swap, root ext4, **data** thin pool). |
| **sdb** | 558.9G | HUC109060CSS600 | PV in VG **pve**; extends **data** thin pool (~1 TiB total with sda). |
| **sdc–sdh** | 232.9G each | Samsung 850/860 EVO 250G | Whole-disk PV → VG **thin1**–**thin6** + thin pools; Proxmox **`thin1-r630-03`** … **`thin6-r630-03`** (~226 GiB each). |
**RAID:** None on data SSDs.

**Summary:** 2× ~559G drives in **pve** for OS + thin **data**/**local-lvm**; six SSDs as **per-disk LVM thin** (same idea as r630-02). Idempotent provisioning script: `scripts/proxmox/provision-r630-03-six-ssd-thinpools.sh`.

---
## r630-04 (192.168.11.14)
| Device | Size | Model / notes | Configuration |
|--------|--------|---------------|---------------|
| **sda** | 279.4G | ST9300653SS | sda3 → VG **pve** (swap, root, **data** thin pool leg). |
| **sdb** | 279.4G | HUC106030CSS600 | PV in VG **pve**; **data** thin ~467 GiB. |
| **sdc–sdf** | 232.9G each | Crucial MX500 | Ceph bluestore OSDs (one VG per disk). |
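Each of the four MX500s above is a whole-disk bluestore OSD. On a Proxmox host the per-disk step is `pveceph osd create`; a dry-run that just prints the four commands (disk names from the table — run the output on r630-04 itself, one disk at a time):

```shell
# Print the pveceph command that turns each 233G SSD into a bluestore OSD
# (pveceph wraps ceph-volume). Prints only, never executes.
osd_cmds() {
  for disk in sdc sdd sde sdf; do
    echo "pveceph osd create /dev/$disk"
  done
}
osd_cmds
```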
**RAID:** None.

**Summary:** **pve** for Proxmox + guest thin storage; four SSDs for Ceph.

---
## Quick reference
| Host | Physical drives | Layout | Unused / notes |
|---------|------------------|--------|-----------------|
| ml110 | 2× 1TB HDD | sda: OS + LVM data; sdb: second PV extending the data thin pool (~1.7 TB) | All in use |
| r630-01 | 2× 559G SAS HDD + 6× 233G SSD | ZFS root + RAID10 md0 → LVM | All in use |
| r630-02 | 2× 233G + 6× 233G SSD | ZFS root + 6× single-disk LVM (thin1–thin6) | All in use |
| r630-03 | 2× 559G + 6× 233G SSD | **pve** thin **data** ~1 TiB + **thin1-r630-03**…**thin6-r630-03** on sdc–sdh | All data SSDs in use |
| r630-04 | 2× 279G SAS HDD + 4× 233G SSD | LVM **pve** + Ceph OSDs on 233G disks | Ceph + **pve** in use |
To re-check:

`ssh root@<host> 'lsblk -o NAME,SIZE,TYPE,FSTYPE,MODEL,SERIAL; echo; pvs; vgs'`
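To sweep the whole fleet rather than one host at a time, the same command can be generated for each IP in this doc (printing only; pipe the output to `sh` with SSH keys in place to execute):

```shell
# Print the re-check command for every Proxmox host in this inventory.
# IPs are from the section headings above.
check_cmds() {
  for ip in 192.168.11.10 192.168.11.11 192.168.11.12 192.168.11.13 192.168.11.14; do
    echo "ssh root@$ip 'lsblk -o NAME,SIZE,TYPE,FSTYPE,MODEL,SERIAL; echo; pvs; vgs'"
  done
}
check_cmds
```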