
R630-01 RAID Configuration Analysis

Date: January 6, 2026
Node: R630-01 (192.168.11.11)


Current Disk Configuration

Available Disks

Disk  Size      Status     Current Usage
sda   558.9 GB  Available  System disk
sdb   558.9 GB  Available  System disk
sdc   232.9 GB  In Use     LVM2_member (pve VG)
sdd   232.9 GB  In Use     LVM2_member (pve VG)
sde   232.9 GB  Available  Unused
sdf   232.9 GB  Available  Unused
sdg   232.9 GB  Available  Unused
sdh   232.9 GB  Available  Unused

Current LVM Configuration

  • Volume Group: pve
  • Physical Volumes: sdc, sdd
  • Total Size: ~465.77 GB
  • Free Space: ~57.46 GB
  • Thin Pools: thin1 (208GB), data (200GB)
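The figures above can be re-checked on the node before any changes are made. A minimal sketch, assuming standard LVM2 tooling on R630-01; the read-only commands are collected into a variable so the list can be reviewed (or piped to `sh`) first:

```shell
# Read-only LVM inspection; safe on a live node.
# pvs - expect sdc and sdd listed under VG "pve"
# vgs - expect ~465.77 GB total, ~57.46 GB free
# lvs - expect thin pools "thin1" and "data"
checks='pvs -o pv_name,vg_name,pv_size
vgs pve -o vg_size,vg_free
lvs pve -o lv_name,lv_size'
printf '%s\n' "$checks"
```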

RAID Options for Fast Access

Option 1: RAID 10 (Fast Access + Redundancy)

Configuration:

  • Disks: sdc, sdd, sde, sdf, sdg, sdh (6 disks)
  • Level: RAID 10 (mirrored stripes)
  • Capacity: ~700 GB (3x 233GB)
  • Performance: Excellent read and write performance
  • Redundancy: Survives 1 to 3 disk failures (at most one per mirror pair)
  • Use Case: Best balance of performance and redundancy

Pros:

  • Fast read/write performance
  • Good redundancy
  • Fast rebuild times

Cons:

  • Requires migrating data off sdc and sdd first
  • Only 50% capacity utilization

Option 2: RAID 0 (Maximum Performance, No Redundancy)

Configuration:

  • Disks: sdc, sdd, sde, sdf, sdg, sdh (6 disks)
  • Level: RAID 0 (striping)
  • Capacity: ~1.4 TB (6x 233GB)
  • Performance: Maximum (fastest possible)
  • Redundancy: None - one disk failure = total data loss
  • Use Case: Maximum performance, temporary/cache data

Pros:

  • Maximum performance
  • Full capacity utilization
  • Simple configuration

Cons:

  • No redundancy
  • High risk of data loss
  • Not recommended for production data

Option 3: RAID 5 (Good Performance + Single Redundancy)

Configuration:

  • Disks: sdc, sdd, sde, sdf, sdg, sdh (6 disks)
  • Level: RAID 5
  • Capacity: ~1.17 TB (5x 233GB)
  • Performance: Good (better reads than writes)
  • Redundancy: Can survive 1 disk failure
  • Use Case: Good balance for production

Pros:

  • Good read performance
  • Single disk redundancy
  • Better capacity than RAID 10

Cons:

  • Slower write performance than RAID 10
  • Requires migrating data off sdc and sdd first
  • Slower rebuild times

Option 4: RAID 6 (Good Performance + Double Redundancy)

Configuration:

  • Disks: sdc, sdd, sde, sdf, sdg, sdh (6 disks)
  • Level: RAID 6
  • Capacity: ~933 GB (4x 233GB)
  • Performance: Good (better reads than writes)
  • Redundancy: Can survive 2 disk failures
  • Use Case: High availability requirements

Pros:

  • Double disk redundancy
  • Good read performance
  • Safer than RAID 5

Cons:

  • Slower write performance
  • Requires migrating data off sdc and sdd first
  • Lower capacity than RAID 5
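The capacity figures across the four options follow directly from the disk count. A quick arithmetic check, using 233 GB per disk and six disks as in the options above:

```shell
# Usable capacity per RAID level for N disks of SIZE GB each.
N=6; SIZE=233
RAID0=$((N * SIZE))        # striping: every disk contributes capacity
RAID5=$(((N - 1) * SIZE))  # one disk's worth of parity
RAID6=$(((N - 2) * SIZE))  # two disks' worth of parity
RAID10=$((N / 2 * SIZE))   # mirrored pairs: half the raw capacity
echo "RAID0=${RAID0}GB RAID5=${RAID5}GB RAID6=${RAID6}GB RAID10=${RAID10}GB"
# → RAID0=1398GB RAID5=1165GB RAID6=932GB RAID10=699GB
```

These match the ~1.4 TB, ~1.17 TB, ~933 GB, and ~700 GB figures quoted for Options 2, 3, 4, and 1 respectively.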

Important Considerations

⚠️ Critical: sdc and sdd are Currently in Use

Current Status:

  • sdc and sdd are part of the pve volume group
  • They contain LVM thin pools (thin1 and data)
  • These pools are actively used by migrated containers

Required Actions Before RAID Creation:

  1. Migrate all data from sdc and sdd to other storage
  2. Remove sdc and sdd from the pve volume group
  3. Verify no data remains on sdc and sdd
  4. Then create RAID array

Migration Strategy

Option A: Migrate to sda/sdb

  • Use sda and sdb (558GB each) as temporary storage
  • Migrate pve VG data to sda/sdb
  • Remove sdc/sdd from pve VG
  • Create RAID with sdc-sdh
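The Option A steps map onto standard LVM commands. A sketch as a printed dry run (nothing is executed), assuming sda and sdb really are unused — verify with lsblk before running any of it; pvmove relocates extents while the logical volumes stay online:

```shell
# Dry run of the Option A migration: the plan is printed, not executed.
# Review it, then run the commands one at a time on the node.
plan='pvcreate /dev/sda /dev/sdb
vgextend pve /dev/sda /dev/sdb
pvmove /dev/sdc
pvmove /dev/sdd
vgreduce pve /dev/sdc /dev/sdd
pvremove /dev/sdc /dev/sdd'
printf '%s\n' "$plan"
```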

Option B: Use Only Available Disks

  • Create RAID with only sde, sdf, sdg, sdh (4 disks)
  • Keep sdc/sdd in pve VG
  • Less capacity but no migration needed

Recommended: RAID 10 for Fast Access with Redundancy

  1. Install mdadm (if not already installed)
  2. Migrate data from sdc/sdd to sda/sdb
  3. Remove sdc/sdd from pve VG
  4. Create RAID 10 with sdc-sdh
  5. Add RAID to Proxmox as storage

Steps:

# 1. Install mdadm
apt-get update && apt-get install -y mdadm

# 2. Create RAID 10 (all six disks must be free of data — sdc/sdd only after migration)
mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

# 3. Create filesystem
mkfs.ext4 /dev/md0
# or
mkfs.xfs /dev/md0

# 4. Add to Proxmox storage
# Via web UI or pvesm command
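Two steps the sequence above leaves implicit: persisting the array so it assembles at boot, and registering the filesystem with Proxmox. A sketch as a printed dry run; the mount point /mnt/raid10 and storage ID raid10-fast are placeholders, and an /etc/fstab entry is also needed so the mount survives a reboot:

```shell
# Dry run of the follow-up steps: the plan is printed, not executed.
plan='mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
mkdir -p /mnt/raid10
mount /dev/md0 /mnt/raid10
pvesm add dir raid10-fast --path /mnt/raid10'
printf '%s\n' "$plan"
```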

Script Available

A script has been created to assist with RAID creation:

  • Location: scripts/create-raid-r630-01.sh
  • Usage: ./scripts/create-raid-r630-01.sh [raid_level]
  • RAID Levels: 0, 5, 6, 10 (default: 10)

Note: The script includes safety checks and will warn if disks are in use.


Performance Expectations

RAID 10 (6 disks)

  • Read Speed: ~6x single disk speed (~600-900 MB/s)
  • Write Speed: ~3x single disk speed (~300-450 MB/s)
  • IOPS: Significantly improved for random I/O

RAID 0 (6 disks)

  • Read Speed: ~6x single disk speed (~600-900 MB/s)
  • Write Speed: ~6x single disk speed (~600-900 MB/s)
  • IOPS: Maximum possible
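Once the array is mounted, the throughput estimates above can be validated rather than assumed. A sketch as a printed dry run, assuming fio is installed and /mnt/raid10 (a placeholder) is the mount point; delete the test file afterwards:

```shell
# Dry run of a benchmark pass: sequential read, sequential write, random mixed I/O.
plan='fio --name=seq-read --filename=/mnt/raid10/fio.test --rw=read --bs=1M --size=4G --direct=1
fio --name=seq-write --filename=/mnt/raid10/fio.test --rw=write --bs=1M --size=4G --direct=1
fio --name=rand-rw --filename=/mnt/raid10/fio.test --rw=randrw --bs=4k --size=1G --direct=1 --iodepth=32 --ioengine=libaio'
printf '%s\n' "$plan"
```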

Next Steps

  1. Review current storage usage on sdc/sdd
  2. Plan data migration if using sdc/sdd in RAID
  3. Choose RAID level based on requirements
  4. Execute RAID creation using script or manual commands
  5. Add RAID to Proxmox as storage pool
  6. Test performance and verify redundancy

Last Updated: January 6, 2026