proxmox/reports/storage/R630_01_RAID10_IMPLEMENTATION_PLAN.md

R630-01 RAID 10 Implementation Plan

Date: January 6, 2026
Target: Create RAID 10 with sdc-sdh (6 disks) for fast access
Current Status: sdc/sdd in use, sde-sdh available


Current Situation

Disk Status

  • sda/sdb: System disks (ZFS root pool) - Cannot be used
  • sdc/sdd: In use by pve VG (~408GB used, ~57GB free)
  • sde-sdh: Available (4x 232.9GB SSDs)

Storage Usage

  • pve VG: ~408GB used on sdc/sdd
  • Thin Pools: thin1 (208GB), data (200GB)
  • Available Space: ~57GB free
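The layout described above can be re-verified on the host before making any changes. These are read-only checks; device and VG names are assumed to match the report:

```shell
# Read-only status checks; nothing here modifies disks.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT       # confirm sde-sdh are present and unmounted
pvs -o pv_name,vg_name,pv_size,pv_free   # confirm sdc/sdd belong to the pve VG
vgs pve                                  # overall VG size and free space
lvs -a pve                               # thin pools (thin1, data) and their sizes
```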

Implementation Strategy

Option A: RAID 10 with 4 Disks (Simpler, Safer)

Disks: sde, sdf, sdg, sdh
Capacity: ~466GB (RAID 10)
Steps:

  1. Create RAID 10 with 4 disks
  2. Add to pve VG
  3. Migrate data from sdc/sdd to RAID
  4. Remove sdc/sdd from pve VG
  5. Keep RAID 10 with 4 disks (sdc/sdd remain unused or can be used separately)

Pros:

  • Simpler process
  • No data loss risk
  • sdc/sdd can be used for other purposes later

Cons:

  • Only 4 disks in RAID (less performance than 6)
  • sdc/sdd not utilized

Option B: RAID 10 with All 6 Disks (More Complex)

Disks: sdc, sdd, sde, sdf, sdg, sdh
Capacity: ~700GB (RAID 10)
Steps:

  1. Create RAID 10 with 4 available disks (sde-sdh)
  2. Add to pve VG
  3. Migrate ALL data from sdc/sdd to RAID
  4. Remove sdc/sdd from pve VG
  5. Stop RAID array
  6. Create new RAID 10 with all 6 disks (sdc-sdh)
  7. Restore data (requires backup or manual migration)

Pros:

  • Maximum performance (6 disks)
  • All disks utilized
  • Better redundancy

Cons:

  • Complex process
  • Requires data backup/restore
  • Downtime during rebuild
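The capacity figures quoted for both options follow from RAID 10 keeping half of raw capacity (every block is mirrored once). A quick arithmetic check, using the ~233GB per-disk size from the report (helper name is illustrative):

```shell
# RAID 10 usable capacity = (number of disks / 2) * per-disk size.
raid10_usable_gb() {
  local disks=$1 disk_gb=$2
  echo $(( disks / 2 * disk_gb ))
}
raid10_usable_gb 4 233   # Option A -> 466
raid10_usable_gb 6 233   # Option B -> 699
```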

Since sda/sdb are system disks and cannot be used as scratch space for migration, the safest approach is Option A, carried out as follows:

Step-by-Step Process

  1. Install mdadm

    apt-get update && apt-get install -y mdadm
    
  2. Create RAID 10 with 4 disks

    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sde /dev/sdf /dev/sdg /dev/sdh
    
  3. Wait for sync (~30-60 minutes)

    watch cat /proc/mdstat
    
  4. Add RAID to pve VG

    pvcreate /dev/md0
    vgextend pve /dev/md0
    
  5. Migrate data from sdc

    pvmove /dev/sdc /dev/md0
    

    This will take 1-2 hours depending on data size

  6. Migrate data from sdd

    pvmove /dev/sdd /dev/md0
    

    This will take 1-2 hours depending on data size

  7. Remove sdc/sdd from pve VG

    vgreduce pve /dev/sdc /dev/sdd
    pvremove /dev/sdc /dev/sdd
    
  8. Save RAID configuration

    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    
  9. Add RAID to Proxmox storage (optional)

    • Via web UI: Datacenter > Storage > Add > LVM-Thin
    • Or via CLI, once a thin pool (here named raid10-thin) has been created in the pve VG: pvesm add lvmthin local-raid10 --vgname pve --thinpool raid10-thin
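The CLI form in step 9 refers to a thin pool that does not yet exist at that point. A sketch of creating one first; the pool name and the 400G size are assumptions (leave some VG space free for thin-pool metadata and growth):

```shell
# Create a thin pool on the pve VG (now backed by the RAID 10 PV),
# then register it as Proxmox storage. Name and size are examples.
lvcreate --type thin-pool -L 400G -n raid10-thin pve
pvesm add lvmthin local-raid10 --vgname pve --thinpool raid10-thin
pvesm status   # local-raid10 should now appear in the list
```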

Performance Expectations

RAID 10 with 4 Disks (sde-sdh)

  • Read Speed: ~4x single disk (~400-600 MB/s)
  • Write Speed: ~2x single disk (~200-300 MB/s)
  • IOPS: Significantly higher than a single disk
  • Redundancy: Survives any single disk failure; up to 2 failures if they occur in different mirror pairs

RAID 10 with 6 Disks (if implemented later)

  • Read Speed: ~6x single disk (~600-900 MB/s)
  • Write Speed: ~3x single disk (~300-450 MB/s)
  • IOPS: Highest of the layouts considered here
  • Redundancy: Survives any single disk failure; up to 3 failures if each occurs in a different mirror pair
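The throughput figures above are rough estimates for SATA SSDs; actual numbers depend on the controller, chunk size, and workload. A quick read-only sanity check once the array is built (device name assumed):

```shell
# Buffered sequential read test; read-only, safe to run on a live array.
hdparm -t /dev/md0
```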

Scripts Available

  1. scripts/create-raid10-r630-01-complete.sh

    • Complete automation for Option A (4-disk RAID 10)
    • Handles migration automatically
    • Includes safety checks
  2. scripts/create-raid-r630-01.sh

    • General RAID creation script
    • Supports multiple RAID levels
    • Includes safety checks

Important Notes

⚠️ Warnings

  1. Data Migration Time: Migrating ~408GB will take 1-3 hours
  2. Downtime: Some containers may need to be stopped during migration
  3. Backup Recommended: Always backup before major storage changes
  4. RAID Sync Time: Initial RAID sync takes 30-60 minutes

Safety Features

  • Scripts check disk usage before proceeding
  • Migration is done with pvmove (safe LVM operation)
  • RAID configuration is saved automatically
  • Status monitoring throughout process
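The status monitoring mentioned above reads /proc/mdstat. A small, hypothetical parsing sketch that pulls out the resync percentage, shown here against a captured sample so it runs anywhere; on the host, feed it `cat /proc/mdstat` instead:

```shell
# Extract the resync/rebuild percentage from mdstat-style output.
sample='md0 : active raid10 sdh[3] sdg[2] sdf[1] sde[0]
      488132608 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [==>..................]  resync = 12.5% (61016576/488132608) finish=35.4min speed=200000K/sec'
printf '%s\n' "$sample" | awk -F'resync = ' '/resync/ { split($2, a, "%"); print a[1] }'
# prints 12.5
```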

Next Steps

  1. Review the plan and choose approach (Option A recommended)
  2. Backup critical data (if not already backed up)
  3. Run the script or execute steps manually
  4. Monitor progress during migration
  5. Verify RAID and storage after completion
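Step 5's verification can be done with a few read-only commands; the comments describe what to expect if the steps above completed as written:

```shell
# Post-migration verification; all read-only.
cat /proc/mdstat        # md0 active, all members up ([UUUU]), sync complete
mdadm --detail /dev/md0 # array state clean, 4 working devices
pvs                     # /dev/md0 in the pve VG; sdc/sdd no longer listed
vgs pve                 # VG size reflects the new RAID capacity
```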

Ready to proceed? Run:

./scripts/create-raid10-r630-01-complete.sh

Last Updated: January 6, 2026