R630-01 RAID 10 Implementation Plan
Date: January 6, 2026
Target: Create RAID 10 with sdc-sdh (6 disks) for fast access
Current Status: sdc/sdd in use, sde-sdh available
Current Situation
Disk Status
- sda/sdb: System disks (ZFS root pool) - Cannot be used
- sdc/sdd: In use by pve VG (~408GB used, ~57GB free)
- sde-sdh: Available (4x 232.9GB SSDs)
Storage Usage
- pve VG: ~408GB used on sdc/sdd
- Thin Pools: thin1 (208GB), data (200GB)
- Available Space: ~57GB free
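The free-space figure above can be confirmed on the host with `vgs`. The snippet below parses an illustrative output line (the values are assumed from this plan, not captured live); on the actual host run `vgs --units g --noheadings -o vg_name,vg_size,vg_free pve` instead:

```shell
# Parse the free-space column from vgs output. The sample line is
# illustrative, with values taken from this plan's figures.
sample='  pve 465.26g 57.26g'
vg_free=$(echo "$sample" | awk '{ sub(/g$/, "", $3); print int($3) }')
echo "pve free: ${vg_free}GB"
```

With ~57GB free against ~408GB allocated, the existing PVs cannot absorb a `pvmove` off sdc/sdd on their own; the new RAID PV must be added to the VG first.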
Implementation Strategy
Option A: RAID 10 with 4 Disks (Simpler, Safer)
Disks: sde, sdf, sdg, sdh
Capacity: ~466GB (RAID 10)
Steps:
- Create RAID 10 with 4 disks
- Add to pve VG
- Migrate data from sdc/sdd to RAID
- Remove sdc/sdd from pve VG
- Keep RAID 10 with 4 disks (sdc/sdd remain unused or can be used separately)
Pros:
- Simpler process
- No data loss risk
- sdc/sdd can be used for other purposes later
Cons:
- Only 4 disks in RAID (less performance than 6)
- sdc/sdd not utilized
Option B: RAID 10 with All 6 Disks (More Complex)
Disks: sdc, sdd, sde, sdf, sdg, sdh
Capacity: ~700GB (RAID 10)
Steps:
- Create RAID 10 with 4 available disks (sde-sdh)
- Add to pve VG
- Migrate ALL data from sdc/sdd to RAID
- Remove sdc/sdd from pve VG
- Stop RAID array
- Create new RAID 10 with all 6 disks (sdc-sdh)
- Restore data (requires backup or manual migration)
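If Option A is completed first, a later in-place reshape may avoid the stop-and-recreate cycle in the last three steps: recent kernels (roughly 3.5+) can grow a near-layout RAID 10 by adding devices. This is a hedged sketch only, dry-run by default; verify reshape support on this host's kernel and mdadm version before relying on it:

```shell
# Dry-run sketch of growing the 4-disk array to 6 disks in place.
# DRY_RUN=1 only prints each command; device names are from this plan.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

grow_plan() {
  run mdadm /dev/md0 --add /dev/sdc /dev/sdd    # add freed disks as spares
  run mdadm --grow /dev/md0 --raid-devices=6    # reshape 4 -> 6 devices
  run pvresize /dev/md0                         # expose new capacity to LVM
}
grow_plan
```

If the reshape is unsupported, fall back to the backup/recreate steps above.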
Pros:
- Maximum performance (6 disks)
- All disks utilized
- Better redundancy
Cons:
- Complex process
- Requires data backup/restore
- Downtime during rebuild
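The capacity figures for both options follow from the RAID 10 rule that usable space is half the raw total, since every disk is mirrored in a pair. A quick sanity check, rounding the 232.9GB SSDs down to 232GB:

```shell
# RAID 10 usable capacity = (number of disks / 2) * per-disk size.
raid10_capacity_gb() {
  disks=$1; per_disk_gb=$2
  echo $(( disks / 2 * per_disk_gb ))
}

raid10_capacity_gb 4 232   # Option A: ~464 GB (plan says ~466GB)
raid10_capacity_gb 6 232   # Option B: ~696 GB (plan says ~700GB)
```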
Recommended Approach: Option A (4-Disk RAID 10)
Since sda/sdb are system disks and cannot be used for migration, the safest approach is:
Step-by-Step Process
- Install mdadm:
  apt-get update && apt-get install -y mdadm
- Create RAID 10 with 4 disks:
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sde /dev/sdf /dev/sdg /dev/sdh
- Wait for the initial sync (~30-60 minutes):
  watch cat /proc/mdstat
- Add the RAID to the pve VG:
  pvcreate /dev/md0
  vgextend pve /dev/md0
- Migrate data from sdc (1-2 hours depending on data size):
  pvmove /dev/sdc /dev/md0
- Migrate data from sdd (1-2 hours depending on data size):
  pvmove /dev/sdd /dev/md0
- Remove sdc/sdd from the pve VG:
  vgreduce pve /dev/sdc /dev/sdd
  pvremove /dev/sdc /dev/sdd
- Save the RAID configuration so the array assembles at boot:
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  update-initramfs -u
- Add the RAID to Proxmox storage (optional):
  - Via web UI: Datacenter > Storage > Add > LVM-Thin
  - Or via CLI (the thin pool must exist first, e.g. created with lvcreate):
    pvesm add lvmthin local-raid10 --vgname pve --thinpool raid10-thin
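The steps above can be sketched as a single reviewable dry run: with DRY_RUN=1 (the default here) it only prints each command in order, so the sequence can be checked before touching any disks. Device names match this plan and are assumptions on any other machine:

```shell
# Option A runbook as a dry run. Unset DRY_RUN on the host (after a
# backup) to execute for real, ideally one step at a time.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

option_a() {
  run mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sde /dev/sdf /dev/sdg /dev/sdh
  run pvcreate /dev/md0
  run vgextend pve /dev/md0
  run pvmove /dev/sdc /dev/md0        # 1-2 hours
  run pvmove /dev/sdd /dev/md0        # 1-2 hours
  run vgreduce pve /dev/sdc /dev/sdd
  run pvremove /dev/sdc /dev/sdd
}
option_a
```

The mdadm config save and initramfs update are deliberately left out of the dry run; run them manually once the array is confirmed healthy.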
Performance Expectations
RAID 10 with 4 Disks (sde-sdh)
- Read Speed: ~4x single disk (~400-600 MB/s)
- Write Speed: ~2x single disk (~200-300 MB/s)
- IOPS: Significantly improved
- Redundancy: Survives 1 disk failure guaranteed; up to 2 if the failures hit different mirror pairs
RAID 10 with 6 Disks (if implemented later)
- Read Speed: ~6x single disk (~600-900 MB/s)
- Write Speed: ~3x single disk (~300-450 MB/s)
- IOPS: Highest of the two options
- Redundancy: Survives 1 disk failure guaranteed; up to 3 if each failure hits a different mirror pair
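The MB/s ranges above follow from the usual RAID 10 rules of thumb: reads can be striped across all disks, writes across half of them (each write goes to both halves of a mirror pair). The 150 MB/s per-SSD figure below is an assumption for illustration; measure the real value on the host, e.g. with fio:

```shell
# Throughput rule of thumb for RAID 10:
#   read  ~= disks * single-disk speed
#   write ~= (disks / 2) * single-disk speed
single_mb=150   # assumed per-SSD sequential speed

estimate() {
  echo "$1 disks: read ~$(($1 * single_mb)) MB/s, write ~$(($1 / 2 * single_mb)) MB/s"
}

estimate 4   # -> read ~600 MB/s, write ~300 MB/s
estimate 6   # -> read ~900 MB/s, write ~450 MB/s
```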
Scripts Available
- scripts/create-raid10-r630-01-complete.sh - Complete automation for Option A (4-disk RAID 10)
  - Handles migration automatically
  - Includes safety checks
- scripts/create-raid-r630-01.sh - General RAID creation script
  - Supports multiple RAID levels
  - Includes safety checks
Important Notes
⚠️ Warnings
- Data Migration Time: Migrating ~408GB will take 1-3 hours
- Downtime: Some containers may need to be stopped during migration
- Backup Recommended: Always backup before major storage changes
- RAID Sync Time: Initial RAID sync takes 30-60 minutes
✅ Safety Features
- Scripts check disk usage before proceeding
- Migration is done with pvmove (a safe, online LVM operation)
- RAID configuration is saved automatically
- Status monitoring throughout the process
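Status monitoring during sync and migration mostly means watching /proc/mdstat. The snippet below pulls the resync percentage out of an illustrative mdstat fragment (sample text assumed, not captured from this host); on the host, the same grep runs directly against /proc/mdstat:

```shell
# Extract resync progress from mdstat-style output. On the host:
#   grep -o 'resync = [0-9.]*%' /proc/mdstat
# The sample is illustrative output for a 4-disk RAID 10 mid-sync.
sample='md0 : active raid10 sdh[3] sdg[2] sdf[1] sde[0]
      [=====>...............]  resync = 28.0% (136000000/488132608)'
echo "$sample" | grep -o 'resync = [0-9.]*%'
```

`mdadm --detail /dev/md0` gives the same information plus per-device state, and `pvmove` prints its own percentage progress while migrating.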
Next Steps
- Review the plan and choose approach (Option A recommended)
- Backup critical data (if not already backed up)
- Run the script or execute steps manually
- Monitor progress during migration
- Verify RAID and storage after completion
Ready to proceed? Run:
./scripts/create-raid10-r630-01-complete.sh
Last Updated: January 6, 2026