r630-02 Orphaned Storage Analysis
Date: 2025-01-20
Status: Documented - Storage exists but VMs not registered
Summary
r630-02 has storage volumes that are not associated with any registered VM. They appear to be orphaned storage left over from previous deployments or migrations.
Storage Analysis
thin1 Storage (226GB total, 52.35% used)
Orphaned Volumes Found:
- thin1:vm-100-disk-0 - 10GB (VMID 100)
- thin1:vm-101-disk-0 - 10GB (VMID 101)
- thin1:vm-102-disk-0 - 2GB (VMID 102)
- thin1:vm-103-disk-0 - 8GB (VMID 103)
- thin1:vm-104-disk-0 - 8GB (VMID 104)
- thin1:vm-105-disk-0 - 8GB (VMID 105)
- thin1:vm-130-disk-0 - 50GB (VMID 130)
- thin1:vm-5000-disk-0 - 100GB (VMID 5000)
- thin1:vm-6200-disk-0 - 50GB (VMID 6200)
Total Orphaned: ~246GB (124GB used)
thin4 Storage (226GB total, 16.03% used)
Orphaned Volumes Found:
- thin4:vm-7800-disk-0 - 50GB (VMID 7800)
- thin4:vm-7801-disk-0 - 50GB (VMID 7801)
- thin4:vm-7802-disk-0 - 30GB (VMID 7802)
- thin4:vm-7810-disk-0 - 50GB (VMID 7810)
- thin4:vm-7811-disk-0 - 30GB (VMID 7811)
Total Orphaned: ~210GB (38GB used)
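The per-pool totals above can be re-checked mechanically from captured `pvesm list` output. A minimal sketch, assuming pvesm's usual Volid/Format/Type/Size/VMID column order with Size in bytes; the sample lines below are an illustrative subset, not a full capture:

```shell
# Sample capture of `pvesm list thin1` output (illustrative subset);
# on the host you would run: pvesm list thin1 > /tmp/thin1.txt
cat <<'EOF' > /tmp/thin1.txt
thin1:vm-100-disk-0 raw images 10737418240 100
thin1:vm-101-disk-0 raw images 10737418240 101
thin1:vm-5000-disk-0 raw images 107374182400 5000
EOF

# Sum column 4 (size in bytes) for vm-* volumes and report in GB
awk '/vm-/ {sum += $4} END {printf "%.0fGB orphaned\n", sum/1024/1024/1024}' /tmp/thin1.txt
```

Running the same awk filter over the full thin1 and thin4 listings should reproduce the ~246GB and ~210GB figures.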
Verification
VM Registration Status
Checked on all nodes:
- ml110: VMs with similar IDs exist (1000-1004, 10100-10151, 9000), but none match the orphaned VMIDs
- r630-01: No VMs found
- r630-02: No VMs registered (pct list and qm list both return empty)
Configuration Files:
- /etc/pve/nodes/r630-02/lxc/ - Empty
- /etc/pve/nodes/r630-02/qemu-server/ - Empty
Conclusion: Storage volumes exist but VMs are not registered on r630-02.
Possible Causes
- VM Migration: VMs were migrated to another node but storage wasn't cleaned up
- VM Deletion: VMs were deleted but storage volumes weren't removed
- Storage Migration: Storage was migrated but VM registrations weren't updated
- Manual Cleanup: Previous manual cleanup left storage behind
Recommendations
Option 1: Clean Up Orphaned Storage (Recommended)
If these VMs are no longer needed:
# On r630-02
ssh root@192.168.11.12
# List orphaned volumes
pvesm list thin1 | grep -E "vm-100|vm-101|vm-102|vm-103|vm-104|vm-105|vm-130|vm-5000|vm-6200"
pvesm list thin4 | grep -E "vm-7800|vm-7801|vm-7802|vm-7810|vm-7811"
# Remove orphaned volumes (CAREFUL - this deletes data!)
# pvesm free <volume-id>
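When freeing many volumes, a dry-run loop that only prints the commands reduces the chance of a typo; a sketch over the thin1 VMIDs listed above (remove the echo only after every volume has been double-checked against the pvesm listing):

```shell
# Dry run: print (do not execute) one pvesm free command per orphaned volume.
# Volume names follow the thin1:vm-<VMID>-disk-0 pattern documented above.
for vmid in 100 101 102 103 104 105 130 5000 6200; do
  echo "pvesm free thin1:vm-${vmid}-disk-0"
done
```

The same loop with the thin4 VMIDs (7800, 7801, 7802, 7810, 7811) covers the second pool.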
Benefits:
- Frees up ~246GB on thin1
- Frees up ~210GB on thin4
- Total: ~456GB recovered
Risks:
- Data loss if VMs are still needed
- Cannot recover once deleted
Option 2: Keep Storage (If VMs May Be Needed)
If these VMs might be needed later:
- Document the orphaned storage
- Monitor storage usage
- Keep until storage is needed
- Can clean up later if confirmed unused
Benefits:
- No risk of data loss
- Can recover VMs if needed
Drawbacks:
- Uses storage space
- May cause confusion
Option 3: Investigate Further
Before cleaning up:
- Check if VMs exist on other nodes with these VMIDs
- Check backup systems for these VMIDs
- Verify with team if these VMs are needed
- Check logs for migration/deletion history
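The cross-node check above can be scripted against /etc/pve, which is shared cluster-wide, so it can run from any one node. A sketch using the node names from this cluster and a representative subset of the orphaned VMIDs (VM configs live under qemu-server/, container configs under lxc/):

```shell
# For each orphaned VMID, look for a config file registered on any known node.
for vmid in 100 130 5000 6200 7800; do
  for node in ml110 r630-01 r630-02; do
    for kind in qemu-server lxc; do
      conf="/etc/pve/nodes/${node}/${kind}/${vmid}.conf"
      if [ -e "$conf" ]; then
        echo "registered: $conf"
      fi
    done
  done
done
echo "scan complete"
```

Any "registered:" line means the VMID is still in use somewhere and its volumes must not be freed.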
Current Impact
Storage Usage:
- thin1: 52.35% used (124GB of 226GB) - 113GB available
- thin4: 16.03% used (38GB of 226GB) - 190GB available
Available Storage:
- thin1: 113GB available (despite orphaned volumes)
- thin4: 190GB available
- Other thin pools: 226GB each (thin2, thin3, thin5, thin6)
Total Available: ~1.2TB+ (sufficient for current needs)
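The ~1.2TB figure is just the available space on thin1 and thin4 plus the four untouched 226GB pools; as a quick arithmetic check:

```shell
# thin1 (113GB) + thin4 (190GB) + four free 226GB pools (thin2/3/5/6)
echo $((113 + 190 + 4 * 226))GB
```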
Action Items
Immediate
- Document orphaned storage ✅
- Verify with team if VMs are needed
- Check if VMs exist elsewhere
Optional
- Clean up orphaned storage (if confirmed unused)
- Monitor storage usage
- Plan for storage cleanup
Commands Reference
List Orphaned Volumes
ssh root@192.168.11.12
pvesm list thin1
pvesm list thin4
Check VM Registration
pct list
qm list
ls -la /etc/pve/nodes/r630-02/lxc/
ls -la /etc/pve/nodes/r630-02/qemu-server/
Remove Orphaned Volumes (CAREFUL!)
# List first
pvesm list thin1 | grep vm-100
# Remove (example - verify first!)
# pvesm free thin1:vm-100-disk-0
Last Updated: 2025-01-20
Status: Documented - Action required based on team decision