# r630-02 Storage Review and Recommendations

**Last Updated:** 2026-01-31
**Document Version:** 1.0
**Status:** Active Documentation

---

**Date:** 2026-01-15
**Host:** r630-02 (192.168.11.12)
**Status:** ✅ **REVIEW COMPLETE**

---

## Executive Summary

**Total Storage:** ~1.86TB (8 x 232.9GB disks)
**Critical Issues:** 2 pools at 88%+ capacity, 1 container at 94.8% full
**Immediate Actions Required:** Container 7811 disk expansion, thin2 cleanup
**Potential Space Recovery:** ~300GB from duplicate volumes and old backups

---

## Storage Architecture

### Physical Disks

| Disk | Size | Type | Usage | Status |
|------|------|------|-------|--------|
| sda | 232.9GB | ZFS | System/Boot | Active |
| sdb | 232.9GB | ZFS | System/Boot | Active |
| sdc | 232.9GB | LVM | thin2 pool | **88.33% full** |
| sdd | 232.9GB | LVM | thin3 pool | 0% (empty) |
| sde | 232.9GB | LVM | thin4 pool | 12.69% used |
| sdf | 232.9GB | LVM | thin5 pool | 0% (empty) |
| sdg | 232.9GB | LVM | thin6 pool | 0% (empty) |
| sdh | 232.9GB | LVM | thin1 pool | **88.51% full** |

**Total Raw Capacity:** ~1.86TB
**Total Usable (LVM Thin Pools):** ~1.36TB (6 x 226GB pools)

---

## Storage Pools Status

### Active Pools

| Pool | Type | Status | Total | Used | Available | Usage % | Priority |
|------|------|--------|-------|------|-----------|---------|----------|
| **thin1-r630-02** | lvmthin | Active | 226GB | 200GB | 26GB | **88.51%** | 🔴 Critical |
| **thin2** | lvmthin | Active | 226GB | 200GB | 27GB | **88.33%** | 🔴 Critical |
| **thin4** | lvmthin | Active | 226GB | 29GB | 197GB | 12.69% | 🟢 Healthy |
| thin3 | lvmthin | Active | 226GB | 0GB | 226GB | 0.00% | 🟢 Available |
| thin5 | lvmthin | Active | 226GB | 0GB | 226GB | 0.00% | 🟢 Available |
| thin6 | lvmthin | Active | 226GB | 0GB | 226GB | 0.00% | 🟢 Available |
| local | dir | Active | 220GB | 7.3GB | 213GB | 3.31% | 🟢 Healthy |

### Inactive/Problematic Pools

| Pool | Type | Status | Issue |
|------|------|--------|-------|
| data | lvmthin | Inactive | No logical volume found |
| thin1 | lvmthin | Inactive | No logical volume found |
| local-lvm | lvmthin | Disabled | Not configured |

---

## Container Disk Usage

### Current Allocation

| Container | Name | Pool | Allocated | Used | Available | Usage % | Status |
|-----------|------|------|-----------|------|-----------|---------|--------|
| **5000** | blockscout-1 | thin2 | 200GB | 125.2GB | 60.6GB | 63.9% | 🟡 Moderate |
| **6200** | firefly-1 | thin2 | 50GB | 3.3GB | 43.1GB | 6.7% | 🟢 Healthy |
| **6201** | firefly-ali-1 | thin2 | 50GB | 2.8GB | 43.6GB | 5.7% | 🟢 Healthy |
| **7811** | mim-api-1 | thin4 | 30GB | 27.8GB | **0GB** | **94.8%** | 🔴 **CRITICAL** |

### Critical Issues

1. **Container 7811 (mim-api-1):** 94.8% full, 0GB available - **IMMEDIATE ACTION REQUIRED**
2. **thin2 Pool:** 88.33% full with 3 containers - **NEEDS CLEANUP**
3. **thin1-r630-02 Pool:** 88.51% full - **NEEDS INVESTIGATION**

---

## Duplicate Volumes Issue

### Problem Identified

**Duplicate volumes exist on both thin1 and thin2:**

| Container | thin1 Volume | thin2 Volume | Status |
|-----------|--------------|--------------|--------|
| 5000 | vm-5000-disk-0 (200GB) | vm-5000-disk-0 (200GB) | ⚠️ Duplicate |
| 6200 | vm-6200-disk-0 (50GB) | vm-6200-disk-0 (50GB) | ⚠️ Duplicate |
| 6201 | vm-6201-disk-0 (50GB) | vm-6201-disk-0 (50GB) | ⚠️ Duplicate |

**Total Duplicate Space:** ~300GB (200GB + 50GB + 50GB)

**Analysis:**
- Containers are currently using the thin2 volumes (confirmed via `pct config`)
- The thin1 volumes appear to be orphaned/unused
- This explains why thin1-r630-02 shows 88.51% usage despite hosting no active containers

---

## Snapshots and Backups

### Snapshots

**Current Snapshots:** None (all containers show only the "current" snapshot)

**Recommendation:** Consider creating snapshots before major changes.
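Before the expansion and volume-removal work below, a pre-change snapshot of each container is cheap insurance on lvmthin storage. A minimal sketch that only *generates* the `pct snapshot` commands so they can be reviewed first; the container IDs come from the allocation table above, and the snapshot-name convention is an assumption:

```shell
#!/bin/sh
# Generate one `pct snapshot` command per container, dated so repeated runs
# produce distinct snapshot names. Nothing is executed against the host here.
SNAP_NAME="pre-change-$(date +%Y%m%d)"
CMDS=$(for CTID in 5000 6200 6201 7811; do
  echo "pct snapshot $CTID $SNAP_NAME"
done)
echo "$CMDS"
```

After reviewing the printed commands, they could be piped to the host, e.g. `... | ssh root@192.168.11.12 sh`.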
### Backup Files

**Location:** `/var/lib/vz/dump`
**Total Size:** 7.2GB
**Count:** 13 backup files
**Date Range:** January 3-7, 2026

**Largest Backup Files:**
- vzdump-lxc-103-2026_01_07-12_38_26.tar.gz (1.4GB)
- vzdump-lxc-7811-2026_01_03-14_57_45.tar.gz (1.3GB)
- vzdump-lxc-130-2026_01_07-12_46_28.tar.gz (1.1GB)
- vzdump-lxc-7810-2026_01_03-14_55_23.tar.gz (781MB)
- vzdump-lxc-7810-2026_01_03-14_03_16.tar.gz (781MB)

**Note:** These backups are from containers that may no longer exist (7800, 7801, 7802, 7810, 103, 104, 105, 130).

---

## Detailed Recommendations

### 🔴 **IMMEDIATE ACTIONS (Critical)**

#### 1. Expand Container 7811 Disk (URGENT)

**Issue:** Container 7811 is 94.8% full with 0GB available.

**Action:**
```bash
# Expand container 7811's disk from 30GB to 50GB
ssh root@192.168.11.12 "pct resize 7811 rootfs +20G"
```

**Impact:** Prevents the container from running out of space and crashing.

**Priority:** **CRITICAL** - Do immediately

---

#### 2. Investigate and Clean Up Duplicate Volumes on thin1

**Issue:** ~300GB of duplicate volumes on the thin1-r630-02 pool.

**Steps:**

1. **Verify the thin1 volumes are not in use:**
   ```bash
   ssh root@192.168.11.12 "lvs -o lv_name,vg_name,attr | grep -E 'vm-5000|vm-6200|vm-6201' | grep thin1"
   ```

2. **Check whether any containers reference thin1:**
   ```bash
   ssh root@192.168.11.12 "for CTID in 5000 6200 6201; do echo \"Container \$CTID:\"; pct config \$CTID | grep rootfs; done"
   ```

3. **If confirmed unused, remove the duplicate volumes:**
   ```bash
   # WARNING: Only if confirmed unused!
   ssh root@192.168.11.12 "lvremove /dev/thin1/vm-5000-disk-0"
   ssh root@192.168.11.12 "lvremove /dev/thin1/vm-6200-disk-0"
   ssh root@192.168.11.12 "lvremove /dev/thin1/vm-6201-disk-0"
   ```

**Potential Space Recovery:** ~300GB on thin1-r630-02

**Priority:** **HIGH** - Frees significant space

---

### 🟡 **SHORT-TERM ACTIONS (Within 1 Week)**

#### 3. Review and Clean Up Old Backup Files

**Issue:** 7.2GB of backup files from January 3-7, 2026, possibly from deleted containers.
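The cleanup keys on the container ID embedded in each vzdump file name (`vzdump-lxc-<CTID>-<timestamp>.tar.gz`). A self-contained demo of the orphan check, with two sample file names from this host standing in for the dump directory and a hard-coded list standing in for `pct list` output:

```shell
#!/bin/sh
# Flag backups whose embedded container ID is absent from the active list.
# Sample data only: /tmp/active_containers.txt mimics `pct list` output,
# and the loop mimics iterating over /var/lib/vz/dump/*.tar.gz.
printf '5000\n6200\n6201\n7811\n' > /tmp/active_containers.txt
ORPHANS=$(for f in \
    vzdump-lxc-7811-2026_01_03-14_57_45.tar.gz \
    vzdump-lxc-103-2026_01_07-12_38_26.tar.gz
do
  # Extract the container ID after "lxc-" (GNU grep's -P is assumed)
  CTID=$(echo "$f" | grep -oP 'lxc-\K[0-9]+')
  grep -q "^$CTID\$" /tmp/active_containers.txt || echo "Orphaned: $f"
done)
echo "$ORPHANS"
```

With this sample data only the 103 backup is reported, since 7811 is still an active container.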
**Action:**

1. **Identify backups for non-existent containers:**
   ```bash
   # Build the active-container list (skip the header row of `pct list`)
   ssh root@192.168.11.12 "pct list | awk 'NR>1 {print \$1}' > /tmp/active_containers.txt"
   ssh root@192.168.11.12 "for file in /var/lib/vz/dump/*.tar.gz; do CTID=\$(echo \$file | grep -oP 'lxc-\K[0-9]+'); if ! grep -q \"^\$CTID\$\" /tmp/active_containers.txt; then echo \"Orphaned: \$file\"; fi; done"
   ```

2. **Move old backups to an archive, or delete them:**
   ```bash
   # Archive backups older than 30 days (creating the archive directory first)
   ssh root@192.168.11.12 "mkdir -p /var/lib/vz/archive && find /var/lib/vz/dump -name '*.tar.gz' -mtime +30 -exec mv {} /var/lib/vz/archive/ \;"
   ```

**Potential Space Recovery:** ~7GB on local storage

**Priority:** **MEDIUM** - Good housekeeping

---

#### 4. Optimize thin2 Pool Usage

**Issue:** thin2 is 88.33% full with 3 containers.

**Options:**

**Option A: Move containers to empty pools**
- Move container 6200 or 6201 to thin3, thin5, or thin6
- Reduces thin2 usage to ~70%

**Option B: Expand thin2 pool** (if possible)
- Check if the physical disk can be expanded
- Requires additional storage hardware

**Option C: Clean up container 5000**
- Container 5000 uses 125.2GB of 200GB
- Review whether data can be cleaned up or archived

**Recommended:** **Option A** - Move container 6201 (stopped) to thin3, e.g. with `pct move-volume 6201 rootfs thin3` (spelled `pct move_volume` on older Proxmox VE releases)

**Priority:** **MEDIUM** - Prevents future issues

---

### 🟢 **LONG-TERM ACTIONS (Within 1 Month)**

#### 5. Implement Backup Retention Policy

**Action:**
- Configure automated backup cleanup
- Keep only the last 7 days of daily backups
- Keep weekly backups for 4 weeks
- Keep monthly backups for 6 months

**Implementation:**
```bash
# Add to cron or a Proxmox backup job. Note this enforces only a flat 30-day
# cutoff; the tiered daily/weekly/monthly schedule above is better handled by
# the `prune-backups` retention settings on the backup storage or job.
find /var/lib/vz/dump -name '*.tar.gz' -mtime +30 -delete
```

**Priority:** **LOW** - Prevents future accumulation

---

#### 6. Monitor Storage Usage

**Action:**
- Set up alerts for pools >80% full
- Monitor container disk usage weekly
- Review storage allocation quarterly

**Priority:** **LOW** - Preventive maintenance

---

#### 7. Plan for Storage Expansion

**Current State:**
- 2 pools at 88%+ capacity
- 1 container at 94.8% full
- 3 empty pools available (thin3, thin5, thin6)

**Recommendation:**
- Utilize empty pools for new containers
- Consider storage expansion if growth continues
- Plan for 6-12 months of capacity

**Priority:** **LOW** - Strategic planning

---

## Space Recovery Summary

| Action | Potential Recovery | Pool | Priority |
|--------|-------------------|------|----------|
| Remove duplicate thin1 volumes | ~300GB | thin1-r630-02 | 🔴 High |
| Clean up old backups | ~7GB | local | 🟡 Medium |
| Move containers to empty pools | N/A (redistribution) | thin2 → thin3/5/6 | 🟡 Medium |
| **Total Potential Recovery** | **~307GB** | | |

---

## Action Plan Summary

### Immediate (Today)
1. ✅ Expand container 7811 disk to 50GB
2. ✅ Verify and remove duplicate thin1 volumes

### Short-term (This Week)
3. Review and archive old backup files
4. Move container 6201 to the thin3 pool

### Long-term (This Month)
5. Implement backup retention policy
6. Set up storage monitoring
7. Plan for future expansion

---

## Risk Assessment

### Low Risk Actions
- ✅ Expanding container 7811 disk (non-destructive)
- ✅ Moving stopped container 6201 to thin3
- ✅ Cleaning up old backup files

### Medium Risk Actions
- ⚠️ Removing duplicate thin1 volumes (verify first!)
- ⚠️ Moving running containers (requires downtime)

### High Risk Actions
- ❌ None identified

---

## Storage Utilization Summary

**Current Utilization:**
- **Used:** ~430GB (23% of total)
- **Available:** ~1.43TB (77% of total)
- **Critical Pools:** 2 (thin1-r630-02, thin2)
- **Empty Pools:** 3 (thin3, thin5, thin6)

**After Recommended Actions:**
- **Used:** ~130GB (7% of total)
- **Available:** ~1.73TB (93% of total)
- **Critical Pools:** 0
- **Empty Pools:** 3 (thin3, thin5, thin6)

---

**Last Updated:** 2026-01-15
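The >80% alert from recommendation #6 can be sketched as an `awk` filter over `pvesm status`-style output. The sample lines below stand in for the live command, and the column layout (name, type, status, total, used, available, usage %) is an assumption to verify against the actual host before deploying:

```shell
#!/bin/sh
# Print an ALERT line for every pool whose usage percentage (last column)
# exceeds the threshold. A here-document supplies sample pool data.
THRESHOLD=80
ALERTS=$(awk -v t="$THRESHOLD" '{ p = $7; sub(/%/, "", p); if (p + 0 > t) print "ALERT: " $1 " at " $7 }' <<'EOF'
thin1-r630-02 lvmthin active 226G 200G 26G 88.51%
thin2 lvmthin active 226G 200G 27G 88.33%
thin4 lvmthin active 226G 29G 197G 12.69%
EOF
)
echo "$ALERTS"
```

On the host, the here-document would be replaced by piping `pvesm status` into the filter; run from cron, any non-empty output could be mailed or posted to a webhook.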