# Storage Migration Issue - pve2 Configuration
**Date**: $(date)
**Issue**: Container migrations failing due to storage configuration mismatch
## Problem
Container migrations from ml110 to pve2 are failing with the error:
```
Volume group "pve" not found
ERROR: storage migration for 'local-lvm:vm-XXXX-disk-0' to storage 'local-lvm' failed
```
## Root Cause
**ml110** (source):
- Has `local-lvm` storage **active**
- Uses volume group named **"pve"** (standard Proxmox setup)
- Containers stored on `local-lvm:vm-XXXX-disk-0`

**pve2** (target):
- Has `local-lvm` storage but it's **INACTIVE**
- Has volume groups named **lvm1, lvm2, lvm3, lvm4, lvm5, lvm6** instead of "pve"
- Storage is not properly configured for Proxmox
## Storage Status
### ml110 Storage
```
local-lvm: lvmthin, active, 832GB total, 108GB used
Volume Group: pve (standard)
```
### pve2 Storage
```
local-lvm: lvmthin, INACTIVE, 0GB available
Volume Groups: lvm1, lvm2, lvm3, lvm4, lvm5, lvm6 (non-standard)
```
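Given `pvesm status` output like the above, inactive storages can also be flagged programmatically. A minimal sketch (the sample output below is illustrative, not captured from pve2):

```shell
# Sketch: list any storage that `pvesm status` reports as not active.
# sample_status stands in for: ssh root@192.168.11.12 "pvesm status"
sample_status='Name             Type     Status        Total     Used  Available        %
local             dir   active     98497780 12345678   86152102   12.53%
local-lvm     lvmthin inactive            0        0          0    0.00%'

# Skip the header row; print the name of every storage whose Status is not "active".
inactive=$(printf '%s\n' "$sample_status" | awk 'NR > 1 && $3 != "active" { print $1 }')
echo "$inactive"
```

The same one-liner works piped directly from `ssh`, which makes it easy to drop into a monitoring check.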
## Solutions
### Option 1: Configure pve2's local-lvm Storage (Recommended)
1. **Rename/create "pve" volume group on pve2**:
```bash
# On pve2, check current LVM setup
ssh root@192.168.11.12 "vgs; lvs"
# If one of lvm1..lvm6 holds no data, rename it (verify with lvs first; destructive if wrong):
ssh root@192.168.11.12 "vgrename lvm1 pve"
# lvmthin storage also needs a thin pool inside the volume group, e.g.:
ssh root@192.168.11.12 "lvcreate --type thin-pool -l 95%FREE -n data pve"
```
2. **Activate local-lvm storage on pve2**:
```bash
# Check storage configuration
ssh root@192.168.11.12 "cat /etc/pve/storage.cfg"
# Point local-lvm at the renamed volume group and thin pool, then confirm it activates:
ssh root@192.168.11.12 "pvesm set local-lvm --vgname pve --thinpool data"
ssh root@192.168.11.12 "pvesm status"
```
### Option 2: Migrate to Different Storage on pve2
Use `local` (directory storage) instead of `local-lvm`:
```bash
# Migrate with an explicit target storage (pct takes the mapping as --target-storage)
pct migrate <VMID> pve2 --restart --target-storage local
```
**Pros**: Works immediately, no storage reconfiguration needed
**Cons**: Directory storage is slower than LVM thin provisioning
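The per-container command above can be wrapped in a loop over `pct list`. A dry-run sketch that only prints the commands instead of executing them (the container list below is illustrative, not ml110's actual inventory):

```shell
# Dry-run sketch: emit one migrate command per container on the source node.
# sample_list stands in for: pct list   (run on ml110; VMIDs are illustrative)
sample_list='VMID       Status     Lock         Name
100        running                 ct-web
101        stopped                 ct-db'

# Skip the header row; build the commands without executing them.
cmds=$(printf '%s\n' "$sample_list" | awk 'NR > 1 { print "pct migrate " $1 " pve2 --restart --target-storage local" }')
echo "$cmds"
```

Piping the generated commands through `sh` (after review) would execute the batch; keeping the generation and execution steps separate makes it easy to sanity-check before touching production containers.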
### Option 3: Use Shared Storage
Configure shared storage (NFS, Ceph, etc.) accessible from both nodes:
```bash
# Example: add an NFS share as cluster-wide storage (server and export path are placeholders)
pvesm add nfs shared-nfs --server <NFS_SERVER_IP> --export /export/proxmox --content rootdir,images
# Move each container's root disk onto the shared storage, then migrate normally
pct move-volume <VMID> rootfs shared-nfs
```
## Immediate Workaround
Until pve2's local-lvm is properly configured:
1. **Pause migrations** from ml110
2. **Fix pve2's storage** (Option 1 above)
3. **Retry the failed migrations**
## Next Steps
1. ⏳ Investigate pve2's LVM configuration
2. ⏳ Configure local-lvm storage on pve2 with "pve" volume group
3. ⏳ Verify storage is active and working
4. ⏳ Retry container migrations
## Verification Commands
```bash
# Check pve2 storage status
ssh root@192.168.11.12 "pvesm status"
# Check volume groups
ssh root@192.168.11.12 "vgs"
# Check local-lvm configuration
ssh root@192.168.11.12 "cat /etc/pve/storage.cfg | grep -A 5 local-lvm"
```
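Step 2 of the plan above has a clear pass/fail condition: the "pve" volume group must exist on pve2. A sketch that checks captured `vgs` output (the sample below is illustrative, not pve2's current state):

```shell
# Sketch: confirm the "pve" volume group exists in captured `vgs` output.
# sample_vgs stands in for: ssh root@192.168.11.12 "vgs --noheadings -o vg_name"
sample_vgs='  pve
  lvm2'

# grep -w matches "pve" as a whole word, so "pve2" or "lvm-pve-old" would not count.
if printf '%s\n' "$sample_vgs" | grep -qw 'pve'; then
  vg_check="ok: pve volume group present"
else
  vg_check="missing: pve volume group"
fi
echo "$vg_check"
```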
---
**Status**: ⚠️ Migrations paused pending storage configuration fix