# Ceph-Proxmox Integration Guide

**Last Updated**: 2024-12-19

## Overview

This guide covers integrating Ceph storage with Proxmox VE for distributed storage across the cluster.

## Storage Types

### RBD (RADOS Block Device)

**Use Case**: VM disk images, high-performance block storage

**Features**:

- Thin provisioning
- Snapshots
- Live migration support
- High IOPS

### CephFS

**Use Case**: ISO images, backups, snippets, shared storage

**Features**:

- POSIX-compliant filesystem
- Shared access
- Snapshots
- Quotas

## Integration Steps

### Step 1: Verify Ceph Cluster

```bash
# Check cluster status
ceph -s

# Verify pools exist
ceph osd pool ls
```

### Step 2: Add RBD Storage

**Via CLI**:

```bash
# Run once on any node; the storage definition is cluster-wide
pvesm add rbd ceph-rbd \
  --pool rbd \
  --monhost 192.168.11.10,192.168.11.11 \
  --username admin \
  --content images,rootdir \
  --krbd 1
```

**Via Web UI**:

1. Datacenter → Storage → Add
2. Select "RBD"
3. Configure:
   - ID: `ceph-rbd`
   - Pool: `rbd`
   - Monitor Host: `192.168.11.10,192.168.11.11`
   - Username: `admin`
   - Content: Images, Rootdir
   - Enable KRBD: Yes

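Whichever method is used, Proxmox records the storage as an entry in the cluster-wide `/etc/pve/storage.cfg`. The resulting RBD entry should look roughly like this (key order and exact keys can vary by PVE version):

```
rbd: ceph-rbd
        content images,rootdir
        krbd 1
        monhost 192.168.11.10,192.168.11.11
        pool rbd
        username admin
```
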
### Step 3: Add CephFS Storage

**Via CLI**:

```bash
# Run once on any node; the storage definition is cluster-wide
pvesm add cephfs ceph-fs \
  --monhost 192.168.11.10,192.168.11.11 \
  --username admin \
  --fsname cephfs \
  --content iso,backup,snippets
```

**Via Web UI**:

1. Datacenter → Storage → Add
2. Select "CephFS"
3. Configure:
   - ID: `ceph-fs`
   - Monitor Host: `192.168.11.10,192.168.11.11`
   - Username: `admin`
   - Filesystem: `cephfs`
   - Content: ISO, Backup, Snippets

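As with RBD, the definition lands in `/etc/pve/storage.cfg`; the CephFS entry should look roughly like this (key names such as `fs-name` vary slightly across PVE versions):

```
cephfs: ceph-fs
        content iso,backup,snippets
        fs-name cephfs
        monhost 192.168.11.10,192.168.11.11
        username admin
```
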
## Using Ceph Storage

### Creating VMs with RBD Storage

1. **Via Web UI**:
   - Create VM → Hard Disk
   - Storage: Select `ceph-rbd`
   - Configure size and format

2. **Via CLI**:

   ```bash
   qm create 100 --name test-vm --memory 2048
   qm set 100 --scsi0 ceph-rbd:20   # allocate a 20 GiB disk on ceph-rbd
   ```

### Migrating VMs to Ceph

```bash
# Move disk to Ceph storage
qm disk move 100 scsi0 ceph-rbd --delete

# Or via Web UI:
# VM → Hardware → Disk → Move Storage
```

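To move many VMs at once, the same command can be scripted. A minimal sketch, assuming every VM's boot disk is `scsi0` (adjust for your disk layout):

```bash
# Extract VMIDs from `qm list` output piped on stdin
# (skips the header line; the first column is the VMID).
vmids() {
  awk 'NR > 1 { print $1 }'
}

# On a live cluster (hypothetical usage, not run here):
# for id in $(qm list | vmids); do
#   qm disk move "$id" scsi0 ceph-rbd --delete
# done
```
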
## Monitoring

### Ceph Dashboard

```bash
# Enable dashboard
ceph mgr module enable dashboard

# Create user
ceph dashboard ac-user-create admin <password> administrator

# Access: https://ml110-01.sankofa.nexus:8443
```

### Proxmox Storage Status

```bash
# Check storage status and usage
pvesm status

# List content on a specific storage
pvesm list ceph-rbd
```

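For scripted checks, the status commands can be wrapped in a small health gate. A minimal sketch that greps the plain-text health line (in real scripts, `ceph -s --format json` is more robust to parse):

```bash
# Succeeds only when the health output on stdin reports HEALTH_OK.
ceph_healthy() {
  grep -q 'HEALTH_OK'
}

# On a live cluster (not run here):
# ceph health | ceph_healthy || logger "ceph cluster degraded"
```
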
## Best Practices

1. **Pool Configuration**:
   - Use separate pools for different workloads
   - Configure an appropriate PG count
   - Set `size` and `min_size` appropriately (e.g. 3/2 for production)

2. **Performance**:
   - Use SSDs for OSD journal/DB devices (if available)
   - Configure network bonding for redundancy
   - Monitor OSD performance

3. **Backup**:
   - Use CephFS for backups
   - Configure snapshot schedules
   - Test restore procedures

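The "appropriate PG count" is commonly estimated with the rule of thumb `(OSDs × 100) / replica size`, rounded up to a power of two. A sketch only; on modern Ceph the `pg_autoscaler` module can manage this for you:

```bash
# Rule-of-thumb PG count: (osds * 100 / size), rounded up to a power of two.
pg_count() {
  osds=$1
  size=$2
  target=$(( osds * 100 / size ))
  pg=1
  while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
  done
  echo "$pg"
}

pg_count 6 3   # 6 OSDs, 3 replicas -> prints 256
```
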
## Troubleshooting

### Storage Not Available

```bash
# Check Ceph status
ceph -s

# Check OSD status
ceph osd tree

# Check storage in Proxmox
pvesm status
```

### Performance Issues

```bash
# Check OSD performance
ceph osd perf

# Check pool stats
ceph df detail
```

## Related Documentation

- [Ceph Installation](./CEPH_INSTALLATION.md)
- [Proxmox Storage Configuration](../proxmox/STORAGE_CONFIGURATION.md)