# Ceph Complete Setup Summary

**Date**: 2024-12-19
**Status**: Complete and Operational

## Cluster Overview

- **Cluster ID**: 5fb968ae-12ab-405f-b05f-0df29a168328
- **Version**: Ceph 19.2.3-pve2 (Squid)
- **Nodes**: 2 (ML110-01, R630-01)
- **Network**: 192.168.11.0/24

## Components

### Monitors

- **ML110-01**: Active monitor
- **R630-01**: Active monitor
- **Quorum**: 2/2 monitors

### Managers

- **ML110-01**: Active manager
- **R630-01**: Standby manager

### OSDs

- **OSD 0** (ML110-01): UP, 0.91 TiB
- **OSD 1** (R630-01): UP, 0.27 TiB
- **Total Capacity**: 1.2 TiB
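
The component states above can be verified from any node with the standard Ceph CLI; a quick sketch:

```bash
# Monitor quorum (expect 2/2: ML110-01, R630-01)
ceph quorum_status --format json-pretty | grep quorum_names

# Manager roles (expect ML110-01 active, R630-01 standby)
ceph mgr stat

# OSD states and weights (expect both osd.0 and osd.1 up)
ceph osd tree
```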

## Storage Pools

### RBD Pool

- **Name**: `rbd`
- **Size**: 2 (min_size: 1)
- **PG Count**: 128
- **Application**: RBD enabled
- **Use Case**: VM disk images
- **Proxmox Storage**: `ceph-rbd`
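
The `rbd` pool above can be recreated with standard commands; a sketch assuming the pool does not yet exist, using the size and PG values listed:

```bash
# Create the pool with 128 placement groups
ceph osd pool create rbd 128 128

# 2-node replication settings
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1

# Tag the pool for RBD use so the "application not enabled" warning clears
ceph osd pool application enable rbd rbd
```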

### CephFS

- **Name**: `cephfs`
- **Metadata Pool**: `cephfs_metadata` (32 PGs)
- **Data Pool**: `cephfs_data` (128 PGs)
- **Use Case**: ISOs, backups, snippets
- **Proxmox Storage**: `ceph-fs`
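
A sketch of how a filesystem with these pools is typically created, using the raw Ceph commands (Proxmox's `pveceph fs create` wraps the same steps):

```bash
# Create metadata and data pools with the PG counts listed above
ceph osd pool create cephfs_metadata 32
ceph osd pool create cephfs_data 128

# Create the filesystem from the two pools
ceph fs new cephfs cephfs_metadata cephfs_data

# An MDS daemon must be running for the filesystem to go active
pveceph mds create
```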

## Configuration

### Pool Settings (Optimized for 2-node)

```bash
size = 2
min_size = 1
pg_num = 128
pgp_num = 128
```
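
These values can be applied to existing pools with `ceph osd pool set`; a sketch that applies the 2-node replication settings to every pool in the cluster:

```bash
# Apply 2-node replication settings to all pools
for pool in $(ceph osd pool ls); do
    ceph osd pool set "$pool" size 2
    ceph osd pool set "$pool" min_size 1
done
```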

### Network Configuration

- **Public Network**: 192.168.11.0/24
- **Cluster Network**: 192.168.11.0/24

## Access Information

### Dashboard

- **URL**: https://ml110-01.sankofa.nexus:8443
- **Username**: admin
- **Password**: sankofa-admin

### Prometheus Metrics

- **ML110-01**: http://ml110-01.sankofa.nexus:9283/metrics
- **R630-01**: http://r630-01.sankofa.nexus:9283/metrics
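
Both the dashboard and the metrics endpoints are served by mgr modules; a sketch of enabling them and spot-checking one endpoint:

```bash
# Enable the dashboard and prometheus mgr modules
ceph mgr module enable dashboard
ceph mgr module enable prometheus

# Spot-check a metrics endpoint from any node
curl -s http://ml110-01.sankofa.nexus:9283/metrics | head
```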

## Firewall Ports

- **6789/tcp**: Ceph Monitor (v1)
- **3300/tcp**: Ceph Monitor (v2)
- **6800-7300/tcp**: Ceph OSD
- **8443/tcp**: Ceph Dashboard
- **9283/tcp**: Prometheus Metrics
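
A minimal sketch of opening these ports with plain iptables, assuming the 192.168.11.0/24 network is trusted; adapt to pve-firewall if it manages the nodes:

```bash
# Allow Ceph traffic from the cluster network only
for port in 6789 3300 8443 9283; do
    iptables -A INPUT -s 192.168.11.0/24 -p tcp --dport "$port" -j ACCEPT
done
iptables -A INPUT -s 192.168.11.0/24 -p tcp --dport 6800:7300 -j ACCEPT
```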

## Health Status

### Current Status

- **Health**: HEALTH_WARN (expected for a 2-node setup)
- **Warnings**:
  - OSD count (2) < default size (3); normal for a 2-node cluster
  - Some degraded objects during initial setup; these resolve as recovery completes

### Monitoring

```bash
# Check cluster status
ceph -s

# Check OSD tree
ceph osd tree

# Check pool status
ceph osd pool ls detail
```

## Proxmox Integration

### Storage Pools

1. **RBD Storage** (`ceph-rbd`)
   - Type: RBD
   - Pool: rbd
   - Content: Images, Rootdir
   - Access: Datacenter → Storage → ceph-rbd

2. **CephFS Storage** (`ceph-fs`)
   - Type: CephFS
   - Filesystem: cephfs
   - Content: ISO, Backup, Snippets
   - Access: Datacenter → Storage → ceph-fs

## Maintenance Commands

### Cluster Management

```bash
# Cluster status
pveceph status
ceph -s

# OSD management
ceph osd tree
pveceph osd create /dev/sdX
pveceph osd destroy <osd-id>

# Pool management
ceph osd pool ls
pveceph pool create <name>
pveceph pool destroy <name>
```

### Storage Management

```bash
# List storage
pvesm status

# Add storage
pvesm add rbd <name> --pool <pool> --monhost <hosts>
pvesm add cephfs <name> --monhost <hosts> --fsname <fsname>
```

## Troubleshooting

### Common Issues

1. **OSD Down**

   ```bash
   systemctl status ceph-osd@<id>
   systemctl start ceph-osd@<id>
   ```

2. **Monitor Issues**

   ```bash
   systemctl status ceph-mon@<id>
   pveceph mon create
   ```

3. **Pool Warnings**

   ```bash
   # Adjust pool size
   ceph osd pool set <pool> size 2
   ceph osd pool set <pool> min_size 1
   ```

## Related Documentation

- [Ceph Installation Guide](./CEPH_INSTALLATION.md)
- [Ceph-Proxmox Integration](./CEPH_PROXMOX_INTEGRATION.md)
- [Ceph Quick Start](./CEPH_QUICK_START.md)