Apply Composer changes: comprehensive API updates, migrations, middleware, and infrastructure improvements
- Add comprehensive database migrations (001-024) for schema evolution
- Enhance API schema with expanded type definitions and resolvers
- Add new middleware: audit logging, rate limiting, MFA enforcement, security, tenant auth
- Implement new services: AI optimization, billing, blockchain, compliance, marketplace
- Add adapter layer for cloud integrations (Cloudflare, Kubernetes, Proxmox, storage)
- Update Crossplane provider with enhanced VM management capabilities
- Add comprehensive test suite for API endpoints and services
- Update frontend components with improved GraphQL subscriptions and real-time updates
- Enhance security configurations and headers (CSP, CORS, etc.)
- Update documentation and configuration files
- Add new CI/CD workflows and validation scripts
- Implement design system improvements and UI enhancements
171
docs/storage/CEPH_COMPLETE_SETUP.md
Normal file
@@ -0,0 +1,171 @@
# Ceph Complete Setup Summary

**Date**: 2024-12-19
**Status**: Complete and Operational

## Cluster Overview

- **Cluster ID**: 5fb968ae-12ab-405f-b05f-0df29a168328
- **Version**: Ceph 19.2.3-pve2 (Squid)
- **Nodes**: 2 (ML110-01, R630-01)
- **Network**: 192.168.11.0/24

## Components

### Monitors

- **ML110-01**: Active monitor
- **R630-01**: Active monitor
- **Quorum**: 2/2 monitors

### Managers

- **ML110-01**: Active manager
- **R630-01**: Standby manager

### OSDs

- **OSD 0** (ML110-01): UP, 0.91 TiB
- **OSD 1** (R630-01): UP, 0.27 TiB
- **Total Capacity**: 1.2 TiB
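Note that with `size = 2` replication across two unevenly sized OSDs, usable capacity is bounded by the smaller OSD, not by half the raw total. A quick sanity check, using the raw sizes from the OSD list above:

```bash
#!/bin/sh
# Usable-capacity estimate for a 2-OSD, size=2 pool: every object is
# replicated to both OSDs, so the smaller one fills first.
osd0=0.91   # TiB, OSD 0 on ML110-01
osd1=0.27   # TiB, OSD 1 on R630-01

awk -v a="$osd0" -v b="$osd1" 'BEGIN {
    raw = a + b
    usable = (a < b) ? a : b
    printf "raw: %.2f TiB, usable (size=2): %.2f TiB\n", raw, usable
}'
# → raw: 1.18 TiB, usable (size=2): 0.27 TiB
```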
## Storage Pools

### RBD Pool

- **Name**: `rbd`
- **Size**: 2 (min_size: 1)
- **PG Count**: 128
- **Application**: RBD enabled
- **Use Case**: VM disk images
- **Proxmox Storage**: `ceph-rbd`

### CephFS

- **Name**: `cephfs`
- **Metadata Pool**: `cephfs_metadata` (32 PGs)
- **Data Pool**: `cephfs_data` (128 PGs)
- **Use Case**: ISOs, backups, snippets
- **Proxmox Storage**: `ceph-fs`

## Configuration

### Pool Settings (Optimized for 2-node)

```bash
size = 2
min_size = 1
pg_num = 128
pgp_num = 128
```
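The `pg_num = 128` above matches the common rule of thumb of roughly 100 PGs per OSD, divided by the replica count and rounded up to a power of two. A small sketch of that calculation (the 100 PGs/OSD target is an assumption; adjust for your workload):

```bash
#!/bin/sh
# Rule-of-thumb PG count: (OSDs * target PGs per OSD) / replica size,
# rounded up to the next power of two.
osds=2
target_per_osd=100
size=2

pgs=$(( osds * target_per_osd / size ))   # 100 for this cluster
pow=1
while [ "$pow" -lt "$pgs" ]; do
    pow=$(( pow * 2 ))
done
echo "suggested pg_num: $pow"   # → suggested pg_num: 128
```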
### Network Configuration

- **Public Network**: 192.168.11.0/24
- **Cluster Network**: 192.168.11.0/24

## Access Information

### Dashboard

- **URL**: https://ml110-01.sankofa.nexus:8443
- **Username**: admin
- **Password**: sankofa-admin

### Prometheus Metrics

- **ML110-01**: http://ml110-01.sankofa.nexus:9283/metrics
- **R630-01**: http://r630-01.sankofa.nexus:9283/metrics

## Firewall Ports

- **6789/tcp**: Ceph Monitor (v1)
- **3300/tcp**: Ceph Monitor (v2)
- **6800-7300/tcp**: Ceph OSD
- **8443/tcp**: Ceph Dashboard
- **9283/tcp**: Prometheus Metrics
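If the nodes run a host firewall, the ports above can be opened with `ufw` (a sketch assuming `ufw` is the firewall in use; restricting the source to the 192.168.11.0/24 cluster network is safer than a blanket allow):

```bash
# Open the Ceph ports listed above (run on both nodes),
# limited to the cluster subnet.
ufw allow from 192.168.11.0/24 to any port 6789 proto tcp      # Monitor (msgr v1)
ufw allow from 192.168.11.0/24 to any port 3300 proto tcp      # Monitor (msgr v2)
ufw allow from 192.168.11.0/24 to any port 6800:7300 proto tcp # OSDs
ufw allow from 192.168.11.0/24 to any port 8443 proto tcp      # Dashboard
ufw allow from 192.168.11.0/24 to any port 9283 proto tcp      # Prometheus metrics
```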
## Health Status

### Current Status

- **Health**: HEALTH_WARN (expected for 2-node setup)
- **Warnings**:
  - OSD count (2) < default size (3) - normal for 2-node
  - Some degraded objects during initial setup - will resolve

### Monitoring

```bash
# Check cluster status
ceph -s

# Check OSD tree
ceph osd tree

# Check pool status
ceph osd pool ls detail
```

## Proxmox Integration

### Storage Pools

1. **RBD Storage** (`ceph-rbd`)
   - Type: RBD
   - Pool: rbd
   - Content: Images, Rootdir
   - Access: Datacenter → Storage → ceph-rbd

2. **CephFS Storage** (`ceph-fs`)
   - Type: CephFS
   - Filesystem: cephfs
   - Content: ISO, Backup, Snippets
   - Access: Datacenter → Storage → ceph-fs

## Maintenance Commands

### Cluster Management

```bash
# Cluster status
pveceph status
ceph -s

# OSD management
ceph osd tree
pveceph osd create /dev/sdX
pveceph osd destroy <osd-id>

# Pool management
ceph osd pool ls
pveceph pool create <name>
pveceph pool destroy <name>
```

### Storage Management

```bash
# List storage
pvesm status

# Add storage
pvesm add rbd <name> --pool <pool> --monhost <hosts>
pvesm add cephfs <name> --monhost <hosts> --fsname <fsname>
```

## Troubleshooting

### Common Issues

1. **OSD Down**
   ```bash
   systemctl status ceph-osd@<id>
   systemctl start ceph-osd@<id>
   ```

2. **Monitor Issues**
   ```bash
   systemctl status ceph-mon@<id>
   pveceph mon create
   ```

3. **Pool Warnings**
   ```bash
   # Adjust pool size
   ceph osd pool set <pool> size 2
   ceph osd pool set <pool> min_size 1
   ```

## Related Documentation

- [Ceph Installation Guide](./CEPH_INSTALLATION.md)
- [Ceph-Proxmox Integration](./CEPH_PROXMOX_INTEGRATION.md)
- [Ceph Quick Start](./CEPH_QUICK_START.md)
334
docs/storage/CEPH_INSTALLATION.md
Normal file
@@ -0,0 +1,334 @@
# Ceph Installation Guide for Proxmox

**Last Updated**: 2024-12-19
**Infrastructure**: 2-node Proxmox cluster (ML110-01, R630-01)

## Overview

Ceph is a distributed storage system that provides object, block, and file storage. This guide covers installing Ceph on the Proxmox infrastructure to provide distributed storage for VMs.

## Architecture

### Cluster Configuration

**Nodes**:
- **ML110-01** (192.168.11.10): Ceph Monitor, OSD, Manager
- **R630-01** (192.168.11.11): Ceph Monitor, OSD, Manager

**Network**: 192.168.11.0/24

### Ceph Components

1. **Monitors (MON)**: Track cluster state (minimum 1, recommended 3+)
2. **Managers (MGR)**: Provide monitoring and management interfaces
3. **OSDs (Object Storage Daemons)**: Store data on disks
4. **MDS (Metadata Servers)**: For CephFS (optional)

### Storage Configuration

**For 2-node setup**:
- Reduced redundancy (size=2, min_size=1)
- Suitable for development/testing
- For production, add a third node or use external storage

## Prerequisites

### Hardware Requirements

**Per Node**:
- CPU: 4+ cores recommended
- RAM: 4GB+ for Ceph services
- Storage: Dedicated disks/partitions for OSDs
- Network: 1Gbps+ (10Gbps recommended)

### Software Requirements

- Proxmox VE 9.1+
- SSH access to all nodes
- Root or sudo access
- Network connectivity between nodes
## Installation Steps

### Step 1: Prepare Nodes

```bash
# On both nodes, update system
apt update && apt upgrade -y

# Install prerequisites
apt install -y chrony python3-pip
```

### Step 2: Configure Hostnames and Network

```bash
# On ML110-01
hostnamectl set-hostname ml110-01
echo "192.168.11.10 ml110-01 ml110-01.sankofa.nexus" >> /etc/hosts
echo "192.168.11.11 r630-01 r630-01.sankofa.nexus" >> /etc/hosts

# On R630-01
hostnamectl set-hostname r630-01
echo "192.168.11.10 ml110-01 ml110-01.sankofa.nexus" >> /etc/hosts
echo "192.168.11.11 r630-01 r630-01.sankofa.nexus" >> /etc/hosts
```

### Step 3: Install Ceph

```bash
# Add Ceph repository
wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
echo "deb https://download.ceph.com/debian-quincy/ bullseye main" > /etc/apt/sources.list.d/ceph.list

# Update and install
apt update
apt install -y ceph ceph-common ceph-mds
```

### Step 4: Create Ceph User

```bash
# On both nodes, create ceph user
useradd -d /home/ceph -m -s /bin/bash ceph
echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
```

### Step 5: Configure SSH Key Access

```bash
# On ML110-01 (deployment node)
su - ceph
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id ceph@ml110-01
ssh-copy-id ceph@r630-01
```
### Step 6: Initialize Ceph Cluster

```bash
# On ML110-01 (deployment node)
cd ~
mkdir ceph-cluster
cd ceph-cluster

# Create cluster configuration
ceph-deploy new ml110-01 r630-01

# Edit ceph.conf to add network and reduce redundancy for 2-node
cat >> ceph.conf << EOF
[global]
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128
public network = 192.168.11.0/24
cluster network = 192.168.11.0/24
EOF

# Install Ceph on all nodes
ceph-deploy install ml110-01 r630-01

# Create initial monitor
ceph-deploy mon create-initial

# Deploy admin key
ceph-deploy admin ml110-01 r630-01
```

### Step 7: Add OSDs

```bash
# List available disks
ceph-deploy disk list ml110-01
ceph-deploy disk list r630-01

# Prepare disks (replace /dev/sdX with actual disk)
ceph-deploy disk zap ml110-01 /dev/sdb
ceph-deploy disk zap r630-01 /dev/sdb

# Create OSDs
ceph-deploy osd create --data /dev/sdb ml110-01
ceph-deploy osd create --data /dev/sdb r630-01
```

### Step 8: Deploy Manager

```bash
# Deploy manager daemon
ceph-deploy mgr create ml110-01 r630-01
```

### Step 9: Verify Cluster

```bash
# Check cluster status
ceph -s

# Check OSD status
ceph osd tree

# Check health
ceph health
```
## Proxmox Integration

### Step 1: Create Ceph Storage Pool in Proxmox

```bash
# On Proxmox nodes, create Ceph storage
pvesm add cephfs ceph-storage --monhost 192.168.11.10,192.168.11.11 --username admin --fsname cephfs
```

### Step 2: Create RBD Pool for Block Storage

```bash
# Create RBD pool
ceph osd pool create rbd 128 128

# Initialize pool for RBD
rbd pool init rbd

# Create storage in Proxmox
pvesm add rbd rbd-storage --pool rbd --monhost 192.168.11.10,192.168.11.11 --username admin
```

### Step 3: Configure Proxmox Storage

1. **Via Web UI**:
   - Datacenter → Storage → Add
   - Select "RBD" or "CephFS"
   - Configure connection details

2. **Via CLI**:
   ```bash
   # RBD storage
   pvesm add rbd ceph-rbd --pool rbd --monhost 192.168.11.10,192.168.11.11 --username admin --content images,rootdir

   # CephFS storage
   pvesm add cephfs ceph-fs --monhost 192.168.11.10,192.168.11.11 --username admin --fsname cephfs --content iso,backup
   ```
## Configuration Files

### ceph.conf

```ini
[global]
fsid = <cluster-fsid>
mon initial members = ml110-01, r630-01
mon host = 192.168.11.10, 192.168.11.11
public network = 192.168.11.0/24
cluster network = 192.168.11.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128
```
## Monitoring

### Ceph Dashboard

```bash
# Enable dashboard module
ceph mgr module enable dashboard

# Create dashboard user (recent Ceph releases read the password from a file)
echo -n '<password>' > /tmp/dashboard-pass
ceph dashboard ac-user-create admin -i /tmp/dashboard-pass administrator

# Access dashboard
# https://ml110-01.sankofa.nexus:8443
```

### Prometheus Integration

```bash
# Enable prometheus module
ceph mgr module enable prometheus

# Metrics endpoint
# http://ml110-01.sankofa.nexus:9283/metrics
```
## Maintenance

### Adding OSDs

```bash
ceph-deploy disk zap <node> /dev/sdX
ceph-deploy osd create --data /dev/sdX <node>
```

### Removing OSDs

```bash
ceph osd out <osd-id>
ceph osd crush remove osd.<osd-id>
ceph auth del osd.<osd-id>
ceph osd rm <osd-id>
```

### Cluster Health

```bash
# Check status
ceph -s

# Check detailed health
ceph health detail

# Check OSD status
ceph osd tree
```
## Troubleshooting

### Common Issues

1. **Clock Skew**: Ensure NTP is configured (the Debian/Proxmox service unit for the `chrony` package installed above is `chrony`)
   ```bash
   systemctl enable chrony
   systemctl start chrony
   ```

2. **Network Issues**: Verify connectivity
   ```bash
   ping ml110-01
   ping r630-01
   ```

3. **OSD Issues**: Check OSD status
   ```bash
   ceph osd tree
   systemctl status ceph-osd@<id>
   ```
## Security

### Firewall Rules

```bash
# Allow Ceph ports
ufw allow 6789/tcp      # Monitors (msgr v1)
ufw allow 3300/tcp      # Monitors (msgr v2)
ufw allow 6800:7300/tcp # OSDs
ufw allow 8443/tcp      # Dashboard
```

### Authentication

- Use cephx authentication (default)
- Rotate keys regularly
- Limit admin access

## Related Documentation

- [Ceph Official Documentation](https://docs.ceph.com/)
- [Proxmox Ceph Integration](https://pve.proxmox.com/pve-docs/chapter-pveceph.html)
- [Storage Configuration](../proxmox/STORAGE_CONFIGURATION.md)
101
docs/storage/CEPH_INSTALLATION_STATUS.md
Normal file
@@ -0,0 +1,101 @@
# Ceph Installation Status

**Date**: 2024-12-19
**Status**: Partial - Requires Manual Intervention

## Current Status

### Completed

- ✅ Disk review completed
  - ML110-01: `/dev/sdb` (931.5G) available
  - R630-01: `/dev/sdb` (279.4G) available
- ✅ Ceph cluster initialized (`pveceph init`)
- ✅ Ceph configuration files created

### Blocked

- ❌ Ceph daemon packages not installed
  - Issue: Version conflict between Proxmox Ceph (19.2.3) and external repo (18.2.7)
  - Error: `binary not installed: /usr/bin/ceph-mon`
  - Required packages: `ceph-mon`, `ceph-mgr`, `ceph-osd`, `ceph-base`

## Root Cause

Proxmox VE has Ceph 19.2.3 installed (`ceph-common`), but:

1. The external Ceph repository (quincy/18.2.7) conflicts with Proxmox's version
2. Proxmox enterprise repository requires a subscription
3. Ceph daemon binaries are not installed
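Version conflicts like this can be confirmed quickly from the shell: `sort -V` orders version strings numerically, so it shows which of two candidate versions is newer (a generic sketch, independent of the package manager):

```bash
#!/bin/sh
# Compare the Proxmox-shipped Ceph version with the external repo's.
installed="19.2.3"   # from ceph-common on the Proxmox nodes
external="18.2.7"    # from the external quincy repository

# sort -V sorts version strings; the last line is the higher version.
newest=$(printf '%s\n%s\n' "$installed" "$external" | sort -V | tail -n1)
echo "newer version: $newest"   # → newer version: 19.2.3
```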
## Solutions

### Option 1: Use Proxmox Subscription (Recommended)

If you have a Proxmox subscription:

```bash
# Install Ceph daemons from Proxmox enterprise repo
apt install -y ceph-mon ceph-mgr ceph-osd ceph-base
```

### Option 2: Use Proxmox No-Subscription Repository

Add the no-subscription Ceph repository (match the release name and Debian suite to the Ceph version Proxmox ships; `ceph-quincy`/`bullseye` is shown here):

```bash
# On both nodes
echo "deb http://download.proxmox.com/debian/ceph-quincy bullseye no-subscription" > /etc/apt/sources.list.d/ceph-no-sub.list
apt update
apt install -y ceph-mon ceph-mgr ceph-osd ceph-base
```

### Option 3: Manual Installation via Proxmox Web UI

1. Access Proxmox Web UI
2. Go to: **Datacenter → Ceph**
3. Click **Install** to install Ceph packages
4. Follow the wizard to initialize cluster
## Next Steps

1. **Choose installation method** (Option 1, 2, or 3)
2. **Install Ceph daemon packages**
3. **Create monitors**: `pveceph mon create`
4. **Create managers**: `pveceph mgr create`
5. **Create OSDs**: `pveceph osd create /dev/sdb`
6. **Create RBD pool**: `pveceph pool create rbd --add_storages`
7. **Enable dashboard**: `ceph mgr module enable dashboard`

## Manual Installation Commands

Once packages are installed:

```bash
# On ML110-01
pveceph mon create
pveceph mgr create
pveceph osd create /dev/sdb

# On R630-01
pveceph mon create
pveceph mgr create
pveceph osd create /dev/sdb

# Create pool and storage
pveceph pool create rbd --add_storages

# Enable dashboard (recent Ceph releases read the password from a file)
ceph mgr module enable dashboard
echo -n '<password>' > /tmp/dashboard-pass
ceph dashboard ac-user-create admin -i /tmp/dashboard-pass administrator
```
## Verification

```bash
# Check cluster status
pveceph status
ceph -s
ceph osd tree

# Check storage
pvesm status | grep ceph
```

## Related Documentation

- [Ceph Installation Guide](./CEPH_INSTALLATION.md)
- [Proxmox Ceph Integration](./CEPH_PROXMOX_INTEGRATION.md)
184
docs/storage/CEPH_PROXMOX_INTEGRATION.md
Normal file
@@ -0,0 +1,184 @@
# Ceph-Proxmox Integration Guide

**Last Updated**: 2024-12-19

## Overview

This guide covers integrating Ceph storage with Proxmox VE for distributed storage across the cluster.

## Storage Types

### RBD (RADOS Block Device)

**Use Case**: VM disk images, high-performance block storage

**Features**:
- Thin provisioning
- Snapshots
- Live migration support
- High IOPS
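The snapshot feature can be exercised directly with the `rbd` CLI; a sketch (the image name `vm-100-disk-0` is an assumption — Proxmox names RBD images after the VM ID):

```bash
# Snapshot lifecycle for an RBD image (illustrative; run on a cluster node).
rbd snap create rbd/vm-100-disk-0@pre-upgrade   # take a snapshot
rbd snap ls rbd/vm-100-disk-0                   # list snapshots
rbd snap rollback rbd/vm-100-disk-0@pre-upgrade # roll the image back
rbd snap rm rbd/vm-100-disk-0@pre-upgrade       # remove when no longer needed
```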
### CephFS

**Use Case**: ISO images, backups, snippets, shared storage

**Features**:
- POSIX-compliant filesystem
- Shared access
- Snapshots
- Quotas

## Integration Steps

### Step 1: Verify Ceph Cluster

```bash
# Check cluster status
ceph -s

# Verify pools exist
ceph osd pool ls
```

### Step 2: Add RBD Storage

**Via CLI**:
```bash
# On each Proxmox node
pvesm add rbd ceph-rbd \
  --pool rbd \
  --monhost 192.168.11.10,192.168.11.11 \
  --username admin \
  --content images,rootdir \
  --krbd 1
```

**Via Web UI**:
1. Datacenter → Storage → Add
2. Select "RBD"
3. Configure:
   - ID: `ceph-rbd`
   - Pool: `rbd`
   - Monitor Host: `192.168.11.10,192.168.11.11`
   - Username: `admin`
   - Content: Images, Rootdir
   - Enable KRBD: Yes

### Step 3: Add CephFS Storage

**Via CLI**:
```bash
# On each Proxmox node
pvesm add cephfs ceph-fs \
  --monhost 192.168.11.10,192.168.11.11 \
  --username admin \
  --fsname cephfs \
  --content iso,backup,snippets
```

**Via Web UI**:
1. Datacenter → Storage → Add
2. Select "CephFS"
3. Configure:
   - ID: `ceph-fs`
   - Monitor Host: `192.168.11.10,192.168.11.11`
   - Username: `admin`
   - Filesystem: `cephfs`
   - Content: ISO, Backup, Snippets
## Using Ceph Storage

### Creating VMs with RBD Storage

1. **Via Web UI**:
   - Create VM → Hard Disk
   - Storage: Select `ceph-rbd`
   - Configure size and format

2. **Via CLI**:
   ```bash
   qm create 100 --name test-vm --memory 2048
   # Allocate a 20G disk for the VM on the ceph-rbd storage
   qm set 100 --scsi0 ceph-rbd:20
   ```

### Migrating VMs to Ceph

```bash
# Move disk to Ceph storage
qm disk move 100 scsi0 ceph-rbd --delete

# Or via Web UI:
# VM → Hardware → Disk → Move Storage
```

## Monitoring

### Ceph Dashboard

```bash
# Enable dashboard
ceph mgr module enable dashboard

# Create user (recent Ceph releases read the password from a file)
echo -n '<password>' > /tmp/dashboard-pass
ceph dashboard ac-user-create admin -i /tmp/dashboard-pass administrator

# Access: https://ml110-01.sankofa.nexus:8443
```
### Proxmox Storage Status

```bash
# Check storage status
pvesm status

# Check storage usage (pvesm list takes a storage ID)
pvesm list ceph-rbd
```

## Best Practices

1. **Pool Configuration**:
   - Use separate pools for different workloads
   - Configure appropriate PG count
   - Set size and min_size appropriately

2. **Performance**:
   - Use SSD for OSD journals (if available)
   - Configure network bonding for redundancy
   - Monitor OSD performance

3. **Backup**:
   - Use CephFS for backups
   - Configure snapshot schedules
   - Test restore procedures

## Troubleshooting

### Storage Not Available

```bash
# Check Ceph status
ceph -s

# Check OSD status
ceph osd tree

# Check storage in Proxmox
pvesm status
```

### Performance Issues

```bash
# Check OSD performance
ceph osd perf

# Check pool stats
ceph df detail
```

## Related Documentation

- [Ceph Installation](./CEPH_INSTALLATION.md)
- [Proxmox Storage Configuration](../proxmox/STORAGE_CONFIGURATION.md)
134
docs/storage/CEPH_QUICK_START.md
Normal file
@@ -0,0 +1,134 @@
# Ceph Quick Start Guide

**Last Updated**: 2024-12-19

## Quick Installation

### Automated Installation

```bash
# 1. Install Ceph
./scripts/install-ceph.sh

# 2. Integrate with Proxmox
./scripts/integrate-ceph-proxmox.sh
```

### Manual Installation

```bash
# On deployment node (ML110-01)
su - ceph
cd ~
mkdir ceph-cluster
cd ceph-cluster

# Initialize cluster
ceph-deploy new ml110-01 r630-01

# Edit ceph.conf for 2-node setup
cat >> ceph.conf << EOF
osd pool default size = 2
osd pool default min size = 1
public network = 192.168.11.0/24
cluster network = 192.168.11.0/24
EOF

# Install and deploy
ceph-deploy install ml110-01 r630-01
ceph-deploy mon create-initial
ceph-deploy admin ml110-01 r630-01

# Add OSDs (replace /dev/sdX with actual disks)
ceph-deploy disk zap ml110-01 /dev/sdb
ceph-deploy osd create --data /dev/sdb ml110-01
ceph-deploy disk zap r630-01 /dev/sdb
ceph-deploy osd create --data /dev/sdb r630-01

# Deploy manager
ceph-deploy mgr create ml110-01 r630-01

# Create RBD pool
ceph osd pool create rbd 128 128
rbd pool init rbd
```
## Proxmox Integration

### Add RBD Storage

```bash
# On each Proxmox node
pvesm add rbd ceph-rbd \
  --pool rbd \
  --monhost 192.168.11.10,192.168.11.11 \
  --username admin \
  --content images,rootdir
```

### Add CephFS Storage

```bash
# On each Proxmox node
pvesm add cephfs ceph-fs \
  --monhost 192.168.11.10,192.168.11.11 \
  --username admin \
  --fsname cephfs \
  --content iso,backup
```
## Common Commands

### Cluster Status

```bash
# Cluster status
ceph -s

# OSD tree
ceph osd tree

# Health detail
ceph health detail
```

### Storage Management

```bash
# List pools
ceph osd pool ls

# Pool stats
ceph df detail

# Create pool
ceph osd pool create <pool-name> <pg-num> <pgp-num>
```

### Proxmox Storage

```bash
# List storage
pvesm status

# Storage usage (pvesm list takes a storage ID)
pvesm list ceph-rbd
```
## Dashboard Access

```bash
# Enable dashboard
ceph mgr module enable dashboard

# Create user (recent Ceph releases read the password from a file)
echo -n '<password>' > /tmp/dashboard-pass
ceph dashboard ac-user-create admin -i /tmp/dashboard-pass administrator

# Access: https://ml110-01.sankofa.nexus:8443
```

## Related Documentation

- [Full Installation Guide](./CEPH_INSTALLATION.md)
- [Proxmox Integration](./CEPH_PROXMOX_INTEGRATION.md)