Add full monorepo: virtual-banker, backend, frontend, docs, scripts, deployment

Co-authored-by: Cursor <cursoragent@cursor.com>
Author: defiQUG
Date: 2026-02-10 11:32:49 -08:00
Parent: aafcd913c2
Commit: 88bc76da91
815 changed files with 125522 additions and 264 deletions

@@ -0,0 +1,204 @@
# Deployment Checklist
Use this checklist to track deployment progress.
## Pre-Deployment
- [ ] Proxmox VE host accessible
- [ ] Cloudflare account ready
- [ ] Domain registered and on Cloudflare
- [ ] Cloudflare API token created
- [ ] SSH access configured
- [ ] Backup strategy defined
## Phase 1: LXC Container Setup
- [ ] LXC container created (ID: _____)
- [ ] Container resources allocated (CPU/RAM/Disk)
- [ ] Container started and accessible
- [ ] Base packages installed
- [ ] Deployment user created
- [ ] SSH configured
## Phase 2: Application Installation
- [ ] Go 1.21+ installed
- [ ] Node.js 20+ installed
- [ ] Docker & Docker Compose installed
- [ ] Repository cloned
- [ ] Backend dependencies installed (`go mod download`)
- [ ] Frontend dependencies installed (`npm ci`)
- [ ] Backend applications built
- [ ] Frontend application built (`npm run build`)
## Phase 3: Database Setup
- [ ] PostgreSQL 16 installed
- [ ] TimescaleDB extension installed
- [ ] Database `explorer` created
- [ ] User `explorer` created
- [ ] Database migrations run
- [ ] PostgreSQL tuned for performance
- [ ] Backup script configured
## Phase 4: Infrastructure Services
- [ ] Elasticsearch/OpenSearch deployed
- [ ] Redis deployed
- [ ] Services verified and accessible
- [ ] Services configured to auto-start
## Phase 5: Application Services
- [ ] Environment variables configured (`.env` file)
- [ ] Systemd service files created:
- [ ] `explorer-indexer.service`
- [ ] `explorer-api.service`
- [ ] `explorer-frontend.service`
- [ ] Services enabled
- [ ] Services started
- [ ] Service status verified
- [ ] Logs checked for errors
## Phase 6: Nginx Reverse Proxy
- [ ] Nginx installed
- [ ] Nginx configuration file created
- [ ] Configuration tested (`nginx -t`)
- [ ] Site enabled
- [ ] Nginx started
- [ ] Reverse proxy working
- [ ] Health check endpoint accessible
## Phase 7: Cloudflare Configuration
### DNS
- [ ] A record created for `explorer.d-bis.org`
- [ ] CNAME record created for `www.explorer.d-bis.org`
- [ ] DNS records set to "Proxied" (orange cloud)
- [ ] DNS propagation verified
### SSL/TLS
- [ ] SSL/TLS mode set to "Full (strict)"
- [ ] Always Use HTTPS enabled
- [ ] Automatic HTTPS Rewrites enabled
- [ ] TLS 1.3 enabled
- [ ] Certificate status verified
### Cloudflare Tunnel (if using)
- [ ] `cloudflared` installed
- [ ] Authenticated with Cloudflare
- [ ] Tunnel created
- [ ] Tunnel configuration file created
- [ ] Tunnel systemd service installed
- [ ] Tunnel started and running
- [ ] Tunnel status verified
### WAF & Security
- [ ] Cloudflare Managed Ruleset enabled
- [ ] OWASP Core Ruleset enabled
- [ ] Rate limiting rules configured
- [ ] DDoS protection enabled
- [ ] Bot protection configured
### Caching
- [ ] Caching level configured
- [ ] Cache rules created:
- [ ] Static assets rule
- [ ] API bypass rule
- [ ] Frontend pages rule
## Phase 8: Security Hardening
- [ ] Firewall (UFW) configured
- [ ] Only necessary ports opened
- [ ] Cloudflare IP ranges allowed (if direct connection)
- [ ] Fail2ban installed and configured
- [ ] Automatic updates configured
- [ ] Log rotation configured
- [ ] Backup script created and tested
- [ ] Backup cron job configured
## Phase 9: Monitoring & Maintenance
- [ ] Health check script created
- [ ] Health check cron job configured
- [ ] Log monitoring configured
- [ ] Cloudflare analytics reviewed
- [ ] Alerts configured (email/Slack/etc)
- [ ] Documentation updated
## Post-Deployment Verification
### Services
- [ ] All systemd services running
- [ ] No service errors in logs
- [ ] Database connection working
- [ ] Indexer processing blocks
- [ ] API responding to requests
- [ ] Frontend loading correctly
### Network
- [ ] DNS resolving correctly
- [ ] HTTPS working (if direct connection)
- [ ] Cloudflare Tunnel connected (if using)
- [ ] Nginx proxying correctly
- [ ] WebSocket connections working
### Functionality
- [ ] Homepage loads
- [ ] Block list page works
- [ ] Transaction list page works
- [ ] Search functionality works
- [ ] API endpoints responding
- [ ] Health check endpoint working
### Security
- [ ] Security headers present
- [ ] SSL/TLS certificate valid
- [ ] Firewall rules active
- [ ] Fail2ban active
- [ ] No sensitive files exposed
### Performance
- [ ] Response times acceptable
- [ ] Caching working
- [ ] CDN serving static assets
- [ ] Database queries optimized
## Maintenance Schedule
### Daily
- [ ] Check service status
- [ ] Review error logs
- [ ] Check Cloudflare analytics
### Weekly
- [ ] Review security logs
- [ ] Check disk space
- [ ] Verify backups completed
### Monthly
- [ ] Update system packages
- [ ] Optimize database
- [ ] Update application dependencies
- [ ] Review resource usage
- [ ] Test disaster recovery
## Emergency Contacts
- **System Administrator**: ________________
- **Cloudflare Support**: https://support.cloudflare.com
- **Proxmox Support**: https://www.proxmox.com/en/proxmox-ve/support
## Notes
_Use this space for deployment-specific notes and issues encountered._
---
**Deployment Date**: _______________
**Deployed By**: _______________
**Container ID**: _______________
**Domain**: explorer.d-bis.org

File diff suppressed because it is too large.

@@ -0,0 +1,183 @@
# Deployment Summary
## Complete Deployment Package
All deployment files and scripts have been created and are ready for use.
## 📁 File Structure
```
deployment/
├── DEPLOYMENT_GUIDE.md            # Complete step-by-step guide (1,079 lines)
├── DEPLOYMENT_TASKS.md            # Detailed 71-task checklist (561 lines)
├── DEPLOYMENT_CHECKLIST.md        # Interactive checklist (204 lines)
├── DEPLOYMENT_SUMMARY.md          # This file
├── QUICK_DEPLOY.md                # Quick command reference
├── README.md                      # Documentation overview
├── ENVIRONMENT_TEMPLATE.env       # Environment variables template
├── nginx/
│   └── explorer.conf              # Complete Nginx configuration
├── cloudflare/
│   └── tunnel-config.yml          # Cloudflare Tunnel template
├── systemd/
│   ├── explorer-indexer.service
│   ├── explorer-api.service
│   ├── explorer-frontend.service
│   └── cloudflared.service
├── fail2ban/
│   ├── nginx.conf                 # Nginx filter
│   └── jail.local                 # Jail configuration
└── scripts/
    ├── deploy-lxc.sh              # Automated LXC setup
    ├── install-services.sh        # Install systemd services
    ├── setup-nginx.sh             # Setup Nginx
    ├── setup-cloudflare-tunnel.sh # Setup Cloudflare Tunnel
    ├── setup-firewall.sh          # Configure firewall
    ├── setup-fail2ban.sh          # Configure Fail2ban
    ├── setup-backup.sh            # Setup backup system
    ├── setup-health-check.sh      # Setup health monitoring
    ├── build-all.sh               # Build all applications
    ├── verify-deployment.sh       # Verify deployment
    └── full-deploy.sh             # Full automated deployment
```
## 🚀 Quick Start
### Option 1: Automated Deployment
```bash
# Run full automated deployment
sudo ./deployment/scripts/full-deploy.sh
```
### Option 2: Step-by-Step Manual
```bash
# 1. Read the guide
cat deployment/DEPLOYMENT_GUIDE.md
# 2. Follow tasks
# Use deployment/DEPLOYMENT_TASKS.md
# 3. Track progress
# Use deployment/DEPLOYMENT_CHECKLIST.md
```
## 📋 Deployment Phases
1. **LXC Container Setup** (8 tasks)
- Create container
- Configure resources
- Install base packages
2. **Application Installation** (12 tasks)
- Install Go, Node.js, Docker
- Clone repository
- Build applications
3. **Database Setup** (10 tasks)
- Install PostgreSQL + TimescaleDB
- Create database
- Run migrations
4. **Infrastructure Services** (6 tasks)
- Deploy Elasticsearch
- Deploy Redis
5. **Application Services** (10 tasks)
- Configure environment
- Create systemd services
- Start services
6. **Nginx Reverse Proxy** (9 tasks)
- Install Nginx
- Configure reverse proxy
- Set up SSL
7. **Cloudflare Configuration** (18 tasks)
- Configure DNS
- Set up SSL/TLS
- Configure Tunnel
- Set up WAF
- Configure caching
8. **Security Hardening** (12 tasks)
- Configure firewall
- Set up Fail2ban
- Configure backups
- Harden SSH
9. **Monitoring** (8 tasks)
- Set up health checks
- Configure logging
- Set up alerts
## 🔧 Available Scripts
| Script | Purpose |
|--------|---------|
| `deploy-lxc.sh` | Automated LXC container setup |
| `build-all.sh` | Build all applications |
| `install-services.sh` | Install systemd service files |
| `setup-nginx.sh` | Configure Nginx |
| `setup-cloudflare-tunnel.sh` | Setup Cloudflare Tunnel |
| `setup-firewall.sh` | Configure UFW firewall |
| `setup-fail2ban.sh` | Configure Fail2ban |
| `setup-backup.sh` | Setup backup system |
| `setup-health-check.sh` | Setup health monitoring |
| `verify-deployment.sh` | Verify deployment |
| `full-deploy.sh` | Full automated deployment |
## 📝 Configuration Files
- **Nginx**: `nginx/explorer.conf`
- **Cloudflare Tunnel**: `cloudflare/tunnel-config.yml`
- **Systemd Services**: `systemd/*.service`
- **Fail2ban**: `fail2ban/*.conf`
- **Environment Template**: `ENVIRONMENT_TEMPLATE.env`
## ✅ Verification Checklist
After deployment, verify:
- [ ] All services running
- [ ] API responding: `curl http://localhost:8080/health`
- [ ] Frontend loading: `curl http://localhost:3000`
- [ ] Nginx proxying: `curl http://localhost/api/health`
- [ ] Database accessible
- [ ] DNS resolving
- [ ] SSL working (if direct connection)
- [ ] Cloudflare Tunnel connected (if using)
- [ ] Firewall configured
- [ ] Backups running
## 🆘 Troubleshooting
See `QUICK_DEPLOY.md` for:
- Common issues
- Quick fixes
- Emergency procedures
## 📊 Statistics
- **Total Tasks**: 71
- **Documentation**: 1,844+ lines
- **Scripts**: 11 automation scripts
- **Config Files**: 8 configuration templates
- **Estimated Time**: 6-8 hours (first deployment)
## 🎯 Next Steps
1. Review `DEPLOYMENT_GUIDE.md`
2. Prepare environment (Proxmox, Cloudflare)
3. Run deployment scripts
4. Verify deployment
5. Configure monitoring
---
**All deployment files are ready!**

@@ -0,0 +1,561 @@
# Complete Deployment Task List
This document provides a detailed checklist of all tasks required to deploy the ChainID 138 Explorer Platform using LXC, Nginx, Cloudflare DNS, SSL, and Cloudflare Tunnel.
---
## 📋 Complete Task List (71 Tasks)
### PRE-DEPLOYMENT (5 tasks)
#### Task 1: Verify Prerequisites
- [ ] Access to Proxmox VE host with LXC support
- [ ] Cloudflare account created and domain added
- [ ] Domain DNS managed by Cloudflare
- [ ] Cloudflare API token created (with DNS edit permissions)
- [ ] SSH access to Proxmox host configured
---
### PHASE 1: LXC CONTAINER SETUP (8 tasks)
#### Task 2: Create LXC Container
- [ ] Log into Proxmox host
- [ ] Download Ubuntu 22.04 template (if not exists)
- [ ] Run container creation command
- [ ] Verify container created successfully
- [ ] Note container ID for future reference
#### Task 3: Start and Access Container
- [ ] Start container: `pct start <CONTAINER_ID>`
- [ ] Access container: `pct enter <CONTAINER_ID>`
- [ ] Verify network connectivity
- [ ] Update system: `apt update && apt upgrade -y`
#### Task 4: Install Base Packages
- [ ] Install essential packages (curl, wget, git, vim, etc.)
- [ ] Install firewall: `apt install -y ufw`
- [ ] Install fail2ban: `apt install -y fail2ban`
- [ ] Install security updates tool: `apt install -y unattended-upgrades`
#### Task 5: Configure System Settings
- [ ] Set timezone: `timedatectl set-timezone UTC`
- [ ] Configure hostname: `hostnamectl set-hostname explorer-prod`
- [ ] Configure locale settings
#### Task 6: Create Deployment User
- [ ] Create user: `adduser explorer`
- [ ] Add to sudo group: `usermod -aG sudo explorer`
- [ ] Configure SSH access for new user
- [ ] Disable root SSH login in `/etc/ssh/sshd_config`
- [ ] Restart SSH service
---
### PHASE 2: APPLICATION INSTALLATION (12 tasks)
#### Task 7: Install Go 1.21+
- [ ] Download Go 1.21.6: `wget https://go.dev/dl/go1.21.6.linux-amd64.tar.gz`
- [ ] Extract to `/usr/local/go`
- [ ] Add Go to PATH in `/etc/profile` and `~/.bashrc`
- [ ] Source profile or logout/login
- [ ] Verify: `go version` (should show 1.21.6+)
#### Task 8: Install Node.js 20+
- [ ] Add NodeSource repository
- [ ] Install Node.js 20.x
- [ ] Verify: `node --version` (should show v20.x.x+)
- [ ] Verify: `npm --version`
#### Task 9: Install Docker & Docker Compose
- [ ] Add Docker GPG key
- [ ] Add Docker repository
- [ ] Install Docker CE
- [ ] Install Docker Compose plugin
- [ ] Start Docker service: `systemctl start docker`
- [ ] Enable Docker on boot: `systemctl enable docker`
- [ ] Add `explorer` user to docker group
- [ ] Verify: `docker --version` and `docker compose version`
#### Task 10: Clone Repository
- [ ] Switch to deployment user: `su - explorer`
- [ ] Navigate to home: `cd /home/explorer`
- [ ] Clone repository: `git clone <repo-url> explorer-monorepo`
- [ ] Verify repository cloned correctly
#### Task 11: Install Dependencies
- [ ] Navigate to backend: `cd explorer-monorepo/backend`
- [ ] Download Go modules: `go mod download`
- [ ] Navigate to frontend: `cd ../frontend`
- [ ] Install npm packages: `npm ci` (the frontend build in Task 12 requires dev dependencies, so do not use `--production`)
#### Task 12: Build Applications
- [ ] Build indexer: `go build -o /usr/local/bin/explorer-indexer ./indexer/main.go`
- [ ] Build API: `go build -o /usr/local/bin/explorer-api ./api/rest/main.go`
- [ ] Build gateway: `go build -o /usr/local/bin/explorer-gateway ./api/gateway/main.go`
- [ ] Build search service: `go build -o /usr/local/bin/explorer-search ./api/search/main.go`
- [ ] Build frontend: `cd frontend && npm run build`
- [ ] Verify all binaries exist and are executable
---
### PHASE 3: DATABASE SETUP (10 tasks)
#### Task 13: Install PostgreSQL 16
- [ ] Add PostgreSQL APT repository
- [ ] Add PostgreSQL GPG key
- [ ] Update package list
- [ ] Install PostgreSQL 16: `apt install -y postgresql-16 postgresql-contrib-16`
#### Task 14: Install TimescaleDB
- [ ] Add TimescaleDB repository
- [ ] Add TimescaleDB GPG key
- [ ] Update package list
- [ ] Install TimescaleDB: `apt install -y timescaledb-2-postgresql-16`
- [ ] Run TimescaleDB tuner: `timescaledb-tune --quiet --yes`
- [ ] Restart PostgreSQL: `systemctl restart postgresql`
#### Task 15: Create Database and User
- [ ] Switch to postgres user: `su - postgres`
- [ ] Create database user: `CREATE USER explorer WITH PASSWORD '<SECURE_PASSWORD>';`
- [ ] Create database: `CREATE DATABASE explorer OWNER explorer;`
- [ ] Connect to database: `\c explorer`
- [ ] Enable TimescaleDB extension: `CREATE EXTENSION IF NOT EXISTS timescaledb;`
- [ ] Enable UUID extension: `CREATE EXTENSION IF NOT EXISTS "uuid-ossp";`
- [ ] Grant privileges: `GRANT ALL PRIVILEGES ON DATABASE explorer TO explorer;`
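
The psql statements above, collected as one sketch (run inside `psql` as the `postgres` superuser; `<SECURE_PASSWORD>` is a placeholder to substitute):

```sql
-- Run inside psql as the postgres superuser
CREATE USER explorer WITH PASSWORD '<SECURE_PASSWORD>';
CREATE DATABASE explorer OWNER explorer;
\c explorer
CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
GRANT ALL PRIVILEGES ON DATABASE explorer TO explorer;
```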
#### Task 16: Run Database Migrations
- [ ] Return to deployment user
- [ ] Navigate to backend: `cd /home/explorer/explorer-monorepo/backend`
- [ ] Run migrations: `go run database/migrations/migrate.go`
- [ ] Verify migrations completed successfully
- [ ] Check database tables exist
#### Task 17: Configure PostgreSQL
- [ ] Edit `postgresql.conf`: `/etc/postgresql/16/main/postgresql.conf`
- [ ] Set `max_connections = 100`
- [ ] Set `shared_buffers = 4GB`
- [ ] Set `effective_cache_size = 12GB`
- [ ] Set other performance tuning parameters
- [ ] Edit `pg_hba.conf` for local connections
- [ ] Restart PostgreSQL: `systemctl restart postgresql`
- [ ] Verify PostgreSQL is running: `systemctl status postgresql`
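
A minimal sketch of the `postgresql.conf` changes above. The `shared_buffers`/`effective_cache_size` values come from the task list and assume roughly 16 GB of RAM; `work_mem` and `maintenance_work_mem` are illustrative assumptions to tune per workload:

```ini
# /etc/postgresql/16/main/postgresql.conf (excerpt, sketch)
max_connections = 100
shared_buffers = 4GB            # ~25% of RAM
effective_cache_size = 12GB     # ~75% of RAM
work_mem = 32MB                 # assumption; adjust per query workload
maintenance_work_mem = 512MB    # assumption; speeds up VACUUM and index builds
```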
---
### PHASE 4: INFRASTRUCTURE SERVICES (6 tasks)
#### Task 18: Deploy Elasticsearch/OpenSearch
- [ ] Navigate to deployment directory: `cd /home/explorer/explorer-monorepo/deployment`
- [ ] Start Elasticsearch: `docker compose -f docker-compose.yml up -d elasticsearch`
- [ ] Wait for Elasticsearch to be ready
- [ ] Verify Elasticsearch: `curl http://localhost:9200`
#### Task 19: Deploy Redis
- [ ] Start Redis: `docker compose -f docker-compose.yml up -d redis`
- [ ] Verify Redis: `redis-cli ping`
- [ ] Verify both services running: `docker ps`
---
### PHASE 5: APPLICATION SERVICES (10 tasks)
#### Task 20: Create Environment Configuration
- [ ] Copy `.env.example` to `.env`: `cp .env.example .env`
- [ ] Edit `.env` file with production values
- [ ] Set database credentials
- [ ] Set RPC URLs and Chain ID
- [ ] Set API URLs and ports
- [ ] Verify all required variables are set
- [ ] Set proper file permissions: `chmod 600 .env`
#### Task 21: Create Systemd Service Files
- [ ] Create `/etc/systemd/system/explorer-indexer.service`
- [ ] Create `/etc/systemd/system/explorer-api.service`
- [ ] Create `/etc/systemd/system/explorer-frontend.service`
- [ ] Set proper ownership: `chown root:root /etc/systemd/system/explorer-*.service`
- [ ] Set proper permissions: `chmod 644 /etc/systemd/system/explorer-*.service`
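
As a sketch, `explorer-api.service` might look like the following. The binary path and user match the earlier tasks; the `EnvironmentFile` location and restart policy are assumptions, not the shipped unit (see `deployment/systemd/` for the real files):

```ini
# /etc/systemd/system/explorer-api.service (sketch)
[Unit]
Description=Explorer REST API
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
User=explorer
EnvironmentFile=/home/explorer/explorer-monorepo/.env
ExecStart=/usr/local/bin/explorer-api
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```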
#### Task 22: Enable and Start Services
- [ ] Reload systemd: `systemctl daemon-reload`
- [ ] Enable indexer: `systemctl enable explorer-indexer`
- [ ] Enable API: `systemctl enable explorer-api`
- [ ] Enable frontend: `systemctl enable explorer-frontend`
- [ ] Start indexer: `systemctl start explorer-indexer`
- [ ] Start API: `systemctl start explorer-api`
- [ ] Start frontend: `systemctl start explorer-frontend`
#### Task 23: Verify Services
- [ ] Check indexer status: `systemctl status explorer-indexer`
- [ ] Check API status: `systemctl status explorer-api`
- [ ] Check frontend status: `systemctl status explorer-frontend`
- [ ] Check indexer logs: `journalctl -u explorer-indexer -f`
- [ ] Check API logs: `journalctl -u explorer-api -f`
- [ ] Verify API responds: `curl http://localhost:8080/health`
- [ ] Verify frontend responds: `curl http://localhost:3000`
---
### PHASE 6: NGINX REVERSE PROXY (9 tasks)
#### Task 24: Install Nginx
- [ ] Install Nginx: `apt install -y nginx`
- [ ] Verify installation: `nginx -v`
#### Task 25: Create Nginx Configuration
- [ ] Copy config template: `cp deployment/nginx/explorer.conf /etc/nginx/sites-available/explorer`
- [ ] Edit configuration file (update domain if needed)
- [ ] Enable site: `ln -s /etc/nginx/sites-available/explorer /etc/nginx/sites-enabled/`
- [ ] Remove default site: `rm /etc/nginx/sites-enabled/default`
- [ ] Test configuration: `nginx -t`
- [ ] If test passes, reload Nginx: `systemctl reload nginx`
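
The shipped `deployment/nginx/explorer.conf` is authoritative; the sketch below only illustrates the proxy layout (the 8080/3000 upstream ports come from the verification steps, everything else is an assumption):

```nginx
# Sketch only; see deployment/nginx/explorer.conf for the real config
server {
    listen 80;
    server_name explorer.d-bis.org;

    # API requests go to the Go backend
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else goes to the frontend, with WebSocket upgrade support
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```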
#### Task 26: Configure Rate Limiting
- [ ] Verify rate limiting zones in config
- [ ] Adjust rate limits as needed
- [ ] Test rate limiting (optional)
#### Task 27: Test Nginx Proxy
- [ ] Verify Nginx is running: `systemctl status nginx`
- [ ] Test HTTP endpoint: `curl -I http://localhost`
- [ ] Test API proxy: `curl http://localhost/api/v1/blocks`
- [ ] Check Nginx access logs: `tail -f /var/log/nginx/explorer-access.log`
- [ ] Check Nginx error logs: `tail -f /var/log/nginx/explorer-error.log`
---
### PHASE 7: CLOUDFLARE CONFIGURATION (18 tasks)
#### Task 28: Set Up Cloudflare DNS Records
- [ ] Login to Cloudflare Dashboard
- [ ] Select domain
- [ ] Go to DNS → Records
- [ ] Add A record for `explorer` (or `@`):
- Type: A
- Name: explorer
- IPv4: [Your server IP] (if direct; if using the tunnel, skip this A record — `cloudflared` routes the hostname via a CNAME instead)
- Proxy: Proxied (orange cloud)
- TTL: Auto
- [ ] Add CNAME for `www`:
- Type: CNAME
- Name: www
- Target: explorer.d-bis.org
- Proxy: Proxied
- TTL: Auto
- [ ] Save DNS records
- [ ] Verify DNS propagation
#### Task 29: Configure Cloudflare SSL/TLS
- [ ] Go to SSL/TLS → Overview
- [ ] Set encryption mode to: **Full (strict)**
- [ ] Go to SSL/TLS → Edge Certificates
- [ ] Enable: "Always Use HTTPS"
- [ ] Enable: "Automatic HTTPS Rewrites"
- [ ] Enable: "Opportunistic Encryption"
- [ ] Enable: "TLS 1.3"
- [ ] Save settings
#### Task 30: Install Cloudflare Tunnel (cloudflared)
- [ ] Download cloudflared: `wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb`
- [ ] Install: `dpkg -i cloudflared-linux-amd64.deb`
- [ ] Verify: `cloudflared --version`
#### Task 31: Authenticate Cloudflare Tunnel
- [ ] Run: `cloudflared tunnel login`
- [ ] Follow browser authentication
- [ ] Verify authentication successful
#### Task 32: Create Cloudflare Tunnel
- [ ] Create tunnel: `cloudflared tunnel create explorer-tunnel`
- [ ] List tunnels: `cloudflared tunnel list`
- [ ] Note tunnel ID
#### Task 33: Configure Cloudflare Tunnel
- [ ] Create config directory: `mkdir -p /etc/cloudflared`
- [ ] Copy tunnel config template: `cp deployment/cloudflare/tunnel-config.yml /etc/cloudflared/config.yml`
- [ ] Edit config file with tunnel ID
- [ ] Update hostnames in config
- [ ] Verify config: `cloudflared tunnel --config /etc/cloudflared/config.yml ingress validate`
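
A minimal `/etc/cloudflared/config.yml` sketch. `<TUNNEL_ID>` is the placeholder noted from `cloudflared tunnel list`; the real template ships in `deployment/cloudflare/tunnel-config.yml`:

```yaml
# Sketch; see deployment/cloudflare/tunnel-config.yml for the template
tunnel: <TUNNEL_ID>
credentials-file: /root/.cloudflared/<TUNNEL_ID>.json

ingress:
  - hostname: explorer.d-bis.org
    service: http://localhost:80   # hand off to the local Nginx proxy
  - service: http_status:404       # required catch-all as the last rule
```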
#### Task 34: Install Cloudflare Tunnel as Service
- [ ] Install service: `cloudflared service install`
- [ ] Enable service: `systemctl enable cloudflared`
- [ ] Start service: `systemctl start cloudflared`
- [ ] Check status: `systemctl status cloudflared`
- [ ] View logs: `journalctl -u cloudflared -f`
#### Task 35: Verify Cloudflare Tunnel
- [ ] Check tunnel is running: `cloudflared tunnel info explorer-tunnel`
- [ ] Verify DNS routes are configured in Cloudflare dashboard
- [ ] Test domain access: `curl -I https://explorer.d-bis.org`
- [ ] Verify SSL certificate is active
#### Task 36: Configure Cloudflare WAF
- [ ] Go to Security → WAF
- [ ] Enable Cloudflare Managed Ruleset
- [ ] Enable OWASP Core Ruleset
- [ ] Create custom rate limiting rule (if needed)
- [ ] Save rules
#### Task 37: Configure Cloudflare Caching
- [ ] Go to Caching → Configuration
- [ ] Set Caching Level: Standard
- [ ] Go to Caching → Cache Rules
- [ ] Create rule for static assets (Cache everything, Edge TTL: 1 year)
- [ ] Create rule for API endpoints (Bypass cache)
- [ ] Create rule for frontend pages (Cache HTML for 5 minutes)
#### Task 38: Configure DDoS Protection
- [ ] Go to Security → DDoS
- [ ] Enable DDoS protection
- [ ] Configure protection level (Medium recommended)
- [ ] Review and adjust as needed
---
### PHASE 8: SECURITY HARDENING (12 tasks)
#### Task 39: Configure Firewall (UFW)
- [ ] Allow SSH first (so enabling the firewall cannot lock you out): `ufw allow 22/tcp`
- [ ] Enable UFW: `ufw --force enable`
- [ ] Allow HTTP: `ufw allow 80/tcp` (if direct connection)
- [ ] Allow HTTPS: `ufw allow 443/tcp` (if direct connection)
- [ ] Add Cloudflare IP ranges (if direct connection)
- [ ] Check status: `ufw status verbose`
#### Task 40: Configure Fail2ban
- [ ] Create Nginx jail config: `/etc/fail2ban/jail.d/nginx.conf`
- [ ] Configure nginx-limit-req jail
- [ ] Configure nginx-botsearch jail
- [ ] Restart fail2ban: `systemctl restart fail2ban`
- [ ] Check status: `fail2ban-client status`
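
A sketch of the jail file described above (log paths match the Nginx tasks; the thresholds are illustrative assumptions — the shipped configuration lives in `deployment/fail2ban/`):

```ini
# /etc/fail2ban/jail.d/nginx.conf (sketch)
[nginx-limit-req]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/explorer-error.log
maxretry = 10       # assumption; bursts beyond the rate limit
findtime = 600
bantime  = 3600

[nginx-botsearch]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/explorer-access.log
maxretry = 2
```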
#### Task 41: Configure Automatic Updates
- [ ] Configure `/etc/apt/apt.conf.d/50unattended-upgrades`
- [ ] Enable security updates only
- [ ] Disable automatic reboot
- [ ] Enable service: `systemctl enable unattended-upgrades`
- [ ] Start service: `systemctl start unattended-upgrades`
#### Task 42: Configure Log Rotation
- [ ] Create logrotate config: `/etc/logrotate.d/explorer`
- [ ] Set rotation schedule (daily)
- [ ] Set retention (30 days)
- [ ] Configure compression
- [ ] Test: `logrotate -d /etc/logrotate.d/explorer`
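
The logrotate config might look like this sketch (the application log path is an assumption; adjust it to wherever the services actually write):

```
# /etc/logrotate.d/explorer (sketch)
/var/log/explorer/*.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```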
#### Task 43: Set Up Backup Script
- [ ] Create backup script: `/usr/local/bin/explorer-backup.sh`
- [ ] Configure database backup
- [ ] Configure config file backup
- [ ] Set cleanup of old backups
- [ ] Make executable: `chmod +x /usr/local/bin/explorer-backup.sh`
- [ ] Test backup script manually
- [ ] Add to crontab: Daily at 2 AM
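
The "daily at 2 AM" entry as a crontab sketch (the log redirect path is an assumption):

```
# crontab -e (as root): run the backup daily at 02:00
0 2 * * * /usr/local/bin/explorer-backup.sh >> /var/log/explorer-backup.log 2>&1
```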
#### Task 44: Secure Environment File
- [ ] Set proper permissions: `chmod 600 /home/explorer/explorer-monorepo/.env`
- [ ] Verify only owner can read: `ls -l .env`
- [ ] Add .env to .gitignore (verify)
#### Task 45: Configure SSH Hardening
- [ ] Edit `/etc/ssh/sshd_config`
- [ ] Disable root login: `PermitRootLogin no`
- [ ] Disable password authentication (use keys only): `PasswordAuthentication no`
- [ ] Set SSH port (optional, change from 22)
- [ ] Restart SSH: `systemctl restart sshd`
- [ ] Test SSH connection before closing session
---
### PHASE 9: MONITORING & MAINTENANCE (8 tasks)
#### Task 46: Create Health Check Script
- [ ] Create script: `/usr/local/bin/explorer-health-check.sh`
- [ ] Configure API health check
- [ ] Configure service restart on failure
- [ ] Add alert mechanism (email/Slack)
- [ ] Make executable: `chmod +x /usr/local/bin/explorer-health-check.sh`
- [ ] Test script manually
#### Task 47: Configure Health Check Cron Job
- [ ] Add to crontab: Every 5 minutes
- [ ] Verify cron job added: `crontab -l`
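
After adding the entry, `crontab -l` should show something like this sketch (log path is an assumption):

```
*/5 * * * * /usr/local/bin/explorer-health-check.sh >> /var/log/explorer-health.log 2>&1
```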
#### Task 48: Set Up Log Monitoring
- [ ] Install logwatch: `apt install -y logwatch`
- [ ] Configure logwatch
- [ ] Set up daily log summaries (optional)
#### Task 49: Configure Cloudflare Analytics
- [ ] Access Cloudflare Analytics dashboard
- [ ] Set up custom dashboards
- [ ] Configure alert thresholds
#### Task 50: Set Up Alerts
- [ ] Configure email alerts in Cloudflare
- [ ] Set up high error rate alerts
- [ ] Set up DDoS detection alerts
- [ ] Set up certificate expiration alerts
- [ ] Test alert mechanism
---
### POST-DEPLOYMENT VERIFICATION (13 tasks)
#### Task 51: Verify All Services
- [ ] Check all systemd services: `systemctl status explorer-*`
- [ ] Verify no service errors
- [ ] Check service logs for warnings
#### Task 52: Verify Database
- [ ] Test database connection: `psql -U explorer -d explorer -h localhost`
- [ ] Check database tables exist
- [ ] Verify migrations applied
#### Task 53: Verify Infrastructure Services
- [ ] Check Elasticsearch: `curl http://localhost:9200`
- [ ] Check Redis: `redis-cli ping`
- [ ] Check Docker containers: `docker ps`
#### Task 54: Verify API
- [ ] Test health endpoint: `curl https://explorer.d-bis.org/api/health`
- [ ] Test blocks endpoint: `curl https://explorer.d-bis.org/api/v1/blocks`
- [ ] Test transactions endpoint
- [ ] Test search endpoint
#### Task 55: Verify Frontend
- [ ] Open browser: `https://explorer.d-bis.org`
- [ ] Verify homepage loads
- [ ] Test navigation
- [ ] Verify static assets load
#### Task 56: Verify DNS
- [ ] Check DNS resolution: `dig explorer.d-bis.org`
- [ ] Verify DNS points to Cloudflare IPs
- [ ] Test from multiple locations
#### Task 57: Verify SSL/TLS
- [ ] Check SSL certificate: `openssl s_client -connect explorer.d-bis.org:443 -servername explorer.d-bis.org`
- [ ] Verify certificate is valid
- [ ] Verify TLS 1.3 is enabled
- [ ] Check SSL Labs rating (optional): https://www.ssllabs.com/ssltest/
#### Task 58: Verify Cloudflare Tunnel
- [ ] Check tunnel status: `systemctl status cloudflared`
- [ ] View tunnel info: `cloudflared tunnel info explorer-tunnel`
- [ ] Check tunnel logs for errors
#### Task 59: Verify Nginx
- [ ] Check Nginx status: `systemctl status nginx`
- [ ] Test configuration: `nginx -t`
- [ ] Check access logs
- [ ] Check error logs
#### Task 60: Verify Security
- [ ] Test firewall: `ufw status`
- [ ] Test fail2ban: `fail2ban-client status`
- [ ] Verify security headers present
- [ ] Test rate limiting (optional)
#### Task 61: Verify Performance
- [ ] Test response times
- [ ] Verify caching working
- [ ] Check Cloudflare cache hit ratio
- [ ] Monitor resource usage
#### Task 62: Verify Monitoring
- [ ] Test health check script
- [ ] Verify cron jobs running
- [ ] Check log rotation working
- [ ] Verify backups running
#### Task 63: Documentation
- [ ] Document deployed version
- [ ] Document configuration changes
- [ ] Document known issues
- [ ] Update deployment checklist
---
### OPTIONAL ENHANCEMENTS (8 tasks)
#### Task 64: Set Up Let's Encrypt Certificates (Optional)
- [ ] Install certbot: `apt install -y certbot python3-certbot-nginx`
- [ ] Obtain certificate: `certbot --nginx -d explorer.d-bis.org -d www.explorer.d-bis.org`
- [ ] Test renewal: `certbot renew --dry-run`
- [ ] Set up auto-renewal cron job
#### Task 65: Configure CDN for Static Assets
- [ ] Configure Cloudflare cache rules
- [ ] Set up custom cache headers
- [ ] Verify CDN serving static assets
#### Task 66: Set Up Monitoring Dashboard (Optional)
- [ ] Install Prometheus (optional)
- [ ] Install Grafana (optional)
- [ ] Configure dashboards
- [ ] Set up alerts
#### Task 67: Configure Database Replication (Optional)
- [ ] Set up read replica
- [ ] Configure connection pooling
- [ ] Update application config
#### Task 68: Set Up Load Balancing (Optional)
- [ ] Configure multiple API instances
- [ ] Set up load balancer
- [ ] Configure health checks
#### Task 69: Configure Auto-Scaling (Optional)
- [ ] Set up monitoring metrics
- [ ] Configure scaling rules
- [ ] Test auto-scaling
#### Task 70: Set Up Disaster Recovery
- [ ] Configure automated backups
- [ ] Set up backup verification
- [ ] Document recovery procedures
- [ ] Test recovery process
#### Task 71: Performance Optimization
- [ ] Optimize database queries
- [ ] Configure Redis caching
- [ ] Optimize Nginx config
- [ ] Review and optimize Cloudflare settings
---
## 📊 Deployment Summary
- **Total Tasks**: 71
- **Required Tasks**: 63
- **Optional Tasks**: 8
- **Estimated Time**: 6-8 hours (first deployment)
## 🚀 Quick Start Commands
```bash
# 1. Run automated deployment script (Phase 1-2)
./deployment/scripts/deploy-lxc.sh
# 2. Follow manual steps for remaining phases
# See DEPLOYMENT_GUIDE.md for detailed instructions
# 3. Use checklist to track progress
# See DEPLOYMENT_CHECKLIST.md
```
## 📝 Notes
- Tasks marked with ⚠️ require careful attention
- Tasks marked with ✅ can be automated
- Always test in staging before production
- Keep backups before major changes
- Document any deviations from standard procedure
---
**Last Updated**: 2024-12-23
**Version**: 1.0.0

@@ -0,0 +1,126 @@
# Production Environment Configuration Template
# Copy this to /home/explorer/explorer-monorepo/.env and fill in values
# ============================================
# Database Configuration
# ============================================
DB_HOST=localhost
DB_PORT=5432
DB_USER=explorer
DB_PASSWORD=CHANGE_THIS_SECURE_PASSWORD
DB_NAME=explorer
DB_MAX_CONNECTIONS=50
DB_MAX_IDLE_TIME=5m
DB_CONN_MAX_LIFETIME=1h
# Read Replica (optional)
DB_REPLICA_HOST=
DB_REPLICA_PORT=5432
DB_REPLICA_USER=
DB_REPLICA_PASSWORD=
DB_REPLICA_NAME=
# ============================================
# RPC Configuration
# ============================================
# Public RPC Endpoints (ChainID 138) - Internal IP Addresses
# Using internal IP for direct connection (no proxy overhead)
RPC_URL=http://192.168.11.221:8545
WS_URL=ws://192.168.11.221:8546
CHAIN_ID=138
# Alternative RPC Endpoints (if needed)
# Public RPC (via domain/proxy): https://rpc-http-pub.d-bis.org
# Public WS (via domain/proxy): wss://rpc-ws-pub.d-bis.org
# Private RPC (internal IP): http://192.168.11.211:8545
# Private WS (internal IP): ws://192.168.11.211:8546
# Private RPC (via domain/proxy): https://rpc-http-prv.d-bis.org
# Private WS (via domain/proxy): wss://rpc-ws-prv.d-bis.org
# ============================================
# Search Configuration (Elasticsearch/OpenSearch)
# ============================================
SEARCH_URL=http://localhost:9200
SEARCH_USERNAME=
SEARCH_PASSWORD=
SEARCH_USE_SSL=false
SEARCH_INDEX_PREFIX=explorer-prod
# ============================================
# API Configuration
# ============================================
PORT=8080
API_GATEWAY_PORT=8081
CHAIN_ID=138
# ============================================
# Frontend Configuration
# ============================================
NEXT_PUBLIC_API_URL=https://explorer.d-bis.org/api
NEXT_PUBLIC_CHAIN_ID=138
# ============================================
# Redis Configuration
# ============================================
REDIS_URL=redis://localhost:6379
# ============================================
# Message Queue Configuration (Optional)
# ============================================
KAFKA_BROKERS=localhost:9092
# or
RABBITMQ_URL=amqp://guest:guest@localhost:5672/
# ============================================
# Cloudflare Configuration
# ============================================
CLOUDFLARE_API_TOKEN=
CLOUDFLARE_ZONE_ID=
CLOUDFLARE_ACCOUNT_ID=
# ============================================
# External API Keys (for integrations)
# ============================================
# DEX Aggregators
ONEINCH_API_KEY=
ZEROX_API_KEY=
PARASWAP_API_KEY=
# KYC Providers
JUMIO_API_KEY=
JUMIO_API_SECRET=
ONFIDO_API_KEY=
# Payment Rails
MOONPAY_API_KEY=
RAMP_API_KEY=
# WalletConnect
WALLETCONNECT_PROJECT_ID=
# Soul Machines (VTM)
SOUL_MACHINES_API_KEY=
SOUL_MACHINES_API_SECRET=
# ============================================
# Security
# ============================================
JWT_SECRET=CHANGE_THIS_JWT_SECRET
ENCRYPTION_KEY=CHANGE_THIS_ENCRYPTION_KEY_32_BYTES
# ============================================
# Monitoring (Optional)
# ============================================
SENTRY_DSN=
DATADOG_API_KEY=
PROMETHEUS_ENABLED=false
# ============================================
# Feature Flags
# ============================================
ENABLE_GRAPHQL=true
ENABLE_WEBSOCKET=true
ENABLE_ANALYTICS=true
ENABLE_VTM=false
ENABLE_XR=false

deployment/INDEX.md Normal file
View File

@@ -0,0 +1,196 @@
# Deployment Files Index
Complete index of all deployment files and their purposes.
## 📚 Documentation
| File | Purpose | Lines |
|------|---------|-------|
| `DEPLOYMENT_GUIDE.md` | Complete step-by-step deployment guide | 1,079 |
| `DEPLOYMENT_TASKS.md` | Detailed 71-task checklist | 561 |
| `DEPLOYMENT_CHECKLIST.md` | Interactive deployment checklist | 204 |
| `DEPLOYMENT_SUMMARY.md` | Deployment package summary | - |
| `QUICK_DEPLOY.md` | Quick command reference | - |
| `README.md` | Documentation overview | - |
| `INDEX.md` | This file | - |
## 🔧 Scripts
| Script | Purpose | Executable |
|--------|---------|------------|
| `scripts/deploy-lxc.sh` | Automated LXC container setup | ✅ |
| `scripts/build-all.sh` | Build all applications | ✅ |
| `scripts/install-services.sh` | Install systemd service files | ✅ |
| `scripts/setup-nginx.sh` | Configure Nginx | ✅ |
| `scripts/setup-cloudflare-tunnel.sh` | Setup Cloudflare Tunnel | ✅ |
| `scripts/setup-firewall.sh` | Configure UFW firewall | ✅ |
| `scripts/setup-fail2ban.sh` | Configure Fail2ban | ✅ |
| `scripts/setup-backup.sh` | Setup backup system | ✅ |
| `scripts/setup-health-check.sh` | Setup health monitoring | ✅ |
| `scripts/verify-deployment.sh` | Verify deployment | ✅ |
| `scripts/full-deploy.sh` | Full automated deployment | ✅ |
## ⚙️ Configuration Files
### Nginx
- `nginx/explorer.conf` - Complete Nginx reverse proxy configuration
### Cloudflare
- `cloudflare/tunnel-config.yml` - Cloudflare Tunnel configuration template
### Systemd Services
- `systemd/explorer-indexer.service` - Indexer service file
- `systemd/explorer-api.service` - API service file
- `systemd/explorer-frontend.service` - Frontend service file
- `systemd/cloudflared.service` - Cloudflare Tunnel service file
### Fail2ban
- `fail2ban/nginx.conf` - Nginx filter configuration
- `fail2ban/jail.local` - Jail configuration
### Environment
- `ENVIRONMENT_TEMPLATE.env` - Environment variables template
### Docker
- `docker-compose.yml` - Docker Compose for infrastructure services
### Kubernetes
- `kubernetes/indexer-deployment.yaml` - Kubernetes deployment example
## 📋 Usage Guide
### For First-Time Deployment
1. **Read**: `DEPLOYMENT_GUIDE.md` - Complete walkthrough
2. **Track**: `DEPLOYMENT_TASKS.md` - Follow 71 tasks
3. **Check**: `DEPLOYMENT_CHECKLIST.md` - Mark completed items
4. **Reference**: `QUICK_DEPLOY.md` - Quick commands
### For Automated Deployment
```bash
# Full automated deployment
sudo ./deployment/scripts/full-deploy.sh
# Or step-by-step
./deployment/scripts/deploy-lxc.sh
./deployment/scripts/build-all.sh
./deployment/scripts/install-services.sh
./deployment/scripts/setup-nginx.sh
./deployment/scripts/setup-cloudflare-tunnel.sh
```
### For Verification
```bash
# Verify deployment
./deployment/scripts/verify-deployment.sh
```
## 🗂️ File Organization
```
deployment/
├── Documentation (7 files)
│ ├── DEPLOYMENT_GUIDE.md
│ ├── DEPLOYMENT_TASKS.md
│ ├── DEPLOYMENT_CHECKLIST.md
│ ├── DEPLOYMENT_SUMMARY.md
│ ├── QUICK_DEPLOY.md
│ ├── README.md
│ └── INDEX.md
├── Scripts (11 files)
│ └── scripts/*.sh
├── Configuration (10 files)
│ ├── nginx/explorer.conf
│ ├── cloudflare/tunnel-config.yml
│ ├── systemd/*.service (4 files)
│ ├── fail2ban/*.conf (2 files)
│ ├── ENVIRONMENT_TEMPLATE.env
│ └── docker-compose.yml
└── Kubernetes (1 file)
└── kubernetes/indexer-deployment.yaml
```
## ✅ Quick Reference
### Essential Commands
```bash
# Build applications
./deployment/scripts/build-all.sh
# Install services
sudo ./deployment/scripts/install-services.sh
sudo systemctl enable explorer-indexer explorer-api explorer-frontend
sudo systemctl start explorer-indexer explorer-api explorer-frontend
# Setup Nginx
sudo ./deployment/scripts/setup-nginx.sh
# Setup Cloudflare Tunnel
sudo ./deployment/scripts/setup-cloudflare-tunnel.sh
# Verify deployment
./deployment/scripts/verify-deployment.sh
```
### Service Management
```bash
# Check status
systemctl status explorer-indexer explorer-api explorer-frontend
# View logs
journalctl -u explorer-api -f
# Restart service
systemctl restart explorer-api
```
### Health Checks
```bash
# API health
curl http://localhost:8080/health
# Through Nginx
curl http://localhost/api/health
# Through Cloudflare
curl https://explorer.d-bis.org/api/health
```
## 📊 Statistics
- **Total Files**: 28
- **Documentation**: 7 files (1,844+ lines)
- **Scripts**: 11 files (all executable)
- **Configuration**: 10 files
- **Total Tasks**: 71
- **Estimated Deployment Time**: 6-8 hours
## 🎯 Deployment Paths
### Path 1: Full Automated
```bash
sudo ./deployment/scripts/full-deploy.sh
```
### Path 2: Step-by-Step Manual
1. Follow `DEPLOYMENT_GUIDE.md`
2. Use `DEPLOYMENT_TASKS.md` for task list
3. Check off in `DEPLOYMENT_CHECKLIST.md`
### Path 3: Hybrid (Recommended)
1. Run automated scripts for setup
2. Manual configuration for critical steps
3. Verify with `verify-deployment.sh`
---
**All deployment files are ready and documented!**

deployment/QUICK_DEPLOY.md Normal file
View File

@@ -0,0 +1,138 @@
# Quick Deployment Reference
Quick command reference for deploying the platform.
## One-Command Setup (Partial)
```bash
# Run automated script (sets up container and dependencies)
./deployment/scripts/deploy-lxc.sh
```
## Essential Commands
### Container Management
```bash
# Create container
pct create 100 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
--hostname explorer-prod --memory 16384 --cores 4 --unprivileged 0
# Start/Stop
pct start 100
pct stop 100
pct enter 100
```
### Services
```bash
# Start all services
systemctl start explorer-indexer explorer-api explorer-frontend
# Check status
systemctl status explorer-indexer
journalctl -u explorer-indexer -f
# Restart
systemctl restart explorer-api
```
### Database
```bash
# Run migrations
cd /home/explorer/explorer-monorepo/backend
go run database/migrations/migrate.go
# Backup
pg_dump -U explorer explorer | gzip > backup.sql.gz
```
### Nginx
```bash
# Test config
nginx -t
# Reload
systemctl reload nginx
# Check logs
tail -f /var/log/nginx/explorer-error.log
```
### Cloudflare Tunnel
```bash
# Create tunnel
cloudflared tunnel create explorer-tunnel
# Run tunnel
cloudflared tunnel --config /etc/cloudflared/config.yml run
# Service management
systemctl start cloudflared
systemctl status cloudflared
```
### Health Checks
```bash
# API health
curl http://localhost:8080/health
# Frontend
curl http://localhost:3000
# Through Nginx
curl http://localhost/api/health
# Through Cloudflare
curl https://explorer.d-bis.org/api/health
```
## File Locations
- **Config**: `/home/explorer/explorer-monorepo/.env`
- **Services**: `/etc/systemd/system/explorer-*.service`
- **Nginx**: `/etc/nginx/sites-available/explorer`
- **Tunnel**: `/etc/cloudflared/config.yml`
- **Logs**: `/var/log/explorer/` and `journalctl -u explorer-*`
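Before starting services, replace the `JWT_SECRET` and `ENCRYPTION_KEY` placeholders in `.env`. A minimal sketch, assuming `openssl` is available (adjust `-hex 32` if the application expects raw bytes rather than a 64-character hex string):

```bash
# Generate random secrets for the .env placeholders
openssl rand -hex 32   # JWT_SECRET
openssl rand -hex 32   # ENCRYPTION_KEY
```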
## Common Issues
### Service won't start
```bash
journalctl -u explorer-api --since "10 minutes ago"
systemctl restart explorer-api
```
### Database connection failed
```bash
sudo -u postgres psql
\c explorer
\dt # List tables
```
### Nginx 502 Bad Gateway
```bash
# Check if API is running
curl http://localhost:8080/health
# Check Nginx error log
tail -f /var/log/nginx/explorer-error.log
```
### Cloudflare Tunnel not working
```bash
cloudflared tunnel info explorer-tunnel
journalctl -u cloudflared -f
```
## Emergency Rollback
```bash
# Stop all services
systemctl stop explorer-indexer explorer-api explorer-frontend
# Restore from backup
gunzip < backup.sql.gz | psql -U explorer explorer
# Restart services
systemctl start explorer-indexer explorer-api explorer-frontend
```

deployment/README.md Normal file
View File

@@ -0,0 +1,118 @@
# Deployment Documentation
Complete deployment documentation for the ChainID 138 Explorer Platform.
## Documentation Files
### 📘 DEPLOYMENT_GUIDE.md
**Complete step-by-step guide** with detailed instructions for:
- LXC container setup
- Application installation
- Database configuration
- Nginx reverse proxy setup
- Cloudflare DNS, SSL, and Tunnel configuration
- Security hardening
- Monitoring setup
**Use this for**: Full deployment walkthrough
### 📋 DEPLOYMENT_TASKS.md
**Detailed task checklist** with all 71 tasks organized by phase:
- Pre-deployment (5 tasks)
- Phase 1: LXC Setup (8 tasks)
- Phase 2: Application Installation (12 tasks)
- Phase 3: Database Setup (10 tasks)
- Phase 4: Infrastructure Services (6 tasks)
- Phase 5: Application Services (10 tasks)
- Phase 6: Nginx Reverse Proxy (9 tasks)
- Phase 7: Cloudflare Configuration (18 tasks)
- Phase 8: Security Hardening (12 tasks)
- Phase 9: Monitoring (8 tasks)
- Post-Deployment Verification (13 tasks)
- Optional Enhancements (8 tasks)
**Use this for**: Tracking deployment progress
### ✅ DEPLOYMENT_CHECKLIST.md
**Interactive checklist** for tracking deployment completion.
**Use this for**: Marking off completed items
### ⚡ QUICK_DEPLOY.md
**Quick reference** with essential commands and common issues.
**Use this for**: Quick command lookup during deployment
## Configuration Files
### nginx/explorer.conf
Complete Nginx configuration with:
- Rate limiting
- SSL/TLS settings
- Reverse proxy configuration
- Security headers
- Caching rules
- WebSocket support
### cloudflare/tunnel-config.yml
Cloudflare Tunnel configuration template.
### scripts/deploy-lxc.sh
Automated deployment script for initial setup.
## Deployment Architecture
```
Internet
    ↓
Cloudflare (DNS, SSL, WAF, CDN)
    ↓
Cloudflare Tunnel (optional)
    ↓
LXC Container
├── Nginx (Reverse Proxy)
│ ├── → Frontend (Port 3000)
│ └── → API (Port 8080)
├── PostgreSQL + TimescaleDB
├── Elasticsearch
├── Redis
└── Application Services
├── Indexer
├── API Server
└── Frontend Server
```
## Quick Start
1. **Read the deployment guide**: `DEPLOYMENT_GUIDE.md`
2. **Use the task list**: `DEPLOYMENT_TASKS.md`
3. **Track progress**: `DEPLOYMENT_CHECKLIST.md`
4. **Quick reference**: `QUICK_DEPLOY.md`
## Prerequisites
- Proxmox VE with LXC support
- Cloudflare account with domain
- 16GB+ RAM, 4+ CPU cores, 100GB+ storage
- Ubuntu 22.04 LTS template
- SSH access to Proxmox host
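A rough way to check the resource prerequisites above on the target host (a sketch; thresholds mirror the list, and paths assume a standard Linux environment):

```bash
#!/bin/sh
# Check RAM (>=16 GB), CPU cores (>=4), and free root-disk space (>=100 GB)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
cores=$(nproc)
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')
if [ "$mem_gb" -ge 16 ] && [ "$cores" -ge 4 ] && [ "$disk_gb" -ge 100 ]; then
    echo "prerequisites OK"
else
    echo "insufficient: ${mem_gb}GB RAM, ${cores} cores, ${disk_gb}GB free"
fi
```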
## Estimated Time
- **First deployment**: 6-8 hours
- **Subsequent deployments**: 2-3 hours
- **Updates**: 30-60 minutes
## Support
For issues during deployment:
1. Check `QUICK_DEPLOY.md` for common issues
2. Review service logs: `journalctl -u <service-name> -f`
3. Check Nginx logs: `tail -f /var/log/nginx/explorer-error.log`
4. Verify Cloudflare tunnel: `systemctl status cloudflared`
## Version
**Version**: 1.0.0
**Last Updated**: 2024-12-23

View File

@@ -0,0 +1,31 @@
# Cloudflare Tunnel Configuration
# Place this file at: /etc/cloudflared/config.yml
tunnel: <YOUR_TUNNEL_ID>
credentials-file: /etc/cloudflared/<YOUR_TUNNEL_ID>.json
# Ingress rules
ingress:
# Main domain - API and Frontend
- hostname: explorer.d-bis.org
service: http://localhost:80
originRequest:
noHappyEyeballs: true
connectTimeout: 30s
tcpKeepAlive: 30s
keepAliveTimeout: 90s
keepAliveConnections: 100
# WWW redirect handled by Cloudflare
- hostname: www.explorer.d-bis.org
service: http://localhost:80
# Catch-all rule
- service: http_status:404
# Metrics (optional)
metrics: 0.0.0.0:9090
# Logging
loglevel: info
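# Hedged follow-up (not part of this file's schema): after creating the tunnel,
# route DNS for the hostnames above to it. The tunnel name "explorer-tunnel" is
# assumed from the deployment scripts:
#   cloudflared tunnel route dns explorer-tunnel explorer.d-bis.org
#   cloudflared tunnel route dns explorer-tunnel www.explorer.d-bis.org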

View File

@@ -0,0 +1,206 @@
version: '3.8'
services:
postgres:
image: timescale/timescaledb:latest-pg16
environment:
POSTGRES_USER: explorer
POSTGRES_PASSWORD: ${DB_PASSWORD:-changeme}
POSTGRES_DB: explorer
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U explorer"]
interval: 10s
timeout: 5s
retries: 5
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
environment:
- discovery.type=single-node
- xpack.security.enabled=false
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ports:
- "9200:9200"
volumes:
- es_data:/usr/share/elasticsearch/data
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
interval: 30s
timeout: 10s
retries: 5
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
indexer:
build:
context: ../backend
dockerfile: Dockerfile.indexer
environment:
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=explorer
- DB_PASSWORD=${DB_PASSWORD:-changeme}
- DB_NAME=explorer
- RPC_URL=${RPC_URL:-http://localhost:8545}
- WS_URL=${WS_URL:-ws://localhost:8546}
- CHAIN_ID=138
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD", "pg_isready", "-U", "explorer", "-h", "postgres"]
interval: 30s
timeout: 10s
retries: 3
deploy:
resources:
limits:
cpus: '2'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
restart: unless-stopped
labels:
- "com.solacescanscout.name=indexer"
- "com.solacescanscout.version=1.0.0"
- "com.solacescanscout.service=block-indexer"
api:
build:
context: ../backend
dockerfile: Dockerfile.api
environment:
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=explorer
- DB_PASSWORD=${DB_PASSWORD:-changeme}
- DB_NAME=explorer
- PORT=8080
- CHAIN_ID=138
- REDIS_URL=redis://redis:6379
ports:
- "8080:8080"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
deploy:
resources:
limits:
cpus: '2'
memory: 2G
reservations:
cpus: '1'
memory: 1G
restart: unless-stopped
labels:
- "com.solacescanscout.name=api"
- "com.solacescanscout.version=1.0.0"
- "com.solacescanscout.service=rest-api"
frontend:
build:
context: ../frontend
dockerfile: Dockerfile
environment:
- NEXT_PUBLIC_API_URL=http://localhost:8080
- NEXT_PUBLIC_CHAIN_ID=138
ports:
- "3000:3000"
depends_on:
api:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
deploy:
resources:
limits:
cpus: '1'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
restart: unless-stopped
labels:
- "com.solacescanscout.name=frontend"
- "com.solacescanscout.version=1.0.0"
- "com.solacescanscout.service=web-frontend"
virtual-banker-api:
build:
context: ../virtual-banker/backend
dockerfile: ../virtual-banker/deployment/Dockerfile.backend
environment:
- DATABASE_URL=postgres://explorer:${DB_PASSWORD:-changeme}@postgres:5432/explorer?sslmode=disable
- REDIS_URL=redis://redis:6379
- PORT=8081
ports:
- "8081:8081"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8081/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
deploy:
resources:
limits:
cpus: '2'
memory: 2G
reservations:
cpus: '1'
memory: 1G
restart: unless-stopped
labels:
- "com.solacescanscout.name=virtual-banker-api"
- "com.solacescanscout.version=1.0.0"
- "com.solacescanscout.service=virtual-banker-api"
virtual-banker-widget:
build:
context: ../virtual-banker/widget
dockerfile: ../virtual-banker/deployment/Dockerfile.widget
ports:
- "8082:80"
restart: unless-stopped
labels:
- "com.solacescanscout.name=virtual-banker-widget"
- "com.solacescanscout.version=1.0.0"
- "com.solacescanscout.service=virtual-banker-widget-cdn"
volumes:
postgres_data:
es_data:
redis_data:

View File

@@ -0,0 +1,29 @@
# Fail2ban configuration for Explorer platform
# Place in: /etc/fail2ban/jail.d/explorer.conf
[nginx-limit-req]
enabled = true
port = http,https
logpath = /var/log/nginx/explorer-error.log
maxretry = 10
findtime = 600
bantime = 3600
action = %(action_)s
[nginx-botsearch]
enabled = true
port = http,https
logpath = /var/log/nginx/explorer-access.log
maxretry = 2
findtime = 600
bantime = 86400
action = %(action_)s
[sshd]
enabled = true
port = ssh
logpath = %(sshd_log)s
maxretry = 5
findtime = 600
bantime = 3600

View File

@@ -0,0 +1,7 @@
# Fail2ban filter for Nginx rate limiting
# Place in: /etc/fail2ban/filter.d/nginx-limit-req.conf
[Definition]
failregex = ^.*limiting requests, excess:.*by zone.*client: <HOST>.*$
ignoreregex =
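# Sketch for verifying this regex against real logs (fail2ban-regex ships with
# fail2ban; the log path is assumed from the jail configuration):
#   fail2ban-regex /var/log/nginx/explorer-error.log /etc/fail2ban/filter.d/nginx-limit-req.conf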

View File

@@ -0,0 +1,54 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: indexer
namespace: explorer
spec:
replicas: 2
selector:
matchLabels:
app: indexer
template:
metadata:
labels:
app: indexer
spec:
containers:
- name: indexer
image: explorer/indexer:latest
env:
- name: DB_HOST
valueFrom:
secretKeyRef:
name: db-credentials
key: host
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: password
- name: RPC_URL
valueFrom:
configMapKeyRef:
name: indexer-config
key: rpc_url
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "2Gi"
cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
name: indexer
namespace: explorer
spec:
selector:
app: indexer
ports:
- port: 8080
targetPort: 8080

View File

@@ -0,0 +1,207 @@
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=general_limit:10m rate=50r/s;
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
# Upstream servers
upstream explorer_api {
server 127.0.0.1:8080;
keepalive 32;
}
upstream explorer_frontend {
server 127.0.0.1:3000;
keepalive 32;
}
# Redirect HTTP to HTTPS
server {
listen 80;
listen [::]:80;
server_name explorer.d-bis.org www.explorer.d-bis.org;
# Allow Let's Encrypt validation
location /.well-known/acme-challenge/ {
root /var/www/html;
}
# Redirect all other traffic to HTTPS
location / {
return 301 https://$server_name$request_uri;
}
}
# Main HTTPS server
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name explorer.d-bis.org www.explorer.d-bis.org;
# SSL Configuration (Cloudflare handles SSL, but we can add local certs too)
# ssl_certificate /etc/letsencrypt/live/explorer.d-bis.org/fullchain.pem;
# ssl_certificate_key /etc/letsencrypt/live/explorer.d-bis.org/privkey.pem;
# ssl_protocols TLSv1.2 TLSv1.3;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
# Content Security Policy (adjust as needed)
# CSP: unsafe-eval required by ethers.js v5 UMD from CDN
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://cdn.jsdelivr.net https://unpkg.com https://cdnjs.cloudflare.com; style-src 'self' 'unsafe-inline' https://cdnjs.cloudflare.com; img-src 'self' data: https:; font-src 'self' data: https://cdnjs.cloudflare.com; connect-src 'self' https://api.cloudflare.com https://explorer.d-bis.org wss://explorer.d-bis.org https://rpc-http-pub.d-bis.org wss://rpc-ws-pub.d-bis.org http://192.168.11.221:8545 ws://192.168.11.221:8546;" always;
# Logging
access_log /var/log/nginx/explorer-access.log combined buffer=32k flush=5m;
error_log /var/log/nginx/explorer-error.log warn;
# Client settings
client_max_body_size 10M;
client_body_timeout 60s;
client_header_timeout 60s;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_min_length 1000;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml+rss
application/rss+xml
font/truetype
font/opentype
application/vnd.ms-fontobject
image/svg+xml;
# Brotli compression (if available)
# brotli on;
# brotli_comp_level 6;
# brotli_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
# Frontend
location / {
limit_req zone=general_limit burst=20 nodelay;
limit_conn conn_limit 10;
proxy_pass http://explorer_frontend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 300s;
proxy_connect_timeout 75s;
proxy_send_timeout 300s;
# Buffering
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
}
# API endpoints
location /api/ {
limit_req zone=api_limit burst=20 nodelay;
limit_conn conn_limit 5;
proxy_pass http://explorer_api;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header Connection "";
proxy_read_timeout 300s;
proxy_connect_timeout 75s;
proxy_send_timeout 300s;
# Disable buffering for API responses
proxy_buffering off;
# CORS headers (Cloudflare will also add these)
add_header Access-Control-Allow-Origin "*" always;
add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
add_header Access-Control-Allow-Headers "Content-Type, X-API-Key, Authorization" always;
# Handle preflight
if ($request_method = OPTIONS) {
add_header Access-Control-Allow-Origin "*";
add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
add_header Access-Control-Allow-Headers "Content-Type, X-API-Key, Authorization";
add_header Access-Control-Max-Age 1728000;
add_header Content-Type "text/plain; charset=utf-8";
add_header Content-Length 0;
return 204;
}
}
# WebSocket support
location /ws {
proxy_pass http://explorer_api;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
proxy_connect_timeout 75s;
}
# Static files caching
location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot|webp|avif)$ {
expires 1y;
add_header Cache-Control "public, immutable";
add_header X-Content-Type-Options "nosniff";
access_log off;
log_not_found off;
}
# Health check endpoint (internal only)
location /health {
access_log off;
proxy_pass http://explorer_api/health;
proxy_set_header Host $host;
}
# Block access to sensitive files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
location ~ \.(env|git|gitignore|md|sh)$ {
deny all;
access_log off;
log_not_found off;
}
}

deployment/scripts/build-all.sh Executable file
View File

@@ -0,0 +1,47 @@
#!/bin/bash
# Build all applications
set -e
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$( cd "$SCRIPT_DIR/../.." && pwd )"
echo "Building all applications..."
cd "$PROJECT_ROOT"
# Build backend services (binaries are installed to /usr/local/bin, so run as root or via sudo)
echo "Building backend services..."
cd backend
# Indexer
echo " Building indexer..."
go build -o /usr/local/bin/explorer-indexer ./indexer/main.go
# API
echo " Building API..."
go build -o /usr/local/bin/explorer-api ./api/rest/main.go
# Gateway
echo " Building gateway..."
go build -o /usr/local/bin/explorer-gateway ./api/gateway/main.go
# Search
echo " Building search service..."
go build -o /usr/local/bin/explorer-search ./api/search/main.go
# Build frontend
echo "Building frontend..."
cd ../frontend
npm ci
npm run build
echo ""
echo "All applications built successfully!"
echo ""
echo "Binaries installed to:"
echo " /usr/local/bin/explorer-indexer"
echo " /usr/local/bin/explorer-api"
echo " /usr/local/bin/explorer-gateway"
echo " /usr/local/bin/explorer-search"

deployment/scripts/deploy-lxc.sh Executable file
View File

@@ -0,0 +1,170 @@
#!/bin/bash
# LXC Deployment Script for ChainID 138 Explorer Platform
# This script automates the deployment process
set -e
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$( cd "$SCRIPT_DIR/../.." && pwd )"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Configuration
CONTAINER_ID="${CONTAINER_ID:-100}"
CONTAINER_HOSTNAME="${CONTAINER_HOSTNAME:-explorer-prod}"
DOMAIN="${DOMAIN:-explorer.d-bis.org}"
SKIP_CONTAINER_CREATION="${SKIP_CONTAINER_CREATION:-false}"
echo -e "${GREEN}=== ChainID 138 Explorer Platform - LXC Deployment ===${NC}"
echo ""
# Check if running on Proxmox host
if ! command -v pct &> /dev/null; then
echo -e "${RED}Error: This script must be run on a Proxmox host${NC}"
exit 1
fi
# Phase 1: Create LXC Container
if [ "$SKIP_CONTAINER_CREATION" != "true" ]; then
echo -e "${YELLOW}Phase 1: Creating LXC Container...${NC}"
# Check if container already exists
if pct list | grep -q "^$CONTAINER_ID "; then
echo -e "${YELLOW}Container $CONTAINER_ID already exists. Skipping creation.${NC}"
echo "Set SKIP_CONTAINER_CREATION=true to skip this check."
read -p "Do you want to continue with existing container? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
else
echo "Creating container $CONTAINER_ID..."
pct create $CONTAINER_ID \
local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
--hostname $CONTAINER_HOSTNAME \
--memory 16384 \
--cores 4 \
--swap 4096 \
--storage local-lvm \
--rootfs local-lvm:100 \
--net0 name=eth0,bridge=vmbr0,ip=dhcp \
--unprivileged 0 \
--features nesting=1 \
--start 1
echo "Waiting for container to start..."
sleep 5
fi
fi
# Phase 2: Initial Container Setup
echo -e "${YELLOW}Phase 2: Initial Container Setup...${NC}"
cat << 'INITSCRIPT' | pct exec $CONTAINER_ID bash
set -e
# Update system
apt update && apt upgrade -y
# Install essential packages
apt install -y curl wget git vim net-tools ufw fail2ban \
unattended-upgrades apt-transport-https ca-certificates \
gnupg lsb-release software-properties-common
# Set timezone
timedatectl set-timezone UTC
# Create deployment user
if ! id "explorer" &>/dev/null; then
adduser --disabled-password --gecos "" explorer
usermod -aG sudo explorer
echo "explorer ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/explorer
chmod 440 /etc/sudoers.d/explorer
fi
INITSCRIPT
echo -e "${GREEN}✓ Container setup complete${NC}"
# Phase 3: Install Dependencies
echo -e "${YELLOW}Phase 3: Installing Dependencies...${NC}"
cat << 'DEPSCRIPT' | pct exec $CONTAINER_ID bash
set -e
# Install Go
if ! command -v go &> /dev/null; then
cd /tmp
wget -q https://go.dev/dl/go1.21.6.linux-amd64.tar.gz
rm -rf /usr/local/go
tar -C /usr/local -xzf go1.21.6.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
export PATH=$PATH:/usr/local/go/bin
fi
# Install Node.js
if ! command -v node &> /dev/null; then
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt install -y nodejs
fi
# Install Docker
if ! command -v docker &> /dev/null; then
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
systemctl enable docker
systemctl start docker
usermod -aG docker explorer
fi
# Install PostgreSQL (apt-key is deprecated on Ubuntu 22.04, so use a signed-by keyring)
if ! command -v psql &> /dev/null; then
wget -qO- https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor -o /usr/share/keyrings/pgdg-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/pgdg-archive-keyring.gpg] http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list
apt update
apt install -y postgresql-16 postgresql-contrib-16
fi
# Install Nginx
if ! command -v nginx &> /dev/null; then
apt install -y nginx
fi
# Install cloudflared
if ! command -v cloudflared &> /dev/null; then
cd /tmp
wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
dpkg -i cloudflared-linux-amd64.deb || apt install -f -y
fi
DEPSCRIPT
echo -e "${GREEN}✓ Dependencies installed${NC}"
# Phase 4: Deploy Application
echo -e "${YELLOW}Phase 4: Deploying Application...${NC}"
# Copy project files to container (assuming git clone on host)
echo "Note: You'll need to clone the repository inside the container or copy files"
echo "For now, the script will prepare the structure"
cat << 'APPSCRIPT' | pct exec $CONTAINER_ID bash -s
set -e
mkdir -p /home/explorer/explorer-monorepo
chown explorer:explorer /home/explorer/explorer-monorepo
APPSCRIPT
echo -e "${YELLOW}Please complete the deployment manually:${NC}"
echo "1. Enter the container and clone the repository: pct enter $CONTAINER_ID"
echo "2. Copy .env file and configure"
echo "3. Run migrations"
echo "4. Build applications"
echo "5. Configure services"
echo ""
echo "See DEPLOYMENT_GUIDE.md for complete instructions"
echo -e "${GREEN}=== Deployment Script Complete ===${NC}"

View File

@@ -0,0 +1,81 @@
#!/bin/bash
# Full automated deployment script
# This script automates most of the deployment process
set -e
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
DEPLOYMENT_DIR="$( cd "$SCRIPT_DIR/.." && pwd )"
PROJECT_ROOT="$( cd "$DEPLOYMENT_DIR/.." && pwd )"
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m'
echo -e "${GREEN}=== Full Deployment Script ===${NC}"
echo ""
# Check if running as root
if [ "$EUID" -ne 0 ]; then
echo -e "${RED}Please run as root${NC}"
exit 1
fi
# Phase 1: Install dependencies
echo -e "${YELLOW}Phase 1: Installing dependencies...${NC}"
"$PROJECT_ROOT/scripts/setup.sh" || {
echo "Installing dependencies manually..."
apt update
apt install -y curl wget git vim net-tools ufw fail2ban \
unattended-upgrades apt-transport-https ca-certificates \
gnupg lsb-release software-properties-common
}
# Phase 2: Install Go, Node.js, Docker
echo -e "${YELLOW}Phase 2: Installing development tools...${NC}"
# These would be installed by setup.sh or manually
echo "Please ensure Go, Node.js, and Docker are installed"
echo "Run: ./scripts/check-requirements.sh"
# Phase 3: Setup Nginx
echo -e "${YELLOW}Phase 3: Setting up Nginx...${NC}"
"$SCRIPT_DIR/setup-nginx.sh"
# Phase 4: Install services
echo -e "${YELLOW}Phase 4: Installing systemd services...${NC}"
"$SCRIPT_DIR/install-services.sh"
# Phase 5: Setup firewall
echo -e "${YELLOW}Phase 5: Setting up firewall...${NC}"
"$SCRIPT_DIR/setup-firewall.sh"
# Phase 6: Setup backups
echo -e "${YELLOW}Phase 6: Setting up backups...${NC}"
"$SCRIPT_DIR/setup-backup.sh"
# Phase 7: Setup health checks
echo -e "${YELLOW}Phase 7: Setting up health checks...${NC}"
"$SCRIPT_DIR/setup-health-check.sh"
# Phase 8: Cloudflare Tunnel (optional, interactive)
echo -e "${YELLOW}Phase 8: Cloudflare Tunnel setup (optional)...${NC}"
read -p "Do you want to set up Cloudflare Tunnel now? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
"$SCRIPT_DIR/setup-cloudflare-tunnel.sh"
fi
echo ""
echo -e "${GREEN}=== Deployment Complete ===${NC}"
echo ""
echo "Next steps:"
echo "1. Configure .env file: /home/explorer/explorer-monorepo/.env"
echo "2. Run database migrations"
echo "3. Build applications"
echo "4. Start services: systemctl start explorer-indexer explorer-api explorer-frontend"
echo "5. Configure Cloudflare DNS and SSL"
echo ""
echo "See DEPLOYMENT_GUIDE.md for detailed instructions"
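The `.env` referenced in step 1 is not shown in this diff; a minimal sketch of the variables the services above appear to expect. Every variable name here is an assumption to be checked against the backend and frontend code:

```shell
# /home/explorer/explorer-monorepo/.env — hypothetical variable names
DATABASE_URL=postgres://explorer:CHANGE_ME@localhost:5432/explorer
REDIS_URL=redis://localhost:6379
ELASTICSEARCH_URL=http://localhost:9200
API_PORT=8080          # matches the health checks against localhost:8080
FRONTEND_PORT=3000     # matches the frontend check against localhost:3000
```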


@@ -0,0 +1,27 @@
#!/bin/bash
# Install systemd service files
set -e
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
DEPLOYMENT_DIR="$( cd "$SCRIPT_DIR/.." && pwd )"
echo "Installing systemd service files..."
# Copy service files
cp "$DEPLOYMENT_DIR/systemd/explorer-indexer.service" /etc/systemd/system/
cp "$DEPLOYMENT_DIR/systemd/explorer-api.service" /etc/systemd/system/
cp "$DEPLOYMENT_DIR/systemd/explorer-frontend.service" /etc/systemd/system/
cp "$DEPLOYMENT_DIR/systemd/cloudflared.service" /etc/systemd/system/
# Set permissions
chmod 644 /etc/systemd/system/explorer-*.service
chmod 644 /etc/systemd/system/cloudflared.service
# Reload systemd
systemctl daemon-reload
echo "Service files installed. Enable with:"
echo " systemctl enable explorer-indexer explorer-api explorer-frontend"
echo " systemctl start explorer-indexer explorer-api explorer-frontend"


@@ -0,0 +1,49 @@
#!/bin/bash
# Setup backup script and cron job
set -e
echo "Setting up backup system..."
# Create backup directory
mkdir -p /backups/explorer
chown explorer:explorer /backups/explorer
# Create backup script
cat > /usr/local/bin/explorer-backup.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/backups/explorer"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
# Backup database (run as the postgres superuser so peer auth works from root's cron)
echo "Backing up database..."
sudo -u postgres pg_dump explorer | gzip > "$BACKUP_DIR/db_$DATE.sql.gz"
# Backup configuration
echo "Backing up configuration..."
tar -czf "$BACKUP_DIR/config_$DATE.tar.gz" \
    /home/explorer/explorer-monorepo/.env \
    /etc/nginx/sites-available/explorer \
    /etc/systemd/system/explorer-*.service \
    /etc/cloudflared/config.yml 2>/dev/null || true
# Clean up old backups (keep 30 days)
echo "Cleaning up old backups..."
find "$BACKUP_DIR" -type f -mtime +30 -delete
echo "Backup completed: $DATE"
EOF
chmod +x /usr/local/bin/explorer-backup.sh
chown explorer:explorer /usr/local/bin/explorer-backup.sh
# Add to crontab (daily at 2 AM)
(crontab -l 2>/dev/null | grep -v explorer-backup.sh; echo "0 2 * * * /usr/local/bin/explorer-backup.sh >> /var/log/explorer-backup.log 2>&1") | crontab -
echo "Backup system configured!"
echo "Backups will run daily at 2 AM"
echo "Backup location: /backups/explorer"
echo ""
echo "To run backup manually: /usr/local/bin/explorer-backup.sh"
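For completeness, a restore sketch (not part of the repo; paths match the defaults above, and it picks the newest dump by modification time):

```shell
#!/bin/bash
# Restore the most recent database dump produced by explorer-backup.sh.
BACKUP_DIR="/backups/explorer"
LATEST=$(ls -t "$BACKUP_DIR"/db_*.sql.gz 2>/dev/null | head -n 1)
if [ -n "$LATEST" ]; then
    echo "Restoring $LATEST"
    # This restores into the live database; point it at a scratch DB for a dry run.
    gunzip -c "$LATEST" | sudo -u postgres psql explorer
else
    echo "No backups found in $BACKUP_DIR"
fi
```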


@@ -0,0 +1,68 @@
#!/bin/bash
# Setup Cloudflare Tunnel
set -e
echo "Setting up Cloudflare Tunnel..."
# Check if cloudflared is installed
if ! command -v cloudflared &> /dev/null; then
    echo "Installing cloudflared..."
    cd /tmp
    wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
    dpkg -i cloudflared-linux-amd64.deb || apt install -f -y
fi
# Authenticate (interactive)
echo "Please authenticate with Cloudflare..."
cloudflared tunnel login
# Create tunnel
echo "Creating tunnel..."
TUNNEL_NAME="explorer-tunnel"
cloudflared tunnel create "$TUNNEL_NAME" || echo "Tunnel may already exist"
# Get tunnel ID
TUNNEL_ID=$(cloudflared tunnel list | grep "$TUNNEL_NAME" | awk '{print $1}')
if [ -z "$TUNNEL_ID" ]; then
    echo "ERROR: Could not find tunnel ID"
    exit 1
fi
echo "Tunnel ID: $TUNNEL_ID"
# Create config directory
mkdir -p /etc/cloudflared
# Create config file
cat > /etc/cloudflared/config.yml << EOF
tunnel: $TUNNEL_ID
credentials-file: /etc/cloudflared/$TUNNEL_ID.json
ingress:
- hostname: explorer.d-bis.org
service: http://localhost:80
- hostname: www.explorer.d-bis.org
service: http://localhost:80
- service: http_status:404
EOF
# Validate config
cloudflared tunnel --config /etc/cloudflared/config.yml ingress validate
# Install as service
cloudflared service install
echo "Cloudflare Tunnel configured!"
echo "Tunnel ID: $TUNNEL_ID"
echo "Config: /etc/cloudflared/config.yml"
echo ""
echo "Next steps:"
echo "1. Configure DNS routes in Cloudflare dashboard"
echo "2. Start service: systemctl start cloudflared"
echo "3. Enable on boot: systemctl enable cloudflared"
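The DNS routes in step 1 can also be created from this shell instead of the dashboard; `tunnel route dns` is the standard cloudflared subcommand for it. The snippet is guarded so it is a no-op where cloudflared is absent:

```shell
# Create CNAME records for the hostnames declared in config.yml.
TUNNEL_NAME="explorer-tunnel"
if command -v cloudflared >/dev/null 2>&1; then
    cloudflared tunnel route dns "$TUNNEL_NAME" explorer.d-bis.org
    cloudflared tunnel route dns "$TUNNEL_NAME" www.explorer.d-bis.org
    # Show connector status once the service is started.
    cloudflared tunnel info "$TUNNEL_NAME"
else
    echo "cloudflared not installed; skipping DNS routing"
fi
```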


@@ -0,0 +1,51 @@
#!/bin/bash
# Setup Fail2ban for Nginx
set -e
echo "Setting up Fail2ban..."
# Install fail2ban if not installed
if ! command -v fail2ban-server &> /dev/null; then
    apt update
    apt install -y fail2ban
fi
# Create filter for Nginx
cat > /etc/fail2ban/filter.d/nginx-limit-req.conf << 'EOF'
[Definition]
failregex = ^.*limiting requests, excess:.*by zone.*client: <HOST>.*$
ignoreregex =
EOF
# Create jail configuration
cat > /etc/fail2ban/jail.d/explorer.conf << 'EOF'
[nginx-limit-req]
enabled = true
port = http,https
logpath = /var/log/nginx/explorer-error.log
maxretry = 10
findtime = 600
bantime = 3600
[nginx-botsearch]
enabled = true
port = http,https
logpath = /var/log/nginx/explorer-access.log
maxretry = 2
findtime = 600
bantime = 86400
EOF
# Restart fail2ban
systemctl restart fail2ban
# Check status
fail2ban-client status
echo "Fail2ban configured!"
echo "Jails: nginx-limit-req, nginx-botsearch"
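The failregex above can be sanity-checked before restarting fail2ban. fail2ban expands `<HOST>` to an address-matching group; the sketch below approximates that with `grep -E` against a synthetic sample of an nginx limit_req error line (the exact line format is an assumption):

```shell
# A representative nginx limit_req error line (synthetic sample).
SAMPLE='2024/01/01 12:00:00 [error] 11#11: *1 limiting requests, excess: 10.500 by zone "api", client: 203.0.113.7, server: explorer.d-bis.org'
# <HOST> approximated as an IPv4 group for this quick check.
PATTERN='limiting requests, excess:.*by zone.*client: ([0-9]{1,3}\.){3}[0-9]{1,3}'
if echo "$SAMPLE" | grep -Eq "$PATTERN"; then
    echo "failregex matches the sample line"
else
    echo "no match - revisit the filter"
fi
```

fail2ban itself ships `fail2ban-regex`, which tests a filter file against a real log: `fail2ban-regex /var/log/nginx/explorer-error.log /etc/fail2ban/filter.d/nginx-limit-req.conf`.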


@@ -0,0 +1,47 @@
#!/bin/bash
# Setup firewall rules
set -e
echo "Configuring firewall (UFW)..."
# Allow SSH first so enabling the firewall cannot drop this session
ufw allow 22/tcp comment 'SSH'
# Enable UFW
ufw --force enable
# Web traffic (skip if you only use Cloudflare Tunnel, which needs no inbound ports)
read -p "Expose HTTP/HTTPS directly (no tunnel)? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    read -p "Restrict web traffic to Cloudflare IPs only? (y/N): " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        echo "Adding Cloudflare IP ranges..."
        for range in \
            173.245.48.0/20 103.21.244.0/22 103.22.200.0/22 103.31.4.0/22 \
            141.101.64.0/18 108.162.192.0/18 190.93.240.0/20 188.114.96.0/20 \
            197.234.240.0/22 198.41.128.0/17 162.158.0.0/15 104.16.0.0/13 \
            104.24.0.0/14 172.64.0.0/13 131.0.72.0/22; do
            ufw allow from "$range" to any port 80,443 proto tcp comment 'Cloudflare'
        done
    else
        ufw allow 80/tcp comment 'HTTP'
        ufw allow 443/tcp comment 'HTTPS'
    fi
fi
# Show status
ufw status verbose
echo ""
echo "Firewall configured!"
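The hardcoded range list above is a snapshot and drifts over time; Cloudflare publishes the current IPv4 ranges at a stable URL, so a refresh can be scripted. Network access is assumed, and the ufw call is skipped where ufw is absent:

```shell
# Pull the published ranges and scope them to web ports only.
for range in $(curl -fsS https://www.cloudflare.com/ips-v4 2>/dev/null); do
    # Ignore anything that does not look like an IPv4 CIDR.
    echo "$range" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$' || continue
    if command -v ufw >/dev/null 2>&1; then
        ufw allow from "$range" to any port 80,443 proto tcp comment 'Cloudflare'
    fi
done
```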


@@ -0,0 +1,44 @@
#!/bin/bash
# Setup health check script and cron job
set -e
echo "Setting up health check system..."
# Create health check script
cat > /usr/local/bin/explorer-health-check.sh << 'EOF'
#!/bin/bash
API_URL="http://localhost:8080/health"
LOG_FILE="/var/log/explorer-health-check.log"
# Check API health
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$API_URL" 2>/dev/null || echo "000")
if [ "$STATUS" != "200" ]; then
    echo "$(date): Health check failed - Status: $STATUS" >> "$LOG_FILE"
    # Restart API service
    systemctl restart explorer-api
    # Wait a bit and check again
    sleep 10
    STATUS2=$(curl -s -o /dev/null -w "%{http_code}" "$API_URL" 2>/dev/null || echo "000")
    if [ "$STATUS2" != "200" ]; then
        echo "$(date): API still unhealthy after restart - Status: $STATUS2" >> "$LOG_FILE"
        # Send alert (configure email/Slack/etc. here)
    else
        echo "$(date): API recovered after restart" >> "$LOG_FILE"
    fi
fi
EOF
chmod +x /usr/local/bin/explorer-health-check.sh
# Add to crontab (every 5 minutes)
(crontab -l 2>/dev/null | grep -v explorer-health-check.sh; echo "*/5 * * * * /usr/local/bin/explorer-health-check.sh") | crontab -
echo "Health check system configured!"
echo "Health checks will run every 5 minutes"
echo "Log file: /var/log/explorer-health-check.log"


@@ -0,0 +1,35 @@
#!/bin/bash
# Setup Nginx configuration
set -e
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
DEPLOYMENT_DIR="$( cd "$SCRIPT_DIR/.." && pwd )"
echo "Setting up Nginx configuration..."
# Install Nginx if not installed
if ! command -v nginx &> /dev/null; then
    apt update
    apt install -y nginx
fi
# Copy configuration
cp "$DEPLOYMENT_DIR/nginx/explorer.conf" /etc/nginx/sites-available/explorer
# Enable site
ln -sf /etc/nginx/sites-available/explorer /etc/nginx/sites-enabled/
# Remove default site
rm -f /etc/nginx/sites-enabled/default
# Test configuration
if nginx -t; then
    echo "Nginx configuration is valid"
    systemctl reload nginx
    echo "Nginx reloaded"
else
    echo "ERROR: Nginx configuration test failed"
    exit 1
fi
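The `explorer.conf` being copied lives elsewhere in this commit; for orientation, a minimal sketch of what it plausibly contains, inferred from the ports and log paths used by the other scripts. Every directive here is an assumption, not the shipped file:

```nginx
server {
    listen 80;
    server_name explorer.d-bis.org www.explorer.d-bis.org;

    access_log /var/log/nginx/explorer-access.log;
    error_log  /var/log/nginx/explorer-error.log;

    # /api/health must reach the backend's /health (see verify-deployment.sh)
    location /api/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Everything else goes to the frontend on :3000
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```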


@@ -0,0 +1,103 @@
#!/bin/bash
# Verify deployment is working correctly
set -e
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m'
echo -e "${GREEN}=== Deployment Verification ===${NC}"
echo ""
ERRORS=0
# Check services
echo "Checking services..."
for service in explorer-indexer explorer-api explorer-frontend nginx postgresql; do
    if systemctl is-active --quiet "$service"; then
        echo -e "${GREEN}✓${NC} $service is running"
    else
        echo -e "${RED}✗${NC} $service is not running"
        ERRORS=$((ERRORS + 1))
    fi
done
# Check API
echo ""
echo "Checking API..."
if curl -s http://localhost:8080/health | grep -q "healthy"; then
    echo -e "${GREEN}✓${NC} API is healthy"
else
    echo -e "${RED}✗${NC} API health check failed"
    ERRORS=$((ERRORS + 1))
fi
# Check Frontend
echo ""
echo "Checking Frontend..."
if curl -s http://localhost:3000 | grep -q "Explorer"; then
    echo -e "${GREEN}✓${NC} Frontend is responding"
else
    echo -e "${RED}✗${NC} Frontend check failed"
    ERRORS=$((ERRORS + 1))
fi
# Check Nginx
echo ""
echo "Checking Nginx..."
if curl -s http://localhost/api/health | grep -q "healthy"; then
    echo -e "${GREEN}✓${NC} Nginx proxy is working"
else
    echo -e "${RED}✗${NC} Nginx proxy check failed"
    ERRORS=$((ERRORS + 1))
fi
# Check Database
echo ""
echo "Checking Database..."
if sudo -u postgres psql -d explorer -c "SELECT 1;" > /dev/null 2>&1; then
    echo -e "${GREEN}✓${NC} Database is accessible"
else
    echo -e "${RED}✗${NC} Database check failed"
    ERRORS=$((ERRORS + 1))
fi
# Check Elasticsearch
echo ""
echo "Checking Elasticsearch..."
if curl -s http://localhost:9200 | grep -q "cluster_name"; then
    echo -e "${GREEN}✓${NC} Elasticsearch is running"
else
    echo -e "${YELLOW}⚠${NC} Elasticsearch check failed (may not be critical)"
fi
# Check Redis
echo ""
echo "Checking Redis..."
if redis-cli ping 2>/dev/null | grep -q "PONG"; then
    echo -e "${GREEN}✓${NC} Redis is running"
else
    echo -e "${YELLOW}⚠${NC} Redis check failed (may not be critical)"
fi
# Check Cloudflare Tunnel (if installed)
echo ""
echo "Checking Cloudflare Tunnel..."
if systemctl is-active --quiet cloudflared 2>/dev/null; then
    echo -e "${GREEN}✓${NC} Cloudflare Tunnel is running"
else
    echo -e "${YELLOW}⚠${NC} Cloudflare Tunnel not running (optional)"
fi
# Summary
echo ""
if [ $ERRORS -eq 0 ]; then
    echo -e "${GREEN}✓ All critical checks passed!${NC}"
    exit 0
else
    echo -e "${RED}✗ $ERRORS critical check(s) failed${NC}"
    exit 1
fi


@@ -0,0 +1,17 @@
[Unit]
Description=Cloudflare Tunnel Service
After=network.target
[Service]
Type=simple
User=root
ExecStart=/usr/bin/cloudflared tunnel --config /etc/cloudflared/config.yml run
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=cloudflared
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,33 @@
[Unit]
Description=ChainID 138 Explorer API Service
Documentation=https://github.com/explorer/backend
After=network.target postgresql.service
Requires=postgresql.service
[Service]
Type=simple
User=explorer
Group=explorer
WorkingDirectory=/home/explorer/explorer-monorepo/backend
EnvironmentFile=/home/explorer/explorer-monorepo/.env
ExecStart=/usr/local/bin/explorer-api
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
SyslogIdentifier=explorer-api
# Security settings
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=read-only
ReadWritePaths=/home/explorer/explorer-monorepo
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,33 @@
[Unit]
Description=ChainID 138 Explorer Frontend Service
Documentation=https://github.com/explorer/frontend
After=network.target explorer-api.service
Requires=explorer-api.service
[Service]
Type=simple
User=explorer
Group=explorer
WorkingDirectory=/home/explorer/explorer-monorepo/frontend
EnvironmentFile=/home/explorer/explorer-monorepo/.env
ExecStart=/usr/bin/npm start
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
SyslogIdentifier=explorer-frontend
# Security settings
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=read-only
ReadWritePaths=/home/explorer/explorer-monorepo/frontend
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,33 @@
[Unit]
Description=ChainID 138 Explorer Indexer Service
Documentation=https://github.com/explorer/backend
After=network.target postgresql.service
Requires=postgresql.service
[Service]
Type=simple
User=explorer
Group=explorer
WorkingDirectory=/home/explorer/explorer-monorepo/backend
EnvironmentFile=/home/explorer/explorer-monorepo/.env
ExecStart=/usr/local/bin/explorer-indexer
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
SyslogIdentifier=explorer-indexer
# Security settings
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=read-only
ReadWritePaths=/home/explorer/explorer-monorepo
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target