# Real-time Event System Specification

## Overview

This document specifies the event bus architecture for real-time event distribution across the explorer platform services.

## Architecture

**Technology:** Kafka or RabbitMQ

**Recommendation:** Kafka for high throughput; RabbitMQ for a simpler operational setup

### Event Bus Architecture

```mermaid
flowchart LR
    Producers[Event Producers]
    Bus[Event Bus<br/>Kafka/RabbitMQ]
    Consumers[Event Consumers]
    Producers --> Bus
    Bus --> Consumers
```
## Event Types

### Block Events

**Event:** `block.new`

**Payload:**

```json
{
  "chain_id": 138,
  "block_number": 12345,
  "hash": "0x...",
  "timestamp": "2024-01-01T00:00:00Z",
  "transaction_count": 100
}
```

**Producers:** Indexer

**Consumers:** API Gateway (cache invalidation), WebSocket service, Analytics
### Transaction Events

**Event:** `transaction.confirmed`

**Payload:**

```json
{
  "chain_id": 138,
  "hash": "0x...",
  "block_number": 12345,
  "from_address": "0x...",
  "to_address": "0x...",
  "status": "success"
}
```

**Producers:** Indexer, Mempool service

**Consumers:** WebSocket service, Search indexer, Analytics
### Mempool Events

**Event:** `transaction.pending`

**Payload:** Transaction data

**Producers:** Mempool service

**Consumers:** WebSocket service, Fee oracle
### Token Transfer Events

**Event:** `token.transfer`

**Payload:**

```json
{
  "chain_id": 138,
  "token_address": "0x...",
  "from_address": "0x...",
  "to_address": "0x...",
  "amount": "1000000000000000000",
  "transaction_hash": "0x..."
}
```

**Producers:** Indexer

**Consumers:** Token balance updater, Analytics, Notifications
### Contract Verification Events

**Event:** `contract.verified`

**Payload:**

```json
{
  "chain_id": 138,
  "address": "0x...",
  "verification_status": "verified",
  "verified_at": "2024-01-01T00:00:00Z"
}
```

**Producers:** Verification service

**Consumers:** Search indexer, Cache invalidation
## Event Schema

### Standard Event Format

Every event, regardless of type, is wrapped in a common envelope:

```json
{
  "event_type": "block.new",
  "event_id": "uuid",
  "timestamp": "2024-01-01T00:00:00Z",
  "chain_id": 138,
  "payload": { ... },
  "metadata": {
    "producer": "indexer",
    "version": "1.0"
  }
}
```
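The envelope above can be built and checked with a small helper. This is a sketch, not part of the spec: `make_event` and `validate_event` are hypothetical names, and the assumption is that `event_id` is a random UUID and `timestamp` is UTC ISO 8601.

```python
import uuid
from datetime import datetime, timezone

# Top-level fields required by the standard event format
REQUIRED_FIELDS = {"event_type", "event_id", "timestamp",
                   "chain_id", "payload", "metadata"}

def make_event(event_type: str, chain_id: int, payload: dict,
               producer: str, version: str = "1.0") -> dict:
    """Wrap a payload in the standard event envelope."""
    return {
        "event_type": event_type,
        "event_id": str(uuid.uuid4()),                    # unique per event, used for dedup
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "chain_id": chain_id,
        "payload": payload,
        "metadata": {"producer": producer, "version": version},
    }

def validate_event(event: dict) -> bool:
    """True if all required envelope fields are present."""
    return REQUIRED_FIELDS <= event.keys()
```

Consumers can reject malformed events early by calling `validate_event` before dispatching on `event_type`.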
## Topic/Queue Structure

### Kafka Topics

**Naming:** `{event_type}.{chain_id}` (e.g., `block.new.138`)

**Topics:**

- `block.new.{chain_id}`
- `transaction.confirmed.{chain_id}`
- `transaction.pending.{chain_id}`
- `token.transfer.{chain_id}`
- `contract.verified.{chain_id}`

**Partitioning:** By `chain_id` (all events for a chain land in the same partition)
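The naming scheme is mechanical enough to centralize in one helper so producers and consumers cannot drift apart. A minimal sketch; `topic_name` and the whitelist are illustrative, not mandated by the spec:

```python
# The five event types defined in this spec
KNOWN_EVENT_TYPES = {
    "block.new", "transaction.confirmed", "transaction.pending",
    "token.transfer", "contract.verified",
}

def topic_name(event_type: str, chain_id: int) -> str:
    """Build the '{event_type}.{chain_id}' topic name, rejecting unknown types."""
    if event_type not in KNOWN_EVENT_TYPES:
        raise ValueError(f"unknown event type: {event_type}")
    return f"{event_type}.{chain_id}"
```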
### RabbitMQ Exchanges

**Exchange Type:** Topic exchange

**Exchange Name:** `explorer.events`

**Routing Keys:** `{event_type}.{chain_id}` (e.g., `block.new.138`)
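With a topic exchange, consumers bind with patterns: `*` matches exactly one dot-separated word, `#` matches zero or more. The sketch below reimplements that matching rule in plain Python to show which routing keys a binding would capture (for documentation only; in production the broker does this):

```python
def binding_matches(binding: str, routing_key: str) -> bool:
    """AMQP topic-exchange semantics: '*' = exactly one word, '#' = zero or more words."""
    def match(pat, key):
        if not pat:
            return not key
        head, rest = pat[0], pat[1:]
        if head == "#":
            # '#' may consume zero or more words
            return any(match(rest, key[i:]) for i in range(len(key) + 1))
        if key and (head == "*" or head == key[0]):
            return match(rest, key[1:])
        return False
    return match(binding.split("."), routing_key.split("."))
```

For example, a WebSocket service interested in every new block on any chain would bind `block.new.*`, while an analytics consumer might bind `#` to receive everything.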
## Consumer Groups and Partitioning

### Consumer Groups

**Purpose:** Enable parallel processing and load balancing

**Groups:**

- `websocket-service`: WebSocket real-time updates
- `search-indexer`: Search index updates
- `analytics`: Analytics aggregation
- `cache-invalidation`: Cache invalidation
### Partitioning Strategy

**Kafka:**

- Partition by `chain_id`
- Same `chain_id` → same partition (maintains ordering)

**RabbitMQ:**

- Use a consistent-hash exchange for partitioning
- Partition by `chain_id`
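Either way, the essential property is a stable mapping from `chain_id` to partition. A sketch of such a mapping (the hash choice is an assumption; Kafka's default key partitioner uses murmur2, but any deterministic hash preserves the ordering guarantee):

```python
import hashlib

def partition_for(chain_id: int, num_partitions: int) -> int:
    """Stable assignment: the same chain_id always lands in the same partition."""
    digest = hashlib.sha256(str(chain_id).encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions
```

Because the function is deterministic, all events for chain 138 are consumed in the order they were produced, at the cost of one chain never spanning more than one partition.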
## Delivery Guarantees

### At-Least-Once Delivery

**Default:** At-least-once delivery (events may be delivered more than once)

**Handling:**

- Idempotent consumers
- Deduplication by `event_id`
- Track processed events
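The dedup-by-`event_id` pattern can be sketched as a thin wrapper around any handler. `DedupingConsumer` is a hypothetical name, and the in-memory set stands in for a real shared store (e.g. Redis with a TTL) that production would need:

```python
class DedupingConsumer:
    """At-least-once consumer that skips events it has already processed."""

    def __init__(self, handler):
        self.handler = handler
        self.seen = set()  # in production: a TTL'd shared store, not process memory

    def on_event(self, event: dict) -> bool:
        """Return True if the event was processed, False if it was a duplicate."""
        eid = event["event_id"]
        if eid in self.seen:
            return False            # redelivery: drop it
        self.handler(event)
        self.seen.add(eid)          # mark processed only after the handler succeeds
        return True
```

Marking the event as seen *after* the handler runs means a crash mid-handler causes a redelivery and retry, which is exactly the at-least-once contract.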
### Exactly-Once Delivery

**Use Case:** Critical financial events (banking layer)

**Implementation:**

- Kafka exactly-once semantics (if using Kafka)
- Idempotent consumers with deduplication
- Transactional processing
## Backpressure Handling

### Strategy

**Flow Control:**

- Consumer lag monitoring
- Slow-consumer detection
- Automatic scaling

**Handling Slow Consumers:**

- Scale consumers horizontally
- Increase consumer resources
- Alert on persistent lag
### Lag Monitoring

**Metrics:**

- Consumer lag per topic
- Processing rate
- Consumer health

**Alerts:** Lag > 1000 events or > 5 minutes
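The alert rule is simple enough to encode directly. A sketch with the spec's thresholds; `LagSample` and `should_alert` are illustrative names:

```python
from dataclasses import dataclass

@dataclass
class LagSample:
    events_behind: int      # consumer offset lag in events
    seconds_behind: float   # age of the oldest unconsumed event

# Thresholds from the spec: > 1000 events or > 5 minutes
LAG_EVENTS_THRESHOLD = 1000
LAG_SECONDS_THRESHOLD = 5 * 60

def should_alert(sample: LagSample) -> bool:
    return (sample.events_behind > LAG_EVENTS_THRESHOLD
            or sample.seconds_behind > LAG_SECONDS_THRESHOLD)
```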
## Event Ordering

### Ordering Guarantees

**Within a partition:** Events are processed in order

**Across partitions:** No ordering guarantee

**Use Cases Requiring Ordering:**

- Block events (must be processed in order)
- Transaction events per address (maintain per-address order)

**Implementation:** Route related events to the same partition
## Error Handling

### Dead Letter Queue

**Purpose:** Store failed events for retry or investigation

**Strategy:**

- Retry failed events with exponential backoff
- Move to the DLQ after max retries
- Alert on DLQ events
### Retry Strategy

**Configuration:**

- Max retries: 3
- Backoff: Exponential (1s, 2s, 4s)
- Dead-letter after max retries
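The retry-then-dead-letter flow above can be sketched as follows. The function names are illustrative, and `sleep` is injectable so the schedule is testable without real delays:

```python
def backoff_schedule(max_retries: int = 3, base: float = 1.0) -> list:
    """Delay before each retry: 1s, 2s, 4s with the spec's defaults."""
    return [base * (2 ** i) for i in range(max_retries)]

def consume_with_retry(event, handler, dlq, sleep=lambda s: None, max_retries=3):
    """Try the handler once plus max_retries times, then dead-letter the event."""
    for delay in [0.0] + backoff_schedule(max_retries):
        sleep(delay)                # 0s before the first attempt, then 1s, 2s, 4s
        try:
            handler(event)
            return True
        except Exception:
            continue
    dlq.append(event)               # retries exhausted: park for investigation
    return False
```

A monitoring hook on the DLQ (here just a list) would raise the "alert on DLQ events" signal from the strategy above.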
## Monitoring

### Metrics

- Event production rate
- Event consumption rate
- Consumer lag
- Error rate
- Dead letter queue size

### Dashboards

- Event throughput per topic
- Consumer lag per consumer group
- Error rates
- System health
## References

- Mempool Service: see `mempool-service.md`
- WebSocket API: see `../api/websocket-api.md`