# NowYouSeeMe: Real-Time 6DOF Holodeck Environment

A robust, real-time, photo-realistic 6DOF "holodeck" environment that uses a commodity laptop camera and Wi-Fi Channel State Information (CSI) as its primary sensors, supplemented by GPU-accelerated neural enhancements.
## 🎯 Project Objectives
- End-to-end spatial mapping and dynamic object tracking at interactive frame rates (<20 ms latency)
- RF-vision fusion to cover low-visibility and occluded areas
- Extensible codebase split between rapid Python prototyping and optimized C++/CUDA modules
## 🏗️ System Architecture

```
┌─────────────────┐   ┌──────────────────┐   ┌─────────────────┐
│ Camera Capture  │   │ Wi-Fi CSI Capture│   │   Calibration   │
│ (OpenCV/GStream)│   │ (Intel 5300/Nex) │   │      Store      │
└────────┬────────┘   └────────┬─────────┘   └─────────────────┘
         │                     │
         ▼                     ▼
┌─────────────────────────────────────────────────────────────┐
│                    Sensor Fusion Module                     │
│  - RF point cloud & occupancy grid                          │
│  - Vision pose graph & dense point cloud                    │
└─────────────┬───────────────────────────────┬───────────────┘
              │                               │
              ▼                               ▼
      ┌─────────────────┐          ┌─────────────────────┐
      │  Export Engine  │          │  Rendering Engine   │
      │   (Unity/UE4)   │          │ (VR/Projection Map) │
      └─────────────────┘          └─────────────────────┘
```
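A minimal sketch of what the Sensor Fusion Module's merge step could look like, assuming a grid-based representation: RF detections fill in cells the camera cannot see, while vision points land in the same grid with higher confidence. `FusionModule`, `CELL`, and the confidence values are hypothetical illustrations, not the project's actual API.

```python
from dataclasses import dataclass, field

CELL = 0.1  # assumed grid resolution in metres

@dataclass
class FusionModule:
    occupancy: dict = field(default_factory=dict)  # (ix, iy) -> confidence

    def ingest_rf(self, cells, confidence=0.5):
        # RF yields coarse occupied cells; blend them in at lower confidence
        for ix, iy in cells:
            prev = self.occupancy.get((ix, iy), 0.0)
            self.occupancy[(ix, iy)] = max(prev, confidence)

    def ingest_vision(self, points, confidence=0.9):
        # Vision points are metric (x, y); quantise them to grid cells
        for x, y in points:
            key = (int(x // CELL), int(y // CELL))
            prev = self.occupancy.get(key, 0.0)
            self.occupancy[key] = max(prev, confidence)

    def occupied(self, threshold=0.6):
        # Keep only cells confirmed above the confidence threshold
        return {c for c, p in self.occupancy.items() if p >= threshold}

fusion = FusionModule()
fusion.ingest_rf([(3, 4), (10, 2)])   # occluded region, seen only by RF
fusion.ingest_vision([(0.35, 0.41)])  # lands in cell (3, 4), raising its confidence
print(sorted(fusion.occupied()))      # [(3, 4)]
```

Taking the per-cell maximum is the simplest possible fusion rule; a real implementation would more likely use log-odds occupancy updates so that repeated low-confidence RF hits can also confirm a cell.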
## 🚀 Quick Start

### Prerequisites
- Ubuntu 20.04+ (primary target) or Windows 10+
- CUDA-capable GPU (NVIDIA GTX 1060+)
- Intel 5300 Wi-Fi card or a Broadcom chipset with Nexmon support
- USB camera (720p+ recommended)
### Installation

```bash
# Clone the repository
git clone https://github.com/your-org/NowYouSeeMe.git
cd NowYouSeeMe

# Install dependencies
./tools/install_dependencies.sh

# Build the project
./tools/build.sh

# Run calibration
./tools/calibrate.sh

# Start the holodeck
./tools/start_holodeck.sh
```
### Development Setup

```bash
# Set up the development environment
./tools/setup_dev.sh

# Run tests
./tools/run_tests.sh

# Start the development server
./tools/dev_server.sh
```
## 📁 Project Structure

```
NowYouSeeMe/
├── src/
│   ├── ingestion/       # Camera & CSI data capture
│   ├── calibration/     # Intrinsic/extrinsic calibration
│   ├── rf_slam/         # RF-based localization & SLAM
│   ├── vision_slam/     # Monocular vision SLAM
│   ├── fusion/          # Sensor fusion algorithms
│   ├── reconstruction/  # Surface & mesh reconstruction
│   ├── nerf/            # Neural Radiance Fields
│   └── engine/          # Rendering & interaction
├── tools/               # Build scripts & utilities
├── docs/                # Documentation
├── tests/               # Unit & integration tests
└── configs/             # Configuration files
```
## 🎮 Features
- Real-time 6DOF tracking with <20ms latency
- RF-vision sensor fusion for robust mapping
- Neural enhancement with NeRF integration
- Unity/Unreal export for VR/AR applications
- Projection mapping support for physical installations
- Auto-calibration and drift compensation
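The drift-compensation feature above could work along these lines: vision odometry accumulates drift over time, while RF positioning is absolute but noisy, so each frame blends a small fraction of the RF fix back into the vision estimate (a complementary filter). This is an illustrative sketch only; `compensate` and the `ALPHA` gain are hypothetical, not the project's actual implementation.

```python
ALPHA = 0.02  # assumed per-frame correction gain

def compensate(vision_pos, rf_pos, alpha=ALPHA):
    """Pull the drifting vision position toward the absolute RF fix."""
    return tuple(v + alpha * (r - v) for v, r in zip(vision_pos, rf_pos))

# Simulate 200 frames where vision drifts +1 cm/frame along x,
# while RF keeps reporting the true position at the origin.
pos = (0.0, 0.0, 0.0)
for _ in range(200):
    drifted = (pos[0] + 0.01, pos[1], pos[2])  # per-frame vision drift
    pos = compensate(drifted, (0.0, 0.0, 0.0))

# Error saturates near drift_per_frame * (1 - ALPHA) / ALPHA = 0.49 m,
# instead of growing without bound (2 m over these 200 frames).
print(round(pos[0], 2))
```

The steady-state error scales inversely with the gain, so `ALPHA` trades residual drift against how much RF noise leaks into the pose.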
## 📊 Performance Targets
- Latency: <20 ms end-to-end
- Accuracy: <10 cm spatial fidelity
- Frame rate: 30-60 FPS
- CSI rate: ≥100 packets/second
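One way the latency target above could be verified is to time each pass through the pipeline and compare the 95th-percentile latency against the 20 ms budget. The sketch below uses `time.sleep` placeholders for the real capture, fusion, and rendering stages; the stage timings and the `measure` helper are illustrative assumptions.

```python
import statistics
import time

def capture():  time.sleep(0.001)  # placeholder for camera/CSI capture
def fuse():     time.sleep(0.002)  # placeholder for sensor fusion
def render():   time.sleep(0.001)  # placeholder for rendering

def measure(frames=50):
    """Time end-to-end pipeline passes and return the p95 latency in ms."""
    samples = []
    for _ in range(frames):
        t0 = time.perf_counter()
        capture(); fuse(); render()
        samples.append((time.perf_counter() - t0) * 1000.0)
    # quantiles(n=20) returns 19 cut points; index 18 is the 95th percentile
    return statistics.quantiles(samples, n=20)[18]

p95_ms = measure()
print(f"p95 end-to-end latency: {p95_ms:.1f} ms (target: <20 ms)")
```

Using a high percentile rather than the mean matters here, because occasional slow frames are what break immersion even when the average is well under budget.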
## 🤝 Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for development guidelines.
## 📄 License

MIT License; see [LICENSE](LICENSE) for details.
## 🆘 Support
- Documentation
- Troubleshooting Guide
- Issue Tracker