AI conversation with 6 providers, 71 emotions, 19 dances, real-time telemetry, MuJoCo 3D viewer, music playback, and full system monitoring.
One app to rule them all — no other community app comes close.
Full voice pipeline: robot mic → Whisper STT → LLM chat → TTS response. Supports OpenAI, Anthropic, Groq, Gemini, DeepSeek, and ElevenLabs.
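The pipeline above is plain orchestration: audio in, text through the LLM, audio out. A minimal sketch, with each stage as a swappable callable (the stubs below are placeholders, not the app's actual provider code):

```python
from typing import Callable

def voice_turn(
    record_audio: Callable[[], bytes],   # robot mic capture
    stt: Callable[[bytes], str],         # speech-to-text, e.g. Whisper
    llm: Callable[[str], str],           # chat completion
    tts: Callable[[str], bytes],         # text-to-speech
) -> bytes:
    """One conversational turn: mic audio in, TTS audio out."""
    audio = record_audio()
    text = stt(audio)
    reply = llm(text)
    return tts(reply)

# Stub stages to show the data flow; real stages would call provider APIs.
out = voice_turn(
    record_audio=lambda: b"\x00\x01",
    stt=lambda a: "hello robot",
    llm=lambda t: f"You said: {t}",
    tts=lambda r: r.encode(),
)
print(out)  # b'You said: hello robot'
```

Keeping the stages as callables is what lets the app mix providers freely: any STT, LLM, or TTS backend slots in without touching the loop.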
Dedicated vision model selector with auto-detection per provider. Ask "what do you see?" and Reachy looks through the camera and describes the scene.
The AI expresses itself with emotions and dances during conversation. Say something funny and Reachy laughs. Ask it to dance and it picks a move.
9 live charts: CPU per-core, RAM breakdown, disk, network, WiFi, load, fan speed, disk I/O, temperature. Plus top processes and hardware inventory.
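To illustrate the kind of data those charts stream, here is a stdlib-only snapshot sketch; the app itself likely reads richer sources (psutil, /proc, sensors), so treat the fields as examples, not its actual schema:

```python
import os
import shutil

def system_snapshot() -> dict:
    """A stdlib-only sample of dashboard-style metrics (illustrative fields)."""
    total, used, free = shutil.disk_usage("/")
    snap = {
        "cpu_count": os.cpu_count(),
        "disk_used_pct": round(100 * used / total, 1),
    }
    # Load average is POSIX-only, which is fine on the robot's Linux.
    if hasattr(os, "getloadavg"):
        snap["load_1m"], snap["load_5m"], snap["load_15m"] = os.getloadavg()
    return snap

print(system_snapshot())
```

A real per-core CPU or fan-speed chart needs platform-specific sources beyond the stdlib; this only shows the shape of a metrics frame.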
Interactive MuJoCo simulation with real-time joint state. Drag sliders for roll/pitch/yaw, X/Y/Z, and the antennas, and watch the Stewart platform actuator angles update live.
Upload music, play via GStreamer on the robot speaker, manage your library. Selectable audio routing: robot mic or browser mic, robot or browser speaker.
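Routing a browser mic to the robot speaker usually means converting the browser's float samples to 16-bit PCM somewhere along the way. The app's exact wire format isn't specified here; a common conversion (assumed, not confirmed) looks like:

```python
import struct

def float32_to_pcm16(samples: list[float]) -> bytes:
    """Convert float samples in [-1.0, 1.0] to little-endian signed 16-bit PCM."""
    clamped = (max(-1.0, min(1.0, s)) for s in samples)
    return b"".join(struct.pack("<h", int(s * 32767)) for s in clamped)

print(float32_to_pcm16([0.0, 1.0, -1.0]).hex())  # 0000ff7f0180
```

Clamping before scaling avoids overflow when the browser delivers samples slightly outside the nominal range.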
Natural conversation triggers actions — no buttons needed.
All providers unified through LiteLLM. API keys entered in the web UI — no environment variables needed.
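LiteLLM routes by a "provider/model" prefix on the model string, so one call shape serves every backend. A sketch of what that unification could look like; the model names below are illustrative, not the app's actual defaults:

```python
# Provider-agnostic routing via LiteLLM's "provider/model" naming convention.
# Model names are examples only.
MODELS = {
    "openai": "openai/gpt-4o-mini",
    "anthropic": "anthropic/claude-3-5-haiku-20241022",
    "groq": "groq/llama-3.1-8b-instant",
    "gemini": "gemini/gemini-1.5-flash",
    "deepseek": "deepseek/deepseek-chat",
}

def build_chat_request(provider: str, user_text: str) -> dict:
    """Arguments for litellm.completion(); the same shape works for every provider."""
    return {
        "model": MODELS[provider],
        "messages": [{"role": "user", "content": user_text}],
    }

req = build_chat_request("groq", "wave hello")
print(req["model"])  # groq/llama-3.1-8b-instant
# At runtime, with an API key configured:
#   import litellm
#   litellm.completion(**req)
```

Because keys are entered in the web UI, the actual completion call is left commented out here; only the request shape is demonstrated.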
Zero polling. Everything streams live.
- /ws/live: robot state and system stats at a configurable Hz
- /ws/intercom: browser-mic PCM audio to the robot speaker
- /ws/terminal: full PTY terminal via tmux
- /ws/transcribe: transcription results and TTS audio broadcast
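A consumer of these streams is a few lines of client code. This sketch parses a /ws/live-style frame; the payload field names (`joints`, `cpu`) are assumptions for illustration, not the app's documented schema:

```python
import json

def parse_live_frame(raw: str) -> dict:
    """Decode one telemetry frame; field names here are illustrative."""
    frame = json.loads(raw)
    return {"joints": frame.get("joints", {}), "cpu": frame.get("cpu")}

# Connecting for real needs a WebSocket client, e.g. the `websockets` package:
#   import asyncio, websockets
#   async def watch():
#       async with websockets.connect("ws://localhost:8042/ws/live") as ws:
#           async for msg in ws:
#               print(parse_live_frame(msg))
#   asyncio.run(watch())

print(parse_live_frame('{"joints": {"head_yaw": 0.1}, "cpu": 12.5}'))
```

Because the server pushes frames at a configurable rate, the client just awaits messages; there is nothing to poll.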
1. Click “Install” above, or run pip install -e . in the apps venv on your Reachy
2. Open http://localhost:8042
3. Configure your AI providers and start chatting