Reachy Mini Community App

The all‑in‑one dashboard for Reachy Mini

AI conversation with 6 providers, 71 emotions, 19 dances, real-time telemetry, MuJoCo 3D viewer, music playback, and full system monitoring.

⚡ 6 AI Providers · 🎭 71 Emotions · 🕺 19 Dances · 🧠 13 AI Tools · 🔌 4 WebSockets · 🎮 MuJoCo 3D
Python · FastAPI · LiteLLM · Vanilla JS · WebSocket · MuJoCo
6 AI Providers
71 Emotions
19 Dances
13 AI Tools
4 WebSockets
9 Live Charts

See It In Action

System Telemetry: 9 live charts · top processes · hardware inventory
3D Robot View: MuJoCo simulation · head pose telemetry
AI Provider Settings: 6 providers · STT/LLM/VLM/TTS · voice selection

Everything Your Reachy Mini Needs

One app to rule them all — no other community app comes close.

💬

AI Conversation

Full voice pipeline: robot mic → Whisper STT → LLM chat → TTS response. Supports OpenAI, Anthropic, Groq, Gemini, DeepSeek, and ElevenLabs.
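A rough sketch of what that pipeline can look like in Python with LiteLLM (model names, file paths, and the output file below are illustrative assumptions, not the app's actual code):

```python
import litellm

# Assumes provider API keys are already configured for LiteLLM.

# 1) STT: transcribe a clip recorded from the robot mic (path is illustrative)
with open("robot_mic_clip.wav", "rb") as audio:
    text = litellm.transcription(model="whisper-1", file=audio).text

# 2) LLM: generate a chat reply
reply = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": text}],
).choices[0].message.content

# 3) TTS: synthesize speech for playback on the robot speaker
litellm.speech(model="openai/tts-1", voice="alloy", input=reply).stream_to_file("reply.mp3")
```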

👁

Vision (VLM)

Dedicated vision model selector with auto-detection per provider. Ask "what do you see?" and Reachy looks through the camera and describes the scene.
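Under the hood this is a standard OpenAI-style multimodal message that LiteLLM passes to whichever vision model is selected. A minimal sketch (snapshot path and model name are placeholders):

```python
import base64
import litellm

# Encode a camera snapshot (path is illustrative)
with open("snapshot.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

response = litellm.completion(
    model="gpt-4o-mini",  # any vision-capable model works here
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What do you see?"},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```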

🎭

71 Emotions + 19 Dances

The AI expresses itself with emotions and dances during conversation. Say something funny and Reachy laughs. Ask it to dance and it picks a move.

📊

System Telemetry

9 live charts: CPU per-core, RAM breakdown, disk, network, WiFi, load, fan speed, disk I/O, temperature. Plus top processes and hardware inventory.
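These metrics map onto ordinary system counters. A sketch of how similar numbers can be collected in Python with psutil (an assumed tool here; the app's own collector may differ, and fan/temperature sensors need platform support):

```python
import os
import psutil

snapshot = {
    "cpu_per_core": psutil.cpu_percent(interval=0.5, percpu=True),   # % per core
    "ram": psutil.virtual_memory()._asdict(),                        # total/used/available
    "disk": psutil.disk_usage("/")._asdict(),
    "net": psutil.net_io_counters()._asdict(),                       # bytes sent/received
    "load": os.getloadavg(),                                         # 1/5/15-minute averages
    "disk_io": psutil.disk_io_counters()._asdict(),
    "fans": {k: [f.current for f in v] for k, v in psutil.sensors_fans().items()},
    "temps": {k: [t.current for t in v] for k, v in psutil.sensors_temperatures().items()},
}
print(snapshot)
```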

🕹

Head Control + 3D View

Interactive MuJoCo simulation with real-time joint updates. Drag sliders for roll/pitch/yaw, X/Y/Z, and the antennas, and watch the Stewart platform actuator angles update live.
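For a feel of the underlying simulation API, here is a toy MuJoCo snippet that drives a single hinge joint and recomputes the pose (the XML below is a stand-in, not Reachy Mini's actual model):

```python
import mujoco

# Stand-in model: one hinge joint for "yaw"; the real Reachy Mini model is far richer.
XML = """
<mujoco>
  <worldbody>
    <body name="head" pos="0 0 0.1">
      <joint name="head_yaw" type="hinge" axis="0 0 1"/>
      <geom type="sphere" size="0.05" mass="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

data.qpos[model.joint("head_yaw").qposadr] = 0.4   # ~23 degrees of yaw
mujoco.mj_forward(model, data)                     # recompute kinematics
print(data.body("head").xquat)                     # resulting head orientation
```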

🎵

Music + Audio

Upload music, play via GStreamer on the robot speaker, manage your library. Selectable audio routing: robot mic or browser mic, robot or browser speaker.
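A bare-bones sketch of GStreamer playback from Python using the standard playbin element (the file URI is illustrative; the app's player adds routing and library management on top):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# playbin handles decoding and output selection; the URI is illustrative
player = Gst.ElementFactory.make("playbin", "player")
player.set_property("uri", "file:///home/reachy/music/track.mp3")
player.set_state(Gst.State.PLAYING)

# Block until end-of-stream or error, then clean up
bus = player.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
player.set_state(Gst.State.NULL)
```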

13 Tools the LLM Calls Autonomously

Natural conversation triggers actions, no buttons needed (tool-calling sketch after the list).

😊 Play emotions
🕺 Dance moves
📷 Take snapshots
👁 Describe vision
🎬 Record video
🎙 Record audio
🎵 Play music
⏹ Stop music
📋 List tracks
🖥 System status
🕒 Date & time
🔄 Move head
🚫 Ignore noise
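Under the hood this is OpenAI-style tool calling, routed through LiteLLM. A minimal sketch with a single hypothetical play_emotion tool (not the app's exact schema):

```python
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "play_emotion",  # hypothetical tool name for illustration
        "description": "Play one of Reachy Mini's emotion animations.",
        "parameters": {
            "type": "object",
            "properties": {"emotion": {"type": "string"}},
            "required": ["emotion"],
        },
    },
}]

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "That joke was hilarious!"}],
    tools=tools,
)

# If the model decided to act, it returns tool calls instead of plain text
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```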

AI Pipeline

🎤 STT: OpenAI Whisper · Groq
🧠 LLM: OpenAI · Anthropic · Groq · Gemini · DeepSeek
🔊 TTS: OpenAI · ElevenLabs · Groq Orpheus · Gemini
👁 VLM: Auto-detected per provider

All providers unified through LiteLLM. API keys entered in the web UI — no environment variables needed.
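Because LiteLLM normalizes every provider behind one call, switching providers is essentially a different model string. A sketch (model names are examples, and keys still have to be supplied to LiteLLM when run standalone):

```python
import litellm

messages = [{"role": "user", "content": "Wave hello to the visitors."}]

# Same call shape for every provider; only the model string changes.
for model in ("gpt-4o-mini", "anthropic/claude-3-5-haiku-20241022",
              "groq/llama-3.1-8b-instant", "gemini/gemini-1.5-flash"):
    reply = litellm.completion(model=model, messages=messages)
    print(model, reply.choices[0].message.content[:60])
```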

WebSocket Architecture

Zero polling. Everything streams live (client sketch after the endpoint list).

/ws/live · Robot state + system stats at configurable Hz
/ws/intercom · Browser mic PCM audio to robot speaker
/ws/terminal · Full PTY terminal via tmux
/ws/transcribe · Transcription results + TTS audio broadcast
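A minimal sketch of consuming the live stream from Python with the websockets library, assuming the app's default port 8042 and a JSON payload (the exact message schema is not documented here):

```python
import asyncio
import json
import websockets

async def watch_live():
    # Connect to the live telemetry stream (default app port 8042)
    async with websockets.connect("ws://localhost:8042/ws/live") as ws:
        for _ in range(10):                  # read a handful of frames
            frame = json.loads(await ws.recv())
            print(frame)                     # robot state + system stats payload

asyncio.run(watch_live())
```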

Up and Running in 2 Steps

1

Install

Click “Install” above

Or run pip install -e . in the apps venv on your Reachy

2

Open

http://localhost:8042

Configure your AI providers and start chatting