Reachy Mini is Hugging Face and Pollen Robotics’ open-source robot for human-robot interaction. This guide integrates Honcho for persistent, multi-user memory with OpenAI’s Realtime API for voice.

Documentation Index
Fetch the complete documentation index at: https://docs.honcho.dev/llms.txt
Use this file to discover all available pages before exploring further.
Real-time memory: Honcho’s async API is designed for live voice interactions. Messages persist in the background without blocking audio, and the dialectic API returns user context fast enough for mid-conversation tool calls.
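The non-blocking persistence pattern can be sketched with a plain asyncio queue. Here `save_to_honcho` is a hypothetical stand-in for a real Honcho `session.add_messages()` call; the point is that the audio loop only enqueues and never waits on network I/O:

```python
import asyncio

saved: list[str] = []  # stands in for Honcho's message store

async def save_to_honcho(message: str) -> None:
    # Hypothetical stand-in for session.add_messages(); simulates latency.
    await asyncio.sleep(0.01)
    saved.append(message)

async def persist_worker(queue: asyncio.Queue) -> None:
    # Drain transcripts in the background while audio handling continues.
    while True:
        msg = await queue.get()
        await save_to_honcho(msg)
        queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(persist_worker(queue))
    # The audio loop only enqueues; put_nowait returns immediately.
    for turn in ["hello", "my name is Ada"]:
        queue.put_nowait(turn)
    await queue.join()  # demo only: wait for the flush before exiting
    worker.cancel()

asyncio.run(main())
```

The same shape lets a face-switch handler flush the queue before changing users, without ever stalling the voice stream.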
GitHub Repository
Full source code
Build Livestream
Watch us build it live
What It Does
- Face recognition identifies users and loads their personal memory
- Honcho stores conversations and reasons about each user over time
- OpenAI Realtime handles low-latency voice interaction
- Gaze tracking maintains eye contact during conversation
Setup
Architecture
Honcho Integration
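The peer/session setup described below can be sketched roughly as follows. This assumes the Honcho Python SDK's `peer()`, `session()`, and `add_peers()` client methods; the identifiers and the helper itself are illustrative, not the project's exact code:

```python
def setup_memory(honcho, user_id: str):
    """Create the robot peer, the current user's peer, and a shared session.

    Sketch only: assumes the Honcho Python SDK's peer()/session() API.
    """
    robot = honcho.peer("reachy-mini")  # the robot itself
    user = honcho.peer(user_id)         # one peer per recognized human
    session = honcho.session(f"reachy-{user_id}")
    # Both peers join the session. Honcho lets you disable observation
    # per peer, so only the human user gets a working representation,
    # not the robot.
    session.add_peers([robot, user])
    return robot, user, session
```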
Initialize Honcho with a robot peer (not observed) and dynamic user peers (observed).

Memory Tools
The robot calls Honcho mid-conversation via OpenAI function calling, fast enough for real-time voice:

| Tool | Purpose |
|---|---|
| `recall` | Query Honcho about the user (“What’s their name?”) |
| `create_conclusion` | Save important facts to long-term memory |
| `see` | Capture and analyze the camera feed |
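The three tools from the table can be declared as function-calling schemas. This is a sketch in the flat tool format the Realtime API accepts; the descriptions and parameter names are illustrative, not the project's exact definitions:

```python
# Illustrative tool schemas; only the three tool names come from the guide.
MEMORY_TOOLS = [
    {
        "type": "function",
        "name": "recall",
        "description": "Ask Honcho's dialectic API about the current user.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Natural-language question about the user.",
                }
            },
            "required": ["query"],
        },
    },
    {
        "type": "function",
        "name": "create_conclusion",
        "description": "Save an important fact about the user to long-term memory.",
        "parameters": {
            "type": "object",
            "properties": {"conclusion": {"type": "string"}},
            "required": ["conclusion"],
        },
    },
    {
        "type": "function",
        "name": "see",
        "description": "Capture a camera frame and analyze what is in view.",
        "parameters": {"type": "object", "properties": {}},
    },
]
```

When the model emits a `recall` call, the handler forwards `query` to Honcho's dialectic endpoint and returns the answer as the tool result.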
Multi-User Support
Face recognition identifies returning users. When a new face is detected, the agent:

- Flushes pending transcripts to the previous user’s session
- Switches Honcho context to the new user
- Fetches a briefing from Honcho’s dialectic API
- Reconnects OpenAI with fresh context and triggers a greeting
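The switch sequence above can be sketched as a single handler. The `agent` object and its method names are hypothetical stand-ins for the app's transcript buffer, Honcho client, and Realtime connection; only the ordering of the steps comes from the guide:

```python
def handle_new_face(agent, new_user_id: str) -> str:
    """Run the four-step user switch in order (sketch; method names
    are illustrative)."""
    agent.flush_transcripts()                     # 1. persist pending turns
    agent.set_current_user(new_user_id)           # 2. switch Honcho context
    briefing = agent.fetch_briefing(new_user_id)  # 3. dialectic briefing
    agent.reconnect_realtime(briefing)            # 4. reconnect with context
    agent.trigger_greeting()                      #    ...and greet the user
    return briefing
```

Flushing before switching matters: any transcript still in the queue belongs to the previous user's session, and persisting it afterward would attribute it to the wrong peer.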
System Prompt
Run
Next Steps
Honcho Architecture
Understand peers, sessions, and reasoning
Chat Endpoint
Learn about Honcho’s dialectic API
Get Context
Retrieve formatted conversation history
GitHub code
Dig into the code