Reachy Mini is Hugging Face and Pollen Robotics’ open-source robot for human-robot interaction. This guide shows how to integrate Honcho for persistent, multi-user memory with OpenAI’s Realtime API for voice.
Real-time memory: Honcho’s async API is designed for live voice interactions. Messages persist in the background without blocking audio, and the dialectic API returns user context fast enough for mid-conversation tool calls.
What It Does
- Face recognition identifies users and loads their personal memory
- Honcho stores conversations and reasons about each user over time
- OpenAI Realtime handles low-latency voice interaction
- Gaze tracking maintains eye contact during conversation
When a user returns days later, the robot remembers their name, interests, and previous discussions.
Setup
```shell
pip install reachy-mini honcho-ai openai python-dotenv numpy scipy mediapipe face-recognition
export OPENAI_API_KEY=your_openai_key
export HONCHO_API_KEY=your_honcho_key  # get one at app.honcho.dev
```
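A small startup check for the two keys exported above can fail fast instead of erroring mid-conversation. This is a hedged sketch using only the standard library; the `load_keys` helper is an assumption, not part of either SDK:

```python
import os

# Hypothetical config loader for the two keys exported above.
def load_keys() -> dict:
    keys = {
        "openai": os.getenv("OPENAI_API_KEY", ""),
        "honcho": os.getenv("HONCHO_API_KEY", ""),
    }
    missing = [name for name, value in keys.items() if not value]
    if missing:
        raise RuntimeError(f"missing API keys: {missing}")
    return keys
```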
Architecture
```
Reachy Mini (camera, mic, speaker)
        ↓
OpenAI Realtime API (voice + tools)
        ↓
Honcho (memory + reasoning per user)
```
Honcho Integration
Initialize Honcho with a robot peer (not observed) and dynamic user peers (observed):
```python
from honcho import Honcho
from honcho.api_types import PeerConfig

honcho = Honcho(api_key=api_key, workspace_id="reachy-mini")

# Robot peer - stores messages but isn't reasoned about
robot_peer = await honcho.aio.peer(
    "reachy",
    configuration=PeerConfig(observe_me=False),
)

# User peers - Honcho reasons about their preferences and history
user_peer = await honcho.aio.peer(user_id)
session = await honcho.aio.session(f"chat-{user_id}")
```
Store messages in the background without blocking the voice loop:
```python
# Queue messages async - doesn't block audio playback
await session.aio.add_messages(user_peer.message(transcript))
await session.aio.add_messages(robot_peer.message(response))
```
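One way to keep persistence fully off the audio path is to wrap the storage calls in fire-and-forget tasks and flush them at shutdown or user switch. This is a sketch of that pattern only; `FakeSession` is a stand-in for Honcho's async session, not the real client:

```python
import asyncio

class FakeSession:
    """Stand-in for Honcho's async session; real calls go over the network."""
    def __init__(self):
        self.stored = []

    async def add_messages(self, msg):
        await asyncio.sleep(0.05)  # simulate network latency
        self.stored.append(msg)

async def voice_loop(session):
    pending = set()
    for transcript in ["hi", "what's my name?"]:
        # Queue persistence without awaiting it, so audio isn't blocked.
        task = asyncio.create_task(session.add_messages(transcript))
        pending.add(task)
        task.add_done_callback(pending.discard)
        # ... audio playback continues here immediately ...
    await asyncio.gather(*pending)  # flush before shutdown or user switch

session = FakeSession()
asyncio.run(voice_loop(session))
```

The explicit `pending` set matters: tasks that are only referenced by `create_task` can be garbage-collected before they finish.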
The robot calls Honcho mid-conversation via OpenAI function calling — fast enough for real-time voice:
| Tool | Purpose |
|---|---|
| `recall` | Query Honcho about the user (“What’s their name?”) |
| `create_conclusion` | Save important facts to long-term memory |
| `see` | Capture and analyze the camera feed |
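The three tools above have to be declared to the Realtime session so the model can call them. A sketch of what those declarations might look like, following OpenAI's function-tool schema (exact field names should be checked against the Realtime API docs; the descriptions here are illustrative):

```python
# Tool declarations passed when configuring the Realtime session.
TOOLS = [
    {
        "type": "function",
        "name": "recall",
        "description": "Query Honcho's memory about the current user.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
    {
        "type": "function",
        "name": "create_conclusion",
        "description": "Save an important fact about the user to long-term memory.",
        "parameters": {
            "type": "object",
            "properties": {"content": {"type": "string"}},
            "required": ["content"],
        },
    },
    {
        "type": "function",
        "name": "see",
        "description": "Capture a camera frame and describe what's visible.",
        "parameters": {"type": "object", "properties": {}},
    },
]
```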
```python
# Recall - ask Honcho's dialectic API (returns in ~200-500ms)
result = await user_peer.aio.chat(
    "What do I know about this user?",
    session=session,
    reasoning_level="medium",
)

# Create conclusion - save a fact
await user_peer.conclusions_of(user_id).aio.create([
    {"content": "Their name is Alice"}
])
```
Multi-User Support
Face recognition identifies returning users. When a new face is detected, the agent:
- Flushes pending transcripts to the previous user’s session
- Switches Honcho context to the new user
- Fetches a briefing from Honcho’s dialectic API
- Reconnects OpenAI with fresh context and triggers a greeting
```python
# Get a briefing when a user is recognized
briefing = await user_peer.aio.chat(
    "What should I know about this user? Name, interests, recent topics.",
    session=session,
    reasoning_level="low",
)
```
System Prompt
```python
SYSTEM_PROMPT = """You are Reachy, a friendly robot. Keep responses concise.
You have a recall tool for memory. ALWAYS use it before claiming you don't
know something about the user. Never say "Nice to meet you" if you've met before."""
```
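Since the briefing is fetched before the Realtime session reconnects, one natural place for it is appended to the system prompt. The `build_instructions` helper below is an assumption about how to combine the two, not part of the guide's code:

```python
SYSTEM_PROMPT = """You are Reachy, a friendly robot. Keep responses concise.
You have a recall tool for memory. ALWAYS use it before claiming you don't
know something about the user. Never say "Nice to meet you" if you've met before."""

def build_instructions(briefing: str) -> str:
    """Append the Honcho briefing so the model starts with user context."""
    return f"{SYSTEM_PROMPT}\n\nWhat you know about the current user:\n{briefing}"
```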
Run
Next Steps