Honcho can run entirely locally with OpenClaw — no external API required. Keep your data on your machine while getting full memory capabilities across all channels. See the self-hosting guide to get started.
## Install the Plugin
`openclaw honcho setup` prompts for your API key, writes the config, and optionally uploads any legacy memory files to Honcho.
### Alternative: ClawHub Skill
The `honcho-setup` skill handles installation and migration interactively from a chat session.

## Migrating Legacy Memory
If you have existing workspace memory files (`USER.md`, `MEMORY.md`, `IDENTITY.md`, `memory/`, `canvas/`, etc.), `openclaw honcho setup` will detect them and offer to migrate them.
Migration is non-destructive — files are uploaded to Honcho. Originals are never deleted or moved.
### Legacy files
- User/owner files (content describes the user): `USER.md`, `IDENTITY.md`, `MEMORY.md`, and all files in the `memory/` and `canvas/` directories
- Agent/self files: `SOUL.md`, `AGENTS.md`, `TOOLS.md`, `BOOTSTRAP.md`
### Upload to Honcho
Files are uploaded via `session.uploadFile()`. User/owner files go to the owner peer; agent/self files go to the `openclaw` peer.
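For illustration, the routing just described can be sketched as a small classifier. The file lists come from this page; the function itself, and its fallback to the owner peer, are assumptions rather than the plugin's actual implementation.

```typescript
// Hypothetical sketch of legacy-file routing during migration.
const OWNER_FILES = ["USER.md", "IDENTITY.md", "MEMORY.md"];
const AGENT_FILES = ["SOUL.md", "AGENTS.md", "TOOLS.md", "BOOTSTRAP.md"];

function targetPeer(path: string): "owner" | "openclaw" {
  const name = path.split("/").pop() ?? path;
  // memory/ and canvas/ contents describe the user, per the lists above.
  if (OWNER_FILES.includes(name) || path.startsWith("memory/") || path.startsWith("canvas/")) {
    return "owner";
  }
  if (AGENT_FILES.includes(name)) {
    return "openclaw";
  }
  // Assumption: unrecognized files default to the owner peer.
  return "owner";
}
```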
## How It Works
Once installed, the plugin runs automatically:
- Message Observation — After every AI turn, the conversation is persisted to Honcho. Both user and agent messages are observed, allowing Honcho to build and refine its models.
- Tool-Based Context Access — The AI can query Honcho mid-conversation using tools like `honcho_context`, `honcho_search_conclusions`, `honcho_search_messages`, and `honcho_ask` to retrieve relevant context. Context is injected during OpenClaw's `before_prompt_build` phase, ensuring accurate turn boundaries.
- Dual Peer Model — Honcho maintains separate representations: one for the user (preferences, facts, communication style) and one for the agent (personality, learned behaviors). Each OpenClaw agent gets its own Honcho peer (`agent-{id}`), so multi-agent workspaces maintain isolated memory.
- Clean Persistence — Platform metadata (conversation info, sender headers, thread context, forwarded messages) is stripped before saving to Honcho, ensuring only meaningful content is persisted.
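A minimal sketch of the clean-persistence step, assuming a hypothetical message shape; the field names here are illustrative, not the plugin's actual schema.

```typescript
// Hypothetical platform-message shape; only `content` is meaningful to Honcho.
type OutboundMessage = {
  content: string;
  conversationInfo?: unknown;
  senderHeader?: string;
  threadContext?: unknown;
  forwardedFrom?: string;
};

function stripPlatformMetadata(msg: OutboundMessage): { content: string } {
  // Drop every platform field so only the meaningful content is persisted.
  return { content: msg.content };
}
```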
Multi-Agent Support
OpenClaw uses a multi-agent architecture where a primary agent can spawn subagents to handle specialized tasks. The Honcho plugin is fully aware of this hierarchy:- Automatic Subagent Detection — When OpenClaw spawns a subagent, the plugin tracks the parent→child relationship via the
subagent_spawnedhook. Each subagent session records itsparentPeerIdin metadata. - Parent Observer Peer — The spawning agent is added as a silent observer in the subagent’s Honcho session (
observeMe: false, observeOthers: true). This gives Honcho visibility into the full agent tree — the parent can see what its subagents are doing without its own messages being attributed to the subagent session.
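The observer arrangement above can be sketched as follows. The `observeMe`/`observeOthers` flags and the `agent-{id}` naming come from this page; the function and its return shape are hypothetical.

```typescript
// Illustrative sketch of the peers a subagent's Honcho session might declare.
type PeerConfig = { observeMe: boolean; observeOthers: boolean };

function subagentSessionPeers(subagentId: string, parentId: string): Map<string, PeerConfig> {
  return new Map<string, PeerConfig>([
    // The subagent participates normally in its own session.
    [`agent-${subagentId}`, { observeMe: true, observeOthers: true }],
    // The parent is a silent observer: it sees the session, but its own
    // messages are not attributed to the subagent session.
    [`agent-${parentId}`, { observeMe: false, observeOthers: true }],
  ]);
}
```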
## AI Tools

### Data Retrieval (fast, no LLM)

| Tool | Description |
|---|---|
| `honcho_context` | User knowledge across all sessions. `detail='card'` for key facts, `'full'` for a broad representation. |
| `honcho_search_conclusions` | Semantic vector search over stored conclusions, ranked by relevance. |
| `honcho_search_messages` | Find specific messages across all sessions. Filter by sender, date, or metadata. |
| `honcho_session` | Current session history and summary. Supports semantic search within the session. |
### Q&A (LLM-powered)

| Tool | Description |
|---|---|
| `honcho_ask` | Ask Honcho a question about the user. `depth='quick'` for facts, `'thorough'` for synthesis. |
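For illustration, a model-issued `honcho_ask` call might carry arguments like the JSON below. The `depth` values come from the table above; the envelope (`tool`/`arguments`) and the `question` parameter name are generic assumptions, not a documented wire format.

```json
{
  "tool": "honcho_ask",
  "arguments": {
    "question": "What timezone does the user usually work in?",
    "depth": "quick"
  }
}
```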
## CLI Commands
## Configuration
Run `openclaw honcho setup` to configure interactively, or set values directly in `~/.openclaw/openclaw.json` under `plugins.entries["openclaw-honcho"].config`.

| Key | Default | Description |
|---|---|---|
| `apiKey` | — | Honcho API key (required for managed; omit for self-hosted). |
| `workspaceId` | `"openclaw"` | Honcho workspace ID for memory isolation. |
| `baseUrl` | `"https://api.honcho.dev"` | API endpoint (for self-hosted instances). |
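Putting the keys above together, a hand-edited entry in `~/.openclaw/openclaw.json` might look like the sketch below; the surrounding file will contain other settings, and the values shown are placeholders.

```json
{
  "plugins": {
    "entries": {
      "openclaw-honcho": {
        "config": {
          "apiKey": "your-honcho-api-key",
          "workspaceId": "openclaw",
          "baseUrl": "https://api.honcho.dev"
        }
      }
    }
  }
}
```

For a self-hosted instance, omit `apiKey` and set `baseUrl` to your local Honcho endpoint instead.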
## Self-Hosted Honcho
Point the plugin to your local instance and follow the self-hosting guide to get started.

## Local File Search (QMD Integration)
The plugin automatically exposes OpenClaw's `memory_search` and `memory_get` tools when a memory backend is configured, so Honcho memory and local file search work side by side.
### Setup
- Install QMD on your server.
- Configure OpenClaw to use QMD as the memory backend in `~/.openclaw/openclaw.json`, setting `memory.qmd.paths`. See the QMD Memory Engine docs for full setup.
- Restart the gateway.
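As a rough sketch, the memory-backend setting from the configuration step might look like this; the exact shape of the `memory.qmd` block is defined by the QMD Memory Engine docs, and the paths are placeholders.

```json
{
  "memory": {
    "qmd": {
      "paths": ["~/notes", "~/projects/docs"]
    }
  }
}
```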
### Available Tools
When QMD is configured, you get both Honcho and local file tools:

| Tool | Source | Description |
|---|---|---|
| `honcho_*` | Honcho | Cross-session memory, user modeling, dialectic reasoning |
| `memory_search` | QMD | Search local markdown files |
| `memory_get` | QMD | Retrieve file content |
## Next Steps
- GitHub Repository: source code, issues, and README.
- OpenClaw Memory Docs: memory backends, search, and configuration in the OpenClaw docs.
- Honcho Architecture: learn about peers, sessions, and dialectic reasoning.