OpenClaw is a general AI agent that can perform actions on behalf of a user. The Honcho plugin gives OpenClaw memory across every channel — WhatsApp, Telegram, Discord, Slack, and more.
Honcho can run entirely locally with OpenClaw — no external API required. Keep your data on your machine while getting full memory capabilities across all channels. See the self-hosting guide to get started.
For OpenClaw’s own documentation on Honcho, see the Honcho Memory guide.

Install the Plugin

openclaw plugins install @honcho-ai/openclaw-honcho
openclaw honcho setup
openclaw gateway --force
openclaw honcho setup prompts for your API key, writes the config, and optionally uploads any legacy memory files to Honcho.
Alternative: ClawHub Skill

The honcho-setup skill handles installation and migration interactively from a chat session:
npx clawhub install honcho-setup
# Restart OpenClaw, then invoke the skill from a session
openclaw plugins install @honcho-ai/openclaw-honcho
openclaw gateway restart

Migrating Legacy Memory

If you have existing workspace memory files (USER.md, MEMORY.md, IDENTITY.md, memory/, canvas/, etc.), openclaw honcho setup will detect them and offer to migrate them.
Migration is non-destructive — files are uploaded to Honcho. Originals are never deleted or moved.

Legacy files

User/owner files (content describes the user):
  • USER.md, IDENTITY.md, MEMORY.md
  • All files in memory/ and canvas/ directories
Agent/self files (content describes the agent):
  • SOUL.md, AGENTS.md, TOOLS.md, BOOTSTRAP.md

Upload to Honcho

Files are uploaded via session.uploadFile(). User/owner files go to the owner peer; agent/self files go to the openclaw peer.
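The routing rule above can be sketched as a small classifier. This is an illustrative sketch, not the plugin's actual source; the function name and the default for unrecognized files are assumptions.

```python
# Sketch of the legacy-file routing rule: user/owner files go to the owner
# peer, agent/self files to the openclaw peer. Names here are illustrative.
from pathlib import PurePosixPath

AGENT_FILES = {"SOUL.md", "AGENTS.md", "TOOLS.md", "BOOTSTRAP.md"}
OWNER_FILES = {"USER.md", "IDENTITY.md", "MEMORY.md"}
OWNER_DIRS = {"memory", "canvas"}

def target_peer(path: str) -> str:
    """Return the Honcho peer a legacy workspace file is uploaded to."""
    p = PurePosixPath(path)
    if p.name in AGENT_FILES:
        return "openclaw"  # agent/self files describe the agent
    if p.name in OWNER_FILES or p.parts[0] in OWNER_DIRS:
        return "owner"     # user/owner files describe the user
    return "owner"         # assumption: treat unknown files as user content

print(target_peer("SOUL.md"))              # openclaw
print(target_peer("memory/2024-01.md"))    # owner
```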

How It Works

Once installed, the plugin runs automatically:
  • Message Observation — After every AI turn, the conversation is persisted to Honcho. Both user and agent messages are observed, allowing Honcho to build and refine its models.
  • Tool-Based Context Access — The AI can query Honcho mid-conversation using tools like honcho_context, honcho_search_conclusions, honcho_search_messages, and honcho_ask to retrieve relevant context. Context is injected during OpenClaw’s before_prompt_build phase, ensuring accurate turn boundaries.
  • Dual Peer Model — Honcho maintains separate representations: one for the user (preferences, facts, communication style) and one for the agent (personality, learned behaviors). Each OpenClaw agent gets its own Honcho peer (agent-{id}), so multi-agent workspaces maintain isolated memory.
  • Clean Persistence — Platform metadata (conversation info, sender headers, thread context, forwarded messages) is stripped before saving to Honcho, ensuring only meaningful content is persisted.
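The "Clean Persistence" step amounts to a filter over the raw channel payload. A minimal sketch, assuming hypothetical metadata field names (the plugin's real field list is not shown here):

```python
# Sketch of stripping platform metadata before persisting a message to Honcho.
# The key names below are hypothetical; only meaningful content survives.
PLATFORM_KEYS = {"conversation_info", "sender_header", "thread_context", "forwarded_from"}

def clean_for_honcho(message: dict) -> dict:
    """Drop platform metadata keys so only meaningful content is persisted."""
    return {k: v for k, v in message.items() if k not in PLATFORM_KEYS}

raw = {
    "content": "book a table for two",
    "sender": "user",
    "sender_header": "[WhatsApp] +1 555 0100",
    "thread_context": {"reply_to": "msg_41"},
}
print(clean_for_honcho(raw))  # {'content': 'book a table for two', 'sender': 'user'}
```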

Multi-Agent Support

OpenClaw uses a multi-agent architecture where a primary agent can spawn subagents to handle specialized tasks. The Honcho plugin is fully aware of this hierarchy:
  • Automatic Subagent Detection — When OpenClaw spawns a subagent, the plugin tracks the parent→child relationship via the subagent_spawned hook. Each subagent session records its parentPeerId in metadata.
  • Parent Observer Peer — The spawning agent is added as a silent observer in the subagent’s Honcho session (observeMe: false, observeOthers: true). This gives Honcho visibility into the full agent tree — the parent can see what its subagents are doing without its own messages being attributed to the subagent session.
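The observer arrangement above can be pictured as per-session peer configuration. A sketch under assumed names, mirroring the observeMe/observeOthers flags and the agent-{id} peer naming from the source:

```python
# Sketch of a subagent session's peer setup: the subagent participates
# normally, while the spawning (parent) agent is a silent observer.
# Function and field names are illustrative, not the plugin's API.
def subagent_session_peers(subagent_id: str, parent_id: str) -> dict:
    return {
        "peers": {
            f"agent-{subagent_id}": {"observe_me": True, "observe_others": True},
            # Parent sees the subagent's activity, but its own messages are
            # not attributed to this session (observeMe: false).
            f"agent-{parent_id}": {"observe_me": False, "observe_others": True},
        },
        "metadata": {"parentPeerId": f"agent-{parent_id}"},
    }

cfg = subagent_session_peers("researcher", "main")
print(cfg["metadata"]["parentPeerId"])  # agent-main
```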

AI Tools

Data Retrieval (fast, no LLM)

| Tool | Description |
| --- | --- |
| honcho_context | User knowledge across all sessions. detail='card' for key facts, 'full' for broad representation. |
| honcho_search_conclusions | Semantic vector search over stored conclusions, ranked by relevance. |
| honcho_search_messages | Find specific messages across all sessions. Filter by sender, date, or metadata. |
| honcho_session | Current session history and summary. Supports semantic search within the session. |

Q&A (LLM-powered)

| Tool | Description |
| --- | --- |
| honcho_ask | Ask Honcho a question about the user. depth='quick' for facts, 'thorough' for synthesis. |

CLI Commands

openclaw honcho setup                           # Configure API key and migrate legacy files
openclaw honcho status                          # Connection status
openclaw honcho ask <question>                  # Query Honcho about the user
openclaw honcho search <query> [-k N] [-d D]    # Semantic search (topK, maxDistance)

Configuration

Run openclaw honcho setup to configure interactively, or set values directly in ~/.openclaw/openclaw.json under plugins.entries["openclaw-honcho"].config.
| Key | Default | Description |
| --- | --- | --- |
| apiKey | (none) | Honcho API key (required for managed; omit for self-hosted). |
| workspaceId | "openclaw" | Honcho workspace ID for memory isolation. |
| baseUrl | "https://api.honcho.dev" | API endpoint (for self-hosted instances). |
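Put together, a managed-Honcho entry in ~/.openclaw/openclaw.json looks like this (the apiKey value is a placeholder):

```json
{
  "plugins": {
    "entries": {
      "openclaw-honcho": {
        "config": {
          "apiKey": "YOUR_HONCHO_API_KEY",
          "workspaceId": "openclaw",
          "baseUrl": "https://api.honcho.dev"
        }
      }
    }
  }
}
```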

Self-Hosted Honcho

Point the plugin to your local instance and follow the self-hosting guide to get started:
openclaw honcho setup
# Enter blank API key, set Base URL to http://localhost:8000
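Equivalently, the self-hosted setup can be written directly into ~/.openclaw/openclaw.json, with apiKey omitted as noted in the configuration table:

```json
{
  "plugins": {
    "entries": {
      "openclaw-honcho": {
        "config": {
          "workspaceId": "openclaw",
          "baseUrl": "http://localhost:8000"
        }
      }
    }
  }
}
```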

Local File Search (QMD Integration)

The plugin automatically exposes OpenClaw’s memory_search and memory_get tools when a memory backend is configured, so Honcho memory and local file search can be used together.

Setup

  1. Install QMD on your server
  2. Configure OpenClaw to use QMD as the memory backend in ~/.openclaw/openclaw.json:
{
  "memory": {
    "backend": "qmd"
  }
}
OpenClaw manages QMD collections automatically from your workspace memory files and any extra paths in memory.qmd.paths. See the QMD Memory Engine docs for full setup.
  3. Restart the gateway:
openclaw gateway restart
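To index additional directories, the text above mentions memory.qmd.paths. A config sketch, with the exact shape assumed (see the QMD Memory Engine docs for the authoritative schema) and placeholder paths:

```json
{
  "memory": {
    "backend": "qmd",
    "qmd": {
      "paths": ["~/notes", "~/projects/docs"]
    }
  }
}
```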

Available Tools

When QMD is configured, you get both Honcho and local file tools:
| Tool | Source | Description |
| --- | --- | --- |
| honcho_* | Honcho | Cross-session memory, user modeling, dialectic reasoning |
| memory_search | QMD | Search local markdown files |
| memory_get | QMD | Retrieve file content |

Next Steps

GitHub Repository

Source code, issues, and README.

OpenClaw Memory Docs

Memory backends, search, and configuration in the OpenClaw docs.

Honcho Architecture

Learn about peers, sessions, and dialectic reasoning.