OpenClaw is a general AI agent that can perform actions on behalf of a user. The Honcho plugin gives OpenClaw memory across every channel — WhatsApp, Telegram, Discord, Slack, and more.
Honcho can run entirely locally with OpenClaw — no external API required. Keep your data on your machine while getting full memory capabilities across all channels. See the self-hosting guide to get started.

Install the Plugin

openclaw plugins install @honcho-ai/openclaw-honcho
openclaw honcho setup
openclaw gateway --force
openclaw honcho setup prompts for your API key, writes the config, and optionally uploads any legacy memory files to Honcho.
Alternative: ClawHub Skill

The honcho-setup skill handles installation and migration interactively from a chat session:
npx clawhub install honcho-setup
openclaw plugins install @honcho-ai/openclaw-honcho
openclaw gateway restart
# Restart OpenClaw, then invoke the honcho-setup skill from a session

Migrating Legacy Memory

If you have existing workspace memory files (USER.md, MEMORY.md, IDENTITY.md, memory/, canvas/, etc.), openclaw honcho setup will detect them and offer to migrate them.
Migration is non-destructive — files are uploaded to Honcho. Originals are never deleted or moved.

Legacy files

User/owner files (content describes the user):
  • USER.md, IDENTITY.md, MEMORY.md
  • All files in memory/ and canvas/ directories
Agent/self files (content describes the agent):
  • SOUL.md, AGENTS.md, TOOLS.md, BOOTSTRAP.md

Upload to Honcho

Files are uploaded via session.uploadFile(). User/owner files go to the owner peer; agent/self files go to the openclaw peer.
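A minimal sketch of how this routing could look, assuming a hypothetical `Session` interface with an `uploadFile(peerId, path)` method — the interface, file lists, and peer IDs here are illustrative stand-ins, not the plugin's actual API:

```typescript
// Illustrative sketch only: route legacy memory files to the right Honcho peer.
// `Session` is a stand-in for the real plugin client, not its actual interface.
interface Session {
  uploadFile(peerId: string, path: string): Promise<void>;
}

// Agent/self files from the list above; everything else (USER.md, IDENTITY.md,
// MEMORY.md, memory/, canvas/) describes the user.
const AGENT_FILES = ["SOUL.md", "AGENTS.md", "TOOLS.md", "BOOTSTRAP.md"];

// Decide which peer a legacy file belongs to.
function peerFor(path: string): "owner" | "openclaw" {
  const name = path.split("/").pop() ?? path;
  return AGENT_FILES.includes(name) ? "openclaw" : "owner";
}

async function uploadLegacyFiles(session: Session, paths: string[]): Promise<void> {
  for (const path of paths) {
    // Upload only — originals are never deleted or moved.
    await session.uploadFile(peerFor(path), path);
  }
}
```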

How It Works

Once installed, the plugin runs automatically:
  • Message Observation — After every AI turn, the conversation is persisted to Honcho. Both user and agent messages are observed, allowing Honcho to build and refine its models.
  • Tool-Based Context Access — The AI can query Honcho mid-conversation using tools like honcho_recall, honcho_search, and honcho_analyze to retrieve relevant context. Context is injected during OpenClaw’s before_prompt_build phase, ensuring accurate turn boundaries.
  • Dual Peer Model — Honcho maintains separate representations: one for the user (preferences, facts, communication style) and one for the agent (personality, learned behaviors). Each OpenClaw agent gets its own Honcho peer (agent-{id}), so multi-agent workspaces maintain isolated memory.
  • Clean Persistence — Platform metadata (conversation info, sender headers, thread context, forwarded messages) is stripped before saving to Honcho, ensuring only meaningful content is persisted.
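As an illustration of the Clean Persistence step, the sketch below strips platform metadata lines before a message is saved. The bracketed `[Key: value]` header format is an assumption made for this example; the plugin's real message format may differ:

```typescript
// Illustrative only: drop platform metadata lines (sender headers, thread
// context, forwarded-message markers) so only the message body is persisted.
// The `[Key: value]` header format is assumed for this sketch.
function stripPlatformMetadata(raw: string): string {
  return raw
    .split("\n")
    .filter((line) => !/^\[(From|Thread|Forwarded|Conversation):.*\]$/.test(line.trim()))
    .join("\n")
    .trim();
}
```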

Multi-Agent Support

OpenClaw uses a multi-agent architecture where a primary agent can spawn subagents to handle specialized tasks. The Honcho plugin is fully aware of this hierarchy:
  • Automatic Subagent Detection — When OpenClaw spawns a subagent, the plugin tracks the parent→child relationship via the subagent_spawned hook. Each subagent session records its parentPeerId in metadata.
  • Parent Observer Peer — The spawning agent is added as a silent observer in the subagent’s Honcho session (observeMe: false, observeOthers: true). This gives Honcho visibility into the full agent tree — the parent can see what its subagents are doing without its own messages being attributed to the subagent session.
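The hierarchy tracking above can be sketched as follows — the type shapes and function are hypothetical illustrations of what the `subagent_spawned` hook records, not the plugin's actual types:

```typescript
// Hypothetical shapes for this sketch; the real plugin's types may differ.
interface PeerConfig {
  peerId: string;
  observeMe: boolean;     // whether this peer's own messages are modeled
  observeOthers: boolean; // whether this peer observes others' messages
}

interface SubagentSession {
  metadata: { parentPeerId: string };
  peers: PeerConfig[];
}

// On the subagent_spawned hook: record the parent link in metadata and add
// the spawning agent as a silent observer of the child's session.
function buildSubagentSession(parentId: string, childId: string): SubagentSession {
  return {
    metadata: { parentPeerId: `agent-${parentId}` },
    peers: [
      { peerId: `agent-${childId}`, observeMe: true, observeOthers: true },
      // Silent observer: sees the subagent's activity, contributes no messages.
      { peerId: `agent-${parentId}`, observeMe: false, observeOthers: true },
    ],
  };
}
```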

AI Tools

Data Retrieval (fast, no LLM)

Tool            Description
honcho_session  Conversation history and summaries from the current session.
honcho_profile  User’s peer card — key facts (name, preferences, role).
honcho_search   Semantic search over stored observations.
honcho_context  Full user representation across all sessions.

Q&A (LLM-powered)

Tool            Description
honcho_recall   Simple factual question — minimal reasoning.
honcho_analyze  Complex question requiring synthesis — medium reasoning.

CLI Commands

openclaw honcho setup                           # Configure API key and migrate legacy files
openclaw honcho status                          # Connection status
openclaw honcho ask <question>                  # Query Honcho about the user
openclaw honcho search <query> [-k N] [-d D]    # Semantic search (topK, maxDistance)

Configuration

Run openclaw honcho setup to configure interactively, or set values directly in ~/.openclaw/openclaw.json under plugins.entries["openclaw-honcho"].config.
Key          Default                   Description
apiKey       (none)                    Honcho API key (required for managed; omit for self-hosted).
workspaceId  "openclaw"                Honcho workspace ID for memory isolation.
baseUrl      "https://api.honcho.dev"  API endpoint (for self-hosted instances).
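Put together, a plugin entry in ~/.openclaw/openclaw.json might look like the following sketch (the apiKey value is a placeholder):

```json
{
  "plugins": {
    "entries": {
      "openclaw-honcho": {
        "config": {
          "apiKey": "<your-api-key>",
          "workspaceId": "openclaw",
          "baseUrl": "https://api.honcho.dev"
        }
      }
    }
  }
}
```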

Self-Hosted Honcho

Point the plugin to your local instance and follow the self-hosting guide to get started:
openclaw honcho setup
# Enter blank API key, set Base URL to http://localhost:8000
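If you prefer editing the config directly, a self-hosted entry omits apiKey and points baseUrl at the local instance — a sketch using the key names from the configuration table:

```json
{
  "plugins": {
    "entries": {
      "openclaw-honcho": {
        "config": {
          "workspaceId": "openclaw",
          "baseUrl": "http://localhost:8000"
        }
      }
    }
  }
}
```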

Local File Search (QMD Integration)

The plugin automatically exposes OpenClaw’s memory_search and memory_get tools when a memory backend is configured, so Honcho cloud memory and local file search work side by side.

Setup

  1. Install QMD on your server
  2. Configure OpenClaw in ~/.openclaw/openclaw.json:
{
  "memory": {
    "backend": "qmd",
    "qmd": {
      "limits": {
        "timeoutMs": 120000
      }
    }
  }
}
  3. Set up QMD collections and restart:
qmd collection add ~/Documents/notes --name notes
qmd update
openclaw gateway restart

Available Tools

When QMD is configured, you get both Honcho and local file tools:
Tool           Source  Description
honcho_*       Honcho  Cross-session memory, user modeling, dialectic reasoning
memory_search  QMD     Search local markdown files
memory_get     QMD     Retrieve file content

Next Steps

  • GitHub Repository: source code, issues, and README.
  • Honcho Architecture: learn about peers, sessions, and dialectic reasoning.