Documentation Index

Fetch the complete documentation index at: https://docs.honcho.dev/llms.txt

Use this file to discover all available pages before exploring further.

MCP Server

The fastest way to give any AI tool persistent memory is through the Honcho MCP server. It works with any client that supports the Model Context Protocol. Get started in 2 minutes:
  1. Get an API key at app.honcho.dev
  2. Add the config for your client below
  3. Restart your client
See the full MCP documentation for all available tools, advanced configuration, and setup instructions for every supported client.
```json
{
  "mcpServers": {
    "honcho": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.honcho.dev",
        "--header",
        "Authorization:${AUTH_HEADER}",
        "--header",
        "X-Honcho-User-Name:${USER_NAME}"
      ],
      "env": {
        "AUTH_HEADER": "Bearer hch-your-key-here",
        "USER_NAME": "YourName"
      }
    }
  }
}
```

CLI

Inspect and debug a running Honcho deployment from your terminal. The honcho CLI wraps the Python SDK with agent-friendly defaults — JSON output, structured errors, and commands for every primitive (workspaces, peers, sessions, messages, conclusions). Get started:
```bash
uv tool install honcho-cli
honcho init      # configure apiKey + environmentUrl
honcho doctor    # verify connectivity
```
The CLI also ships an agent skill. Install it with `npx skills add plastic-labs/honcho` and pick `honcho-cli` from the list. See the full CLI reference for all commands, flags, and environment variables.

Claude Code Plugin

Use Honcho to build with Honcho! The plugin gives Claude Code persistent memory that survives context wipes and session restarts.
```bash
/plugin marketplace add plastic-labs/claude-honcho
/plugin install honcho@honcho        # Tools for Claude to use Honcho to manage its own context
/plugin install honcho-dev@honcho    # Skills to teach Claude how to integrate Honcho
```
The marketplace also includes all the agent skills below, so you can use /honcho-dev:integrate directly after installing. See the full Claude Code integration guide for setup details.

OpenCode Plugin

The OpenCode plugin gives OpenCode sessions persistent memory that survives context wipes, session restarts, and fresh chats.
```bash
bunx @honcho-ai/opencode-honcho install
```
Then run /honcho:setup inside OpenCode. See the full OpenCode integration guide for setup details.

Agent Skills

We provide agent skills for coding assistants like Claude Code, OpenCode, Cursor, Windsurf, and others.
```bash
npx skills add plastic-labs/honcho
```

Available Skills

honcho-integration

For new integrations. This skill helps you add Honcho to an existing Python or TypeScript codebase. It provides a guided, interactive experience:
  1. Explores your codebase to understand your language, framework, and existing AI/LLM integrations
  2. Interviews you about which entities should be peers, your preferred integration pattern, and session structure
  3. Implements the integration based on your answers—installing the SDK, creating peers, configuring sessions, and wiring up the chat endpoint
  4. Verifies the setup to ensure everything is configured correctly
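The pattern the skill wires up is simple at its core: one peer per entity, one session per conversation thread, messages written on every turn. A minimal in-memory sketch of that shape (plain Python stand-ins, not the real Honcho SDK; names like `Peer`, `Session`, and `chat_endpoint` here are illustrative only):

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for the integration shape the skill produces.
# These are NOT the Honcho SDK classes; they only model the pattern:
# one peer per entity, one session per conversation thread.

@dataclass
class Peer:
    name: str

@dataclass
class Session:
    id: str
    messages: list = field(default_factory=list)

    def add_message(self, peer: Peer, content: str) -> None:
        # In a real integration, this write is what Honcho processes
        # in the background to build memory about each peer.
        self.messages.append((peer.name, content))

def chat_endpoint(session: Session, user: Peer, assistant: Peer, text: str) -> str:
    """Hypothetical chat endpoint: record the user turn, produce a
    reply, record the assistant turn."""
    session.add_message(user, text)
    reply = f"echo: {text}"  # stand-in for your LLM call enriched with Honcho context
    session.add_message(assistant, reply)
    return reply

user = Peer("alice")
bot = Peer("assistant")
session = Session("support-thread-1")
print(chat_endpoint(session, user, bot, "hello"))
print(len(session.messages))  # 2: the user turn and the assistant turn
```

In a real codebase, the skill swaps these stand-ins for SDK calls and routes the chat endpoint through your existing framework.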
Invoke with /honcho-integration in your coding agent.

honcho-cli

For inspection & debugging. Teaches your coding agent the right commands and flags for the honcho CLI — peer memory, session context, queue status, dialectic quality. Invoke implicitly when you ask your agent to inspect a Honcho deployment.

migrate-honcho-py / migrate-honcho-ts

For SDK upgrades. Migrates code from v1.6.0 to v2.0.0 (required for Honcho 3.0.0+). Use when upgrading the SDK or seeing errors about removed APIs like observations, `Representation`, `.core`, or `get_config`. Both skills handle: terminology changes (Observation → Conclusion), `Representation` class removal, method renames, and streaming API updates.
| Python | TypeScript |
| --- | --- |
| `/migrate-honcho-py` | `/migrate-honcho-ts` |
| `AsyncHoncho` → `.aio` accessor | `@honcho-ai/core` removal |
| `snake_case` | `camelCase` |
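Before running a migration skill, it can help to gauge how much v1 surface area a codebase touches. A rough, hypothetical scan for the removed APIs named above (the skill itself does the actual rewriting; this only counts occurrences):

```python
import re

# Removed/renamed v1.6.0 APIs, taken from the skill description above.
REMOVED_APIS = ["Observation", "Representation", r"\.core\b", "get_config"]

def scan_for_v1_apis(source: str) -> dict:
    """Count occurrences of removed v1 API names in a source string."""
    return {pat: len(re.findall(pat, source)) for pat in REMOVED_APIS}

sample = "rep = Representation(obs)\ncfg = client.get_config()\n"
counts = scan_for_v1_apis(sample)
print(counts["Representation"], counts["get_config"])  # 1 1
```

Run it over each file in your project to decide whether `/migrate-honcho-py` or `/migrate-honcho-ts` has work to do.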

Universal Starter Prompt

I want to start building with Honcho - an open source memory library for building stateful agents.

## Honcho Resources

**Documentation:**
- Main docs: https://docs.honcho.dev
- API Reference: https://docs.honcho.dev/v3/api-reference/introduction
- Quickstart: https://docs.honcho.dev/v3/documentation/introduction/quickstart
- Architecture: https://docs.honcho.dev/v3/documentation/core-concepts/architecture

**Code & Examples:**
- Core repo: https://github.com/plastic-labs/honcho
- Python SDK: https://github.com/plastic-labs/honcho-python
- TypeScript SDK: https://github.com/plastic-labs/honcho-node
- CLI (inspect & debug a deployment): https://github.com/plastic-labs/honcho/tree/main/honcho-cli
- Discord bot starter: https://github.com/plastic-labs/discord-python-starter
- Telegram bot example: https://github.com/plastic-labs/telegram-python-starter

**What Honcho Does:**
Honcho is an open source memory library with a managed service for building stateful agents. It enables agents to build and maintain state about any entity: users, agents, groups, ideas, and more. Because it's a continual learning system, it understands entities that change over time.

When you write messages to Honcho, they're stored and processed in the background. Custom reasoning models perform formal logical reasoning to generate conclusions about each peer. These conclusions are stored as representations that you can query to provide rich context for your agents.

**Architecture Overview:**
- Core primitives: Workspaces contain Peers (any entity that persists but changes) and Sessions (interaction threads between peers)
- Peers can observe other peers in sessions (configurable with observe_me and observe_others)
- Background reasoning processes messages to extract premises, draw conclusions, and build representations
- Representations enable continuous improvement as new messages refine existing conclusions and scaffold new ones over time
- Chat endpoint provides personalized responses based on learned context
- Supports any LLM (OpenAI, Anthropic, open source)
- Can use managed service or self-host
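The write-then-query flow in the bullets above can be sketched as a toy model: messages go in, a background pass derives conclusions per peer, and queries read the accumulated representation. All names here are illustrative, not the Honcho API:

```python
from collections import defaultdict

# Toy model of the architecture above: sessions collect messages, a
# "background reasoning" pass turns them into per-peer conclusions,
# and a representation is just the accumulated conclusions.
# Illustrative only -- not the Honcho API.

messages = []                         # queued (session, peer, content) writes
representations = defaultdict(list)   # peer -> list of conclusions

def write_message(session: str, peer: str, content: str) -> None:
    messages.append((session, peer, content))

def background_reasoning() -> None:
    """Stand-in for the reasoning pass: derive one trivial conclusion
    per queued message about the peer who sent it."""
    while messages:
        session, peer, content = messages.pop(0)
        representations[peer].append(f"{peer} mentioned: {content}")

def query_representation(peer: str) -> list:
    """Stand-in for querying learned context at chat time."""
    return representations[peer]

write_message("s1", "alice", "I prefer dark roast coffee")
write_message("s1", "alice", "I work nights")
background_reasoning()
print(len(query_representation("alice")))  # 2 conclusions about alice
```

The real system replaces the trivial conclusion with formal reasoning over premises, and refines existing conclusions as new messages arrive rather than only appending.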

Please assess the resources above and ask me relevant questions to help build a well-structured application using Honcho. Consider asking about:
- What I'm trying to build
- My technical preferences and stack
- Whether I want to use the managed service or self-host
- My experience level with the technologies involved
- Specific features I need (multi-peer sessions, perspective-taking, streaming, etc.)

Once you understand my needs, help me create a working implementation with proper memory and statefulness.