The fastest way to give any AI tool persistent memory is the Honcho MCP server. It works with any client that supports the Model Context Protocol. Get started in 2 minutes:
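As a sketch of what registration looks like, MCP clients that use a JSON config (such as Claude Desktop's `claude_desktop_config.json`) declare servers in the general shape below. The `command`, `args`, and environment variable names here are placeholders, not Honcho's actual values — substitute the ones from Honcho's MCP setup instructions:

```json
{
  "mcpServers": {
    "honcho": {
      "command": "<mcp-server-command>",
      "args": ["<args-from-honcho-docs>"],
      "env": {
        "<API-KEY-ENV-VAR>": "<your-api-key>"
      }
    }
  }
}
```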
Inspect and debug a running Honcho deployment from your terminal. The honcho CLI wraps the Python SDK with agent-friendly defaults — JSON output, structured errors, and commands for every primitive (workspaces, peers, sessions, messages, conclusions). Get started:
The CLI also ships with an agent skill. Install it with `npx skills add plastic-labs/honcho` and pick `honcho-cli` from the list. See the full CLI reference for all commands, flags, and environment variables.
Use Honcho to build with Honcho! The plugin gives Claude Code persistent memory that survives context wipes and session restarts.
```
/plugin marketplace add plastic-labs/claude-honcho
/plugin install honcho@honcho      # Tools for Claude to use Honcho to manage its own context
/plugin install honcho-dev@honcho  # Skills to teach Claude how to integrate Honcho
```
The marketplace also includes all the agent skills below, so you can run `/honcho-dev:integrate` directly after installing. See the full Claude Code integration guide for setup details.
For inspection & debugging. Teaches your coding agent the right commands and flags for the honcho CLI — peer memory, session context, queue status, dialectic quality. Invoked implicitly when you ask your agent to inspect a Honcho deployment.
For SDK upgrades. Migrates code from v1.6.0 to v2.0.0 (required for Honcho 3.0.0+). Use it when upgrading the SDK or when you see errors about removed APIs such as observations, Representation, .core, or get_config. Both skills handle terminology changes (Observation → Conclusion), removal of the Representation class, method renames, and streaming API updates.
I want to start building with Honcho, an open source memory library for building stateful agents.

## Honcho Resources

**Documentation:**
- Main docs: https://docs.honcho.dev
- API Reference: https://docs.honcho.dev/v3/api-reference/introduction
- Quickstart: https://docs.honcho.dev/v3/documentation/introduction/quickstart
- Architecture: https://docs.honcho.dev/v3/documentation/core-concepts/architecture

**Code & Examples:**
- Core repo: https://github.com/plastic-labs/honcho
- Python SDK: https://github.com/plastic-labs/honcho-python
- TypeScript SDK: https://github.com/plastic-labs/honcho-node
- CLI (inspect & debug a deployment): https://github.com/plastic-labs/honcho/tree/main/honcho-cli
- Discord bot starter: https://github.com/plastic-labs/discord-python-starter
- Telegram bot example: https://github.com/plastic-labs/telegram-python-starter

**What Honcho Does:**

Honcho is an open source memory library with a managed service for building stateful agents. It enables agents to build and maintain state about any entity: users, agents, groups, ideas, and more. Because it's a continual learning system, it understands entities that change over time.

When you write messages to Honcho, they're stored and processed in the background. Custom reasoning models perform formal logical reasoning to generate conclusions about each peer.
These conclusions are stored as representations that you can query to provide rich context for your agents.

**Architecture Overview:**
- Core primitives: Workspaces contain Peers (any entity that persists but changes) and Sessions (interaction threads between peers)
- Peers can observe other peers in sessions (configurable with observe_me and observe_others)
- Background reasoning processes messages to extract premises, draw conclusions, and build representations
- Representations enable continuous improvement as new messages refine existing conclusions and scaffold new ones over time
- Chat endpoint provides personalized responses based on learned context
- Supports any LLM (OpenAI, Anthropic, open source)
- Can use the managed service or self-host

Please assess the resources above and ask me relevant questions to help build a well-structured application using Honcho. Consider asking about:
- What I'm trying to build
- My technical preferences and stack
- Whether I want to use the managed service or self-host
- My experience level with the technologies involved
- Specific features I need (multi-peer sessions, perspective-taking, streaming, etc.)

Once you understand my needs, help me create a working implementation with proper memory and statefulness.
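The primitives described above can be pictured as a minimal in-memory model. The classes below are a hypothetical illustration of the data model only — they are not the Honcho SDK, and the background reasoning step is simulated with a hand-written conclusion:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of Honcho's core primitives: a Workspace contains
# Peers and Sessions; Sessions collect Messages exchanged between Peers.
# In real Honcho, conclusions are produced by background reasoning models,
# not written by hand as they are at the bottom of this example.

@dataclass
class Message:
    sender: str   # peer id of the author
    content: str

@dataclass
class Peer:
    id: str
    observe_me: bool = True      # may other peers build representations of this peer?
    observe_others: bool = True  # does this peer build representations of others?
    conclusions: list[str] = field(default_factory=list)

@dataclass
class Session:
    id: str
    peer_ids: set[str] = field(default_factory=set)
    messages: list[Message] = field(default_factory=list)

@dataclass
class Workspace:
    id: str
    peers: dict[str, Peer] = field(default_factory=dict)
    sessions: dict[str, Session] = field(default_factory=dict)

    def peer(self, peer_id: str) -> Peer:
        # Get-or-create, so repeated lookups return the same entity
        return self.peers.setdefault(peer_id, Peer(peer_id))

    def session(self, session_id: str) -> Session:
        return self.sessions.setdefault(session_id, Session(session_id))

ws = Workspace("demo")
alice = ws.peer("alice")
assistant = ws.peer("assistant")
chat = ws.session("chat-1")
chat.peer_ids |= {alice.id, assistant.id}
chat.messages.append(Message("alice", "I moved to Berlin last month."))
# Simulating what background reasoning might derive from that message:
alice.conclusions.append("alice currently lives in Berlin")
```

The point of the sketch is the containment relationships: peers and sessions live inside a workspace, messages live inside sessions, and conclusions accumulate per peer as new messages arrive.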