If you’re using a coding agent (Claude Code, Cursor, etc.), the
`/honcho-integration` skill walks you through these decisions interactively. It explores your codebase, interviews you about peers and sessions, and generates the integration code. The patterns below are the same ones the skill uses.

## Quick Reference
| Decision | Recommendation |
|---|---|
| How many workspaces? | One per application. Separate per-agent if you need hard data isolation. |
| Who should be a peer? | Any entity you want Honcho to reason about — users, agents, NPCs, students, customers. |
| How should I scope sessions? | Flexible — per-conversation, per-channel, per-scene, etc. See Session Design below. |
| Should I set `observe_me: false`? | Yes, for any peer you don’t need Honcho to build a representation of — typically assistants or bots with deterministic behavior. |
| Do I need `observe_others`? | Only when different peers need distinct views of the same participant (e.g., games, multi-agent). Most apps can leave it at the default (`false`). |
## Workspace Design
Workspaces are the top-level container. Everything inside a workspace (peers, sessions, messages, and all reasoning) is fully isolated from other workspaces. One workspace per application is the most common pattern. Use separate workspaces when you need hard isolation:

| Pattern | When to use |
|---|---|
| Single workspace | Most applications. One product, one environment. |
| Per-tenant | Multi-tenant SaaS where each customer’s data must be completely isolated. |
If you are using the SDK, it will create a workspace called `default` if no name is specified for `workspace_id`.

## Peer Design
A peer is any entity that participates in a session. Observation settings control which ones Honcho reasons about.

What makes a good peer?

- It participates in sessions (a user, an agent, a character, an NPC)
- It persists across sessions
- It changes over time (preferences shift, knowledge grows), or it produces messages you want Honcho to see
Set `observe_me: false` on peers that behave deterministically.
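A sketch of what this division of observation flags looks like in practice (the dict shape here is illustrative; in the SDK these flags are set when configuring peers on a session):

```python
# Sketch: per-peer observation flags for a typical assistant app.
# Only the user is worth modeling; the deterministic bot is not.

peer_configs = {
    "user-alice":  {"observe_me": True,  "observe_others": False},  # build a representation of the user
    "support-bot": {"observe_me": False, "observe_others": False},  # deterministic bot: skip modeling it
}

# Peers with observe_me=True are the ones Honcho builds representations of.
observed = [peer for peer, cfg in peer_configs.items() if cfg["observe_me"]]
assert observed == ["user-alice"]
```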
## Session Design
Sessions define the temporal boundaries of an interaction. How you scope sessions directly affects how summaries are generated and how context is retrieved.

Common session patterns:

| Pattern | Session scoped to | Example |
|---|---|---|
| Per-conversation | Each new chat thread | ChatGPT-style UI where each thread is a session |
| Per-channel | A persistent channel or room | Discord channel, Slack thread |
| Per-interaction | A bounded task or encounter | A support ticket, a game encounter |
| Per-import | A batch of external data | Importing emails or documents for a single peer |
- New session when the context resets (new conversation, new day, new topic)
- Reuse session when context should accumulate (ongoing channel, persistent thread)
## Application Patterns
### AI Companions
An assistant that remembers the user across sessions and platforms. The Honcho plugin for OpenClaw is a production example — one assistant with memory across WhatsApp, Telegram, Discord, and Slack.

- Session key = thread + platform — `general-discord` and `general-telegram` are separate sessions but share a single owner representation, so Honcho learns from every channel
- Dynamic agent peers — each agent gets its own peer (`agent-{id}`), resolved via a workspace-level map. Renaming an agent recovers the peer by metadata lookup
- Subagent hierarchy — when a primary agent spawns a subagent, the parent joins the child’s session as a silent observer (`observe_me: false`, `observe_others: true`), giving Honcho visibility into the full agent tree
- Asymmetric observation — both owner and agent are observed, but with different scopes: the owner has `observe_others: false` (the default view), while the agent has `observe_others: true` so it can build its own representation of the owner. Subagents get lighter context (peer card only, no session summary)
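The "session key = thread + platform" pattern can be sketched as follows (the exact formatting is our assumption; the plugin's scheme may differ):

```python
# Sketch: one session per (channel, platform) pair. Sessions are separate,
# but the same owner peer participates in all of them, so Honcho merges
# what it learns across platforms into one representation.

def companion_session_key(channel: str, platform: str) -> str:
    return f"{channel}-{platform}"

assert companion_session_key("general", "discord") == "general-discord"
assert companion_session_key("general", "telegram") == "general-telegram"
```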
### Coding Agents
Coding agents survive terminal restarts, editor switches, and project hops. The Honcho plugin for Claude Code is a production example of this pattern.

- One workspace per tool — Claude Code and Cursor each get their own workspace, with optional cross-linking for read access
- Asymmetric peers — the developer is observed (memory formation); the agent is not observed but still stores messages, so Honcho sees both sides
- Session-per-directory by default — each project accumulates its own memory. Prefix with the peer name (`user-honcho-repo`) so multiple developers on the same workspace don’t collide. Alternative strategies: `git-branch` (session switches on branch change) or `chat-instance` (clean slate each time)
- Filter what you store — user messages are stored in real time; agent messages are filtered to skip trivial tool output and keep substantive explanations
- Import external data with single-peer sessions to ingest READMEs, architecture docs, or commit history into a developer’s representation
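A sketch of the "filter what you store" idea: pass user messages through, keep substantive agent messages, drop trivial tool chatter. The heuristics below are illustrative, not the plugin's actual rules:

```python
# Sketch: decide whether an agent message is worth storing before calling
# the SDK. User messages always go in; short or tool-mechanical agent
# messages are skipped so memory stays signal-dense.

def should_store(role: str, content: str) -> bool:
    if role == "user":
        return True                       # user messages go in in real time
    if content.startswith(("Running:", "$ ", "Tool output:")):
        return False                      # skip raw tool invocations/output
    return len(content.split()) > 10      # keep substantive explanations only

assert should_store("user", "fix the login bug")
assert not should_store("agent", "Running: pytest")
assert should_store(
    "agent",
    "The bug is in the token refresh path: it never retries, so sessions expire early.",
)
```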
### Games
Games introduce multi-peer scenarios where information asymmetry matters. An NPC should only know what it has witnessed, not the full game state.

- Every character (player, NPC) is a peer
- `observe_others: true` lets NPCs build their own representations of the player based only on what they’ve witnessed
- Session-per-scene or session-per-encounter so context scopes to specific interactions
- Use `target` when querying to get a specific NPC’s perspective rather than Honcho’s omniscient view
- See Representation Scopes for the full details
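The information asymmetry falls out of session-per-scene membership: an NPC peer only participates in the scenes it witnessed, so its view of the player is built only from those messages. A minimal sketch (all names are illustrative):

```python
# Sketch: scene membership determines what each NPC can have witnessed.
# Honcho would build the barkeep's representation of the player from the
# tavern scene's messages only, never from the dungeon's.

scene_participants = {
    "scene-tavern":  {"player", "npc-barkeep"},
    "scene-dungeon": {"player", "npc-skeleton"},
}

def scenes_witnessed_by(peer: str) -> list[str]:
    return [scene for scene, members in scene_participants.items() if peer in members]

assert scenes_witnessed_by("npc-barkeep") == ["scene-tavern"]
assert scenes_witnessed_by("player") == ["scene-tavern", "scene-dungeon"]
```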
## Common Mistakes
- Leaving `observe_me` on for assistants — wastes reasoning compute on a peer you control. Deterministic behavior doesn’t need to be modeled.
- Not storing messages — Honcho reasons about messages asynchronously. If you don’t call `add_messages()`, there’s nothing to reason about — no messages means no memory. See Storing Data for details.
- Creating a new workspace per user — use peers within a single workspace instead. Workspaces are for isolation between applications, not between users.
- Too many tiny sessions — summaries and `session.context()` are scoped to a single session. If you split a continuous conversation across many sessions, context is fragmented and each session is too short to summarize. Reuse a session when context should flow continuously.
- Blocking on processing — messages are processed asynchronously in the background. Don’t poll or wait for reasoning to complete before continuing your application flow.
## Next Steps

- Get Context — retrieve formatted context from sessions for your LLM
- Chat Endpoint — query Honcho about your peers with natural language
- Reasoning Configuration — fine-tune what gets reasoned about and how
- Representation Scopes — directional representations for multi-peer scenarios