The full code is available on GitHub with examples in both Python and TypeScript.
What We’re Building
We’ll create a conversational agent that remembers and reasons over past exchanges with the user. Here’s how the pieces fit together:
- LangGraph orchestrates the conversation flow
- Honcho stores messages and retrieves relevant context
- Your LLM generates responses using Honcho’s formatted context
This tutorial demonstrates a simple linear conversation flow to show
how Honcho integrates with LangGraph. For production applications,
you’ll likely want to add LangGraph features like conditional routing,
tool calling, and multi-agent orchestration.
Setup
Install the required packages, then create a `.env` file containing your API keys.

This tutorial uses the Honcho demo server at https://demo.honcho.dev, which runs a small instance of Honcho on the latest version. For production, get your Honcho API key at app.honcho.dev. For local development, use environment="local".

Initialize Clients
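A minimal setup sketch, assuming the `honcho`, `openai`, and `python-dotenv` packages and a `.env` file defining `HONCHO_API_KEY` and `OPENAI_API_KEY` (the environment names and `Honcho(environment="demo")` argument are illustrative assumptions, not a definitive reference):

```python
# Hypothetical client setup. Imports are kept inside the function so this
# sketch stays self-contained even where the packages are not installed.

def init_clients():
    from dotenv import load_dotenv
    from honcho import Honcho
    from openai import OpenAI

    load_dotenv()  # loads HONCHO_API_KEY / OPENAI_API_KEY from .env

    # "demo" is an assumption matching the demo server mentioned above;
    # the tutorial notes environment="local" for local development.
    honcho = Honcho(environment="demo")
    llm = OpenAI()  # reads OPENAI_API_KEY from the environment
    return honcho, llm
```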
Define LangGraph State
Define your state schema to pass data through the graph. The state stores Honcho objects directly, along with the current user message and assistant response.

Before proceeding, it’s important to understand Honcho’s core concepts (Peers and Sessions). Review the Honcho Architecture to familiarize yourself with these primitives.

Build the LangGraph
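As a sketch, the state from the previous step can be a TypedDict holding the Honcho objects plus the message pair, and the graph itself is a single-node linear flow (the field names and the `chatbot_node` parameter are illustrative; the wiring assumes langgraph's StateGraph API):

```python
from typing import Any, TypedDict

class State(TypedDict):
    # Honcho objects stored directly in graph state (typed loosely here
    # so the sketch has no hard dependency on the honcho package)
    session: Any
    user_peer: Any
    assistant_peer: Any
    user_message: str
    assistant_response: str

def build_graph(chatbot_node):
    # Assumes the langgraph package; a single-node linear flow as in
    # this tutorial. Production apps would add routing, tools, etc.
    from langgraph.graph import END, START, StateGraph

    graph = StateGraph(State)
    graph.add_node("chatbot", chatbot_node)
    graph.add_edge(START, "chatbot")
    graph.add_edge("chatbot", END)
    return graph.compile()
```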
Define your chatbot logic, using Honcho to retrieve conversation context. This function demonstrates how Honcho can store messages, retrieve context, and generate responses.

Understanding get_context()
The get_context() method retrieves comprehensive conversation context and formats it for your LLM. It automatically:
- Manages conversation history - Tracks all messages and determines what’s relevant
- Respects token limits - Stays within context window constraints without manual counting
- Handles long conversations - Combines recent detailed messages with summaries of older exchanges
- Provides peer understanding - Includes theory-of-mind representations and peer cards when requested
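A hedged sketch of a chatbot node built around get_context() (it assumes the Honcho Python SDK's Session.add_messages, Session.get_context, and SessionContext.to_openai helpers, plus an OpenAI-style chat client; the model name and helper names are assumptions):

```python
def make_chatbot_node(llm, model="gpt-4o"):
    # Returns a LangGraph node function closed over the LLM client.
    # The Honcho calls below follow this tutorial's description and
    # are a sketch, not a definitive API reference.
    def chatbot(state):
        session = state["session"]
        user = state["user_peer"]
        assistant = state["assistant_peer"]

        # Store the incoming user message in Honcho
        session.add_messages([user.message(state["user_message"])])

        # Retrieve formatted context (recent messages + summaries)
        messages = session.get_context().to_openai(assistant=assistant)

        response = llm.chat.completions.create(model=model, messages=messages)
        reply = response.choices[0].message.content

        # Store the assistant's reply so future turns can build on it
        session.add_messages([assistant.message(reply)])
        return {"assistant_response": reply}

    return chatbot
```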
The SessionContext object always includes fields for messages, summaries, peer representations, and peer cards. By default, only messages and summaries are populated. To populate peer-specific context, pass a peer_target parameter:
Using peer_target for Context:
- Without peer_perspective: Returns Honcho’s omniscient view of peer_target (all observations and context)
- With peer_perspective: Returns what peer_perspective knows about peer_target (perspective-based observations and context)

Call session.get_context().to_openai(assistant) and you get properly formatted context tailored for your assistant.
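The two views can be sketched as follows (the exact parameter values — whether peer IDs or Peer objects are expected — are assumptions based on the description above):

```python
def peer_views(session, assistant, alice, bob):
    # Omniscient view: everything Honcho has observed about bob
    omniscient = session.get_context(peer_target=bob.id)

    # Perspective view: only what alice knows about bob
    from_alice = session.get_context(
        peer_target=bob.id, peer_perspective=alice.id
    )

    # Both format to OpenAI-style messages tailored for the assistant
    return (
        omniscient.to_openai(assistant=assistant),
        from_alice.to_openai(assistant=assistant),
    )
```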
For more details on all available parameters, see the get_context() documentation.

Chat Loop
Now we’ll create the main conversation function. To simplify the logic, we initialize Honcho objects once per conversation and pass them through the LangGraph state. The run_conversation_turn function initializes Honcho Session and Peer objects, passes them to the LangGraph, and returns the assistant’s response. By calling it repeatedly with the same user_id and session, the chat builds context over time.
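The turn function described above might look like this sketch (the honcho.peer / honcho.session helpers, the "assistant" peer name, and the compiled `graph` passed in are assumptions carried over from earlier steps):

```python
def run_conversation_turn(graph, honcho, user_id, session_id, user_message):
    # Resolve Honcho objects for this turn; with a stable user_id and
    # session_id, Honcho accumulates context across repeated calls.
    user_peer = honcho.peer(user_id)
    assistant_peer = honcho.peer("assistant")
    session = honcho.session(session_id)

    # Run one pass through the compiled LangGraph
    result = graph.invoke({
        "session": session,
        "user_peer": user_peer,
        "assistant_peer": assistant_peer,
        "user_message": user_message,
        "assistant_response": "",
    })
    return result["assistant_response"]
```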
Production Usage: Honcho accepts any nanoid-compatible string for user_id and session_id. You can use IDs directly from your authentication system (Auth0, Firebase, Clerk, etc.) and session management without modification. This tutorial uses hardcoded values for simplicity.

Next Steps
Now that you have a working LangGraph integration with Honcho, you can:
- Create custom LangChain tools for your agent to fully utilize Honcho’s memory & context management features
- Build a multi-agent LangGraph where each agent is a Honcho Peer with its own memory