peer.chat() is the natural language interface to Honcho’s reasoning. Instead of manually retrieving conclusions, your LLM can ask questions and get synthesized answers based on all the reasoning Honcho has done about a peer. Think of it as agent-to-agent communication.
Basic Usage
The simplest way to use the chat endpoint is to ask a question and get a text response.
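A minimal sketch, assuming the client and peer setup from the Honcho Python SDK quickstart; the workspace and peer IDs are placeholders:

```python
from honcho import Honcho

# Placeholder workspace and peer IDs; swap in your own.
honcho = Honcho(workspace_id="my-app")
peer = honcho.peer("alice")

# Ask a natural-language question; Honcho answers from everything it has
# reasoned about this peer.
answer = peer.chat("What are this user's main interests?")
print(answer)
```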
Streaming Responses
For longer answers, use streaming to get incremental responses.
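A sketch of consuming the response incrementally; the `stream=True` flag is an assumption here, so check the SDK reference for the exact streaming interface:

```python
# `stream=True` is assumed; the SDK may expose streaming differently.
for chunk in peer.chat("Summarize this user's history with our product.", stream=True):
    print(chunk, end="", flush=True)
```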
Integration Patterns
Dynamic Prompt Enhancement
Let your LLM decide what it needs to know, then inject that context into the next generation.
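One way this can look; `llm_complete` is a hypothetical stand-in for whatever completion call your stack uses:

```python
def answer_with_personalization(peer, user_message: str) -> str:
    # Ask Honcho a targeted question; the query itself could also be
    # generated by your LLM at runtime.
    style = peer.chat("What tone and level of detail does this user prefer?")

    # Inject the synthesized insight into the next generation.
    system_prompt = (
        "You are a helpful assistant. Tailor your reply to this user:\n"
        f"{style}"
    )
    # llm_complete is a hypothetical helper, not part of the Honcho SDK.
    return llm_complete(system=system_prompt, user=user_message)
```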
Conditional Logic
Use chat endpoint responses to drive application logic.
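A sketch of routing on a chat response; the yes/no phrasing is just a convention that keeps the answer easy to parse:

```python
def route_support_ticket(peer) -> str:
    answer = peer.chat(
        "Has this user expressed frustration recently? Answer yes or no."
    )
    # Escalate if the synthesized answer indicates frustration.
    if answer and "yes" in answer.lower():
        return "priority_queue"
    return "standard_queue"
```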
Preference Extraction
Extract specific preferences for personalization.
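For example, you might run a fixed set of narrow queries and cache the results; the query wording here is illustrative:

```python
PREFERENCE_QUERIES = {
    "language": "What programming language does this user work in most?",
    "format": "Does this user prefer code examples or prose explanations?",
    "expertise": "Is this user a beginner, intermediate, or expert?",
}

# One chat call per preference; cache these rather than re-asking every turn.
preferences = {key: peer.chat(query) for key, query in PREFERENCE_QUERIES.items()}
```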
How Honcho Answers
When you call peer.chat(query):
- Honcho searches the peer’s peer card and representation, the conclusions drawn from reasoning over their messages
- Retrieves conclusions semantically relevant to your query
- Combines them with segments of source messages, if needed, to gather more context
- Synthesizes them into a coherent natural language response to your query
Best Practices
Ask specific questions
Instead of “Tell me about the user”, ask “What communication style does the user prefer?” You’ll get more actionable answers.
Let your LLM formulate queries
The chat endpoint shines when your LLM decides what it needs to know. This creates dynamic, context-aware personalization. If you’re building an agent, an excellent way to achieve this is to expose the Honcho chat endpoint as just another tool, as shown below.
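For example, with an OpenAI-style function-calling agent; the schema shape and tool name are illustrative, not part of the Honcho SDK:

```python
# Illustrative tool schema; adapt to your agent framework.
ask_about_user_tool = {
    "type": "function",
    "function": {
        "name": "ask_about_user",
        "description": "Ask Honcho a natural-language question about the current user.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The question to ask."},
            },
            "required": ["query"],
        },
    },
}

def ask_about_user(query: str) -> str:
    # Dispatch target for when the agent invokes the tool.
    return peer.chat(query)
```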
Use for runtime decisions
Don’t just use chat for LLM prompts; use it to drive application logic, routing, and feature flags based on user behavior.
Combine with get_context()
Use get_context() for conversation context and peer.chat() for specific insights. They complement each other.
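A rough sketch of pairing the two; `session` is assumed to be a Honcho session handle, and get_context()’s exact return shape may differ in your SDK version:

```python
# Conversation context for continuity, chat() for a targeted insight.
context = session.get_context()
insight = peer.chat("What should the assistant keep in mind about this user?")
# Feed both into your next LLM call: context as message history, insight
# folded into the system prompt.
```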
For more ideas on using the chat endpoint, see our guides.