The get_context() method retrieves formatted conversation context from sessions, making it easy to integrate with LLM providers such as OpenAI and Anthropic. This guide covers everything you need to know about working with session context.
By default, the context includes a blend of summary and messages that covers the entire history of the session. Summaries are generated automatically at intervals, and the number of recent messages included depends on the context's token budget. You can specify any token limit, and you can disable summaries to fill that limit entirely with recent messages. To include representation data, you must specify a target peer.
Basic Usage
The get_context() method is available on all Session objects and returns a SessionContext that contains the formatted conversation history.
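A minimal sketch, assuming the Python SDK's client pattern; the client setup and session ID are placeholders:

```python
from honcho import Honcho

# Illustrative client and session setup
honcho = Honcho()
session = honcho.session("my-session")

# Retrieve the formatted conversation context
context = session.get_context()
```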
Context Parameters
The get_context() method accepts several optional parameters to customize the retrieved context:
Token Limits
Control the size of the context by setting a maximum token count:
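For example, building on the setup above:

```python
# Cap the context at roughly 2,000 tokens
context = session.get_context(tokens=2000)
```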
Summary Mode

Enable summary mode (on by default) to get a condensed version of the conversation:
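A sketch contrasting the two modes:

```python
# Summaries are on by default; blend a summary with recent messages
context = session.get_context(summary=True, tokens=2000)

# Disable summaries to fill the token budget with recent messages only
context = session.get_context(summary=False, tokens=2000)
```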
Peer Representation in Context

You can include a peer’s representation and peer card in the context by specifying peer_target. This is useful for providing the LLM with knowledge about a specific peer.
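For example, with illustrative peer IDs:

```python
# Include Honcho's representation of the peer "alice" in the context
context = session.get_context(peer_target="alice")

# Optionally view that representation from another peer's perspective
context = session.get_context(peer_target="alice", peer_perspective="bob")
```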
Semantic Search with Last Message
Use last_user_message to fetch semantically relevant conclusions based on the most recent message (requires peer_target):
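A sketch using the search tuning parameters from the reference table below; the message text and peer ID are placeholders:

```python
# Fetch observations semantically related to the latest user message
context = session.get_context(
    peer_target="alice",
    last_user_message="What did I say about my travel plans?",
    search_top_k=5,           # number of search results to include (1-100)
    search_max_distance=0.5,  # maximum semantic distance (0.0-1.0)
)
```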
Session-Scoped Representations
Use limit_to_session to include only observations from the current session:
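For example:

```python
# Restrict the peer representation to observations from this session
context = session.get_context(
    peer_target="alice",
    limit_to_session=True,
    max_observations=25,  # cap the number of observations (1-100)
)
```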
All Parameters Reference
| Parameter | Type | Description |
|---|---|---|
| summary | bool | Include summary in context (default: true) |
| tokens | int | Maximum tokens to include |
| peer_target | str | Peer ID to include representation for |
| peer_perspective | str | Peer ID for perspective (requires peer_target) |
| last_user_message | str | Message for semantic search (requires peer_target) |
| limit_to_session | bool | Limit to session observations only |
| search_top_k | int | Semantic search results to include (1-100) |
| search_max_distance | float | Max semantic distance (0.0-1.0) |
| include_most_derived | bool | Include most recently derived observations |
| max_observations | int | Maximum observations to include (1-100) |
Converting to LLM Formats
The SessionContext object provides methods to convert the context into formats compatible with popular LLM APIs. When converting to OpenAI format, you must specify the assistant peer so that messages are mapped to the correct chat roles.
OpenAI Format
Convert context to OpenAI’s chat completion format:
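A sketch, assuming the conversion method is named to_openai and accepts the assistant peer; the peer ID is illustrative:

```python
# The assistant peer tells the converter which messages should carry
# the "assistant" role in the resulting message list
assistant = honcho.peer("assistant")
openai_messages = context.to_openai(assistant=assistant)
```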
Anthropic Format

Convert context to Anthropic’s Claude format:
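And similarly, assuming a to_anthropic counterpart:

```python
# Same idea for Anthropic's Messages API format
anthropic_messages = context.to_anthropic(assistant=assistant)
```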
Complete LLM Integration Examples

Using with OpenAI
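An end-to-end sketch; the model name, peer IDs, and helpers like peer.message() and session.add_messages() are assumptions based on the patterns above:

```python
from honcho import Honcho
from openai import OpenAI

honcho = Honcho()
openai_client = OpenAI()

session = honcho.session("chat-1")
user = honcho.peer("alice")
assistant = honcho.peer("assistant")

# Record the user's message in the session
session.add_messages([user.message("Can you recommend a good sci-fi book?")])

# Retrieve context sized for the model and convert it to OpenAI's format
context = session.get_context(tokens=3000)
messages = context.to_openai(assistant=assistant)

response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
)
reply = response.choices[0].message.content

# Store the assistant's reply so future context includes it
session.add_messages([assistant.message(reply)])
```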
Multi-Turn Conversation Loop
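A sketch of a reusable turn function, built on the same assumed setup and helpers:

```python
def chat_turn(user_input: str) -> str:
    # Record the incoming user message
    session.add_messages([user.message(user_input)])

    # Rebuild context each turn so summaries and new messages are picked up
    context = session.get_context(tokens=3000)
    messages = context.to_openai(assistant=assistant)

    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
    )
    reply = response.choices[0].message.content

    # Persist the reply so subsequent turns see it
    session.add_messages([assistant.message(reply)])
    return reply

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    print(f"Assistant: {chat_turn(user_input)}")
```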
Advanced Context Usage
Context with Summaries for Long Conversations
For very long conversations, use summaries to maintain context while controlling token usage:
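For example:

```python
# A tight token budget with summaries enabled keeps older history
# available in condensed form while recent messages stay verbatim
context = session.get_context(summary=True, tokens=2000)
messages = context.to_openai(assistant=assistant)
```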
Context for Different Assistant Types

You can get context formatted for different types of assistants in the same session:
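A sketch with two illustrative assistant peers sharing one session:

```python
# Two assistant peers in the same session; peer IDs are illustrative
tutor = honcho.peer("tutor-bot")
reviewer = honcho.peer("reviewer-bot")

context = session.get_context(tokens=2000)
tutor_messages = context.to_openai(assistant=tutor)
reviewer_messages = context.to_openai(assistant=reviewer)
```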
Best Practices

1. Token Management
Always set appropriate token limits to control costs and ensure the context fits within your model’s limits:
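One simple budgeting pattern; the window and reply sizes here are illustrative:

```python
# Reserve headroom for the model's reply when budgeting context tokens
MODEL_CONTEXT_WINDOW = 128_000  # e.g., gpt-4o
RESPONSE_BUDGET = 4_000

context = session.get_context(tokens=MODEL_CONTEXT_WINDOW - RESPONSE_BUDGET)
```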
2. Context Caching

For applications with frequent context retrieval, consider caching context when appropriate:
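A sketch of a small TTL cache, assuming sessions expose an id attribute to key on:

```python
import time

_context_cache: dict[str, tuple[float, object]] = {}
CACHE_TTL_SECONDS = 30

def get_cached_context(session, tokens: int = 3000):
    """Reuse a recently fetched context instead of refetching every request."""
    now = time.time()
    entry = _context_cache.get(session.id)
    if entry is not None and now - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]
    context = session.get_context(tokens=tokens)
    _context_cache[session.id] = (now, context)
    return context
```

Keep the TTL short, or invalidate the cache when you write messages, since a cached context will not reflect messages added after it was fetched.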
3. Error Handling

Always handle potential errors when working with context:
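A sketch; the SDK's specific exception types aren't covered here, so this catches broadly:

```python
try:
    context = session.get_context(tokens=3000)
    messages = context.to_openai(assistant=assistant)
except Exception as exc:  # narrow to the SDK's exception types if available
    # Fall back gracefully rather than failing the whole request
    print(f"Failed to retrieve context: {exc}")
    messages = []
```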
Conclusion

The get_context() method is essential for integrating Honcho sessions with LLMs. To use it effectively, you should know how to:
- Retrieve context with appropriate parameters
- Convert context to LLM-specific formats
- Manage token limits and summaries
- Handle multi-turn conversations