Honcho by default runs ambient inference on top of the message objects you store. Those messages serve as the ground truth from which facts about the user are derived and stored. The Dialectic Endpoint is the natural language interface through which insights are synthesized from those facts. We believe intellectual respect for LLMs is paramount in building effective AI agents and apps. It follows that the LLM should know better than any human what would aid it in its generation task. Thus, the Dialectic endpoint exists for flexible agent-to-agent communication.

Automatic Fact Derivation

On every message written to a session, an automatic callback runs that reasons about the conversation and stores facts in a collection named honcho. This is a reserved collection specifically for the backend Honcho agent to interact with. A sketch of writing messages for the deriver to use appears after the prerequisites below.

Dialectic Endpoint

The Dialectic endpoint lets your agent talk to our agent, which automatically retrieves and synthesizes facts from the collection. You can use the response as part of your agent's reasoning process: add it to your next prompt to inject critical context about the user (an example follows the static call below).

This chat interface is exposed via the peer.chat() method. It accepts a string or a list of strings. The examples below show how this works.

Prerequisites

from honcho import Honcho

# use the default workspace
honcho = Honcho()

# get/create a peer
peer = honcho.peer("demo-user")

# get/create a session
session = honcho.session("demo-session")

# (assuming some messages have been written to Honcho for the deriver to use)
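
For reference, here is one way those messages might be written, continuing from the setup above. This is a minimal sketch that assumes the peer.message() and session.add_messages() helpers from the current SDK:

# create a second peer to represent the assistant
assistant = honcho.peer("demo-assistant")

# write a short exchange for the deriver to reason over
session.add_messages([
    peer.message("I like to finish tasks in short, focused sprints."),
    assistant.message("Got it! I'll break the work into sprints."),
])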

Static Dialectic Call

query = "What is the user's favorite way of completing the task?"
answer = peer.chat(query)
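
As mentioned above, the response is designed to drop straight into your agent's next prompt. Here is a minimal sketch of that pattern, assuming an OpenAI-style client purely for illustration (the model name and prompts are placeholders; any LLM call works):

from openai import OpenAI

client = OpenAI()

# inject the Dialectic answer as system context for the next turn
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Context about the user: {answer}"},
        {"role": "user", "content": "Help me plan the rest of this task."},
    ],
)
print(completion.choices[0].message.content)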

Streaming Dialectic Call

query = "What do we know about the user?"
response_stream = peer.chat(query, stream=True)

for line in response_stream.iter_text():
    print(line)
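
If you also need the complete answer after rendering it incrementally (for example, to inject into your next prompt as shown above), accumulate the chunks as they arrive. A small variant of the loop above:

chunks = []
for chunk in peer.chat("What do we know about the user?", stream=True).iter_text():
    print(chunk, end="", flush=True)  # render incrementally
    chunks.append(chunk)

full_answer = "".join(chunks)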

We’ve designed the Dialectic endpoint to be infinitely flexible. We’ve written an incomplete list of ideas on how to use it on our blog.