- Environment variables (always take precedence)
- `.env` file (for local development)
- `config.toml` file (base configuration)
- Default values
Recommended Configuration Approaches
Option 1: Environment Variables Only (Production)
- Use environment variables for all configuration
- No config files needed
- Ideal for containerized deployments (Docker, Kubernetes)
- Secrets managed by your deployment platform
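For instance, a production deployment might supply everything through the environment. The variable names below follow the `{SECTION}_{KEY}` pattern documented under "Using Environment Variables"; all values are placeholders:

```shell
# Illustrative production environment (values are placeholders).
export DB_CONNECTION_URI="postgresql+psycopg://honcho:secret@db:5432/honcho"
export AUTH_USE_AUTH=true
export AUTH_JWT_SECRET="use-scripts/generate_jwt_secret.py-to-create-this"
export LOG_LEVEL=INFO
```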
Option 2: config.toml (Development/Simple Deployments)
- Use config.toml for base configuration
- Override sensitive values with environment variables
- Good for development and simple deployments
Option 3: Hybrid Approach
- Use config.toml for non-sensitive base settings
- Use .env file for sensitive values (API keys, secrets)
- Good for development teams
Option 4: .env Only (Local Development)
- Use .env file for all configuration
- Simple for local development
- Never commit .env files to version control
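A minimal `.env` for local development might look like this (a sketch; values are illustrative):

```shell
# .env — local development only; never commit this file.
DB_CONNECTION_URI=postgresql+psycopg://localhost:5432/honcho
LOG_LEVEL=DEBUG
AUTH_USE_AUTH=false
```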
Configuration Methods
Using config.toml
Copy the example configuration file to get started. The file is organized into the following sections:

- `[app]` - Application-level settings (log level, session limits, embedding settings, Langfuse integration, local metrics collection)
- `[db]` - Database connection and pool settings (connection URI, pool size, timeouts, connection recycling)
- `[auth]` - Authentication configuration (enable/disable auth, JWT secret)
- `[cache]` - Redis cache configuration (enable/disable caching, Redis URL, TTL settings, lock configuration for cache stampede prevention)
- `[llm]` - LLM provider API keys (Anthropic, OpenAI, Gemini, Groq, OpenAI-compatible endpoints) and general LLM settings
- `[dialectic]` - Dialectic API configuration (provider, model, query generation settings, semantic search parameters, context window size)
- `[deriver]` - Background worker settings (worker count, polling intervals, queue management) and theory of mind configuration (model, tokens, observation limits)
- `[peer_card]` - Peer card generation settings (provider, model, token limits)
- `[summary]` - Session summarization settings (frequency thresholds, provider, model, token limits for short and long summaries)
- `[dream]` - Dream processing configuration (enable/disable, thresholds, idle timeouts, dream types, LLM settings)
- `[webhook]` - Webhook configuration (webhook secret, workspace limits)
- `[metrics]` - Metrics collection settings (enable/disable metrics, namespace)
- `[sentry]` - Error tracking and monitoring settings (enable/disable, DSN, environment, sample rates)
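A minimal `config.toml` sketch using a few of these sections (the `[db]` and `[app]` key names follow the environment-variable mapping shown later; values are illustrative):

```toml
[app]
LOG_LEVEL = "INFO"

[db]
CONNECTION_URI = "postgresql+psycopg://honcho:secret@localhost:5432/honcho"
POOL_SIZE = 10

[auth]
USE_AUTH = false
```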
Using Environment Variables
All configuration values can be overridden using environment variables. The environment variable names follow this pattern:

- `{SECTION}_{KEY}` for nested settings
- Just `{KEY}` for app-level settings
For example:

- `DB_CONNECTION_URI` → `[db].CONNECTION_URI`
- `DB_POOL_SIZE` → `[db].POOL_SIZE`
- `AUTH_JWT_SECRET` → `[auth].JWT_SECRET`
- `DIALECTIC_MODEL` → `[dialectic].MODEL`
- `LOG_LEVEL` (no section) → `[app].LOG_LEVEL`
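As a sketch, overriding two of those settings from the shell:

```shell
# Overrides [db].POOL_SIZE; the DB_ prefix selects the [db] section.
export DB_POOL_SIZE=20
# App-level setting: no section prefix, maps to [app].LOG_LEVEL.
export LOG_LEVEL=DEBUG
```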
Configuration Priority
When a configuration value is set in multiple places, Honcho uses this priority:

1. Environment variables - always take precedence
2. `.env` file - loaded for local development
3. `config.toml` - base configuration
4. Default values - built-in defaults

This lets you:

- Use `config.toml` for base configuration
- Override specific values with environment variables in production
- Use `.env` files for local development without modifying `config.toml`
Example
If you have this in `config.toml`:
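A sketch of such a value (illustrative; `POOL_SIZE` is taken from the environment-variable mapping above):

```toml
[db]
POOL_SIZE = 10
```

Setting `DB_POOL_SIZE=20` in the environment means Honcho uses a pool size of 20, because environment variables take precedence over `config.toml`.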
Core Configuration
Application Settings
Application-level settings control core behavior of the Honcho server, including logging, session limits, message handling, and optional integrations.

Basic Application Configuration:

Database Configuration
Required Database Settings:

Authentication Configuration
JWT Authentication:

Cache Configuration
Honcho supports Redis caching to improve performance by caching frequently accessed data such as peers, sessions, and working representations. Caching also includes lock mechanisms to prevent cache stampede scenarios.

Redis Cache Settings:

Caching is most useful for:

- High-traffic production environments
- Applications with many repeated reads of the same data
- When you need to reduce database load
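A `[cache]` block might look like the sketch below. The key names are assumptions based on the settings described above (enable flag, Redis URL, TTL); check the example configuration file for the exact names:

```toml
# Illustrative [cache] block; key names are assumptions.
[cache]
ENABLED = true
URL = "redis://localhost:6379/0"
DEFAULT_TTL_SECONDS = 300
```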
LLM Provider Configuration
Honcho supports multiple LLM providers for different tasks. API keys are configured in the `[llm]` section, while specific features use their own configuration sections.
API Keys
All provider API keys use the `LLM_` prefix:
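For example (the key names here are assumptions following the `LLM_` prefix convention; check the example configuration file for the exact names, and use placeholders only):

```shell
# Hypothetical key names using the LLM_ prefix; values are placeholders.
export LLM_ANTHROPIC_API_KEY="sk-ant-placeholder"
export LLM_OPENAI_API_KEY="sk-placeholder"
```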
General LLM Settings
Feature-Specific Model Configuration
Different features can use different providers and models.

Dialectic API: The Dialectic API provides theory-of-mind-informed responses by integrating long-term facts with current context.

Default Provider Usage
By default, Honcho uses:

- Anthropic (Claude) for dialectic API responses
- Groq for query generation (fast, cost-effective)
- Google (Gemini) for theory of mind derivation
- OpenAI (GPT) for peer cards and summarization
- OpenAI for embeddings (if `EMBED_MESSAGES=true`)
Additional Features Configuration
Dream Processing
Dream processing consolidates and refines peer representations during idle periods, similar to how human memory consolidation works during sleep.

Dream Settings:

Webhook Configuration
Webhooks allow you to receive real-time notifications when events occur in Honcho (e.g., new messages, session updates).

Webhook Settings:

Metrics Collection
Enable metrics collection to monitor Honcho performance and usage.

Metrics Settings:

Monitoring Configuration
Sentry Error Tracking
Sentry Settings:

Environment-Specific Examples
Development Configuration
`config.toml` for development:

Production Configuration
`config.toml` for production:

Migration Management
Running Database Migrations:

Troubleshooting
Common Configuration Issues:

- Database Connection Errors
  - Ensure `DB_CONNECTION_URI` uses the `postgresql+psycopg://` prefix
  - Verify the database is running and accessible
  - Check that the pgvector extension is installed
- Authentication Issues
  - Set `AUTH_USE_AUTH=true` for production
  - Generate and set `AUTH_JWT_SECRET` if authentication is enabled
  - Use `python scripts/generate_jwt_secret.py` to create a secure secret
- LLM Provider Issues
  - Verify API keys are set correctly
  - Check that model names match provider specifications
  - Ensure the provider is enabled in configuration
- Deriver Issues
  - Increase `DERIVER_WORKERS` for better performance
  - Check `DERIVER_STALE_SESSION_TIMEOUT_MINUTES` for session cleanup
  - Monitor background processing logs
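The pgvector check above can be sketched as follows. Note that `psql` expects a plain `postgresql://` scheme, so the `+psycopg` driver suffix Honcho uses in `DB_CONNECTION_URI` must be stripped first (the URI value below is a placeholder):

```shell
# Honcho's DB_CONNECTION_URI uses the postgresql+psycopg:// scheme;
# psql wants plain postgresql://, so drop the driver suffix first.
DB_CONNECTION_URI="postgresql+psycopg://honcho:secret@localhost:5432/honcho"
PSQL_URI="$(printf '%s' "$DB_CONNECTION_URI" | sed 's/+psycopg//')"
# pgvector's extension name is "vector"; installing it needs superuser rights.
echo "Check with: psql '$PSQL_URI' -c 'CREATE EXTENSION IF NOT EXISTS vector;'"
```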