Overview
By the end of this guide, you’ll have:
- A local Honcho server running on your machine
- A PostgreSQL database with pgvector extension
- Basic configuration to connect your applications
- A working environment for development or testing
Prerequisites
Before you begin, ensure you have the following installed:
Required Software
- uv - Python package manager: `curl -LsSf https://astral.sh/uv/install.sh | sh` or `brew install uv`
- Git - Download from git-scm.com
- Docker (required for Docker setup, not needed for manual setup) - Download from docker.com
Database Options
You’ll need a PostgreSQL database with the pgvector extension. Choose one:
- Local PostgreSQL - Install locally or use Docker
- Supabase - Free cloud PostgreSQL with pgvector
- Railway - Simple cloud PostgreSQL hosting
- Your own PostgreSQL server
LLM Setup
Honcho uses LLMs for memory extraction, summarization, dialectic chat, and dreaming. The server will fail to start without a provider configured. You need one API key and one model. Any OpenAI-compatible endpoint works — OpenRouter, Together, Fireworks, Ollama, vLLM, or a direct vendor API. Models must support tool calling (function calling). The `.env.template` has provider and model lines ready for each feature. After copying it to `.env`, set the provider, its API key, and the model: you can replace every `your-model-here` placeholder with your chosen model in one step.
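As a sketch (the variable names below are placeholders, not Honcho's real keys — copy the exact lines from `.env.template`):

```ini
# Placeholder names — use the real variable names from .env.template
LLM_PROVIDER=openrouter              # your OpenAI-compatible provider
LLM_API_KEY=sk-or-...                # your API key
LLM_DEFAULT_MODEL=your-model-here    # replace with your chosen model
```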
For recommended model tiers per feature, using multiple providers, or direct vendor API keys, see the Configuration Guide.
Community quick-start: elkimek/honcho-self-hosted provides a one-command installer with pre-configured model tiers, interactive provider setup, and Hermes Agent integration.
Docker Setup (Recommended)
Docker Compose handles the database, Redis, and Honcho server. The compose file builds the image from source (there is no pre-built image on Docker Hub). This requires Docker with BuildKit enabled — see Troubleshooting if the build fails. The compose file is production-oriented by default (ports bound to 127.0.0.1, restart policies, caching enabled). For development, uncomment the source mounts and monitoring services inside the file.
1. Clone the Repository
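A sketch of the clone step (the repository path is assumed to be the upstream Honcho repo; adjust if you use a fork):

```shell
git clone https://github.com/plastic-labs/honcho.git
cd honcho
```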
2. Set Up Environment Variables
Copy the example environment file to `.env` and configure your LLM provider — see LLM Setup above. The database connection is set in the compose file. Auth is disabled by default (AUTH_USE_AUTH=false).
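For example (the template filename comes from the LLM Setup section above):

```shell
cp .env.template .env
# then edit .env with your provider, API key, and model
```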
3. Start the Services
Start the stack with Docker Compose. Service ports are bound to 127.0.0.1, and Redis caching is enabled by default.
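A sketch of the start-up commands:

```shell
docker compose up -d          # build and start Postgres, Redis, and the API
docker compose logs -f api    # follow the API logs while it boots
```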
For development, uncomment the source mount and monitoring sections inside docker-compose.yml to enable live reload, Prometheus, and Grafana.
4. Verify
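For example:

```shell
curl http://localhost:8000/health
```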
Migrations run automatically on startup.
Manual Setup
For more control over your environment, you can set up everything manually.
1. Clone and Install Dependencies
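A sketch, assuming the upstream repository path:

```shell
git clone https://github.com/plastic-labs/honcho.git
cd honcho
uv sync   # install Python dependencies with uv
```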
2. Set Up PostgreSQL
Option A: Local PostgreSQL Installation
Install PostgreSQL and pgvector on your system:
macOS (using Homebrew):
Option B: Docker PostgreSQL
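Hedged sketches for both options (version numbers and database credentials are illustrative; pgvector publishes the `pgvector/pgvector` Docker image):

```shell
# Option A: macOS with Homebrew
brew install postgresql@17 pgvector
brew services start postgresql@17

# Option B: Docker, using the pgvector image
docker run -d --name honcho-postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=honcho \
  -p 5432:5432 \
  pgvector/pgvector:pg17
```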
3. Enable Extensions
Connect to PostgreSQL (via `psql` or your SQL client of choice) and enable pgvector: `CREATE EXTENSION IF NOT EXISTS vector;`
4. Configure Environment
Create a `.env` file from the template, then configure your LLM provider (see LLM Setup above) and set the database connection:
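A sketch of the database line, assuming the Docker Postgres above; the variable name `DB_CONNECTION_URI` and the URI scheme are assumptions — verify both against `.env.template`:

```ini
# Variable name and URI scheme are assumptions — check .env.template
DB_CONNECTION_URI=postgresql+psycopg://postgres:postgres@localhost:5432/honcho
```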
5. Run Database Migrations
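The migration command (the same one referenced in Troubleshooting below):

```shell
uv run alembic upgrade head
```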
6. Start the Server
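A sketch of the dev-server command; the FastAPI entrypoint path is an assumption — check the repo README:

```shell
uv run fastapi dev src/main.py
```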
The server will be available at http://localhost:8000.
7. Start the Background Worker (Deriver)
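A sketch of the worker command; the module path `src.deriver` is an assumption — check the repo README:

```shell
uv run python -m src.deriver
```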
Run the deriver in a separate terminal so it can process messages in the background.
Cloud Database Setup
If you prefer to use a managed PostgreSQL service:
Supabase (Recommended)
- Create a Supabase project at supabase.com
- Enable the pgvector extension in the SQL editor: `CREATE EXTENSION IF NOT EXISTS vector;`
- Get your connection string from Settings > Database
- Update your `.env` file with the connection string
Railway
- Create a Railway project at railway.app
- Add a PostgreSQL service
- Enable pgvector in the PostgreSQL console
- Get your connection string from the service variables
- Update your `.env` file
Verify Your Setup
Once your Honcho server is running, verify everything is working:
1. Health Check
/health only confirms the process is running. It does not check database or LLM connectivity.
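For example:

```shell
curl http://localhost:8000/health
```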
2. Smoke Test (database + API)
This confirms the database connection, migrations, and API are all working. If the response includes an `id`, your database is connected and migrations ran correctly.
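A sketch of such a call — the route and body here are illustrative, so take the exact paths from the interactive docs at http://localhost:8000/docs:

```shell
# Illustrative route — confirm the real path in the interactive docs
curl -X POST http://localhost:8000/v2/workspaces \
  -H "Content-Type: application/json" \
  -d '{"id": "smoke-test"}'
# a JSON response containing an "id" means the DB and migrations are good
```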
3. API Documentation
Visit http://localhost:8000/docs to see the interactive API documentation.
4. Test with SDK
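A sketch using the Python SDK; the package name `honcho-ai`, import path, and constructor arguments are assumptions — see the SDK guides for the real quick-start:

```shell
uv pip install honcho-ai   # package name is an assumption
python - <<'PY'
# Import path and constructor arguments are assumptions
from honcho import Honcho

client = Honcho(base_url="http://localhost:8000")
print(client)
PY
```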
Connect Your Application
Now that Honcho is running locally, you can connect your applications:
Update SDK Configuration
Next Steps
- Configure Honcho: Visit the Configuration Guide for model tiers, provider options, and tuning
- Explore the API: Check out the API Reference
- Try the SDKs: See our guides for examples
- Join the community: Discord
Troubleshooting
Running into issues? See the Troubleshooting Guide for detailed solutions to common problems including:
- Startup failures (missing API keys, database issues)
- Runtime errors (“An unexpected error occurred” on every request)
- Deriver not processing messages
- Database connection and migration issues
- Docker and Redis problems
- Verify the server is running: `curl http://localhost:8000/health`
- Check logs: `docker compose logs api` (Docker) or check terminal output (manual setup)
- Ensure migrations ran: `uv run alembic upgrade head`
Production Considerations
The default compose file is already production-oriented — ports bound to 127.0.0.1, restart policies, caching enabled.
Security
- Set `AUTH_USE_AUTH=true` and generate a JWT secret with `python scripts/generate_jwt_secret.py`
- Use HTTPS via a reverse proxy in front of Honcho. Example with Caddy (automatic TLS):
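A minimal Caddyfile sketch (the domain is a placeholder):

```
honcho.example.com {
    reverse_proxy 127.0.0.1:8000
}
```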
Or with nginx:
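A minimal nginx sketch (domain and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name honcho.example.com;                   # placeholder
    ssl_certificate     /etc/ssl/certs/honcho.pem;    # placeholder
    ssl_certificate_key /etc/ssl/private/honcho.key;  # placeholder

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```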
- Secure your database with strong credentials and restrict network access
- The production compose binds PostgreSQL and Redis to 127.0.0.1 only — they are not accessible from the network
Scaling the Deriver
- Increase `DERIVER_WORKERS` (default: 1) for higher message throughput
- You can also run multiple deriver processes across machines — they coordinate via the database queue
- Monitor deriver logs for processing backlog
Caching
- The production compose enables Redis caching by default (`CACHE_ENABLED=true`)
- For the development compose, enable manually: `CACHE_ENABLED=true`
- Configure `CACHE_URL` to point to your Redis instance (or use a managed Redis service)
Database Migrations
- Always run `uv run alembic upgrade head` after updating Honcho before starting the server
- Check current migration status with `uv run alembic current`
LLM Providers
- Ensure your API keys are configured (see LLM Setup)
- For alternative providers or per-feature model overrides, see the Configuration Guide
Monitoring
- Enable Prometheus metrics with `METRICS_ENABLED=true`. The API exposes `/metrics` on port 8000, the deriver on port 9090 (internal to its container — not published to the host by default).
- Enable Sentry error tracking with `SENTRY_ENABLED=true`
- The development compose includes Prometheus (host port 9090) and Grafana (host port 3000) for scraping and dashboards. Uncomment those services to enable them.
Backups
- Set up regular PostgreSQL backups
- Back up your `.env` or `config.toml` configuration files
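A sketch of a simple compressed dump (connection string assumed from the setup above):

```shell
pg_dump "postgresql://postgres:postgres@localhost:5432/honcho" \
  | gzip > "honcho-$(date +%F).sql.gz"
```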