Open-source core.
Platform when you're ready.
The context engine is open-source. The cloud platform adds persistent memory, semantic search, compliance reporting, and team management when your team needs it.
Open Source
- Sync compression pipeline (<50ms)
- Security scanning (flag/redact/block)
- In-memory persistence (dev mode)
- 3 LLM providers (OpenAI, Anthropic, Google)
- 5 agent tool definitions
- Apache 2.0 licensed
Pro
- Everything in Open Source
- Neon DB persistent memory (pgvector)
- Redis caching layer
- Session history & replay
- Observability dashboard
- Email support
Feature Matrix
| Capability | Open Source | Pro | Team | Enterprise |
|---|---|---|---|---|
| Context Engine | Full | Full | Full | Full |
| LLM Providers | 3 | 3 | 3 | 3 + Custom |
| Agent Tools | 5 | 5 | 5 | 5 + Custom |
| Memory Storage | In-memory | Neon DB | Neon DB | Neon DB + BYOK |
| Relationship Graph | — | — | Neo4j | Neo4j |
| Observability | — | Full | Full + Team | Full + Custom |
| Security Scanning | Local only | Cloud + Alerts | Cloud + Alerts | Custom policies |
| Compliance Audit | — | — | GDPR, SOX | HIPAA, PCI-DSS, SOX |
| Semantic Search | Text only | pgvector | pgvector | pgvector + Custom |
| Session History | — | 30 days | 90 days | Immutable forever |
70%+ Token Savings
Pointer replacement and summary injection strip redundant tool results from the context. Combined with deduplication and context injection, these techniques save enterprise agents processing 1 billion tokens/month over $100K/year. GPT-5.4-nano extraction costs ~$0.001 per turn.
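The core move can be sketched in a few lines: oversized tool results are swapped for a pointer plus a short summary, and the full payload stays retrievable out-of-band. This is a minimal illustration only; the function name, the `mem://` pointer scheme, and the truncation stand-in for an LLM-generated summary are all hypothetical, not the project's actual API.

```python
def compress_tool_result(message: dict, store: dict, max_chars: int = 500) -> dict:
    """Replace a bulky tool result with a pointer + summary (illustrative sketch)."""
    content = message["content"]
    if len(content) <= max_chars:
        return message  # small results pass through untouched

    pointer = f"mem://tool-results/{message['id']}"
    store[pointer] = content            # full payload stays retrievable by pointer
    summary = content[:max_chars]       # stand-in for an LLM-generated summary
    return {
        **message,
        "content": f"[summary] {summary}\n[full result: {pointer}]",
    }
```

The agent keeps enough context to reason about the result, and can dereference the pointer only when it actually needs the full payload.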
OPUS_SAVINGS: ~$10/MTok
SONNET_SAVINGS: ~$2/MTok
EXTRACTION_COST: ~$0.001/turn (nano)
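A back-of-envelope calculator using the per-MTok figures above. Note these are simple single-pass numbers; real savings scale further because uncompressed context would otherwise be re-sent on every turn of a long session, and fleets mix models and traffic volumes.

```python
def monthly_savings_usd(tokens_per_month: float, savings_ratio: float,
                        price_per_mtok: float) -> float:
    """Tokens removed per month, priced at the model's per-MTok rate."""
    saved_mtok = tokens_per_month / 1e6 * savings_ratio
    return saved_mtok * price_per_mtok

# 1B tokens/month, 70% reduction, ~$10/MTok (Opus-class rate):
opus = monthly_savings_usd(1_000_000_000, 0.70, 10.0)  # → 7000.0 per month
```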
Security-First Architecture
Security scanning runs before LLM extraction, so redacted content never reaches the model. Every memory is classified as public, internal, or redacted. The cloud platform adds AES-256 encryption, zero retention, and optional customer-managed keys.
SCAN_MODES: FLAG / REDACT / BLOCK
MEMORY_LEVELS: PUBLIC / INTERNAL / REDACTED
ENCRYPTION: AES-256-GCM + TLS 1.3