A multi-tenant AI knowledge system that turns business data (meetings, docs, emails) into structured, searchable, AI-accessible intelligence. Built on Cloudflare Workers, D1, and Claude.
01 System Overview
The full platform at a glance: what runs where and how components connect.
02 Three-Layer Architecture
Data stays in source systems. We store metadata and pointers. The UI layer is replaceable.
Source Systems
Original data stays where it is. We never duplicate entire documents, emails, or transcripts. The platform fetches content on-demand when an AI agent needs the full text.
Backend (brain.db)
Per-client D1 database storing metadata: titles, summaries, source URIs, entity links, tags, and AI-generated embeddings. This is where search and intelligence live.
UI Layer
The client talks to their brain via Telegram. There's also a Next.js dashboard on Vercel and optional Notion databases. All are swappable without touching the backend.
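The three layers can be sketched as a metadata-only record type. This is a hypothetical shape for illustration, not the actual schema.sql columns:

```typescript
// Illustrative sketch of a brain.db record: metadata and a pointer, never the
// full document. Field names are assumptions, not the real schema.
interface BrainRecord {
  id: string;
  title: string;
  summary: string;                          // AI-generated, stored locally
  sourceUri: string;                        // pointer back to the system of record
  sourceType: "meeting" | "doc" | "email";
  tags: string[];
  entityIds: string[];                      // links into the entities table
  embeddingId?: string;                     // Vectorize vector id, if indexed
}

// Full text lives in the source system; it is fetched only when an AI agent
// actually needs it.
function needsFetch(record: BrainRecord): boolean {
  return record.sourceUri.length > 0;
}
```

Because the UI layer only ever sees this metadata shape through the API, Telegram, the dashboard, and Notion can each be swapped without backend changes.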
03 Data Ingestion Flows
How data enters the system from each source type.
04 Retrieval: How Agents Query the Brain
When a client asks a question via Telegram, here's exactly what happens.
1. Client asks
Natural language question via Telegram chat
2. Agent receives
telegram-agent.py on the VPS wraps the message with the system prompt and company context
3. Claude reasons
Claude CLI decides which MCP tools to call to find the answer
4. MCP executes
Queries D1 (full-text + SQL) and Vectorize (semantic), returns results to Claude
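Step 4 returns hits from two search paths. A minimal sketch of how full-text (D1) and semantic (Vectorize) results might be merged before Claude sees them; the shapes and scoring are assumptions, not the actual MCP tool output:

```typescript
// Hypothetical merge of full-text and semantic search hits. When both paths
// return the same record, keep the higher score; then rank and truncate.
interface Hit {
  id: string;    // record id in brain.db
  score: number; // relevance, higher is better
}

function mergeResults(fts: Hit[], semantic: Hit[], limit = 10): Hit[] {
  const byId = new Map<string, Hit>();
  for (const h of [...fts, ...semantic]) {
    const prev = byId.get(h.id);
    if (!prev || h.score > prev.score) byId.set(h.id, h);
  }
  return [...byId.values()].sort((a, b) => b.score - a.score).slice(0, limit);
}
```

Deduplicating by record id matters here: a record that matches both lexically and semantically should appear once, not twice, in the context handed back to Claude.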
05 Multi-Tenant Client Routing
One Worker, many clients. The API key determines which database you're talking to.
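The routing idea can be sketched as a key-to-bindings lookup, modeled on resolveClient() in corporate-brain-mcp/src/index.ts. The concrete keys and the table shape here are illustrative, not the production values:

```typescript
// One Worker, many clients: the API key alone selects the D1 database and
// Vectorize index bindings. Keys below follow the cbrain_<client>_live_2026
// pattern but are made up for this sketch.
interface ClientBinding {
  dbKey: string;    // name of the D1 binding in wrangler.toml
  vecKey: string;   // name of the Vectorize binding
  clientId: string;
}

const CLIENTS: Record<string, ClientBinding> = {
  "cbrain_pj_live_2026": { dbKey: "DB_PJ", vecKey: "VECTORIZE_PJ", clientId: "PJ" },
  "cbrain_bo_live_2026": { dbKey: "DB_BO", vecKey: "VECTORIZE_BO", clientId: "BO" },
};

function resolveClient(apiKey: string): ClientBinding | null {
  return CLIENTS[apiKey] ?? null; // unknown key → request rejected
}
```

Because every request resolves its bindings this way, adding a client is purely additive: a new key maps to new bindings, and no existing client's path changes.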
06 Telegram Agent Architecture
Each client gets a dedicated Telegram bot backed by Claude + MCP tools.
07 Repository Map
The seoul repo structure. Know where everything lives.
seoul/
  corporate-brain-mcp/                 -- THE core: MCP server, API, OAuth, client routing
    src/index.ts                       -- 2000+ lines. All MCP tools, resolveClient(), OAuth flows
    schema.sql                         -- Full D1 schema (entities, records, tags, FTS, memories, collections)
    wrangler.toml                      -- D1 + Vectorize bindings per client
  brain-processor/                     -- VPS agent code
    telegram-agent.py                  -- Telegram bot + Claude CLI + self-monitoring
    clients/
      pj.env                           -- PJ's config (tokens, API keys, chat IDs)
      blueorchid.env                   -- Blue Orchid's config
      josh.env                         -- Josh's config (new, pending tokens)
    brain-telegram-pj.service          -- systemd unit for PJ
    brain-telegram-bo.service          -- systemd unit for Blue Orchid
    brain-telegram-josh.service        -- systemd unit for Josh (new)
  limitless-sync/                      -- Cloudflare Worker: polls Limitless API
    worker.js                          -- Cron-triggered, iterates CLIENT_IDS
    wrangler.toml                      -- CLIENT_IDS="PJ,BO,JOSH", D1 bindings
  fathom-webhook/                      -- Cloudflare Worker: receives Fathom webhooks
    worker.js                          -- Routes by webhook secret to correct D1
    wrangler.toml                      -- D1 bindings per client
  brain-dashboard/                     -- Next.js dashboard (Vercel)
    app/                               -- Pages: entities, records, connections, status
    lib/api.ts                         -- API client for corporate-brain-mcp
  docs/                                -- Documentation
    architecture-visual.html           -- This page
  .context/                            -- Project context, architecture docs, plans
    corporate-brain-architecture.md    -- Full architecture reference (READ THIS FIRST)
| File | Purpose | Where |
| --- | --- | --- |
| index.ts | All MCP tools, OAuth flows, client routing, query logic. The brain of the brain. | Cloudflare |
| schema.sql | D1 database schema. Applied to every new client DB. | Cloudflare |
| telegram-agent.py | Telegram bot, Claude CLI integration, self-monitoring. One instance per client on VPS. | VPS |
| clients/*.env | Per-client config: API keys, bot tokens, chat IDs, polling settings. | VPS |
| limitless-sync/worker.js | Polls Limitless API every 15 min. Iterates CLIENT_IDS, writes to per-client D1. | Cloudflare |
| fathom-webhook/worker.js | Receives Fathom webhook POSTs, routes by secret to correct client D1. | Cloudflare |
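The limitless-sync pattern above can be sketched in a few lines: a cron-triggered Worker walks a comma-separated CLIENT_IDS var and syncs each client's D1 in turn. The Env shape and the sync stub are assumptions, not the actual worker.js:

```typescript
// Multi-tenant cron fan-out: CLIENT_IDS="PJ,BO,JOSH" drives one sync pass per
// client, each writing to its own per-client D1 binding.
interface SyncEnv {
  CLIENT_IDS: string; // e.g. "PJ,BO,JOSH" from wrangler.toml
}

function clientsFrom(env: SyncEnv): string[] {
  return env.CLIENT_IDS.split(",").map((id) => id.trim()).filter(Boolean);
}

// Inside the Worker this would run from the scheduled() cron handler, e.g.:
//   async scheduled(_event, env) {
//     for (const id of clientsFrom(env)) await syncClient(id, env);
//   }
```

The same fan-out shape is why a new client only requires appending to CLIENT_IDS and adding a binding; no per-client code paths exist in the worker.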
08 Client Provisioning Checklist
Adding a new client to the platform. Each step is isolated; existing clients are never affected.
1
Create D1 Database
npx wrangler d1 create corporate-brain-<client>
Creates an isolated SQLite database on Cloudflare's edge. Save the database_id.
2
Apply Schema
npx wrangler d1 execute corporate-brain-<client> --remote --file=schema.sql
Applies the full D1 schema (entities, records, tags, FTS, memories, collections) to the new database.
3
Create Vectorize Index
npx wrangler vectorize create corporate-brain-<client>-vectors --dimensions=1024 --metric=cosine
For semantic/embedding search. Cosine similarity, 1024 dimensions (Workers AI model).
4
Add Bindings to wrangler.toml
Add DB_<CLIENT> and VECTORIZE_<CLIENT> bindings to all three workers:
corporate-brain-mcp, limitless-sync, fathom-webhook. Update CLIENT_IDS in limitless-sync.
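A sketch of what the added bindings might look like in wrangler.toml, using Josh as the example client; the database_id placeholder stands in for the id saved in step 1:

```toml
# Illustrative fragment for corporate-brain-mcp/wrangler.toml (repeat the D1
# binding in limitless-sync and fathom-webhook).
[[d1_databases]]
binding = "DB_JOSH"
database_name = "corporate-brain-josh"
database_id = "<database_id from step 1>"

[[vectorize]]
binding = "VECTORIZE_JOSH"
index_name = "corporate-brain-josh-vectors"
```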
5
Add to resolveClient()
In index.ts, add API key mapping: "cbrain_<client>_live_2026": { dbKey, vecKey, clientId }
Also add DB_<CLIENT> and VECTORIZE_<CLIENT> to the Env interface.
6
Deploy Workers
npx wrangler deploy from each worker directory.
Verify via health endpoint: curl https://corporate-brain-mcp.../health?key=cbrain_<client>_live_2026
7
Create Telegram Bot
Message @BotFather on Telegram. Create bot, save token. Have client message the bot to get their chat_id.
8
Create VPS Config
Create clients/<client>.env with API key, Limitless key, Telegram tokens, alert chat ID.
Create brain-telegram-<client>.service systemd unit file.
9
Deploy to VPS
scp the env file and service file to the VPS, then systemctl enable --now brain-telegram-<client>
Check logs: journalctl -u brain-telegram-<client> -f. Self-test should pass on startup.
10
Set Sync Secrets
echo "<key>" | npx wrangler secret put LIMITLESS_KEY_<CLIENT>
Also set Fathom webhook secret if applicable. These are stored encrypted in Cloudflare.
09 Execution Layer: Departments and Agents
The brain stores and retrieves data. The execution layer acts on it. This is how you go from a knowledge base to an autonomous operating system.
How a Department Works
Department Examples
Intelligence
Purpose: Make raw data smarter. Process new records, extract entities, detect duplicates, find knowledge gaps.
Agents:
Intake Processor (on new record)
Deep Analyst (nightly cross-reference)
Gap Detector (weekly question generation)
Entity Resolver (dedup on demand)
Triggers: Record creation, scheduled, manual
Operations
Purpose: Keep the system and the client informed. Briefings, health checks, status reports, follow-up tracking.
Agents:
Daily Briefing (9 AM summary)
Health Monitor (every 6h check)
Follow-Up Tracker (daily scan)
Weekly Reporter (Friday rollup)
Triggers: Cron schedules
Proactive
Purpose: Surface opportunities and risks the client hasn't asked about. This is where the brain becomes an advisor.
Agents learn from client corrections. Future runs use stored context.
How Gravemind Does It (the reference architecture)
Evolution Path: From Knowledge Base to Autonomous System
1
Phase 1: Knowledge Base (current)
Data flows in via connectors (Limitless, Fathom). Client asks questions via Telegram. Claude searches the brain and answers. Purely reactive.
2
Phase 2: Scheduled Intelligence
Add cron-triggered agents: daily briefing, follow-up tracking, weekly status reports. The brain starts talking first, not just answering. Uses agent_configs + collections tables.
3
Phase 3: Proactive Monitoring
Event-driven agents: "This deal hasn't had activity in 2 weeks." "Jane mentioned a competing offer in yesterday's call." "Your lease with Oak Street expires in 90 days." The brain becomes an advisor.
4
Phase 4: Inter-Agent Collaboration
Agents read each other's outputs. The Intake Processor's entity extraction feeds the Gap Detector. The Deal Watcher's alerts feed the Follow-Up Tracker. Emergent intelligence from composable agents.
5
Phase 5: Self-Improving System
The brain learns from corrections (memories table). Agents improve their prompts based on what worked. Client feedback refines entity matching confidence. The system gets smarter the more you use it.
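Phase 2's cron-triggered agents imply per-agent configuration rows. A hypothetical shape for an agent_configs entry; the field names are illustrative, not the actual schema:

```typescript
// Sketch of a stored agent definition: which department it belongs to, what
// triggers it, and the prompt handed to Claude when it runs.
interface AgentConfig {
  name: string;
  department: "intelligence" | "operations" | "proactive";
  trigger:
    | { kind: "cron"; schedule: string }   // scheduled agents (briefings, rollups)
    | { kind: "event"; on: string };       // event-driven agents (record creation)
  prompt: string;                          // system prompt for this agent's runs
  enabled: boolean;
}

// Example: the Daily Briefing agent from the Operations department.
const dailyBriefing: AgentConfig = {
  name: "daily-briefing",
  department: "operations",
  trigger: { kind: "cron", schedule: "0 9 * * *" }, // 9 AM summary
  prompt: "Summarize yesterday's records and open follow-ups for the client.",
  enabled: true,
};
```

Keeping triggers as data rather than code is what makes Phase 4 composable: one agent's output can become another agent's event trigger without redeploying a worker.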
10 Current Client Status
Active clients and their infrastructure state.
| Client | D1 Database | Telegram Bot | Limitless Sync | Fathom Webhook | Status |
| --- | --- | --- | --- | --- | --- |
| PJ (Fox RE) | corporate-brain-pj | live | live | live | active |
| Blue Orchid | corporate-brain-blueorchid | live | live | live | active |
| Rushil Patel | brain-rushil | live | no key | n/a | active |
| Josh Brimhall | corporate-brain-josh | pending | pending key | pending | provisioned |
Blue Orchid Society // Corporate Brain Platform // Architecture Reference
Last updated: April 2026