Synapse: The Data Brain of Your Business
Inside Synapse Studio — the central nervous system that organizes AI agents with multi-agent orchestration, 7 multimodal capabilities (TTI, TTS, STT, ITT, I2I, Web, LLM), subtasks with a digital CEO, and autonomous execution.
Imagine your company as a building. Each floor is a department: sales, support, marketing, operations. On each floor, AI agents with specific roles work, execute tasks, respond to events and communicate with each other. Synapse Studio isn't a metaphor — it's literally how Cadences models your business operations.
With over 6,000 lines of backend API, Synapse is the largest module in Cadences. There are 45+ endpoints covering multi-agent orchestration with parallel execution, 7 multimodal capabilities (image generation, audio, transcription, vision, web search), a subtask system with a digital CEO that reviews and approves proposals, organizational data access, and full gamification with XP, achievements and leaderboards.
The building metaphor
Synapse models your organization as a building with floors. Each floor has its own context, assigned agents and configuration. A "Sales" agent on Floor 2 can't see "HR" data on Floor 5 — isolation is architectural, not a permission someone forgets to set.
Buildings, Floors, Agents and Tasks
Everything in Synapse revolves around four fundamental entities. The hierarchy is clear: the organization has buildings, buildings have floors, floors have agents, and agents execute tasks.
Buildings
The top-level organizational unit. A company can have multiple buildings: "HQ", "LATAM Operations", "AI Center". Each with its own configuration and visual theme.
Floors
Departments within the building. Each floor has its own context — documents, base prompts, reference data — that feeds the agents working on it. They can be dynamically created, reordered and deleted.
Agents
AI workers with name, role, department, level, personality, avatar, state (idle/working/break), mood, energy, XP and visual position on the floor. Each agent has its own system_prompt and personality config.
Tasks
Assignable work units with title, description, priority (low/medium/high/critical), type (text/image/vision/audio/data/mixed), AI prompt, and approval flow (approve/reject). Executed against AI models configurable per organization.
45+ Endpoints, One Single Router
All of Synapse's backend lives in a single catch-all file ([[path]].js) that routes by URL segments. This simplifies deployment and allows every endpoint to share utilities like generateId(), generateContent() and direct D1 access.
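The segment-based dispatch described above might be sketched like this. This is a minimal illustration, not the actual router: the `/api/synapse` base path and the route keys are assumptions for the example.

```javascript
// Minimal sketch of segment-based dispatch in a Pages Functions
// catch-all ([[path]].js). Base path and handler names are assumed.
function dispatch(pathname) {
  // "/api/synapse/agents/123/state" -> ["agents", "123", "state"]
  const segments = pathname
    .replace(/^\/api\/synapse\/?/, "")
    .split("/")
    .filter(Boolean);
  const [resource, id, action] = segments;
  const routes = {
    buildings: () => `buildings:${id ?? "list"}`,
    agents: () => (action ? `agents:${id}:${action}` : `agents:${id ?? "list"}`),
    tasks: () => (action ? `tasks:${id}:${action}` : `tasks:${id ?? "list"}`),
  };
  const handler = routes[resource];
  return handler ? handler() : "404";
}
```

Because every resource flows through the same function, shared utilities like `generateId()` live in the same module scope and need no cross-file imports.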
| Resource | Operations | Usage |
|---|---|---|
| Buildings | GET, POST, PUT | Create and manage organizational buildings |
| Floors | GET, POST, PUT, DELETE + Context | Floors with their own document context |
| Agents | CRUD + State, Break, Achievements, Ratings | Full AI agent management with gamification |
| Tasks | CRUD + Execute, Steps, Approve, Reject, Report | Tasks with AI execution and approval flow |
| Subtasks | Auto-create via CEO, Parent propagation | Subtasks with digital CEO review |
| Cron / Heartbeat | GET trigger, Auto-execute pending | Autonomous execution via external scheduler |
| Events | GET, POST, Batch | System event logging (audit trail) |
| Conversations | GET, POST | Conversation history between agents and users |
| Input Sources | CRUD + Form Link + Items | Data sources: emails, forms, APIs, webhooks |
| Output Destinations | CRUD + Items + Send | Destinations: emails, Slack, webhooks, reports |
| Context Analysis | POST analyze, GET latest | Automatic analysis of agent and task state |
| Scores / Leaderboard | GET scores, GET leaderboard | Agent scores and XP-based ranking |
| Achievements | GET by org/agent | Unlockable achievements with tiers and XP rewards |
| Templates | GET all, GET by dept | Predefined templates for agents and tasks |
| Org Config | GET, PUT | AI config, orchestrator, capabilities, subtasks |
| Media (R2) | GET serve by key | Serve generated images/audio from R2 |
Anatomy of a Synapse Agent
Each agent in Synapse has over 20 attributes that define who it is, what it does, how it feels and where it is. This goes far beyond a simple "chatbot with a prompt" — it's a digital worker with state, personality and progression.
◆ Identity
- name — Agent name
- role — Type: agent, supervisor, analyst
- role_title — Visible title: "Sales Rep", "QA Lead"
- department — Assigned department
- level — Experience level (1-N)
- avatar_sprite — Visual sprite for the UI
◆ Intelligence
- system_prompt — Agent base instructions
- personality_config — Personality JSON
- capabilities — Multimodal array: llm, tti, tts, stt, itt, i2i, web
- can_create_subtasks — Can propose sub-tasks to the digital CEO
- data_access_level — Data access level (0=none, 1=dept, 2=org)
◆ Real-Time State
- current_state — idle, working, break, offline
- current_mood — neutral, happy, focused, tired
- energy_level — 0.0 to 100.0
- consecutive_tasks — Tasks in a row without pause
- current_task_id — Currently active task
- last_break_at — Last break taken
◆ Progression
- xp_points — Accumulated experience points
- position_x, position_y — Position on the floor
- is_active — Active or disabled
- Unlockable achievements
- Ratings from other agents/users
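Put together, an agent record might look like the following. The field names come from the article; all values are invented for illustration.

```javascript
// Illustrative agent record combining the attributes listed above.
// Field names follow the article; values are made up.
const agent = {
  // Identity
  name: "Ana", role: "agent", role_title: "Sales Rep",
  department: "sales", level: 3, avatar_sprite: "ana_01",
  // Intelligence
  system_prompt: "You are a sales representative...",
  personality_config: { tone: "upbeat" },
  capabilities: ["llm", "tti", "web"],
  can_create_subtasks: true,
  data_access_level: 1, // 0=none, 1=dept, 2=org
  // Real-time state
  current_state: "working", current_mood: "focused",
  energy_level: 72.5, consecutive_tasks: 2,
  current_task_id: "task_abc", last_break_at: "2025-01-22T09:00:00Z",
  // Progression
  xp_points: 1450, position_x: 4, position_y: 2, is_active: true,
};
```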
7 Multimodal Capabilities
Each agent has a capabilities array that defines what it can do beyond text. The orchestrator assigns steps to agents that have the required capability and validates that the chosen agent actually possesses it — if not, it automatically reassigns.
LLM
Text generation. All agents have this. DeepSeek, Groq, Gemini, Cloudflare AI.
TTI — Text-to-Image
Generates images via FLUX Schnell (Cloudflare Workers AI). The LLM creates an English prompt, then the image is generated and stored in R2.
TTS — Text-to-Speech
Generates audio with ElevenLabs (multilingual v2) or MeloTTS (CF AI). The LLM drafts the text, then it's synthesized and uploaded to R2.
STT — Speech-to-Text
Transcribes audio with Whisper (CF AI). Audio URLs are auto-detected, transcribed and injected into the prompt for analysis.
ITT — Image-to-Text
Visual analysis with Llama 4 Scout (Groq). Attached images are sent as multi-image visual context to the model.
I2I — Image-to-Image
Transforms images with Stable Diffusion v1.5. Controls strength (0.3 subtle → 0.9 radical). Supports automatic iterative evolution.
Web — Internet Search
Search via Groq Compound Beta. Generates automatic queries from the title/description and enriches context with real-world data.
Automatic post-processing
Each capability has its own post-processing pipeline: the LLM generates intermediate content (image prompt, narration text, analysis), and then the system executes the real action (generate image, synthesize audio, store in R2). Resulting assets are aggregated into the final result and propagated to child subtasks.
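The dispatch itself can be sketched as a switch over the step's capability. This is a hedged sketch with stub functions; the real pipelines call FLUX, ElevenLabs and R2 as described above.

```javascript
// Sketch of per-capability post-processing: the LLM output is
// intermediate content, and a pipeline turns it into the real asset.
// The *Stub functions stand in for the actual generation + R2 upload.
function postProcess(capability, llmOutput) {
  switch (capability) {
    case "tti": // llmOutput is an English image prompt
      return { type: "image", asset: generateImageStub(llmOutput) };
    case "tts": // llmOutput is narration text
      return { type: "audio", asset: synthesizeStub(llmOutput) };
    default:    // plain LLM text needs no extra step
      return { type: "text", asset: llmOutput };
  }
}
const generateImageStub = (prompt) => `r2://images/${prompt.length}.png`;
const synthesizeStub = (text) => `r2://audio/${text.length}.mp3`;
```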
Orchestrated Execution Pipeline
When a task is executed, Synapse doesn't simply call an LLM. It orchestrates a multi-agent collaboration with parallel execution, capability validation and result compilation. The executeTaskInternal() function is callable both via HTTP and by cron, and runs in background via ctx.waitUntil().
Config resolution + agents
loadOrgConfigForTask() loads the tier, models, token limits and capability_config from D1. Then all active agents in the organization are loaded for the collaboration team.
AI Orchestration — Collaboration plan
orchestrateCollaboration() sends the full agent roster (with capabilities, energy, state) to the orchestrator LLM (DeepSeek by default). The result is a plan with parallel_groups: steps that execute in parallel within each group, with groups executing sequentially. If the orchestrator fails, a smart fallback detects the required capability from title keywords.
Capability validation
The system validates that each assigned agent has the step's required_capability. If an [llm] agent is assigned to a tti step, it's automatically reassigned to an agent that has the capability — or kept with a warning if none has it.
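The reassignment rule above can be sketched as a small function. Field names (`agent_id`, `required_capability`) are assumptions for the example, not the confirmed schema.

```javascript
// Sketch of capability validation: if the planned agent lacks the
// step's required capability, reassign to one that has it; otherwise
// keep the assignment with a warning.
function validateAssignment(step, agents) {
  const planned = agents.find((a) => a.id === step.agent_id);
  if (planned && planned.capabilities.includes(step.required_capability)) {
    return { agent_id: step.agent_id, warning: null };
  }
  const substitute = agents.find((a) =>
    a.capabilities.includes(step.required_capability)
  );
  return substitute
    ? { agent_id: substitute.id, warning: null }
    : { agent_id: step.agent_id, warning: `no agent has ${step.required_capability}` };
}
```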
Parallel execution with timeout
Each group executes its steps in real parallel via Promise.allSettled(), with a 2-minute timeout per step. Each agent receives a personalized prompt with its system_prompt, capability instructions, organizational data, and previous group contributions as context.
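A minimal sketch of one parallel group, assuming the shape described: each step races against a per-step timeout, and `Promise.allSettled()` keeps one failed or slow step from sinking the whole group.

```javascript
// One parallel group: each step is raced against a timeout
// (2 minutes in the article), and allSettled collects every outcome.
const STEP_TIMEOUT_MS = 2 * 60 * 1000;

function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("step timeout")), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

async function runGroup(steps, runStep, timeoutMs = STEP_TIMEOUT_MS) {
  const results = await Promise.allSettled(
    steps.map((step) => withTimeout(runStep(step), timeoutMs))
  );
  return results.map((r, i) => ({
    step: steps[i],
    ok: r.status === "fulfilled",
    value: r.status === "fulfilled" ? r.value : String(r.reason),
  }));
}
```

Groups themselves run sequentially, so each group can see the previous group's contributions as context.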
Multimodal post-processing
Based on the step's capability, the corresponding pipeline runs: TTI extracts prompt → generates image with FLUX → stores in R2. TTS extracts text → synthesizes with ElevenLabs → uploads MP3 to R2. STT transcribes with Whisper before the LLM. Each output is recorded in synapse_task_steps with type, content and timing.
Aggregation + creative assets
Images, audios and transcriptions from all steps are aggregated into the final result. If there are creative assets, a deep-link to Perspectiva Studio is injected for creating social media content with the generated material.
Subtasks + digital CEO
Agents with can_create_subtasks can propose derived subtasks. A "CEO" LLM reviews each proposal, assigns agents by department and expertise, and creates them with configurable depth and count limits. Parent images/audio are automatically propagated to child tasks.
State, XP, routing + iteration
The task moves to completed or waiting_approval based on agent level. XP is awarded, achievements are checked, and results are routed to configured output destinations. If the user rejects with feedback, the system detects the original capability and re-executes with TTI/TTS preserved.
Digital CEO and Subtask System
Agents with the can_create_subtasks flag can include a subtareas_propuestas field in their JSON output. The system collects all proposals and sends them to a "CEO" LLM that acts as a digital executive officer.
Intelligent review
The CEO validates each proposal, rejects duplicates or vague ones, improves titles/descriptions, and decides actual priority.
Assignment by expertise
Assigns each subtask to the most suitable agent by department, level and capabilities. Validates real IDs (no hallucinations).
Asset propagation
Images and audio generated by the parent task are propagated as parent_images/parent_audios in the child's metadata.
Configurable limits
subtask_max_per_task (default 5) — maximum subtasks per root task. subtask_max_depth (default 2) — maximum chain depth. subtask_auto_execute — whether subtasks auto-execute or wait for manual approval.
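Enforcing those limits might look like the sketch below, using the defaults the article gives (5 per root task, depth 2); the function name is hypothetical.

```javascript
// Sketch of enforcing the configurable subtask limits.
function canCreateSubtask(config, rootSubtaskCount, parentDepth) {
  const maxPerTask = config.subtask_max_per_task ?? 5;
  const maxDepth = config.subtask_max_depth ?? 2;
  if (rootSubtaskCount >= maxPerTask) return false; // quota exhausted
  if (parentDepth + 1 > maxDepth) return false;     // chain too deep
  return true;
}
```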
Organizational Data Access
Agents with data_access_level ≥ 1 can query data from the organization's Input Sources. The system loads schemas from linked projects, injects available data into the prompt, and allows the agent to request specific searches with busqueda_datos.
// The agent can request searches against its sources
{
  "resultado": "Active client analysis...",
  "busqueda_datos": [
    {
      "fuente": "CRM Clients",
      "campos": ["name", "email", "plan"],
      "buscar": "enterprise",
      "limite": 10
    }
  ]
}
When the agent includes busqueda_datos, the system executes the searches against D1, injects the results into a second prompt, and re-executes the agent with real data. This allows a sales agent to ask "how many enterprise clients do we have?" and get real numbers from their CRM — not hallucinated ones.
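One `busqueda_datos` entry might translate into a parameterized D1 query roughly like this. The table name and row cap are assumptions; the real schema comes from the linked input source's project.

```javascript
// Hedged sketch: turn one busqueda_datos entry into a parameterized
// query. Table/column handling here is illustrative only.
function buildSearchQuery(entry) {
  const limit = Math.min(entry.limite ?? 10, 50); // cap rows server-side
  const fields = entry.campos.map((c) => c.replace(/[^a-z0-9_]/gi, ""));
  return {
    sql: `SELECT ${fields.join(", ")} FROM source_items ` +
         `WHERE source_name = ? AND content LIKE ? LIMIT ${limit}`,
    params: [entry.fuente, `%${entry.buscar}%`],
  };
}
```

Only the search value and source name travel as bind parameters; column names are sanitized because SQL identifiers cannot be bound.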
Input Sources and Output Destinations
Synapse doesn't live in isolation. Input Sources are the data entry points: emails, web forms, webhooks, external APIs. Output Destinations are where results go: automated emails, Slack channels, webhooks, PDF reports.
→ Input Sources
- Emails — incoming emails are parsed and converted into items processable by agents.
- Forms — public form links via /form-link for direct data capture.
- APIs and webhooks — receive data from external systems as JSON for automatic processing.
← Output Destinations
- Email — send results, reports and notifications.
- Slack and webhooks — notifications to team channels or external systems.
- Item log — each output is logged as an item with send status for full traceability.
XP, Achievements and Leaderboards
What makes Synapse unique is its gamification system. AI agents earn experience points (XP) for completing tasks, unlock achievements based on performance, and compete on an organizational leaderboard. This isn't just a visual detail — it's a way to measure and compare the effectiveness of different agent configurations.
XP Points
Each completed task awards XP based on its complexity and priority. Agents level up automatically.
Achievements
Achievements by category (productivity, quality, speed) with tiers (bronze → silver → gold) and bonus XP rewards.
Leaderboard
Per-organization rankings sorted by XP. Quickly identify which agents are performing best and which need tuning.
The ratings system lets users (and other agents) rate response quality. The endpoint calculates averages and totals for each agent, feeding a continuous improvement cycle: if an agent gets low ratings, you know its system_prompt needs refinement.
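A minimal sketch of the XP mechanics described above. The per-priority weights and the level threshold are invented for illustration, not the real values.

```javascript
// Illustrative XP award: higher priority pays more, and the agent
// levels up on thresholds. Weights and threshold are hypothetical.
const XP_BY_PRIORITY = { low: 10, medium: 25, high: 50, critical: 100 };

function awardXp(agent, task) {
  const gained = XP_BY_PRIORITY[task.priority] ?? 10;
  const xp = agent.xp_points + gained;
  // hypothetical rule: a new level every 500 XP
  return { ...agent, xp_points: xp, level: Math.floor(xp / 500) + 1 };
}
```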
Real-Time Context Analysis
The POST /context/analyze endpoint takes an instant snapshot of the organization: which agents are active, what state they're in, what tasks are pending, and generates a structured analysis. This is callable manually or by the scheduler for continuous monitoring.
// Context analysis result
{
  "id": "ctx_a1b2c3d4",
  "agents": [
    { "name": "Ana - Sales", "state": "working", "mood": "focused", "energy": 72.5 },
    { "name": "Carlos - Support", "state": "idle", "mood": "neutral", "energy": 95.0 },
    { "name": "Legal Bot", "state": "break", "mood": "tired", "energy": 15.0 }
  ],
  "tasks": [
    { "title": "Q1 Competitor Analysis", "status": "in_progress", "priority": "high" },
    { "title": "Lead follow-up email", "status": "pending", "priority": "medium" }
  ],
  "timestamp": "2025-01-22T10:30:00Z"
}

Bot Sessions: Long-Term Memory
Bot Sessions allow a bot to maintain memory between interactions. Each session stores conversation history, metadata and bot type (chat, assistant, workflow). They auto-clean with keep=N to retain only the latest N active sessions.
This is especially useful for agents that serve the same customer across multiple occasions: "Hello Maria, last time we talked about your marketing budget. Would you like to continue?" — without the user having to repeat context.
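The keep=N cleanup can be sketched as pure list logic: sort active sessions newest-first and retain only the latest N. The row shape (`id`, `active`, `updated_at`) is assumed for the example.

```javascript
// Sketch of keep=N session cleanup: inactive sessions are untouched,
// active ones beyond the newest N are pruned.
function pruneSessions(sessions, keep) {
  const active = sessions
    .filter((s) => s.active)
    .sort((a, b) => b.updated_at - a.updated_at);
  const keepIds = new Set(active.slice(0, keep).map((s) => s.id));
  return sessions.filter((s) => !s.active || keepIds.has(s.id));
}
```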
Ready-to-Use Templates
Synapse includes a bank of predefined templates for agents and tasks by department. A new user can deploy a complete "Sales" or "Support" floor with pre-configured agents, tested prompts and working workflows — in minutes, not weeks.
Sales · Support · Marketing · Legal · Finance · HR · Operations · R&D
Synapse → Perspectiva Studio
When Synapse generates creative assets (TTI/I2I images, TTS audio, text), the final result includes a deep-link to Perspectiva Studio that opens the Publications module directly with everything pre-loaded: images auto-assigned to enabled platforms (Instagram, X, LinkedIn, TikTok, Facebook), text ready to edit, and content analysis pre-executed.
Deep-link with encoded payload
The backend builds an object { source, taskId, title, text, images[], audios[] }, encodes it in Base64 and appends it as hash: #synapse=eyJ.... Perspectiva Studio detects this fragment on load, decodes the payload, and auto-populates the publication across all active platforms.
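The round trip can be sketched as follows, using Node's `Buffer` (the browser side would use `btoa`/`atob` with a UTF-8 shim); helper names are hypothetical.

```javascript
// Deep-link round trip: payload -> JSON -> Base64 -> URL hash,
// then decode on the Perspectiva Studio side.
function encodeSynapseLink(payload) {
  const b64 = Buffer.from(JSON.stringify(payload), "utf8").toString("base64");
  return `#synapse=${b64}`;
}

function decodeSynapseLink(hash) {
  const b64 = hash.replace(/^#synapse=/, "");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}
```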
The full flow is: create task with TTI capability → agent generates image → R2 stores it → result includes image URL → backend injects deep-link → user clicks "📣 Publish to social" CTA → Perspectiva Studio opens with image pre-assigned, text pre-analyzed, and "🧠 Synapse" badge on previews.
Per-Organization Configuration
Each organization configures AI models, the orchestrator, multimodal capabilities, subtask limits and data access. Everything is configurable per-organization — one can use DeepSeek for orchestration and Groq for execution, another can use Gemini for everything.
| Parameter | Description | Example |
|---|---|---|
| tier_config | Primary provider and model per agent | deepseek-chat, groq/llama, gemini |
| orchestrator_* | Orchestrator provider, model and temperature | deepseek / deepseek-chat / 0.3 |
| capability_config | Specific models per capability (TTI, ITT, TTS, Web) | tti: flux-1-schnell, itt: llama-4-scout |
| max_tokens_* | Tokens per tier: basic, mid, smart | 1536 / 2048 / 3072 |
| subtask_max_* | Subtask limits (per task, depth, auto-exec) | 5 per task, depth 2 |
| data_access_* | Minimum level and max rows for data access | min_level: 2, max_rows: 10 |
| auto_execute_enabled | Enables autonomous execution via cron | true/false |
Multi-Tenant Isolation in Every Query
Every query in Synapse includes organization_id as a mandatory filter. An agent from Org A can never see, modify or execute tasks from Org B. This isolation doesn't depend on application permissions — it's in every WHERE clause of every SQL query.
Combined with the per-tenant Durable Objects system (described in the Multi-Tenant SaaS article), Synapse operates with database-level isolation (separate D1), process-level isolation (separate DO) and logic-level isolation (org_id in every query). Triple barrier, zero chance of leakage.
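One way to make the logic-level barrier structural is a query helper that simply refuses to build SQL without the tenant filter. This is a sketch of the pattern, not Synapse's actual code.

```javascript
// Sketch: a helper that makes the org_id WHERE clause mandatory
// rather than something each endpoint remembers to add.
function scopedQuery(orgId, table, extraWhere = "", extraParams = []) {
  if (!orgId) throw new Error("organization_id is required");
  const where = ["organization_id = ?", extraWhere].filter(Boolean).join(" AND ");
  return {
    sql: `SELECT * FROM ${table} WHERE ${where}`,
    params: [orgId, ...extraParams],
  };
}
```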
The Brain That Connects Everything
Synapse is more than an AI agent dashboard. It's the central nervous system that connects data inputs, intelligent multimodal processing, and result outputs — all modeled as a visual organization with buildings, floors and digital workers that collaborate in parallel.
With multi-agent orchestration, 7 multimodal capabilities (TTI, TTS, STT, ITT, I2I, Web, LLM), a digital CEO that reviews subtasks, organizational data access, direct integration with Perspectiva Studio, and autonomous cron execution — Synapse turns the complexity of operating an AI agent office into something any team can visually manage.
Technical summary
- ✦ 6,000+ lines of API, 45+ endpoints, catch-all router on Cloudflare Pages Functions
- ✦ Hierarchical model: Buildings → Floors (with context) → Agents → Tasks → Subtasks
- ✦ Multi-agent orchestration with parallel execution (parallel_groups) and per-step timeout
- ✦ 7 multimodal capabilities: LLM, TTI (FLUX), TTS (ElevenLabs/MeloTTS), STT (Whisper), ITT (Llama 4 Scout), I2I (Stable Diffusion), Web (Groq Compound)
- ✦ Digital CEO that reviews, approves and assigns subtasks with depth/count limits
- ✦ Organizational data access with LIKE searches against D1 and re-execution
- ✦ Perspectiva Studio integration: deep-link with auto-populate in Publications
- ✦ Gamification: XP, levels, tiered achievements, leaderboard, ratings
- ✦ R2 storage for generated images/audio (canonical domain cadences.app)
- ✦ Autonomous cron execution + background execution via waitUntil
- ✦ Multi-tenant with org_id in every query, total isolation between organizations
Cadences Engineering
Technical documentation from the engineering team