Founder OS Engineering

Shared AI context for a 9-person team

Every Claude Code session across every machine, every operator, every project shares context through Supabase. Automatically.
7 human operators
15 MCP servers
46 Velocity skills
86 shared memories
316 tests passing
The problem

Every session starts from zero

A $5M+ revenue company with AI at the center. Every team member uses Claude Code daily. Without shared context, the AI doesn't know what happened yesterday, what the team learned, or what's already been built.

Two operators fix the same bug. Credentials get hardcoded. Onboarding takes days. No audit trail.

Architecture

The system overview

Two Supabase projects. One holds org context (20 tables). The other runs 46 proprietary skills via edge functions. Every Claude Code session starts with a pull and ends with a push.

graph TB
  subgraph ops["Operators"]
    direction LR
    don["Don\nMacBook Pro + Mac Mini"]
    matt["Matt\nMacBook Pro"]
    sean["Sean\nMac Mini"]
    nick["Nick\nMacBook Air"]
    diego["Diego\nMacBook Pro"]
    others["Nicolas, Bojan"]
    agents["FORGE + ARIA\nAI Agents"]
  end

  cc["Claude Code\nTerminal Session"]

  subgraph hooks["Lifecycle Hooks"]
    direction LR
    start_h["SessionStart\ngit pull + pull --fast"]
    stop_h["Stop\npush + auto-journal"]
  end

  subgraph ctx_db["Supabase: Context DB"]
    direction TB
    operators_t["operators + tokens"]
    config_t["global_config\noperator_config"]
    ctx_t["context_file\ncommands, rules, skills"]
    journal_t["session_journal\nappend-only"]
    proj_t["project_inventory\nproject_state"]
    audit_t["sync_audit\ncollision_log"]
  end

  subgraph skills_db["Supabase: Skills Proxy"]
    direction TB
    exec["execute-skill v18\nSSE streaming"]
    list["list-skills v8"]
    rate["rate_limits\n100/hr global"]
    usage["usage_logs\ncost tracking"]
    cron["pg_cron\ndaily sync 6am UTC"]
  end

  subgraph gh["GitHub"]
    fos["fos-context repo"]
    vel["velocity repo\n46 skill definitions"]
    repos["14+ project repos"]
  end

  subgraph local["Local File System"]
    claudemd["~/CLAUDE.md"]
    projmd["~/PROJECTS.md"]
    mem["memory/ 86 files"]
    cmds["commands/ 115"]
    rules["rules/ 14"]
    skills_l["skills/ 46"]
    hookify["hookify.* 9 guards"]
  end

  subgraph safety["Safety Guards"]
    direction LR
    hash["SHA256\nhash check"]
    bleed["Identity\nbleed guard"]
    slug["Slug collision\ndetection"]
    recover["Push\nrecovery"]
    sessions["Active session\ncheck"]
  end

  ops --> cc
  cc --> hooks
  start_h -->|"8 parallel\nworkers"| ctx_db
  start_h -->|"git pull"| gh
  ctx_db -->|"materialize"| local
  stop_h -->|"upsert changed"| ctx_db
  stop_h -->|"git push"| gh
  safety -.->|"enforced on push"| stop_h
  cc -->|"MCP server"| skills_db
  cron -->|"sync skills"| vel
      
7 operators, 2 AI agents, 2 Supabase projects, GitHub, 5 safety guards
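The "pull --fast" fan-out above can be sketched as a bounded worker pool that fetches each context table in parallel. This is a hypothetical illustration, not the real sync engine: the table names mirror the diagram, and `fetch_table` is a stand-in for an authenticated Supabase REST call.

```python
# Hypothetical sketch of "pull --fast": fetch context tables with
# 8 parallel workers, then hand the rows off for materialization.
from concurrent.futures import ThreadPoolExecutor

TABLES = [
    "global_config", "operator_config", "context_file",
    "session_journal", "project_inventory", "project_state",
    "sync_audit", "collision_log",
]

def fetch_table(name: str) -> dict:
    # Stand-in for: GET {SUPABASE_URL}/rest/v1/{name}
    # with the operator token in an x-operator-key header.
    return {"table": name, "rows": []}

def pull_fast(workers: int = 8) -> dict:
    """Pull all context tables with a bounded worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(fetch_table, TABLES))
    return {r["table"]: r["rows"] for r in results}
```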
Integrations

15 MCP servers, one session

Every Claude Code session has live, authenticated connections to 15 external services. No API wrappers, no custom scripts. Native MCP protocol with real-time read and write access.

graph LR
  subgraph crm["CRM + Sales"]
    hs["HubSpot\nEnterprise CRM"]
    cal["Calendly\nBookings"]
  end

  subgraph comm["Communication"]
    slack["Slack\nTeam messaging"]
    gmail["Gmail\nEmail"]
  end

  subgraph prod["Productivity"]
    gcal["Google Calendar\nScheduling"]
    notion["Notion\nDocs + DBs"]
  end

  subgraph design["Design"]
    figma["Figma\nDesign system"]
    webflow["Webflow\nSite builder"]
  end

  subgraph infra["Infrastructure"]
    vercel["Vercel\nDeploy"]
    supa["Supabase\nDatabase + Auth"]
    n8n["n8n\nWorkflow automation"]
    postman["Postman\nAPI collections"]
  end

  subgraph custom_mcp["Custom"]
    skills["Founder OS Skills\n46 Velocity skills"]
    mermaid_c["Mermaid Chart\nDiagrams"]
    zapier["Zapier\nAutomation"]
  end

  cc(("Claude Code\nSession"))

  crm <--> cc
  comm <--> cc
  prod <--> cc
  design <--> cc
  infra <--> cc
  custom_mcp <--> cc
      
15 live MCP connections per session. Read and write access to CRM, comms, design, deploy, and automation.
Session lifecycle

Pull, work, push

When any operator opens Claude Code, two lifecycle hooks fire automatically: git pull fetches code updates, and a Supabase pull materializes shared context to local files. When the session ends, everything pushes back through 5 safety checks.
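A minimal sketch of how those two hooks could be wired in Claude Code's `settings.json`. The `SessionStart` and `Stop` event names are Claude Code's hook schema; the `fos` CLI commands are hypothetical placeholders for the actual pull/push sync engine.

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "git -C ~/fos-context pull && fos pull --fast" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "fos push && fos auto-journal" }
        ]
      }
    ]
  }
}
```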

sequenceDiagram
  participant Op as Operator
  participant CC as Claude Code
  participant GH as GitHub
  participant SB as Supabase
  participant FS as Local Files

  rect rgb(232, 245, 233)
    Note over Op,FS: SESSION START
    Op->>CC: Opens Claude Code
    CC->>GH: git pull fos-context
    CC->>SB: pull --fast (8 workers, 30s cache)
    SB-->>CC: config, memories, journals, projects
    CC->>FS: Materialize to ~/CLAUDE.md, commands/, rules/, skills/, memory/
    CC->>SB: journal-status query
    SB-->>CC: FRESH or STALE per machine
  end

  rect rgb(255, 255, 255)
    Note over Op,FS: WORK SESSION
    Op->>CC: Edit, test, deploy
    CC->>FS: Read/write project files
    Op->>CC: log-entry Topic Content
    CC->>SB: INSERT session_journal
  end

  rect rgb(255, 243, 224)
    Note over Op,FS: SESSION END
    CC->>FS: Compute SHA256 hashes
    CC->>CC: Compare to sync_state.json
    CC->>SB: Active session check
    CC->>SB: Identity bleed guard
    CC->>SB: Slug collision check
    CC->>SB: Upsert changed content
    CC->>FS: Save push_recovery.json on failure
    CC->>SB: Auto-journal + sync_audit
    CC->>GH: git commit + push
  end
      
Automatic sync on every session. SHA256 hashing, identity bleed guard, collision detection, and recovery.
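The SHA256 skip-unchanged check from the session-end lane can be sketched as: hash every synced file, compare against the hashes recorded in a state file, and push only what actually changed. A hypothetical sketch — the file and state names follow the diagram, but the real engine's logic is not shown here.

```python
# Hypothetical sketch of the push-side hash check: upsert only files
# whose SHA256 differs from what .sync_state.json last recorded.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(files: list[Path], state_path: Path) -> list[Path]:
    """Return only files whose hash differs from the recorded state."""
    state = json.loads(state_path.read_text()) if state_path.exists() else {}
    return [f for f in files if state.get(str(f)) != sha256_of(f)]

def record_state(files: list[Path], state_path: Path) -> None:
    """After a successful push, persist the new hashes."""
    state_path.write_text(json.dumps({str(f): sha256_of(f) for f in files}))
```

An unchanged file costs one local hash and zero network writes, which is what makes pushing on every session end cheap.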
File system

What lives on each machine

Every operator's machine has the same file structure. The sync engine materializes Supabase data into these local files. Claude Code auto-loads them based on the current working directory.

graph TB
  subgraph supabase["Supabase Tables"]
    global_cfg["global_config"]
    op_cfg["operator_config"]
    ctx_file["context_file"]
    proj_inv["project_inventory"]
    session_j["session_journal"]
  end

  subgraph github["GitHub Repos"]
    fos_ctx["fos-context\nsync engine + commands"]
    vel_repo["velocity\nskill definitions"]
    proj_repos["project repos\nCLAUDE.md per project"]
  end

  subgraph home["~ Home Directory"]
    claude_md["~/CLAUDE.md\nCore rules, identity, doctrine"]
    projects_md["~/PROJECTS.md\nActive project inventory"]
  end

  subgraph dotclaude["~/.claude/"]
    commands["commands/ \n115 slash commands"]
    rules["rules/ \n14 path-scoped rules"]
    skills_dir["skills/ \n46 Velocity skills"]
    hookify_dir["hookify.* \n9 deterministic guards"]
    settings["settings.json\n15 MCP servers + hooks"]
  end

  subgraph memory["~/.claude/projects/memory/"]
    mem_global["Global (35)\nbrand, pricing, engineering"]
    mem_team["Team (9)\nshared learnings, attributed"]
    mem_operator["Operator (42)\nprivate per person"]
    mem_index["MEMORY.md\nauto-generated index"]
  end

  subgraph sync_files["Sync State Files"]
    sync_state[".sync_state.json\nSHA256 content hashes"]
    pull_cache[".pull_cache.json\n30s TTL fast cache"]
    push_recovery[".push_recovery.json\nfailsafe on error"]
  end

  global_cfg -->|"pull"| claude_md
  op_cfg -->|"pull"| claude_md
  ctx_file -->|"pull"| commands
  ctx_file -->|"pull"| rules
  ctx_file -->|"pull"| skills_dir
  proj_inv -->|"pull"| projects_md
  session_j -->|"pull"| home

  fos_ctx -->|"git pull"| commands
  fos_ctx -->|"git pull"| hookify_dir
  vel_repo -->|"pg_cron sync"| skills_dir

  claude_md -->|"push"| global_cfg
  projects_md -->|"push"| proj_inv
  memory -->|"push"| ctx_file
      
Every file on disk maps to a Supabase table or GitHub repo. Pull materializes, push syncs back.
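Materialization itself is simple in principle: each `context_file` row carries a relative path and content, and pull writes it under the local root. A hypothetical sketch — the `path`/`content` column names are assumptions inferred from the file layout above.

```python
# Hypothetical sketch of materialization: write table rows
# (path + content) to disk under a root such as ~/.claude/.
from pathlib import Path

def materialize(rows: list[dict], root: Path) -> int:
    """Write each row's content to root/<path>; return count written."""
    written = 0
    for row in rows:
        dest = root / row["path"]  # e.g. commands/deploy.md
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(row["content"])
        written += 1
    return written
```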
Security

Credentials and access control

Supabase Vault is the single credential store. Operator tokens scope every read and write via row-level security. Memory tiers enforce visibility boundaries. No operator can read or write another operator's private data.

graph TB
  subgraph vault["Supabase Vault"]
    direction TB
    v_fn["get_credential key, operator\nRLS-gated, logged, decrypted server-side"]
    v_keys["HubSpot | Anthropic | n8n\nZoom | Mercury | Calendly\nFathom | Aloware | Slack"]
  end

  subgraph auth["Operator Authentication"]
    direction TB
    config_file[".supabase_context.json\nurl + anon_key + operator_token"]
    header["x-operator-key header\nsent on every request"]
    rls["Supabase RLS policies\noperator_id scoping on all tables"]
    config_file --> header --> rls
  end

  subgraph tiers["Memory Tier Boundaries"]
    direction LR
    tier_g["Global 35\nAll operators see"]
    tier_t["Team 9\nAll operators see\nAttributed by author"]
    tier_o["Operator 42\nOnly you see"]
  end

  subgraph guards["Write Safety"]
    direction LR
    g1["SHA256 hash\nSkip unchanged"]
    g2["Identity bleed\nBlock cross-operator"]
    g3["Slug collision\nUnique paths"]
    g4["Push recovery\nAuto-save on fail"]
    g5["Active sessions\nNo concurrent writes"]
  end

  subgraph disk["Disk Credentials - migrating to Vault"]
    direction TB
    disk_hs["~/.hubspot_api_key"]
    disk_zoom["~/.zoom_credentials.json"]
    disk_cal["~/.calendly_credentials.json"]
    disk_fathom["~/.fathom_credentials.json"]
  end

  vault -->|"runtime lookup"| auth
  auth --> tiers
  auth --> guards
  disk -.->|"Phase 6 migration"| vault
      
Supabase Vault + RLS + operator tokens. Disk credentials migrating to Vault in Phase 6.
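The operator-auth chain above (config file → header → RLS) can be sketched client-side as: read `.supabase_context.json`, attach the token as `x-operator-key` on every request, and let server-side RLS policies do the scoping. The field names are assumptions taken from the diagram; this is not the real client.

```python
# Hypothetical sketch of per-operator authentication headers.
# RLS policies on the server then scope every read/write to the
# operator identified by x-operator-key.
import json
from pathlib import Path

def auth_headers(config_path: Path) -> dict:
    cfg = json.loads(config_path.read_text())
    return {
        "apikey": cfg["anon_key"],                # Supabase anon key
        "x-operator-key": cfg["operator_token"],  # checked by RLS policies
    }
```

Keeping the token in one local config file means rotating an operator's access is a single row update in the `operators` table, not a credential hunt across machines.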
Use cases

What this looks like in practice

Don builds infrastructure, Matt uses it same day

March 30, 2026

Don wires 14 PLG tools to HubSpot Forms API, creates form-utils.js, deploys to tools.founderos.com. That afternoon, Matt opens Claude Code. His session already knows about the tools, the form GUIDs, the deployment. No coordination call needed.

Sean finds a bug, the team never hits it again

March 30, 2026

Sean's session crashes on a HubSpot pagination issue. The fix becomes a team memory. Every future session for every operator loads this knowledge automatically.

Nick onboards in 30 minutes

March 30, 2026

Nick runs bootstrap.sh. His first session has 115 commands, 86 memories, 14 rules, 9 hookify guards, 46 skills. He deploys a landing page that afternoon.

Three operators, three machines, zero conflicts

April 2, 2026

Diego builds a YouTube dashboard. Don hardens the context architecture. Matt builds content strategy. Same Supabase. 231 journal entries, zero data loss.

Automated DFY assets for sales calls

April 2, 2026

A prospect books a strategy call. The pipeline fires: HubSpot data, skill matching, Claude generation, Vercel deploy, HubSpot custom object, Slack notification. Zero human effort.

DFY pipeline

From booking to deliverable, automatically

12 nodes. Calendly webhook triggers n8n Cloud. HubSpot lookup, skill matching, Claude generation, HTML rendering, Vercel deploy, HubSpot custom object, Slack notification.

graph LR
  subgraph trigger["1. Trigger"]
    booking["Prospect books\nstrategy call"]
    webhook["Calendly webhook\nfires to n8n Cloud"]
    booking --> webhook
  end

  subgraph enrich["2. Enrich"]
    hs_lookup["HubSpot lookup\ncontact + deal data"]
    context["Pull founder context\nwebsite, intake, prior assets"]
    hs_lookup --> context
  end

  subgraph match_s["3. Match"]
    select_s["Select from\n46 Velocity skills"]
    config_s["Load skill config\nfrom Supabase"]
    select_s --> config_s
  end

  subgraph generate["4. Generate"]
    edge_fn["Supabase Edge Function\nexecute-skill v18"]
    claude_api["Claude API\nSSE streaming"]
    output["JSON or .docx\nstructured output"]
    edge_fn --> claude_api --> output
  end

  subgraph deploy_s["5. Deploy"]
    template["HTML template\nbuild.js + 4 templates"]
    vercel_d["Vercel deploy\nclients.founderos.com"]
    template --> vercel_d
  end

  subgraph record["6. Record"]
    hs_obj["HubSpot\ncustom DFY object"]
    slack_n["Slack\n#dfy-feedback"]
    hs_obj --> slack_n
  end

  trigger --> enrich --> match_s --> generate --> deploy_s --> record

  subgraph n8n_env["n8n Environment"]
    dev["Dev: Mac Mini\n192.168.1.200:5678"]
    cloud["Prod: n8n Cloud\nfounderos.app.n8n.cloud"]
    dev -->|"promote"| cloud
  end

  subgraph limits["Rate Limits"]
    rl1["100 calls/hr global"]
    rl2["Per-key daily limits"]
    rl3["$0.04 avg per call"]
  end

  n8n_env -.-> trigger
  limits -.-> generate
      
Calendly booking to live Vercel page. n8n dev on Mac Mini promotes to Cloud. Rate limited + cost tracked.
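The six pipeline stages reduce to a plain function chain, which is how the flow can be reasoned about independent of n8n. Every function below is a stub standing in for the real integration (Calendly, HubSpot, Supabase, Claude, Vercel, Slack); the stub values, including the deploy path, are hypothetical.

```python
# Hypothetical sketch of the DFY pipeline as a six-stage chain.
def enrich(booking):       # HubSpot contact + deal lookup
    return {**booking, "contact": {"company": "Acme"}}

def match_skill(ctx):      # pick one of the 46 Velocity skills
    return {**ctx, "skill": "offer-audit"}

def generate(ctx):         # execute-skill edge function -> Claude API
    return {**ctx, "asset": f"<html>{ctx['skill']}</html>"}

def deploy(ctx):           # Vercel deploy -> clients.founderos.com
    return {**ctx, "url": "https://clients.founderos.com/acme"}

def record(ctx):           # HubSpot custom DFY object + Slack post
    return {**ctx, "recorded": True}

def run_pipeline(booking):
    ctx = booking
    for stage in (enrich, match_skill, generate, deploy, record):
        ctx = stage(ctx)
    return ctx
```

Because each stage only adds keys to the context dict, any stage can fail independently and be retried with the same input — a useful property when the real nodes are remote services.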
Metrics

By the numbers

| Metric | Value |
| --- | --- |
| Human operators | 7 |
| AI agents | 2 (FORGE, ARIA) |
| MCP servers connected | 15 |
| Session journal entries | 231 (14 days) |
| Shared memories | 86 (35 global, 9 team, 42 operator) |
| Shared commands | 115 |
| Velocity skills | 46 |
| Credential references | 17 |
| Skills API cost | $12.51 total (~$0.04/call) |
| Tests passing | 316 |
| Code lines | 5,749 + 3,265 tests |

The system runs itself

Every improvement compounds: each one syncs to every operator on their next session start.

FOUNDEROS.COM