The architecture behind TeamOS — how the pieces fit together, and why it's built from plain files.
The hardest part of building any shared system is deciding what it's actually for.
TeamOS has a narrow mandate: synthesize context across the tools you already use, make that context retrievable without hunting, and eliminate the reformatting loop. It's the layer between those tools that makes them collectively useful instead of siloed.
Here's how it's built.
The File Structure
Everything lives in a git repo. The structure is the system:
team-os/
├── TEAM.md                     # System config — rules, integrations, commands
├── projects/
│   └── project-alpha/
│       ├── brief.md            # Original charter
│       ├── plan.md             # Phased plan with milestones
│       ├── status.md           # Current state (auto-updated)
│       ├── decisions.md        # Decision log with reasoning
│       ├── risks.md            # Risk register
│       └── dependencies.md     # Cross-project dependencies
├── standards/
│   ├── brand-charter.md        # Visual identity, slide themes, formatting
│   ├── writing-guide.md        # Tone, terminology, acronym glossary
│   └── templates/              # Canonical templates for every deliverable type
├── team/
│   ├── roster.md               # Members, roles, expertise
│   ├── stakeholders.md         # Who to keep updated, how, and when
│   ├── expertise-map.md        # Who knows what
│   └── preferences/
│       └── sarah.md            # "Bullets, risks first, Slack over email"
├── context/
│   ├── standup_YYYYMMDD.md     # Async standup summaries
│   ├── meeting-notes/
│   ├── connections/            # Context connector match log
│   └── audit-trail.md          # Append-only action log
├── learnings/
│   ├── patterns.md             # What we've learned works and doesn't
│   ├── tribal-knowledge/       # Captured expertise
│   └── corrections.md          # Corrected assumptions
├── inbox/                      # Unprocessed captures — raw notes, Slack threads
├── outputs/                    # Generated decks, reports, plans
└── access/
    ├── roles.md                # Role tiers
    └── connector-consent.md    # Per-person opt-in/out for proactive DMs
TEAM.md is the control plane — the equivalent of a CLAUDE.md for the system as a whole. It tells the AI what the team is, which integrations are live, what commands are available, and what rules govern behavior. If something about the system changes, TEAM.md is where it's documented.
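As a sketch, a minimal TEAM.md might read like this. The details are pulled from elsewhere in this post (team size, integrations, commands, the consent file); the exact section layout is illustrative, not the real file:

```markdown
# TEAM.md

## Team
Eight members, five active projects. Roster: team/roster.md

## Integrations (live)
- Jira (project sync)
- Slack (DMs + weekly digest channel)
- Calendar (read-only)

## Commands
/status, /to-deck, /to-report, /decision, /capture, /handoff, /who-knows, /onboard

## Rules
- All writes go through the write queue
- Standards in standards/ are read before every transformation
- Proactive DMs require opt-in (access/connector-consent.md)
```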
The Baseline Creator
The highest-value pattern: define your standards once, then enforce them through every transformation command automatically.
Standards live in standards/:
- brand-charter.md — colors, fonts, slide layouts, logo usage rules
- writing-guide.md — tone, terminology, what not to say, audience-specific conventions
- templates/ — canonical structures for decks, briefs, status reports, project plans
Every content transformation reads standards first. When someone runs:
/to-deck quarterly-update.md
The agent reads brand-charter.md + the deck template + the source doc, then generates a branded HTML slide deck to outputs/decks/. A standards-enforcer validates it before delivery. Wrong font? Caught. Missing required section? Caught. Off-brand language? Caught.
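That enforcer pass can be sketched in a few lines, assuming the brand rules have already been parsed out of brand-charter.md into a dict. The specific fonts, sections, and phrases below are made up for illustration:

```python
import re

# Hypothetical rules; in practice these would be parsed from brand-charter.md
# and the relevant template in standards/templates/.
BRAND = {
    "allowed_fonts": {"Inter", "Source Serif"},
    "required_sections": ["Summary", "Risks", "Next Steps"],
    "banned_phrases": ["synergy", "circle back"],
}

def check_deck(html: str) -> list[str]:
    """Return a list of violations; an empty list means the deck passes."""
    violations = []
    # Wrong font? Caught.
    for font in re.findall(r"font-family:\s*([^;\"']+)", html):
        if font.strip() not in BRAND["allowed_fonts"]:
            violations.append(f"off-brand font: {font.strip()}")
    # Missing required section? Caught.
    for section in BRAND["required_sections"]:
        if section not in html:
            violations.append(f"missing required section: {section}")
    # Off-brand language? Caught.
    for phrase in BRAND["banned_phrases"]:
        if phrase in html.lower():
            violations.append(f"off-brand language: {phrase}")
    return violations
```

If the list comes back non-empty, the deck never reaches outputs/decks/; the agent regenerates or flags it.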
The practical effect: nobody asks "can you make this look like the last deck?" anymore. The last deck and this deck look the same because both ran through the same template against the same brand charter.
The before/after is stark:
| Before | After |
|---|---|
| "Can you make this look like the last deck?" | /to-deck file.md — always matches brand |
| "What template should I use?" | Templates auto-applied by command type |
| 30–60 min reformatting per deck | 2 min generation + review |
| "Can you reformat this for the VP?" | /to-report alpha --for leadership |
The --for flag on /to-report adapts the same data for different audiences. Same project, same facts — --for team gets full technical detail with blockers and open decisions; --for exec gets three bullets: ship date, top risk, the ask.
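One way that audience filter could work: same underlying facts, with a per-audience section list that controls depth. The section names here are hypothetical:

```python
# Audience-scoped section lists (hypothetical names; the real structure
# would come from the report template in standards/templates/).
AUDIENCE_SECTIONS = {
    "team": ["ship_date", "top_risk", "ask", "blockers",
             "open_decisions", "technical_detail"],
    "exec": ["ship_date", "top_risk", "ask"],   # three bullets, nothing more
}

def render_report(project: dict, audience: str) -> list[str]:
    """Filter sections by audience; never rewrite the facts themselves."""
    sections = AUDIENCE_SECTIONS.get(audience, AUDIENCE_SECTIONS["team"])
    return [f"{name}: {project[name]}" for name in sections if name in project]
```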
The Context Connector
The second pattern is proactive connection — surfacing overlap between people working on related problems without knowing it.
In most teams, these connections happen by accident. Two engineers are solving adjacent auth problems. Two PMs are scheduling reviews with the same VP in the same week. Someone writes a plan to deprecate an API that another team just built on top of. These collisions are expensive. And they're mostly preventable if someone is watching the whole board.
The context connector runs on every project update:
- Every update is tagged with topic vectors — auth, API, vendor-X, performance, deploy
- The system scans for overlap across projects and people
- Matches above a relevance threshold get scored and routed
- Opted-in members get a structured DM; everyone else sees it in the weekly digest
A dependency contradiction — where one project plans to deprecate something another project depends on — always gets surfaced immediately. Soft overlaps (two people working on related topics) go to digest unless the recipient has opted into real-time DMs.
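The scoring and routing above can be sketched as follows. The Jaccard scoring and the 0.3 threshold are assumptions for illustration, not the system's actual numbers:

```python
def score_overlap(tags_a: set[str], tags_b: set[str]) -> float:
    """Jaccard similarity over topic tags: one simple way to score overlap."""
    if not tags_a or not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

def route_match(score: float, is_contradiction: bool, opted_in: bool) -> str:
    """Route a match: contradictions surface now, soft overlaps go to digest."""
    if is_contradiction:
        return "immediate-dm"   # dependency conflicts always surface immediately
    if score < 0.3:             # below the relevance threshold: drop silently
        return "drop"
    return "dm" if opted_in else "digest"
```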
The DM format matters. It's not a notification — it's an offer:
Hey Sarah — TeamOS spotted a connection:
Your work: Auth migration in Project Alpha (decisions.md, Mar 28)
Related: Marcus hit an auth regression in Project Beta (risks.md, Mar 30)
Both involve the OAuth token refresh flow.
→ Want me to create a shared thread?
→ Or just an FYI? (React 👀 to dismiss)
Dismissed connections don't resurface. The system logs what was acted on and what wasn't, and that feedback calibrates future scoring.
The Write Queue
Personal AI systems don't have concurrency problems. I'm the only user; I don't have two sessions open simultaneously.
Teams do.
Sarah logs a decision at 2:01. Marcus pulls a status at 2:01. Jordan drops meeting notes at 2:02. A Jira sync fires at 2:05. Four operations on overlapping files within minutes. Without serialization, you get write races — two agents writing to the same file and producing garbage.
The write queue fixes this:
All writes → queue (FIFO) → single writer process → git commit
Reads bypass the queue entirely — they're never the bottleneck. Writes get ordered by timestamp. File-level locking (not repo-level) means Sarah's write to projects/alpha/decisions.md only blocks other writes to that specific file. Marcus's write to a different file proceeds in parallel.
For a small team (3–5 people), a simple mutex file per target document is enough. For 6–15, a lightweight message queue. For larger teams, you're probably moving off git anyway.
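The whole pattern fits in a few lines: a FIFO queue, a single writer thread, and one lock per target file. This is an illustrative sketch; the real system would also commit each write to git:

```python
import queue
import threading
from collections import defaultdict

STOP = ("__stop__", "")
write_q: queue.Queue = queue.Queue()      # (path, content) tuples, FIFO order
file_locks = defaultdict(threading.Lock)  # one lock per target file, not per repo
log: list[str] = []                       # stand-in for the actual write + git commit

def enqueue_write(path: str, content: str) -> None:
    """All writes go through the queue; reads bypass it entirely."""
    write_q.put((path, content))

def writer_loop() -> None:
    """The single writer process: drain the queue in arrival order."""
    while True:
        path, content = write_q.get()
        if (path, content) == STOP:
            break
        with file_locks[path]:            # only same-file writes block each other
            log.append(f"{path} <- {content}")
```

Sarah's write to projects/alpha/decisions.md takes only that file's lock, so Marcus's write to projects/beta/risks.md never waits on it.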
The Three-Layer Density Model
With a team of eight running five projects, syncing Jira + Slack + Calendar, you can drown in your own output within weeks. The information overload problem is real, and it's what kills adoption of tools like this.
The solution is explicit information density management across three layers:
Layer 1: Raw — inbox captures, meeting notes, Slack threads. High volume. Only the routing agent sees this. 7-day retention, auto-archived after processing.
Layer 2: Structured — project files, decisions, risks. Medium volume. Project-scoped visibility. Accumulates for the life of the project, then archives.
Layer 3: Synthesized — digests, reports, dashboards. Low volume. Role-filtered. The only layer that grows permanently — and it's compressed by design.
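The three layers reduce to a small routing table. The retention and visibility values come from the layer descriptions above; the field names and directory mappings are illustrative:

```python
# Density layers as a routing/retention table (illustrative field names).
LAYERS = {
    "raw":         {"dirs": ("inbox/", "context/meeting-notes/"),
                    "visibility": "routing-agent", "retention_days": 7},
    "structured":  {"dirs": ("projects/",),
                    "visibility": "project", "retention_days": None},  # life of project
    "synthesized": {"dirs": ("outputs/",),
                    "visibility": "role", "retention_days": None},     # permanent
}

def layer_of(path: str) -> str:
    """Classify a file path into its density layer."""
    for name, spec in LAYERS.items():
        if path.startswith(spec["dirs"]):
            return name
    return "raw"   # unclassified captures default to the raw layer
```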
The weekly digest uses progressive summarization:
🔴 Needs Attention (read this)
- Alpha: API v2 deprecation conflicts with Beta dependency
- Gamma: Ship date slipped — QA resource gap
🟡 Decisions Made (skim this)
- Alpha: Webhook polling over streaming (Sarah, Mar 28)
- Beta: Launch delayed 1 week for QA (Marcus, Mar 30)
🟢 On Track (skip unless your project)
- Alpha: Auth migration 80% complete, on schedule
- Beta: New timeline confirmed
📊 Numbers
- Decisions: 7 | Blockers resolved: 3 | Connector matches: 4 (3 acted on)
Red gets two sentences. Yellow gets one line each. Green you skip. The reader chooses depth; the system doesn't force it.
Per-person information budgets enforce this at the DM level. A team lead gets up to five DMs a day (action items, connector matches, escalations). A member gets three (their own action items, direct connector matches). Stakeholders get zero DMs — scheduled digests only. If the system would exceed someone's budget, it batches and prioritizes: blockers always get through, FYI notifications get dropped.
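The budget logic is a sort-and-truncate. The tier sizes (lead 5, member 3, stakeholder 0) are from above; the priority ordering is an assumption:

```python
# Daily DM budgets per role (from the tiers described above).
DM_BUDGET = {"lead": 5, "member": 3, "stakeholder": 0}
# Hypothetical priority ordering; lower sorts first.
PRIORITY = {"blocker": 0, "action-item": 1, "connector-match": 2, "fyi": 3}

def deliver_dms(role: str, pending: list[tuple[str, str]]) -> list[str]:
    """Return the (kind, message) items that fit today's budget."""
    budget = DM_BUDGET.get(role, 0)
    if budget == 0:
        return []                     # stakeholders: scheduled digests only
    ranked = sorted(pending, key=lambda m: PRIORITY[m[0]])
    blockers = [msg for kind, msg in ranked if kind == "blocker"]
    rest = [msg for kind, msg in ranked if kind not in ("blocker", "fyi")]
    # Blockers always get through; FYIs are dropped when the budget is tight.
    return blockers + rest[: max(0, budget - len(blockers))]
```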
What Commands Look Like
The full command set covers project management, content transformation, and team operations. A few that show the range:
- /status alpha — current project state from live Jira + recent decisions + blockers, formatted for whoever asked
- /to-deck quarterly-update.md — branded slides from any doc, 2 minutes
- /decision "delay launch for QA" — logs the decision with reasoning, alternatives, owner, and a review date
- /capture "how billing actually works" — interactive interview that extracts tribal knowledge from whoever knows it
- /handoff alpha jordan — generates a "what I'd tell you over coffee" doc with full context, open items, landmines, and unwritten stakeholder notes
- /who-knows oauth — searches roster, project history, and tribal knowledge base for expertise matches
- /onboard [person] — generates a curated context package for a new team member
Why Plain Files
Plain files are debuggable, portable, and human-readable. If the AI layer goes completely down, the git repo still has every project doc, decision log, and template in it. The system degrades to "a well-organized shared folder" — not zero. You can open any file and read it without special tooling.
There's also a practical limit. Git handles concurrent writes well up to about fifteen active contributors. Below that, the simplicity wins. Above it, you migrate the storage layer — the file structure and agent patterns transfer directly to a database backend, with commands unchanged.
Start simple. Add complexity when the problem demands it.