Teammate Mental Model for AI
"It's a bit like this really smart intern that refuses to read Slack and doesn't check Datadog or Century unless you ask it to. And so no matter how smart it is, how much are you going to trust it to write code without you also working with it?" - Alexander Embiricos
What It Is
The Teammate Mental Model is a framework for thinking about how to effectively work with AI coding agents and AI assistants. Rather than treating AI tools as magic black boxes or simple utilities, you approach them as you would a new teammate joining your team.
This mental model, developed by Alexander Embiricos at OpenAI's Codex team, changes how you onboard AI tools, delegate to them, and build trust over time. Just as you wouldn't ask a new hire to architect your entire system on day one, you shouldn't expect an AI agent to work autonomously without first building shared context and trust.
The key insight is that AI agents today are like "really smart interns that refuse to read Slack"—they have raw capability but lack the organizational context, guidelines, and trust that enable autonomous work.
How It Works
The framework maps the human teammate onboarding process to AI agent adoption:
Phase 1: Initial Pair Work
- Work side-by-side with the AI in an interactive mode
- Let it see your code, your decisions, your patterns
- Build shared vocabulary and context
Phase 2: Guided Tasks
- Give the AI specific, bounded tasks with clear validation criteria
- Review its output carefully and provide feedback
- Learn which prompts work well and which don't
Phase 3: Expanded Autonomy
- Gradually increase the scope and duration of delegated tasks
- Configure the AI with access to more tools (tests, previews, monitoring)
- Establish "starter tasks" you know it handles well
Phase 4: Proactive Participation
- The AI can identify work that needs to be done
- It participates in planning, not just execution
- It can be "on call" to respond to signals autonomously
How to Apply It
- Start with pair programming - Don't begin by giving the AI complex autonomous tasks. Work together interactively first.
- Build context together - As you work, help the AI understand your codebase, your patterns, and your team's conventions.
- Establish validation loops - Configure the AI to test its own work, preview results, and verify changes before presenting them to you.
- Give credentials gradually - Just as you'd give a new teammate access to systems over time, progressively enable more integrations and permissions.
- Create a plan.md - For longer tasks, collaborate with the AI to write a plan first; only then ask it to execute. This mirrors how you'd align with a human on approach before they start building.
- Track what it's good at - Over time, build a list of task types where this AI teammate excels, and delegate those with confidence.
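As a concrete sketch of the "establish validation loops" step, an agent harness might run the project's own checks before presenting a change. The check commands below are placeholders for whatever your project actually uses; nothing here is a real Codex API.

```python
import subprocess

# Hypothetical validation loop: run the project's own checks and only
# report success if every one passes. The default commands are placeholders;
# swap in your project's real linter, test suite, and preview steps.
def validate_change(checks=("ruff check .", "pytest -q")) -> bool:
    """Return True only if every check command exits successfully."""
    for cmd in checks:
        result = subprocess.run(cmd.split(), capture_output=True)
        if result.returncode != 0:
            return False  # stop at the first failing check
    return True
```

Only when a loop like this passes would the agent hand the diff back for human review, which is what makes longer delegated tasks trustworthy.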
When to Use It
- When first adopting a new AI coding tool
- When delegating complex or multi-step tasks to AI
- When deciding how much autonomy to give AI agents
- When onboarding team members to AI-assisted workflows
- When evaluating why an AI tool isn't working well (often: not enough trust-building)
Source
- Guest: Alexander Embiricos
- Episode: "How to drive word of mouth | Nilan Peiris (CPO of Wise)"
- Key Discussion: (00:12:21) - The teammate analogy for AI agents
- Additional Discussion: (01:04:54) - Tips for building trust with Codex
- YouTube: Watch on YouTube
Related Frameworks
- Agency-Control Trade-off - More AI autonomy means less human control; earn trust before increasing agency