- AI
- Coding Agents
- Engineering Practice
Your AI Coding Agent Is Only as Good as Its Weakest Variable
Four things have to align for an AI coding agent to work. Miss one and you will waste time fighting the tool.
After using AI coding agents across a variety of projects, models, and methodologies, I have a theory about when they work and when they don’t. Four things have to align.
The harness. The tool (Claude Code, Cursor, etc.) and everything else you give it to work with: external integrations like your issue tracker and review comments, plus a domain knowledge base of architecture docs, coding conventions, and debugging guides. A well-configured harness is the difference between an agent that can reason about your codebase and one that's guessing.
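As a rough sketch of what that domain knowledge base can look like, here is a hypothetical agent-instructions file. The filename, paths, and rules are all illustrative, not a prescribed format; the point is that conventions the agent would otherwise have to guess are written down:

```markdown
# CLAUDE.md (hypothetical example)

## Architecture
- API lives in `services/api`; background jobs in `services/worker`.
- All database access goes through the repository layer in `src/repos/`.

## Conventions
- TypeScript strict mode; no `any` without a comment explaining why.
- Every new endpoint needs an integration test in `tests/api/`.

## Debugging
- Reproduce a failure with the project's test runner before changing code.
- Check `docs/runbooks/` before touching deployment config.
```

A file like this costs a few minutes to write and pays for itself the first time the agent follows a convention instead of inventing one.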
Problem complexity. Two dimensions matter here: the intrinsic difficulty of what you’re building (a greenfield CRUD endpoint versus untangling a distributed transaction bug), and the accidental complexity of the codebase itself. Ambiguous naming, inconsistent patterns, conflicting documentation — from the model’s perspective, that’s noise it has to reason through before it can do anything useful.
The model. Not all models are equal, and the relationship between problem complexity and model capability is non-linear. A problem twice as complex may need a model an order of magnitude more capable. Pick accordingly.
How you use it. At one end: vibe coding, where you give the agent the outcome and let it find the path. At the other: micromanaged steps with verification at each one. Most of the interesting work happens in between. The right method depends on the problem, the model, and how much patience you have in the moment. Whatever the method, garbage in, garbage out: a vague prompt gets you vague code.
You have to get all four right. Miss one and you will waste time fighting the tool. And remember, the goal is to create value faster, not just code faster — there’s a difference.