90 Minutes to Do 5 Minutes of Work

Setup is not overhead. It is the infrastructure that makes AI output predictable across sessions and across teams.

Two hours on one Jira story. Ninety minutes went to context, rules, tooling, and IDE wiring. The actual change took five minutes.

That ratio is not a mistake. It is the real price of deterministic work with AI.

Most teams treat setup like a tax they can skip. They borrow time from the future and pay it back as rework, drift, and the same landmines discovered independently by every new person who touches the repo.

Why skipping setup looks rational (and is not)

Setup does not move the metrics managers watch. No ticket closed. No PR merged. No burndown line jumps.

So teams optimize for visible motion. They open a model, paste a prompt, and hope the session goes well.

The failure mode is not obvious at first. The first answer often looks fine. The second one does too. Then the inconsistency shows up: different developers get different guardrails, different defaults, different interpretations of "done," and different blind spots in auth, migrations, or build constraints that nobody wrote down.

AI magnifies the system you actually have. If your system is implicit, your AI output will be implicit too. It will vary by person, by day, and by mood.


The better model: setup is infrastructure, not overhead

Think of setup as buying determinism. You are not paying to do today's task faster. You are paying so tomorrow's task starts from the same map, the same constraints, and the same standards.

Most teams optimize for single-session wins. Instead, optimize for repeatability. A ninety-minute investment that makes every future session start in the same place compounds. A skipped ninety minutes saves nothing. It pushes discovery cost into random later moments, usually when people are tired and under pressure.

There are two common shapes of this investment. They are not the same work, but they obey the same math.

First, the personal system: editor configuration, project rules, agent definitions, context packs, and the small rituals that keep one developer aligned with how your team writes code and reviews it.

Second, the repo-level knowledge layer: agents, skills, and rules that encode how a legacy codebase behaves. Fifteen years of developer generations leaves fingerprints: Windows-only builds, tangled auth paths, framework assumptions that predate half your team. The machine does not inherit that history unless you encode it where tools read it every time.

Chapter 5.1 in my book is the team-level version of this: standardize the AI-native IDE and account model so people are not improvising different stacks. Phase 0 in Chapter 4 applies the same principle upstream: context loading is the first PIT step because weak context creates confident bad output.


A layered breakdown you can use now

1) Name constraints before the task

Models are eager to help. If you do not pin constraints first, they will invent plausible ones.

List non-negotiables: target framework, OS constraints, test commands, forbidden patterns, security boundaries. Put them where the tool cannot miss them.
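As a sketch of what that can look like, here is a hypothetical repo-level rules file. The filename, the framework version, and every script name below are illustrative assumptions, not prescriptions; substitute your own non-negotiables:

```markdown
<!-- .ai/rules.md — hypothetical repo-level rules file, read by your AI tooling -->
## Non-negotiable constraints

- Target framework: .NET Framework 4.6.2. Do not suggest APIs newer than this.
- Build: Windows only. `build.ps1` must run before any test command.
- Tests: run `test.ps1`; do not invent other test runners.
- Forbidden: raw SQL outside the repository layer; edits under `Auth/` without review.
- Security boundary: no new endpoints without an authorization attribute.
```

The point is not this exact file. The point is that the constraints live where the tool reads them on every session, not in one developer's head.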

2) Encode the map, not trivia

Do not dump a wiki into a prompt. Encode landmines: the auth path that surprises people, the script that must run first, the project that only compiles on Windows, the serializer that breaks if you touch it.

If a senior engineer gives the same warning every quarter, that warning belongs in rules or skills. If it is not written, every developer pays tuition again. So does every agent.
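The same file can carry those quarterly warnings. A hedged sketch, with invented specifics standing in for whatever your seniors actually repeat:

```markdown
## Known landmines (examples are hypothetical)

- `LegacySerializer` breaks wire compatibility if field order changes. Never reorder members.
- `Reporting.csproj` compiles only on Windows; exclude it from cross-platform suggestions.
- The auth path re-checks tenant IDs in middleware; bypassing it "works" locally and fails in production.
- Run `./scripts/generate-proxies.ps1` before building the client project, every time.
```

Each entry is one tuition payment, made once, on behalf of every future developer and every future agent.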

3) Separate team workflow from repo truth

Personal setup is about toolchain and habits. Repo setup is about facts that should survive employee turnover.

Keep team standards where they belong. Keep repo-specific hazards with the code. Mixing them creates noise, and noise erodes trust.

4) Build for the next session, not the current win

When you finish setup, ask one question: if I open this project cold in a week, will I land in the same working state without re-deriving anything?

If the answer is no, you saved minutes and bought days of variance.

5) Treat legacy encoding as your escape path

When agents can navigate a .NET 4.6.2 monolith with a gnarled auth layer, you are not only reducing fear. You are compressing time-to-understanding for a rewrite.

AI does not just help teams survive legacy code. It shortens the distance between "we are afraid to touch this" and "we know exactly what to replace first."


What good looks like

A new developer can open the repo and produce a review-ready change without a hero tour. They may still need domain knowledge. They should not need oral history to avoid breaking something critical.

AI sessions stop feeling like roulette. Two people with the same ticket get materially similar guardrails because constraints are not living in Slack memory.

Incidents shift from "nobody told the model" to "we need to tighten a rule," which is a solvable engineering problem.

The hours you skip on setup are not saved. They are moved downstream, with interest, into rework and re-explained context.

One question

If you logged setup time as infrastructure the way you log CI or on-call, what would your last sprint look like, and what would you stop pretending is optional?

For related field notes, browse the blog archive and implementation patterns in resources.