AI Governance

Grep 'n Guess: Why AI Can't Find What You Never Organized

I asked a simple question recently: "Where do our business rules actually live?"

The answers were honest. And they should worry anyone planning to hand their codebase to an AI assistant.

"In the stored procedures." "Sarah knows that one." "Check the wiki… wait, that's outdated." "The code is the documentation." "We figured it out last time, let me remember…"

Twelve years of a SaaS product. Hundreds of business rules. Spread across databases, code, documents, and people's heads.

This worked. It worked because humans are incredible gap-fillers. We remember context from a meeting three years ago. We infer intent from a variable name. We walk over to Sarah's desk and ask. We carry the system in our heads in ways no documentation ever fully captures.

AI cannot do any of this.

What AI does instead

When an AI coding assistant encounters a codebase, it does the best it can with what it has. It searches: files, functions, comments, patterns, naming conventions, code structure.

Then it guesses.

I call this grep 'n guess — the AI scans what it can see, infers intent from patterns, and fills in the gaps with what usually works.

The output is reasonable. Confident. Syntactically correct. And often wrong in ways that are difficult to catch.

Not because the AI is bad. Because the knowledge it needs was never designed to be found — or questioned — in this way.

The tribal knowledge problem

Every mature codebase has layers of decisions embedded in it. Why does this function check for null before proceeding? Because there was a production incident in 2019 where a null reference brought down the billing system. Why does the discount calculation have a hardcoded exception for accounts older than five years? Because the VP of Sales negotiated that in 2017 and someone coded it directly into the logic.

These decisions are invisible to an AI assistant. There is no comment. There is no document. There is a human who was there, and a codebase that carries the scar tissue of the decision without recording the reason.
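A minimal sketch of what that scar tissue can look like. Every name and number here is hypothetical, invented to illustrate the pattern, not taken from any real codebase:

```python
from datetime import date

LEGACY_CUTOFF_YEARS = 5  # Hypothetical constant. Why five? The code never says.

def discount_rate(account_created: date, base_rate: float) -> float:
    """Return the discount rate for an account."""
    age_years = (date.today() - account_created).days / 365.25
    # To an AI assistant, this branch is an arbitrary special case.
    # In this hypothetical, it encodes a 2017 sales agreement that
    # exists nowhere except inside this `if` statement.
    if age_years > LEGACY_CUTOFF_YEARS:
        return base_rate + 0.10
    return base_rate
```

Nothing marks the branch as load-bearing. A refactor that "simplifies" it away is syntactically clean and commercially wrong.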

When an AI assistant encounters this code, it sees the what but not the why. And when it modifies, extends, or refactors the code, it has no way to know which patterns are load-bearing business decisions and which are incidental implementation choices.

The result: confident code that quietly violates business rules nobody wrote down.

The real problem is structural

This is not an AI problem. It is a knowledge architecture problem that AI makes urgent.

When humans maintained the codebase, tribal knowledge was a tolerable risk. The people who carried the context were the same people making the changes. The knowledge and the action lived in the same head.

AI breaks that coupling. The knowledge stays in Sarah's head. The action moves to the AI assistant. And nobody notices the gap until the generated code does something the business rules do not allow — rules nobody realized were unwritten.

The question is not "how do we make AI smarter about our business rules?" It is "how do we make our business rules accessible to anything — human or machine — that needs to act on them?"

What structured knowledge looks like

The fix is not better AI. It is better knowledge organization.

Business rules that live in prose documents, wikis, and people's heads are advisory at best. They describe intent. They do not constrain behavior. An AI assistant — like a new hire — can read them and still get the implementation wrong because the description is ambiguous, incomplete, or contradicted by the actual code.

Business rules that are structured — queryable, explicit, connected to the code and systems they govern — are a different category of knowledge. They can be consumed by humans and machines alike. They do not depend on someone remembering the context from 2019.

The difference between "Sarah knows that rule" and "that rule is in the system" is the difference between knowledge that works for a team of ten and knowledge that works for a team augmented by AI.
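What "in the system" could mean in practice: a sketch, under the assumption that rules are recorded as explicit, queryable records rather than prose. The schema, rule ID, and module names are all hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BusinessRule:
    """An explicit, machine-readable record of a business decision."""
    rule_id: str
    statement: str      # what the rule requires
    rationale: str      # why it exists, and since when
    governs: frozenset  # modules or code paths the rule constrains

RULES = [
    BusinessRule(
        rule_id="BR-017",
        statement="Accounts older than five years keep a +10% discount.",
        rationale="Negotiated by VP of Sales, 2017.",
        governs=frozenset({"billing.discounts"}),
    ),
]

def rules_for(module: str) -> list[BusinessRule]:
    """Query: which rules constrain this module? A human or an AI tool
    can ask this before touching the code the rules govern."""
    return [r for r in RULES if module in r.governs]
```

The exact format matters less than the properties: the rule is findable by query, carries its own rationale, and points at the code it governs. No one has to remember to ask Sarah.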

The question worth asking

If you are adopting AI coding tools — or any AI tooling that acts on your business logic — ask yourself:

Could an AI find and correctly apply every business rule in your system without asking a human?

If the answer is no, the AI will do what any reasonable system does when it lacks information: it will guess. Confidently. At scale.

We will explore this territory further in the coming weeks: the gap between what our systems know and what they can enforce, and what that means for organizations moving fast with AI. There is a structural problem here worth understanding before the codebase gets too far ahead of the governance.

