Field Notes

A few months ago I looked at a prompt I'd been iterating on for weeks. It had fifteen lines of "never," "always," and "unless." Each one was the fossil of a failure I'd patched without diagnosing.

That's the difference between layer two and layer three of building with AI.

Layer two: review the output, tell it not to do that again. Layer three: trace the failure back to the structural assumption that produced it.

"Don't do that again" addresses the symptom. The mistake resurfaces in a slightly different form, you patch it again, the prompt gets longer, the fragility compounds quietly.

The third layer asks: what category of failure was this? Ambiguous prompt. Missing context the AI inferred wrong. Wrong task decomposition. Wrong evaluation criteria. Reviewing output against a standard you never made explicit.

Naming the category is what lets you change the underlying logic instead of annotating around a crack in it.
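Concretely, I've started keeping a small diagnosis log instead of appending rules to the prompt. Here's a minimal Python sketch of the idea; the category names, the Diagnosis shape, and recurring_categories are my own hypothetical scaffolding, not any library's API:

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum, auto

class FailureCategory(Enum):
    AMBIGUOUS_PROMPT = auto()       # the instruction could be read two ways
    MISSING_CONTEXT = auto()        # the AI inferred what I left out, wrongly
    WRONG_DECOMPOSITION = auto()    # the task was split into the wrong pieces
    WRONG_EVAL_CRITERIA = auto()    # I was grading against the wrong standard
    IMPLICIT_STANDARD = auto()      # I was grading against a standard I never stated

@dataclass
class Diagnosis:
    """One diagnosed failure: the symptom, the category that produced it, the structural fix."""
    symptom: str
    category: FailureCategory
    fix: str  # a change to the prompt's underlying logic, not a bolted-on "never"

def recurring_categories(log: list[Diagnosis]) -> list[tuple[FailureCategory, int]]:
    """Categories that have produced more than one failure -- candidates for a rewrite."""
    counts = Counter(d.category for d in log)
    return [(cat, n) for cat, n in counts.most_common() if n > 1]
```

Any category that shows up twice is telling you the prompt's logic needs a rewrite, not another rule.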

Don't just flag the shit. Stop making it.

What's your current process for diagnosing why a failure happened? Or are you treating each one as its own isolated problem?