build, break, learn

blind spots scale faster than code

how we accidentally used ai to automate our misunderstandings



we thought ai would help us rewrite a legacy app faster. it did — just not in the ways we expected.


it seemed like a simple ask

so a few months ago we landed a project that looked simple on paper:

rewrite this old app.

that’s it. that’s the brief.

no architectural manifesto. no neatly typed binder of documented requirements. just the source code and maybe… a sentence or two from someone who was fairly sure they'd written documentation once.

to make things even more exciting, we were handed an ai dev tool, that shiny new assistant everyone loves to talk about, and told to go nuts.

obviously, we thought this was going to be fun.

spoiler alert: it was fun.
also a dumpster fire. both at the same time.


ai looked helpful at first

at first, ai helped us get our bearings in unfamiliar code. it was like having a teammate who could read faster than all of us combined.
“what does this do?”
“here’s a summary.”
“here’s a guess at the architecture.”
“here’s something plausible!”

and we ate that up.

except the rewrite wasn’t just some code snippets. it was an entire application. pages. apis. workflows that nobody remembered properly.

and the ai was hyper-eager: great at giving you answers, terrible at saying “i don’t actually know this yet.”


chaos, but confident chaos

so what did “fun” really look like in practice?

well… picture this:

one developer asks the tool to summarize a module. another asks a slightly different question and gets a slightly different summary. a third runs off and starts coding a new page before the two summaries have even been compared. then someone else runs a prompt whose output quietly overwrites the first approach.

the ai cheerfully obliges and suddenly you have three versions of something that all kinda work — until they don’t.

we didn’t call it chaos at the time, but if chaos had a newsletter, we’d have made the cover.


and then things got weird

the early mistakes were classic.

  • we all read the same code but came away with slightly different interpretations.
  • ai happily filled in the blanks in ways that were plausible but not consistent.
  • components got built twice in subtly incompatible ways.
  • one person’s output overwrote another’s like a friendly code demolition derby.

and the best part?

none of it threw errors. nothing exploded. everything looked fine until integration time.
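
to make that failure mode concrete, here’s a hypothetical sketch (made-up names and values, not our actual code) of the kind of thing we kept finding: two ports of the same legacy helper, each built on a different unstated assumption.

```typescript
// hypothetical reconstruction, not our real code: two devs asked the
// assistant to port the same legacy helper and got answers that are each
// plausible on their own.

// dev a's version assumes the legacy `amount` field is cents
export function toDisplayPriceA(amount: number): string {
  return `$${(amount / 100).toFixed(2)}`; // 1999 -> "$19.99"
}

// dev b's version assumes the same field is dollars
export function toDisplayPriceB(amount: number): string {
  return `$${amount.toFixed(2)}`; // 1999 -> "$1999.00"
}

// both compile. both pass the unit tests each dev wrote against their own
// reading. nothing fails until a page built on a's model renders data
// produced by b's.
```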

that’s when we realized ai wasn’t broken.

we were looking at the same system through different lenses.


what we were missing

we had no common starting point and no single picture of what the app was supposed to be once rewritten.

ai did what we asked — but we were asking a dozen slightly different questions.

so we kept going. the one app turned into a series of rewrites, and we kept hoping the next one would magically be smoother.

that didn’t really happen.

sure, by app number two we were making fewer mistakes. by app three we started to laugh when we noticed familiar patterns resurfacing (“oh hey, it’s the same weird loop we forgot to reconcile last time”).

by the fourth, we could practically predict where the next misunderstanding would happen before lunch.

but it still felt like we were reinventing the same wheel on every rewrite.


uncovering where the work really was

here’s the part that mattered most:

we accidentally discovered that the code was the easiest bit.

ai could crank out lines fast enough. rewriting functions wasn’t the hard part.

the hard part was:

  • figuring out what we were even rewriting
  • agreeing on how pieces fit together
  • deciding how responsibilities were shared
  • noticing when someone’s answer to “what does this do?” subtly differed from someone else’s answer

in other words: the difficulty wasn’t speed — it was coordination, shared understanding, and common constraints.

ai was great in small doses.
it was awful without structure.


a slow shift

once we got that — once we actually noticed that the messy stuff wasn’t code quality but conflicting mental models — things changed.

not instantly.
not dramatically.

but slowly — steadily — enough that our next rewrite looked… different.

not because ai suddenly earned a cape, but because we stopped letting it fill the gaps we hadn’t bothered to identify.

we started writing down the steps that kept tripping us up. the assumptions we kept forgetting to reconcile. the places where two developers thought they were doing the same thing — but weren’t.
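
here’s a minimal sketch of what that looked like in practice, with hypothetical names: instead of letting each prompt re-guess what a legacy field meant, we pinned the assumption down in one shared place.

```typescript
// a minimal sketch of "writing the assumptions down": one shared module
// that every prompt, page, and rewrite has to agree with. names here are
// hypothetical, not our actual schema.

export interface PriceField {
  // legacy `orders.amount` column: integer cents, never dollars
  amountCents: number;
  // ISO 4217 code; the legacy app hardcoded "USD", the rewrite must not
  currency: string;
}

// one canonical formatter instead of several plausible ones
export function formatPrice(p: PriceField): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: p.currency,
  }).format(p.amountCents / 100);
}
```

nothing fancy. the point wasn’t the code, it was that the assumption finally lived somewhere every developer, and every prompt, could see.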

and from that mess of loops and runs and rewrite attempts came something a bit more organized. something we’ll talk about in a follow-up, once we’ve had time to polish it.


why this story matters

this story — this build, break, learn story — isn’t about that solution.

it’s about the fact that writing the code was never the real problem.

the real problem was everything in between:

  • the assumptions we never said out loud
  • the shared context we never actually built
  • the ways ai made us look smart without actually being aligned

if you’re wondering how this story ends — it doesn’t, really.

we ran in circles. learned a few things. ran in slightly smaller circles.

ai helped a lot. it also made our blind spots louder.

the breakthrough wasn’t better tools. it was realizing that speed without shared understanding just gets you lost faster.

the next step came from there.