$ cat i-asked-an-ai-to-describe-how-our-company-actually-works.md
I asked an AI to describe how our company actually works

I asked AI to describe how our company actually works. It was embarrassingly accurate.
I've been collecting work transcripts locally on my machine. Stand-ups, planning sessions, retros, all of it. Just information accumulating.
A year of meeting transcripts. Hundreds of conversations. The full residue of what it looks like to try to pivot a company while also shipping product while also managing enterprise clients while also getting 70 people to actually use the AI tools you're selling.
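For concreteness, here is a minimal sketch of what "a year of transcripts on my machine" might look like as a corpus an AI can read. The folder layout, file naming, and helper function are my own illustration, not the actual setup.

```python
from pathlib import Path

def load_transcripts(root: str) -> str:
    """Concatenate every markdown transcript under `root` into one corpus.

    Assumes one transcript per .md file; each file's name becomes a
    section header so the model can tell the meetings apart.
    """
    parts = []
    for path in sorted(Path(root).rglob("*.md")):
        parts.append(f"## {path.stem}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```

The point is only that the raw material is trivially mechanical to assemble; everything interesting happens when a model reads it.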
I drafted a prompt because I wanted to understand, deeply, how the organization and my teams were operating. I wanted to know if there were workflow problems.
What came back was not a workflow answer.
It was an org portrait. A power map, an informal org chart that bore only passing resemblance to the formal one, a taxonomy of how decisions actually get made versus how they're supposed to get made. And a multi-stage definition of done that nobody had ever written down explicitly, but that the AI had apparently inferred from the accumulated evidence of every meeting where something was "done" but then turned out not to be.
The line that caught my attention: "The gap between 'it's built' and 'customers can use it' is where things go to die."
I read it out loud to my team. Nobody disagreed.
The definition of done is worth sitting with for a second, because most product and engineering teams are living in this gap and calling it something else.
The AI surfaced a progression that nobody had explicitly named but everyone apparently understood:

- Built: the code exists and the engineer who wrote it knows it works.
- In a PR (pull request): someone else has seen it.
- On a test server: you could, theoretically, look at it.
- Shipped: it's in production and customers could, theoretically, access it.
- Documented: customers could, theoretically, understand it.
- Celebrated: the team knows it shipped, the company knows it shipped, the customer knows to go use it.
Most teams operate as if "it's built" and "customers can use it" are the same stage. They're not. There are four stages in between, and that's where the work disappears: into unreviewed PRs, into test servers nobody goes back to check, into features that shipped but nobody knows about.
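The progression above is really a small state machine, and the "gap where things go to die" is a range within it. A minimal sketch, with stage names taken from the transcripts but the data model entirely my own invention:

```python
from enum import IntEnum

class DoneStage(IntEnum):
    """Stages of 'done', in the order the AI surfaced them."""
    BUILT = 1        # code exists; the author knows it works
    IN_PR = 2        # someone else has seen it
    ON_TEST = 3      # you could, theoretically, look at it
    SHIPPED = 4      # in production; customers could access it
    DOCUMENTED = 5   # customers could understand it
    CELEBRATED = 6   # team, company, and customer all know it exists

def is_in_the_gap(stage: DoneStage) -> bool:
    """True while work sits between 'it's built' and 'customers can use it'."""
    return DoneStage.BUILT <= stage < DoneStage.CELEBRATED
```

Nothing about the model is clever; the uncomfortable part is that every stage short of CELEBRATED reads as "done" from inside the team.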
The other thing the AI produced was more uncomfortable.
What it got at, without ever using the word "culture," was the informal power structure. Who influences decisions that aren't technically theirs to make. Where organizational energy actually flows versus where the org chart says it should. Which patterns repeat across meetings, unnoticed by the participants because they're inside them.
This is not magic. The AI isn't sentient. It's pattern recognition applied to a very large sample of unguarded human communication. Meetings are where people say what they actually think, or at least more of what they actually think than they write in docs or put in Slack (okay, sometimes Slack too; we've all been in that thread).
That's what I keep coming back to. None of this organizational knowledge was secret. It wasn't even hidden. It was just distributed across hundreds of conversations, never synthesized, never named, never written down in one place. A sufficiently capable model reading the transcripts could surface it in one session.
The question this leaves me with is what we'd do differently if we assumed an AI could read all our meetings.
Not in a paranoid way. In a useful way.
And what would we actually fix, now?
I'm still figuring this out: the messy middle of using our own tools on our own organization, then having to decide what to do with what we find.
$ _