lauren@terminal:~/blog$

$ cat your-meeting-transcripts-dont-need-to-leave-your-laptop.md

Your meeting transcripts don't need to leave your laptop

There are great tools out there. Granola is one of them. Unfortunately, my company doesn't agree. As Head of Product, it's not a good look to engage in shadow IT, and since I'm building privacy-first AI products, this is also an opportunity to practice what we preach. Streaming all these conversations to the cloud is a privacy risk.

Enter the opportunity to build a solution with AI.

what I actually built

The pipeline is pretty simple once you see it laid out. Audio Hijack records the meeting audio on my Mac. It captures both sides of the conversation, what the other person says through Zoom and what I say through my laptop mic. Those recordings go to MacWhisper, which runs a local transcription model with speaker recognition. Then a Python script picks up the transcript, checks my Apple Calendar to figure out which meeting it was, and saves it as a clean markdown file organized by month.

The whole thing runs locally. No audio leaves my laptop. No transcripts get uploaded anywhere. The markdown files feed into AnythingLLM so I can search across past meetings and ask questions like "what did we decide about the pricing model in January?" I also have a series of prompts that I can run to mimic some of my favorite features in Granola. It took about four sessions to build, spread across a couple of days. And most of that time was spent on things breaking.

the part where everything broke

I want to be honest about this because I think people skip over the messy middle when they write about building things. "I had a problem, I built a solution, it works great." That's not what happened. What happened was two solid rounds of everything going sideways.

The first disaster was the echo problem. Audio Hijack has its own virtual audio driver called ACE, and it seemed like the obvious way to capture Zoom audio. I set it up, started a test call, and immediately my colleague said "you sound like a robot." Not in the fun sci-fi way. In the "I can't understand a word you're saying and there's a horrifying echo" way.

I tried three different configurations. Every single one caused feedback. The ACE driver was intercepting Zoom's audio pipeline in a way that broke everything. Other participants couldn't hear me properly, I disappeared from their recordings, and one person described my voice as "roboty sounds." Which is a contender for the quote I'll be putting on my tombstone.

The fix was switching to BlackHole, which is a virtual audio cable that takes a totally different approach. Instead of intercepting audio, it just routes a copy of it. I set up a Multi-Output Device in macOS that sends audio to both my speakers and BlackHole simultaneously. Audio Hijack captures from BlackHole for other people's audio and my MacBook mic for mine. No ACE driver involvement at all. It just worked.

The second disaster was sneakier. I finished a 49-minute meeting, felt good about the setup, went to check the transcript, and found this: "Okay. Alright, can you talk and tell me if."

That's it. That was the entire transcript of a 49-minute meeting.

What happened was a race condition. MacWhisper has a "Watched Folders" feature that automatically transcribes any new audio file that appears in a folder. Sounds perfect, right? Except it grabbed the recording file while Audio Hijack was still writing to it. So it transcribed whatever existed at that exact moment, which was about eight seconds of audio.

The fix was adding a staging directory. Recordings land in a staging folder first. The Python script checks that folder every five minutes, and it only moves a file to MacWhisper's watched folder after the file hasn't been written to for at least two minutes. That way the recording is definitely done before MacWhisper touches it. The first time it worked properly, the script waited 293 seconds of idle time before promoting the file. I watched the log like it was a season finale.
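The staging gate boils down to one check: has the file's modification time been idle long enough? A minimal sketch of that logic, assuming hypothetical folder paths and the two-minute threshold described above:

```python
import shutil
import time
from pathlib import Path

def promote_finished_recordings(
    staging: Path, watched: Path, idle_seconds: int = 120
) -> list[Path]:
    """Move recordings from the staging folder to the transcriber's watched
    folder, but only if they haven't been written to for `idle_seconds`.

    This avoids the race where a watcher grabs a file mid-recording.
    """
    promoted = []
    now = time.time()
    for audio in staging.glob("*.m4a"):
        idle = now - audio.stat().st_mtime
        if idle >= idle_seconds:
            dest = watched / audio.name
            shutil.move(str(audio), dest)
            promoted.append(dest)
    return promoted
```

Run it from launchd or cron every five minutes; files still being written get skipped on this pass and picked up on a later one, once they've gone quiet.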

what I learned building this with AI

I didn't know how macOS audio routing worked before this project. I didn't know what a virtual audio cable was.

I built this entire pipeline with Claude Code as my pair programmer. But "I just told the AI what I wanted and it built it" is not the story of what happened.

What happened was more like a conversation. I'd describe the problem, Claude would suggest an approach, I'd try it, it would break in some new way, and we'd go back and forth until we figured it out. The ACE driver debugging took an entire session of trying things, reading error output, and narrowing down the root cause together. The staging gate logic went through several iterations before we landed on the "check file modification time" approach.

AI didn't make this effortless. But it made it possible. A year ago, I wouldn't have even attempted this project because the technical pieces felt too far outside what I could do. Now I can build something real, something I use every day, and iterate on it when things don't work. That's a genuine shift in what's available to someone with my background.

One practical tip: document everything as you go. Ask Claude to build you a handoff doc it can absorb quickly. In fact, build a skill, because you'll use this a lot. Claude has no memory between sessions, so if you don't document what you figured out, you'll burn your next session re-explaining it. This is known as context rot, and it will quietly kill your momentum.

where it stands

The pipeline runs daily now. I start Audio Hijack when a meeting begins, stop it when the meeting ends, and ten minutes later I've got a clean, searchable transcript in my notes folder. No cloud. No subscription. Just my meetings, on my machine, organized by month and matched to my calendar.

It still has a rough edge. MacWhisper's file watcher doesn't always trigger on moved files, so occasionally I have to nudge it. Granola still runs as my backup while I iron that out. Shipped and iterating, which is honestly how I think about everything I build. Get it working, use it, fix what breaks, repeat. The pipeline isn't perfect. But it's mine, it's local, and my 1:1 feedback conversations aren't living on anyone else's servers anymore. That's enough for now.

$ _

© 2026 Lauren Out Loud. All rights reserved.

System: Retro Terminal v2.1.0

Uptime: No script, still running | Coffee: ∞

// Type "help()" in the console for a surprise | Press Ctrl+Alt+Del to restart