[Image: abstract ouroboros made of code fragments]

I Built 8 Layers to Remove AI Voice. People Still Say It “Smells Like AI.”

March 2026 · 8 min read

And somehow, that hurts my feelings? About AI? What is happening to me.

What I Did

OK so full disclosure: I am not a developer. I am a marketer who learned just enough about Claude Code to be dangerous. Everything I've built is stolen from other people and duct-taped together with stubbornness.

I have this content pipeline. One command, 8 chained AI agents, a month of content in 20 minutes. I cobbled it together from a dozen tutorials, tweets I bookmarked at 1am, and a lot of trial and error. Mostly error.

The pipeline does this:

1. Scans my notes for ideas I keep obsessing over (stolen from a PhD student's Obsidian setup)
2. Maps ideas to content pillars (stolen from Mat Do's content house framework)
3. Builds a 4-week calendar (the Build/Reinforce/Reinforce rhythm I stole from Boring Marketer)
4. Writes everything (this part Claude does on its own and honestly it's fine)
5. Catches fake quotes the AI invented (learned about this the hard way)
6. Strips AI voice — 47 patterns I catalogued because they annoyed me
7. Checks brand voice (does it still sound like me or like a LinkedIn robot)
8. Runs adversarial review until the quality score hits 8.5/10 (concept stolen from Maxwell Finn)

8 layers. Months of tinkering. $4 in API calls.
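If you're curious what "chained" actually means here, the shape is roughly this. A minimal sketch, not my real code: the agent names and the `run_agent` helper are hypothetical stand-ins for however you invoke your model (Claude Code subagents, raw API calls, whatever).

```python
# Minimal sketch of the 8-step chain. Agent names are made up for illustration.

def run_agent(name: str, payload: str) -> str:
    """Placeholder: send payload to the named agent, return its text output."""
    raise NotImplementedError("wire this to your model of choice")

def run_pipeline(notes: str) -> str:
    ideas    = run_agent("content-radar", notes)       # 1. ideas I keep returning to
    pillars  = run_agent("pillar-mapper", ideas)       # 2. map ideas to content pillars
    calendar = run_agent("calendar-builder", pillars)  # 3. 4-week Build/Reinforce plan
    draft    = run_agent("writer", calendar)           # 4. write everything
    draft    = run_agent("fact-checker", draft)        # 5. catch invented quotes
    draft    = run_agent("de-ai-ify", draft)           # 6. strip the AI-voice patterns
    draft    = run_agent("brand-voice", draft)         # 7. does it still sound like me?

    # 8. adversarial review: revise until the score clears 8.5/10, capped so a
    # stubborn reviewer can't burn API credits forever
    for _ in range(5):
        review = run_agent("adversarial-reviewer", draft)
        # assumes the reviewer is prompted to end with a line like "SCORE: 8.5"
        score = float(review.rsplit("SCORE:", 1)[1].split()[0])
        if score >= 8.5:
            break
        draft = run_agent("writer", f"{draft}\n\nREVIEWER OBJECTIONS:\n{review}")
    return draft
```

The real thing has more plumbing, but that's the whole shape.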

And then I show someone the output and they go: “Yeah, smells like AI.”

Why This Gets Under My Skin (And I Wish It Didn't)

I should be able to shrug this off. I'm not pretending to be some AI authority. I'm a guy with an Obsidian vault and too much time on his hands who figured out how to chain some agents together.

But when someone dismisses the output as “AI,” there's this weird defensive reflex. Not about the tool — about the work. I spent real time selecting those ideas. I actually care about those topics. The content radar picked them because I kept returning to them over months. That's not AI choosing — that's my obsession being measured.

The ideas are mine. The pipeline is stolen. The execution is AI. And somehow none of those distinctions matter when someone's AI detector goes off.

It's like showing someone a photo you took and they go “Photoshop” and you're like — yes, I used Photoshop, but I was actually there? The sunset was real?

I spent 20 years being told I was too scattered. Philosophy and marketing. Consciousness research and SEO. History and SaaS metrics. Career counselors furrowed their brows. “So what exactly do you do?” Now I've finally found a way to wire all that scattered reading and thinking into output that actually ships — and the response is “smells like AI.” The universe has a sense of humor.

I realize this is not a rational response. I am defending the honor of my markdown files. This is where we are in 2026.

The Honest Part

OK here's where I have to be fair to the AI-sniffers.

They're mostly right. Not about my stuff specifically, but about the base rate. 95% of AI content IS garbage. Paste-and-publish slop that sounds like a corporate retreat brainstorm. “In today's rapidly evolving landscape...” — that kind of stuff makes me physically uncomfortable and I use AI every day.

So when someone's heuristic fires on my content too, that's not unfair. That's a Bayesian prior doing its job. If you tell me “AI helped write this” and I don't know you, I should assume it's bad. Because statistically, it is.
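You can put rough numbers on that. These probabilities are invented for illustration, not survey data, but they make the point:

```python
# Back-of-envelope Bayes: if most AI-assisted content is slop, how confident
# should a stranger be when their "smells like AI" heuristic fires?

p_garbage = 0.95              # prior: share of AI-assisted content that's bad (assumed)
p_smell_given_garbage = 0.90  # bad content usually smells like AI (assumed)
p_smell_given_good = 0.30     # even decent AI-assisted content trips the detector (assumed)

p_smell = p_smell_given_garbage * p_garbage + p_smell_given_good * (1 - p_garbage)
p_garbage_given_smell = p_smell_given_garbage * p_garbage / p_smell

print(f"P(garbage | smells like AI) = {p_garbage_given_smell:.2f}")  # ~0.98
```

Roughly 98%. The sniff test is a good bet even when it happens to be wrong about you specifically.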

My stuff is also genuinely imperfect. Eight agents solve for quality gates — factual accuracy, objection rates, voice consistency. They don't solve for soul. And I'm not going to pretend they do. There's a layer under all the optimization that I can't name or measure. The part that makes writing feel like a specific person wrote it because they couldn't not write it.

Karpathy nailed this. He noticed AI agents don't surface their own confusion. They don't push back. Everything is too smooth, too resolved, too agreeable. The output looks confident even when the model is guessing.

My pipeline has the same disease, probably. Just milder. I built agents that push back — but they push back along dimensions I programmed. What about the dimensions I can't see?

The Funny Part

Here's what makes me laugh about this whole situation:

One of my 8 agents literally exists to strip AI voice from the content. It removes “moreover,” “it's worth noting,” “at the intersection of” — 47 patterns. An AI agent whose entire job is making AI content not sound like AI content.
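The mechanics are embarrassingly simple. A minimal sketch, with four stand-ins for the full list of 47 (the real agent rewrites in context rather than just deleting, but the flagging part looks like this):

```python
import re

# A few of the tells. Stand-ins for the full catalogue of 47.
AI_TELLS = [
    r"\bmoreover\b",
    r"\bit'?s worth noting\b",
    r"\bat the intersection of\b",
    r"\bin today'?s rapidly evolving\b",
]

def flag_ai_voice(text: str) -> list[str]:
    """Return each tell found in text, so a rewrite pass knows what to fix."""
    hits = []
    for pattern in AI_TELLS:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            hits.append(match.group(0))
    return hits

print(flag_ai_voice("Moreover, it's worth noting that synergy..."))
# ['Moreover', "it's worth noting"]
```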

That's... that's the whole thing, isn't it? I built an AI to police other AIs for sounding too much like AI. There's a snake eating its tail somewhere in here and I'm the snake and also the tail.

@EXM7777 posted that “Opus with the right skills is already AGI for marketing.” And yeah, maybe for the systems part. But voice isn't a system. Voice is identity. And I'm not sure you can encode identity into a yaml file, no matter how many gotcha patterns you document.

Katie Parrott from Every wrote about “craft vocabulary” — the idea that you need words for specific writing choices before you can direct AI to make them. I've been trying to name everything. 47 patterns. 3 buyer voices. Quality gates.

But here's what I think the real gap is. I wrote something recently about how AI has infinite range and decent perception but zero taste. Taste requires having been wrong — having shipped the thing that flopped, backed the idea that went nowhere. Scar tissue, not training data.

I think “smells like AI” is a taste detection. My pipeline has range (6 months of obsessive notes from philosophy, marketing, consciousness research — all the scattered interests I was told were a liability). It has decent perception (the agents find real connections). But taste is the thing that makes you cut a sentence that's technically fine because something about it feels off. My agents don't feel off. They feel correct. And “correct” might be exactly the problem.

Human writing has mess. Not error — mess. Digressions. Sentences that start confident and end uncertain. Paragraphs that exist because the writer needed to think something through, not because the structure required them. My pipeline optimizes for publishable. It doesn't optimize for the mess that makes you feel like a real person is on the other end, figuring it out as they go.

What I Stole And What I'm Trying Next

I should be clear: almost nothing I've built is original. The content radar concept came from watching what Shreya Shankar at Berkeley described as “research loops with qualitative evaluators.” The adversarial review loop came from Maxwell Finn's marketing optimization frameworks. The SOP-to-agent approach came from Riley Brown documenting his thought processes. The content house came from Mat Do. The Build/Reinforce rhythm came from Boring Marketer.

I'm a synthesizer. I find things that work in one context and wire them together in another. Sometimes it works. Sometimes it smells like AI.

Three things I'm trying next:

Break the pipeline output on purpose.

After the 8 agents finish their perfectly optimized content, I'm going to mess it up. Insert a digression that doesn't connect. Cut a paragraph the structure needs but the feeling doesn't. Leave a sentence that limps. Like distressing furniture — sand it smooth, then scratch it so it looks like someone actually lives there.
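I don't know yet what that looks like in practice. Possibly something as dumb as this hypothetical post-pass, though the honest version is me doing it by hand:

```python
import random

def distress(paragraphs: list[str], seed: int = 0) -> list[str]:
    """Hypothetical 'distressing' pass: knock out one interior paragraph so the
    structure stops being perfectly load-bearing. The serious version is a
    human doing this with judgment, not a random number generator."""
    out = list(paragraphs)
    if len(out) > 3:
        out.pop(random.Random(seed).randrange(1, len(out) - 1))  # spare opener and closer
    return out
```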

Stop asking reviewers. Start watching readers.

People who know AI touched the content have a different response than people who just... read it. I want to see if actual readers — who don't know how it was made — react the same way. The “AI smell” might be a knowledge bias, not a quality signal.

Keep building anyway.

Because the alternative is writing everything from scratch. And I've tried that. Ideas die in my notes app. Drafts stay at 30% forever. I am a starter, not a finisher. The pipeline finishes for me. That's worth defending even if the output isn't perfect.

George Nurijanian wrote something I keep coming back to: you can't automate past your understanding. He calls it the Three Gulfs — Comprehension, Specification, Generalization — and you have to close them in order. My understanding of what makes writing feel human is incomplete. So my pipeline reflects that incompleteness. It optimizes what I can name. It misses what I can't.

The Weird Identity Thing

The strangest part: when someone says “smells like AI,” I take it personally. Like they're criticizing me, not a tool. Because the pipeline IS me. I built every rule. Every quality gate. Every banned word. Every adversarial persona. If the output lacks soul, whose soul is missing?

My skills are basically psychological self-portraits in markdown format. The buyer-voice agent contains my anxiety about bad copy. The de-ai-ify agent contains my irritation at “moreover.” The fact-checker contains the time I almost published a fabricated Peter Drucker quote and broke into a cold sweat.

So yeah. “Smells like AI” hits different when the AI is wearing your personality. It's like someone telling you your kid is ugly. You know the kid has your face.

I don't have a clean ending for this. A clean ending would itself be a tell.

I'm a non-developer who steals smart people's ideas, wires them together with AI, ships imperfect content, and gets his feelings hurt when people notice the seams.

That's NXRB in a nutshell, actually. I'm documenting how I do this. Not because I'm good at it — because I'm going through it and writing it down as I go.

Follow if that sounds useful. Or don't. I genuinely don't know what I'm doing.

NXRB: playbooks for operators who'd rather build than hire. Built by someone who'd rather steal than invent.