Build a voice profile so AI actually sounds like you
Tiago Forte spent all day building a 20,000-word style guide for Claude and called the result bland. The problem is not the guide. Voice profiles capture style but miss voice. Style is sentence length and vocabulary. Voice is how you think. Here is the framework that bridges the gap.

If you remember nothing else:
- Voice profiles fail because they capture style (sentence length, vocabulary) but miss voice (your opinions, reasoning patterns, what surprises you)
- The optimal profile is under 400 words. Tiago Forte's 20,000-word guide made output worse, not better. Three before-and-after examples beat fifty rules.
- Profiles work brilliantly for short-form (emails, social posts). They fall apart for long-form articles because sustained voice requires ideological consistency that rules cannot capture.
- The 6-chain framework: discovery interview, sample analysis, draft profile, anti-AI rules, test and refine, format-specific versions
Tiago Forte is one of the most recognized productivity writers on the internet. In May 2025, he decided to build a comprehensive style guide so Claude would write in his voice. He identified 20 of his best essays, had Claude analyse them, and produced an 8,000-word guide with 168 bullet points describing his tone, quirks, vocabulary, and approach.
Then he expanded it. Added counter-examples for each point. Created a 4,000-word implementation guide with prompt templates. The total came to roughly 20,000 words of instructions.
He fed Claude 8,000 words of book highlights and asked it to write an essay using the guide.
His verdict? “A blandness, a vanillaness, a middle-of-the-road lukewarm neutrality that seems designed NOT to surprise.” His most important principle, he said, is to “try to ONLY write things that surprise.” The guide produced the opposite. It took “several times as much time as it would have taken me to write the whole thing myself.”
Later that year, Forte claimed Claude drafts 90% of his content. Readers revolted. One commenter wrote: “if a writer isn’t dedicated enough to write its own articles, I’m definitely not going to be dedicated to read.”
This isn’t a story about Forte being wrong. It’s a story about a fundamental gap in how voice profiles work. A gap that 20,000 words could not close.
Why most voice profiles produce text that sounds like nobody
Every.to, one of the better AI writing publications, nailed the core problem in March 2026: “A model is perfectly consistent from the start. That is why it sounds like nobody.”
Sit with that for a moment. AI doesn’t sound generic because it lacks instructions. It sounds generic because consistency IS its default mode. Human writing is interesting precisely because it’s inconsistent. We change pace mid-paragraph. We contradict ourselves. We use a word wrong because it sounds right. AI never does any of this unless you force it to. That’s the core problem.
The Carnegie Mellon and NJIT study published in PNAS in February 2025 quantified this beautifully. They analysed roughly 12,000 AI-generated texts and found that GPT-4o uses present participial clauses at 5.3 times the human rate. The word “camaraderie” appears 162 times more often in AI text than in human writing. “Tapestry” appears 155 times more. It showed up in 23% of GPT-4o outputs.
A separate psycholinguistic study mapped 31 stylometric features to cognitive processes. The finding that stuck with me: humans show evidence of cognitive struggle in their writing. Pauses. Revisions. Stylistic fluctuations that reflect actual thinking. AI doesn’t struggle. It just produces. And that smoothness is exactly what makes it detectable.
G2 analysed 5,000 reviews of AI writing tools and found only about 1 in 10 even mentioned voice mimicry. Of those, 7% praised it. The rest described outputs as “generic and robotic.” Voice matching remains, as they put it, “a work-in-progress.”
Surface voice vs deep voice
Here is the distinction nobody makes, and it’s sort of the key to everything.
Surface voice is what every template captures: sentence length, vocabulary choices, contractions, tone, formatting preferences. “Use short sentences. Avoid jargon. Write in active voice.” These are style rules. They are useful. But they are not your voice.
Deep voice is what templates miss entirely: your opinions, your reasoning patterns, your go-to analogies, what you find interesting about a topic, what you dismiss as irrelevant, how you think about cause and effect. “I believe most digital transformation fails because of incentive misalignment, not technology” is a voice element. It shapes how you write about everything. No style guide captures it.
The CyberCorsairs “Taste Interviewer” method gets closest. It uses 100 questions across seven categories: Beliefs and Contrarian Takes, Writing Mechanics, Aesthetic Crimes, Voice and Personality, Structural Preferences, Hard Nos, and Red Flags. The key insight is brilliant: “Taste isn’t what you like, but what you reject.”
That is the breakthrough. Defining your voice by what you would never write is more powerful than defining it by what you would. Most templates ask “What is your tone?” The better question is “What makes you cringe when other people write it?”
An analysis of 1,490 culturally marked texts from February 2026 found something troubling. When LLMs process writing, they don’t just flatten style. They strip social context, pragmatic function, and cultural identity. Your voice isn’t just how you write. It’s where you come from, what you have experienced, and what you care about. AI erases all of that unless you explicitly encode it.
The Hashmeta paradox makes this worse: “The hallmarks of exceptional writing - structured arguments, coherent flow, proper grammar - are precisely what triggers false positives” in AI detectors. Good writing is now suspicious. Which is, honestly, a proper mess.
The diminishing returns curve
Here is something counterintuitive that I keep seeing in my teaching. More instructions don’t produce better output. Past a certain point, they make it worse.
Voice profiles have an optimal complexity window. In the AI courses I teach, I tell students to keep it under 400 words. Longer profiles get ignored or diluted. The AI starts contradicting itself trying to follow too many rules simultaneously.
Forte’s 20,000-word guide was not just long. It was counterproductive. The AI could not hold all 168 bullet points in working memory at once. Some rules contradicted others. The result was not Forte’s voice with extra precision. It was mush.
Atom Writer’s research found that few-shot transformation pairs - showing the AI a before-and-after example of text rewritten in your voice - produce 25-35% better voice adherence than rules alone. Three good examples beat fifty bullet points every time.
This connects to how AI detection actually works. GPTZero measures two things: perplexity (how surprising the word choices are) and burstiness (how much the writing varies across the document). Adding more rules to a voice profile can actually lower perplexity by making AI more predictable. You are essentially optimising for detection.
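Burstiness is easy to get an intuition for in code. The sketch below is not GPTZero's actual algorithm - it is a crude stand-in that uses variation in sentence length (standard deviation divided by mean) as a proxy. Uniform sentences score near zero; human-like variation scores higher.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Crude burstiness proxy: coefficient of variation of sentence lengths.
    Near-uniform sentence lengths (low score) are a hallmark of unedited
    AI output; human writing varies more."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. The cat sat quietly on the warm windowsill all afternoon. Why?"
print(round(burstiness(uniform), 2))  # 0.0
print(round(burstiness(varied), 2))   # 1.3
```

A real detector models word-level probabilities with a language model rather than counting words per sentence, but the principle is the same: the flatter the variation, the more machine-like the text reads.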
An arXiv study of 14,700 samples found that even minimally AI-polished text triggers detection. The binary boundary between human and AI writing doesn’t exist. It’s a spectrum. And adding more rules pushes you further toward the detectable end.
Stop adding rules. Start adding examples.
The 6-chain framework
This framework comes from the AI courses I teach at WashU and OneDay. I have not seen it published anywhere else, and it produces consistently better results than the “paste a giant prompt” approach.
Chain 1: Discovery interview. Instead of trying to describe your own voice, let AI interview you. Ask Claude to ask you questions one at a time about how you communicate. What words do you use? What would you never say? How casual are you? What is your business context? The critical insight from Every.to: “the most distinctive markers of a writer’s style tend to be subconscious.” An interview surfaces patterns you would never articulate on your own.
Chain 2: Sample analysis. Provide 2-3 pieces of YOUR actual writing. Not your best writing. Your typical writing. AI analyses sentence length, rhythm, openings, closings, vocabulary patterns, and distinctive quirks. One of my students had Claude compare her personal 2014 essay to a 2025 AI-assisted blog post; Claude identified the 2014 essay as her real voice immediately. The newer, “better” version read like every other blog on the internet.
Chain 3: Draft profile. Under 400 words. This constraint is the feature. If you can’t capture your voice in 400 words, you are describing style, not voice. Focus on who you are, your core characteristics, your sentence patterns, and the paragraph structures you default to.
Chain 4: Anti-AI rules. This is the most important section. Your personal blacklist. Words AI should never use in your voice: delve, navigate, landscape, robust, comprehensive, leverage, facilitate. Patterns to avoid: uniform sentence lengths, the “not X, but Y” construction AI overuses, fake enthusiasm, press release tone, generic transitions like “moreover” and “furthermore.” Be specific. List actual phrases.
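Chain 4 is also the easiest chain to automate. A minimal sketch of a blacklist checker - the word and phrase lists below are illustrative starting points, not a definitive list; swap in your own:

```python
import re

# Illustrative blacklist -- replace with your own anti-AI rules.
BANNED_WORDS = {"delve", "navigate", "landscape", "robust",
                "comprehensive", "leverage", "facilitate"}
BANNED_PHRASES = ["moreover", "furthermore", "in today's fast-paced world"]

def voice_violations(text: str) -> list[str]:
    """Return every blacklist hit so a draft can be fixed before sending."""
    lowered = text.lower()
    hits = [w for w in BANNED_WORDS if re.search(rf"\b{w}\b", lowered)]
    hits += [p for p in BANNED_PHRASES if p in lowered]
    return sorted(hits)

draft = "Let's delve into the robust landscape of AI writing. Moreover, enjoy."
print(voice_violations(draft))  # ['delve', 'landscape', 'moreover', 'robust']
```

Run it on every AI draft before you hit send; a non-empty list means the output broke your voice rules.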
Chain 5: Test and refine. Give AI a real task you actually need to write. Not a test prompt. A real email, a real social post, a real document. Have AI write it using your profile AND explain which parts of the profile it leaned on. This diagnostic step is key. You will immediately see what is working and what needs adjustment.
Chain 6: Format-specific versions. Email, social posts, and long-form content need different adaptations. Keep each under 75 words. For email: short, open with the recipient’s name, one topic per message, put the ask in the first two sentences. For social: even more casual, okay to start mid-thought, end with something specific rather than a call-to-action. For long-form: paragraphs under four sentences, use subheadings, tell stories.
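One way to wire Chains 3 and 6 together is to keep the core profile and the per-format adaptations as separate pieces and compose them at prompt time. This is purely illustrative - the profile text and rule strings are placeholders for your own:

```python
# Placeholder for your <400-word Chain 3 profile.
BASE_PROFILE = "Core voice profile: direct, first person, no jargon."

# Per-format adaptations from Chain 6, each kept under 75 words.
FORMAT_RULES = {
    "email": "Short. Open with the recipient's name. One topic per message. "
             "Put the ask in the first two sentences.",
    "social": "Even more casual. Okay to start mid-thought. "
              "End with something specific, not a call-to-action.",
    "longform": "Paragraphs under four sentences. Use subheadings. Tell stories.",
}

def build_prompt(task: str, fmt: str) -> str:
    """Compose a system prompt: core profile plus format-specific rules."""
    return (f"{BASE_PROFILE}\n\n"
            f"Format rules ({fmt}):\n{FORMAT_RULES[fmt]}\n\n"
            f"Task: {task}")

print(build_prompt("Reply to Sam about the invoice", "email"))
```

Keeping the adaptations small and separate is what stops the combined prompt from creeping back toward Forte-scale bloat.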
Your voice profile is a specification of your thinking, not a style sheet. The specification is worth more than any individual output.
One developer who built their own voice clone put it well: “If you don’t like your current writing style, you won’t like the cloned results either.” The profile reflects you. Not a better you. You.
When to skip the profile and just edit
Voice profiles are not always the right tool. Knowing when to skip them saves real time.
For short-form writing - emails, social posts, Slack messages, meeting summaries - voice profiles are brilliant. The ROI from my course work is clear: invest 3 hours building a solid profile, save 4 or more hours per week on communications. You break even in a week. What used to take an hour of writing takes 15 minutes of editing. AI gets you 80% there. You finish the remaining 20%.
For teams that need brand consistency across many writers rather than a personal voice, the approach is different - it’s about enforcement layers, not personal identity.
For long-form writing - articles over 2,000 words, thought leadership, essays - profiles fall apart. Not because the style is wrong. Because sustained voice over many paragraphs requires argumentative structure, ideological consistency, and the kind of surprising connections that come from actually knowing things. Style rules cannot generate surprise. Only thinking can.
For long-form, the editing approach often works better: generate generic AI text as raw material, then edit it into your voice manually. Many writers find this faster than building and maintaining a complex profile for a use case where the profile will always fall short.
The bigger picture is worth watching. Consumer enthusiasm for AI-generated content dropped from 60% to 26% between 2023 and 2025, according to Billion Dollar Boy’s survey of 4,000 consumers. Marketers increased AI content spending by 79% over the same period. That gap between production and trust is widening.
The Slate piece on AI making writing worse described the bizarre consequence: people are now self-censoring good writing because it “sounds AI.” Removing em dashes. Leaving in typos deliberately. An English professor at the University of Illinois mistakenly flagged a 2019 academic article as AI-generated - three years before ChatGPT existed.
A Springer Nature study found something even stranger. Human writing across PubMed, Wikipedia, and Stack Exchange became measurably less diverse and more machine-predictable between 2014 and 2024. This trend predates widespread AI adoption. We were converging toward machine patterns before the machines even arrived.
The NYT ran a blind test with 86,000 readers; overall, 54% preferred the AI-generated writing. But consumer trust collapses the moment people know it’s AI. That gap between “preferred” and “trusted” is exactly where voice profiles live. They don’t make AI write better. They make AI write in a way that preserves trust.
The input side of this equation matters too: talking to AI is three times faster than typing, and the results are often better because you’re less filtered. The iteration methodology I use for prompt engineering applies here as well.
The quality check from my course that I keep coming back to: “If something looks suspiciously polished for a quick response, it will stand out as AI.” Sometimes the most authentic thing you can do is let it be imperfect.
About the Author
Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.
Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.