
By Gagan Malik
Everyone says the best AI is the most capable AI. Use the biggest model, write better prompts, and get out of the way. I think that is backwards.
When you use a frontier LLM cold, it pulls your output toward the middle of the internet. I write with a specific voice, one that took years to develop across essays, videos and social media posts. When I fed my drafts into GPT-4 and Claude without context, the outputs were clean but flat. This is not anecdotal. Research published in 'PNAS' in 2025 found that LLMs produce systematically homogenised outputs, with each additional AI-generated piece contributing less unique diversity to a body of work than a human-authored one would. A separate peer-reviewed study found that instruction-tuned models are trained into a particular noun-heavy, informationally dense style that actively limits their ability to mimic other writing registers. The model is not failing. It is doing exactly what it was built to do, which is serve everyone, not you.
The obvious response is that this is a prompting problem. Give it more context, write better instructions, and the outputs improve. That is true, and I tried it. Better prompts do help. But a 2025 arXiv study tested exactly this assumption and found that even with few-shot prompting, LLMs still struggle with nuanced, informal writing in blogs and forums, which is precisely the register most personal content lives in. At some point you are spending more effort teaching the model who you are than you are saving by using it. That is a broken workflow.
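For concreteness, the technique the study tested looks roughly like this: you prepend samples of your own writing to every request and ask the model to imitate them. A minimal sketch in Python, assuming the OpenAI client; the model name and the samples are placeholders, not the study's setup:

```python
# A minimal sketch of few-shot style prompting, not the study's exact
# setup: prepend writing samples to every request and ask the model to
# match them. Model name and samples are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_in_my_voice(samples: list[str], task: str) -> str:
    # The samples ride along with every single call; nothing persists.
    examples = "\n\n---\n\n".join(samples)
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Match the voice and register of these samples:\n\n"
                        + examples},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content
```

The broken workflow is visible in the structure itself: the samples have to ride along with every call, and the model forgets you the moment the call returns.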
A model fine-tuned on your own work knows your intellectual history, not just your last message. I have five years of writing: essays, transcripts, notes, drafts. That is not just content. It is a record of how my thinking has moved, what I keep returning to, what I have changed my mind about. Fine-tuning a model on a personal corpus is an established technique. In one documented case, a model fine-tuned on an academic researcher's corpus reached a cosine similarity of 0.8 between its output and the researcher's own writing style, evidence that a model can learn not just vocabulary but structural and stylistic patterns from a focused dataset. Fine-tuning made my own archive useful in a way search never could.
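That 0.8 is a cosine similarity between embeddings of the model's output and the original writing. As a rough illustration of how such a score gets computed (the embedding model here is my assumption, not the study's), using sentence-transformers:

```python
# Rough illustration of the style-similarity measurement: embed the
# generated text and the original, take the cosine between the vectors.
# The embedding model is an assumption; the study's choice may differ.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def style_similarity(generated: str, original: str) -> float:
    generated_vec, original_vec = encoder.encode([generated, original])
    # cos_sim returns a 1x1 tensor; unwrap it to a plain float
    return float(util.cos_sim(generated_vec, original_vec))
```

It is a coarse measure, and higher simply means the fine-tuned output sits closer to the author's originals in embedding space, but it captures more than word overlap does.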
Some people will point out that retrieval-augmented generation does the same job without fine-tuning. You build a RAG pipeline over your notes and let the model query it. That gets you halfway there. But as both AWS and independent technical analysis confirm, RAG and fine-tuning solve different problems: RAG retrieves existing information, while fine-tuning changes the model's generative behaviour at the level of tone, rhythm and structure. One fixes what the model knows; the other fixes how it sounds. A hybrid of both is often where the real value sits.
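In practice, the hybrid divides the labour cleanly: retrieval supplies the facts from your archive, the fine-tuned model supplies the voice. A sketch under stated assumptions: brute-force cosine retrieval with sentence-transformers, generation through the OpenAI client, and a hypothetical fine-tuned model ID standing in for your own:

```python
# Sketch of the hybrid: RAG retrieves what you know, a model fine-tuned
# on the same archive writes it the way you would. The fine-tuned model
# ID is hypothetical; retrieval is brute-force over a small corpus.
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
client = OpenAI()

notes = ["..."]  # your essays, transcripts, notes, drafts, pre-chunked
note_vecs = encoder.encode(notes, normalize_embeddings=True)

def answer_in_my_voice(question: str, k: int = 3) -> str:
    # Retrieval: top-k chunks by cosine similarity (vectors are
    # unit-norm, so a dot product is the cosine).
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    top_k = np.argsort(note_vecs @ q_vec)[-k:]
    context = "\n\n".join(notes[i] for i in top_k)
    # Generation: the fine-tuned model carries tone, rhythm, structure.
    response = client.chat.completions.create(
        model="ft:gpt-4o-mini-2024-07-18:personal::abc123",  # hypothetical
        messages=[
            {"role": "system",
             "content": "Draw on this context from my archive:\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```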
The longer you write with a generic model, the more you start writing like one. I noticed this over about eighteen months. My drafts got smoother. More structured. Less surprising. This is documented. A Cornell University study published in April 2025 found that when people used an AI writing assistant, their writing converged, with distinct cultural and individual voices becoming more similar. The lead researcher described it as one of the first studies to show that AI use in writing could lead to cultural stereotyping and language homogenisation. A separate study found that teachers rated AI-assisted essays as more fluent and well-structured but thinner in voice and original insight. Designers know this pattern already: you become the tools you use.
The easy answer is that this is a discipline problem. Write your first draft without AI, stay intentional, and the drift stops. Maybe. But the structural problem is that a generic tool has no stake in preserving your voice; it has every incentive to smooth it out. A 'New Yorker' piece from June 2025 put it plainly: large language models are designed to identify patterns within extensive datasets, producing outputs that lean toward consensus, and that tendency shapes the humans using them over time, not just the outputs. Building a model trained on your voice changes the incentive structure of the tool itself. You stop fighting the current.
A.R. Rahman did not become the sound of Indian cinema by learning Western orchestration and stopping there. He built a recording studio in his home in Chennai, trained himself in MIDI composition when others were still working with live orchestras, and fused Carnatic classical music, Sufi devotional music and Western pop into a grammar that nobody else could replicate because nobody else had lived the same combination of influences. The tools he used were available to anyone. The synthesis was entirely his. A generic LLM gives you the tools. A personal model gives you the synthesis.
The real objection is this: fine-tuning on a small personal corpus risks amplifying your weaknesses, not just your strengths. If your writing has blind spots or structural tics, a personal model learns those too. You end up with a tool that is very good at sounding like you, including the parts that could use an editor. This is a documented limitation; fine-tuned models carry an increased risk of reinforcing patterns present in the training data, and they will not surface outside references to challenge those patterns.
That is true. But it mistakes the use case. A personal model handles voice and continuity. It does not replace editorial challenge; that still needs to come from outside. A musician practising in their own style is not avoiding critique. They are making sure the critique lands on something genuinely theirs.
The tools that matter in the next decade will be the ones that know you well enough to be genuinely useful, not just generically capable. If you keep outsourcing your voice to a model trained on everyone else's thinking, you are not augmenting yourself; you are slowly licensing your distinctiveness out. The people who build personal AI infrastructure now will own their intellectual leverage; the rest will spend the next ten years sounding like each other.