Guide · 01 of 03
Why AI prose sounds generic — and how to break the pattern
Large language models pull toward an average voice. The trick to writing around them isn't trickery; it's specificity. A field guide to the patterns and the fixes.
- Style
- Writing process
- AI tells
The average AI essay this semester will open with the phrase “In today's rapidly evolving world.” It will use the word delve exactly once, the word tapestry if the topic is humanities, and the construction “It's not just X — it's Y” somewhere in the second paragraph. By the conclusion, it will have mentioned navigating the complexities of something and assured the reader that the topic presents both exciting opportunities and important challenges.
These tells are not random. They are the visible scars of a statistical process. A language model trained on the public internet learns to output whatever minimizes its loss, and loss is minimized by staying close to the average sentence in the training corpus. The average sentence about almost anything, written by a person who hadn't thought about it very hard, contains a remarkable number of these same phrases. The model wasn't trying to sound like the worst student in your class. It was just trying to sound like everyone.
The pull toward the average
Every sentence a language model writes is a probability-weighted vote across the millions of similar sentences it has seen. When the model is being friendly and helpful (the default instruction tuning of every commercial chatbot), it is rewarded for sentences that read as confident, clear, vaguely positive, and appropriately hedged. There is exactly one cheap way to do this well: pick the words that make the sentence sound like every other sentence of its kind. That's how you get delve. That's how you get tapestry. That's how you get a four-paragraph essay where every paragraph begins with a topic sentence and ends on a hedge.
Detection isn't magic, either. You don't need a model to find these tells; you need a list. The Wikipedia community page “Signs of AI writing,” compiled by editors with an unusually direct incentive to spot machine output, names dozens of specific phrases. Most of them are still useful. The taxonomy DraftGuard checks drafts against has seven buckets. They are not equally bad. They are not equally easy to fix.
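That list-based approach is simple enough to sketch. What follows is a minimal illustration in Python, not DraftGuard's implementation; the five-entry phrase list is a sample and the `find_tells` helper is a hypothetical name.

```python
import re

# A handful of illustrative tell phrases; real lists (e.g. Wikipedia's
# "Signs of AI writing") run to dozens of entries.
TELL_PHRASES = [
    "in today's rapidly evolving world",
    "it is important to note that",
    "navigating the complexities of",
    "delve",
    "tapestry",
]

def find_tells(text: str) -> list[tuple[int, str]]:
    """Return (character offset, phrase) for every tell found in text."""
    lowered = text.lower()
    hits = [
        (match.start(), phrase)
        for phrase in TELL_PHRASES
        for match in re.finditer(re.escape(phrase), lowered)
    ]
    return sorted(hits)

draft = "In today's rapidly evolving world, we delve into a rich tapestry."
for offset, phrase in find_tells(draft):
    print(f"{offset:3d}  {phrase}")
```

A production checker would want to normalize curly apostrophes and respect word boundaries, but the core really is this small: a list and a scan.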
The seven categories of tell
First, empty openers: phrases that occupy the slot of a thesis without making one. “In today's rapidly evolving world.” “Throughout history, humanity has.” “Since the dawn of time.” These are uniformly cuttable. Delete them and the essay does not lose information. It loses padding.
Second, hedged-importance frames: phrases that signal the writer is about to make a point without actually making one. “It is important to note that.” “It is essential to consider.” “One must take into account.” These are also cuttable — but the rule is subtler. Cut the frame, keep the content. “It is important to note that climate change disproportionately affects coastal regions” → “Climate change disproportionately affects coastal regions.”
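Because the frame is a fixed prefix, cutting it while keeping the content is a mechanical edit. A rough sketch of that rule, assuming a small invented list of frames:

```python
import re

# Illustrative hedged-importance frames; a sample, not a complete list.
FRAMES = [
    r"it is important to note that\s+",
    r"it is essential to consider that\s+",
    r"one must take into account that\s+",
]

def cut_frame(sentence: str) -> str:
    """Strip a leading frame and re-capitalize what remains."""
    for frame in FRAMES:
        stripped = re.sub(f"^{frame}", "", sentence, flags=re.IGNORECASE)
        if stripped != sentence:
            return stripped[:1].upper() + stripped[1:]
    return sentence

print(cut_frame("It is important to note that climate change "
                "disproportionately affects coastal regions."))
# -> Climate change disproportionately affects coastal regions.
```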
Third, vague attribution — what Wikipedia editors call weasel words. “Studies have shown.” “Many experts believe.” “Research suggests.” This is the most dangerous category, because the fix is not a phrase swap. The fix is either a citation or a deletion. There is no third option, and DraftGuard refuses to invent one. We do not generate fictional studies for you. We mark the sentence with [citation needed] and ask whether the claim is one you can support.
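Since no automatic rewrite is safe here, the only honest machine action is a flag. A minimal sketch of that behavior, with an assumed phrase list and a hypothetical `mark_unsupported` helper:

```python
import re

# Illustrative weasel-word openers; assumed, not DraftGuard's real list.
WEASEL_PHRASES = [
    "studies have shown",
    "many experts believe",
    "research suggests",
]

def mark_unsupported(text: str) -> str:
    """Flag, never fix: append [citation needed] to sentences that
    lean on vague authority. No sources are invented."""
    pattern = re.compile("|".join(re.escape(p) for p in WEASEL_PHRASES),
                         re.IGNORECASE)
    sentences = re.split(r"(?<=[.!?])\s+", text)
    out = []
    for sentence in sentences:
        if pattern.search(sentence):
            sentence = sentence.rstrip(".!?") + " [citation needed]."
        out.append(sentence)
    return " ".join(out)

print(mark_unsupported("Studies have shown that remote work boosts output."))
# -> Studies have shown that remote work boosts output [citation needed].
```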
Fourth, negative parallelism: the “It's not X, it's Y” construction and its variants. This is the single most diagnostic structural fingerprint of post-2023 AI writing. A student writing under deadline almost never uses it. A model producing confident-sounding prose uses it constantly. The fix is to flatten — “It's not just acquiring knowledge — it's growing as a person” → “Learning means growing, not just memorizing.”
Fifth, the rule of three: three coordinated phrases joined by “and.” “Convenient, efficient, and innovative.” “To address bias, improve performance, and expand applications.” Humans use the rule of three occasionally. Language models use it compulsively, especially in conclusion paragraphs. If three coordinated items appear in your closing sentence, ask whether you actually meant three or whether the model picked three because three sounds rhetorical.
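Both of these structural tells are regular enough to catch with a pattern match. The regexes below are approximations assumed for illustration; they will miss variants and occasionally flag a human sentence:

```python
import re

# Approximate fingerprints, assumed for illustration; they will miss
# variants ("It isn't X; it's Y") and sometimes flag human prose.
NEG_PARALLEL = re.compile(
    r"\bit'?s not (?:just |merely |simply )?[^.;]{1,60}?[—–\-,;:]\s*it'?s\b",
    re.IGNORECASE,
)
RULE_OF_THREE = re.compile(r"\b[\w-]+, [\w-]+,? and [\w-]+\b", re.IGNORECASE)

def structural_flags(sentence: str) -> list[str]:
    """Name the structural tells present in a single sentence."""
    flags = []
    if NEG_PARALLEL.search(sentence):
        flags.append("negative parallelism")
    if RULE_OF_THREE.search(sentence):
        flags.append("rule of three")
    return flags

print(structural_flags("It's not just acquiring knowledge — it's growing."))
print(structural_flags("The app is convenient, efficient, and innovative."))
```

Whatever the real patterns look like, the principle holds: these two categories are caught by shape, not by vocabulary.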
Sixth, inflated vocabulary: delve, tapestry, leverage, navigate, underscore, foster, harness, robust, vibrant, comprehensive, multifaceted, ever-evolving. These are words with two properties — a Latinate or technical register, and a broad applicability that lets them survive any topic. They are the survival traits of a model trying not to be wrong. The fix is almost always a one-word Anglo-Saxon swap: delve → look at, leverage → use, navigate → handle, underscore → show, foster → build, robust → strong.
Seventh, wordy bureaucratic phrases: utilize, facilitate, in order to, due to the fact that, at this point in time, in close proximity, with regard to. The University of North Carolina Writing Center has a public list of fifty of these with their concise replacements. Memorizing the list takes ten minutes and improves your prose for life.
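These last two categories reduce to a lookup table. A sketch built from the swaps named above; the table is a sample, not the full list:

```python
import re

# Sample swaps drawn from the lists above; far from complete.
SWAPS = {
    "delve into": "look at",
    "leverage": "use",
    "navigate": "handle",
    "underscore": "show",
    "foster": "build",
    "robust": "strong",
    "utilize": "use",
    "facilitate": "help",
    "in order to": "to",
    "due to the fact that": "because",
    "at this point in time": "now",
    "in close proximity": "near",
}

def deflate(text: str) -> str:
    """Swap inflated or bureaucratic phrases for plain ones."""
    # Longest phrases first, so multi-word entries are not broken
    # up by a shorter swap applied inside them.
    for wordy, plain in sorted(SWAPS.items(), key=lambda kv: -len(kv[0])):
        text = re.sub(rf"\b{re.escape(wordy)}\b", plain, text,
                      flags=re.IGNORECASE)
    return text

print(deflate("We must leverage robust tools in order to delve into the data."))
# -> We must use strong tools to look at the data.
```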
What the fix is not
The fix is not to “humanize.” There is a small industry of products that promise to take your AI draft and make it sound human, usually by introducing typos, switching em-dashes for hyphens, and substituting synonyms in a way that leaves the sentence shape intact. This works on the surface and fails everywhere else. A teacher who asks you about your third-paragraph thesis will notice when you cannot defend it. A reader who knows the topic will notice when your “multifaceted analysis” is a synonym for “I don't know.”
The fix is also not to write nothing. The model's prose is a scaffold; you can keep the scaffold and rewrite around it. That's what most working writers do with their own first drafts, AI or not. The point is to know the scaffold from the building.
What the fix is
The fix is specificity. Not stylistic specificity — not “use active verbs” or “shorten your sentences,” though those are useful. Lexical and referential specificity. Replace the abstract noun with the thing the noun is about. Replace the model's invented expert with a real person, or delete the appeal to authority. Replace the “in today's rapidly evolving world” with a place, a date, a name, a quote, an object you can describe.
Specificity is the property AI prose lacks because it is a property the average sentence in the training corpus also lacks. The model cannot give it to you because the model has never seen your draft, your class, your friend, your life. You can. That's the point.
DraftGuard's role in this is small and specific. We surface the tells. We offer plain replacements where a phrase is replaceable. We mark claims that need evidence. We never invent the specificity for you. The work of writing your draft is yours. That's not a limitation. That's the design.