
Writing by Score - How Grammarly Trains Obedience

Markus Brinsa · January 23, 2026 · 4 min read


I didn’t test Grammarly. I lived with it.

I didn’t install Grammarly last month out of curiosity. I’ve used it continuously for years and became a paying subscriber in early 2021. That matters, because this is not a story about novelty or misuse. It’s a story about slow drift.

For more than a decade, Grammarly sat in my workflow the way spellcheck always had. It caught typos, flagged the occasional grammar slip, and stayed mostly invisible. I’m fluent in English, but not a native speaker, and Grammarly felt like a second set of eyes, not a second brain.

Over time, that relationship changed. Gradually enough that most people wouldn’t notice right away. Grammarly stopped behaving like a checker and started behaving like a judge.

When rules stop being rules

The moment I noticed something was wrong wasn’t dramatic. It was mundane. British spellings started slipping into American English documents without being flagged. “Favourite.” “Organise.” Words that any basic spellchecker catches instantly.

My system language was American English. Grammarly’s language setting was American English. Apple’s built-in spellchecker flagged the errors immediately. Grammarly ignored them.

This wasn’t a one-off glitch. It repeated across documents and platforms. When I eventually pushed Grammarly’s own chatbot hard enough, it admitted the problem plainly: recent updates meant American English–specific suggestions were not fully enforced everywhere. That answer should have triggered alarms. If a grammar tool can’t reliably enforce the most basic rule you explicitly selected, it’s no longer enforcing rules at all. It’s negotiating them.

From correction to interpretation

Spelling failures are irritating, but they are not the real risk. The real risk is interpretive rewriting. Grammarly no longer limits itself to identifying errors. It proposes rephrasing, restructuring, and “improving” sentences that are already correct. In doing so, it frequently alters emphasis, intent, or tone.

A cautious statement becomes assertive. A nuanced claim becomes blunt. A limitation quietly disappears. Nothing breaks. No warning appears. The sentence still looks fine. It just no longer says what I meant.

That is not editing. That is inference. And inference without context is where AI systems quietly go wrong.

The score that trains obedience

This is where Grammarly becomes actively dangerous. Every week, Grammarly sends progress reports. Productivity summaries. Scores. Trends. Improvement graphs. They look harmless. Motivational, even. But the scoring logic is simple and unforgiving. Accept Grammarly’s suggestions and your score goes up. Reject them and your score stagnates or drops.

The system does not distinguish between corrections and stylistic disagreement. It does not know when you are intentionally breaking a “rule” for rhetorical reasons. It only knows compliance. Over time, this trains behavior. You stop asking whether a suggestion is correct and start asking whether rejecting it is worth the penalty. That is automation bias with a leaderboard.

For non-native writers especially, the pressure is intense. The interface frames Grammarly as authority. The score frames disagreement as failure. The safest path becomes acceptance. Not because the suggestion is right, but because the system rewards obedience.

Why non-native writers pay the highest price

Grammarly markets itself as a safety net for people who don’t fully trust their English. That’s precisely the group least equipped to resist its suggestions.

When Grammarly proposes a rewrite, it doesn’t say, “This is one possible alternative.” It presents the change as improvement. Confidence meters rise. Scores improve. Green signals reinforce compliance.

But Grammarly does not understand nuance. It does not understand irony, pacing, or intentional repetition. It does not understand audience. It understands patterns. When pattern optimization overrides authorial intent, the result is writing that feels polished and wrong at the same time. Readers sense it immediately. The voice feels flattened. The meaning subtly shifts. Sometimes the outcome is unintentionally funny. Sometimes it’s misleading. Sometimes it’s embarrassing. And the writer often has no idea where things went off the rails.

Complexity without accountability 

Grammarly’s defenders will say this is the cost of ambition. The product is no longer just a grammar checker. It’s an AI writing assistant operating across apps, languages, and contexts.

That’s exactly the problem. As Grammarly expands, it inherits the failure modes of all generative AI systems: probabilistic behavior, inconsistent enforcement, and confident suggestions without understanding. But unlike chatbots, Grammarly operates inside professional workflows. It edits emails, articles, proposals, and legal drafts. Its mistakes don’t look like hallucinations. They look like stylistic choices. That makes them harder to detect and easier to publish.

Turning it off felt like regaining control

I didn’t rage-quit Grammarly. I disabled it quietly and went back to simpler tools. Basic spellcheck. Manual rereading. Actual editing. Not because those tools are smarter, but because they are honest. They correct what they know how to correct and stay silent about the rest.

That silence matters. A tool that knows its limits is safer than one that hides them behind confidence scores and weekly performance emails. 

Grammarly was never an editor

Editors hesitate. They ask questions. They explain why something might be wrong. Grammarly suggests. Confidently. Repeatedly. Sometimes correctly. Sometimes not.

If a grammar tool cannot reliably enforce the rules you explicitly selected, rewrites meaning without understanding intent, and pressures users to accept suggestions through scoring mechanics, it is not an editor. It is a persuasion engine with a spellchecker attached.

And trusting it blindly is how “favorite” quietly becomes “favourite,” your score goes up, and nobody notices until it’s too late.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

© 2026 Markus Brinsa | brinsa.com™