What Autocorrect Algorithms Get Wrong About How We Really Type

Morgan Reese

March 15, 2026

Autocorrect is so baked into how we type that we notice it most when it fails. A wrong suggestion, an embarrassing substitution, or a “correction” that strips out the word we meant. Behind those moments is a set of assumptions about how language and typing work—assumptions that often don’t match how we actually write. Here’s what autocorrect algorithms get wrong, and why getting it right is harder than it looks.

They Assume We Make Random Typos

Classic autocorrect is built on the idea that typos are mistakes: your finger slipped, you hit the key next to the one you meant, and the result is a non-word or a rare word. So the algorithm looks at what you typed, finds the “closest” valid word by edit distance or key proximity, and substitutes it. That works when the typo is truly accidental—“teh” → “the,” “adn” → “and.” But we don’t always type that way. We use slang, dialect, proper nouns, and brand names. We mix languages. We leave in intentional misspellings for tone. When the system “corrects” those, it’s not fixing an error; it’s overwriting what we meant. The algorithm assumes a single, dictionary-style notion of “correct,” but real typing is messier and more contextual.

Early autocorrect (think Word in the 1990s) was heavily dictionary-based: if it wasn’t in the list, it was wrong. Smartphones made the problem more visible because we type more on glass than on physical keys, so “typos” increased—but so did the variety of what we type. Messaging, search, and social media are full of abbreviations, emoji, and informal language that don’t fit a single dictionary.
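The classic approach can be sketched in a few lines: measure edit distance to each dictionary word and substitute the closest one. The tiny word list here is invented for illustration; real systems use dictionaries with hundreds of thousands of entries.

```python
# Minimal sketch of dictionary-based autocorrect: replace a non-word
# with the valid word at the smallest edit distance. The DICTIONARY
# below is illustrative, not from any real system.
DICTIONARY = ["the", "and", "store", "going", "to"]

def edit_distance(a: str, b: str) -> int:
    # Standard Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def correct(word: str) -> str:
    # Keep valid words; otherwise substitute the closest dictionary word.
    if word in DICTIONARY:
        return word
    return min(DICTIONARY, key=lambda w: edit_distance(word, w))
```

Note what this sketch cannot do: if you typed a correct word that simply isn’t in the dictionary, `correct` will still replace it with something “close”—which is exactly the overwriting-of-intent problem described above.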

Context Is Hard

Modern systems use language models and context: the words around the one you’re typing, your past messages, even the app you’re in. That helps. “I’m going to the store” is easier to correct than an isolated “store” that might have been “stire” or “stote.” But context is expensive and incomplete. The model might not know you’re quoting someone, writing code, or using a term that’s correct in your field but not in the training data. It might reinforce your past mistakes if you’ve accepted a wrong suggestion before. And it often has no idea about tone—whether you’re being formal, casual, or deliberately informal. So it corrects “gonna” to “going to” or “dunno” to “do not know” in contexts where that’s not what you want. The algorithm optimises for a kind of average, edited prose; many of us don’t write that way all the time.
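One way to picture context-aware correction is a score that mixes string similarity with how plausible a candidate is after the previous word. The bigram counts below are invented for illustration; real systems learn them from large corpora and the user’s own history.

```python
import difflib

# Invented bigram counts for illustration: how often the second word
# follows the first in some (hypothetical) training data.
BIGRAMS = {("the", "store"): 50, ("the", "stare"): 2}

def pick(prev_word: str, typed: str, candidates: list[str]) -> str:
    # Higher score is better: string similarity to what was typed,
    # plus a small bonus for candidates that fit the context.
    def score(cand: str) -> float:
        similarity = difflib.SequenceMatcher(None, typed, cand).ratio()
        return similarity + 0.01 * BIGRAMS.get((prev_word, cand), 0)
    return max(candidates, key=score)
```

With “the” as context, “stire” resolves toward “store” even though “stare” is an equally close string—which is the benefit. The failure mode is the same mechanism in reverse: if the context table doesn’t know your field’s vocabulary, the bonus pulls you toward the “average” word.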

They Optimise for the Wrong Thing

Autocorrect systems are typically trained to maximise something like “did we suggest the word the user intended?” or “did we reduce the number of backspaces?” But that doesn’t capture the full cost of a bad correction. When the system changes a correct but rare word to a common wrong one, you might not notice until later—or you might send the message and only then see the error. The cost of an over-correction (changing something right to something wrong) is often higher than the cost of under-correction (leaving a typo in place), because the first case can change meaning or create embarrassment. Many algorithms don’t weight that asymmetry. They’re tuned for accuracy on clean test sets, not for the real-world penalty of “fixing” something that wasn’t broken.
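The asymmetry argument can be made concrete with a tiny expected-cost calculation. The weights here are illustrative assumptions, not values from any shipping system: the point is only that when over-correction costs more, the system needs high confidence before acting.

```python
# Illustrative cost weights: changing a right word to a wrong one
# (over-correction) hurts more than leaving a typo in place.
COST_OVERCORRECT = 5.0   # we changed something that was already right
COST_UNDERCORRECT = 1.0  # we left a genuine typo as typed

def should_autocorrect(p_typo: float) -> bool:
    # p_typo: the model's estimated probability the word is a typo.
    # Expected cost of correcting: wrong in the (1 - p_typo) case.
    cost_if_correcting = (1 - p_typo) * COST_OVERCORRECT
    # Expected cost of leaving it: wrong in the p_typo case.
    cost_if_leaving = p_typo * COST_UNDERCORRECT
    return cost_if_correcting < cost_if_leaving
```

Under these weights, a 50/50 guess should be left alone; only when the model is quite sure the word is a typo does auto-applying a change pay off. A symmetric loss would correct at 50/50, which is how “fixing” things that weren’t broken happens.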

Personalisation Has Limits

Personalised autocorrect—learning your vocabulary, your names, your frequent phrases—helps. But it also creates lock-in and blind spots. If you’ve accepted a wrong suggestion a few times, the system may keep offering it. If you use a word the model rarely saw in training, it might keep “correcting” it until you add it to a custom dictionary—and not every app has one. And personalisation is often per-device or per-app, so your phone might “know” you in one messaging app but not in another, or not in the notes app at all. So the same person can get different correction behaviour in different places, and the system can’t fully learn how you really type across all of them.
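A per-user vocabulary is often just a counter with a trust threshold: once you’ve typed (or re-typed after an unwanted correction) a word enough times, it stops being “corrected.” This is a hedged sketch of the general idea; the threshold and class shape are assumptions, not any vendor’s implementation.

```python
from collections import Counter

class UserVocab:
    # Sketch of a learned per-user vocabulary: words seen often enough
    # are trusted and exempted from correction.
    def __init__(self, threshold: int = 3):
        self.counts = Counter()
        self.threshold = threshold  # illustrative: times seen before we trust it

    def observe(self, word: str) -> None:
        # Called each time the user sends a word as typed.
        self.counts[word] += 1

    def is_trusted(self, word: str) -> bool:
        return self.counts[word] >= self.threshold
```

The lock-in problem described above falls out of the same design: if the counter learns a word you accepted by mistake, it trusts the mistake just as readily. And because the counter typically lives per device or per app, the “knowledge” doesn’t travel with you.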

What Would Help

Better autocorrect would treat “correct” as context-dependent: formal vs casual, technical vs everyday, quoted vs authored. It would weight over-correction more heavily than under-correction in the loss function. It would make it easy to say “never change this” or “this is always wrong” and have that propagate. And it would be more conservative when confidence is low—offering a suggestion rather than auto-applying it, or leaving rare words alone unless there’s strong evidence they’re typos. Some of that is already happening in the best systems; much of it is still a work in progress. The core issue is that typing is a human behaviour with huge variation, and algorithms are still catching up to that variety.
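The “be conservative when confidence is low” policy can be sketched as a three-way decision: auto-apply only when very confident, merely suggest when moderately confident, and always respect a user’s never-change list. Thresholds and the example list are illustrative assumptions.

```python
# Sketch of a confidence-gated correction policy. Thresholds and the
# NEVER_CHANGE list are illustrative, not from any real system.
NEVER_CHANGE = {"gonna", "dunno"}

def decide(word: str, suggestion: str, confidence: float) -> str:
    # Returns one of: "keep", "suggest", "auto_apply".
    if word in NEVER_CHANGE or word == suggestion:
        return "keep"
    if confidence >= 0.9:
        return "auto_apply"   # strong evidence: change silently
    if confidence >= 0.6:
        return "suggest"      # unsure: offer, don't overwrite
    return "keep"             # weak evidence: leave rare words alone
```

The user-override check comes first on purpose: no amount of model confidence should outrank an explicit “never change this.”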

Mobile vs Desktop: Different Surfaces, Same Assumptions

On a phone, keys are small and close together, so proximity-based correction (assuming you meant the key next to the one you hit) is more relevant than on a physical keyboard. But mobile typing also has more abbreviations, more one-handed typing, and more use in noisy or distracted contexts. So the same “fix” that works on desktop—replace with nearest dictionary word—can feel more intrusive on mobile, where “wrong” might still be what you meant. Meanwhile, desktop autocorrect often lives inside a single app (e.g. a word processor) with a clearer notion of “document” and “style.” On the web and in chat, there’s no such boundary; you might switch from a formal email to a casual message in the same minute. The algorithm rarely knows which mode you’re in, so it applies one policy everywhere. That’s another way autocorrect gets it wrong: it doesn’t model the switching we do between registers and contexts in real life.
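Proximity-based correction typically means a weighted edit distance in which substituting an adjacent key is cheaper than substituting a distant one. The partial QWERTY neighbour map and the 0.5 weight below are illustrative assumptions.

```python
# Partial, illustrative QWERTY adjacency map: which keys sit next to
# which on a touch keyboard. A real layout model would be complete
# and account for key geometry.
NEIGHBOURS = {"e": "wrd", "r": "et", "o": "ip", "i": "ou", "t": "ry"}

def sub_cost(a: str, b: str) -> float:
    # Substituting an adjacent key is a cheaper "error" than a distant one.
    if a == b:
        return 0.0
    return 0.5 if b in NEIGHBOURS.get(a, "") else 1.0

def weighted_distance(a: str, b: str) -> float:
    # Same-length comparison only, for brevity; a real system would
    # fold sub_cost into a full weighted edit distance.
    assert len(a) == len(b)
    return sum(sub_cost(x, y) for x, y in zip(a, b))
```

Under this weighting, “stote” is closer to “store” (t and r are neighbours) than a plain edit distance would suggest—which is exactly the evidence a touch keyboard has and a physical keyboard mostly doesn’t.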

The Takeaway

Autocorrect gets wrong the idea that there’s one right way to type. It assumes typos are random, context is knowable, and the goal is to make text look like standard written English. In practice, we type with intent that the algorithm can’t always see—slang, names, tone, and mixed contexts. So the next time autocorrect “fixes” something you didn’t want fixed, it’s not just a bug; it’s a mismatch between how the system models typing and how we actually do it. Better algorithms will need to model that messiness, not assume it away.