The Software Bugs That Changed the Course of Tech History

Quinn Reed

March 7, 2026

Software bugs are usually nuisances—a crashed app, a wrong calculation. But a handful have changed the course of tech history. They’ve grounded rockets, crashed markets, and reshaped how we build systems. Here are the bugs that mattered.

The Therac-25: When Software Kills

Between 1985 and 1987, the Therac-25 radiation therapy machine delivered massive overdoses to at least six patients. Three died. Two were left with permanent injuries. The cause wasn’t hardware failure—it was a race condition in the software. When operators typed commands quickly, the machine could switch modes without updating its safety interlocks. The software assumed it was in low-power mode when it was actually firing a high-powered electron beam.
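
The failure pattern here is "check-then-act": the safety check and the beam firing read shared state at different moments, and a fast operator edit can slip between them. The sketch below is a deliberately simplified, deterministic illustration with hypothetical names, not the actual Therac-25 code:

```python
# A minimal sketch (hypothetical names, not the real machine code) of a
# check-then-act race: the interlock is verified at one moment, the beam
# fires at a later one, and shared state can change in between.

state = {"mode": "electron", "interlock_ok": True}

def safety_check() -> bool:
    # Check passes against the state as it is *right now*...
    return state["interlock_ok"]

def operator_fast_edit() -> None:
    # Operator retypes the prescription quickly: the mode flag flips
    # immediately, but the slow hardware task that would re-establish
    # the interlock has not run yet.
    state["mode"] = "xray"
    state["interlock_ok"] = False

def fire_beam() -> str:
    # The beam uses whatever mode is set *now*, not the mode checked.
    return state["mode"]

ok = safety_check()     # 1. check passes (electron mode, interlock OK)
operator_fast_edit()    # 2. a fast edit lands between check and act
beam = fire_beam() if ok else None
# beam is "xray" even though the check approved "electron" mode
```

The fix, then and now, is to make the check and the action atomic: hold a lock (or disable input) across both, so no edit can land between them.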

The Therac-25 became the canonical case study in software safety. It led to stricter regulation of medical device software, formal verification in safety-critical systems, and the principle that software cannot be assumed safe—it must be proven. The lessons are still taught in software engineering courses today.

The Ariane 5: A Number Too Big

On June 4, 1996, the first Ariane 5 rocket blew up about 37 seconds after launch. The cause: a 64-bit floating-point value was converted to a 16-bit signed integer. The horizontal velocity of the rocket (which was faster than Ariane 4's) exceeded the range the old code expected. Overflow. An unhandled exception took down the guidance system. Roughly $370 million in payload lost.
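
In the original Ada code the out-of-range conversion raised an exception that nobody handled; the Python sketch below (illustrative values, not flight data) shows the underlying problem, using a raw wraparound to stand in for the narrowing conversion:

```python
import struct

# Illustrative sketch of the Ariane 5 failure mode (made-up values, not
# flight data): forcing a 64-bit float into a signed 16-bit integer
# silently assumes the value fits in [-32768, 32767].

def to_int16_unchecked(x: float) -> int:
    # Mimic a raw narrowing conversion: keep only the low 16 bits,
    # then reinterpret them as a signed 16-bit value.
    return struct.unpack("<h", struct.pack("<H", int(x) & 0xFFFF))[0]

low_velocity = 20000.0   # fits in 16 bits: value survives intact
high_velocity = 40000.0  # too big: wraps to a nonsense negative number

print(to_int16_unchecked(low_velocity))   # 20000
print(to_int16_unchecked(high_velocity))  # -25536
```

Code tested only against Ariane 4's flight envelope never exercised the out-of-range path, which is exactly why the reuse was fatal.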

The bug was reused code—Ariane 4’s guidance system, which had never encountered velocities that high. The Ariane 5 failure became a warning about copy-paste across systems: what works in one context can fail catastrophically in another. It also reinforced the need for robust error handling and testing at system boundaries.

Y2K: The Bug That Didn’t Bite

The Year 2000 bug was everywhere—or so we thought. Decades of software had stored years as two digits: 99 for 1999, 00 for 2000. When the clock rolled over, would systems interpret 00 as 1900? Would banks, power grids, and airlines collapse?
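
The bug and its most common fix are both small enough to sketch. The snippet below is a hypothetical record format, not any specific system's code; "windowing" (picking a pivot year) was one of the standard remediation techniques:

```python
# Hypothetical sketch of the Y2K bug: a two-digit year field has lost the
# century, and a naive parser bakes in the 1900s.

def parse_year_naive(yy: str) -> int:
    return 1900 + int(yy)   # "99" -> 1999, but "00" -> 1900

# The common remediation, "windowing": choose a pivot and assume two-digit
# years below it belong to the 2000s.
def parse_year_windowed(yy: str, pivot: int = 50) -> int:
    y = int(yy)
    return (2000 + y) if y < pivot else (1900 + y)

print(parse_year_naive("00"))     # 1900  (the Y2K bug)
print(parse_year_windowed("00"))  # 2000
print(parse_year_windowed("73"))  # 1973
```

Windowing only postpones the ambiguity (a pivot of 50 breaks again in 2050), which is why the thorough fixes widened the field to four digits instead.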

We’ll never know how bad it could have been—because the world spent billions fixing it. The Y2K remediation effort was one of the largest coordinated software projects in history. The bug itself was simple; the fix was global. And the fact that nothing major broke on January 1, 2000, was either a triumph of preparation or proof the risk was overstated. Either way, it changed how we think about technical debt and legacy systems.

Knight Capital: 45 Minutes, $440 Million Lost

On August 1, 2012, Knight Capital deployed new trading software. Old code that should have been deleted was still present on one of its servers. When the new code ran, a repurposed flag activated the old code, which sent a flood of unintended orders into the market. In 45 minutes, Knight executed millions of erroneous trades. Losses: $440 million. The firm survived only through an emergency investment arranged within days, and it was acquired the following year.

Knight Capital became a cautionary tale for deployment procedures. The bug wasn’t in the logic—it was in the process. Inadequate testing, incomplete rollback, and no kill switch. The incident accelerated adoption of safer deployment practices in finance and beyond.

The Morris Worm: The Birth of Cybersecurity

In 1988, Robert Morris released a worm that infected thousands of computers—roughly 10% of the internet at the time. He claimed it was an experiment; the worm had a bug that caused it to replicate far more aggressively than intended. Systems crashed. The internet was disrupted. Morris became the first person convicted under the Computer Fraud and Abuse Act.

The Morris worm catalyzed the creation of CERT and the broader cybersecurity industry. It proved that software could be weaponized at scale—and that the internet was fragile. The bug wasn’t in Morris’s code alone; it was in the assumption that networked systems were safe by default.

What We Learned

These bugs share patterns. Reused code in new contexts. Missing error handling. Assumptions about scale or environment. Process failures as much as logic failures. Each one changed how we build, test, and deploy software. They’re reminders that small bugs can have large consequences—and that the systems we depend on are only as reliable as the care we put into them.
