Issues in software development 2025

Posted on Thu, Feb 6, 2025

Computer Says No...

You’ve probably noticed that software seems to be failing more often these days—whether it's at the post office, airports, or banks. From bad actors attempting to exploit vulnerabilities to software glitches bringing entire systems to a halt, it feels like there's no end to the problems we encounter.

Some of these issues are minor inconveniences, while others are downright life-threatening. For example, Barclays Bank recently experienced an outage that affected millions of customers, disrupting their lives in a major way. And Barclays isn't alone—banking outages have become increasingly common. Looking further back, there was the infamous Therac-25 disaster, where a radiation therapy machine’s software glitch led to multiple fatalities (see video). More recently, software-related failures have impacted modern vehicles, leading to throttle malfunctions, and even aircraft systems, such as the Boeing 737 MAX crashes.

While most software issues go unnoticed, every now and then, one slips through, and the consequences can be catastrophic.

Why is this happening?

In my opinion, several factors contribute to these recurring issues.

As a software developer with years of experience, I can say with certainty that time constraints are one of the biggest challenges. We’re constantly under pressure to build more, faster, and at lower costs. I've personally been in situations where corners had to be cut just to meet deadlines, always with the intention of fixing things later—but sometimes later never comes.

Far too often, unfinished software is pushed live, and developers are left firefighting issues in production. In non-critical applications, this might not seem like a big deal. But what happens when the software in question controls life-support systems, critical infrastructure, or public safety mechanisms?

Even in high-stakes environments, where software undergoes rigorous testing, unforeseen failures still happen. Take the UK's Passport E-Gate system failure in 2024 as an example (see video). A simple software update ended up exceeding BT’s maximum bandwidth limit, cutting off connections between airport security systems nationwide. This single point of failure led to absolute chaos. Was it poor planning? An oversight? Or did they simply not believe it could happen? Who knows—but it happened, and it exposed a critical weakness.

Our Increasing Reliance on Technology

We are becoming dangerously dependent on technology for every aspect of our lives, blindly trusting that our data is safe and systems are reliable. Governments are pushing for digital IDs, banks are moving everything online, and the NHS wants us to book appointments and manage prescriptions through digital platforms. Given the history of corporate data breaches and misuse, why do they keep insisting we go digital when they can’t even get the basics right?

And now, on top of all this, companies are pushing AI into their decision-making. Most large language models (LLMs) work by predicting the most likely next token (roughly, the next word) based on patterns in their training data. If they've been trained on incorrect or biased information, they'll present flawed output as fact, and people will believe it.
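To make that concrete, here's a deliberately crude sketch. This is not how a real LLM works internally (transformers learn far richer statistics), but a bigram frequency counter shows the core failure mode: whatever pattern dominates the training data becomes the model's "truth". The training text and words are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data": the model will only ever know what this text says.
training_text = "the bank is closed the bank is open the bank is closed".split()

# Count which word follows each word.
following = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

# "closed" follows "is" twice, "open" only once, so the model says "closed",
# regardless of whether the bank is actually open right now.
print(predict_next("is"))
```

The model isn't lying; it has no concept of true or false. It is simply reproducing the most frequent pattern it was fed, which is exactly why flawed training data produces confidently flawed output.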

Look at the recent Google Gemini Super Bowl ad fiasco. The first version of the ad showed a business owner asking Gemini to write a product description for a cheese, and the AI confidently inserted an incorrect statistic. Google quickly re-uploaded a revised version of the video, but the damage was done (see video). If a simple task like describing a cheese goes wrong, how can we trust AI with critical decision-making?

The AI models that general consumers use rely mostly on publicly available data, accurate or not (there are exceptions to this). Imagine a scenario where 100 websites all repeat the same, potentially flawed, climate statistics covering a 20-year period, while only 10 sites present data spanning 100 years. A model trained to favour the most common patterns would likely side with the short dataset and present warming trends as more extreme than they might actually be. Context is everything when dealing with statistics, yet AI often lacks that nuance.

Final Thoughts

Technology is advancing faster than our ability to regulate and safeguard it. Software failures, data breaches, and AI-generated misinformation are only going to become more prevalent unless we change our approach. Companies need to prioritize quality and accountability over speed and cost-cutting.

And now, after all that, I’m going to put this post through Chat Gippidy to make it sound better... Christ almighty!