If history teaches us anything, it’s that we are notoriously bad at predicting the future, especially when it comes to technology. Whether we overestimate timelines or underestimate impact, our collective ability to forecast the trajectory of technological advancement has consistently faltered.
Consider Henry Ford’s 1940 assertion:
"Mark my word: a combination airplane and motorcar is coming. You may smile, but it will come."
Yes, it came, after a fashion. Yet more than 80 years later, the flying car has still not become part of our everyday landscape.
Meanwhile, Stewart Alsop predicted in 1991:
"I predict that the last mainframe will be unplugged on 15 March 1996."
Spoiler alert: Mainframes are not only still operational but also thriving in sectors such as banking and government.
This pattern, our tendency to fixate on what we expect to happen rather than what actually does, should serve as a cautionary lens through which we evaluate today’s AI boom. Yet once again we find ourselves seduced by buzzwords and fear-mongering headlines while ignoring the pragmatic, systemic challenges right in front of us.
There’s an almost comedic consistency to our technological misjudgments:
“The Mainframe Era Is Fading – And The Micro Is Taking Command” – Businessweek, 1987
Ironically, both coexist today.
“Nuclear-powered vacuum cleaners will probably be a reality within ten years.” – Alex Lewyt, 1955
We got Roombas instead—and thankfully without uranium.
“By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.” – Paul Krugman, 1998
Today, the Internet is the economy.
“In from three to eight years, we will have a machine with the general intelligence of an average human being.” – Marvin Minsky, 1970
Over half a century later, we’re still debating what “general intelligence” even means.
We are spectacularly unqualified to predict how fast or slow technology will evolve and what form it will ultimately take. And yet, despite this demonstrable pattern, we are repeating the same mistake with AI, amplified by orders of magnitude.
Instead of asking grounded questions like “How can we secure what we already have?”, the AI conversation today is dominated by what might happen: artificial general intelligence, runaway superintelligence, robot overlords. We build doomsday narratives around AI’s hypothetical future while leaving real-world AI systems, such as HR decision engines, autonomous vehicles, and financial trading algorithms, woefully unprotected and unaudited.
This hype isn’t just a harmless distraction. It’s actively dangerous. Every moment we spend debating the metaphysics of machine consciousness is a moment we don’t spend building governance, resilience, and accountability into the systems already shaping people’s lives. The attacker doesn’t need an AGI. They only need your company’s LLM left unsecured on a public endpoint with overprivileged access to production data. Or simply your distraction, so they can capitalize on basic attacks and exploits that require no AI at all, like ransomware.
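To make that concrete, here is a minimal sketch of the misconfiguration in question, written with FastAPI. Everything in it, the route, the environment variables, the stub model call, is hypothetical; the point is how ordinary both the flaw and the fix are.

```python
# A minimal sketch of the misconfiguration described above, using FastAPI.
# All names here (routes, env vars, the stub model call) are hypothetical.
import os
import secrets
from typing import Optional

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

def call_llm(prompt: str) -> str:
    # Placeholder for whatever model client the service actually uses.
    return f"model reply to: {prompt}"

# The vulnerable pattern: this route, deployed publicly with no auth while
# the process holds a credential that can reach production data, is all an
# attacker needs. No AGI required.
#
# @app.post("/chat")
# async def chat(prompt: str):
#     return {"reply": call_llm(prompt)}

# A safer baseline: require a key on every request, and run the service
# under a least-privilege credential rather than an admin one.
def require_api_key(x_api_key: Optional[str] = Header(default=None)) -> None:
    expected = os.environ.get("CHAT_API_KEY", "")
    if not x_api_key or not secrets.compare_digest(x_api_key, expected):
        raise HTTPException(status_code=401, detail="unauthorized")

@app.post("/chat", dependencies=[Depends(require_api_key)])
async def chat(prompt: str) -> dict:
    return {"reply": call_llm(prompt)}
```

Note that the mitigation is not exotic AI safety research; it is authentication and least privilege, the same controls we have preached for decades.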
Instead of fueling the speculative hype cycle, we need to refocus on what matters:
We need a cybersecurity posture that assumes AI is already in the building, possibly uncontrolled. Because in most organizations, it is.
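What does assuming AI is already in the building look like in practice? A first step is simply measuring it. The sketch below, with a hypothetical proxy-log path and format and a deliberately incomplete domain list, counts outbound requests to well-known hosted-LLM APIs, a rough census of unsanctioned AI use before any policy is written.

```python
# A rough first census of "shadow AI": scan an egress proxy log for calls
# to well-known hosted-LLM APIs. The log path and format are hypothetical,
# and the domain list is illustrative, not exhaustive.
from collections import Counter
from pathlib import Path

LLM_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.mistral.ai",
}

def census(log_path: str) -> Counter:
    """Count proxy-log lines mentioning a known LLM API host.

    Assumes one request per line, with the destination host appearing
    somewhere in the line, as in many common proxy log formats.
    """
    hits: Counter = Counter()
    for line in Path(log_path).read_text().splitlines():
        for host in LLM_API_HOSTS:
            if host in line:
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for host, count in census("/var/log/proxy/access.log").most_common():
        print(f"{count:8d}  {host}")
```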
We don’t need more dramatic predictions about AI’s future. What we need is sober, actionable foresight about its present. History shows that predictions, no matter how confident, are usually wrong in scope, timeline, or consequence. So, rather than betting the farm on being the first to predict the singularity, let’s focus on making sure the AI systems we do have don’t compromise everything we've spent decades trying to protect.
The next big cybersecurity breach isn’t going to come from a sentient AI—it’s going to come from an unprotected chatbot, an ungoverned AI agent with access to multiple secure applications, an overlooked model permission, or a lack of traceability in an automated decision pipeline.
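That last gap, traceability, is also among the cheapest to close. As a sketch of the idea (field names, the JSON-lines sink, and the example values are all illustrative), each automated decision can emit an audit record answering which model, which version, which inputs, and what it decided, before the decision takes effect:

```python
# Minimal decision audit record: enough to answer "which model, which
# inputs, when, and what did it decide?" after the fact. Field names and
# the JSON-lines sink are illustrative; a real pipeline would write to
# append-only, access-controlled storage.
import hashlib
import json
import uuid
from datetime import datetime, timezone

def audit_decision(model_id: str, model_version: str,
                   inputs: dict, decision: str,
                   sink_path: str = "decision_audit.jsonl") -> str:
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash rather than store raw inputs, which may be sensitive.
        "inputs_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    with open(sink_path, "a") as sink:
        sink.write(json.dumps(record) + "\n")
    return record["record_id"]

# Usage: log before acting on the output (values are made up).
rid = audit_decision("loan-screener", "2024.06.1",
                     {"applicant_id": "A-1001", "score": 712}, "approve")
```

Hashing the inputs rather than storing them keeps the trail verifiable without turning the audit log itself into a new trove of sensitive data.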
Let’s make sure we’re not building today’s AI security on the same kind of wishful thinking.