Originally published at Project Syndicate | April 10, 2024
To think that artificial intelligence is advancing at warp speed and creating existential risks to humanity is to confuse a mania with useful progress. The technology is less like nuclear weapons than like many other slowly evolving technologies that have come before, from telephony to vaccines.
CAMBRIDGE – Experts who warn that artificial intelligence poses catastrophic risks on par with nuclear annihilation ignore the gradual, diffused nature of technological development. As I argued in my 2008 book, The Venturesome Economy, transformative technologies – from steam engines, airplanes, computers, mobile telephony, and the internet to antibiotics and mRNA vaccines – evolve through a protracted, massively multiplayer game that defies top-down command and control.
Joseph Schumpeter’s “gales of creative destruction” and more recent theories trumpeting disruptive breakthroughs are misleading. As economic historian Nathan Rosenberg and many others have shown, transformative technologies do not suddenly appear out of the blue. Instead, meaningful advances require discovering and gradually overcoming many unanticipated problems.
New technologies introduce new risks. Invariably, military applications develop alongside commercial and civilian uses. Airplanes and motorized ground vehicles have been deployed in conflicts since World War I, and personal computers and mobile communication are indispensable for modern warfare. Yet life goes on. Technologically advanced societies have developed legal, political, and law-enforcement mechanisms to contain the conflicts and criminality that technological advances enable. Case-by-case court judgments are crucial in the United States and other common-law countries. These mechanisms – like the technologies themselves – are evolutionary and adaptive. They produce pragmatic solutions, not visionary constructs.
The Manhattan Project, which developed the atomic bomb and helped end World War II, was an exception. It had a high-priority military mandate. With the Nazis seeking to develop a bomb of their own, speed and effective leadership were essential. And as all-out thermonuclear war became a real threat, statecraft and strategic deterrence helped avert doomsday.
But nuclear weapons are a misleading analogy for AI, which has followed the typically diffused, halting pattern of most other technological transformations. AI spans disparate techniques – such as machine learning, pattern recognition, and natural language processing – and has wide-ranging applications. Their common feature is mainly aspirational – to go beyond mere calculation to more speculative yet useful inferences and interpretations.
Unlike the Manhattan Project, which proceeded at breakneck speed, AI development has unfolded over more than seven decades, with developers quietly inserting AI into everything from digital cameras and scanners to smartphones, automatic-braking and fuel-injection systems in cars, special effects in movies, Google searches, digital communications, and social-media platforms. And, as with other technological advances, AI has long been put to military and criminal uses.
Yet AI advances have been gradual and uncertain. IBM’s Deep Blue famously beat world chess champion Garry Kasparov in 1997 – 40 years after an IBM researcher first wrote a chess-playing program. And though Deep Blue’s successor, Watson, won $1 million by beating the reigning Jeopardy! champions in 2011, it was a commercial failure. In 2022, IBM sold off Watson Health for a fraction of the billions it had invested. Microsoft’s intelligent assistant, Clippy, became an object of ridicule. And after years of development, autocompleted texts continue to produce embarrassing results.
Machine learning – essentially a souped-up statistical procedure that many AI programs depend on – requires reliable feedback. But good feedback demands unambiguous outcomes produced by a stable process. Ambiguous human intentions, impulsiveness, and creativity undermine statistical learning and thus limit the useful scope of AI. While AI software flawlessly recognizes my face at airports, it cannot accurately comprehend the nuances of my carefully and slowly spoken words. The inaccuracy of 16 generations of professional dictation software (I bought the first in 1997) has repeatedly frustrated me.
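To make the feedback point concrete, here is a minimal sketch (an illustration under assumed conditions, not a description of any system discussed above): a simple statistical classifier learns well when its training labels are unambiguous, but its accuracy falls as a growing share of those labels is flipped at random. The dataset, model, and noise levels are illustrative assumptions.

```python
# Illustrative sketch: noisy, ambiguous "feedback" (labels) degrades
# what a statistical learner can achieve. All settings below are
# arbitrary choices for demonstration, not drawn from the article.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

for noise in (0.0, 0.2, 0.4):  # fraction of labels assigned at random
    # Synthetic classification data; flip_y injects label ambiguity.
    X, y = make_classification(n_samples=5000, n_features=20,
                               flip_y=noise, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"label noise {noise:.0%}: test accuracy {acc:.2f}")
```

As the noise level rises, the attainable accuracy drops: when the outcomes the learner is trained on are themselves ambiguous or unstable, no amount of computation recovers a reliable signal.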
Large language models (LLMs), which have become the public face of AI, are not technological discontinuities that magically transcend the limitations of machine learning. Claims that AI is advancing at warp speed confuse a mania with useful progress. I became an enthusiastic user of AI-enabled search back in the 1990s. I thus had high hopes when I signed up for ChatGPT’s public beta in December 2022. But my hopes that it, or some other LLM, would help with a book I was writing were dashed. While the LLMs responded in comprehensible sentences to questions posed in natural language, their convincing-sounding answers were often make-believe.
Thus, whereas I found my 1990s Google searches to be invaluable timesavers, checking the accuracy of LLM responses made them productivity killers. Relying on them to help edit and illustrate my manuscript was also a waste of time. These experiences make me shudder to think about the buggy LLM-generated software being unleashed on the world.
That said, LLM fantasies may be valuable adjuncts for storytelling and other entertainment products. Perhaps LLM chatbots can increase profits by providing cheap, if maddening, customer service. Someday, a breakthrough may dramatically increase the technology’s useful scope. For now, though, these oft-mendacious talking horses warrant neither euphoria nor panic about “existential risks to humanity.” Best keep calm and let the traditional decentralized evolution of technology, laws, and regulations carry on.
Amar Bhidé, Professor of Health Policy at Columbia University’s Mailman School of Public Health, is the author of the forthcoming Uncertainty and Enterprise: Venturing Beyond the Known (Oxford University Press).