Today we are simultaneously enthralled by, and apprehensive of, artificial intelligence (AI). AI holds out the promise of eliminating drudgery in our lives and boosting our living standards. It may also threaten our livelihoods.
To be sure, as a species we have been here before. In the 19th century, the mechanization of agriculture displaced farm workers, often at great human cost and with the loss of entire communities. At the beginning of the industrial revolution, machines proved ideal for spinning and weaving, resulting in the abrupt and often misery-inducing substitution of capital for labor.
In the long run, both innovations yielded massive increases in productivity and broadly raised living standards without sacrificing total employment. Jobs were initially displaced, but new ones (often paying higher wages) followed.
But in the short run, a period long enough to cause immense human suffering and social disruption, mechanized agriculture and industry were greeted with the same distrust, dislike, and worry found in today's discussions about artificial intelligence. Those concerns reflect more than resistance to change. They arise out of real fears about losses of livelihoods, social standing, and aspirations.
Like earlier innovations, artificial intelligence risks displacing jobs. It poses a particular threat to anyone performing repetitive tasks easily replicated by algorithms. The same occurred when computers became commonplace in the 1980s and 1990s: word processing eliminated clerical positions, and spreadsheets displaced workers with basic math skills. Then the advent of the internet upended distribution and retailing. Today, jobs in radiology, journalism, fraud detection, coding, legal writing, drafting, storage, logistics, and transportation, among others, are at risk from machine-learning algorithms.
Moreover, while earlier innovations mostly replaced menial physical labor or lower-value-added clerical work, artificial intelligence threatens middle-class professions, those that have typically required a college education.
Little wonder, therefore, that AI strikes so much fear into the minds of the middle class.
And, as in the past, a sense of helplessness pervades. Long ago, Luddites smashed machines and destroyed factories. But we remember them today not for their success in halting change, but for their futility in reversing ‘progress’.
Are we therefore doomed to become victims of AI?
As tempting as it might be to assume Hollywood got it right and that our future will be determined by malevolent algorithmic Terminators, it is within our power to devise and adapt technologies, including AI, to suit our needs. It is only a question of whether we have the will to do so.
As John Stuart Mill observed in his Principles of Political Economy (1848), the laws governing the production of wealth are fixed by nature, but the distribution of wealth is determined by society. The free association of people, which is to say society, can (and should) mold the economic system to serve the will of the people.
Fifty years later, at the beginning of the 20th century, another great economist, Thorstein Veblen, agreed with Mill and went a step further, offering a vision of how machines could advance social welfare.
In The Theory of Business Enterprise (1904), Veblen observed that the machine, which was becoming ever more pervasive in his age, was the driver of economic progress. But unlike classical economists, including Adam Smith and even Karl Marx, who recognized the productive benefits of specialization and mechanization under capitalism, Veblen drew a crucial social welfare distinction based on the perverse incentives of businessmen (in his day, it was mostly men). In contrast to Smith, whose memorable metaphor of the invisible hand explained how social welfare is advanced by self-interest, Veblen saw the motives of the businessman as inimical to the general welfare.
According to Veblen, the purpose of business was to accumulate profit. To do so, those in charge of business had every incentive to restrict competition, legally or otherwise. Even entirely legal devices, such as advertising, branding, patents, copyrights, and financial engineering, amounted to barriers to entry whose sole purpose was to limit competition and enable super-normal profits.
As any introductory microeconomics text points out, restrictions on competition reduce social welfare by shrinking the combined consumer and producer surplus. In the vernacular, the outcome is less output and higher prices, to the benefit of business and at the expense of consumers and society.
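To make the textbook claim concrete, here is a minimal numerical sketch. The linear demand curve (P = 100 − Q) and constant marginal cost (20) are hypothetical figures chosen for illustration, not drawn from Veblen or anything above:

```python
# Toy welfare comparison: perfect competition vs. monopoly.
# All figures are hypothetical (assumed for illustration only).

a, c = 100.0, 20.0                 # demand intercept (P = a - Q) and marginal cost

q_comp = a - c                     # competitive output, where P = MC  -> Q = 80
q_mono = (a - c) / 2               # monopoly output, where MR = MC    -> Q = 40
p_mono = a - q_mono                # monopoly price                    -> P = 60

total_surplus_comp = 0.5 * (a - c) * q_comp      # all gains from trade realized
cs_mono = 0.5 * (a - p_mono) * q_mono            # consumer surplus under monopoly
ps_mono = (p_mono - c) * q_mono                  # producer surplus (profit)
deadweight_loss = total_surplus_comp - (cs_mono + ps_mono)

print(f"Competitive total surplus: {total_surplus_comp:.0f}")   # 3200
print(f"Monopoly total surplus:    {cs_mono + ps_mono:.0f}")    # 2400
print(f"Deadweight loss:           {deadweight_loss:.0f}")      # 800
```

Note that producer surplus actually rises under monopoly (from zero to 1,600 in this example); it is the larger loss of consumer surplus that shrinks the total.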
Veblen envisioned that in the future business could be run by machines, mechanically optimizing output to the social benefit. In his time, that was not possible. But one day, perhaps soon, AI may offer that tantalizing prospect, serving society in a manner agreeable to Mill or Veblen. In short, AI would replace the capricious behavior of businessmen by putting a machine in charge.
As Veblen might have seen it, it should be possible to devise an artificial intelligence algorithm to set output for the monopolistically competitive firm not where marginal revenue equals marginal cost (which maximizes profit), but where price equals average total cost. At that output the firm earns zero economic profit, mimicking the long-run outcome of the perfectly competitive firm and approximating the socially beneficial result of Smith's 'invisible hand'. (Note that Smith, as much as any economist since, abhorred monopoly power.)
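A short sketch can illustrate the two pricing rules side by side. The demand and cost figures below are again hypothetical, a linear demand curve with a fixed cost added so that average total cost is well defined; the break-even condition P = ATC then reduces to a quadratic in Q:

```python
import math

# Contrast the profit-maximizing rule (MR = MC) with a break-even rule (P = ATC).
# Hypothetical parameters: inverse demand P(Q) = a - b*Q, total cost C(Q) = F + c*Q.

a, b = 100.0, 1.0        # demand intercept and slope (assumed)
F, c = 500.0, 20.0       # fixed cost and constant marginal cost (assumed)

# Profit maximization: MR = a - 2bQ equals MC = c.
q_profit_max = (a - c) / (2 * b)
p_profit_max = a - b * q_profit_max

# Break-even rule: P(Q) = ATC(Q), i.e. a - bQ = F/Q + c.
# Multiplying through by Q gives bQ^2 - (a - c)Q + F = 0; take the larger root,
# the higher-output point at which economic profit is exactly zero.
discriminant = (a - c) ** 2 - 4 * b * F
q_break_even = ((a - c) + math.sqrt(discriminant)) / (2 * b)
p_break_even = a - b * q_break_even

print(f"MR = MC : Q = {q_profit_max:.1f}, P = {p_profit_max:.1f}")   # Q = 40.0, P = 60.0
print(f"P = ATC : Q = {q_break_even:.1f}, P = {p_break_even:.1f}")   # Q = 73.2, P = 26.8
```

Under these assumed numbers, the break-even rule nearly doubles output and more than halves the price, which is the direction of the result a Veblenian machine-run firm would aim for.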
In short, technological advance need not be something only to be feared. As Mill and Veblen both noted, transforming innovation to serve society requires the subordination of economics to the social good. It is within our power to create outcomes we prefer.
Of course, this missive is not intended to suggest a panacea for the many threats that artificial intelligence poses to individual jobs and livelihoods. Instead, it serves to remind us that humans are not destined to be slaves to either innovation or economics. Rather, we form social systems to improve our welfare, which implies that we must elevate our needs over those of 'mere' economics.
Smith, Mill and, especially, Veblen would surely have agreed with that sentiment.