The Rise and Fall of the Socially Beneficial Corporation

Originally published at Project Syndicate | November 11, 2022

The rise of the neoliberal order in the 1970s and 1980s coincided with the demise of the socially beneficial corporation. Since then, the US federal government and other institutions have managed to offset only part of the loss of the broader contributions that big business once made.

CAMBRIDGE – In his new book Slouching Towards Utopia, the economist J. Bradford DeLong points out, correctly, that the “industrial research laboratory and the modern corporation” were the keys to unleashing a radical increase in the rate of scientific and technological innovation, and thus economic growth, from 1870 onward. DeLong also identifies the Treaty of Detroit, a landmark 1950 settlement between General Motors and the United Auto Workers, as a linchpin of American-style post-World War II social democracy. But whatever happened to the behemoth corporations that unlocked decades of growth while sponsoring health insurance and pensions for their employees?

As scientific discovery supplanted mechanical tinkering as the basis for economically meaningful innovation in the late nineteenth century, the required research funding was supplied by the corporations that the Second Industrial Revolution (steel, railroads, mass production) had spawned. “In firms such as American Telephone & Telegraph, General Electric, U.S. Steel or DuPont,” write David Mowery and Nathan Rosenberg in Technology and the Pursuit of Economic Growth, “the development of a strong central office was closely associated with the establishment or significant expansion of a central research facility.”

By allocating their monopoly profits to scientific research and development of technological applications, these corporations extended their market power while also serving a larger, social purpose. Before World War II, this purpose was not being met by the US government, which, starting in the Lincoln administration, had provided federal research support only for the agriculture sector. By 1940, the US government was allocating more research funding to agriculture than to all the constituent agencies that would make up the post-war Department of Defense.

Whether they owed their positions to formal agreements with the federal government (AT&T), patent monopolies (RCA and Xerox), or a combination of innovative research and commercial dominance (DuPont and IBM), the leading research laboratories could afford to invest upstream in the basic science from which technological innovations of commercial significance might evolve.

Then came WWII. Unemployment fell to 1%, and major US employers, restricted by wage controls, had to compete fiercely for labor. Thanks to a pragmatic compromise with the government, they were allowed to offer fringe benefits, such as health insurance and defined-benefit private pensions. This was made possible by the asymmetric tax treatment of these benefits: employers could deduct the costs, and employees did not have to count them as income.

The Treaty of Detroit was both a peacetime validation of the compromise and a broad signal to the private sector, where union membership peaked in the 1950s at about one-third of the labor force. It radically extended the role that dominant companies had come to play in the communities where they were based – this being the era before shareholder primacy came to dominate corporate management thinking.

Within the space of a generation, however, the monopoly profits available for funding R&D and social benefits had come under growing pressure. One after another, the great tech companies of the WWII era succumbed to the forces of Schumpeterian creative destruction and federal antitrust enforcement. 

AT&T and IBM were repeat targets of the Department of Justice’s antitrust division, but it bears mentioning that each case of state intervention proved directly beneficial to the broader enterprise of American innovation. In a 1956 consent decree, AT&T agreed to license freely all of its patents that were not directly related to communications. And in a pre-emptive response to the DOJ’s third assault on it, in 1969, IBM “unbundled” software from its computers, thereby creating an independent software industry.

Other corporate giants failed on their own. US Steel was run over by a combination of more efficient foreign producers and the emergence of domestic “mini-mills” that thrived on scrap metal. RCA and Westinghouse fell victim to short-sighted financial engineering that traded strategic technical capability for instant stock-market gratification in the conglomerate mania of the 1980s. DuPont’s key patents expired, and the productivity of its R&D investments declined in the face of ferocious international competition.

Even as the twentieth century’s private-sector champions withdrew from the scientific and technological frontier, their absence was more than offset by the US federal government, which came to be the leading funder of R&D. With its roots in the WWII Office of Scientific Research and Development, the Department of Defense funded development across all the technologies that combined to make the digital revolution – from silicon to software. And to exploit the new downstream commercial opportunities, the professional venture capital industry emerged, first to fund digital innovation and then, following President Richard Nixon’s “War on Cancer,” to launch biotech startups.

But major corporations’ role in providing for their employees’ social welfare was not similarly offset after their decline. Worse yet, the 1947 Taft-Hartley Act opened the door to state-level “right to work” laws that proved highly effective in reducing union membership in the private sector over the post-war decades.

After President Harry Truman’s 1949 effort to establish universal health care as a federal entitlement was defeated, President Lyndon B. Johnson’s passage of Medicare and Medicaid in 1965 came to mark the limits of publicly underwritten health care in America. In parallel, a systemic shift from defined-benefit to defined-contribution pensions moved the burden of investment risk from the employer to the employee. Today, the great corporations that catalyzed innovation and sponsored social welfare have come and gone, but market power persists, raising the question of where those monopoly profits are going.

During the neoliberal era that is now ending, a new target of opportunity for the application of excess cash flow emerged in the form of corporate stock repurchases. Previously, regulators had barred this practice as a form of market manipulation, but the Securities and Exchange Commission changed the rule in 1982. Now over 60% of US companies buy back their own stock each year, and the annual amounts of these purchases typically exceed the payment of cash dividends (which is unsurprising, given the more favorable tax treatment afforded to capital gains).

The rise of the neoliberal order, so richly documented and analyzed in a recent book by the University of Cambridge’s Gary Gerstle, coincided with the demise of the socially beneficial corporation. Today’s digital tech giants are neither motivated nor equipped to play such a role, which is one reason why they have been struggling for legitimacy. Looking ahead, enhanced investment in technological dynamism and social welfare, under the stress of climate change, will come predominantly from the public sector, if at all. Can the new US CHIPS and Science Act and Inflation Reduction Act kick-start a new era of innovation? We can hope so, but hope is not an active verb.


William H. Janeway, a special limited partner at the private-equity firm Warburg Pincus, is an affiliated lecturer in economics at the University of Cambridge and the author of Doing Capitalism in the Innovation Economy (Cambridge University Press, 2018).
