How to Fix Social Media

by Jim Ramo | February 17, 2020

Illegal and harmful content has invaded the internet, particularly social media platforms. The debate over what to do following Russian interference in the 2016 US election has led to investigations by international regulatory bodies, state attorneys general, the Department of Justice and the Federal Trade Commission. Internet giants, including Facebook, Google, Amazon, Microsoft (LinkedIn) and Twitter, have mounted aggressive lobbying and public relations campaigns in their own defense. The questions at stake are fundamental.

Are these companies mere platforms, or do they belong to the regulated media industry? Are they responsible for their content, or are they merely technology enablers of user-generated communication? Do they facilitate conversation or prevent it? Are they a natural extension of freedom of expression, or do they undermine democracy by, for example, spreading lies and innuendo?

When the social networks began, their goals were simple and altruistic: to unite people around the world and, in so doing, foster relationships, broaden cultural understanding and ‘make the world a better place’. These idealistic goals, however naïve they now appear, remain fundamental claims of the internet giants. Nor are the claims entirely wrong. Social networks and the virtual world provide enormous convenience by bringing together people otherwise separated by distance, time and custom.

But however noble the original intent, the ideals of ‘Big Tech’ have been sacrificed to its business models. Advances in artificial intelligence, massive economies of scale and free, advertising-supported access have produced a witches’ brew ripe for exploitation by nefarious third-party actors and the platforms alike. Harmful disinformation is amplified by the networks themselves: content ‘goes viral’, adding to both the cacophony and the market value of online advertising, while massive data collection combined with AI erodes users’ privacy and choice. Echo chambers reverberate with exaggeration and misrepresentation, generating ever greater advertising revenue for the platforms and ever sharper targeting for advertisers.

Roger McNamee and Shoshana Zuboff, in their books “Zucked” and “The Age of Surveillance Capitalism,” argue that the platforms’ massive collection of user data, further enriched by data from third-party sites, enables a twofold problem. First, the platforms’ algorithms filter content to match user preferences, assembling ‘bubbles’ of like-minded people that supercharge groupthink; this is core to the business model because it drives time on site, engagement and sharing, which in turn increase the platforms’ ad-targeting and pricing power. Second, the same machinery amplifies content indiscriminately, spreading disinformation as fast as, or faster than, accurate information.
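To make that mechanism concrete, here is a minimal sketch, in Python, of how an engagement-maximizing ranker can tighten a bubble. Everything in it is hypothetical: the feature vectors, the scoring formula and the numbers are invented for illustration and do not describe any platform’s actual system.

```python
# A minimal, hypothetical sketch of engagement-maximizing feed ranking.
# Nothing here describes any real platform; all values are invented.

from dataclasses import dataclass

@dataclass
class Post:
    topic_vector: list[float]  # crude stand-in for learned content features
    shares: int                # prior engagement (virality) signal

def predicted_engagement(user_prefs: list[float], post: Post) -> float:
    # Score = similarity to what the user already likes, boosted by virality.
    similarity = sum(u * t for u, t in zip(user_prefs, post.topic_vector))
    return similarity * (1 + post.shares / 100)

def rank_feed(user_prefs: list[float], posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first: the user's existing preferences
    # dominate the ordering, so the feed skews toward more of the same.
    return sorted(posts, key=lambda p: predicted_engagement(user_prefs, p),
                  reverse=True)

if __name__ == "__main__":
    user = [0.9, 0.1]  # a user who already leans heavily toward topic 0
    posts = [Post([1.0, 0.0], shares=500), Post([0.0, 1.0], shares=50)]
    for p in rank_feed(user, posts):
        print(p)
    # The topic-0 post ranks first; each click then nudges the user's
    # preference vector further toward topic 0, tightening the bubble.
```

The feedback loop, ranking by predicted engagement and then updating preferences from clicks, is a simplified stand-in for the amplification dynamic the paragraph describes.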

Attacking the intertwined problems of user-data amalgamation, its over-exploitation and harmful content will require government regulation. Proper regulation first requires a real understanding of the platforms’ business models, of user social behavior and of their use of artificial intelligence, as well as a willingness to trim some of the convenience and conversational benefits that billions of users currently enjoy. It will take a sophisticated effort to curb the abuse of social networks while preserving their benefits. Big Tech will fight legislation and regulation. Effective oversight of the ‘internet commons’ will have to be accomplished across borders, in collaboration among national regulators, and even in the most cooperative of times, which these certainly are not, proper international oversight would take years to achieve. As for harmful content, regulation must strike a balance between free-speech rights and the need to prevent the harm done by intentional disinformation. Given these realities, it is doubtful that effective regulation will solve the social media problem any time soon.

There are two possible paths to navigate this dilemma.

First, social media platforms could be redefined as media companies by repealing Section 230 of the Communications Decency Act of 1996. The companies would then be responsible for the content on their sites, with all the associated operational, legal and reputational costs. The threat of legal liability (those harmed by intentionally misleading content could sue the platform under tort law) should force social media companies to limit the posting and wider distribution of harmful content. In such lawsuits, an independent judiciary determines the harm and the appropriate financial compensation. If victims prevail often enough, and significant monetary damages are awarded, the cost of running sites that turn a blind eye to hate and lies will rise to the point where far less of that ‘content’ is permitted to go viral.
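The economics of that threat can be made concrete with back-of-the-envelope arithmetic. The sketch below is purely illustrative; every figure in it is an assumption, not data from this article or from any real case.

```python
# Illustrative expected-liability arithmetic (all figures invented).
# The tort-law argument: once expected liability per harmful post exceeds
# the ad revenue it earns, letting such posts go viral becomes unprofitable.

p_plaintiff_wins = 0.30        # assumed chance a harmed party prevails
avg_damages = 250_000.00       # assumed average award, in dollars
suits_per_10k_posts = 2        # assumed suits filed per 10,000 harmful posts

expected_liability = p_plaintiff_wins * avg_damages * (suits_per_10k_posts / 10_000)
ad_revenue_per_post = 5.00     # assumed ad revenue from one viral harmful post

print(f"Expected liability per harmful post: ${expected_liability:.2f}")
print(f"Still profitable to host? {ad_revenue_per_post > expected_liability}")
# With these assumptions, liability (~$15 per post) outweighs revenue ($5)
# three to one, so moderation becomes the cheaper choice.
```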

Beyond tort law, another step should be considered now. Parents, schools and communities should strive to create educated social media users. Educators have long argued the merits of critical thinking, and that approach ought to be extended to social media. The curriculum of every high school should include a mandatory course, “The Use of Social Media and Its Consequences,” designed to help young people distinguish fact from fiction and learn how to protect their privacy.

Students would be taught techniques for dealing with phishing and for recognizing fake news and deep-fake videos. They would learn the benefit of signing in with passwords, rather than through something like Facebook Connect, to prevent massive and unintended collection of their personal data. And crucially, they would learn to dismiss ‘facts’ that lack links to verifiable sources, and why it is useful to diversify vendors and news feeds. Education should not, of course, be limited to the young. Congress could mandate a fund, financed by Big Tech, to underwrite an independent internet-education foundation accessible to all.

Polluting social intercourse is a tragedy of the commons. Yet this commons, the realm of free speech, has no owner who could be compensated for the pollution, so some other remedy is needed. A ‘SuperFund’ paid for by companies like Facebook, Google, Microsoft and Twitter, underwriting the education of youth and adults, would be a reasonable step. Importantly, such a ‘Social Media Education SuperFund’ (SMESF) would be set up independently of the social media companies, with an impartial board of trustees, financial and educational auditors, and other appropriate forms of governance.

In 2020, knowing how to use social media appropriately is as important to everyday life as learning algebra. Society can’t wait for traditional forms of regulation to resolve this crisis. Innovative, implementable ideas are needed now. Nothing less than the future of our democracy depends on it.


About the Author

Jim Ramo has spent his entire career in the media business. He was CEO of Movielink, a joint venture of five major movie studios that launched the delivery of movies over the internet. Before Movielink, he was part of the founding team of DirecTV, serving as Executive Vice President in charge of programming, sales, marketing and customer service.
