
Can Democracy and Social Media Coexist? Next Year’s US Election May Hold the Answer

Back in the ancient days before the internet, when phones hung on walls and dinosaurs roamed the earth, the promise of true mass communication seemed far too good to be true. The media that did exist was fundamentally exclusive – a one-way system where big business and big government held nearly all the microphones. Getting around the gatekeepers, seeing through the filters, and deciphering the truth behind the slogans, slants and euphemisms took inordinate amounts of effort and patience. In short, it was hard to find out what was going on behind all the propaganda … and even harder to tell anyone else about it.

Mass connectivity promised to democratize information – and for a little while, it did. Beginning right around the time The Matrix inspired a generation of moviegoers to believe in a separate reality beneath the world we see, bits of that reality began bubbling to the surface. The availability of unfiltered news articles sent ripples through an establishment media that was concerned about losing its influence, as it no longer held a virtual monopoly on the printing press. But the biggest change was yet to come.

Soon afterwards, YouTube made consumer-to-consumer video communication as easy to access as any cable channel. Independent video channels were basically incentivized to show mass media in a poor light, because that was the simplest and most effective way to steal part of their market share.

Alternative newsmagazines were an inconvenience to the traditional narrative, but user-created videos could be visceral and electrifying. Legacy media organizations, with plenty of skeletons in their closets, were suddenly naked and exposed to a new kind of predator that would do anything to take them apart.

Few would wish to mourn the establishment media of a generation ago, with its unlimited servility to corporate and government power. Yet there was at least a kind of structure and stability to it. The editorial standards were skewed, but they were skewed in reliable directions. Society, flawed though it was, at least held together.

At around this time, a new and ideologically extreme bomb-thrower decided to walk onstage and upset the apple cart, smirking confidently and offering constant self-praise, smashing the status quo to smithereens and showing no interest in picking up the pieces.

Not Trump, of course – at least, not yet.

Facebook.

 

The Eye of the Hurricane

In retrospect, a world so perfectly primed for cynicism was never going to be a match for Facebook. When dissatisfaction with the establishment media was at its peak, Facebook offered an alternative that skyrocketed in popularity and hasn’t stopped since. The company’s promotion of essentially unmoderated, free communication between users was hard to resist, particularly in democracies that gave lip service to the importance of free speech. If free speech was supposed to be good, it became hard to articulate why more speech wouldn’t be better. As a consequence, Facebook was able to successfully use this framing to get its foot in the door, beginning its mission of unbridled data collection while staying ahead of watchdogs and tech journalists who were slow to formulate a nuanced and clear response to the danger it presented.

The Arab Spring provided Facebook and other social media platforms with an ideological coup to accompany the real-world transfers of power that took place in the Middle East, as a result of the widespread protests which were facilitated online. In its wake, social media was painted as the best sort of destabilizing force – a way for people to collectively push back against the excesses of the powerful.

But those were the early days of social media, before bad-faith actors, including governments, had learned how to use the system to their advantage. It soon became clear that for every visible innovation that empowered ordinary users over the forces of disinformation, there were several invisible innovations that reversed the direction of influence.

On one side, the consumer-to-consumer format let users bypass the de facto content filters of other types of media: the often-aligned interests and biases of media company owners, advertisers, and official sources, for example. The rich and poor could have their voices heard at a much closer level of parity than ever before. Attempts at spin could be juxtaposed with counter-evidence from any level of society, or simply treated with mockery.

On the other side, however, censorship from the platform (or imposed by governments) could block certain messages from the marketplace of ideas. Governments could order Facebook to provide data about specific users based on their posts (or simply collect it without permission). All kinds of organizations could collect deeply personal information about social media users, and use it against them for purposes of propaganda. Fake accounts could be used to sway online conversation, edited photos and videos could give misleading accounts of events, and people could be subjected to harassment or threats based on their posting habits.

The tug-of-war for the minds of entire populations has been a central theme of the debate surrounding social media in recent years – and this debate is only increasing in intensity. Some have called for regulation, others for antitrust action, others for greater transparency and user freedom, and still others for a reduction in overzealous application of existing rules surrounding posting and advertising.

Each side digs in its heels – yet there in the middle, at the eye of the storm, Facebook itself continues to grow in size and influence. The company makes routine and empty promises about its plans to increase user privacy and reform its policies to clamp down on abuse. All the while, it skirts responsibility by classifying itself as a platform, and then a publisher, and then a platform again, depending on the needs of the situation.

Perhaps fittingly for these volatile times, the center of today’s cultural and ideological fight is itself chameleonic, operating with no clear set of values or vision. The ship of society has a captain, but a curious one: Facebook’s CEO often seems to show little concern about the rocks up ahead if we stay the course.

One such rock looms over the landscape in 2020, with potentially redemptive – or catastrophic – consequences for how Facebook is perceived and dealt with in much of the world. The US election in November of next year will act as a referendum on many things, not least of which will be the role of social media in democratic societies.

Just over a century ago, an iceberg sent an unsinkable ship to the bottom of the ocean. Will the 2020 US election play the same role this time around, and at last pierce the momentum of today’s most powerful media company?

 

The Iceberg vs the Zuckerberg

US political culture has been at a constant fever pitch since campaigning began in earnest for the 2016 election. What may seem bizarrely entertaining from the outside is experienced very differently within the country, as partisan criticism escalates into threats and protests escalate into violence.

Nor are these phenomena limited to the US itself; the politics of division have sent shockwaves across Europe, Latin America, and elsewhere. Street fights and even mass shootings seem to trace back to online radicalization of the kind that is facilitated by social media, and sometimes these episodes are even live-streamed by the perpetrators.

Regarding sociopolitical destabilization, the most persuasive criticism of companies like Facebook is as follows:

Whatever the faults of the mainstream media, they generally exercised restraint rather than pit people against their neighbors. Where there was bias, it generally went in one direction, rather than pulling in all directions simultaneously. Editorial decisions may be dubious sometimes, but at least we knew who the editors were and where they were coming from; social media sources are often impossible to pin down, and fake stories often go to elaborate lengths to appear genuine. The urge to share posts that fit with pre-conceived ideas often overrides people’s better judgment, giving provocative – even incendiary – content a considerable advantage over carefully reasoned pieces that look for a sensible middle ground.

Moreover, consumers of 20th century media content would generally be given the same set of facts to work with, even if they came to different conclusions; whereas Facebook algorithms create tailored newsfeeds that reinforce existing beliefs, leaving no common set of facts to help people understand each other. These polarizing forces can cause society to rip apart at the seams, while helping to elect leaders who would rather spend their energy further picking away at fault-lines than looking for responsible long-term solutions.

To complicate matters further, Facebook’s business model relies on the very features that create these problems. Precisely targeted advertising depends on rampant data collection, and Facebook’s size makes it very difficult to vet each advertiser, advertisement, and social media post. The company also gets its power and influence from high user engagement rates, so it will hardly be amenable to solutions that involve fundamentally re-thinking its approach to newsfeed algorithms.

Taking away Facebook’s unique selling points (USPs) may be good for society as a whole – but for the company itself, the idea will be a non-starter. The path that Facebook has chosen, therefore, is the only one that makes business sense: Leave the USPs alone, put band-aids on the main problems, dodge responsibility whenever possible, and solemnly promise to do better in the future whenever the public pressure gets too intense.

By adopting this strategic framework, it becomes easy to understand all of Facebook’s recent moves – and all the more fascinating to wonder whether they will be enough to guide the company through the storm that is coming.

 

The Strategy in Action

Above all else, Facebook wants to be able to continue its winning streak – and that means avoiding antitrust action. The more it can avoid public scrutiny and outrage, the better its chances at staying whole. That’s why the company is eager to outsource its editorial decisions (and therefore the blame for those editorial decisions) to regulators. A recent article in the Guardian shows clearly which forms of government interference are welcome in the eyes of Facebook, and which aren’t:

Mark Zuckerberg has said Facebook cannot be expected to manage the crisis around election misinformation campaigns on its own. “I don’t think as a society we want private companies to be the final word on making these decisions.”

“We would be better off if we had a robust democratic process setting rules on how we want to arbitrate the processes to protect values that we hold dear, but in the absence of regulation we are going to do the best we can to build up sophisticated systems to address these issues,” he said.

In September 2018, Facebook introduced new measures meant to curb foreign influence in elections, including identifying and removing fake accounts and preventing accounts from outside the United States from purchasing advertisements on the platform.

Despite repeatedly stating his support for legislating consumer and election security on Facebook, Zuckerberg fired back at calls from politicians like Elizabeth Warren to break up the company, saying it was investing billions of dollars into “systems that are more sophisticated than what governments have” to address privacy and other social issues.

“It is not the case that if you broke up Facebook into a bunch of little pieces you wouldn’t have those issues – you would still have them but you would be less equipped to deal with them,” he said.

With government regulating online speech, Facebook wouldn’t have to get its hands dirty by engaging in the controversies generated by the content it hosts on its platform. Zuckerberg has consistently sounded this note; he often makes clear that he wants great power, but not great responsibility:

The Facebook chief used the interview to push back against the idea that his company should determine what constitutes political speech. He’s previously called for more government regulation around political speech and advertising.

“I think setting the rules around political advertising is not a company’s job,” he told host George Stephanopoulos. “There have been plenty of rules in the past. It’s just at this point they’re not updated to the modern threats that we face. We need new rules.”

Yet the company is taking active steps to combat misinformation in some specific instances.

Facebook plans to ban misinformation about the 2020 Census as well as a range of disruptive information about the elections.

The announcements come as part of a civil rights audit into Facebook’s practices.

Facebook is banning misinformation about the 2020 Census and elections as it updates its policies to deal with online trolls and other bad actors.

In a report released over the weekend, Facebook COO Sheryl Sandberg detailed a number of steps it’s taking to secure the 2020 elections. The company is banning ads that tell people not to vote and last year banned a range of “misinformation” on voting, including posts that misrepresent when, where or how to vote or threaten violence relating to voting or an election, it said.

The company also plans to introduce a “census interference policy” later this year, in the lead-up to the once-a-decade count of all U.S. residents. The policy will “prohibit misrepresentations of census requirements, methods, or logistics, and census-related misinformation,” the company said in its report.

Facebook has also announced the launch of a civil rights task force and election monitoring center “to combat anti-democratic tactics such as voter intimidation and suppression”. This initiative is undergoing a few trial runs elsewhere in the world, yet it is already running into controversy:

It’s clear that Facebook needs help, and this spring, ahead of elections in Taiwan and Europe, the business has made an unprecedented move. Using a third party non-profit, it has opened up its data to 60 outside researchers from 30 academic institutions in 11 different countries.

The crusade is no doubt important, but some of Facebook’s partners have proved a bit questionable. The organization overseeing this particular project is Social Science One, and its founder has gotten in trouble with Facebook before for data harvesting.

One of the funders is also the Charles Koch Foundation, a major financial contributor to climate misinformation.

What’s more, last week Facebook defended its partnership with Check Your Fact, an organization that is known for its ties to white nationalists and is also funded by the Koch brothers.

While it’s admirable that the largest social media platform is taking action on fake news and also attempting some level of transparency, if Facebook continues to align itself with politically motivated organizations and funders, the company’s conclusions might not be trusted by the general public or policy makers.

This lack of trust is also the result of a decade filled with misinformation coming from Facebook’s own executives. The company’s handling of the 2016 US election was roundly criticized for its unwillingness to deal consistently with misleading advertisements on its platform as well as its failure to come clean on the extent of its data breaches that allowed user information to fall into the hands of bad-faith actors. A recent summary in the New York Times reviews some of the more recent scandals:

  • A security flaw that potentially exposed the public and private photos of as many as 6.8 million users on its platform to developers (the company told European regulators almost two months after discovering the bug and waited almost three months before disclosing publicly).
  • A separate “bug” that exposed up to 30 million users’ personal information in late September. Among the information exposed: emails and phone numbers, and profile information including recent search history, gender and location.
  • An admission that the company “unintentionally uploaded” the email contacts of 1.5 million new Facebook users since May 2016.
  • And, of course, the Cambridge Analytica scandal, where data from tens of millions of users was misappropriated and shared to be used for profiling for political campaigning.

The above is a highly abbreviated list of Facebook’s poor handling of user data, for which it occasionally apologizes and pays fines. Yet just as on Wall Street, the fines for bad behavior seem to be accepted as just part of the cost of doing business.

This business model, apparently inspired by the notorious Ford Pinto strategy, is a gutsy move for a company with so much else going for it. Yet Facebook’s consistent embrace of its full-speed-ahead, damn-the-torpedoes approach has worked out well for the company thus far, at least judging by its bottom line.

Facebook’s most recent major data breach fine – this one for its misuse of user data in connection with the Cambridge Analytica scandal, which may have significantly disrupted the 2016 US election – elicited a mixture of relief and apparent excitement among its investors, rather than the chastening effect one might have hoped for. A Washington Post report put the fine in context, publishing the following summary just hours later:

Facebook on Wednesday announced another quarter of strong revenue growth, as Wall Street appeared unfazed by the company’s historic $5 billion privacy settlement with the U.S. Federal Trade Commission.

The social network said its second-quarter revenue, which is driven by advertising, grew by 28 percent to $16.9 billion, beating analysts’ estimates of $16.49 billion.

The good news for Facebook – that it can withstand multi-billion-dollar fines without batting an eyelash – would seem to be bad news for a democratic system whose proper functioning depends on media outlets that feel at least some pressure to take their civic responsibility seriously. A recent Buzzfeed article opened with a stark graphic, noting: “After announcing the anticipated settlement, Facebook’s market capitalization climbed by approximately $40 billion in just over an hour of after-hours trading.”

At the same time, however, the company faces other issues that will continue to hang over its head for some time to come. From the aforementioned article in the Post:

But Facebook acknowledged there are more hurdles ahead. In its earnings report, Facebook disclosed that the FTC had informed the company in June that it had opened an antitrust investigation into its business. Adding to the woes, on Tuesday, the Justice Department announced a sweeping antitrust investigation into Silicon Valley tech giants, the latest probe faced by Facebook and other companies around the globe.

 

The Coming Wave of Fake News

On top of these concerns, next year’s US election is universally expected to be the most challenging stress test ever faced by social media companies, with all sides devoting enormous amounts of money and effort to ‘winning’ an increasingly combative culture war. Genuine fans of the candidates, as well as trolls, political action committees, disinformation campaigns, clickbait-heavy news outlets, sockpuppet accounts, and whatever else will be invented over the next 14 months, will be out in force to try to game the social media system to their advantage.

They will have more experience and knowhow to draw on this time around, along with even more powerful tools at their disposal. Consider, for example, the rise of deepfake technology, which can produce doctored videos that are nevertheless utterly convincing. One widely shared example places a digitally generated deepfake rendering of Jim Carrey in The Shining, in the role that Jack Nicholson actually played.


Voices can be convincingly faked using similar technology. As Charlie Warzel frequently points out, the mere existence of deepfake technology potentially casts all video evidence in doubt, regardless of whether it is actually used. The very fact that any given video might be a deepfake is enough to weaken its credibility and perceived authenticity.

Facebook’s policies have already been put to the test on this issue. Several weeks ago, a doctored video of Congresswoman Nancy Pelosi appeared online, giving the impression that she was slurring her words. (In reality, the video had been deliberately slowed down to create this effect.) Despite calls for Facebook to censor this content, the social network issued a statement to the effect that the video violated none of their posting policies, and could therefore remain on its platform – although it was nevertheless downgraded to appear less frequently in people’s newsfeeds.

Unconvinced, a pair of artists decided to test Facebook’s commitment to this decision by creating a deepfake video of Mark Zuckerberg, and posting it on Facebook to see if it would be deleted. The video shows the Facebook founder saying to the audience, “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures. I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future.” To its credit – at least in terms of consistency, if nothing else – Facebook stuck to its stated policy once again, and refused to censor the video despite its provocative content.

Deepfakes are, of course, just one tool in the ideologue’s arsenal. Where videos are concerned, selective edits can be even more damaging. The recent episode involving the Covington High School boys is one of the most stunning. Most media outlets (mainstream as well as alternative) took at face value a highly misleading video clip, used it to condemn its underage subjects in the most vituperative language, and in many cases refused to apologize even when the full context made the original deception clear.

Facebook is at the center of this minefield and played a significant role in creating it. Yet there is much more to the story. Twitter and YouTube have earned their share of criticism as well, and indeed every such platform struggles with similar challenges. But Facebook still wears the crown, and famously failed to control its own platform in the run-up to the 2016 election, so all eyes will be on how it handles the next one.

 

The End of an Era?

As the field of presidential candidates begins to slowly take shape, unsettling questions mark the media landscape ahead. Will the 2020 election be the iceberg to this titanic social media megaplatform? Will Facebook itself be the iceberg to the democratic experiment? Or will we all, by some miracle, glide past the hazards as they scratch the sides of the ship – and make it through to the other side, where cooler heads can help the internet find its balance?

The cost to Facebook could be severe, depending on the result of the US election as well as the public mood that prevails in its wake. Some are already calling for disruptive regulation as the only way to rein in a disruptive business model. The key challenge, as Charlie Warzel writes in the New York Times, is:

not only how to regulate Facebook but of how to conceptualize an entity that operates largely without meaningful competitors and collects troves of information on more than two billion human beings. Facebook — alongside Big Tech counterparts of Google and Amazon — is operating at a vast scale. As such, any meaningful attempts to hold the company accountable should be equally disruptive, to borrow a Silicon Valley term of art….

David Carroll, a professor at the New School, who took legal action against Cambridge Analytica in the United Kingdom, agreed that, “no fine would be adequate for a monopoly. Advertisers, users, and investors do not register the fines in their economic activity. [They’re] still buying, using investing. It causes no material harm to Facebook. It’s just bad PR.”

What does he have in mind? “Seizure of servers, criminal penalties against corporate directors and Stop Data Processing orders would actually be painful penalties,” he argued. Or, even more intense, “force Facebook to delete its user data and models and start from scratch under rigorous collection restrictions.”

The US, which has in recent years deregulated media and other industries, allowing giant companies to merge with other giants, and then to merge again with even larger giants, would need to reverse course considerably before considering such an aggressive approach to a company like Facebook.

Yet we live in a time of wide pendulum swings in our political culture. Polarized opinion, which Facebook itself has played a large role in exacerbating, could create enough momentum for a leftist political candidate to enter office with a considerable public mandate to break up the internet giants. Such a scenario may seem unlikely, yet after witnessing the astonishing political developments of the past four years in Europe and America, anyone in the business of predicting the future should know by now that they are playing a fool’s game.

And what of Facebook’s game? Can more speech really equal better speech? Does our current path lead anywhere we want to go? Can Facebook find a way to crack the code, prevent the abuse of its platform, and make social media great again? If so, then it can fulfill its original promise of bringing people closer together through positive interactions and time well spent.

Here’s hoping that Mr. Zuckerberg, who singlehandedly changed media forever, has one more trick to pull out of his hat. Yet for the sake of his company, and ours, along with the well-being of the entire global community, each of us must also take responsibility to be the change we wish to see in the world.

We need to find a new way to get along together, rediscover our common interests, and reward unity as a value in itself.
