Expect This Election to Play Differently Than 2020 on Social Media

(Illustration: Michelle Cohn)

(Bloomberg Businessweek) -- Heading into the US presidential election four years ago, Facebook and Twitter instituted substantial policies to combat misleading political content, touted their investments in content moderation and promised to do all they could to avoid a free-for-all on their platforms. Just before Election Day in 2020, Facebook Chief Executive Officer Mark Zuckerberg told investors the election would be a test of the company’s yearslong effort to protect the political process. “Election integrity is and will be an ongoing challenge,” he said. “And I’m proud of the work we’ve done here.”

When then-President Donald Trump used his social media accounts to sow discord and his supporters resorted to violence on Jan. 6, 2021, both companies banned him. Since then, however, each has reinstated Trump’s accounts. They’ve also moved away from the approach they took in 2020, albeit each in its own way. Zuckerberg has tried to reduce the prominence of political content on services owned by Meta Platforms Inc., while Elon Musk, who bought Twitter in 2022 and renamed it X, has mostly rejected content moderation and has gone all-in for Trump.

Across the social media industry, companies have reduced staffing on content moderation, often leaning more on artificial-intelligence-powered technology that may have trouble picking up nuance. The only major social media platform taking a more aggressive approach this election cycle may be TikTok, a service that had been available in the US for barely two years at the time of the 2020 election.

Academics say all the elements are in place for dangerous abuse of the platforms. They’re concerned about posts that seek to confuse people about the voting process, attempts to declare victory early and campaigns using disinformation to sow doubts about the integrity of the outcome. The platforms don’t allow such behavior, but the specific ways they enforce their rules could make a huge difference.

The techniques for creating disinformation have improved since the last presidential election, especially with the advent of generative AI tools that can quickly produce passable falsehoods and fake imagery, says Brian Fishman, a former Facebook counterterrorism policy director and the co-founder of Cinder, a trust and safety software platform. Users have gotten savvier about dealing with disinformation, he says, but they’re also exhausted. “We’ve gotten to the point where the notion of misinformation has become so prevalent that people have sort of given up on fact and are turning to their gut instead,” he says, “and then cherry-picking whatever information seems to support those preconceived notions.”

A Meta spokesperson disputed the idea that the company had stepped back from election integrity work, writing in an email that “while each election is unique, in recent years Meta has developed a comprehensive approach for helping ensure the integrity of elections on our platforms: one that gives people a voice, helps support participation in the civic process and combats voter interference and foreign influence.” A spokesperson for X said that the company is actively enforcing its content policies and has been engaging directly with regulators, political campaigns and others about the threat landscape. “We have made sure that those in charge of administering elections have proper pathways to escalate threats to their personnel and processes, helping ensure that everyone has access to the ballot box,” the spokesperson wrote.

The most drastic reversal over the last four years has taken place at Twitter. Musk cited what he saw as its politically charged approach to policing misinformation as a major reason he bought the company. He loosened policies at X, allowing more of what the company had previously considered abusive behavior. X has become less engaged in coordinating with other companies and the government about identifying manipulation happening on multiple platforms, say people familiar with the matter, who asked to remain anonymous because they weren’t authorized to speak publicly. Musk has also become one of Trump’s biggest financial supporters and public champions, and he uses his own account to spread political messaging that is often unabashedly partisan and demonstrably untrue.

Despite evidence that foreign operatives are still at work on social media, Musk also dismantled much of the infrastructure Twitter had developed to combat the abuse of its platform for political purposes. He made deep cuts to the divisions dedicated to trust and safety issues. Staff departures have rendered some tools for combating large-scale manipulation useless. According to former employees who asked not to be named for fear of retaliation, Musk has also undermined these systems by slashing spending on the cloud computing capacity needed for them to function properly. Some of those decisions are irreversible because they led to the permanent loss of historical data the tools needed to analyze the platform, the people say.

A current X employee, who asked not to be named discussing private business matters, says the company has begun rebuilding some of its trust and safety capacity this year. But this person also says the team is primarily focused on responding to specific complaints related to issues like child safety and harassment rather than election interference.

One consequence of the changes at X is that it’s ceded its status as the digital water cooler of the chattering classes, says Daniel Kreiss, a professor of political communication at the University of North Carolina at Chapel Hill. The platform has always been small compared with Facebook and YouTube, but it was disproportionately influential because of its penetration among journalists and political figures. “Twitter was the center of the political universe” for more than a decade, says Kreiss. “There is no one master platform like that anymore.”

If Musk has tried to shape the political landscape, Zuckerberg has done what he can to stay away from it. He’s said one of his biggest mistakes in 2020 was going too far in policing politically charged content, and Meta has changed its practices to show less of it overall. In February the company said it wouldn’t recommend content it deems political to Instagram and Threads users unless they’d specifically chosen to follow the account it came from. This makes political posts significantly less prominent than other types of content, which Meta’s recommendation systems now show to people based on their perceived interests, not who they follow. “We don’t think it’s our place to amplify political news,” Adam Mosseri, the head of Instagram, told Bloomberg News in July. “We think that comes along with too many problems to be worth any potential upside there might be on engagement or revenue.”

Meta will also take a lighter touch on Election Day. In 2020 it added contextual information to every post about the election, directing users to a voter information guide; after the election those labels told users that Joe Biden had won. The company will continue to label crucial posts that dispute the election results, but not as broadly as it did in 2020, according to a spokesperson.

One platform that’s amped up its defenses is TikTok, which is owned by the Chinese company ByteDance Ltd. and is facing a US ban. It now labels videos from state-controlled accounts from dozens of countries. As of this spring, it keeps posts from those accounts out of the main feeds of global audiences if it determines those posts are about politics or current events. TikTok has also started releasing more frequent reports about the influence operations it detects. This year alone it disclosed it had identified covert networks from China, Russia and Iran targeting the US.

In the past, independent efforts to track activity on social media have helped uncover important trends, such as the rise of the #StopTheSteal movement that helped stoke the violence at the US Capitol in 2021. But companies have shut off key tools for doing this kind of work. Before Musk, Twitter made it possible for independent researchers to access the so-called firehose, a complete real-time stream of everything posted on the platform. That access is now prohibitively expensive for many researchers. In August, Meta shut down CrowdTangle, a tool used by researchers and reporters to study misinformation across its most popular apps. TikTok has stopped providing public-facing data about how much any specific piece of content is being viewed.

Fishman, the former Facebook employee, says tech companies at least have a better idea of how to respond to attempts to manipulate their platforms and will likely be more efficient at using the resources they still devote to trust and safety work. But, he says, the reduction in investment in staffing those operations means they’re less prepared than they should be.

©2024 Bloomberg L.P.