Bad actors are actively working to spread disinformation among the American public, according to the U.S. government. Researchers say social media companies are making it more difficult to track just how deep these campaigns go.
The FBI has warned that Russia, China and Iran are leveraging major social media platforms and paid celebrity influencers to spread propaganda around the presidential election, including fake videos intended to sow discord and undermine confidence in the vote. Collecting data on these incidents, researchers say, is a challenge.
“The platforms have restricted or monetized access to application programming interfaces, or APIs, which allow groups like mine, journalists or researchers at universities to be able to track this stuff,” said Nina Jankowicz, who previously led an advisory board on disinformation in Joe Biden’s administration.
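As a point of reference, the kind of programmatic access Jankowicz describes can be as simple as a script polling a platform’s public endpoints. The sketch below is illustrative only and not drawn from any group’s actual tooling: it samples recent public posts from Reddit’s JSON listing API, with the subreddit name and keyword as hypothetical placeholders, to show the sort of collection that gated or expensive APIs now put out of reach for small research shops.

```python
# Illustrative sketch only: one way a researcher might sample recent public
# posts from Reddit's public JSON listing endpoint. The subreddit and keyword
# below are hypothetical placeholders; real monitoring at scale would need
# authenticated, rate-limited access under the platform's current data terms.
import requests

USER_AGENT = "research-sketch/0.1 (contact: example@example.org)"  # placeholder


def fetch_recent_posts(subreddit: str, limit: int = 25) -> list[dict]:
    """Return the newest public posts from a subreddit's JSON listing."""
    url = f"https://www.reddit.com/r/{subreddit}/new.json"
    resp = requests.get(
        url,
        params={"limit": limit},
        headers={"User-Agent": USER_AGENT},
        timeout=10,
    )
    resp.raise_for_status()
    return [child["data"] for child in resp.json()["data"]["children"]]


if __name__ == "__main__":
    # Count how many recent posts mention a (hypothetical) narrative keyword.
    posts = fetch_recent_posts("politics")
    keyword = "ballot"
    hits = [p for p in posts if keyword in p.get("title", "").lower()]
    print(f"{len(hits)} of {len(posts)} recent posts mention {keyword!r}")
```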
Jankowicz is the CEO of the American Sunlight Project, which aims to expose disinformation campaigns spreading online. But this year’s election has strained resources.
“So three huge sites that we used to have a window into, we can no longer actually get a fulsome estimate of the numbers of posts,” she said of X, Facebook and Reddit. “That’s leaving aside TikTok and YouTube, which have always been pretty opaque and difficult to sort of track things on those platforms.”
Reddit, with more than 73 million active users worldwide, shuttered most public access to its data in 2023. “Reddit needs to be a self-sustaining business, and to do that, we can no longer subsidize commercial entities that require large-scale data use,” Reddit CEO Steve Huffman said in a post on the platform at the time. The resulting costs are a burden that most independent research and policy shops can’t afford, Jankowicz said.
X took a similar turn after Elon Musk bought the company, raising the price of “enterprise”-level access to $42,000 a month for the type of data organizations like Jankowicz’s would need to do more in-depth analysis and research.
“My budget definitely does not allow for us to access that API,” she said.
Meta, which runs Facebook, also changed access: in August it shut down CrowdTangle, a widely used tool that could search publicly available posts on the platform. The move was decried by nonprofits, researchers and news organizations.
Asked about researchers struggling to access misinformation data, Reddit pointed NOTUS to its election safety and Reddit 4 Researchers programs and noted that its content policy has “long prohibited” disinformation campaigns. The researcher program launched in beta form a month ago, and while 280 researchers worldwide applied, only “a few dozen” currently have access, according to the program’s official subreddit.
A Meta spokesperson defended the company’s decision to sunset CrowdTangle, telling NOTUS over email that “it did not provide a complete picture of what was happening on our platforms.”
“We’ve built new, more comprehensive tools with more comprehensive data, on our own systems, that enable independent study of key social issues, including elections,” the Meta spokesperson said.
While the new tools are open to academic researchers and nonprofits, Jankowicz said her team still finds them “woefully inadequate” in real-world use.
Social media companies have come under heavy scrutiny from both sides of the aisle over their moderation policies around disinformation and misinformation. In recent years, a legal campaign pushed by Republican attorneys general against alleged censorship has turned the tide away from measures intended to crack down on false — and potentially inflammatory — information on the platforms. Congressional probes, too, have put many social media executives in the hot seat.
Meta CEO Mark Zuckerberg sent Rep. Jim Jordan, chairman of the House Judiciary Committee, a letter about misinformation this summer. In it, Zuckerberg said that Meta had “temporarily demoted” content while waiting for fact-checkers during the 2020 election. “We’ve changed our policies and processes to make sure this doesn’t happen again,” Zuckerberg said in the letter. “For instance, we no longer temporarily demote things in the U.S. while waiting for fact-checkers.”
According to CNN, employees at major social media companies have reported “an overall decline in their engagement with the issue,” despite many of the platforms maintaining publicly stated guidelines and safeguards for misinformation and disinformation.
Musk significantly rolled back misinformation safeguards at X. His America PAC has also become fertile ground for these disinformation campaigns. The pro-Trump group launched an “Election Integrity” community page on X, bearing the America PAC logo and moderated by the organization, to solicit claims of possible voter fraud. Users on the page have recirculated debunked stories.
X did not respond to requests for comment. In a statement to The Washington Post, a Meta spokesperson said protecting the election remains a top priority and that “no tech company does more to protect its platforms — not just during election periods but at all times.”
The federal government, which has more resources to access data from social media companies, has also become less directly engaged with the platforms. The Washington Post reported that the Department of Homeland Security “pulled back from direct outreach to companies such as Meta, Google and X” and that the FBI has updated its guidance to make clear that companies have agency over how they act on information the government shares with them.
The intelligence community sees itself as instrumental in filling information gaps and acting as a go-between for social media companies, the government and voters.
“We do recognize that, obviously, state and local officials are critical in this space,” an FBI official said during a press briefing in late October. “They are often themselves potential targets of various campaigns, whether those are on the criminal side, the physical threat side or on the foreign influence or counterintelligence side.”
Working alongside partners at the Cybersecurity and Infrastructure Security Agency, intelligence professionals have traveled across the country to train election officials on how to react when they see possible threats to democracy. “I think kind of across the board, particularly this cycle, one of our focuses has been getting information to them,” the FBI official said.
“Since 2023, we’ve conducted over 180 tabletop exercises across the country, over 480 election security-specific trainings, reaching over 30,000 election stakeholders,” a CISA official told reporters during the late October briefing.
But even with all those training events and exercises, there’s still a major concern about what disinformation could do to an American election in the days after the votes come in. One official from the Office of the Director of National Intelligence said that they’re looking for signs of foreign governments “seeking to foment violent protests or otherwise engage in violence in the period between Election Day and inauguration.”
“Iran and China, for example, probably will be opportunistic and quickly tailor their narratives in response to events during the postelection period,” one ODNI official said of the foreign nations’ tactics. “Iran may try to incite violence like they did after the last presidential election in that cycle, in December 2020.”
“We think that is a real possibility,” a CISA official said. “We want the American people to know that these things may occur and to understand the facts behind their potential impact.”
—
John T. Seward is a NOTUS reporter and an Allbritton Journalism Institute fellow.