YouTube’s stricter policies against electoral disinformation were followed by sharp drops in the prevalence of false and misleading videos on Facebook and Twitter, according to new research published Thursday, underscoring the power of the video service on social media.
Researchers from New York University's Center for Social Media and Politics found a significant increase in YouTube videos about voter fraud shared on Twitter in the immediate aftermath of the Nov. 3 election. That month, those videos consistently accounted for about a third of all election-related videos shared on Twitter. The voter fraud videos most shared on Twitter came from channels that had promoted electoral disinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.
But the proportion of voter fraud videos shared on Twitter dropped sharply after Dec. 8. That was the day YouTube said it would remove videos promoting the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the share of YouTube voter fraud content shared on Twitter had dropped below 20 percent for the first time since the election.
The proportion fell further after January 7, when YouTube announced that any channel violating its electoral disinformation policy would receive a "strike," and that channels receiving three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was about 5 percent.
The trend was replicated on Facebook. A post-election spike in the sharing of videos containing fraud theories peaked at about 18 percent of all videos on Facebook just before Dec. 8. After YouTube introduced its stricter policies, the proportion fell sharply for much of the month, before rising slightly ahead of the January 6 riot at the Capitol. It fell again, to 4 percent by Inauguration Day, after the new policies took effect on January 7.
To arrive at their findings, the researchers collected a random sample of 10 percent of all tweets each day. They then isolated tweets that linked to YouTube videos. They did the same with YouTube links on Facebook, using a Facebook-owned social media analytics tool, CrowdTangle.
From this large data set, the researchers filtered for YouTube videos about the election in general, as well as about voter fraud, using a set of keywords such as "Stop the Steal" and "Sharpiegate." This allowed the researchers to gauge the volume of YouTube voter fraud videos over time and how that volume changed in late 2020 and early 2021.
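The filtering step described above can be sketched in a few lines of Python. This is purely illustrative, not the researchers' actual pipeline: the link pattern, function names and all keywords beyond the two the article mentions ("Stop the Steal" and "Sharpiegate") are assumptions.

```python
import re

# Keywords named in the article; the study's full keyword list is assumed
# to be longer and is not reproduced here.
FRAUD_KEYWORDS = ["stop the steal", "sharpiegate"]

# Matches the common YouTube link formats (an assumption about the data).
YOUTUBE_LINK = re.compile(r"https?://(?:www\.)?(?:youtube\.com/watch|youtu\.be/)")

def has_youtube_link(text):
    """True if the post text contains a YouTube video link."""
    return bool(YOUTUBE_LINK.search(text))

def mentions_voter_fraud(text):
    """True if the post text matches any voter-fraud keyword."""
    lowered = text.lower()
    return any(kw in lowered for kw in FRAUD_KEYWORDS)

def fraud_share(posts):
    """Fraction of YouTube-linking posts that match voter-fraud keywords."""
    yt_posts = [p for p in posts if has_youtube_link(p)]
    if not yt_posts:
        return 0.0
    fraud_posts = [p for p in yt_posts if mentions_voter_fraud(p)]
    return len(fraud_posts) / len(yt_posts)
```

Computing this share per day over the sampled tweets and Facebook posts would yield the kind of time series the researchers describe, with drops visible after Dec. 8 and January 7.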
Disinformation on the main social networks has proliferated in recent years. YouTube in particular has lagged behind other platforms in cracking down on different types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. However, in recent weeks YouTube has tightened its policies, such as banning all misinformation about vaccines and suspending the accounts of prominent anti-vaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.
Megan Brown, a research scientist at New York University's Center for Social Media and Politics, said it is possible that after YouTube banned the content, people simply could no longer share videos promoting voter fraud. It is also possible that interest in voter fraud theories declined considerably after states certified their election results.
But the bottom line, Ms. Brown said, is that "we know that these platforms are deeply interconnected." YouTube, she noted, has been identified as one of the most-shared domains on other platforms, including in both of Facebook's recently released content reports and in NYU's own research.
“It’s a huge part of the information ecosystem,” Ms. Brown said, “so when YouTube’s platform gets healthier, others do as well.”