Ukraine’s official Twitter account wants @Russia to be removed from the platform. YouTube is facing scrutiny for allowing a Russian broadcaster that is widely viewed as part of the country’s propaganda machine to continue making money from ads on the video site. And TikTok, a service that didn’t even exist during the 2014 crisis in Ukraine over Russia’s annexation of Crimea, is now offering an unprecedented close-up of the front lines through videos — some authentic, others not.
Social media companies have in recent years grappled with how to handle misinformation and conspiracy theories about a pandemic, a fraught US presidential election and an insurrection, often while facing intense criticism from lawmakers for doing too little or too much. Now, the platforms are scrambling to confront a growing list of challenges, some of which appear to be almost unheard of in their histories, as war unfolds in Europe.
On Thursday, Twitter faced a new kind of moderation decision when the verified account for the country of Ukraine posted: “hey people, let’s demand @Twitter to remove @Russia from here … they should not be allowed to use these platforms to promote their image while brutally killing the Ukrainian people @TwitterSupport.” Twitter spokesperson Trenton Kennedy declined to comment on whether Twitter might remove the official Russian account referenced in the tweet, or the Kremlin’s verified account, from the platform.
“That question of should we let state actors that prevent their own citizens from seeing the free expression on these [Western social media] platforms have the right to use those platforms as mouthpieces for their own propaganda is a really nuanced and complex topic,” Renee DiResta, technical research manager at the Stanford Internet Observatory, told CNN Business.
Other challenges from the conflict are familiar to the major platforms, such as how to prevent the rapid spread of misinformation. But given the life-or-death circumstances and Russia’s history of deploying propaganda and covert online manipulation, the stakes are heightened.
After Russia’s invasion of Ukraine officially began, social media was flooded with photos of bombed-out buildings, first-person accounts from Ukrainian civilians fleeing their homes, and even videos purporting to be from soldiers engaged in the fighting. Users were left to sort out what was real from what was old, fake or manipulated content meant to sow confusion and discord in a conflict that is being waged in part through the use of propaganda.
In one instance, a video appearing to show a soldier parachuting into the conflict went viral on TikTok Thursday morning, racking up millions of views. But the video had originally been posted to Instagram about seven years ago, NBC disinformation reporter Ben Collins noted on Twitter. In some other cases, clips from video games or videos from old conflicts recirculated on the platform, purporting to show what was happening in Ukraine.
The social media companies should be “making sure there’s no overt manipulation on their platforms, and then trying to surface accurate information, particularly within trends, to help the public understand what’s going on,” DiResta said. “In these moments, there is always going to be something that gets through unfortunately, so I think … the platforms being as transparent as they can be is very important.”
Twitter (TWTR) and Facebook (FB) parent company Meta both told CNN Business that they have teams monitoring for misinformation, coordinated inauthentic behavior and other potential issues related to the conflict. TikTok did not respond to requests for comment about its response to the war on its platform.
Even with those preparations, there have already been some missteps.
Twitter faced backlash in the days leading up to the invasion for having briefly removed the accounts of open source researchers who had been sharing information on the platform about the movement of Russian troops and equipment. Twitter’s head of site integrity, Yoel Roth, said on Twitter Wednesday that the removals were due to a “small number of human errors” made as part of an effort to enforce the company’s policies against manipulated media. Twitter said the accounts were quickly restored.
“Twitter’s top priority is keeping people safe,” Twitter spokesperson Trenton Kennedy said in a statement. “As we do around major global events, our safety and integrity teams are monitoring for potential risks associated with the conflict to protect the health of the service, including identifying and disrupting attempts to amplify false and misleading information and to advance the speed and scale of our enforcement.”
On Thursday and Friday, Twitter executives promoted live audio discussions about the conflict on Twitter Spaces hosted by reporters at major news outlets. The company also shared a series of safety tips for users on the ground in Ukraine or Russia, and curated a Twitter “moment” where it is compiling the latest updates from reliable sources. Twitter also launched a feature allowing users to affix a sensitive content warning to photos and videos they tweet, and on Friday paused advertisements in Russia and Ukraine “to ensure critical public safety information is elevated.”
On Facebook, the war in Ukraine has yet to be added to the platform’s “crisis response” page as an event where users can mark themselves safe. But the company did spin up a new feature that allows users in Ukraine to lock their profiles for “an extra layer of privacy and security protection.” On Instagram, the platform is showing users in the country alerts on how to protect their accounts.
“We have established a Special Operations Center to respond to activity across our platform in real time,” Meta spokesperson Dani Lever told CNN Business Thursday. “It is staffed by experts from across the company, including native speakers, to allow us to closely monitor the situation so we can remove content that violates our Community Standards faster,” Lever said.
On YouTube, videos from Russian state-funded television network RT continued to run advertisements as of Friday morning. That means the media company whose American arm was forced by the US Justice Department in 2017 to register as a “foreign agent” and that intelligence researchers have said “conducts strategic messaging for [the] Russian government” continues to be able to monetize its presence on the video-sharing platform. YouTube labels RT’s videos with a disclaimer that it is funded by the Russian government.
YouTube spokesperson Ivy Choi declined to comment about RT directly, but said Google is evaluating what new US sanctions and export controls may mean for YouTube and its other platforms.
Google Europe said on Twitter it was enhancing security controls for users in Ukraine, and that its intel teams were working to address disinformation campaigns, hacking and “financially motivated abuse.”
“On YouTube, we’re prominently surfacing videos from trusted news sources and working hard to remove content that violates our policies,” Google said. “Over the last few days, we’ve removed hundreds of channels & thousands of videos.”
Taking action on Russian accounts carries its own risks for the platforms, however.
On Friday, the Russian government moved to “partially restrict” Facebook access in the country after accusing the platform of unlawful censorship. Russia’s ministry of communications claimed Facebook had “violated the rights and freedoms of Russian citizens” when the social network on Thursday allegedly clamped down on several Russian media outlets on its platform.
In response to the allegations, Meta global affairs president Nick Clegg said Russia had ordered the company to “stop the independent fact-checking and labelling” of four Russian outlets.
“We refused,” Clegg said in a statement. “Ordinary Russians are using our apps to express themselves and organize for action. We want them to continue to make their voices heard, share what’s happening, and organize.”