Self-Regulation and Litigation as Incipient Content Moderation in the US

May 23, 2022

While Europe debates how best to regulate digital platforms, the US relies on the platforms’ self-regulation to police false information. These efforts may expand following a recent statement by New York Attorney General Letitia James. Twitter announced on Thursday that it intends to reduce what it considers to be false information around crises, such as armed conflicts, natural catastrophes, and public health emergencies. The platform defines a crisis as a widespread threat to life, physical safety, health, or subsistence. The firm’s move comes shortly after James announced she would open an investigation into the role of social media and online message boards in the recent Buffalo mass shooting.

Twitter’s head of safety and integrity said that if a claim on the site is deemed misleading, the platform will not amplify or recommend that material, including in the home timeline, search, and explore functions. The company will determine whether claims are misleading by verifying them against multiple trustworthy, publicly available sources, such as evidence from conflict-monitoring groups, humanitarian organizations, journalists, and others. It also announced that it will add warning notices to some tweets, including those from high-profile accounts and those owned by governments or state-affiliated media. Users will have to click through the warning to view the tweet, which cannot be liked, retweeted, or shared.

It is unclear, however, whether this policy will endure. Elon Musk has stated that he wants the social network to reduce the amount of moderation it applies to tweets and has affirmed that he wants to take the firm in a different direction, citing what he calls the website’s censorship as a specific concern. Yet he has given mixed signals. In a video in which he appears alongside Europe’s commissioner for the internal market, Thierry Breton, Musk said his views aligned with Europe’s Digital Services Act (“DSA”). “I think we’re very much of the same mind,” Musk said in the video. The DSA is a regulation that establishes an unprecedented standard for the accountability of online platforms regarding harmful and illegal content. The new rules, which are not yet in force, would require major social media companies, among other things, to quickly address illegal content and conduct regular risk assessments.

With no pending legislation addressing content moderation in the US, the main—if not only—avenue for policing misinformation and harmful content appears to be self-regulation. Nevertheless, the risk of litigation may encourage firms to adopt stricter safety and moderation standards. The New York Attorney General stated she would open an investigation into the role of social media and online message boards in the recent tragedy in Buffalo, where an 18-year-old gunman opened fire on a Tops supermarket, killing ten people and injuring three more. According to the suspected shooter’s writings, white supremacist content on 4chan radicalized him and appeared to influence his decision to carry out the lethal attack. The investigation was prompted by a referral from New York Governor Kathy Hochul, who in the days following the horrific shooting urged social media companies to monitor content more actively for dangerous extremism.

How would Musk’s acquisition of Twitter affect this new policy? That remains unclear. Yet after the 2021 Capitol attack and the many other tragedies that have occurred in part because of the role social media play in amplifying false information and hate speech, the US can otherwise only resort to prosecuting platforms on tenuous legal grounds. New regulation could benefit not only social media users but also the platforms themselves by providing clear rules to follow, which would facilitate compliance and reduce litigation risk.