When a social media company “shapes and controls its overall content according to a discernible viewpoint,” it shouldn’t be able to claim legal immunity, said Adam Candeub, acting assistant secretary of Commerce for communications and information, during an Aug. 24 panel discussion held by the Federalist Society.
Section 230 of the 1996 Communications Decency Act generally absolves online platforms from liability for user-generated content. But it shouldn’t be interpreted so broadly as to also protect the firms’ decisions to remove or reshape the content, Candeub asserted during the virtual discussion.
“When they remove, promote, comment, or edit user content … when social media firms act in this way, they are speakers and they do not qualify for Section 230 protection,” he said.
The law specifically spells out legal immunity for actions taken “in good faith to restrict access” to content the firms deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
But this shouldn’t be interpreted as an invitation to block any content the firm labels as objectionable, Candeub said. Instead, it should only apply to content falling into similar categories of inappropriateness as the law outlines “to create a family-friendly environment,” he said.
The Commerce Department asked the Federal Communications Commission (FCC) in July to clarify the language of Section 230 along the lines described by Candeub.
The Commerce petition was called for by President Donald Trump in his May 28 executive order that accused online companies of “selective censorship that is harming our national discourse” and “invoking inconsistent, irrational, and groundless justifications to censor or otherwise restrict Americans’ speech.”
Major digital companies such as Facebook, Twitter, and Google claim that their services are designed and operated to be politically neutral. Mounting evidence indicates, however, that the companies are infusing political preferences into their content policing.
Tech companies have been ramping up content policing in the lead-up to the election, urged on by Democratic lawmakers, while Republicans have complained that the policies are enforced unevenly.
Google-owned video platform YouTube has recently started banning content that includes information obtained by “hacking” and that “may interfere with democratic processes, such as elections.”
It isn’t clear how the company plans to determine whether any particular information was hacked and what counts as interference.
It continues to allow national news outlets such as NBC, CNN, 60 Minutes, and The New York Times to display investigative stories based on materials obtained outside legal channels.
–EPTimes and Wire services