Supreme Court sidesteps ruling on scope of Section 230

The Supreme Court on Thursday handed twin victories to technology platforms, sidestepping an effort to limit a powerful liability shield for user posts and ruling that a law allowing suits for aiding terrorism does not apply to the ordinary activities of social media companies.

The court’s unanimous decision in one of the cases, Twitter v. Taamneh, No. 21-1496, effectively resolved both cases and allowed the justices to avoid difficult questions about the scope of a 1996 law, Section 230 of the Communications Decency Act.

In a brief, unsigned opinion in the liability shield case, Gonzalez v. Google, No. 21-1333, the court said it would not address the application of Section 230 to a complaint that appeared to state “little, if any, plausible claim for relief.” The court instead sent the case back to the appeals court “in light of our decision in Twitter.”

The tech industry cheered the court’s decision to leave Section 230 untouched, saying the law paved the way for the modern Internet, with sprawling social media platforms that serve users constantly updated feeds of messages, images and videos.

“Companies, scholars, content creators and civil society organizations that joined us in this case will be reassured by this result,” Halimah DeLaine Prado, Google’s general counsel, said in a statement.

The Twitter case arose from the death of Nawras Alassaf, who was killed in a 2017 terrorist attack on the Reina nightclub in Istanbul for which the Islamic State claimed responsibility. His family sued Twitter, Google and Facebook, saying the companies had allowed ISIS to use their platforms to recruit and train terrorists.

Justice Clarence Thomas, writing for the court, said the “plaintiffs’ allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack.”

He wrote that the defendants transmitted staggering amounts of content. “It appears that for every minute of the day, approximately 500 hours of video are uploaded to YouTube, 510,000 comments are posted on Facebook, and 347,000 tweets are sent on Twitter,” Justice Thomas wrote.

And he acknowledged that the platforms use algorithms to direct users to content that interests them.

“So, for example,” wrote Justice Thomas, “a person who watches cooking shows on YouTube is more likely to see cooking videos and ads for cookbooks, while someone who likes to watch professorial lectures may see collegiate debates and ads for TED Talks.”

“But,” he added, “not all content on defendants’ platforms is so benign.” In particular, “ISIS uploaded videos that raised funds for weapons of terror and that showed brutal executions of soldiers and civilians.”

The platforms’ failure to remove such content, Justice Thomas wrote, was insufficient to establish liability for aiding and abetting, which he said required plausible allegations that they “provided such knowing and substantial assistance to ISIS that they culpably participated in the Reina attack.”

The plaintiffs had not cleared that bar, Justice Thomas wrote. “Plaintiffs’ claims fall far short of any plausible allegation that Defendants aided and abetted the Reina attack,” he wrote.

The platforms’ algorithms did not change the analysis, he wrote.

“The algorithms appear agnostic about the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content,” Justice Thomas wrote. “Thus, the fact that these algorithms match some ISIS content with some users does not convert defendants’ passive assistance into active abetting.”

A contrary ruling, he added, would expose the platforms to potential liability for “any ISIS terrorist act committed anywhere in the world.”

The court’s decision in the Twitter case allowed the justices to avoid ruling on the reach of Section 230, a law meant to nurture what was then a nascent creation called the Internet.

Section 230 was a response to a decision holding an online message board liable for what a user had posted because the service had done some content moderation. The provision said: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Section 230 helped enable the rise of huge social networks like Facebook and Twitter by ensuring that the sites did not incur legal liability with every new tweet, status update and comment. Limiting the law’s sweep could expose the platforms to lawsuits alleging they steered users to posts and videos that promoted extremism, incited violence, damaged reputations and caused emotional distress.

The case against Google was brought by the family of Nohemi Gonzalez, a 23-year-old student who was killed in a restaurant in Paris during terrorist attacks there in November 2015, which also targeted the Bataclan concert hall. The family’s lawyers argued that YouTube, a subsidiary of Google, had used algorithms to send Islamic State videos to interested viewers.

It is unclear what the ruling will mean for legislative efforts to eliminate or modify the legal shield.

A growing bipartisan group of lawmakers, academics and activists has become skeptical of Section 230, saying it has shielded giant tech companies from repercussions for disinformation, discrimination and violent content on their platforms.

In recent years, they have advanced a new argument: that the platforms lose their protections when their algorithms recommend content, target ads or introduce new connections to their users. These recommendation engines are pervasive, powering features like YouTube’s autoplay and Instagram’s suggestions of accounts to follow. Judges have mostly rejected this reasoning.

Members of Congress have also called for changes to the law. But political realities have largely stopped those proposals from gaining traction. Republicans, angered by tech companies removing posts from conservative politicians and publishers, want the platforms to remove less content. Democrats want the platforms to remove more, such as false information about Covid-19.

Critics of Section 230 had mixed reactions to the court’s decision, or lack of one, in the Gonzalez case.

Senator Marsha Blackburn, a Tennessee Republican who has criticized big tech platforms, said on Twitter that Congress had to step in to reform the law because the companies “turn a blind eye” to harmful activities online.

Hany Farid, a professor of computer science at the University of California, Berkeley, who signed a letter supporting the Gonzalez family’s case, said he was heartened that the court had not offered a full defense of the Section 230 liability shield.

He added that he thought “the door is still open for a better case with better facts” to challenge the tech platforms’ immunity.

Tech companies and their allies have warned that any changes to Section 230 would cause online platforms to take down much more content to avoid potential legal liability.

Jess Miers, legal counsel for Chamber of Progress, a lobbying group that represents tech companies including Google and Meta, the parent company of Facebook and Instagram, said in a statement that arguments in the case made it clear that “changing the interpretation of Section 230 would create more problems than it would solve.”

David McCabe contributed reporting.
