Supreme Court sidesteps ruling on scope of internet liability shield

The Supreme Court said Thursday that it would not rule on a question of major concern to the tech industry: whether YouTube can invoke a federal law that shields internet platforms from legal responsibility for what their users post. The question arose in a case filed against YouTube by the family of a woman killed in a terrorist attack.

Instead, in a companion case, the court ruled that a different law, one allowing suits for "knowingly providing substantial assistance" to terrorists, generally does not apply to the technology platforms at all, meaning there was no need to decide whether the liability shield applied.

The court's unanimous decision in the second case, Twitter v. Taamneh, No. 21-1496, effectively resolved both cases and allowed the justices to sidestep difficult questions about the scope of the 1996 law, Section 230 of the Communications Decency Act.

In a brief, unsigned opinion in Gonzalez v. Google, No. 21-1333, the case concerning YouTube, the court said it would "decline to address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief." Instead, the court returned the case to the appeals court "to consider plaintiffs' complaint in light of our decision in Twitter."

The Twitter case concerned Nawras Alassaf, who was killed in a 2017 terrorist attack on an Istanbul nightclub for which the Islamic State claimed responsibility. His family sued Twitter and other tech companies, saying they had allowed ISIS to use their platforms to recruit and train terrorists.

Justice Clarence Thomas, writing for the court, said the plaintiffs' "allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack."

That holding allowed the justices to avoid ruling on the scope of Section 230 of the Communications Decency Act, a 1996 law intended to nurture what was then a fledgling creation called the internet.

Section 230 was a response to a court decision holding an online message board liable for what a user had posted because the service had engaged in some content moderation. The provision states: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Section 230 helped enable the rise of huge social networks like Facebook and Twitter by ensuring that the sites did not assume legal liability with every new tweet, status update, and comment. Limiting the law's sweep could expose the platforms to lawsuits over directing people to posts and videos that promote extremism, incite violence, harm reputations, and cause emotional distress.

The ruling comes at a time when developments in cutting-edge artificial intelligence products are raising profound questions about whether laws can keep pace with rapidly changing technologies.

The case was brought by the family of Nohemi Gonzalez, a 23-year-old college student who was killed in November 2015 when terrorists attacked a Paris restaurant, part of coordinated assaults that also targeted the Bataclan concert hall. The family's lawyers argued that YouTube, a subsidiary of Google, had used algorithms to recommend Islamic State videos to interested viewers.

A growing bipartisan group of lawmakers, academics, and activists has become skeptical of Section 230, saying it has shielded giant tech companies from the consequences of disinformation, discrimination, and violent content on their platforms.

In recent years they have advanced a new argument: that the platforms forfeit their protection when their algorithms recommend content, target ads, or introduce new connections to their users. These recommendation engines are ubiquitous, powering features like YouTube's autoplay and Instagram's suggestions of accounts to follow. The justices largely rejected this argument.

Members of Congress have also called for changes to the law, but political realities have largely kept those proposals from gaining traction. Republicans, angered by tech companies that remove posts by conservative politicians and publishers, want the platforms to take down less content. Democrats want the platforms to remove more, such as false information about Covid-19.
