Google has warned that a ruling against it in an ongoing Supreme Court (SC) case could jeopardize the entire internet by removing a key protection against lawsuits over content moderation decisions involving artificial intelligence (AI).
Section 230 of the Communications Decency Act of 1996 currently offers blanket "liability protection" with regard to how companies moderate content on their platforms.
However, as reported by CNN, Google wrote in a legal filing that the internet could be flooded with dangerous, offensive, and extremist content should the SC rule in favor of the plaintiff in the Gonzalez v. Google case, which centers on YouTube's algorithms recommending pro-ISIS content to users.
Automation in moderation
As part of a nearly 27-year-old law already targeted for reform by US President Joe Biden, Section 230 is not equipped to address modern developments such as artificially intelligent algorithms, and this is where the problems begin.
The crux of Google’s reasoning is that the internet has grown so rapidly since 1996 that integrating artificial intelligence into content moderation solutions has become a necessity. “Virtually no modern website would work if users had to sort content themselves,” the submission reads.
This abundance of content means tech companies must use algorithms to present it to users in a digestible way, from search engine results to flight listings to job recommendations on employment sites.
Google also argued that, under current law, tech companies can legally avoid liability simply by refusing to moderate their platforms at all, but that doing so puts the internet at risk of becoming a "virtual cesspool."
The tech giant also pointed out that YouTube's Community Guidelines expressly prohibit terrorism-related content, adult content, violence, and "other dangerous or objectionable content," and that it continually tweaks its algorithms to preemptively block such material.
It also claimed that in Q2 2022, “approximately” 95% of videos violating YouTube’s “Violent Extremism Policy” were automatically detected.
Nonetheless, the petitioners in this case allege that YouTube failed to remove all ISIS-related content, thereby contributing to "the rise of ISIS."
In an attempt to further distance itself from liability on this point, Google responded that YouTube's algorithms recommend content to users based on similarities between a given piece of content and content the user has already shown interest in.
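Google's filing does not describe the mechanics, but recommendation of this kind is commonly implemented as content-based filtering: items are represented as feature vectors and ranked by their similarity to what a user has already watched. The sketch below is a minimal, hypothetical illustration of that general technique, not YouTube's actual system; the feature vectors, function names, and video IDs are invented for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user_history: list, catalog: dict, top_k: int = 3) -> list:
    """Rank catalog items by similarity to the mean of the user's watched items.

    This is a toy content-based filter: the user's 'interest' is approximated
    by averaging the vectors of previously watched videos.
    """
    profile = np.mean(user_history, axis=0)
    scored = {vid: cosine_similarity(profile, vec) for vid, vec in catalog.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

# Toy example: 3-dimensional topic vectors (e.g. music, news, sports)
catalog = {
    "video_a": np.array([0.9, 0.1, 0.0]),
    "video_b": np.array([0.1, 0.8, 0.1]),
    "video_c": np.array([0.8, 0.2, 0.0]),
}
history = [np.array([1.0, 0.0, 0.0])]            # user mostly watches "music"
print(recommend(history, catalog, top_k=2))      # ['video_a', 'video_c']
```

The point of such a design, and of Google's argument, is that recommendations follow from similarity to a user's prior viewing rather than from any editorial endorsement of the recommended content.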
This is a complicated case, and while it’s easy to subscribe to the idea that the internet has outgrown manual moderation, it’s equally compelling to suggest that companies should be held accountable when their automated solutions fall short.
After all, if even the tech giants cannot guarantee what appears on their platforms, users relying on filters and parental controls have little assurance that those measures are effectively blocking objectionable content.