The US Supreme Court has taken up two cases involving tech companies’ potential liability for algorithmically recommending posts to users that promote terrorism.
The first case, Gonzalez v. Google LLC, will decide whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.
The case was brought by the plaintiff, Reynaldo Gonzalez, against Google under the Anti-Terrorism Act following the death of his daughter in an ISIS attack in Paris in November 2015. Though Gonzalez acknowledges that Section 230 protects Google from liability for ISIS’s posting of videos on YouTube, he argues that YouTube’s targeting and recommending of such videos to users falls outside that protection, as it allegedly amounts to materially supporting terrorism.
The other case, Twitter, Inc. v. Taamneh, will decide whether a defendant that “provides generic, widely available services” to its users and “‘regularly’ works to detect and prevent terrorists from using those services” could be considered to “knowingly” provide substantial assistance to terrorists under 18 U.S.C. § 2333 if it is found that the defendant could have taken stronger action to prevent such use of the platform. In layman’s terms, the case argues that sites could violate the same Anti-Terrorism Act if they took steps to ban terrorists from their platforms but failed to adequately remove the content in question.