A diverse group of businesses, internet users, academics, and human rights experts defended Big Tech’s legal protection on Thursday in a Supreme Court case regarding YouTube algorithms.
They argued that stripping federal legal protection from algorithm-driven recommendation engines would upend how the open internet functions.
The group included tech companies like Meta, Twitter, and Microsoft, as well as critics of Big Tech such as Yelp and the Electronic Frontier Foundation, and even Reddit and a collection of volunteer Reddit moderators.
They stated in their filings that Section 230 of the Communications Decency Act, which the Supreme Court could potentially narrow in this case, is crucial for the proper functioning of the web.
This law has been used to protect all websites, not just social media platforms, from lawsuits over third-party content.
Microsoft said a ruling that tech platforms can be sued over their recommendation algorithms would jeopardize GitHub, the vast online code repository used by millions of programmers.
“The feed uses algorithms to recommend software to users based on projects they have worked on or showed interest in previously,” Microsoft wrote. It added that for “a platform with 94 million developers, the consequences [of limiting Section 230] are potentially devastating for the world’s digital infrastructure.”
Microsoft’s search engine Bing and its social network, LinkedIn, also enjoy algorithmic protections under Section 230, the company said.
New York University’s Stern Center for Business and Human Rights argued that it would be extremely difficult to craft a rule that singles out algorithmic recommendations for liability, and that doing so could cause valuable speech to be removed or hidden, particularly speech from marginalized or minority groups.
“Websites use ‘targeted recommendations’ because those recommendations make their platforms usable and useful,” the NYU filing said.

“Without a liability shield for recommendations, platforms will remove large categories of third-party content, remove all third-party content, or abandon their efforts to make the vast amount of user content on their platforms accessible,” the filing continued. “In any of these situations, valuable free speech will disappear—either because it is removed or because it is hidden amidst a poorly managed information dump.”