Lawmakers Choose the Wrong Path, Again, With New Anti-Algorithm Bill



Facebook needs to be reined in. Lawmakers and everyday users are mad, having heard former Facebook employee Frances Haugen explain how Facebook valued growth and engagement over everything else, even health and safety. But Congress’s latest effort—to regulate algorithms that recommend content on social media platforms—misses the mark.

We need a strong privacy law. We need laws that will restore real competition and data interoperability. And then, we need to have a serious discussion about breaking Facebook up into its component parts. In other words, the federal government should go back and do the rigorous merger review that it should have done in the first place, before the Instagram and WhatsApp purchases.

It’s unfortunate that lawmakers are, by and large, declining to pursue these solutions. As they express shock and rage at Haugen’s testimony, we continue to see them promote legislation that will entrench the power of existing tech giants and do grievous harm to users’ right to free expression.

Personalized Recommendations Aren’t The Problem

The most recent effort is a bill called the “Justice Against Malicious Algorithms Act” (JAMA Act, H.R. 5596). This proposed law, sponsored by House Energy and Commerce Committee Chairman Frank Pallone (D-NJ) and three others, is yet another misguided attack on internet users in the name of attacking Big Tech.

The JAMA Act takes as its premise that because some large platforms are failing at content moderation, the government should mandate how services moderate users’ speech. This particular attempt at government speech control focuses on regulating the “personalized algorithms” that are used to promote or amplify user speech and connect users to each other.

The JAMA Act would remove Section 230 protections from internet services when a "personalized algorithm" suggests third-party content to a user who then suffers a severe physical or emotional injury as a result. Essentially, the bill would make online services liable whenever they serve a user content from another user that the first user later claims harmed them.

One of the biggest problems is the bill's vague definition of "personalized algorithm": any algorithm that uses "information specific to an individual." That definition goes well beyond personally identifiable information. It could cover a user sharing their location with a service, or simply indicating the type of content they'd like to see.

Personalized recommendations happen a lot in the online world because they’re useful to users. Users who have seen a good article, watched an interesting video, or shown interest in a product or service are often interested in other, similar things.
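To see how broad the bill's definition is, consider a minimal, hypothetical sketch (the function and data below are invented for illustration, not drawn from any real service): even ranking a feed by topics a user has said they like relies on "information specific to an individual," and so would plausibly count as a "personalized algorithm" under the bill.

```python
# Hypothetical sketch of a trivially "personalized" feed.
# All names here (personalized_feed, posts, user_prefs) are
# invented for illustration.

def personalized_feed(posts, user_prefs):
    """Rank posts matching a user's stated topic preferences first.

    Even this one-line sort uses "information specific to an
    individual" (the user's chosen topics), so it would arguably
    meet the bill's definition of a "personalized algorithm."
    """
    # False (preferred topic) sorts before True; sort is stable.
    return sorted(posts, key=lambda p: p["topic"] not in user_prefs)

posts = [
    {"id": 1, "topic": "knitting"},
    {"id": 2, "topic": "board-games"},
    {"id": 3, "topic": "running"},
]

# A user who said they like board games sees those posts first.
feed = personalized_feed(posts, user_prefs={"board-games"})
print([p["id"] for p in feed])  # board-games post ranked first: [2, 1, 3]
```

If even this kind of basic, user-requested sorting triggers liability, it is hard to see what curation a service could safely offer.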

The vague definition of "personalized algorithm" makes it almost impossible for a service to know which of its efforts to organize and curate user-generated content will fall under it. That's a big problem, because the bill strips Section 230's protections based on this vague definition.

Once Section 230 protections are gone, it will be much easier to sue internet services over the suggestions they make. The bill applies to any service with more than 5 million monthly users, which includes a vast swath of the net—most of it services that are much smaller than Facebook and don’t have anywhere near its level of content moderation resources.

Section 230 puts sharp limits on lawsuits when the legal claims are based on other people's speech. JAMA would remove those limits in many situations. For instance, Section 230 means someone who maintains an online discussion forum usually can't be sued over one of the discussions they hosted—but under JAMA, the forum owner could be sued if comments were presented according to a "personalized recommendation."

A flood of low-quality lawsuits will be a strong incentive for online services to censor user speech and curtail basic tools that allow users to find topics and other users who share their interests and views.

For example, Section 230 generally prevents reviews sites from being sued over user reviews. But under JAMA, a site like Yelp could be sued for user speech, since the reviews are presented in a personalized way. A site like Etsy or eBay could be held liable for recommended products. Personalized news aggregators like Flipboard could be sued over news articles they didn’t write, but have served to users.

Punishing Recommendations Won’t Solve Anything

Given the new legal landscape JAMA would create, it's easy to imagine web services and media companies dramatically paring back the recommendations they make. But that won't be the worst harm. In all likelihood, the worst will be the flood of meritless lawsuits, a huge burden for any web service that doesn't have the money or clout of a Google or a Facebook. It'll be very hard to create a small company or website that can afford to defend against the inevitable legal challenges over the content it recommends.

There are a few narrow exemptions in the bill, including one for businesses with 5 million or fewer unique monthly visitors. But that threshold wouldn't exempt many mid-size services. It would still make a site like BoardGameGeek.com liable for recommending you connect with the wrong game, or the wrong gamer community; knitting site Ravelry could be in trouble for connecting you with the wrong crafter. Fitness sites like Strava, MapMyFitness, and RunKeeper are all well above the size limit, and could lose protection for recommending other users' running and hiking routes.

This bill seems to be a direct response to ex-Facebooker Frances Haugen’s suggestion that the problem with her former employer is “the algorithm.” It’s true that content moderation at Facebook is terrible, a point we’ve made many times. But lawmakers jumped from there to drawing up a proposal that would punish a vast swath of services that simply use software to make personalized suggestions for other internet users. That’s bad policy, and it’s an attack on user speech rights.

The real answers to the problems the authors of this bill seek to fix—competition, privacy, interoperability, and strong antitrust action—are there for the taking. By introducing bills like the JAMA Act, lawmakers like Rep. Pallone are simply choosing not to use the tools that will actually get the job done. They've chosen the wrong path again and again: with last year's EARN IT Act, this year's PACT Act, and the SAFE Tech Act. These bills would create an explosion of new business for tort lawyers—including attorneys who file suit on behalf of online personalities who are in the business of spreading medical lies, hateful speech, and election disinformation.

Users should be able to get information tailored to them, and they should be able to choose which platform delivers it. Creators should be able to have their content reach users who are interested in it, with the help of the platforms they choose. All users should be able to connect with other users according to their interests, beliefs, and opinions—activities that are protected by the First Amendment. To have a truly vigorous digital public square, we need low barriers to entry for the platforms that can provide it.



