Lawmakers have spent years investigating how hate speech, disinformation and bullying on social media sites can wreak havoc in the real world. More and more, they are pointing fingers at the algorithms that power sites like Facebook and Twitter, the software that decides what content users see and when they see it.
Some lawmakers in both parties argue that when social media sites boost the reach of hateful or violent posts, the sites become accomplices. They have proposed bills that would strip companies of a legal shield that protects them from lawsuits over most content posted by their users, in cases where the platform amplified a harmful message's reach.
The House Energy and Commerce Committee will hold a hearing on Wednesday to discuss several of the proposals. The hearing will also include testimony from Frances Haugen, the former Facebook employee who recently leaked a trove of revealing internal company documents.
Removing the legal shield, known as Section 230, would mean a sea change for the internet, because it has long enabled the vast scale of social media websites. Ms. Haugen said she supports changing Section 230, which is part of the Communications Decency Act, so that it no longer covers certain decisions made by algorithms on technology platforms.
But what exactly counts as algorithmic amplification? And what exactly is the definition of harmful? The proposals offer very different answers to these crucial questions. How the bills answer them could determine whether the courts find them constitutional.
Here’s how the bills tackle these thorny issues:
What is algorithmic amplification?
Algorithms are everywhere. At its most basic, an algorithm is a set of instructions telling a computer how to do something. If a platform could be sued anytime an algorithm did anything to a post, products that lawmakers aren’t trying to regulate could be ensnared.
Some of the proposed laws define the behavior they want to regulate in general terms. A bill sponsored by Senator Amy Klobuchar, Democrat of Minnesota, would expose a platform to lawsuits if it “promotes” the reach of public health misinformation.
Ms. Klobuchar’s health misinformation bill would give platforms a pass if their algorithms promoted content in a “neutral” way. This could mean, for example, that a platform that ranked posts chronologically wouldn’t have to worry about the law.
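To illustrate the distinction the bill draws, here is a minimal, hypothetical sketch contrasting a chronological feed with an engagement-ranked one. The post fields and the use of likes as a ranking signal are invented for illustration and do not come from any real platform:

```python
# Hypothetical sketch: two ways a feed might order the same posts.
posts = [
    {"id": 1, "timestamp": 100, "likes": 5},
    {"id": 2, "timestamp": 200, "likes": 50},
    {"id": 3, "timestamp": 300, "likes": 1},
]

# "Neutral" ranking: newest posts first, ignoring engagement entirely.
chronological = sorted(posts, key=lambda p: p["timestamp"], reverse=True)

# Engagement ranking: posts drawing more reactions rise to the top --
# the kind of amplification the bills aim to regulate.
engagement = sorted(posts, key=lambda p: p["likes"], reverse=True)

print([p["id"] for p in chronological])  # newest first: [3, 2, 1]
print([p["id"] for p in engagement])     # most-liked first: [2, 1, 3]
```

Under a chronological ordering, a sensational post gets no special boost; under the engagement ordering, the same post can be pushed to the top of every feed, which is the behavior lawmakers want platforms to answer for.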
Other bills are more specific. A bill from Representatives Anna G. Eshoo of California and Tom Malinowski of New Jersey, both Democrats, defines dangerous amplification as doing anything to “rank, order, promote, recommend, amplify or similarly alter the delivery or display of information.”
Another bill, drafted by House Democrats, specifies that platforms could be sued only when the amplification in question was driven by a user’s personal data.
“These platforms are not passive bystanders – they are knowingly choosing profits over people, and our country is paying the price,” Representative Frank Pallone Jr., the chairman of the Energy and Commerce Committee, said in a statement when he announced the legislation.
Mr. Pallone’s new bill includes an exemption for any business with five million or fewer monthly users. It also excludes posts that appear when a user searches for something, even if an algorithm ranks them, as well as web hosting and other companies that make up the backbone of the internet.
What content is harmful?
Lawmakers and others have pointed to a wide range of content they consider linked to real-world harm. There are conspiracy theories, which can lead some believers to turn violent. Posts from terrorist groups could prompt someone to carry out an attack, as one man’s relatives argued when they sued Facebook after a member of Hamas fatally stabbed him. Other policymakers have raised concerns about targeted advertising that leads to housing discrimination.
Most of the bills now in Congress address specific types of content. Ms. Klobuchar’s bill covers “health misinformation.” But the proposal leaves it to the Department of Health and Human Services to determine exactly what that means.
“The coronavirus pandemic has shown us how deadly misinformation can be and it is our responsibility to act,” Ms. Klobuchar said when she announced the proposal, which was co-written with Senator Ben Ray Luján, a Democrat from New Mexico.
The legislation proposed by Ms. Eshoo and Mr. Malinowski takes a different approach. It applies only to the amplification of posts that violate three laws: two that prohibit civil rights violations and a third that concerns international terrorism.
Mr. Pallone’s bill is the newest of the group and applies to any post that “materially contributed to serious physical or emotional injury to any person.” That is a high legal bar: emotional distress would have to be accompanied by physical symptoms. But it could cover, for example, a teenager who viewed Instagram posts that diminished her self-worth so much that she tried to hurt herself.
What do the courts think?
Judges have been skeptical of the idea that platforms should lose their legal immunity when they amplify the reach of content.
In the case involving an attack for which Hamas claimed responsibility, most of the judges who heard the case agreed with Facebook that its algorithms did not cost it the protection of the legal shield for user-generated content.
If Congress creates an exemption to the legal shield, and it stands up to legal scrutiny, the courts may have to follow its lead.
But if the bills become law, they are likely to draw significant challenges over whether they violate the First Amendment’s free speech protections.
Courts have ruled that the government cannot condition a benefit to an individual or a company on giving up speech the Constitution would otherwise protect. So the tech industry or its allies could challenge such a law by arguing that Congress had found a backdoor way to limit free speech.
“The question becomes: Can the government directly ban algorithmic amplification?” said Jeff Kosseff, an associate professor of cybersecurity law at the United States Naval Academy. “It’s going to be tricky, especially if you’re trying to say you can’t amplify certain types of speech.”