Rep. Cathy McMorris Rodgers (R-WA) during a House Energy and Commerce Environment and Climate Change Subcommittee hearing on April 2, 2019, on Capitol Hill in Washington, DC.
Zach Gibson | Getty Images
House Republicans are preparing to target the legal shield that protects tech platforms from liability for content their users post.
On Thursday, Republican staff on the House Energy and Commerce Committee circulated a memo proposing several concepts for reforming Section 230 of the Communications Decency Act of 1996, which protects tech platforms from liability both for what their users post and for their own moderation practices.
These concepts include:
- Limiting tech companies’ ability to exclude users based on their beliefs or political affiliations
- Requiring “reasonable moderation” to address harms such as illicit drug sales and child exploitation
- Limiting protected moderation to certain categories of speech, such as speech not protected by the First Amendment
- Removing protection for moderation decisions that discriminate based on viewpoint
All of the concepts are underpinned by three main principles: safeguarding free speech, protecting small businesses to preserve competition, and promoting American technology leadership.
The memo said the proposed legislation would target only “Big Tech companies with $1 billion in annual sales,” suggesting a focus on giants like Amazon, Apple, Google and Facebook.
E&C Republican staff sent the memo to individual Republican committee members and other unspecified interest groups.
Republicans have generally criticized Section 230’s protections for allowing tech platforms to make allegedly biased decisions about which posts to remove, while Democrats have sought to hold platforms more responsible for expanding their content moderation and making their services safer for users.
The memo also addresses more specific proposals, including the following:
Appealing decisions. The memo suggests tech platforms should give users a stronger way to challenge moderation decisions they consider unfair. One concept would require platforms to maintain an easy-to-use complaint process for contesting decisions and to tell users why those decisions were made.
Carving some companies out entirely. The memo suggests removing Big Tech companies from Section 230 protections altogether, so that only smaller businesses and newcomers retain the shield, and stripping protection from companies that use targeted behavioral advertising (the latter resembles a bill proposed by Democrats).
Reauthorization every five years. The staff suggested reauthorizing Section 230 for Big Tech companies every five years, both to keep them cautious and to let lawmakers iterate on the law as the industry evolves.
Transparency. Another set of principles focuses specifically on transparency around Big Tech companies’ content moderation practices, such as requiring them to provide the Federal Trade Commission with detailed descriptions of their policies.
Protecting children. The memo also lays out principles for protecting children online, an issue that came up during the committee’s most recent hearing with several tech CEOs in March. Some concepts would hold companies accountable for the content and ads they show minors, while others would require them to assess how their products affect children’s mental health.
Working with law enforcement. A final set of concepts outlines how Big Tech companies should be required to work with law enforcement agencies. Many already work with law enforcement to report illegal material, but conflicts have arisen when agencies have sought access to encrypted information. Apple, for example, has said it cannot create a so-called law enforcement backdoor without compromising the security of all users. Notably, the memo does not mention encryption.