A leaked memo circulating among Senate Democrats contains a host of authoritarian proposals for regulating digital platforms, purportedly as a way to get tough on Russian bots and fake news.
To save American trust in “our institutions, democracy, free press, and markets,” it suggests, we need unprecedented and iron-fisted government intervention into online press and markets, including “comprehensive (GDPR-like) data protection legislation” of the sort enacted in the E.U.
Titled “Potential Policy Proposals for Regulation of Social Media and Technology Firms,” the draft policy paper—penned by Sen. Mark Warner and leaked by an unknown source to Axios—starts out by noting that Russians have long spread disinformation, including when “the Soviets tried to spread ‘fake news’ denigrating Martin Luther King” (Warner fails to mention that the Americans in charge at the time did the same).
But now it’s somehow magically different and more nefarious because of technology.
“Today’s tools seem almost built for Russian disinformation techniques,” Warner whines. And the ones to come, he assures us, will be even worse.
Here’s how Warner suggests we deal with it:
Mandatory location verification. The paper suggests forcing social media platforms to authenticate and disclose the geographic origin of all user accounts or posts.
Mandatory identity verification:
The paper suggests forcing social media and tech platforms to authenticate user identities and only allow “authentic” accounts (“inauthentic accounts not only pose threats to our democratic process…but undermine the integrity of digital markets”), with “failure to appropriately address inauthentic account activity” punishable as “a violation of both SEC disclosure rules and/or Section 5 of the [Federal Trade Commission] Act.”
Mandatory bot labeling. Warner’s paper suggests forcing companies to somehow label bots or be penalized (no word from Warner on how this is remotely feasible).
Define popular tech as “essential facilities.”
These would be subject to all sorts of heightened rules and controls, says the paper, offering Google Maps as an example of the kinds of apps or platforms that might count. “The law would not mandate that a dominant provider offer the service for free,” writes Warner. “Rather, it would be required to offer it on reasonable and non-discriminatory terms” provided by the government.
Other proposals include:
more disclosure requirements for online political speech, more spending to counter supposed cybersecurity threats, more funding for the Federal Trade Commission, a requirement that companies’ algorithms can be audited by the feds (and this data shared with universities and others), and a requirement of “interoperability between dominant platforms.”
The paper also suggests making it a rule that tech platforms above a certain size must turn over internal data and processes to “independent public interest researchers” so they can identify potential “public health/addiction effects, anticompetitive behavior, radicalization,” scams, “user propagated misinformation,” and harassment—data that could be used to “inform actions by regulators or Congress.”
And—of course—these include further revisions to Section 230 of the Communications Decency Act, recently amended by Congress to exclude protections for prostitution-related content. A revision to Section 230 could provide the ability for users to demand takedowns of certain sorts of content and hold platforms liable if they don’t abide, it says, while admitting that “attempting to distinguish between true disinformation and legitimate satire could prove difficult.”
“The proposals in the paper are wide ranging and in some cases even politically impossible, and raise almost as many questions as they try to answer,” suggested Mathew Ingram, putting it very mildly at the Columbia Journalism Review.
Meanwhile, the EU Parliament has set a course toward a tightly controlled internet, threatening to build a gigantic filter infrastructure and impose serious restrictions on online activity.
The planned EU copyright reform comprises 24 articles. Two of them are especially relevant to the Internet: Article 11 and Article 13. A majority of the European Parliament’s Legal Affairs Committee has voted for versions of these two articles that are likely to do more harm than good. If the plenary follows this vote, Europe will cripple the Internet.
Article 11 would introduce a Europe-wide ancillary copyright for press publishers: any commercial online service that wants to use publishers’ content in digital form would have to pay for it, where “use” can mean anything beyond a mere link.
A similar right has existed in Germany since 2013 and can safely be called a total flop. How the pan-European version is supposed to succeed where the German one failed is anyone’s guess; its vague or missing definitions will produce years of massive legal uncertainty, reports Der Spiegel.
A workable licensing model that companies like Google and Facebook, let alone smaller providers, would actually agree to is nowhere in sight. More likely, publishers’ content will simply appear online less often. Why anyone would want less coverage is a mystery only those who voted for Article 11 can answer.
Technical departure from the presumption of innocence
Article 13 amounts to a genuine paradigm shift. Its goal is to prevent anyone from making copyrighted material available on the internet without permission. On large platforms like Facebook, that will not be possible without automatic upload filters; the mass of uploaded material is far too big for human review. And it is not just about articles, but also videos, sound files, and snippets of code.
Such upload filters, however, mark a departure from the so-called provider privilege, which shields platform operators from liability for copyright infringements on their services, at least as long as they remove infringing material as soon as they learn of it. Article 13 is something like the end of the presumption of innocence for providers and their users.
Instead, large parts of the Internet throughout the EU would be required to install a filter infrastructure. Once it exists, it is only a matter of time before the first politicians call for expanding it; that has been the pattern with every newly introduced instrument of Internet surveillance and control. Anyone who cannot imagine the abuse potential such an infrastructure entails, especially given Europe’s shift to the right, is being politically short-sighted.
Quite apart from that, the whole construct of filters plus blanket license agreements is out of touch with reality. It is easy to predict how reliably automatic filters will distinguish genuine copyright infringement from satire, quotation, and other permitted exceptions: not at all.
Today’s technology simply cannot do it. Look at YouTube, which already runs such a filter: the system, called Content ID, works only on videos and, even after years of work and with Google’s tremendous financial resources behind it, is anything but perfect.
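To see why, consider how fingerprint-style matching works in principle. The sketch below is a deliberately naive text analogue, not Content ID’s actual algorithm (which fingerprints audio and video and is proprietary); every name in it is made up for illustration. The point it demonstrates: a matcher can tell *that* content recurs, but nothing in the match tells it *why*—piracy, quotation, and satire all look the same.

```python
import hashlib
import re

def fingerprints(text, window=8):
    """Hash every run of `window` consecutive words -- a crude text
    stand-in for the audio/video fingerprinting a real filter uses."""
    words = re.findall(r"[a-z]+", text.lower())
    spans = [" ".join(words[i:i + window])
             for i in range(max(1, len(words) - window + 1))]
    return {hashlib.sha256(s.encode()).hexdigest() for s in spans}

def flags_upload(upload, reference, window=8):
    """True if any fingerprint of the upload matches the reference."""
    return bool(fingerprints(upload, window) & fingerprints(reference, window))

reference = ("to be or not to be that is the question whether tis nobler "
             "in the mind to suffer the slings and arrows of outrageous fortune")

full_copy = reference
quotation = ('As Hamlet asks, "to be or not to be, that is the question" '
             "-- a classic, clearly permitted quotation.")

# The filter flags both uploads identically: matching shows that
# content recurs, never whether the reuse is infringing or permitted.
print(flags_upload(full_copy, reference))   # True
print(flags_upload(quotation, reference))   # True
```

Deciding whether a flagged match is infringement requires legal context (license, parody, quotation right) that no hash comparison can supply, which is exactly the gap Article 13’s critics point to.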
Licenses from everyone, for everyone? How would that work?
What a practicable licensing scheme covering every rights holder and every platform would actually look like, probably none of Article 13’s advocates can say. Without such licenses, however, no publisher or rights holder will earn any more on the internet than before.
None of this is settled yet. The committee’s text is expected to reach the plenary at the beginning of July. If EU citizens mount a noticeable protest before then, MEPs may reopen the draft for amendments.
If the text is waved through unchanged, the so-called trilogue between the Commission, the member states, and Parliament can begin—and all three institutions want more or less the same thing. The day to save the Internet from two very bad ideas for European users would then have been today.
James E Windsor, Overpasses News Desk
August 7th, 2018