Orwellian proposals?

An interview with Thomas Lohninger, Executive Director of Epicenter.Works about what Article 13 in its current form could mean for users.

By Joanna Bostock

Back in 2014, the European Union started working on modernising copyright rules to make them more relevant in the Internet age. On 20 June, the Commission’s proposed Copyright Directive will go before a vote in the European Parliament. Experts and digital rights activists are concerned about the effect the proposed rules could have on freedom of expression, saying, for example, that they will “break the Internet”, that the proposals are “Orwellian”, that they will “screw over every creator in every country on any platform” and that they will “limit the freedom to impart & receive information”.

Much of the criticism is focussed on what is known as “Article 13”. I spoke to Thomas Lohninger, Executive Director of Epicenter.Works, to find out what Article 13 in its current form could mean for users:

Can you describe an example of how I, as an internet user, might experience the consequences of Article 13?

So let’s say you want to upload a video of a protest here in Vienna. This video is then sent to a so-called censorship machine which checks it against copyrighted material. So if there is Lady Gaga music playing somewhere in the background, it could be flagged as a copyright infringement and never be published on the Internet. All providers of platforms that host user-generated content would have to establish such censorship machines. That means Wikipedia, e-learning platforms like Moodle, GitHub, Vimeo… All of these platforms would need such mechanisms so that before something is published it is checked against copyrighted material.

So what exactly is a censorship machine?

“Censorship machine” is quite a blunt name for the upload filters that the [proposed] European law foresees. Those machines would need to be centralised, because every creative production – which includes videos and music, but also text – would have to be checked against all copyrighted material, and that of course is something that a small company, a small platform, cannot do. So it would probably send users’ uploads to big upload-filter providers. Google is one of these providers, and therefore this is a tremendously dramatic proposal, because it infringes not only on our freedom of speech but also on our right to privacy. Every cultural production in Europe could soon be tested for copyright infringements by Google even before it is published, and that means they of course know whenever we want to say something.
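
To make that flow concrete, here is a minimal sketch – purely illustrative, with an invented reference database and a local stand-in instead of a real network call to any actual provider – of the pre-publication check described above: the platform forwards each upload (or a fingerprint of it) to a filter service and only publishes if no copyright match is claimed.

```python
# Illustrative only: the database and field names are invented for this sketch.
import hashlib
from dataclasses import dataclass
from typing import Optional


@dataclass
class FilterVerdict:
    matched: bool                        # did the filter claim a copyright match?
    claimed_owner: Optional[str] = None  # who the filter thinks owns the material


# Toy stand-in for the reference database a centralised filter provider would hold.
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"some copyrighted recording").hexdigest(): "Example Rights Holder",
}


def check_with_filter_provider(upload: bytes) -> FilterVerdict:
    """Stand-in for sending the upload (or its fingerprint) to an external filter.

    In the scenario described in the interview this would be a remote call to a
    large provider; here it is simulated with a local hash lookup.
    """
    fingerprint = hashlib.sha256(upload).hexdigest()
    owner = KNOWN_FINGERPRINTS.get(fingerprint)
    return FilterVerdict(matched=owner is not None, claimed_owner=owner)


def handle_upload(upload: bytes) -> str:
    """Gate publication on the filter's verdict, before anything goes online."""
    verdict = check_with_filter_provider(upload)
    if verdict.matched:
        # The content never appears; the uploader may never learn exactly why.
        return f"blocked (claimed by {verdict.claimed_owner})"
    return "published"


if __name__ == "__main__":
    print(handle_upload(b"my protest video with music somewhere in the background"))
```

The point of the sketch is the ordering: the check happens before publication, and the decision rests with whoever maintains the reference database, not with the uploader or the platform.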

It sounds a bit like a plagiarism checker – is that a fair comparison?

In a way it is. It is a little more elaborate than a plagiarism checker but when it comes to text that is basically what it is. It will compare a user’s upload to a database of known text snippets or known literary works. With audio and video it’s a little bit trickier on a technical level, but yes.
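
For the text case, a toy version of that comparison might look like the following (the snippet database, n-gram size and threshold are all invented for illustration): the upload is split into overlapping word n-grams and flagged if enough of them also occur in known works.

```python
# Toy plagiarism-checker-style matcher; real systems are far more elaborate.

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of overlapping n-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


# Hypothetical reference database of known text snippets.
KNOWN_SNIPPETS = [
    "this is a line from a protected literary work used here as a stand in",
]
KNOWN_NGRAMS = set().union(*(ngrams(s) for s in KNOWN_SNIPPETS))


def flags_as_copyrighted(upload_text: str, threshold: int = 3) -> bool:
    """Flag the upload if enough of its n-grams also occur in known works.

    The matcher only measures overlap with the reference database; it has no
    notion of context such as quotation, coincidence or parody.
    """
    overlap = ngrams(upload_text) & KNOWN_NGRAMS
    return len(overlap) >= threshold


if __name__ == "__main__":
    print(flags_as_copyrighted(
        "my blog post quotes a line from a protected literary work used here as a stand in"
    ))  # True: the quoted passage overlaps the reference database
```

Note that such a comparison sees only overlap; whether the overlap is a citation, a coincidence or a parody is invisible to it – a point that comes up again later in the interview.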

So, going back to the example of wanting to upload a video of a protest: if this censorship machine kicks in, what could happen to my upload?

It could either not be published at all, it could be published if the filter does not see it as problematic, or it could be published with someone else’s advertisements placed inside your video. So let’s say that in the video of your protest you have Bilderbuch music in the background. That means that Bilderbuch – or whoever the filter thinks holds the copyright on this material – would have advertisements in the video of this protest.
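
Those three outcomes can be summarised in a small sketch (the type names and the “claimant policy” field are invented for illustration): publish, block, or publish but credit the advertising revenue to whoever the filter believes owns the matched material.

```python
# Illustrative outcome handling for a filter match; names are hypothetical.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Outcome(Enum):
    PUBLISH = "publish"
    BLOCK = "block"
    MONETISE_FOR_CLAIMANT = "publish, ads credited to claimant"


@dataclass
class FilterMatch:
    claimant: str         # whoever the filter thinks owns the matched material
    claimant_policy: str  # "block" or "monetise", chosen by the claimant, not the uploader


def decide(match: Optional[FilterMatch]) -> Outcome:
    """Map a filter result to one of the three outcomes described above."""
    if match is None:
        return Outcome.PUBLISH
    if match.claimant_policy == "monetise":
        # The uploader's protest video now carries someone else's advertisements.
        return Outcome.MONETISE_FOR_CLAIMANT
    return Outcome.BLOCK


if __name__ == "__main__":
    print(decide(None))
    print(decide(FilterMatch(claimant="Bilderbuch (as guessed by the filter)",
                             claimant_policy="monetise")))
```

In every branch the choice is made by the filter and the claimant it identifies; the uploader has no say in which of the three outcomes applies.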

What’s the big picture of what this would cause across the internet and for everybody using it?

This law poses probably the greatest danger to the free and open Internet in Europe that I’ve seen in my career as a digital rights activist. It would mean a shift away from an Internet where you basically have the right to say and do everything and, when there is a problem, only afterwards do we try, with the law, to bring things back to our legal realities. With THIS law you basically hand over control of where our freedom of speech ends and where other rights begin to a machine, probably to a Silicon Valley-based company that offers a tool that detects whether something is illegal or not. This is really a dramatic shift. We already have such filters on YouTube and on Facebook, but on the rest of the Internet we thankfully have more freedom, and exactly those decentralised platforms are at stake here. It’s also why Wikipedia is so up in arms against this law, because for them it really is about all or nothing. Wikipedia, in its whole existence, has only once been asked to establish upload filters, and that was by the Chinese government. Wikipedia refused – that’s why they are now blocked in China. And now the European Union comes up with exactly the same type of censorship obligation? That’s ludicrous.

What is the European Commission trying to do with this directive?

You have to put this into perspective: of course it is a legitimate interest to safeguard the copyright of artists and creators, but this is a very extreme measure. Just as a point of comparison – recently we established an anti-terrorism directive in the European Union, but we did not establish an obligation for every platform to [introduce] upload filters for terrorist content. So why do we do something for copyright that we have not seen as proportionate when it comes to videos of somebody being beheaded? And detecting copyright is in some ways far more difficult technically, because a head that is chopped off is something that can be recognised, whereas art, creativity – that can be everything. This includes not just videos, it also includes music, texts, and whatever forms of creativity are out there. Just as a final point – copyright is not a simple thing. We have exceptions to copyright for parody, for citation, for education, and all of these contexts are invisible to a machine. The censorship machine will not know that this is a parody of some famous Austrian band; it will just see that “oh, there’s copyright for this, so I’d better take it down”.

Who designs the censorship machines?

There are only two implementations out there: one is Content ID from Google (that is used by YouTube) and the other one is a project of European music rights holders. These two would probably remain the only ones in that field because it is actually quite difficult to come up with such elaborate technology, which looks deeply into content that users upload and has a database of each and every copyrighted snippet that’s out there.

So the database of snippets and text – who decides what goes on it? I’m wondering: say a friend of mine writes a certain phrase in response to something in a blog, and I happen to write the same phrase. If you’re talking about “xyz is outrageous”, it’s conceivable that more than one person happens to write the same phrase – would that be caught by a censorship machine?

This is exactly the problem. Particularly when this censorship machine is applied to text, we could end up in a situation where the phrase “Merkel meets Trump” holds a copyright if this law is adopted as it is. Also, we have restrictions from the [Court of Justice of the European Union] under which even the shortest pieces of music could be seen as copyrighted material – “Metall auf Metall” is a very famous case. It is a bottomless pit in a way, because you can never be sure that there is no copyright on something that you are creating, and if we are honest with ourselves, remix is to a certain extent a requirement for any cultural invention. Nobody is such a genius that they can create something that is in no way inspired by what has come before. So we can never be a hundred percent new, and therefore such technology poses great risks. There are already problems with the detection. Everybody in Austria knows Maschek, the famous group that mostly remixes ORF material. For most of their career they were operating illegally, because they did not hold the copyright on their material. Only after they got so famous could they enter into contracts with the ORF and no longer pirate these materials. Actually, up to this day, to my knowledge, they have not managed to really monetise their own content online, because it is the ORF which holds the copyright on these materials. Companies like Google are not good at making detailed distinctions and knowing that there are contracts between those two parties and that therefore Maschek has the rights to monetise that material online. So there will be casualties, and those will be the small artists, the remix artists, and particularly people who use remixes as a form of political speech.

We talked about the honourable intentions of such a change in the European laws. If it’s going to be so destructive, do you think it’s because people who don’t really know what they’re doing are drawing up this regulation, or is it that somebody wants to have much more control?

That’s a good question. I cannot see into the minds of politicians and lobbyists, but I would say it’s a little bit of both. For the rights holders it’s a little bit like with the police: everything that strengthens them is by definition good, and they don’t see the casualties or the fundamental rights of others. When it comes to the politicians – of course, after Brexit and Trump they all see that they are losing control of the online debate and that the new media are far harder to control than the classical media. Exercising more control over what people say online is sadly no longer something only the extremist forces want, and that’s why we are having this debate. It is really about safeguarding the fundamental right to innovation and free speech online.
