Paul Nemitz: ‘Democracy must prevail over technology and business models for the common good’

Published on 16 December 2023
In his contribution to a Voxeurop Live talk on whether AI is an opportunity or a threat for democracy, expert Paul Nemitz emphasises the need for democratic control over technological advancements, and the establishment of laws that prioritise societal interests over corporate interests.

By Voxeurop


Paul Nemitz is a senior advisor to the European Commission's Directorate-General for Justice and a professor of law at the Collège d'Europe. Considered one of Europe's most respected experts on digital freedom, he led the work on the General Data Protection Regulation. He is also the author, with Matthias Pfeffer, of The Human Imperative: Power, Freedom and Democracy in the Age of Artificial Intelligence, an essay on the impact of new technologies on individual liberties and society.

Voxeurop: Would you say artificial intelligence is an opportunity or a threat for democracy, and why? 

Paul Nemitz: I would say that one of the big tasks of democracy in the 21st century is to control technological power. We have to take stock of the fact that power needs to be controlled. There are good reasons why we have a legal history of controlling the power of companies, states and executives. This principle certainly also applies to AI.

Many, if not all, technologies carry an element of opportunity but also risks: we know this from chemicals or atomic power, which is exactly why it is so important that democracy takes charge of framing how technology is developed, in which direction innovation should go, and where the limits of innovation, research and use should lie. We have a long history of limiting research, for example on dangerous biological agents, genetics or atomic power: all of this was tightly framed, so there is nothing unusual about democracy looking at new technologies like artificial intelligence, thinking about their impact and taking charge. I think it's a good thing.

So in which direction should AI be regulated? Is it possible to regulate artificial intelligence for the common good and if so, what would that be?

Paul Nemitz: First of all, it is a question of the primacy of democracy over technology and business models. In a democracy, what the common interest looks like is decided precisely through the democratic process. Parliaments and lawmakers are the place to decide on the direction the common interest should take: the law is the most noble speaking act of democracy.

A few months ago, speaking about regulation and AI, some tech moguls wrote a letter warning governments that AI might destroy humanity if there were no rules, and asking for regulation. But many critical experts, like Evgeny Morozov and Christopher Wylie in two stories we recently published, say that by wielding the threat of AI-induced extinction, those tech giants are actually diverting the attention of the public and governments from the current issues with artificial intelligence. Do you agree with that?

We have to look both at the immediate challenges of today's digital economy and at the challenges to democracy and fundamental rights: power concentration in the digital economy is a current issue. AI adds to this power concentration: the big corporations bring all the elements of AI, such as researchers and start-ups, together into functioning systems. So we face an immediate challenge today, coming not only from the technology itself but also from the way it adds to power concentration.

And then we have long-term challenges, but we have to look at both. The precautionary principle is part of innovation in Europe, and it's a good part. It has become a principle of legislation and of primary law in the European Union, forcing us to look at the long-term impacts of technology and their potentially terrible consequences. If we cannot exclude with certainty that these negative consequences will arise, we have to make decisions today to make sure that they don't. That is what the precautionary principle is about, and our legislation also partially serves this purpose. 

Elon Musk tweeted that there is a need for comprehensive deregulation. Is this the way to protect individual rights and democracy?

To me, those who were already writing books saying AI is like atomic power before putting innovations like ChatGPT on the market, and only afterwards calling for regulation, didn't draw the consequences from their own analysis. Think of Bill Gates, Elon Musk, or Microsoft's president Brad Smith: they were all very clear about the risks and opportunities of AI. Microsoft first bought a big part of OpenAI and put it on the market to cash in a few billion before going out and saying “now we need laws”. But, if taken seriously, the parallel with atomic power would have meant waiting until regulation was in place. When atomic power was introduced in our societies, nobody had the idea of operating it without these regulations being established. If we look back at the history of legal regulation of technology, there has always been resistance from the business sector. It took ten years to introduce seatbelts in American and European cars; people were dying because the car industry lobbied so successfully, even though everybody knew that deaths would be cut in half if seatbelts were introduced.

So I am not impressed if some businessmen say that the best thing in the world would be not to regulate by law: this is the wet dream of the capitalists and neoliberals of our time. But democracy actually means the opposite: in a democracy, the important matters of society, and AI is one of them, cannot be left to companies and their community rules or self-regulation. Important matters in democratic societies must be dealt with by the democratic legislator. This is what democracy is about.

I also believe that the idea that all the problems of this world can be solved by technology, as we heard from ex-President Trump when the US left the Paris climate agreement, is wrong in climate policy as well as in all the big issues of this world. The coronavirus has shown us that rules of behaviour are key. We have to invest in being able to agree on things: the scarcest resource today for problem-solving is not the next great technology, whatever the ideological talk. The scarcest resource today is the ability and willingness of people to agree, within democracies and between countries. Whether in the transatlantic relationship, in international law, or between parties at war who must come to peace again, this is the greatest challenge of our times. And I would say those who think that technology will solve all problems are driven by a certain hubris.

Are you optimistic that regulation through a democratic process will be strong enough to curtail the deregulatory push of lobbyists?

Let's put it this way: in America, the lobby prevails. If you listen to the great constitutional law professor Lawrence Lessig on the power of money in America, and his analysis of why no law curtailing big tech comes out of Congress anymore, money plays a very serious role. In Europe we are still able to agree. Of course the lobby is very strong in Brussels too, and we have to talk about this openly: the money big tech spends, and how it tries to influence not only politicians but also journalists and scientists.

There is a GAFAM culture of trying to influence public opinion, and in my book I have described their toolbox in some detail. They are very present, but I would say our democratic process still functions, because our political parties and our members of Parliament are not dependent on big tech's money the way American parliamentarians are. I think we can be proud of the fact that our democracy is still able to innovate, because making laws on these cutting-edge issues is not a technological matter; it goes to the core of societal issues. The goal is to transform these ideas into laws which then work the way normal laws work: there is no law which is perfectly enforced. This too is part of innovation. Innovation is not only a technological matter.

One of the big leitmotifs of Evgeny Morozov's take on artificial intelligence and big tech in general is pointing out solutionism, what you mentioned as the idea that technology can solve everything. Currently the European Union is discussing the AI Act, which should regulate artificial intelligence. Where is this regulation heading, and do we know to what extent the tech lobby has influenced it? We know that it is the largest lobby in terms of budget within the EU institutions. Can we say that the AI Act is the most comprehensive law on the subject today?

In order to have a level playing field in Europe we need one law; we don't want 27 different laws in the member states, so it is a matter of equal treatment. I would say the most important thing about the AI Act is that we once again establish the principle of the primacy of democracy over technology and business models. That is key. For the rest, I am very confident that the Council and the European Parliament will be able to agree on the final version of this law before the next European election, so by February at the latest.

Evgeny Morozov says that what worries most experts is the rise of artificial general intelligence (AGI), basically an AI that doesn't need to be programmed and might thus behave unpredictably. However, supporters like OpenAI's founder Sam Altman say that it might turbocharge the economy and “elevate humanity by increasing abundance”. What is your opinion on that?

First, let's see if all the promises made for specialised AI are really fulfilled. I am not convinced, and it is unclear when the step to AGI will occur. Stuart Russell, author of “Human Compatible: Artificial Intelligence and the Problem of Control”, says AI will never be able to operationalise general principles like constitutional principles or fundamental rights. That is why, whenever a decision of principle or of value has to be made, programs have to be designed in such a way that they circle back to humans. I think this thought should guide us, and those who develop AGI, for the time being. Russell also believes decades will pass before we have AGI, but he draws a parallel with the splitting of the atom: many very competent scientists said it wasn't possible, then one day a scientist gave a speech in London saying so, and the very next day it was shown to be possible after all. So I think we have to prepare for this, and more. There are many fantasies out there about how technology will evolve, but the important thing is that public administrations, parliaments and governments stay on course and watch this very carefully.

We need an obligation of truth from those who are developing these technologies, often behind closed doors. There is an irony in EU law: in competition cases we can impose a fine if big corporations lie to us. Facebook, for example, received a fine of more than €100 million for not telling us the full story about its takeover of WhatsApp. But there is no duty of truth when the Commission consults in the preparation of a legislative proposal, or when the European Parliament consults to prepare its legislative debates and hearings. There is unfortunately a long tradition of digital businesses, like other businesses, lying in the course of this process. This has to change. What we need is a legal obligation of truth, which also has to be sanctioned. We need a culture change, because we are increasingly dependent on what these companies tell us. And if politics depends on what businesses say, then we must be able to hold them to the truth.

Do these fines have any impact? Even if Facebook is fined one billion dollars, does that make any difference? Do they start acting differently? What does it mean for them in terms of money or impact? Is that all we have?

I think fining is not everything, but we live in a world of huge power concentration and we need counterpower. And that counterpower must lie with the state, so we must be able to enforce all laws, if necessary with a hard hand. Unfortunately these companies largely react only to a hard hand. America knows how to deal with capitalism: people go to prison when they create a cartel or agree on prices; in Europe they don't. So I think we have to learn from America in this respect. We must be ready and willing to enforce our laws with a hard hand, because democracy means that laws are made, and democracy also means that laws are complied with. And there can be no exception for big tech.

Does that mean we should be moving towards a more American way?

It means we must take enforcing our laws seriously, and unfortunately this often makes it necessary to fine. In competition law we can fine up to 10% of a big company's overall turnover; I think that has an effect. In privacy law it is only 4%, but I think these fines still have the effect of motivating board members to make sure that their companies comply.

This being said, it is not enough: we must remember that in a democratic society, counterpower comes from citizens and civil society. We cannot leave individuals alone to fight for their rights in the face of big tech. We need public enforcement, and we need to empower civil society to fight for the rights of individuals. I think this is part of controlling the power of technology in the 21st century, and it will guide innovation. It is not an obstacle to innovation: it guides innovation towards the public interest and middle-of-the-road legality. And that's what we need! We need the big, powerful tech companies to learn that it is not a good thing to move fast and break things if “breaking things” means breaking the law. I think we are all in favour of innovation, but it undermines our democracy if we allow powerful players to disrupt, break the law and get away with it. That is not good for democracy.

Thierry Breton, the European commissioner for the internal market, has written a letter to Elon Musk telling him that if X continues to favour disinformation it might face sanctions from the EU. Musk replied that in that case X might leave Europe, and that other tech giants might be tempted to do the same if they don't like the regulation that Europe is setting up. So what is the balance of power between the two?

I would say it's very simple; I'm a very simple person in this respect: democracy can never be blackmailed. If they try to blackmail us, we should just laugh them off. If they want to leave, they are free to leave, and I wish Elon Musk good luck on the stock exchange if he leaves Europe. Fortunately we are still a very big and profitable market, so if he can afford to leave: goodbye, Elon Musk, we wish you all the best.

What about the danger of the unconventional use of AI?

Yes, “unconventional” meaning use in war. Of course that is a danger. There is work on this in the United Nations, and weapons getting out of control are a problem for everyone who understands security and how the military works: the military wants to have control over its weapons. In the past, countries signed multilateral agreements not only on the non-proliferation of atomic weapons, but also on small arms and on weapons which get out of control, like landmines. In the common interest of the world, of humanity and of governability, we need progress on rules for the use of AI for military purposes. These talks are difficult; they can take years, in some cases even decades, to come to agreement. But eventually we certainly need rules for autonomous weapons, and in this context also for AI.

To go back to what Christopher Wylie said in the article we mentioned: the current regulatory approach does not work because “it treats artificial intelligence like a service, not like architecture”. Do you share that opinion?

I would say that the bar for what works and what doesn't, and for what is considered to be working, should not be higher in tech law than in any other field of law. We all know that we have tax laws and we try to enforce them as well as we can, but there are many people and companies who get away with not paying their taxes. We have intellectual property laws, and they are not always obeyed. Murder is severely punished, yet people are murdered every day.

So in tech law we should not fall into the trap of the tech industry's discourse according to which “we would rather have no law than a bad law”, a bad law being one that cannot be perfectly enforced. My answer to that is: there is no law which works perfectly, and there is no law which can be perfectly enforced. But that is not an argument against having laws. Laws are the most noble speaking act of democracy, and that means they are a compromise.

They are a compromise with the lobby interests which these companies carry into Parliament, and which are taken up by some parties more than by others. And because laws are compromises, they are perfect neither from a scientific perspective nor from a functional one. They are creatures of democracy, and in the end I would say it is better that we agree on a law even if many consider it imperfect. In Brussels we say that if at the end everyone is screaming, businesses saying “this is too much of an obstacle to innovation” and civil society calling it a lobby success, then we have probably got it more or less right, in the middle.

👉 Watch the video of the Voxeurop Live with Paul Nemitz here.
This article was produced as part of Voxeurop's participation in the Creative Room European Alliance (CREA) consortium led by Panodyssey and supported by funding from the European Commission.