Speech by the Minister for Digitalisation, Alexandra van Huffelen, at 'AI and the Future of Europe', Brussels, 30 March 2022

Ladies and gentlemen,

Let me start by thanking MEP Dragoș Tudorache and his team for organizing this event today. And thanks to EP President Metsola, NATO Deputy Secretary General Geoană, EC Vice-Presidents Jourová and Vestager and the other speakers for making this an inspiring meeting.

As I prepared for today, my mind went back to the past two years, when I served as State Secretary for Benefits and Customs. In that role I was responsible for helping to resolve the so-called childcare benefits scandal. More than 25,000 people were wrongly accused of making fraudulent benefit claims. Many of them ended up in deep poverty because of the thousands of euros they were obliged to pay back to the tax office.
Every week I sat with ten of the people affected. Those meetings were deeply touching. The lives of these people were devastated: imagine losing your job, your health, your home, your marriage, or even having your children taken into care. All because of a terrible combination of errors in judgment by their government.

And yes, AI had a part to play in that combination. So when I think of AI and democracy, the faces and the voices of these people immediately come to mind. To these people, their government was not trustworthy, and their democratic rights were trampled on, with immense consequences.

On a broader note, we see AI becoming an increasingly important factor in the information we see and spread. There is a danger in this: a risk of filter bubbles, polarization and even radicalization. Democracy and open debate should be safeguarded online; you should get a variety of information so you can form your opinion freely. The digital age offers many opportunities to strengthen democracy, to enable people to express themselves and to be included in the democratic process. But this will not happen automatically. Human-centric AI is not a given. The development and application of AI is never value-free. At the state level, we see artificial intelligence being used effectively in information warfare.

On a business level, we've seen large companies use AI to hire people, only to find out later that it discriminated against women and recommended unqualified candidates. Consumer behavior can be manipulated by AI in ways that are detrimental to consumers. We must be more realistic about the downsides of digitalization and make sure that the digital transition is always human-centric and based on our democratic values and rights. And we must take a good look at the effects of AI on our democratic process: the free flow of information, safe public spaces, public discourse that is inclusive and based on facts, and a balance of power in the digital market.

All these principles should be upheld online just as they are offline. I believe that the best place to start is here: in Europe. Within the European Union as an economic and democratic community.

But also within the broader community of the Council of Europe. Together we should set the worldwide standard for the online protection of democracy, human rights and the rule of law. To do this, we need transparency and effective control over the use of AI, and we must ensure that citizens have somewhere to go if things seem to go wrong.

I see three practical ways to get there:

First, let us quickly proceed with the legislative package presented by the European Commission. It contains powerful tools to address disinformation and the imbalance of power, and to make sure AI systems are developed in a fair and transparent way. But I believe it can be improved, especially with the Dutch childcare benefits case in mind. In the end, it was the combination of four factors that led to so much harm:

1) Discriminatory assumptions were used as input for the tax office's risk assessment;
2) The algorithm turned out to be biased, especially discriminating against women and people with an immigration background;
3) There was no human verification afterwards;
4) People who protested against the judgment made against them weren't heard. Even the highest court ruled against them.

In short: AI worked as an amplifier. It was part of the problem, but not the heart of the problem. For me, these lessons should be reflected in the legislation, offering solid guarantees for human rights:

1) Certification is crucial. All high-risk AI systems should be certified before they are put to use. Personally, however, I would like to see all AI systems certified, in order to protect citizens.
2) Impact assessments should be meaningful and address fundamental rights across the complete AI lifecycle. The AI Act asks developers to assess certain risks, and privacy legislation asks deployers to assess privacy risks in particular. But is this enough? In the Netherlands we have already developed a broader Impact Assessment tool on Human Rights and algorithms. It asks all parties involved in the entire product cycle of an AI system to consider fundamental rights risks.
3) Natural persons and their rights should play a bigger role across the board. Those impacted by AI systems need clear information about the systems used, by governments especially. In line with this, I would like to see a legal right of complaint for citizens and consumers. Either in the AI Act or in another way, as long as we make sure people and organizations have somewhere to go to seek justice.
4) But human-centric AI requires more. Not just protecting people by demanding more transparency or making sure they have somewhere to go, but also in the use of AI systems. There should always be a human factor or human decision involved, especially when systems have a big influence on people and the judgments made about them.

These factors will strengthen the quality of legislation and its enforcement, both at the EU level and in member states. In the Netherlands we have already taken two additional steps: we are in the process of establishing a so-called Algorithm Watchdog, and we are expanding the budget of the Dutch Data Protection Authority (DPA).

Second, we have to make sure Big Tech takes responsibility for its products and platforms. Polarizing algorithms are not okay: we need more access to data for independent research and supervision. Recommender systems and other systemic risks such as deep fakes need to be managed better by the platforms. We're taking a big step forward with the Digital Services Act, and we can do the same with the AI Act.

However, in the short term this needs to be embedded in the new Code of Practice. I call on the signatories of the code to include commitments on recommender systems and the labelling of content that is generated or manipulated by an AI system.

Besides, we have to make sure there are safe and viable alternatives. Privacy-friendly messaging apps like Signal, for example. And in the Netherlands a number of public organizations have joined hands in creating PublicSpaces, which aims to reclaim the online domain from the dominance of Big Tech platforms. Their ambition is to create an internet 'for the common good', based on public values like privacy, autonomy and transparency. Their slogan doesn't mince words. It says: 'We want the internet back.'

Third and last, governments need to lead by example in order to strengthen trust in democracy. What we've learned from the childcare benefits case is that strong legislation and enforcement are needed: with clear prohibitions and duties, applied at every level of government and supported by practical instruments. An example of these instruments is the Dutch Impact Assessment tool on Human Rights and algorithms, which I mentioned before. Another is the new Dutch guidelines for non-discrimination by design, which help us develop systems with a minimized risk of discrimination.

And the Dutch Civic AI Lab is an important scientific initiative by two Dutch universities, aiming for the development of AI that strengthens the autonomy of citizens within a democratic society.

Ladies and gentlemen, the most valuable lesson the childcare benefits scandal has taught me is this: digital technology should always serve people, not the other way round. In the end it is about people and their fundamental rights.
I believe we can put AI to good use in serving our citizens, by applying it to important societal issues like health or climate change. Severe diseases can be more easily diagnosed through apps like Skinvision.

Governments can offer their citizens better and more transparent services using AI. But the digital world will only thrive if people are protected online just as they are offline. If democracy and the rule of law are upheld in the digital world as well as in the physical world. I am deeply motivated to keep people at the heart of the matter, because it is the only way to maintain trust in technology and democracy. So let us work together on this.

Thank you very much.