Call for fair, trustworthy AI in the public sphere

Author: Karel Feenstra
Governments are using artificial intelligence (AI) on an ever-increasing scale and for diverse purposes, from streamlining traffic flows to tackling complex social challenges such as the energy transition, poverty and climate change. When large amounts of data are analysed, this generates new information that can be used as the basis for policy decisions. That makes AI a great resource – and a potential source of risk. Statistics Netherlands (CBS) takes an active interest in developments surrounding AI, both within the Netherlands and around the world.


AI is everywhere. Its influence on the pace and quality of new developments in the public arena is only increasing. AI can help improve and accelerate processes and reveal patterns in data, giving us a clearer picture of complex issues. But there is a downside. CBS Innovation Manager Barteld Braaksma is fully aware of how delicate digital processes can be. ‘When you’re working with AI, it’s crucial to use accurate data, to develop the training set – the algorithm’s “textbook” – in a transparent, neutral way, and to check the results for errors. If a small error gets into the algorithm’s learning process when large amounts of data are involved, it can have a big impact on the results. The important thing is to make sure the system is transparent and can be explained. To achieve a fair algorithm, the training data also has to be representative.’
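The representativeness check Braaksma mentions can be made concrete. The sketch below is a minimal, hypothetical illustration (not a CBS tool): it compares how large each group is in the full population with how large it is in a training set, so that under-represented groups stand out before a model is trained on the data.

```python
from collections import Counter

def representation_gap(population_labels, training_labels):
    """Compare group shares in a training set against the full population.

    Returns a dict mapping each group to a (population share, training share)
    pair. A large gap between the two shares signals that the training set
    is not representative for that group.
    """
    pop = Counter(population_labels)
    train = Counter(training_labels)
    n_pop, n_train = len(population_labels), len(training_labels)
    return {
        group: (pop[group] / n_pop, train.get(group, 0) / n_train)
        for group in pop
    }

# Hypothetical example: region labels for a population vs. a sampled training set.
population = ["north"] * 500 + ["south"] * 500
training = ["north"] * 80 + ["south"] * 20

gaps = representation_gap(population, training)
# "south" makes up 50% of the population but only 20% of the training set,
# so a model trained on this sample would see the south far less often.
```

A real audit would of course use actual register data and more than one attribute, but the principle is the same: make the training set's composition explicit and comparable before trusting what the algorithm learns from it.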

Neutral, fair algorithms

The development of the worldwide internet was left in the hands of private entrepreneurs for a long time. In recent years there has been a growing realisation that the digital technosphere is also a public space, a space that needs to be regulated using legislative means. The government is increasingly investing time and money in this field. Resources from the Growth Fund have been allocated to the Netherlands AI Coalition to harness the potential of AI to benefit both Dutch society and the Dutch economy. ‘Within the Netherlands AI Coalition we work with partners such as the University of Amsterdam, Utrecht Data School, TU Delft and the Association of Dutch Municipalities,’ says Braaksma. ‘We’re building a digital toolbox to create neutral, fair algorithms. CBS is also engaged in international cooperation with many European partners in diverse networks and consortia such as TAILOR.’


The TAILOR network

TAILOR is a European partnership of governments, knowledge institutes and businesses, encompassing 55 partners from all parts of the continent. TAILOR aims to use a joint research and development programme to create a new standard for trustworthy, fair AI, a standard that places human values squarely at its heart. Europe is keen to use this initiative to lead the world in the development of responsible, humane digital technology. The Netherlands Organisation for Applied Scientific Research (TNO) has also joined the TAILOR consortium. As TNO’s Freek Bomhof explains, ‘AI technology is developing rapidly, and although Europe is engaged in that technological field, it is not a leader. Where Europe does lead is in research and initiatives that focus on ethical applications for AI. The need for legal frameworks to protect users is a bigger political priority here. That has already resulted in the General Data Protection Regulation (GDPR), and the AI Act that is currently under discussion at the European Commission will represent another step forward.’ Still, legislation and regulation can only do so much.

Responsible AI

For an ethical framework to be effective, the general public must also have a certain level of awareness and basic knowledge of AI. ‘But that doesn’t have to mean training everyone to be a programmer,’ says Bomhof reassuringly. Neither is he afraid of dystopian scenarios where humans go head to head with technology. ‘It’s a bit like road traffic. Just when New York was about to be swamped by manure because of the huge numbers of horse-drawn carts in the city, a new technological innovation appeared that did away with that problem: the car. At that time they didn’t know about the downsides of that mode of transport, such as air pollution. Now we do have that basic knowledge, and that’s important when we make decisions about things like the environment and climate. In the same way, it’s important for users of AI – and that’s all of us – to have a certain level of insight into how AI works, so we can understand the implications of using it. The recent scandal surrounding the Dutch Tax Authority’s mismanagement of child benefit payments wasn’t just the fault of the algorithm: there were real people pressing the buttons. Technology plays a role, but so do the operating processes themselves.’

Trustworthy digital public space

Towards the end of 2021, TAILOR collaborated with the CLAIRE research platform and the VISION project to organise a workshop about the use of AI in the public sector. The goal was to stimulate a wide-ranging conversation that would incorporate many different perspectives. Braaksma and Bomhof view the initiative as a success. ‘Building transparent, trustworthy AI is also a matter of demystification,’ Bomhof explains. ‘With an algorithm register, with open infrastructure for SMEs in every European member state, slowly but surely we are building a trustworthy digital public space. The whole process bears Europe’s distinctive signature for AI in the public domain: the diversity and complexity of the continent, and a greater emphasis on human values. The individual is paramount. Those are the values at the core of our “roadmap for trustworthy AI”. It’s not necessarily the easiest way, but in the end it will deliver the best results for society.’