CBS carefully considers questions about AI

Author: Masja de Ree
Image: Glass fibre cables for computers (© Hollandse Hoogte / Westend61 GmbH)
Artificial intelligence (AI) is everywhere. Search engines and computer programs use it to generate texts, it is built into algorithms that select interview candidates, and it is even used to estimate the likelihood that someone will drop out of school. But there are risks associated with AI, which is why Statistics Netherlands (CBS) treats AI-related questions from external partners with great care. CBS has established an AI policy to clarify its considerations.

AI policy framework

CBS produces statistics about AI, for example about its use within enterprises and the different kinds of AI technology deployed. At the request of the Dutch authorities, CBS is currently looking into opportunities to collect more statistical information on AI. Other government organisations also consult CBS about the use of AI, including the data sources that feed their internal AI algorithms. In addition to the existing framework that governs the production of CBS’ own statistics on AI, CBS has now established an AI policy framework to answer questions from external partners about how they can best take advantage of AI. According to CBS policy adviser Kim van Ruler, ‘The AI policy makes it clear which AI-related questions CBS can and cannot answer, both for ourselves and for our external partners. We want to be transparent about the risks as well as the opportunities.’

Self-learning algorithm

CBS never accepts requests that could involve tracing individual people, or that concern information on identifiable individuals or enterprises. ‘That’s considered administrative use of data, and we cannot do that given our legal constraints,’ Van Ruler states. ‘But there is a big grey area when organisations want our help with using AI algorithms and the associated data.’ AI is an umbrella term, says Joost Huurman, Director of Research and Development at CBS. ‘The AI policy mainly deals with AI-related situations that involve self-learning algorithms.’ These are algorithms that derive new rules of their own from the input – the data – they receive. ‘The danger is that bias in the data – for example when a specific group is overrepresented – can lead to a negative spiral. In turn, that can lead to unjustified conclusions about certain groups of people. You have to really watch out for that and stay alert.’
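
To make that negative spiral concrete, here is a minimal, hypothetical sketch in Python. Everything in it is invented for illustration – the groups, the inspection rule, the numbers – and it is not CBS code: two groups break a rule equally often, but group A is overrepresented in the historical records, and an inspection heuristic that ‘learns’ from those records keeps reinforcing the imbalance.

    # Illustrative only: how biased records can feed a self-reinforcing loop.
    import random

    random.seed(1)

    def learned_shares(records):
        """Learn each group's share of past violation records."""
        return {g: records.count(g) / len(records) for g in ("A", "B")}

    TRUE_VIOLATION_RATE = 0.10          # identical for both groups
    BUDGET = 200                        # inspections per round

    records = ["A"] * 70 + ["B"] * 30   # round 0: group A is overrepresented

    for round_no in range(6):
        shares = learned_shares(records)
        # Concentrating heuristic: inspection effort proportional to the
        # *square* of the learned share ("go where we found the most").
        weight = {g: shares[g] ** 2 for g in shares}
        total_weight = sum(weight.values())
        for g in ("A", "B"):
            inspections = round(BUDGET * weight[g] / total_weight)
            hits = sum(random.random() < TRUE_VIOLATION_RATE
                       for _ in range(inspections))
            records.extend([g] * hits)
        print(f"round {round_no}: learned share of A = {shares['A']:.2f}")

Run after run, the learned share of group A drifts upward even though the two groups behave identically – exactly the kind of unjustified conclusion Huurman warns about.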

Conscientious and helpful

‘It’s very important to CBS to approach new AI-related developments in a responsible way,’ says Van Ruler. ‘But we also want to be helpful, if that’s possible and if it will benefit society.’ One example of a question in that grey area came from the Dutch Central Agency for the Reception of Asylum Seekers (COA), which wanted to explore whether an algorithm could be used to find suitable housing for status holders in the Netherlands. ‘That was too much like administrative use,’ says Huurman, ‘so we didn’t accept that request. But what we could do was advise the COA on their dataset – a good dataset is essential if you want an algorithm to be fair. The intended purpose of the algorithm was positive: to provide proper accommodation. That was relevant to our decision.’
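
What ‘advising on a dataset’ can look like in practice is sketched below, in a deliberately simple form: a representativeness check that compares group shares in a dataset against known population shares. The group names, the numbers, and the five-percentage-point threshold are all invented for illustration; this is not the COA dataset or an actual CBS tool.

    # Illustrative only: flag groups whose share in the dataset deviates
    # from their share in the population by more than a chosen threshold.
    population_share = {"families": 0.40, "single adults": 0.45, "minors": 0.15}
    dataset_counts = {"families": 620, "single adults": 310, "minors": 70}
    THRESHOLD = 0.05  # five percentage points, chosen arbitrarily here

    total = sum(dataset_counts.values())
    for group, pop_share in population_share.items():
        data_share = dataset_counts[group] / total
        flag = ("  <- possible under/overrepresentation"
                if abs(data_share - pop_share) > THRESHOLD else "")
        print(f"{group:14s} dataset {data_share:.2f} "
              f"vs population {pop_share:.2f}{flag}")

A real fairness review would of course go much further, but even a check like this makes visible whether an algorithm would be learning from a skewed picture of the population.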

‘The AI policy makes it clear which AI-related questions CBS can and cannot answer, both for ourselves and for our external partners’

Three considerations

The AI policy rests on three considerations. When an organisation submits an AI-related request, CBS first determines which role it is being asked to play: adviser or assessor? ‘We don’t assess data and algorithms that come from external partners,’ Van Ruler explains. ‘That’s not our job.’ The second consideration is whether the request concerns the data to be used or the algorithms themselves. CBS can advise on most data-related matters; whether it can advise on algorithms depends on the context. Finally, CBS identifies the type of algorithm: is it intended to predict something or to explain something? As CBS does not make predictions [apart from those relating to population growth – Ed.], most requests relating to predictive algorithms will not be accepted, but explanatory algorithms are a different matter. ‘There’s a level of nuance,’ says Huurman. ‘Take an algorithm that distils risk factors from a neighbourhood with a high poverty rate, for example. That is an explanatory algorithm. The problem is that you can often switch that research around, and predict poverty based on risk factors that are present in a neighbourhood. So we are always very conscientious when we make decisions.’ CBS’ AI policy includes many examples of what is and is not allowed. ‘And if it’s really unclear how we should approach the issue,’ Van Ruler adds, ‘CBS has an ethics committee that considers the request.’
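
Huurman’s ‘switch-around’ can be made concrete with a small, hypothetical sketch using Python and scikit-learn. The neighbourhoods, risk factors, and numbers below are invented; the point is only that one and the same fitted model supports both an explanatory reading (inspecting its coefficients) and a predictive one (scoring a neighbourhood it has never seen).

    # Illustrative only: the same model, read two ways.
    from sklearn.linear_model import LogisticRegression

    # Rows are neighbourhoods; columns are invented risk factors:
    # unemployment rate, share of low-income households, share of
    # early school leavers.
    X = [[0.12, 0.30, 0.08],
         [0.04, 0.10, 0.02],
         [0.15, 0.35, 0.10],
         [0.05, 0.12, 0.03],
         [0.11, 0.28, 0.09],
         [0.03, 0.08, 0.01]]
    y = [1, 0, 1, 0, 1, 0]  # 1 = high-poverty neighbourhood

    model = LogisticRegression().fit(X, y)

    # Explanatory reading: which factors are associated with poverty?
    for name, coef in zip(["unemployment", "low income", "school leavers"],
                          model.coef_[0]):
        print(f"{name}: coefficient {coef:+.2f}")

    # The switch-around: the identical model now *predicts* poverty for an
    # unseen neighbourhood – the predictive use CBS is wary of.
    new_neighbourhood = [[0.10, 0.25, 0.07]]
    print("predicted poverty probability:",
          round(model.predict_proba(new_neighbourhood)[0][1], 2))

Nothing about the model changes between the two readings; only the question put to it does, which is why CBS weighs the context of each request rather than the technology alone.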

Accountable and transparent

Citizens and organisations need to be able to see how government organisations work and which processes and methods underpin their decisions, and CBS’ AI policy is no exception. AI technology is developing rapidly, and CBS is actively engaged with both the opportunities and the risks that arise from that development. On a policy level, CBS is seeking to understand its role in that process, and in practice it is looking into how AI can be used in an accountable, transparent way. ‘And we want that to be clearly visible,’ says Van Ruler. ‘The most important thing is for us to be able to explain our decisions. If an issue or its solution is so complex that we can no longer do that, we don’t get involved.’ For CBS’ partners, the new policy framework will change little in practice. ‘We’ve always approached requests with great care,’ says Huurman. ‘Now we can be clearer and more explicit, which is all to the good when you’re in a partnership. That way our partners know where they stand: there’s a lot we can do, but we are neither willing nor permitted to do absolutely everything.’