Data ethics high on the agenda at CBS

Author: Miriam van der Sangen
On 6 March 2024, the fourth IBDS Café was held at Statistics Netherlands (CBS). Around 100 participants – including representatives from municipalities, provinces and ministries – came together to discuss data ethics, with an agenda covering the ethical aspects of a wide range of data issues. This is a crucial area for CBS and part of its day-to-day work as a governmental institution. Given the importance of data ethics, CBS established its own ethics committee in December 2020; the committee meets every two weeks to review external requests for new statistical research.

Data ethics

The chair of the CBS ethics committee is Erik Bruinsma, Director of Strategy and Management Advice. ‘The reason for setting up an ethics committee at CBS was to address the following dilemma: data is enabling us to do ever more, but just because we have the technology to do something, it doesn’t mean we should do it. Questions around data ethics go much wider than just CBS and involve other parts of central government, as well as local government. There has been a lot of optimism about what is possible today using data, but it’s important to reflect on the implications of those possibilities too.’

Core values

CBS’s ethics committee consists of eight CBS staff members who each have expertise in a different field: policy, legal, methodology, communications, economic statistics and social statistics. ‘We review dozens of requests for new statistical research every year, based on these different perspectives. We always start out from CBS’s core values: reliable, objective and society-oriented. We ask ourselves: is there an ethical dilemma here? Do we see any risk of stigmatisation? Or could particular social groups be harmed? In short, what values are at stake? We look at a diverse set of areas, ranging from migration to healthcare, and from police and justice to youth services.’

Complex case work

The members of the ethics committee are currently all CBS staff members, but every six months the committee is joined by two external members to look back on the cases that it has handled: what went well and where could improvements be made? Erik Bruinsma: ‘This year a professor from outside the organisation has joined our ethics committee on a more permanent basis. Her name is Mariette van den Hoven, and she is Professor of Ethics, Law & Medical Humanities at UMC Amsterdam. She has 30 years of experience working on ethical issues, and she joins us to consider more complex cases.’ The ethics committee meets frequently – once every fortnight. ‘We need to meet frequently to provide timely advice. In cases where we issue negative advice, it’s important to communicate this to the relevant applicant quickly.’

Unrest during pandemic curfew

Bruinsma mentions two examples of issues that the ethics committee has handled in recent years. ‘Many people will remember the civil disturbances during the curfew that was in place for a time during the COVID-19 pandemic. Those disturbances involved a small and identifiable group of individuals in very specific locations. We were asked by the police to look into the background characteristics of the group involved, but we issued negative advice regarding that request. The first reason was that the group of suspects was small, so the risk of their identities being traced was too high. The second reason was that the police’s intent with respect to their request was unclear to us.’

Aftermath of childcare benefit scandal

Another example relates to the possible consequences of the childcare benefit scandal. ‘We received questions on this subject from various government departments – for instance concerning children who were removed from the care of their families. But because the childcare benefit scandal was so wide-ranging and involved so many different aspects, our advice was not to respond to every individual request from the ministries, but to provide an overall picture instead. Based on that advice, the Director General of CBS decided to carry out a feasibility study on the options for conducting statistical research on this subject.’

We all have a role to play in data ethics

An ethics committee is all well and good, but how does CBS ensure that ethical questions are addressed properly across the whole organisation? ‘We do this by visiting the various teams and explaining what we do, for example. We also host an annual meeting for staff members to highlight data ethics and the work of our committee, and we make use of a range of internal communication channels. After all, data ethics is not just a matter for the ethics committee, but for all of us.’

Ethics ambassador

The subject of data ethics is not limited to CBS. This is an area that is also receiving attention at the government’s Department of Infrastructure and Water Management (Rijkswaterstaat), for example. Nelleke Groen is a lawyer who specialises in data and artificial intelligence (AI) at Rijkswaterstaat. Her portfolio previously included the area of privacy. She also served as a member of the Provincial Assembly from 2019, and in that role she spent five years on the ethics committee of the Inter-Provincial Consultative Body. ‘The subject of data and ethics is definitely on the agenda at Rijkswaterstaat. And a good ambassador is essential when it comes to data ethics, because it takes a lot of work to make sure that these questions are front and centre in people’s minds.’

Bridge with cars (© Tineke Dijkstra). Rijkswaterstaat is the government department responsible for developing and managing national roads and navigable waterways in our country.

Implications for the public realm

The core task of Rijkswaterstaat is to manage and develop our country’s roads, highways and waterways. The department is also committed to a sustainable living environment. Nelleke Groen: ‘The work of Rijkswaterstaat often relates to the public realm and how it is designed. Our interventions involve making certain design choices, which have consequences. They leave a mark on particular areas, and they affect the environment and the freedom of movement of the people who live there or who make use of public infrastructure. Important questions are therefore: where does the underlying data come from? Who do we share it with? How long do we keep it?’

Moral reflection

Rijkswaterstaat does not have a dedicated ethics committee, but it has developed an AI Impact Assessment (AIIA) questionnaire that is used to facilitate discussion around the deployment of AI systems. This serves as a tool in regular moral reflection meetings. ‘These meetings were introduced at Rijkswaterstaat several years ago, but we have only incorporated data ethics and AI since the introduction of the AIIA. The impact assessment itself relates to practical ways of handling common ethical dilemmas involving AI, such as bias and questions around climate impact. It also includes a plan in case the AI develops in an unexpected or undesirable direction. However, we still don’t think this is quite enough to cover all the hard questions. We will be evaluating the AIIA over the next few months and developing an AI strategy. That will involve looking at whether moral reflection should be given a more wide-ranging role, or whether perhaps we need to set up an ethics committee.’

Pitfalls

So what, according to Groen, are the potential pitfalls of an ethics committee or of regular moral reflection meetings? ‘Often, we don’t ask ourselves early enough in the process whether the things we are doing are really necessary, or whether we’re working on what we really want. We also find it difficult to pull the plug on projects that are not going well. We should actually be checking at multiple points along the way to see whether we’re still on the right path, so that we don’t run into unexpected issues.’ Groen says it is also important to pay close attention to whether the required areas of expertise are involved at the right moments. ‘At Rijkswaterstaat, we’re moving towards building interdisciplinary teams for AI applications, to ensure that all the relevant perspectives are included early in the process. This is currently difficult because developers, policymakers and lawyers don’t speak the same language and they take different approaches to risk assessment. But practice makes perfect, and this will probably improve over time.’