Human Lives Human Rights: The continued use of unregulated algorithms in the public sector by the Dutch government risks exacerbating racial discrimination, a new analysis of the country’s childcare benefit scandal says.
The report Xenophobic Machines exposes how racial profiling was baked into the design of the algorithmic system used to determine whether claims for childcare benefit were flagged as incorrect and potentially fraudulent.
Tens of thousands of parents and caregivers from mostly low-income families were falsely accused of fraud by the Dutch tax authorities as a result, with people from ethnic minorities disproportionately impacted.
While the scandal brought down the Dutch government in January, sufficient lessons have not been learnt despite multiple investigations.
Thousands of lives were ruined by a disgraceful process which included a xenophobic algorithm based on racial profiling.
The Dutch authorities risk repeating these catastrophic mistakes as human rights protections are still lacking in the use of algorithmic systems.
Alarmingly, the Dutch are not alone. Governments around the world are rushing to automate the delivery of public services, but it is the most marginalized in society who are paying the highest price.
Human rights organizations call on all governments to implement an immediate ban on the use of data on nationality and ethnicity when risk-scoring for law enforcement purposes in the search for potential crime or fraud suspects.
From the start, racial and ethnic discrimination was central to the design of the algorithmic system introduced in 2013 by the Dutch tax authorities to detect incorrect applications for childcare benefits and potential fraud.
The tax authorities used information on whether an applicant had Dutch nationality as a risk factor, and non-Dutch nationals received higher risk scores.
Parents and caregivers who were selected by the system had their benefits suspended and were subjected to hostile investigations, characterized by harsh rules and policies, rigid interpretations of laws, and ruthless benefits recovery policies.
This led to devastating financial problems for the families affected, ranging from debt and unemployment to forced evictions because people were unable to pay their rent or make payments on their mortgages. Others were left with mental health issues and stress on their personal relationships, leading to divorces and broken homes.
The design of the algorithm reinforced an existing institutional bias linking race and ethnicity to crime, and generalized the behavior of individuals to entire racial or ethnic groups.
These discriminatory design flaws were reproduced by a self-learning mechanism that meant the algorithm adapted over time based on experience, with no meaningful human oversight.
The result was a discriminatory loop in which non-Dutch nationals were flagged as potential fraudsters more frequently than Dutch nationals.
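The feedback loop described above can be sketched in a few lines of code. This is a hypothetical illustration: the feature names, weights, threshold, and update rule are invented for the example and are not drawn from the actual system.

```python
# Hypothetical sketch of a discriminatory feedback loop in a self-learning
# risk scorer. All names and numbers here are illustrative assumptions.

def risk_score(applicant, weights):
    """Weighted sum of the binary risk factors present in an application."""
    return sum(weights[f] for f in applicant["features"] if f in weights)

def run_cycle(applicants, weights, threshold=1.0, learn_rate=0.1):
    """Flag high-scoring applicants, then reinforce the weight of every
    feature seen in a flagged case -- with no human check on whether the
    fraud accusation was ever substantiated."""
    flagged = [a for a in applicants if risk_score(a, weights) >= threshold]
    for a in flagged:
        for f in a["features"]:
            if f in weights:
                weights[f] += learn_rate  # the bias compounds over time
    return flagged, weights

# Nationality used directly as a risk factor, as the report describes.
weights = {"non_dutch_nationality": 1.0, "low_income": 0.5}
applicants = [
    {"id": 1, "features": ["non_dutch_nationality", "low_income"]},
    {"id": 2, "features": ["low_income"]},
]

for _ in range(3):  # each cycle widens the gap between the two groups
    flagged, weights = run_cycle(applicants, weights)
```

Because only flagged cases feed the update, the nationality weight keeps growing while no evidence of actual fraud is ever consulted, which is how a self-learning system can entrench an initial bias rather than correct it.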
When an individual was flagged as a fraud risk, a civil servant was required to conduct a manual review but was given no information as to why the system had generated a higher risk score.
Such opaque ‘black box’ systems, in which the inputs and calculations of the system are not visible, led to an absence of accountability and oversight.
The black box system resulted in a black hole of accountability, with the Dutch tax authorities trusting an algorithm to help in decision-making without proper oversight.
A perverse incentive existed for the tax authorities to seize as many funds as possible regardless of the veracity of the fraud accusations, as they had to prove the efficiency of the algorithmic decision-making system.
Parents and caregivers who were identified by the tax authorities as fraudsters were for years given no answers to questions about what they had done wrong.
The findings of Xenophobic Machines are to be presented at a United Nations General Assembly side event on algorithmic discrimination on 26 October.