Human Lives Human Rights: Urgent action is needed, as it can take time to assess and address the serious risks this technology poses to human rights, warned the UN High Commissioner.
“The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be”.
Ms. Bachelet also called for AI applications that cannot be used in compliance with international human rights law to be banned.
“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights”.
The UN rights chief expressed concern about the “unprecedented level of surveillance across the globe by state and private actors”, which she insisted was “incompatible” with human rights.
She was speaking at a Council of Europe hearing on the implications stemming from July’s controversy over Pegasus spyware.
The Pegasus revelations were no surprise to many people, Ms. Bachelet told the Council of Europe’s Committee on Legal Affairs and Human Rights, in reference to the widespread use of spyware commercialized by the NSO Group, which affected thousands of people in 45 countries across four continents.
The OHCHR report published earlier shows how AI affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.
The document includes an assessment of profiling, automated decision-making and other machine-learning technologies.
The situation is “dire”, said Tim Engelhardt, Human Rights Officer, Rule of Law and Democracy Section, speaking at the launch of the report in Geneva on Wednesday.
The situation has “not improved over the years but has become worse”, he said.
Whilst welcoming “the European Union’s agreement to strengthen the rules on control” and “the growth of international voluntary commitments and accountability mechanisms”, he warned that “we don’t think we will have a solution in the coming year, but the first steps need to be taken now or many people in the world will pay a high price”.
OHCHR Director of Thematic Engagement, Peggy Hicks, added to Mr. Engelhardt’s warning, stating: “It’s not about the risks in future, but the reality today. Without far-reaching shifts, the harms will multiply with scale and speed and we won’t know the extent of the problem.”
According to the report, States and businesses have often rushed to incorporate AI applications without carrying out due diligence. It cites numerous cases of people being treated unjustly because of AI misuse, such as being denied social security benefits by faulty AI tools or being arrested on the basis of flawed facial recognition software.
The document also highlights a need for much greater transparency by companies and States in how they are developing and using AI.
“The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society,” the report says.
“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight and dealing with the almost inevitable human rights consequences after the fact.
“The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us,” Ms. Bachelet stressed.