Routes, maps and commitments for what is coming

Artificial intelligence 2021: important developments in the international legal framework

2021 brought some progress in the international legal framework on artificial intelligence (AI), but more specific measures are needed to uphold human rights in the application of these technologies, because the impacts are real, negative and catastrophic, as Michelle Bachelet has pointed out.

CC:BY (Jernej Furman)


Much of what is discussed about artificial intelligence (AI) draws on science fiction movies and books. Pictures of shiny humanoid robots illustrate many webpages about this type of technology. Daniel Leufer points to this and other myths in the use and discussion of AI on a website that is worth checking out. Added to this myth of representation is an overly broad definition: that these are technologies endowed with superintelligence, whose use can be objective and unbiased, and which could solve all kinds of problems.

However, far from resembling the robots of Steven Spielberg’s movies or those starring Will Smith, AI already affects many parts of our lives: through its use by States to carry out a wide variety of tasks and to inform their decision-making, and through its use by private companies.

Two elements of the “AI governance” myth lead us to some questions. It is true that many countries in Latin America, such as Colombia, Chile, Brazil and Uruguay, are already adopting national strategies to deal with AI, in addition to trying to pass specific regulatory bills, as current discussions show.

In the case of Brazil, bill 21/2020 has drawn scathing criticism, for instance from the Rights in the Network Coalition (Coalizão Direitos na Rede), after its approval in the Chamber of Deputies without effective discussion with society and because it weakens existing legal guarantees. In Europe, discussions are also heated, and civil society is calling for an Artificial Intelligence Act that prioritizes fundamental rights.

This week, the “Responsible Artificial Intelligence Global Index”, a project of Research ICT Africa and the Data for Development Network, was launched. The index aims to track the implementation of responsible AI principles in more than 120 countries through an international network of independent research teams, evaluating the degree to which those principles are actually applied. The name of the launch event conveys the desire of a large part of society: to move from principles to practice, in the face of so many potential risks of human rights violations.

Here, we want to analyze the new developments on AI regulation within international bodies, which build on earlier instruments such as the OECD AI Principles, approved in 2019.

Negative and catastrophic impacts, with serious risks to privacy that demand urgent action

Michelle Bachelet, United Nations High Commissioner for Human Rights, recently published an important report on the serious risks to privacy arising from the use of AI tools (A/HRC/48/31).

According to Bachelet, profiling, automated decision-making and machine-learning technologies affect the right to privacy and several other associated rights in at least four specific sectors. For the law enforcement sectors (homeland security, criminal justice and border management), the implications are many. To name a few: vast databases that undermine or restrict privacy, heavy reliance on prediction in searches, investigations and criminal prosecutions, and a high level of opacity in these systems that prevents genuine State accountability in areas that have historically suffered from a lack of transparency.

The use of AI in remote biometric recognition (facial and emotion recognition) is also sharply criticized in the report, as it impairs the “ability of people to go about their lives unobserved and has a direct negative effect on the exercise of the rights to freedom of expression, peaceful assembly and association, as well as freedom of movement”.

The report was requested by the UN Human Rights Council in Resolution 42/15 of 2019 and draws on a meeting with experts in May 2020, as well as on inputs received through a specific call for contributions in 2021. It analyzes the issue mainly on the basis of Article 12 of the Universal Declaration of Human Rights and Article 17 of the International Covenant on Civil and Political Rights (ICCPR).

Bachelet points out that the risk of discrimination arising from AI-based decisions is very high. She lists possible approaches to address the challenges, making a series of recommendations on the design and implementation of safeguards to prevent and minimize harm. While the areas of health, education, housing and financial services need greater scrutiny, according to the report, it is the area of biometric identification that most urgently needs guidance to safeguard human rights.

Two of Bachelet’s nine recommendations to States are especially significant. The first calls on States to expressly prohibit AI applications that do not respect human rights, and to impose a moratorium on the sale and purchase of AI systems that pose a high risk to human rights until adequate protections are in place.

The second recommendation urges States to postpone the use of remote biometric recognition in public spaces until the authorities can demonstrate compliance with privacy and data protection standards and the absence of accuracy problems and discriminatory impacts. Notably, the moratorium on facial recognition had already been raised in Bachelet’s 2020 report on the impact of new technologies on the promotion and protection of human rights in the context of assemblies, including peaceful protests (A/HRC/44/24).

The recommendations directed at companies and States emphasize the need for due diligence throughout the entire life cycle of AI systems, including design, development, implementation, sale, acquisition and operation, with a strong focus on human rights impact assessments.

Impact on privacy, mass surveillance and other human rights

In October 2021, the UN Human Rights Council revised its Resolution on the right to privacy in the digital age (A/HRC/RES/48/4). This is an important step: the revision not only updated the text, but also made explicit the risks and dangers of adopting AI. The new text was presented by Brazil and Germany, went through a series of informal meetings between States with the participation of civil society, and was approved by consensus. Although the revision could have been more incisive, there is no doubt that the resolution demands greater efforts from States, above all the immediate observance of the right to privacy and other affected human rights.

Resolution 48/4 recognized that AI can pose serious risks to the right to privacy, “in particular when employed for identification, tracking, profiling, facial recognition, behavioral prediction or the scoring of individuals”. It also calls on States to adopt preventive measures and remedies for violations and abuses of the right to privacy, which may affect all individuals but have particular effects on women, children and people from vulnerable groups. It further emphasizes that States should develop and strengthen gender-sensitive public policies that promote and protect the right to privacy of all persons.

There was great expectation that this resolution would delimit some issues more clearly, especially given the High Commissioner for Human Rights’ strong position in proposing a moratorium on certain biometric and facial recognition technologies; in particular, a stronger recommendation for States to observe the moratorium on the purchase and sale of AI systems was expected.

However, we expect further developments from this resolution, since it requested the High Commissioner for Human Rights to submit a written report to the 51st session of the Human Rights Council. That report is expected to address trends and challenges on the subject, identify and clarify principles, safeguards and human rights best practices, and ensure broad participation from multiple stakeholders in its preparation.

Approach to ethical recommendations in AI

On November 24, 2021, the UNESCO General Conference adopted the Recommendation on the Ethics of Artificial Intelligence. The document, endorsed by 193 countries, presents a preamble with more than 20 considerations and defines its scope, purposes, objectives, values, principles and areas of application.

As values, UNESCO’s recommendation lists: respect, protection and promotion of human rights, fundamental freedoms and human dignity; environment and ecosystem flourishing; ensuring diversity and inclusiveness; and living in peaceful, just and interconnected societies. The guiding principles are: proportionality and do no harm; safety and security; fairness and non-discrimination; sustainability; right to privacy and data protection; human oversight and determination; transparency and explainability; responsibility and accountability; awareness and literacy; and multi-stakeholder and adaptive governance and collaboration.

In addition, the Recommendation sets out 11 main areas of public policy action, including a specific one on ethical impact assessment. Although this may appear to be a step forward, we believe this point is cause for concern and needs further explanation. First, because that ethical impact assessment includes human rights impact assessment as only one of its elements. This creates a potentially misleading overlap between the two tools, since human rights impact assessment is broader and deeper than ethical impact assessment.

Secondly, because human rights impact assessment and human rights due diligence are already enshrined in international legal instruments and have become the UN’s most recommended tools for companies to begin continuous human rights due diligence processes, as CELE notes, while ethical guidelines lack enforcement mechanisms and clarity about which institutions they invoke and how they empower people, as Article 19 argues.

Although it is a good start, establishing ethical recommendations for the use of AI technologies is not enough. As María Paz Canales has already pointed out, “ethics is not enough in democratic states where there is a normative commitment to promote and protect human rights”. More regulation of the use of AI is needed, because it is already having disastrous effects on parts of the population that were already in conditions of vulnerability.

As Daniel Leufer, quoted at the beginning of this article, points out, despite the rise of AI ethics, when very serious dangers are at stake, weighing benefits against harms under a utilitarian approach to ethics becomes questionable. A human rights approach, by contrast, starts from the premise that certain harms are simply unacceptable. Even though the UNESCO Recommendation provides an important ethical framework of values and principles, it should be understood as a complement to international human rights obligations, guiding the actions of States in the formulation of their legislation, policies and other instruments related to AI, in accordance with international law already in force.


* Translated by Gonzalo Bernabó.