Artificial Intelligence and source code

Digital trade agreements cannot prevent AI transparency

The use of digital trade agreements to regulate artificial intelligence, as well as mechanisms that could prevent its transparency, warrants greater attention from Latin American States.

CC:BY (Gilda Martini)

A few weeks ago, the AI Index 2023 was published by Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI), analyzing different facets of recent advances in artificial intelligence. Several aspects of the publication are worth highlighting. Among them is the growth in the number of AI-related incidents and controversies, which have increased 26-fold since 2012, according to data from the Algorithmic and Automation Incidents and Controversies (AAIC) Repository.

Various studies on systems such as ChatGPT likewise show that the problems associated with their use, whether by the private sector or by public authorities, are increasingly evident, confirming fears that several human rights organizations dedicated to the issue have raised in recent years. Also noteworthy is the report’s mapping of significant machine learning systems by the country of their authors: continuing the trend of prior years, in 2022 the vast majority of authors came from countries in the global North, specifically the United States, a handful of European countries, and China.

We highlight one point that merits greater consideration by Latin American countries: the use of digital trade agreements to regulate artificial intelligence, as well as mechanisms that could prevent transparency about how various AI systems are used.

Trade agreements: the digital agenda without human rights considerations

Various trade agreements have consequences for digital environments, regulating issues such as e-commerce and online consumer protection and setting standards for artificial intelligence. The Digital Economy Partnership Agreement (DEPA), signed by Chile, New Zealand and Singapore, contains a chapter specific to AI that commits States to adopting ethical frameworks for the governance of this technology.

But these digital trade agreements may also prohibit algorithmic transparency requirements by preventing the disclosure of the source code of AI software. This could limit the search for solutions to these problems, even excluding the participation of legal or regulatory authorities. Such transparency requirements appear, for example, in the European Union’s proposal for regulating AI, presented in 2021 and still under discussion.

A series of bilateral and multilateral trade agreements have now been signed containing commitments that allow the free flow of data across borders, strengthen national laws on trade secrets, and apply strict intellectual property protections to source code and even algorithms. These commitments take the form of a new prohibition on governments requiring access to or the transfer of software source code, subject to certain exceptions, and enjoy the active support of the United States, Australia, Canada, Japan and New Zealand.

Efforts to add yet another layer of protection for software through digital trade law could be highly problematic: the source code clause may already prove too restrictive for national digital policies. At the same time, much less progress has been made in addressing AI-related cross-border risks and harms, in areas such as competition policy, personal data protection, safeguards against the misuse of algorithms in labor and consumer markets, and ensuring that these technologies are used transparently, responsibly and with respect for human rights.

Looking back, the provisions prohibiting source code disclosure requirements were introduced by the US in the Trans-Pacific Partnership Agreement (TPP), and they have since been emulated in many other trade agreements, despite the firm opposition of many developing countries. The TPP prevents signatory states from requesting the transfer of, or access to, the source code of software. Source code is the human-readable specification of a computational process, which can be compiled and executed by a computer; “object code,” by contrast, is the result of converting source code into machine-readable instructions through a process known as “compilation.” A computer program’s source code can be protected by intellectual property law and treated as a trade secret: under Article 10.1 of the WTO’s Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), computer programs, in both source code and object code form, can be protected by copyright. Provisions of this kind can therefore threaten the right of access to public information, and with it the right to freedom of expression, as recognized by the constitutions and laws of various Latin American countries.

Transparency, in addition to strengthening trust in institutions, is a fundamental human right when considered from the perspective of access to information. It is enshrined in the American Convention on Human Rights and, in the case of Chile, is a State obligation under Articles 8 and 19(14) of the Constitution, by virtue of which all information held by government administrative agencies is presumed to be public. This is reinforced by the principle of maximum transparency or disclosure, expressly defined in the law on access to public information.

Accordingly, requests for access to public information concerning the source code of a government facial recognition application, for example, should not be denied on the basis of a presumed threat to property rights. Disclosing a software’s source code makes it possible to audit its operation, participate in its improvement, and monitor its safety, appropriateness and efficacy. There are, however, positions that treat such disclosure as a form of forced technology disclosure and a barrier to foreign trade. That is the stance taken in the “Report on Foreign Trade Barriers” prepared by the US in 2022 and 2023, which fails to consider the human rights involved in these cases.

Artificial intelligence use must respect human rights

The negative effects of various AI systems are increasingly obvious in terms of potential human rights violations, in particular growing discrimination against historically vulnerable groups on the basis of their income, skin color or gender. The other noteworthy finding of the 2023 AI Index is policymakers’ increasing interest in artificial intelligence, reflected in the growing number of laws and bills that seek to regulate its use. In Brazil and Chile there are currently several regulatory proposals in this regard, and a large share of the debate centers precisely on how to make the use of these systems transparent, in which access to source code and algorithms plays a fundamental role.

International AI standards cover both its uses and its processes, and it is important that they reflect the public interest, in line with human rights and social values. However, standard-setting is dominated by the main industry players and a select group of governments from the global North. Through trade agreements, AI technologies, including algorithms and source code, are granted additional intellectual property rights, which risks reinforcing the market power of dominant companies and blocking progress toward algorithmic transparency and accountability.