Feminist reflections for the development of Artificial Intelligence

CC:BY Maria José Porras Sepúlveda


During the first half of 2022, Derechos Digitales, with the support of the f<A+I>r project’s Latin America and Caribbean Hub, developed a feminist artificial intelligence (AI) guide titled Towards a feminist framework for AI development: from principles to practice.

The document opens with the question: «Is it possible to develop AI without reproducing logics of oppression?» It invites us to reflect on how we understand the field of AI and Latin America’s participation in its knowledge production scenarios, on the discrimination problems associated with how the AI field is currently configured, and on what alternative proposals exist for careful data management. Essentially, it is about trying to understand how feminist practices can lay the foundation for developing an inclusive AI that holds hope for social justice.

To continue this debate, in January and February 2023, Derechos Digitales organized a series of conversations among Latin American women who are developing artificial intelligence systems under the auspices of the f<A+I>r network, together with other women AI experts in the region. The main goal of these encounters was to encourage reflection based on the experience of developing specific projects, and to exchange recommendations and methodologies applicable to system design from feminist perspectives.

This text is available in Spanish and Portuguese.

On the construction of this text

This document attempts to synthesize the conversations held during the sessions, emphasizing concepts, challenges and lessons learned that can inspire future AI development initiatives. In addition to direct references to the dialogues and the participants’ interventions, the text seeks to expand on their ideas and combine them with other references.

The following offers a brief description of the topics addressed in each of the dialogue spaces held during the project:

  • The first session, held on January 26, 2023, and titled “Technologies as collective processes,” featured Sofía Trejo and Iván Meza, who are currently developing the “Conversational agent to support the dignified exercise of interpretation in indigenous languages in the legal field in Mexico” project. They conversed with Karla Prudencio, director of the law degree program at the Centro de Investigación y Docencia Económicas in Mexico, about methodological commitments for co-design with communities: listening to, understanding and contributing to their needs.

  • In the “Artificial Intelligence, for what and for whom?” session, held on January 31, 2023, with expert Fernanda Carles, an attempt was made to answer the question: what are the steps to follow in building an AI system? Carles mentioned some considerations to keep in mind at each stage of AI project development with a social objective.

  • The third session, held on February 2 under the title “Feminist Power–AI Power. Connections and disruptions,” was proposed as a conversation between Cristina Martinez Pinto and Luz Elena Gonzalez, coordinators of the “Gender perspective in AI crowd work in the Global South” project, with Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy. The participants discussed possible strategies to achieve digital connection and organization among women crowd workers.

  • In “Can the feminist approach be adapted to protocols?”, the cycle’s fourth and final session, held on February 8, Virginia Brussa, one of the managers of the “Integration of gender perspective in the design of Data Science projects for the public sector in LatAm” project, had an exchange with Maia Numerosky, Data Science Engineer, who shared her perspective on the development and deployment of data science-based projects by the public sector and academia.

The conversations were facilitated by Adriana Castrillón and Juliana Guerra and featured active participation from members of the Derechos Digitales team and of the f<A+I>r project’s Latin America and Caribbean node.

Feminist AI initiatives in Latin America

CC:BY (Maria José Porras Sepúlveda)

Latin America has been an important center for production of and reflection on feminist technology, and the same is true for AI. Since 2020, the f<A+I>r network has been creating a space for exchanging views and strengthening a series of initiatives that propose thinking about and developing an inclusive, transformative AI. Currently, the network is led by Women at the Table, the Tecnológico de Monterrey and the Tecnológico de Costa Rica, with support from the International Development Research Centre (IDRC). It has an active node in Latin America and the Caribbean. In addition to being a network, f<A+I>r promotes research and experimentation with feminist AI production.

Below you will find more detail about the feminist AI projects discussed during the dialogues promoted by Derechos Digitales, which were supported and financed by the f<A+I>r network in Latin America in 2022 and 2023:

Conversational agent to support the dignified exercise of interpretation in indigenous languages in the legal field in Mexico

This initiative sought to co-design a conversational agent with indigenous language interpreters, allowing them to collaboratively generate data to make visible the problems they face in their daily work; to enhance planning and decision-making; and to strengthen the policy advocacy power of interpreters and their organizations on subjects related to interpretation and access to justice in Mexico. Additionally, the agent would allow interpreters to build collective knowledge (such as glossaries) that can serve as supporting tools for their work.

The project sought to align all its processes and results with principles of co-design, shared benefits, digital autonomy and data sovereignty. To activate them, a fundamental part of the research work was built from in-person workshops, which served as spaces for dialogue and listening. Beyond those principles, the project incorporated a transversal gender perspective throughout the development process: not only by seeking numerical equity, but also by holding a workshop that allowed the gender perspective to be incorporated into the design of both the project and the agent.

All development processes needed to focus on balancing power relations among all involved actors, particularly between the research team, interpreters and interpreter organizations. This exercise was reflected in the work methodology, which included the development of a research protocol and community agreements, as well as strategies to incorporate the CARE Principles for indigenous data governance into the project.

More information about the project and the interpreters who collaborated on its development is available on their website.

Integration of gender perspective in the design of Data Science projects for the public sector

This initiative had the objective of formulating a methodology for designing data science projects for public officials, based on alternative dimensions of analysis and regional action proposals. The analysis incorporated approaches from the fields of data justice, design and intersectionality to promote a critical implementation of data science in the public arena and to explore key steps that strengthen the formulation of questions, team building and the hybrid nature of the data inherent to decision-making processes.

The methodology included conducting three workshops in October and November 2022 in Rosario, Argentina, with public officials and activists from various fields in the region. During the workshops, changes to a project design sheet used in Chile and published in an Inter-American Development Bank (IDB) guide for Latin America were explored and validated. These sessions sought to understand the need for a governance strategy, the importance of public engagement in data science projects led by the State, and the concept of data justice as an alternative to the idea of data ethics.

As a result, the initiative proposes the reformulation of data science project sheets in the public sector, following internal review, collective input from the online workshops held and an analysis based on a set of exploratory dimensions. The main changes involve the inclusion of participatory instruments at the different stages of design, of a project governance strategy, and of a crosscutting, iterative vision of feminist data justice.

«We had to adapt the materials for the workshops, to talk not only about technical questions, but also the feminist approach. Concerns arose around the impact on the right to privacy, but less so around the rights to communication or to information. We need to keep thinking about how to communicate data science issues to populations affected by this kind of project and about which rights we need», mentioned Virginia Brussa and Maria Paz Hermosilla, project managers.

A full article on the project can be read here.

Mainstreaming gender perspective for Crowd Workers in AI from the Global South

This is a research project on the Latin American women who work labelling content that will be used for training AI models, the so-called crowd workers. The research included conducting surveys to understand who they are, how they work and what their needs are.

Based on an understanding of their contexts, their family realities and the principal challenges they face, the project proposes the development of an AI platform that brings feminist perspectives to crowd work platforms, enabling workers to exchange information, create partnerships and achieve system scale-up.

«We found that these women do not have communication channels to connect with other workers or translation tools for the various tasks they develop. Our platform, supported by AI, will recommend tools that assist the development of abilities for their professional growth and will allow them to connect with other colleagues», state the project managers.

The complete description of the project and its findings can be found here.

Feminist reflections on AI

CC:BY (Maria José Porras Sepúlveda)

Rethinking concepts

Artificial intelligence has incorporated a series of specific concepts into daily language, and the construction of feminist AI systems requires examining and questioning them. In AI, as in any other field, language is not neutral, and the Latin American women experts investigating and developing systems have recast these concepts in their own terms and words.

The very idea of “artificial intelligence” has been subject to challenges and was the object of discussion during the dialogues. Fernanda Carles defines it as the ability of a system to adapt to its environment to solve a problem, operating with insufficient knowledge and resources. This involves systems that are designed to process huge amounts of information and that can solve problems that humans cannot. In some cases, such a system is faster «and, designed well, can be more objective», she notes.

Carles differentiates between two types of AI: narrow artificial intelligence and general AI. «Narrow AI is that which we see being deployed today, that exists outside of theory. It is focused on specific or defined tasks. It does not have consciousness, self-awareness or the ability to think», as opposed to what the very idea of “intelligence” might suggest.

On the other hand, the expert notes that general AI is a development that only exists in theory, since it has not become a reality. The idea is to generate computer systems that can experience information in ways similar to those of human beings, that have the capacity to learn, generalize, apply knowledge and plan the future, that are creative, express emotions and can work without supervision.

Matteo Pasquinelli and Vladan Joler, in a manifesto on AI as a mechanism for knowledge extraction, note that in the expression “artificial intelligence,” the adjective “artificial” carries a myth of the autonomy of technology, as Carles explained regarding the idea of general AI. According to them, this idea mystifies two processes of alienation in favor of a corporate scheme that extracts human knowledge: the geopolitical autonomy of technology firms and the erasure of the autonomy of working people. In their work, they propose changing that logic and thinking of machine learning as a tool for expanding knowledge. Their complete reflection can be read in Portuguese here.

Fernanda Carles introduced two other concepts that are also central to the development of artificial intelligence projects: modeling and weighting.

Data modeling is the process of documenting a design for a complex software system as an easily understood diagram, using text and symbols to represent the way that the data need to flow. The modeling does not indicate what the network will look like, only the kind of data that will fuel it. «I add data that I know, I control what comes out and with this I analyze what new information it can give me», says Carles.

She explains that correlation analysis allows one to understand the degree of dependence between the target variable (what the system intends to predict or classify) and the other variables. This makes it possible to decide which data to use and to gauge the importance of each variable in the system from a mathematical perspective.
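
To make this concrete, the sketch below shows one common way such a correlation analysis is done. It is a minimal illustration in Python, assuming a hypothetical CSV file and column names; it is not taken from any of the projects discussed here.

```python
# Minimal sketch: ranking variables by the strength of their correlation
# with a target variable, to inform which data to use. The file name and
# the "approved" target column are hypothetical.
import pandas as pd

df = pd.read_csv("applications.csv")  # hypothetical dataset

# Correlation of every numeric variable with the target.
correlations = df.corr(numeric_only=True)["approved"].drop("approved")

# Strongest (linear) relationships first.
print(correlations.abs().sort_values(ascending=False))

# Caveat: correlation only captures linear dependence, and it says
# nothing about why a variable matters or about people missing from
# the data.
```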

Maia Numerosky notes that «data are a key element in an AI system». She emphasizes how data represent power relationships: the availability of some data but not others reflects deeper social relationships. «For example, we have less data about people who work informally, less data on clandestine abortions, less data on trans and non-binary people. No amount of bias mitigation work on algorithms will improve the database».

Thus, a first problem when considering implementing an AI model lies in the availability and representativity of the data, in addition to the criteria used when the data are collected. The unavailability or lack of representativity of data can produce a series of problems. For example, in terms of training systems, the lack of data will mean that there are things the systems will never be able to “learn,” and this will have an impact on their results.
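
One simple way to surface this kind of problem before training is to compare group shares in the data against an external reference. The sketch below assumes hypothetical census figures, file and column names; real reference data and categories would come from each project’s own context.

```python
# Minimal sketch: flagging under-represented groups in a training set
# by comparing their share in the data with an assumed population share.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical dataset

# Assumed reference shares (hypothetical; note how small or missing
# categories, e.g. non-binary people, are often the first gap).
population_share = {"women": 0.51, "men": 0.48, "non-binary": 0.01}

sample_share = df["gender"].value_counts(normalize=True)

for group, expected in population_share.items():
    observed = sample_share.get(group, 0.0)
    if observed < 0.8 * expected:  # arbitrary alert threshold
        print(f"{group}: {observed:.1%} in data vs {expected:.1%} expected")
```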

In the Artificial Intelligence & Inclusion study led by Derechos Digitales and developed in partnership with a set of Latin American academic and civil society organizations, it was possible to detect the impact of biased data sets on decisions mediated by automated systems. In these cases, the biases can have an impact on the quality of life and the autonomy of the people affected, as well as potentially reinforcing their condition of exclusion and deepening pre-existing inequalities.

Talking about the use of AI in the public policy realm, Numerosky notes that «the problem is where and how the data are collected. The data must be collected with quality and clear criteria by different agencies». Otherwise, it will be impossible to analyze them and to generate information relevant to public policy development.

Numerosky distinguishes between two types of data on which AI works: critical data and non-critical data. The former are personal data, while the latter refer to objects or accessories. Even so, the distinction can be a fine one, because objects can also disclose personal, and even sensitive, information.

The availability and accuracy of data, therefore, can lead to the existence of biases which, if not detected and addressed from the beginning, can permeate the whole system, affecting its results. A bias occurs when there is disproportionate weight in favor of or against one piece of data or another. This is where it is important to remember what Numerosky emphasized: when we talk about data, often we are referring to information collected or inferred from real people, who will be affected by those biases.

Biases can happen at the source of collection, when the population sampled is not representative of the phenomenon to be studied. There can also be biases in the protocol design, data engineering biases, or biases arising from the algorithms themselves. For example, statistical tables created in databases can distort research: «if you put garbage into your model, you will get garbage out», summarizes Fernanda Carles.
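
Where a skew in collection is known and measurable, one common partial remedy is to reweight records so that over-collected groups do not dominate training. The sketch below is a generic illustration with hypothetical names, not a method used by the projects discussed; and, as Numerosky’s remark above suggests, it cannot recover information about people who were never recorded.

```python
# Minimal sketch: inverse-frequency sample weights to counteract a known
# collection skew. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical dataset

shares = df["group"].value_counts(normalize=True)
df["sample_weight"] = df["group"].map(lambda g: 1.0 / shares[g])

# Many learning libraries accept such weights (often via a
# `sample_weight` argument), so each group contributes comparably
# to the fit.
print(df.groupby("group")["sample_weight"].sum())  # equal totals per group
```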

Another important concept that emerged in the conversations was that of Open Data, which refers to a philosophy and practice that aims for certain types of data to be freely available for everyone, with no restrictions due to copyright, patent or other technical or legal control mechanisms. Open Data are digital data made available with certain technical and legal characteristics needed for them to be used, reused and shared freely by anyone, any time and anywhere.

It is worth noting that, according to the International Open Data Charter, unlocking data can only happen when people can be certain that this “will not compromise their right to privacy” and they have the “right to influence the collection and use of their own personal data or of data generated as a result of their interactions with governments.” In addition, it is not simply about publishing information: there is a series of criteria that must be met for a data set to be considered open and to be freely used in different applications. Learn more about open data standards and principles in the International Open Data Charter.
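
In practice, meeting those criteria starts with publishing data in open formats together with machine-readable metadata. The sketch below is a loose illustration of such a metadata record, written in Python; the fields echo common open-data catalog conventions (such as DCAT), and all values are hypothetical.

```python
# Minimal sketch: writing machine-readable metadata alongside an open
# dataset so it can be found, understood and reused. All values are
# hypothetical.
import json

metadata = {
    "title": "Gender-disaggregated service complaints, 2022",
    "description": "Complaints received by a hypothetical public agency.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "format": "CSV",            # an open, non-proprietary format
    "encoding": "UTF-8",
    "update_frequency": "monthly",
    "contact": "datos@agencia.example",
    "personal_data": False,     # anonymized before publication
}

with open("dataset.metadata.json", "w", encoding="utf-8") as f:
    json.dump(metadata, f, ensure_ascii=False, indent=2)
```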

CC:BY (Maria José Porras Sepúlveda)

During the dialogue between Virginia Brussa and Maia Numerosky, the importance of interoperability was also discussed: the characteristics of data that make it possible to process them on different types of systems and in different ways, without technical barriers. Considering the importance of open data in the public sphere, the question was posed: «How can we have less machista justice, for example, if we cannot systematically understand the data or if the data are not available?»

The point not only hearkens back to the discussion on biases in the collection and availability of databases and the historical struggles of the feminist movement for certain kinds of information to be systematically gathered by the State, e.g., in terms of violence against women. It also touches on debates over the right to information access, public transparency and algorithmic transparency: current topics in debates around AI regulation and, at the same time, foundational in discussions on human rights and the limits of government operation.

The idea of open data was presented in contrast to the assessment that «codes work and sometimes we don’t understand very well why», as Virginia Brussa summed up. The reference is to the widespread idea that AI algorithms work like a “black box” and that it is impossible to completely understand their functioning.

Brussa notes that this is a common problem in the public sector, where often already developed technology is acquired from third parties and adapted without fully knowing its characteristics: «in the government sector, many software packages are purchased that are private code, and we don’t know how the code operates». This is particularly problematic when talking about AI, since automated decisions made in the public sector must also be justified and explained. The adoption of systems in the ways indicated by Brussa adds a layer of opacity to government operations and, in the case of affecting rights, also makes it more complicated to remedy potential harm.

Proposals for explainability and algorithmic transparency developed within regulations on data protection and AI, as well as in ethical frameworks proposed by various sectors, seek to meet these challenges. Algorithmic transparency means that the factors influencing decisions made by algorithms should be visible to the people who use, regulate and are affected by them. Explainability, in turn, guarantees that those decisions are understandable, and is a key element of transparency.
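
For simple models, a first step toward explainability can be as direct as inspecting which variables drive the decisions. The sketch below uses scikit-learn with a hypothetical dataset in which all variables are assumed numeric; for opaque models, techniques such as permutation importance play an analogous role.

```python
# Minimal sketch: reading off which variables push a linear model's
# decisions, as a basic form of explainability. File and column names
# are hypothetical; all features are assumed numeric.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("benefits.csv")                 # hypothetical dataset
X, y = df.drop(columns=["granted"]), df["granted"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Positive coefficients push toward granting, negative ones against.
for name, coef in sorted(zip(X.columns, model.coef_[0]),
                         key=lambda pair: -abs(pair[1])):
    print(f"{name:25s} {coef:+.3f}")
```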

Beyond the need for new standards, the participants emphasized that Latin America has been a pioneer in implementing open data policies and practices, including through open procurement, and that these initiatives need greater visibility in the public sector, as a form of advocacy so that they are maintained, institutionalized and expanded, including toward greater algorithmic transparency.

CC:BY (Maria José Porras Sepúlveda)

Challenges

Advancements in the use of artificial intelligence have posed a series of challenges that pervade the work of people dedicated to developing feminist initiatives and thinking about how feminist practices can be incorporated into AI projects. Many of them are well known, although there has been little space in public debates for discussing them.

During the dialogues, participants pointed out different potential dangers in the implementation of AI systems, starting with biases, which can be in either the databases or the models, but which also reflect historical discrimination patterns. «Every technology that makes predictions or gives us conclusions based on automatic pattern detection in a data set will reinforce existing biases and, therefore, will amplify and spread them», sums up Maia Numerosky.

Biases in databases can be more or less evident. Training a system to automate the selection of people for management positions using existing databases of those who hold such jobs, for example, can reproduce a historical bias in favor of a fairly specific social group: men and white people, who have historically held most such positions.
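
A quick way to surface this kind of historical bias before training on such records is to compare selection rates across groups. The sketch below assumes a hypothetical file with a binary "selected" column; a large gap warns that a model trained on these labels will likely learn and reproduce the same pattern.

```python
# Minimal sketch: measuring group-level selection rates in historical
# hiring records before using them as training labels. File and column
# names are hypothetical.
import pandas as pd

df = pd.read_csv("past_hires.csv")  # hypothetical historical records

rates = df.groupby("gender")["selected"].mean()
print(rates)

# A gap of 0 would mean equal selection rates across groups.
print("selection-rate gap:", rates.max() - rates.min())
```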

On the other hand, there are more subtle biases that require a closer look: training a system to identify COVID-19 contagion patterns and to guide mitigation policies using self-diagnosis data available from an on-line application, for example, means potentially failing to consider a series of unreported cases from people who do not have access to devices or connections with the quality required for using self-diagnosis apps. Social inequalities and gaps, therefore, are also reflected in databases.

«Models are opinions embedded in mathematics. Any model, be it algorithmic or otherwise, constitutes an abstraction of reality and simplifies and ignores details», explains Numerosky, who also suggests that «we have to take biases into consideration throughout the process of working with data, from data collection to the evaluation of the model’s operation», a key lesson for feminist AI initiatives, but also for any project of this kind.

Furthermore, a series of challenges has been identified in obtaining information on the use of automated systems by States around the region, as Derechos Digitales noted in its studies on AI & Inclusion in Latin America. The dialogues emphasized the existence of interesting initiatives for making information available about algorithm use in countries like Chile, for example, where the Transparency Council and the Universidad Adolfo Ibáñez have jointly published a study with a list of 219 systems in operation and a proposal for standards to guide such publication. However, the participants mentioned the importance of adopting open data principles in their publication and of incorporating mechanisms that make it easier to obtain meaningful information about the systems implemented, without having to consult each agency individually about their operation.

Thinking about the development of systems by feminist groups, participants discussed the challenge of acquiring the knowledge needed to create an application that incorporates AI elements to solve relevant problems in a community, especially when the code needs to be developed from very early stages. This kind of initiative is key for reclaiming AI for collective, shared and public interests, beyond the commercial logic that has guided its development.

According to Virginia Brussa, citizen projects that propose creating an application involve a huge effort during development to obtain and share knowledge. She considers that there are not enough accessible materials that could be replicated, adapted and reused in the framework of these projects, and that «it is necessary for more materials to circulate».

For her, it is important that in this kind of project, efforts are made to manage knowledge better and to document not only the results, but also the development processes. In the same vein, a proposal that arose during the dialogues in response to such a challenge was the importance of fostering the creation, promotion and sustainability of open code libraries: repositories that contain code with free licenses enabling anyone to reuse, modify or publish them, with no need to request permission from their developers.

Considering feminist AI initiatives, Sofía Trejo highlights that every process for opening information has to be based on the agreements developed within each project and with each community: «we work a lot on what it implies to share with each person, and what it implies to share with the world». She also emphasizes that decisions about making knowledge public depend on the communities.

Iván Meza adds that it is necessary to understand the politics behind selecting a technology. He highlights that technologies are not neutral: they both shape and are shaped by power relations. It is necessary, then, to question the “whys” before getting to the “hows”.

Karla Prudencio explains the importance of developing processes that allow the creation of long-term relationships so that these reflections can be meaningful. About the project developed with Meza, she says: «one of our principles is that we only go to communities when they call us». According to her, it is challenging to obtain funding for initiatives that focus on processes more than on results, or that go beyond the technological aspect.

Proposals for a guide: lessons learned from Latin American feminist initiatives

CC:BY (Maria José Porras Sepúlveda)

During the dialogues, participants identified principles and values that guided their work in building AI processes and projects informed by feminist ethics and practices. Their lessons can inspire future initiatives that aim in the same direction.

Building a committed and collaborative team

  • A feminist AI project must begin with building a diverse work team, keeping in mind an intersectional approach. Priority must be given to the inclusion of people who have historically been excluded from technology decision-making and development spaces, such as women and LGBTQIA+ people, along with the creation of multidisciplinary teams that include, for example, ethics experts.

  • Work agreements and collaboration between the team and the community participating in the project’s development must be explicitly established: this means identifying and addressing potential conflicts of interest; establishing agreements on the ownership and authorship of any materials stemming from the interaction; and defining the licenses that will be used for publication and dissemination of the data, articles, reports, etc.

  • Spaces for sharing knowledge need to be strengthened, so that communication and learning can be expanded, not just about AI but about technology more broadly.

Choosing, using and taking care of technology, data and people

  • It is important to explore different options for approaching the problem, considering the context where the project will be implemented. The selection of a given technology is not neutral and has an impact on the power relationships that are set up.

  • Technology or Artificial Intelligence must not be understood as the solution for everything. Projects must be developed based on the specific needs identified, not just as a tool for consumption.

  • «It is essential to stop a moment to decide if an AI system is necessary and to train people in asking themselves this question», emphasizes Maia Numerosky. Where such a system is selected, she recommends «thinking about the application’s objective: if it will be descriptive, predictive or prescriptive; the care and the effects it will have».

  • In choosing a technological application, it is important to understand the policies that guide it and, in the case of proposals that adapt previously used systems, to know the history of their implementation in other contexts, to integrate lessons learned and avoid using databases known to be built unethically.

  • Everyone participating in an AI project must have the capabilities needed to take ownership of the technologies involved, including those used throughout the system or application development process. While it is important to distribute roles in a work team, everyone on the team must feel able to intervene in decisions about the technology used and the functions projected for the system. The creation of workshops or other spaces for exchanging knowledge about technologies is central to making this possible.

  • Any AI development process must be guided by the protection of people and of their autonomy. Strong anonymity and pseudonymity criteria must be contemplated in the construction and use of databases, as well as when later making them available, as sketched below.
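
As a minimal sketch of what such criteria can look like in practice, the Python snippet below pseudonymizes direct identifiers with a salted hash before a database is stored or shared. All names are hypothetical, and a real deployment would also require key management and a re-identification risk review, since quasi-identifiers can still single people out.

```python
# Minimal sketch: pseudonymizing direct identifiers with a salted hash
# before sharing a database. File and column names are hypothetical.
import hashlib

import pandas as pd

SALT = b"replace-with-a-secret-random-value"  # keep separate from the data

def pseudonymize(identifier: str) -> str:
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

df = pd.read_csv("interviews.csv")            # hypothetical dataset
df["person_id"] = df["person_id"].map(pseudonymize)
df = df.drop(columns=["name", "phone"])       # remove direct identifiers
df.to_csv("interviews_pseudonymized.csv", index=False)
```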

Towards a conclusion: encourage participation, expand the communities, and build better futures

CC:BY (Maria José Porras Sepúlveda)

«The AI we have today is going to be the infrastructure of tomorrow’s digital society», warns Gina Neff. Hence the importance of countering dominant narratives that present AI as an ahistorical solution, blind to structural social inequalities which, if not recognized, will become entrenched in our future.

An Artificial Intelligence at the service of social justice is a territorialized technology, created for the community, with the community. Communities should not be treated as external entities separate from the development of the systems, but as integral participants in the processes: they should be actively engaged in every stage of developing an AI system or application, starting from the initial planning and design phases, and they must be able to critically examine and question the proposed interventions.

Creating and sustaining spaces for dialogue is crucial, as well as allowing community members to participate and exercise their agency in the research and development stages of an AI system.

Furthermore, adopting specific methodologies that facilitate the participation of individuals from diverse communities, particularly those most likely to be impacted by an AI initiative, is key to advancing a feminist framework for AI.

Individuals involved in developing AI from a feminist perspective have an ethical obligation to share and disseminate their experiences and knowledge. It is important to document processes and to consider making code and data available in open formats, always respecting the particularities of each context and people’s privacy. By doing so, future initiatives can benefit from shared knowledge, common principles and commitments.

Opting for free, open, and interoperable databases and technical infrastructures not operated by big tech companies, whenever possible, is a political stance that supports alternative approaches to technology development.

These measures can help in the urgent task of imagining futures that are more just and free from oppression.

CC:BY (Maria José Porras Sepúlveda)

Participants in the sessions

  • Cristina Martinez Pinto is the founder and CEO of the PIT Policy Lab. She has worked as a Digital Development Consultant at the World Bank, directed C Minds’ AI for Good Lab, and co-founded Mexico’s National AI Coalition IA2030Mx. She is an alumna of the World Economic Forum (WEF) Global Shapers community, a member of the Day One Project Technology Policy Accelerator, and the youngest member of the Beeck Center for Social Impact and Innovation’s Board of Advisors.

  • Fernanda Carles is an activist, educator and programmer. She has worked for 5 years in coordination, management and consulting roles for civil society organizations, addressing topics like education with technology, digital security, ethical technology and human rights on the Internet. She is currently responsible for an educational maker space and works in research at the Mechanics and Energy Laboratory at the Universidad Nacional de Asunción, using machine learning to monitor and predict air pollution in the city.

  • Gina Neff directs the Minderoo Centre for Technology and Democracy at Cambridge University. Her award-winning research focuses on how digital information is changing our work and our daily life. Her books include Venture Labor (MIT Press 2012), Self-Tracking (MIT Press 2016) and Human-Centered Data Science (MIT Press 2022).

  • Iván Meza is a researcher at the National Autonomous University of Mexico (UNAM). A specialist in Natural Language Processing, he has worked on the development of indigenous language translators.

  • Karla Prudencio is the head of Political Advocacy at REDES A.C. and a researcher at the Centro Mexicano de Tecnología y Conocimiento Comunitario (CITSAC). She was Head Legal Counselor at Mexico’s Federal Telecommunications Institute and head of the Office of Transparency and Data Protection at the Center for Economic Research and Teaching. She has a history of work with rural and indigenous communities in Mexico around connectivity and digital rights.

  • Luz Elena Gonzalez is a technologist committed to the ethical design of technology policies to create more inclusive, sustainable and resilient cities in Latin America. As a project leader at the PIT Policy Lab, she leads the organization’s gender workstream, manages research teams and develops public policy recommendations.

  • Maia Numerosky is a Data Science Engineer at Eryx Coop. She has worked as a multidisciplinary teacher of Mathematics in secondary and higher education. She holds a bachelor’s degree in Applied Mathematics from the University of Buenos Aires.

  • Maria Paz Hermosilla is Founder and Director of GobLab, a public innovation laboratory of the School of Government at the Universidad Adolfo Ibáñez, and an expert in public innovation and the use of technology for the transformation of government. She has held positions in State administration and has counseled agencies on State transformation, innovation and the ethical use of information. She teaches data ethics in postgraduate programs at various schools of the UAI.

  • Sofía Trejo is a Doctor of Mathematics and a researcher at the Barcelona Supercomputing Center (BSC-CNS) in Spain, specialized in the ethical, legal, social, economic and cultural aspects of Artificial Intelligence. She is interested in promoting a critical understanding of technology inside and outside academic spaces with a particular emphasis on topics related to gender and Global Souths.

  • Virginia Brussa is a teacher and researcher in the areas of data, gender, international context in technological governance and public policies. She coordinates the +Datalab project (UNR), is co-director of the research unit on Open Environmental Education at the Environmental Studies and Sustainability Platform (PEAS-UNR) and counterpart for local open data projects of Argentina’s Federal Open Government Plan. 


Credits

This project was conceived and led by Juliana Guerra together with the Derechos Digitales team, with the collaboration of Adriana Castrillón and Maria José Porras Sepúlveda.

This effort has been possible thanks to the support of the f<A+I>r Network.

Team for the talks

Alejandra Erramuspe
Adriana Castrillón
Juliana Guerra
Ileana Silva
María Encalada

Systematization and notes

Adriana Castrillón
Juliana Guerra
Ileana Silva

Text

Ileana Silva
Jamila Venturini
Vladimir Garay

Review and editing

Vladimir Garay

Translation

Alice Nunes, Jennifer Marshall and Sarah Reimann of Urgas Tradu.c.toras

Video (conception and script)

Ileana Silva
Vladimir Garay

Illustrations and animations

Maria José Porras Sepúlveda

Financial and administrative support

Camila Lobato
Juan Carlos Lara
Paula Jaramillo

General supervision

Jamila Venturini
Juan Carlos Lara
Michel Souza
Vladimir Garay

Version and license

“Feminist reflections for the development of Artificial Intelligence”

Version 2.0 of May 24, 2023.

This work is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license: https://creativecommons.org/licenses/by/4.0/deed.en