NORMATIVE PRINCIPLES FOR GENDER EQUITY IN AI


Introduction

The following principles offer a set of tools that can be employed to address gender inequity in AI in two central ways: First, these principles call for fostering a welcoming and inclusive AI work environment that adheres to the values of equality, respect, reflexivity, and accountability. Second, the principles highlight the need to critically reevaluate ML/AI research practices and their social impacts to ensure more inclusive and socially responsible AI systems. 

1. Diversifying Participation

Diverse participation begins at the earliest stages of forming AI research teams and extends through the development and deployment of AI systems. It means prioritizing the inclusion of individuals with diverse perspectives who can meaningfully participate in and contribute to the development process. Diverse teams are more creative and rigorous than non-diverse teams and promote more innovative outcomes (Bodla et al., 2018; Díaz-García et al., 2014).

Moreover, when ML systems are developed by designers who reflect the diversity of those who will use them, they are more likely to produce less biased and discriminatory outcomes. For example, gender diversity in ML research teams can help ensure that Two-Spirit, non-binary, trans, and intersex people are not miscategorized by ML systems within a gender binary. Systems that rely on binary gender categories in data descriptions often fail to recognize a wider diversity of gender identities. Developing diverse teams can reduce these omissions and enrich analyses and projects, making them more nuanced and inclusive. To accomplish these goals, mechanisms should be in place to ensure that all actors can shape projects, contribute their personal expertise, and be considered in decision-making processes.
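As a concrete illustration, the minimal sketch below shows one way a data schema might record gender without forcing a binary. The field names and option list are hypothetical assumptions, not a standard; in practice they would need to be developed in consultation with the communities being described.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative gender options; a real project should derive these
# in consultation with the communities being described.
GENDER_OPTIONS = {
    "woman", "man", "non-binary", "two-spirit", "trans woman",
    "trans man", "intersex", "self-described", "prefer not to say",
}

@dataclass
class Participant:
    participant_id: str
    # A free-text self-description is kept alongside the coded value so
    # identities outside the predefined list are not erased.
    gender: str
    gender_self_described: Optional[str] = None

def validate_gender(value: str) -> str:
    """Accept any listed option; anything else is recorded as self-described."""
    return value if value in GENDER_OPTIONS else "self-described"
```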


Diversifying participation also requires including relevant stakeholders’ perspectives in the design of projects. Relevant experts and members of the communities most impacted by the technologies should be able to collaborate meaningfully in developing AI systems. For instance, if a project is undertaken to develop a technology for wheelchair users, medical experts, policymakers, healthcare providers, and disabled communities should be consulted and involved in the design stages of the project. Protocols for refusal of participation should also be established, especially for technology deployment or data use decisions.


2. Prioritizing Intersectional Frameworks

In 2022, according to the US Department of Labor, women made up about a third of the US computing workforce. However, only 3% were Black women, and 2% were Latinx women. These statistics reveal that the barriers to participation for white women are different than those experienced by women of color. We can more effectively address these barriers to participation through an intersectional framework.

Intersectionality recognizes that individuals have multiple interlocking social identities (such as race, gender, class, sexuality, and ability) that shape their lived experiences to create distinct personal realities (Crenshaw, 1989). Women with different identities interact differently with technology (Haraway, 1985; Kafer, 2014; Benjamin, 2019). Conceptualizing diversity through an intersectional framework helps build research teams composed of a wide range of professionals with varied personal expertise. Such teams are better at identifying and addressing blind spots and erroneous assumptions, making development processes more efficient, rigorous, and trustworthy.


An intersectional approach to the development and deployment of AI systems also recognizes that different groups have unique relationships with various technologies. For example, if a team of researchers develops a technology to detect certain medical conditions, testing it on a range of body types that vary by race, gender, weight, ability, and age helps ensure that a wider group of individuals is included and served by the ML technology. Prioritizing intersectional needs when developing models benefits more people.
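A minimal sketch of what this could look like in practice: before evaluation, count how many test examples fall into each intersectional subgroup so that under-covered groups can be flagged. The column names and threshold below are illustrative assumptions, not prescriptions.

```python
import pandas as pd

# Hypothetical test-set metadata; column names are illustrative.
test_set = pd.DataFrame({
    "skin_tone": ["light", "dark", "dark", "light", "medium"],
    "gender":    ["woman", "woman", "man", "non-binary", "woman"],
    "age_group": ["18-34", "35-54", "55+", "18-34", "35-54"],
})

# Count how many test examples fall into each intersectional subgroup,
# so under-covered groups can be flagged before the model is evaluated.
coverage = (
    test_set.groupby(["skin_tone", "gender", "age_group"])
            .size()
            .rename("n_examples")
            .reset_index()
)

MIN_EXAMPLES = 30  # illustrative threshold; context-dependent in practice
under_covered = coverage[coverage["n_examples"] < MIN_EXAMPLES]
print(under_covered)
```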


3. Designing Reflexive Projects

If intersectional frameworks highlight the various ways that people with different backgrounds relate to AI and ML systems, it follows that AI and ML projects are also inextricably shaped by the practitioners who develop them. No one is free of bias, so it is crucial to pay attention to how designers’ and stakeholders’ needs and desires influence the systems they create, as well as the origins and characteristics of the datasets they use.

This awareness is critical for responsible machine learning and artificial intelligence for two key reasons. First, when AI and ML development is understood as an endeavour which exists within the domain of human culture and decision-making, any claim of design neutrality dissolves. Second, when we know who is on board in the development of a project, we can gain a better understanding of the project’s capabilities.


For example, Gallaudet University (a university that operates entirely in American Sign Language) enlists d/Deaf architects to design “DeafSpace,” or sign-language-friendly architecture, addressing otherwise overlooked architectural challenges relating to space, proximity, wall colour, and sensory reach (Gallaudet, 2023). In this case, Deafness, fluency in ASL, and a degree in architecture serve as a specific lived context that informs how physical space can be designed for sign language communication. This example illustrates how positionality has a direct relationship to design and design claims, making explicit the ways that different applied knowledge is produced.


Humanizing machine learning processes is intimately linked to intersectionality, as it acknowledges that each person develops partial perspectives and expertise based on their own lived experience. Being reflexive means recognizing our own partial perspectives and working with individuals whose perspectives complement them. Reflexivity leads to more transparency, because a reflexive approach entails explicitly aligning a project’s objectives with the abilities and life experiences of the project’s participants.


4. Foregrounding the Materiality of AI

Through imagery, marketing and discourse, AI is often represented as an immaterial, abstract technology. A critical approach to machine learning goes beyond these conceptions to highlight the human intervention, labour and environmental impacts intrinsic to the development of ML systems.

Machine learning models are designed and developed by people, and datasets are never ‘raw’. Recognizing this challenges the idea of disembodied data subjects, an idea that works to obscure their varied interests, lived experiences, and locations within power relations (Gray & Witt, 2021). Moreover, human decision-making goes into the collection, organization, and preparation of data, and this labour is often unrecognized and undervalued. It is well documented that ML development relies on extractive forms of labour in the organization and labeling of data. Addressing this level of inequity in your project means knowing how your data was collected, organized, and labeled, where that work took place, and by whom. In addition to interrogating gender equity at the design stages of AI/ML development, equity also requires attending to the areas of AI/ML development that have historically been precarious, underpaid, and invisibilized, and in which workers from the Global South are often over-represented (Crawford, 2021; Irani, 2015).
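One way to build this awareness into a project is to keep a short provenance record alongside each dataset. The sketch below is loosely inspired by dataset documentation practices; all field names and values are hypothetical and would be adapted to the project at hand.

```python
from dataclasses import dataclass, field

# A minimal provenance record; the fields shown here are illustrative,
# not a standard.
@dataclass
class DatasetProvenance:
    name: str
    collected_by: str           # organisation or team that gathered the data
    collection_region: str      # where the data subjects are located
    labelled_by: str            # who performed annotation, and under what terms
    labeller_compensation: str  # e.g. hourly rate, piece rate, unpaid
    known_gaps: list = field(default_factory=list)

example = DatasetProvenance(
    name="example-image-corpus",
    collected_by="research team (hypothetical)",
    collection_region="unspecified",
    labelled_by="contracted annotators via a crowdwork platform",
    labeller_compensation="piece rate; terms to be documented",
    known_gaps=["few images of darker skin tones", "no consent records located"],
)
print(example)
```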


Additionally, shedding light on the materiality of ML systems means recognizing the environmental implications of technology development. The extraction of natural resources and the large carbon footprint linked to the lifecycle of machine learning should be considered and acknowledged transparently: only then can strategies be put in place to minimize these impacts in the future. Recognizing the reliance on extractive labour and on the environment, and naming these issues, is an important step toward taking measures to alleviate negative material effects and change unequal systems.
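As a rough illustration, a project team could make a back-of-the-envelope estimate of training energy use and emissions and publish it alongside the model. Every number in the sketch below is an illustrative placeholder, not a measurement from any real project.

```python
# A back-of-the-envelope training-footprint estimate; all values below
# are illustrative placeholders.
gpu_power_kw     = 0.3    # average draw per GPU, in kilowatts
num_gpus         = 8
training_hours   = 120
pue              = 1.5    # data-centre power usage effectiveness
carbon_intensity = 0.4    # kg CO2e per kWh for the local grid

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
co2e_kg    = energy_kwh * carbon_intensity

print(f"Estimated energy use: {energy_kwh:.0f} kWh")
print(f"Estimated emissions:  {co2e_kg:.0f} kg CO2e")
```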


5. Identifying and Addressing Biases

To ensure fair AI systems and non-discriminatory outcomes, practitioners should identify at which points in the process of developing AI systems various sources of harm can unfold. These sources of harm can manifest as a range of different biases, depending on the stage and process in the project; for this reason, there are no standard mitigation strategies, and issues should be addressed on a case-by-case basis. By identifying where sources of bias arise, practitioners can better diagnose problems and prevent worst-case scenarios in future projects.

Biases can be identified and mitigated at various stages, from data collection to model deployment. ML/AI practitioners have highlighted several types of biases that can occur in the process of developing AI systems: historical bias, representation bias, measurement bias, aggregation bias, learning bias, evaluation bias, and deployment bias (Suresh & Guttag, 2021, 3-4). Historical bias refers to biases in datasets that already exist in the world and hence get reproduced in machine learning systems; text datasets used to develop large language models, for instance, are known to contain gendered biases and stereotypes. Representation bias refers to situations where the data sample used to develop a model does not represent the use population and underrepresents parts of it. Evaluation bias, on the other hand, occurs when the benchmarks employed to evaluate ML models and algorithms misrepresent the use population. Facial recognition technologies, for instance, have well-known representation and evaluation biases: higher error rates for dark skin tones are well documented and have been linked to the underrepresentation of dark-skinned faces in the training datasets used to develop these models.
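A minimal sketch of how representation and evaluation bias might be surfaced in practice: compute error rates disaggregated by group rather than relying on a single aggregate score. The data and column names below are invented for illustration.

```python
import pandas as pd

# Hypothetical evaluation results; column names and values are illustrative.
results = pd.DataFrame({
    "skin_tone": ["light", "light", "dark", "dark", "dark", "light"],
    "y_true":    [1, 0, 1, 1, 0, 1],
    "y_pred":    [1, 0, 0, 1, 1, 1],
})

# Disaggregated error rate per group: a large gap between groups is a
# signal of representation or evaluation bias worth investigating.
results["error"] = (results["y_true"] != results["y_pred"]).astype(int)
per_group = results.groupby("skin_tone")["error"].mean().rename("error_rate")

print(per_group)
print("Max gap between groups:", per_group.max() - per_group.min())
```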


These are just a few examples of the biases that can arise in AI systems. To address them, machine learning algorithms and models should be monitored and evaluated regularly using algorithmic fairness measures and techniques, in addition to human auditing and intervention.
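For instance, one simple check that could be run regularly is to compare positive-decision rates across groups (a demographic parity gap) and flag large gaps for human review. The sketch below assumes hypothetical decision logs and an illustrative threshold; the appropriate measure and threshold are context-dependent.

```python
import pandas as pd

# Hypothetical batch of recent model decisions; values are illustrative.
decisions = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1, 0, 0, 0, 1, 1],
})

# One common fairness measure: the gap in positive-decision rates
# between groups (demographic parity difference).
rates = decisions.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()

ALERT_THRESHOLD = 0.1  # illustrative; the right threshold depends on context
if parity_gap > ALERT_THRESHOLD:
    print(f"Parity gap {parity_gap:.2f} exceeds threshold; flag for human review")
```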


6. Ensuring Transparency and Accountability

AI systems’ development and decision-making processes should be clear, understandable, and interpretable so that these systems can be audited and any biases that arise from them identified. It is also crucial that development teams are honest about the capacities of the project and communicate these capacities both internally and externally to prevent misuse of the final project.

As a first step toward transparency, accountability mechanisms should be established in collaboration with relevant communities, either before or in the early stages of the project. This means establishing a meaningful channel of communication and outreach between the ML/AI team and relevant communities and stakeholders. As the project develops, open access to the datasets, algorithms, and development and decision-making processes should be available to these same communities. This openness can foster trust and reliability if done reflexively, mindfully, and respectfully. ML/AI practitioners can work with communication teams to document and communicate to relevant communities and stakeholders the risks, and the potential improvements that can be made, at various stages of the development process. This addresses concerns about ‘black boxing’ and encourages dialogue among practitioners, the wider project team, and relevant community members.
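One possible vehicle for this kind of documentation is a short, model-card-style summary written for stakeholders rather than specialists. The fields and values in the sketch below are illustrative only, not a required format.

```python
# A minimal, model-card-style summary intended for sharing with
# stakeholders; all fields and values here are illustrative.
model_card = {
    "model_name": "example-screening-model",
    "intended_use": "decision support only; not for fully automated decisions",
    "out_of_scope_uses": ["surveillance", "use on populations absent from the data"],
    "training_data": "see the dataset provenance record",
    "known_limitations": [
        "higher error rates for under-represented groups in testing",
        "not evaluated outside the original deployment region",
    ],
    "contact_for_redress": "project accountability lead (to be designated)",
}

for key, value in model_card.items():
    print(f"{key}: {value}")
```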


Accountability implies that humans at all stages of a project are ethically responsible, in their positions, for any consequences that might arise from an ML system. It starts with thoroughly educating researchers and designers about the social impacts of their work. This can be done in workplaces through training and the implementation of guidelines and resources for development teams. In a larger context, it is also achieved through more interdisciplinary training in STEM higher education programs. Finally, processes such as presenting clear avenues of redress to the public and designating accountable individuals in case of negative social impacts can constitute meaningful ways to hold ML systems’ creators accountable.


7. Democratizing AI Knowledge and Research

ML/AI knowledge and research must be accessible to relevant stakeholders to ensure inclusivity and public participation in creating responsible AI systems. By removing barriers and disparities in access to AI resources, such as research papers, educational materials, and technical documentation, the foundation for a more inclusive and democratized AI community can be built. This includes making AI knowledge and research publicly available by providing open-access resources and using non-technical communication and dissemination methods, so that more people, regardless of their expertise, have access and can contribute to the development of AI systems. It also implies making the functioning of AI systems easily explainable to the public, so that users not only know how to use them but also understand how their outputs are produced and what their limitations are. This ensures that individuals can inquire into predictions or decisions made about them, understand how they were made, and request review or human intervention if necessary.
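As a small illustration of explainability in plain language, the sketch below breaks a single decision from a simple linear scoring model into per-feature contributions that could be communicated to the person affected. The feature names and weights are invented for the example and do not describe any real system.

```python
# Plain-language explanation of one decision from a hypothetical linear
# scoring model; feature names and weights are invented for illustration.
weights   = {"years_of_experience": 0.8, "referral": 0.5, "gap_in_employment": -0.6}
applicant = {"years_of_experience": 2, "referral": 1, "gap_in_employment": 1}

contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())

print(f"Overall score: {score:.1f}")
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    direction = "raised" if value > 0 else "lowered"
    print(f"- {name.replace('_', ' ')} {direction} the score by {abs(value):.1f}")
```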

Rather than addressing a generalized audience, the dissemination of knowledge should address the specific context of the community in which machine learning/AI is situated or on which it has an impact. Access, then, depends on the needs of the community at hand. Accessible knowledge dissemination can thus take various forms: access provisions responding to disability, language(s) spoken, expertise, internet service, social or cultural understanding, and/or lack of information.



Bibliography