Artificial Intelligence and Intersectional Development
Photo: © Mike MacKenzie/Image via www.vpnsrus.com / flickr.com; CC BY 4.0 Deed

AI brings enormous benefits to the development agenda in the digital era. It is already the main driver of emerging technologies such as big data, robotics and the Internet of Things, not to mention generative AI, with tools like ChatGPT and AI art generators garnering mainstream attention. It can, nevertheless, embed bias and significantly compromise the safety and agency of users worldwide.

These interdependent and interconnected elements are coming under increasing international scrutiny. On 18 July 2023, the UN Security Council held its first session on the threat that artificial intelligence poses to international peace and stability, and Secretary-General António Guterres called for a global watchdog to oversee a new technology that has raised at least as many fears as hopes (Reuters, 2023). The UN Special Rapporteur on the rights of persons with disabilities presented a report to the Human Rights Council in March 2022 on artificial intelligence (AI) and the rights of persons with disabilities (OHCHR, 2021). Enhanced multi-stakeholder efforts on global AI cooperation are needed to build global capacity for the development and use of AI in a manner that is trustworthy, human rights-based, safe and sustainable, and that promotes peace. In fact, the multi-stakeholder High-level Advisory Body on Artificial Intelligence, initially proposed in 2020 as part of the Secretary-General's Roadmap for Digital Cooperation (A/74/821) (UN, 2020), is now being formed to undertake analysis and advance recommendations for the international governance of AI.

On 18 July 2023, the UN Security Council held its first session on the threat that artificial intelligence poses to international peace and stability, and Secretary-General António Guterres called for a global watchdog to oversee a new technology that has raised at least as many fears as hopes.

The Intersectional Case of Gender and Disability Inclusion

AI systems designed without considering the experiences and needs of diverse populations can perpetuate discrimination and inequality. For example, AI chatbots that take commands from customers are already reinforcing unfair gender stereotypes through their gendered names and voices. Facial recognition algorithms have shown higher error rates when identifying women and people of colour, a direct result of biased training data. Gender-insensitive AI designs are leading to unfair credit scoring for women, and biased AI recruiting tools have automatically filtered out job applications from women.

AI can be of targeted benefit to women and girls with disabilities and can drive the search for inclusive equality across a broad range of fields highlighted by the Convention on the Rights of Persons with Disabilities, including the rights to privacy, autonomy, education, employment, health, independent living and participation. However, we must be aware of the many well-known discriminatory impacts of AI. AI is made 'smart' through a process of machine learning in which algorithms are trained on data that often reflects prior human decisions and value judgements, which may be faulted on many grounds. Disability can be 'seen' by the technology as deviant and therefore unwelcome. Emotion recognition technology, used to make evaluative judgements about people, also raises significant disability rights, privacy and confidentiality concerns.

AI tools can carry human biases and be exclusionary towards marginalized groups, including people with disabilities. Women in AI may face gender bias and stereotyping in the workplace, which can impact their career progression and limit their opportunities for advancement. One example is employment, where recruitment processes across countries and global, regional and national institutions increasingly use algorithms to filter out candidates.

AI systems designed without considering the experiences and needs of diverse populations can perpetuate discrimination and inequality. (...) Facial recognition algorithms have shown higher error rates when identifying women and people of colour, a direct result of biased training data.
Technology companies are expected to address actual or potential negative impacts regardless of the approach policymakers take to promote implementation of international normative standards. It is therefore important to place human rights due diligence at the heart of artificial intelligence applications.

Human Rights Due Diligence

Technology companies are expected to address actual or potential negative impacts regardless of the approach policymakers take to promote implementation of international normative standards. It is therefore important to place human rights due diligence at the heart of artificial intelligence applications. Misinformation and inherited biases associated with AI can have unintended outcomes across all sectors of society and risk leaving historically marginalized communities to bear the brunt of the impact. Civil Society Organisations (CSOs) and Organisations of Persons with Disabilities (OPDs) must be consulted at the national and global levels before AI policies are formulated.

UN Member States, the private sector and multinational corporations, and other global and regional organisations should include gender, disability and intersectionality in their artificial intelligence strategies, including national digital inclusion strategies. Conducting human rights due diligence and adopting a human rights-based approach to AI ethics and impact assessment should be a regular part of the development process. Engaging people with human rights expertise to join AI ethics teams can encourage multi-disciplinary thinking and spread awareness of human rights organization-wide. AI has the potential to help break down barriers and create a more inclusive society by providing innovative solutions for individuals with disabilities. The synergy between accessibility and AI is reshaping the way we approach inclusivity. For both accessibility and disability inclusion, stakeholders must apply their obligation of 'reasonable accommodation', as well as explicitly take disability into account when purchasing AI products and services.

Engaging people with human rights expertise to join AI ethics teams can encourage multi-disciplinary thinking and spread awareness of human rights organization-wide. AI has the potential to help break down barriers and create a more inclusive society by providing innovative solutions for individuals with disabilities.
Photo: © DisobeyArt/iStock-1405999300
Disability can be ‘seen’ by the technology as deviant and therefore unwelcome. (...) AI tools can carry human biases and be exclusionary towards marginalized groups including people with disabilities.

A Case of SSTC for AI

There is a robust case for promoting South-South and Triangular Cooperation (SSTC) across regions on artificial intelligence good practices and lessons learned. Governments can establish discussion forums on AI governance that engage all stakeholders, including human rights advocates, to foster better understanding of and mutual benefit from others' perspectives. These forums can better promote, among other instruments, UNESCO's Recommendation on the Ethics of Artificial Intelligence, including the international human rights obligations and commitments to which it refers, facilitating knowledge-sharing and capacity-building to enable effective implementation in all states. This must include collaboration between civil society and the software development community on the development and use of AI to achieve the SDGs.

Let us continue to push for inclusion and diversity in the AI governance conversation, including by fostering robust multi-stakeholder conversations in public policymaking processes, in academic discourse including privately funded research, and in deliberations among businesses through, for example, the United Nations Global Compact.

Towards Inclusive, More Diverse AI

New technologies give us a chance to start afresh - starting with AI - but it is up to people, not the machines, to remove bias. According to the Financial Times, without training human problem solvers to diversify AI, algorithms will always reflect our own biases.

While there is growing awareness of the broad human rights challenges that these new technologies can pose, a more focused debate on their specific challenges for different groups, including the rights of persons with disabilities, is urgently needed. Participation rights apply intersectionally, covering Indigenous people, migrants, minorities, women, children and older persons with disabilities, among others. The right of persons with disabilities and their representative organizations, including organisations led by women with disabilities, to participate in artificial intelligence policymaking and in decisions on its development, deployment and use is key to achieving the best from artificial intelligence and avoiding the worst.

Let us continue to push for inclusion and diversity in the AI governance conversation, including by fostering robust multi-stakeholder conversations in public policymaking processes, in academic discourse including privately funded research, and in deliberations among businesses through, for example, the United Nations Global Compact. In an increasingly digital world, both inclusion and accessibility take on a new level of significance. It is about ensuring that everyone, regardless of their abilities, can access and interact with technology and information without fear or favour.

New technologies give us a chance to start afresh - starting with AI - but it is up to people, not the machines, to remove bias.

Dr. A. H. Monjurul Kabir
Dr. A. H. Monjurul Kabir, a senior global adviser at UN Women HQ, is a public speaker and an expert in political science, governance, gender, human rights, disability inclusion and intersectionality. Dr. Kabir writes extensively on making development inclusive, accessible and intersectional, and can be reached on X (formerly Twitter) at @mkabir2011.