Inclusive Artificial Intelligence

"Inclusive AI" is about advocating for AI that is developed by and for everyone, in order to design just and empowering technologies.

Browse research content

These are samples of research projects, papers, and blog posts written by our researchers.

More about Inclusive AI

‘Inclusive AI’ may sound similar to ‘fairness in AI’ and ‘bias in AI’, fields that try to overcome inequalities resulting from biases in datasets. However, although Inclusive AI is certainly related to overcoming such inequalities, it also entails practices beyond ‘debiasing’ and ‘fairness’. Even if an AI system could be run on unbiased datasets, or could act ‘fairly’ in the sense that it distributes outcomes equally across users, that would not necessarily make it inclusive.
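To make the distinction concrete, below is a minimal sketch (in Python, with hypothetical data and group labels) of one common ‘fairness’ check, demographic parity, which asks whether a model hands out favourable outcomes at equal rates across groups. Even a perfect score on such a metric says nothing about who built the system, who can access it, or whose needs it was designed around.

```python
# Minimal sketch of one common 'fairness' check: demographic parity,
# i.e. whether a model's positive-outcome rate is (roughly) equal
# across groups. The predictions and group labels are hypothetical.
from collections import defaultdict

def positive_rates(predictions, groups):
    """Return the fraction of positive predictions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical model outputs (1 = favourable outcome) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(positive_rates(preds, groups))  # {'A': 0.75, 'B': 0.5}

# A gap between these rates is what fairness metrics like this one
# capture; inclusivity (who builds, accesses, and benefits from the
# system) is not measured here at all.
```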

Inclusive AI means that AI technology, as a whole, should be developed for and by everyone. This means that Inclusive AI demands an inclusive and diverse group of developers. After all, it is hard to justly anticipate the needs and wishes of different user groups if developers only move and think within a homogeneous group. A long-standing critique that Silicon Valley cannot seem to shake is that its engineers and developers are predominantly male, white, and in their 20s and 30s, and being a young white man comes with its own particular biases and therefore biased designs. This critique has, for instance, spurred the field of FemTech: technologies aimed at empowering women’s health. Entrepreneurs in this field advocate for technologies designed by and for women (and non-binary people) to avoid heteronormative designs, such as pink-and-purple applications that treat fertility and pregnancy as their end goal. Academics and activists such as Safiya Noble, Ruha Benjamin, and Timnit Gebru are well-known advocates of inclusive and diverse environments among AI producers and developers. In general, the active involvement of consumers, customers, patients, and residents in developing and improving AI technology, based on the needs of the communities of end users, is key to Inclusive AI.

Furthermore, Inclusive AI entails that AI technology is accessible to everyone, no matter the color of their skin, their gender, their cognitive and physical abilities, their religion, sexuality, or geographic location. To work towards such equal access, one can think of practices aimed at enhancing people’s data skills and data literacy, or of making technological infrastructures, such as hardware and software, widely available. In addition, the burdens of using and accessing AI technology should be equally distributed. For instance, people in the Global South tend to bear the costs and harms of AI technology without enjoying its benefits. Examples of such costs and harms are the exploitation of labor in low-wage countries to ensure ‘ethical AI’ for Global North users, as well as the exploitation of natural resources, the mining of rare minerals, e-waste, and the ecocide involved in enabling the technological infrastructures for AI. All the while, these people are treated not as potential end users but as the means to profitable technology for a select group.

Finally, Inclusive AI refers to AI technologies that are explicitly aimed at (marginalized) groups to enhance their capabilities, recognition, participation, and opportunities in society. For instance, speech and image recognition applications might aid people with visual impairments, while smart wheelchairs and canes might assist people with mobility constraints. While we often discuss its dangers, AI technology also has the potential to support people’s autonomy and foster (social) justice in society, provided that inclusivity is one of its core normative principles.
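As one concrete flavour of such assistive uses, here is a minimal sketch (assuming the Hugging Face transformers library; the checkpoint and file name are illustrative, and any captioning model could be substituted) that generates a natural-language description of an image, the kind of building block an alt-text or screen-reader aid could start from.

```python
# Minimal sketch: generating alt-text for an image with an
# off-the-shelf captioning model, a building block an accessibility
# tool for visually impaired users might start from.
# Assumes the Hugging Face `transformers` library is installed;
# the checkpoint and file name below are illustrative.
from transformers import pipeline

captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")

result = captioner("photo.jpg")  # path or URL to an image
print(result[0]["generated_text"])  # e.g. "a person walking a dog"
```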

Projects within this research theme

Inclusive Image and Video Captioning

Current AI systems that describe the content of images and videos in natural language often make assumptions about the gender, nationality, or physical appearance of the people in them. In this project, we aim to fix this unwanted behavior by developing more inclusive systems; a toy illustration of the problem follows below.
Ongoing
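As a toy illustration of the kind of behaviour this project addresses (not its actual approach), the sketch below scans a generated caption for gendered terms and swaps in neutral alternatives. The word list and replacements are hypothetical; real inclusive captioning would intervene in the model itself rather than post-edit its output.

```python
# Toy illustration: surfacing gendered assumptions in generated
# captions by replacing gendered words with neutral ones. The word
# list is hypothetical and not the project's actual method.
import re

GENDERED_TO_NEUTRAL = {
    "man": "person", "woman": "person",
    "boy": "child", "girl": "child",
    "he": "they", "she": "they",
}

def neutralize(caption: str) -> str:
    """Replace gendered words with neutral ones (illustrative only)."""
    def swap(match: re.Match) -> str:
        return GENDERED_TO_NEUTRAL[match.group(0).lower()]
    pattern = r"\b(" + "|".join(GENDERED_TO_NEUTRAL) + r")\b"
    return re.sub(pattern, swap, caption, flags=re.IGNORECASE)

print(neutralize("A woman rides a bike while a boy watches"))
# -> "A person rides a bike while a child watches"
```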

Can NLP bias measures be trusted?

van der Wal, O., Bachmann, D., Leidinger, A., van Maanen, L., Zuidema, W. & Schulz, K.

Automating the Analysis of Matching Algorithms

Endriss, U.

Participatory Budgeting

Improving Language Model Bias Measures

Explainability in Collective Decision Making

Papers within this research theme

Dealing with semantic underspecification in multimodal NLP