Fairness in Artificial Intelligence

The term “fairness” covers a multitude of different concepts. Choosing the right notion of fairness for a specific AI system presents a formidable research challenge.

Browse research content

These are samples of research projects, papers, and blog posts written by our researchers.

More about Fairness

Most people would agree that any AI system deployed in the public sphere should treat all people whose lives might be affected by its operation fairly. But what does that actually mean? What is “fairness”?

One aspect of fairness concerns the absence of undue bias. In a nutshell, this boils down to ensuring that people who should be treated equally by the AI system really are treated equally. For example, a system that decides on the creditworthiness of an applicant may base its decision on relevant attributes such as the applicant’s financial history—but not on irrelevant attributes such as, say, their gender.

Treating people fairly is not as simple as just treating everyone the same. That’s because individuals may differ greatly in their needs, desires, and abilities. This insight goes back to at least Aristotle, and the question of how best to define the elusive notion of fairness has occupied philosophers ever since. John Rawls (1921–2002), often described as the most important political philosopher of the 20th century, proposed to address this question by means of an intriguing thought experiment, the veil of ignorance:

“Imagine that, shortly before the creation of the universe, you are asked to decide on the rules that should govern society. The catch is that you don’t yet know anything about your own position in society (such as your gender or ethnicity). How do you decide?”

Rawls’ claim is that, behind this veil of ignorance, a rational person would end up designing a society that’s fair.

Now, what does all of this have to do with AI? Rawls’ idea is pretty abstract. Designing all of society from scratch is just not a thing most of us will get to do very often. But building an AI system from scratch absolutely is something that people do. And to do it well, they will need to put themselves in the position of a user whose personal circumstances they do not know about. So, when designing AI systems, this abstract philosophical thought experiment suddenly becomes a very concrete question of immediate practical significance.

So let’s explore some possible answers to the question of how to design the rules of an AI system. Suppose that every user derives some utility—a number between 0 and, say, 100—from every decision taken by the system.

In a utilitarian system, we would try to take decisions that maximise average utility. Behind the veil of ignorance, where you are equally likely to end up as any one of the affected individuals, your expected utility is exactly this average. Such a system is therefore fair if we are willing to assume that people are risk-neutral, because—in expectation—everyone receives the highest utility possible.

In an egalitarian system, we instead try to maximise the utility of the worst-off individual. In other words, we try to maximise minimum utility. This is fair if we are willing to assume that people are risk-averse and would want to protect themselves as best as possible in the worst of circumstances.

The utilitarian and the egalitarian approach are perhaps the two most obvious routes one could take. But there are other options as well. For example, the game theorist John Nash (1928–2015), who received the Nobel Prize in Economics in 1994 and whose life was the subject of the 2001 Hollywood blockbuster A Beautiful Mind, proposed that we should aim for a decision that maximises the product of individual utilities (rather than their average or minimum). This might seem counterintuitive at first, but it is actually a beautiful compromise between the utilitarian and the egalitarian view: the product of utilities rewards both increases in individual utility (the utilitarian concern) and decreases in inequality (the egalitarian concern).
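To make these three rules concrete, here is a minimal sketch in Python; the candidate decisions and their utility numbers are invented purely for illustration:

```python
# A minimal sketch comparing the three aggregation rules discussed above.
# The candidate decisions and utility numbers are invented for illustration.
from math import prod

# Each candidate decision assigns a utility (between 0 and 100)
# to each of three affected users.
decisions = {
    "A": [70, 70, 70],    # perfectly equal, but modest utilities
    "B": [100, 100, 40],  # highest average, but one user fares badly
    "C": [90, 85, 60],    # somewhat unequal, yet no one fares badly
}

def utilitarian(utilities):
    """Average utility: what a risk-neutral person would maximise."""
    return sum(utilities) / len(utilities)

def egalitarian(utilities):
    """Minimum utility: protects the worst-off individual."""
    return min(utilities)

def nash(utilities):
    """Product of utilities: Nash's compromise between the two."""
    return prod(utilities)

for rule in (utilitarian, egalitarian, nash):
    best = max(decisions, key=lambda d: rule(decisions[d]))
    print(f"{rule.__name__} picks decision {best}")

# Prints: utilitarian picks B, egalitarian picks A, nash picks C.
```

On this invented profile, each rule selects a different decision: the utilitarian rule accepts B’s inequality for the sake of a higher average, the egalitarian rule insists on A to protect the worst-off user, and the Nash rule settles on the compromise C. This is precisely the kind of trade-off one has to weigh when choosing a fairness principle.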

Designing the right fairness principle for a new AI system and its intended domain of application represents a formidable research challenge in the field of AI. To start with, it requires acute awareness of related scholarship in the humanities and social sciences. Then, once we have identified a fairness principle of interest, we need to find a suitable mathematical encoding of that principle if we are to have any chance of creating a system that can offer formal guarantees that the principle is respected. Both of these considerations entail that we require a theory-driven approach to building and studying AI systems.

Determining the solution that is best in view of a given notion of fairness will often be a computationally demanding task, requiring sophisticated algorithms. Finally, convincing other scientists that our system respects certain fairness principles is one thing. Making the decisions taken by our AI system explainable to everyday users is yet another important challenge.

Projects within this research theme

Automated Reasoning for Economics

In this project, we develop algorithms to support economists who try to design new mechanisms for group decision making.
Responsible · Fairness · Algorithm · Theory-driven · Ongoing

Participatory Budgeting

In this project, we design and analyse voting rules for participatory budgeting, a direct democracy initiative where residents jointly decide on how public funds are spent.
Fairness · Ongoing

Explainability in Collective Decision Making

In this project, we develop methods for automatically generating explanations for why a given compromise (between disagreeing people) is the best available option.
Fairness · Explainable · Algorithm · Ongoing

Papers within this research theme

Can NLP bias measures be trusted?

van der Wal, O., Bachmann, D., Leidinger, A., van Maanen, L., Zuidema, W. & Schulz, K.

Automating the Analysis of Matching Algorithms

Endriss, U.
