Project details

Explainability in Collective Decision Making

Taking decisions in groups typically requires making compromises. Once we have settled on a given compromise, it is important to be able to justify why it constitutes a fair decision, even if not everyone got their most preferred outcome.

AI has the potential to support decision makers in generating explanations for why a given decision is the right one, in view of both the individual preferences of the people involved and the normative principles (such as “treat all voters the same!”) deemed relevant to the decision at hand. In this project, we develop methods for automatically generating explanations for why a given compromise between disagreeing people is the best available option. In a multi-year effort, we have been developing a comprehensive theory of the explainability and justifiability of decisions made by groups of people, and we have designed efficient algorithms to support the generation of explanations in practice.
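
To make the idea concrete, here is a minimal sketch of what such a justification can look like when a single normative principle is used, namely the Condorcet principle (an alternative that beats every rival in a pairwise majority contest must be chosen). The Python code and function names are purely illustrative and are not the algorithm behind this project.

```python
# Illustrative sketch (not the project's actual method): if the target
# alternative beats every other alternative in a pairwise majority
# contest, those contests form a step-by-step explanation via the
# Condorcet principle.

def pairwise_margin(profile, x, y):
    """Voters ranking x above y, minus voters ranking y above x."""
    return sum(1 if r.index(x) < r.index(y) else -1 for r in profile)

def justify_condorcet(profile, winner):
    """Return explanation steps if `winner` is the Condorcet winner of
    `profile` (a list of rankings, most preferred first), else None."""
    steps = []
    for y in set(profile[0]) - {winner}:
        margin = pairwise_margin(profile, winner, y)
        if margin <= 0:
            return None  # winner loses or ties a contest: justification fails
        steps.append(f"A majority prefers {winner} to {y} (margin {margin}).")
    steps.append(f"By the Condorcet principle, {winner} must be chosen.")
    return steps

# Three voters ranking alternatives a, b, c (most preferred first).
profile = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
for step in justify_condorcet(profile, "a"):
    print(step)
```

In general, a justification may need to combine several normative principles rather than appeal to a single one, and finding a suitable combination is a non-trivial search problem.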

A simple prototype for generating explanations in scenarios with small numbers of decision makers and alternatives is available at https://demo.illc.uva.nl/justify/.