Project details

Responsible
Bias
Theory-driven

Bias across borders: towards a social-political understanding of bias in machine learning

Though AI technology can help in many ways, it can also cause harm. One form of harm that has been widely discussed in recent years is that an AI system can be biased against certain groups or individuals. This can occur with very simple systems, such as the one used in the Dutch childcare benefits scandal, but biases have also been detected in state-of-the-art machine learning technology like ChatGPT. A great deal of effort is currently being invested in understanding, mapping, and mitigating bias in machine learning systems. As a result, AI researchers and engineers suddenly find themselves confronting complex social and ethical questions without the training needed to address them, which has led to considerable confusion and setbacks in the field. The goal of our project is to provide AI researchers with a clear conceptual framework for the notion of bias in machine learning in the context of algorithmic fairness. This framework should align with the needs of AI researchers and will highlight the normative and social-political dimensions of the problem.

Browse related blogs

Discover our researchers’ blogs—icons by each post indicate its themes.

Language models
Bias

Going beyond a mathematical investigation of bias