Responsible Artificial Intelligence

"Responsible AI" is an approach for generating and evaluating AI systems that act in a “responsible manner” (e.g., avoid being “responsible” for harms).
Responsible Plum

Browse research content

These are samples of research projects, papers, and blog posts written by our researchers.

More about Responsible AI

Have you ever heard of “Responsible AI”?

As per the dictionary, being “responsible” means at least three things:

  1. being the cause of a particular action or situation (especially a harmful or unpleasant one),
  2. having the duty of taking care of something, and
  3. having good judgment and the ability to act correctly and make decisions on your own.

Whichever sense of the adjective we choose, “responsible” fits contemporary Artificial Intelligence very well. Regarding the first sense, it is simply a matter of fact that AI systems are already responsible for a number of things. Consider, for example, the revolutionary impact that AI systems could have on the current job market, their influence on decision-making processes, the consequences for the integrity and truthfulness of information, and so on. If we focus on models that generate language or images from a prompt, the so-called generative AI (ChatGPT and DALL-E, among others), news outlets and social media have recently been flooded with reports of its responsibility for causing or fostering negative and harmful actions and behaviors. Among many other things, generative AI has been responsible for underrepresenting or misrepresenting certain communities, making biased judgments and decisions, propagating biases and stereotypes, and ignoring or flattening opinions and points of view.

As for the second sense of the adjective, there is no doubt that this technology has many duties towards those who use it, that is, its users. Given the global reach of AI and the almost limitless ways and purposes for which it can be used, those who develop and make it available must ensure, among many other things, that this technology is reliable, truthful, and consistent. It should be usable by anyone in any language, adapted to social and geopolitical contexts while remaining impartial and egalitarian, represent all (legitimate) positions, be based on free and reliable data, disclose the processes that led to a certain output or decision, and so on. As one can imagine, achieving this involves a great collective effort that goes beyond the technical work of programming these technologies. In fact, researchers with various backgrounds (computer science, linguistics, and ethics, among many others), tech companies, public and private investors and decision-makers, supranational institutions, and end users must actively collaborate to achieve this goal. In this light, while the first sense of “responsible” is already a matter of fact, the second is currently a desideratum.

Finally, the third sense of “responsible” involves being able to make good judgments, act correctly, and make independent decisions. This sense of the adjective, too, applies well to the current discussion about the purpose and capabilities of AI systems. While the concepts of AI agency and consciousness are highly debated and underlie at least two opposing lines of thought (we should pursue them vs. we should not), it is widely agreed that AI should ultimately be used for good, that is, it should aim to be useful to its users and have a positive impact on human behavior, actions, and decisions. For this to happen, those who work on the development and evaluation of these technologies, like us here at CERTAIN, must be aware of the potential and the risks they offer, and work on solutions that prioritize the well-being, growth, safety, and enrichment of their end users. Concretely, we contribute to this agenda by extensively testing the capabilities and limitations of current AI systems, investigating the internal mechanisms that guide their outputs and decisions, intervening on incorrect or biased mechanisms, and adapting outputs to the needs and background of a specific user. All of this is what we can do here and now.
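To give a concrete flavour of the first of these activities, the snippet below sketches one very simple way to probe a language model for a stereotypical association: comparing how strongly a masked language model favours different completions of the same template sentence. This is only an illustrative example, not a description of any particular CERTAIN project or of the bias measures studied in our papers; it assumes the Hugging Face transformers library, and the model name, template, and candidate words are placeholders chosen for the example.

    # Illustrative sketch: compare how strongly a masked language model favours
    # two candidate completions of the same template sentence. Model, template,
    # and candidates are placeholders chosen for this example.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    template = "The [MASK] was hired as the new engineer."
    candidates = ["man", "woman"]

    # Restrict scoring to the two candidate tokens and print their probabilities.
    for result in fill_mask(template, targets=candidates):
        print(f"{result['token_str']:>8}: p = {result['score']:.4f}")

A large gap between the two probabilities across many such templates is one (imperfect) signal of a stereotypical association; whether measures of this kind can be trusted at all is exactly the question raised in the paper “Can NLP bias measures be trusted?” listed below.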

At the beginning, I asked: have you ever heard of Responsible AI? If you got this far, your answer was probably no; in that case, I hope this post helped you learn a little more. You may now be wondering whether there are more things that AI needs to be responsible for. In fact, the answer is yes, and one of the aims of CERTAIN is precisely to discover and spread them. Stay tuned to find out more!

Projects within this research theme

Automated Reasoning for Economics

In this project, we develop algorithms to support economists who try to design new mechanisms for group decision making.
Responsible · Fairness · Algorithm · Theory-driven · Ongoing

Bias across borders: towards a social-political understanding of bias in machine learning

The goal of this project is to provide AI researchers with a clear conceptual framework for the notion of bias in machine learning in the context of algorithmic fairness.
Responsible · Bias · Theory-driven · Ongoing

Can NLP bias measures be trusted?

van der Wal, O., Bachmann, D., Leidinger, A., van Maanen, L., Zuidema, W., & Schulz, K.

Automating the Analysis of Matching Algorithms

Endriss, U.

Participatory budgeting

Improving Language Model bias measures

Explainability in Collective Decision Making

Papers within this research theme

Are LLMs classical or nonmonotonic reasoners? Lessons from generics