Project details

Trustworthy
Explainable

Explainable AI for Fraud Detection

Fraud in financial services costs billions of euros every year and undermines the well-being of society at large. On top of the financial losses, it inflicts emotional harm on thousands of victims, a cost that is hard to measure. Building fraud detection systems that are not only accurate but also explainable is therefore key to ensuring trust and transparency among regulators, business stakeholders, and customers, which makes it an important avenue for both industry and scientific research.

In this project, we study and develop next-generation AI systems for fraud detection that are not only accurate, but also explainable to different stakeholders and adaptive to certain business rules. Some of the questions we work on focus on explainability, e.g., "To what extent can we use AI systems to build explainable rules that govern how the data manifests?". Others emphasize controllability, e.g., "To what extent can we guarantee that the system detects new fraud schemes and anomalies while still following certain rules and policies?". At a more fundamental level, we explore the information-theoretic and algorithmic barriers underlying the trade-off between the size of an explanation of a system's behavior and its accuracy.
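To make the idea of "explainable rules" concrete, below is a minimal, hypothetical sketch in Python. A depth-limited decision tree (scikit-learn's DecisionTreeClassifier, used here only as a stand-in for the kinds of models the project studies) is fitted on synthetic transaction data and exported as human-readable if/then rules. The feature names, thresholds, and data are all invented for illustration; this is not the project's actual method.

# Hypothetical sketch: extract human-readable fraud rules from a shallow tree.
# Features, labels, and thresholds are synthetic and purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 5000
# Synthetic features: transaction amount and number of transactions in the last hour.
amount = rng.exponential(scale=100.0, size=n)
tx_last_hour = rng.poisson(lam=2.0, size=n)
# Synthetic label: large, rapid-fire transactions are "fraud", plus 1% label noise.
fraud = ((amount > 300) & (tx_last_hour > 4)) | (rng.random(n) < 0.01)

X = np.column_stack([amount, tx_last_hour])
# Capping the depth keeps the extracted rule set small enough to audit.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, fraud)
# Print the learned tree as explicit if/then rules.
print(export_text(clf, feature_names=["amount", "tx_last_hour"]))

Limiting the tree depth trades some accuracy for a compact, auditable rule set, which can be read as a toy instance of the explanation-size versus accuracy trade-off mentioned above.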

An example of a paper related to this project is "Explainable Fraud Detection with Deep Symbolic Classification".