“Responsible AI” is an approach to developing and evaluating AI systems so that they behave in a responsible manner (e.g., avoid causing or being responsible for harm).


“Explainable AI” is the subfield of AI concerned with providing explanations of how AI systems arrive at their predictions, decisions, or other outputs.


Theory-driven AI emphasizes the need to truly understand current AI systems, and to build future AI systems that are efficient, explainable, and trustworthy.