Speaker: Prof. Giovanni Stilo, University of L'Aquila
Graphs' Counterfactual Explainability Landscape: current state and frontiers
Abstract: Counterfactual Explanation (CE) techniques have garnered attention as a means to provide insights to users engaging with AI systems. While extensively researched in domains such as medical imaging and autonomous vehicles, Graph Counterfactual Explanation (GCE) methods have been comparatively under-explored. GCE methods generate a new graph, similar to the original one, that yields a different outcome according to the underlying predictive model. During this presentation, we take you on a journey across the GCE landscape, commencing with foundational concepts. Next, we introduce a categorization of explainers, emphasize key tools essential for initiating work in this field, explore the latest advancements, offer a visual comparison of various generative methods, and conclude with our final remarks. More info, materials, and references at: http://aiimlab.org/events/AIxIA_XAI.it_2023_Graphs_Counterfactual_Explainability_Landscape_current_state_frontiers.html
Bio: Prof. Giovanni Stilo is an associate professor of Computer Science and Data Science at the University of L'Aquila, where he leads the Master's Degree in Data Science and is part of the Artificial Intelligence and Information Mining Collective (http://aiimlab.org/). He received his PhD in Computer Science in 2013, and in 2014 he was a visiting researcher at Yahoo! Labs in Barcelona. His research interests are related to machine learning, data mining, and artificial intelligence, with a special interest in (but not limited to) trustworthiness aspects such as Bias, Fairness, and Explainability. Specifically, he is the head of the GRETEL project (http://aiimlab.org/code.html), devoted to empowering research in the Graph Counterfactual Explainability field. He has co-organized a long series (2020-2023) of top-tier international workshops (http://aiimlab.org/events.html) and journal special issues focused on Bias and Fairness in Search and Recommendation. He serves on the editorial boards of IEEE, ACM, Springer, and Elsevier journals such as TITS, TKDE, DMKD, AI, KAIS, and AIIM.
He is responsible for the activities on new technologies for data collection, preparation, and analysis within the Territory Aperti project; coordinator of the activities on "Responsible Data Science and Training" of the PNRR SoBigData.it project (http://sobigdata.eu/); and PI of the "FAIR-EDU: Promote FAIRness in EDUcation Institutions" project.
(15 min. presentation + 5 min. discussion per paper)
Caterina Borzillo, Alessio Ragno and Roberto Capobianco - Understanding Deep RL agent decisions: a novel interpretable approach with trainable prototypes
Soumick Chatterjee, Arnab Das, Rupali Khatun and Andreas Nürnberger - Unboxing the black-box of deep learning based reconstruction of undersampled MRIs
Andrea Tocchetti, Jie Yang and Marco Brambilla - Rationale Trees: Towards a Formalization of Human Knowledge for Explainable Natural Language Processing
Muhammad Suffian, Ilia Stepin, Jose Maria Alonso-Moral and Alessandro Bogliolo - Investigating Human-Centered Perspectives in Explainable Artificial Intelligence
10:30 - 11:00 - Coffee Break
11:00 - 11:35 - Invited Talk
Giovanni Stilo, University of L'Aquila - Graphs' Counterfactual Explainability Landscape: current state and frontiers
11:35 - 12:55 - Session 2
(15 min. presentation + 5 min. discussion per paper)
Simona Colucci, Tommaso Di Noia, Francesco M. Donini, Claudio Pomo and Eugenio Di Sciascio - Irrelevant Explanations: a logical formalization and a case study
Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica and Andrea Seveso - Presenting MERLIN: A Tool For Model-Contrastive Explanations Through Symbolic Reasoning
Andrea Apicella, Salvatore Giugliano, Francesco Isgro and Roberto Prevete - SHAP-based explanations to improve classification systems
Andrea Colombo, Laura Fiorenza and Sofia Mongardi - A Flexible Metric-Based Approach to Assess Neural Network Interpretability in Image Classification