1) Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica and Andrea Seveso - Presenting MERLIN: A Tool For Model-Contrastive Explanations Through Symbolic Reasoning
2) Caterina Borzillo, Alessio Ragno and Roberto Capobianco - Understanding Deep RL agent decisions: a novel interpretable approach with trainable prototypes
3) Andrea Tocchetti, Jie Yang and Marco Brambilla - Rationale Trees: Towards a Formalization of Human Knowledge for Explainable Natural Language Processing
4) Simona Colucci, Tommaso Di Noia, Francesco M. Donini, Claudio Pomo and Eugenio Di Sciascio - Irrelevant Explanations: a logical formalization and a case study
5) Soumick Chatterjee, Arnab Das, Rupali Khatun and Andreas Nürnberger - Unboxing the black-box of deep learning based reconstruction of undersampled MRIs
6) Andrea Colombo, Laura Fiorenza and Sofia Mongardi - A Flexible Metric-Based Approach to Assess Neural Network Interpretability in Image Classification
7) Andrea Apicella, Salvatore Giugliano, Francesco Isgrò and Roberto Prevete - SHAP-based explanations to improve classification systems
8) Muhammad Suffian, Ilia Stepin, José María Alonso-Moral and Alessandro Bogliolo - Investigating Human-Centered Perspectives in Explainable Artificial Intelligence