ECML PKDD International Workshop on

eXplainable Knowledge Discovery in Data Mining

Monday 18th September 2023

Call for Papers

In the past decade, machine learning based decision systems have been adopted in a wide range of application domains, such as credit scoring, insurance risk assessment, and health monitoring, in which accuracy is of the utmost importance. Although these systems have immense potential to improve decisions in different fields, their use presents ethical and legal risks, such as codifying biases, jeopardizing transparency and privacy, and reducing accountability. These risks arise in many applications and are made even more serious and subtle by the opacity of recent decision support systems, which are often complex and whose internal logic is usually inaccessible to humans.

Nowadays, most Artificial Intelligence (AI) systems are based on machine learning algorithms. The relevance of and need for ethics in AI are supported and highlighted by various initiatives from the research community that provide recommendations and guidelines for making AI-based decision systems explainable and compliant with legal and ethical requirements. These include the EU's General Data Protection Regulation (GDPR), which introduces, to some extent, a right for all individuals to obtain “meaningful explanations of the logic involved” when automated decision making takes place; the “ACM Statement on Algorithmic Transparency and Accountability”; Informatics Europe's “European Recommendations on Machine-Learned Automated Decision Making”; and the “Ethics Guidelines for Trustworthy AI” provided by the EU High-Level Expert Group on AI.

The challenge of designing and developing trustworthy AI-based decision systems is still open and requires a joint effort across the technical, legal, sociological and ethical domains.

The purpose of XKDD, eXplainable Knowledge Discovery in Data Mining, is to encourage principled research that will lead to the advancement of explainable, transparent, ethical and fair data mining and machine learning. This year the workshop will also seek submissions addressing important open issues in specific fields related to eXplainable AI (XAI), such as XAI for a more social and responsible AI, XAI as a tool to align AI with human values, XAI for outlier and anomaly detection, quantitative and qualitative evaluation of XAI approaches, and XAI case studies. The workshop seeks top-quality submissions related to ethical, fair, explainable and transparent data mining and machine learning approaches. Papers should present research results in any of the topics of interest for the workshop, as well as tools and promising preliminary ideas. XKDD welcomes contributions from researchers in academia and industry working on these challenges, primarily from a technical point of view but also from a legal, ethical or sociological perspective.

Topics of interest include, but are not limited to: XAI for fairness checking approaches, XAI for privacy-preserving systems, XAI for federated learning, XAI for time series and graph-based approaches, XAI for visualization, XAI in human-machine interaction, benchmarking of XAI methods, and XAI case studies.

Submissions focusing on these important open issues related to XAI are particularly welcome.

The call for papers can be downloaded here.

Submission

Electronic submissions will be handled via CMT.

Papers must be written in English and formatted according to the Springer Lecture Notes in Computer Science (LNCS) guidelines following the style of the main conference (format).

The maximum length of either research or position papers is 14 pages, references excluded. Overlength papers will be rejected without review (papers with smaller page margins or font sizes than those specified in the author instructions and set in the style files will also be treated as overlength).

Authors who submit their work to XKDD 2023 commit to presenting their paper at the workshop in case of acceptance. XKDD 2023 considers the author list submitted with the paper as final: no additions or deletions to this list may be made after paper submission, either during the review period or, in case of acceptance, at the final camera-ready stage.

A condition for inclusion in the post-proceedings is that at least one of the co-authors has presented the paper at the workshop. Pre-proceedings will be available online before the workshop.

All accepted papers will be published as post-proceedings in the Springer Lecture Notes in Computer Science (LNCS) series.

All papers for XKDD 2023 must be submitted through the online submission system at CMT.

Although ECML PKDD 2023 will provide a streaming service in all rooms and satellite events, this year's organization aims to maximize engagement and physical presence in Turin. The streaming service, with its associated remote registration fee, is intended only for non-presenting attendees; the goal is to avoid sparsely attended rooms, with events happening almost exclusively "virtually". Therefore, our workshop will adopt the same rules as the main event: at least one author of each accepted paper must have a full registration and be in Turin to present the paper. Papers without a full registration or an in-person presentation will not be included in the post-workshop Springer proceedings.

Important Dates

  • Paper Submission deadline: June 27, 2023
  • Accept/Reject Notification: July 19, 2023
  • Camera-ready deadline: July 31, 2023
  • Workshop: September 18, 2023

Organization

Program Chairs

Invited Speakers

Andreas Theissler

Professor at Aalen University of Applied Sciences, Aalen, Germany

Explainable AI: how far we have come and what’s left for us to do

The research field of explainable AI has gained importance over recent years, as interpretable and trustworthy AI is desired in many cases. This invited talk will start by briefly surveying and categorizing the field. As its main contribution, the talk will focus on discussing open issues and research questions: some in accordance with previous literature, others raised by taking a higher-level perspective. The aim is to inspire discussions, deduce new ideas, and identify research topics. Overall, the talk seeks to contribute to the workshop by showing open research fields, in the hope of forming a research landscape.
Andreas Theissler is a full professor at Aalen University of Applied Sciences in Germany, where he researches and lectures on different aspects of Machine Learning and Human-Centered AI. He received his PhD from Brunel University London in 2014; prior to that he studied Software Engineering and worked in different Data Science positions in industry. He is interested in the interplay of humans and machine learning, for example in the questions of how we can evaluate, understand, improve, or enable machine learning by incorporating expert knowledge. In addition, he has worked on machine learning in applications, e.g. anomaly detection for automotive data. He has published papers on interpretability, on efficient data labeling combining ML models and user interaction, on anomaly detection, and on machine learning in applications.

Stefano Teso

Tenure-track Assistant Professor at the University of Trento, Trento, Italy

The Blessing and Curse of Concepts in Explainable AI and Interactive Machine Learning

Explanations have gained increasing interest in the AI and Machine Learning (ML) communities as a way to improve model transparency and to allow users to form a mental model of a trained ML model. However, explanations can go beyond this one-way communication and serve as a mechanism to elicit user control: once users understand, they can provide feedback. The goal of this talk is to give an accessible overview of research where explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones, and of the new challenges this brings forth, with a focus on the problem of aligning humans and machines in the context of human-interpretable representation learning.
Stefano Teso is a tenure-track Assistant Professor at the University of Trento, whose research lies at the intersection of explainable AI, interactive machine learning, neuro-symbolic AI, representation learning, constraint learning, and preference elicitation. Stefano has co-organized tutorials and workshops on (explainable) interactive machine learning and constraint learning at major international conferences (AAAI, IJCAI, ECML-PKDD). He is a regular PC/SPC member at ICML, NeurIPS, ICLR, UAI, AAAI, IJCAI and ECML-PKDD, and an AC at AISTATS.

Program Committee

  • Miguel Couceiro, Université de Lorraine, France
  • Alessandro Castelnovo, Intesa Sanpaolo, Italy
  • Riccardo Crupi, Intesa Sanpaolo, Italy
  • Josep Domingo-Ferrer, Universitat Rovira i Virgili, Spain
  • Françoise Fessant, Orange Labs, France
  • Salvatore Greco, Politecnico di Torino, Italy
  • Andreas Holzinger, Medical University of Graz, Austria
  • Paulo Lisboa, Liverpool John Moores University, UK
  • Marcin Luckner, Warsaw University of Technology, Poland
  • Amedeo Napoli, CNRS, France
  • John Mollas, Aristotle University of Thessaloniki, Greece
  • Enea Parimbelli, University of Pavia, Italy
  • Francesca Pratesi, ISTI-CNR, Italy
  • Roberto Prevete, University of Naples, Italy
  • Antonio Rago, Imperial College London, UK
  • Eliana Pastor, Politecnico di Torino, Italy
  • Jan Ramon, Inria, France
  • Xavier Renard, AXA, France
  • Mahtab Sarvmaili, Dalhousie University, Canada
  • Udo Schlegel, University of Konstanz, Germany
  • Mattia Setzu, University of Pisa, Italy
  • Dominik Ślęzak, University of Warsaw, Poland
  • Myra Spiliopoulou, Otto von Guericke University Magdeburg, Germany
  • Francesco Spinnato, Scuola Normale Superiore, Italy
  • Grigorios Tsoumakas, Aristotle University of Thessaloniki, Greece
  • Genoveva Vargas-Solar, CNRS, LIRIS, France
  • Albrecht Zimmermann, Université de Caen, France

Program

Welcome, general overview, and presentation of the supporting projects: XAI, HumanE-AI-Net, SoBigData++, TAILOR, SAI, PNRR-FAIR, PNRR-SoBigData.it

Morning

First Keynote Talk: Andreas Theissler

Explainable AI: how far we have come and what’s left for us to do.

Research Paper Presentations (15 min + 3 min Q&A)


Matching the expert’s knowledge via a counterfactual-based feature importance measure.

Antonio Luca Alfeo, Mario G.C.A. Cimino, Guido Gagliardi.


Wave Top-k Random-d Family Search: How to Guide an Expert in a Structured Pattern Space.

Etienne P. Lehembre, Bruno Crémilleux, Albrecht Zimmermann, Bertrand Cuissart, Abdelkader Ouali.

Coffee Break (30 minutes)

Research Paper Presentations (15 min + 3 min Q&A)


FIPER: a Visual-based Explanation Combining Rules and Feature Importance.

Eleonora Cappuccio, Daniele Fadda, Rosa Lanzilotti, Salvatore Rinzivillo.


Unexplainable Explanations: Towards Interpreting tSNE and UMAP Embeddings.

Andrew Draganov, Simon B. Dohn.


Diffusion-based Visual Counterfactual Explanations - Towards Systematic Quantitative Evaluation.

Philipp Väth, Alexander M. Frühwald, Benjamin Paaßen, Magda Gregorova.


Using Graph Neural Networks for the Detection and Explanation of Network Intrusions.

Ahmed-Rafik-El Mehdi Baahmed, Giuseppina Andresini, Céline Robardet, Annalisa Appice.


Explaining Fatigue in Runners Using Time Series Analysis on Wearable Sensor Data.

Bahavathy Kathirgamanathan, Thu Trang Nguyen, Brian Caulfield, Georgiana Ifrim, Pádraig Cunningham.


Lunch Break

Afternoon

Second Keynote Talk: Stefano Teso

The Blessing and Curse of Concepts in Explainable AI and Interactive Machine Learning

Research Paper Presentations (15 min + 3 min Q&A)


From Black Box to Glass Box: Evaluating the Faithfulness of Process Predictions with GCNNs.

Myriam Schaschek, Fabian Gwinner, Benedikt Hein, Axel Winkelmann.


Exploring gender bias in misclassification with clustering and local explanations.

Aurora Ramírez.


Coffee Break (30 minutes)

Research Paper Presentations (15 min + 3 min Q&A)


Game Theoretic Explanations for Graph Neural Networks.

Ataollah Kamal, Elouan Vincent, Marc Plantevit, Céline Robardet.


A New Class of Intelligible Models for Tabular Learning.

Kodjo Mawuena Amekoe, Hanane Azzag, Mustapha Lebbah, Zaineb Chelly Dagdia, Grégoire Jaffre.


Are Generative-based Graph Counterfactual Explainers Worth It?

Mario A. Prado Romero, Bardh Prenkaj, Giovanni Stilo.


Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem.

Sofie E. Goethals, David Martens, Theodoros Evgeniou.


Concluding Remarks

Venue

The event will take place at the ECML-PKDD 2023 Conference at the Officine Grandi Riparazioni (OGR), Room 7i.


Additional information about the location can be found on the main conference web page: ECML-PKDD 2023

ECML-PKDD 2023 venue plans: TBC

Partners

This workshop is partially supported by the European Community H2020 Program under the research and innovation programme, grant agreement 834756 XAI, Science and Technology for the Explanation of AI Decision Making.

This workshop is partially supported by the European Community H2020 Program under the funding scheme FET Flagship Project Proposal, grant agreement 952026 HumanE-AI-Net.

This workshop is partially supported by the European Community H2020 Program under the funding scheme INFRAIA-2019-1: Research Infrastructures, grant agreement 871042 SoBigData++.

This workshop is partially supported by the European Community H2020 Program under the research and innovation programme, grant agreement 952215 TAILOR.

This workshop is partially supported by the European Community H2020 Program through the CHIST-ERA project SAI (grant CHIST-ERA-19-XAI-010), funded by MUR (N. not yet available), FWF (N. I 5205), EPSRC (N. EP/V055712/1), NCN (N. 2020/02/Y/ST6/00064), ETAg (N. SLTAT21096) and BNSF (N. KP-06-AOO2/5). SAI.

This workshop is partially supported by the European Community NextGenerationEU programme under the PNRR-PE-AI funding scheme, project FAIR (Future Artificial Intelligence Research). FAIR.

The XKDD 2023 event was organised as part of the SoBigData.it project (Prot. IR0000013, Call n. 3264 of 28/12/2021) initiatives, aimed at training new users and communities in the usage of the research infrastructure (SoBigData.eu). “SoBigData.it receives funding from European Union – NextGenerationEU – National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) – Project: “SoBigData.it – Strengthening the Italian RI for Social Mining and Big Data Analytics” – Prot. IR0000013 – Avviso n. 3264 del 28/12/2021.” SoBigData.it.

Contacts

All inquiries should be sent to:

francesca.naretto@di.unipi.it

riccardo.guidotti@unipi.it