EXPLAINABLE AND INTERPRETABLE DEEP LEARNING (EX-DL) APPROACHES TO ADDRESS HEALTH CHALLENGES

IEEE WCCI 2022

Session ID: IJCNN-SS-16, Explainable and Interpretable Deep Learning (EX-DL) Approaches to address Health Challenges

Keywords: Deep Learning, Healthcare applications, Biomedical Machine Learning, Explainable Neural Networks

Target: researchers in the field, practitioners from industry and start-ups, clinicians and health professionals

Abstract:

As the recent COVID-19 pandemic has shown, healthcare is one of the key sectors of the global economy, especially in Europe. Any improvement in healthcare systems has a high impact on the welfare of society.
The use of technology in health is a way to modernize healthcare and make it more efficient, thus benefiting both individual citizens and public budgets. Health systems continuously generate a wide range of biomedical data. These datasets represent a formidable challenge for deep learning (DL) systems, since knowledge extraction is typically carried out through interpretation by experts.
To automate and accelerate the analysis of health data, scientific discovery and innovation in health are expected to build on and exploit the growing advances in DL approaches, which rely on data-driven decisions and interpretation of data. For many biomedical applications, applying state-of-the-art DL techniques to large and complex biomedical datasets yields novel ways of diagnosing, monitoring and treating diseases.
The aim of this Special Session is, firstly, to collect papers describing innovative approaches that use DL to improve healthcare solutions. Secondly, since the practical and clinical benefits of exploiting these DL techniques depend on the capability to explain and interpret the decisions made by machines, we also aim to solicit works on this subject, which we believe will serve as a timely resource for future researchers in this emerging field.

Description:

As of today, Machine/Deep Learning systems represent the most pervasive aspect of Artificial Intelligence (AI) technology in global research, development, industry, finance, government and public administration. However, the main criticism of these approaches remains the lack of transparency and interpretability in how they solve complex tasks such as real-time modeling, classification, regression, prediction, identification and control. The development of more explainable and interpretable DL (EX-DL) approaches will enable rapid scaling and deployment of intelligent and autonomous systems across many disciplines.
There is growing research across the world aimed at making future EX-DL systems more transparent, flexible, fair, accountable, reproducible, verifiable and readily adaptable to different real-world applications. Many of these efforts are underpinned by collaborative frameworks exploiting a range of cutting-edge technologies, including, amongst others, wireless sensor networks, the Internet-of-Things (IoT), 5G and beyond communications, nano-materials, robotics, cyber security and quantum computing.
Such endeavours will transform the development of the smart and secure EX-DL based systems of tomorrow, with each design stage, albeit complex, expected to be centred on human beings. This human-centred focus will ensure the development of robust and agile intelligent and autonomous tools for the benefit and advancement of humanity. DL is an increasingly transversal discipline that is truly multi- and trans-disciplinary.
To meet rapidly evolving societal needs and expectations, particularly those related to healthcare, non-technical aspects, such as social, ethical and privacy considerations, must also be holistically addressed by researchers and innovators, starting at the design level.
Consequently, this timely Special Session is proposed, with the aim of stimulating and promoting the development of future EX-DL systems underpinned by holistic, human-centred approaches.

Topics:

We will solicit original and highly innovative contributions, addressing a range of relevant topics, including (but not limited to):
  • Explainable vs. Interpretable DL systems
  • Black-box to gray-box DL systems
  • DL systems incorporating human-interpretable user interfaces
  • Fuzzy logic-enhanced approaches to explainability
  • Transparency in design of Deep Learning (DL) based systems/models
  • Transparency in data collection, bias mitigation and management of big data
  • High-level features and latent variable interpretation
  • Dimensionality reduction and space projection for explainability
  • Integrating social and ethical aspects of explainability in AI/DL systems
  • Integrating explainability into existing systems
  • Design of novel explanation modalities
  • Theoretical aspects of explanation and interpretability
  • Ethical impact of EX-DL in healthcare
  • Bias, fairness, explainability, accountability, responsibility and risk in DL systems
  • EX-DL based applications in health, bioinformatics, hearing and assistive technologies, brain-computer interfaces (BCI), risk assessment, and other real-world healthcare applications
  • Integration of the Internet-of-Things (IoT) with DL
  • ML/DL explainability in Neuroscience
  • Explainable and interpretable ML/DL based Natural Language Processing (NLP) and social media analysis for healthcare applications

Important dates:

  • January 31, 2022 – Paper submission (STRICT DEADLINE!)
  • April 26, 2022 – Notification of acceptance
  • May 23, 2022 – Final paper submission
  • July 18-23, 2022 – Conference in Padua, Italy

Paper submission:

Papers submitted to this Special Session are reviewed according to the same rules (i.e. double-blind) as submissions to the regular sessions of WCCI 2022. Authors who submit papers to this session are invited to indicate it in the submission form. Submissions to special sessions follow the same format, instructions, deadlines and procedures as all other papers.
For further information and news, please refer to the list of accepted special sessions on the WCCI 2022 website: https://wcci2022.org/accepted-special-sessions/
Please select the Special Session as the main research topic of your submission: "IJCNN-SS-16: Explainable and Interpretable Deep Learning (EX-DL) Approaches to address Health Challenges"
Organizers:

Francesco Carlo Morabito, University Mediterranea of Reggio Calabria, Italy
Roberto Tagliaferri, University of Salerno, Italy
Amir Hussain, Edinburgh Napier University, Scotland, UK
Mufti Mahmud, Nottingham Trent University, UK