Explainable and Trustworthy Deep Learning for MRI-Based Auxiliary Diagnosis of Alzheimer’s Disease
Abstract
Alzheimer’s disease (AD) and its prodromal stage, mild cognitive impairment (MCI), are characterized by a long preclinical phase, so early identification is crucial for timely intervention and prognostic assessment. Structural MRI has become an important imaging modality for the auxiliary diagnosis of AD owing to its non-invasive nature, repeatability, and broad clinical accessibility. Deep learning has made significant progress in AD/MCI/CN classification, disease staging, differential diagnosis, and clinical risk prediction. However, the “black-box” nature of these models, their limited cross-center generalization, and the lack of trustworthy explanations severely constrain clinical translation. This review is organized around four threads: the model pipeline, explanation methods, explanation evaluation, and generalization and deployment. Within this framework, it systematically summarizes representative directions of MRI-based deep learning for AD-related tasks, outlines the principal approaches in explainable artificial intelligence (XAI), namely saliency attribution, counterfactual explanation, and inherently interpretable architectures, and identifies the key dimensions of trustworthiness assessment: fidelity, stability, anatomical/pathological plausibility, and human-factor usability. Methodological risks, including real-world data challenges, domain generalization, cross-scanner harmonization, and data leakage, are discussed in depth. The key contributions are: (1) emphasizing that “explainability ≠ heatmap visualization” and proposing an explanation evaluation framework centered on fidelity and stability; (2) integrating risks such as shortcut learning, data leakage, and cross-scanner differences into a unified discussion of explainability and generalization; and (3) summarizing clinically translatable pathways from the perspectives of real-world application and privacy compliance. MRI-based deep learning is shifting from pursuing accuracy alone toward trustworthy explanation, cross-center generalization, and clinical usability. Promising future directions include standardized explanation evaluation, domain generalization and harmonization techniques tailored to real-world distribution shifts, and human-interpretable explanations validated in multi-center real-world studies.
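
The first contribution above centers explanation evaluation on fidelity rather than visual appeal. As one illustration of what a fidelity metric can look like in practice, the sketch below implements a generic deletion-style check: the most salient voxels are masked first, and a faithful saliency map should drive the model’s confidence down quickly, yielding a low deletion score. This is a minimal sketch only; `deletion_fidelity`, `predict`, and all parameters are hypothetical and are not drawn from the review itself.

```python
import numpy as np

def deletion_fidelity(predict, image, saliency, steps=20, baseline=0.0):
    """Deletion-style fidelity check for a saliency explanation.

    Progressively masks the most salient voxels and records how quickly the
    model's confidence drops. A faithful attribution map produces a steep
    early drop, i.e. a LOW average score.

    predict  : callable mapping a volume (numpy array) to a class probability
    image    : MRI volume as a numpy array
    saliency : attribution map with the same shape as `image`
    """
    order = np.argsort(saliency.ravel())[::-1]      # most salient voxels first
    masked = image.astype(float).ravel()            # working copy to mask in place
    scores = [predict(image)]                       # confidence on the intact volume
    chunk = max(1, order.size // steps)
    for i in range(steps):
        masked[order[i * chunk:(i + 1) * chunk]] = baseline
        scores.append(predict(masked.reshape(image.shape)))
    return float(np.mean(scores))                   # crude area under the deletion curve

# Toy usage with a random volume and a linear "model" whose weights are,
# by construction, a faithful attribution map.
rng = np.random.default_rng(0)
vol = rng.random((8, 8, 8))
w = rng.random((8, 8, 8))
predict = lambda x: float(1 / (1 + np.exp(-((x * w).sum() - w.sum() / 2))))
print(deletion_fidelity(predict, vol, saliency=w))
```

A stability check follows the same template: perturb the input slightly, recompute the saliency map, and measure how much the explanation changes.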
Article information
Journal
Journal of Medical and Health Studies
Volume (Issue)
7 (5)
Pages
100-104
Published
Copyright
Copyright (c) 2026, licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Open access

This work is licensed under a Creative Commons Attribution 4.0 International License.