Research Article

Explainable and Trustworthy Deep Learning for MRI-Based Auxiliary Diagnosis of Alzheimer’s Disease

Authors

  • Mingxuan Zhang, Sino-British Joint College, China Medical University, Shenyang 110000, China

Abstract

Alzheimer’s disease (AD) and its prodromal stage, mild cognitive impairment (MCI), are characterized by a long preclinical phase, making early identification crucial for timely intervention and prognostic assessment. Structural MRI has become an important imaging modality for the auxiliary diagnosis of AD owing to its non-invasive nature, repeatability, and broad clinical accessibility. Deep learning has made significant progress in AD/MCI/CN classification, staging, differential diagnosis, and clinical risk prediction. However, the “black-box” nature of these models, insufficient cross-center generalization, and the lack of trustworthy explanations severely limit their clinical translation. This review is organized around four main threads (model pipeline, explanation methods, explanation evaluation, and generalization and deployment) to systematically summarize representative directions of MRI-based deep learning in AD-related tasks. It also outlines the primary pathways of explainable artificial intelligence (XAI), namely saliency attribution, counterfactual explanation, and inherently interpretable architectures, and the key dimensions of trustworthiness assessment: fidelity, stability, anatomical/pathological plausibility, and human-factor usability. Methodological risks such as real-world data challenges, domain generalization, cross-scanner harmonization, and data leakage are discussed in depth. Key contributions include: (1) emphasizing that “explainability ≠ heatmap visualization” and proposing an explanation evaluation framework centered on fidelity and stability; (2) integrating risks such as shortcut learning, data leakage, and cross-scanner differences into a unified discussion of explainability and generalization; and (3) summarizing clinically translatable pathways from the perspectives of real-world application and privacy compliance.
MRI-based deep learning is shifting from “pursuing accuracy” toward “trustworthy explanation + cross-center generalization + clinical usability.” Promising future directions include standardized explanation evaluation, domain generalization and harmonization techniques tailored to real-world distribution shifts, and human-interpretable explanations validated through multi-center real-world studies.
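The fidelity dimension mentioned above is often operationalized with deletion-style tests: if an explanation is faithful, removing the voxels it marks as most important should rapidly degrade the model's prediction. The following is a minimal, self-contained sketch of such a deletion metric; the function name `deletion_fidelity`, the toy 2D "model", and all parameter choices are illustrative assumptions, not the method of any specific paper cited in this review.

```python
import numpy as np

def deletion_fidelity(model, x, saliency, steps=10, baseline=0.0):
    """Deletion-based fidelity score (illustrative sketch).

    Masks the most salient voxels first and records the model's score
    after each masking step. A faithful saliency map produces a steep
    early drop, i.e. a LOW mean score over the deletion trajectory.
    """
    order = np.argsort(saliency.ravel())[::-1]   # most salient voxels first
    x_flat = x.ravel().copy()                    # work on a copy; x is untouched
    scores = [model(x_flat.reshape(x.shape))]    # score on the unmasked input
    chunk = max(1, x_flat.size // steps)
    for i in range(0, x_flat.size, chunk):
        x_flat[order[i:i + chunk]] = baseline    # delete the next salient chunk
        scores.append(model(x_flat.reshape(x.shape)))
    return float(np.mean(scores))                # mean of the deletion curve

# Toy check: a "model" that only reads the top-left block. A saliency map
# that highlights exactly that block should score lower (more faithful)
# than one that highlights everything else.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
model = lambda img: float(img[:4, :4].sum())     # depends only on top-left block
good_sal = np.zeros((8, 8)); good_sal[:4, :4] = 1.0
bad_sal = 1.0 - good_sal
assert deletion_fidelity(model, x, good_sal) < deletion_fidelity(model, x, bad_sal)
```

In practice the same idea extends to 3D MRI volumes and trained networks, and is typically paired with an insertion curve and a stability check (explanation similarity under small input perturbations) to cover the fidelity and stability dimensions together.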

Article information

Journal

Journal of Medical and Health Studies

Volume (Issue)

7 (5)

Pages

100-104

Published

2026-03-31

How to Cite

Mingxuan Zhang. (2026). Explainable and Trustworthy Deep Learning for MRI-Based Auxiliary Diagnosis of Alzheimer’s Disease. Journal of Medical and Health Studies, 7(5), 100-104. https://doi.org/10.32996/jmhs.2026.7.5.13


Keywords:

Alzheimer’s disease; structural MRI; deep learning; explainable artificial intelligence; trustworthiness assessment; generalization and real-world application