Research Article

Integrating Deep Learning and Interpretable Regression Models for Transparent Decision Support in Healthcare Diagnostics

Authors

  • Md Murshid Reja Sweet, Department of Management Science and Quantitative Methods, Gannon University, USA
  • Md Parvez Ahmed, Master of Science in Information Technology, Washington University of Science and Technology, USA
  • Salma Akter, College of Nursing, Wayne State University, USA
  • Sanjida Akter Tisha, Master of Science in Information Technology, Washington University of Science and Technology, USA

Abstract

Deep learning models have demonstrated exceptional predictive capabilities in healthcare diagnostics, yet their black-box nature limits clinical adoption due to a lack of interpretability and trust. This study addresses that limitation by developing a hybrid decision-support framework that integrates deep representation learning with interpretable regression modeling. Using the MIMIC-IV dataset, sourced from U.S. intensive care units and comprising patient demographics, vital signs, and laboratory data, we train a deep neural network to learn 128-dimensional patient embeddings that capture underlying physiological patterns. These embeddings are then used as inputs to interpretable regression models (logistic regression and generalized additive models) to predict hospital mortality while maintaining transparency. SHAP-based interpretability analysis quantifies and visualizes the contribution of each embedding dimension and clinical feature to model predictions. Experimental results show that the hybrid model achieves competitive performance relative to standalone deep models while providing clear feature-level explanations through regression coefficients and SHAP importance rankings. The findings demonstrate that deep-interpretable hybrid architectures can bridge the performance–explainability divide, offering a viable pathway for deploying transparent, trustworthy AI systems in clinical diagnostics. This integration not only enhances predictive reliability but also strengthens clinician confidence through evidence-based, interpretable decision support.
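The pipeline the abstract describes — a deep encoder producing patient embeddings that then feed an interpretable regression head — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, the fixed random-projection "encoder" (standing in for a trained neural network's penultimate layer), and all dimensions except the 128-dimensional embedding size are assumptions for the sake of a runnable example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for MIMIC-IV tabular features
# (demographics, vital signs, laboratory values).
n_patients, n_features, embed_dim = 1000, 20, 128
X = rng.normal(size=(n_patients, n_features))

# Hypothetical mortality labels driven by a couple of features plus noise.
logits = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_patients)
y = (logits > 0).astype(int)

# Stand-in for the trained deep encoder: in the paper this would be a
# neural network mapping raw features to a 128-dimensional patient
# embedding; here a fixed random projection with a tanh nonlinearity
# plays that role so the example stays self-contained.
W = rng.normal(size=(n_features, embed_dim)) / np.sqrt(n_features)
Z = np.tanh(X @ W)

# Interpretable head: logistic regression fit on the embeddings.
# Its coefficients give a transparent, per-dimension contribution to
# the mortality logit (the same quantities SHAP would attribute).
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)

print(f"held-out accuracy: {clf.score(Z_te, y_te):.3f}")
print("most influential embedding dims:",
      np.argsort(-np.abs(clf.coef_[0]))[:5])
```

In the full framework, the random projection would be replaced by the trained network's embedding layer, and a SHAP explainer (or GAM shape functions) would be applied on top of the fitted head to rank embedding dimensions and clinical features by their contribution to each prediction.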

Article information

Journal

Journal of Medical and Health Studies

Volume (Issue)

6 (5)

Pages

17-38

Published

2025-10-11

How to Cite

Sweet, M. M. R., Ahmed, M. P., Akter, S., & Tisha, S. A. (2025). Integrating Deep Learning and Interpretable Regression Models for Transparent Decision Support in Healthcare Diagnostics. Journal of Medical and Health Studies, 6(5), 17-38. https://doi.org/10.32996/jmhs.2025.6.5.4

Downloads

Views: 35
Downloads: 11

Keywords:

Explainable AI, Deep Learning, Healthcare Diagnostics, MIMIC-IV, Interpretable Models, SHAP, Logistic Regression, Transparency