Integrating Deep Learning and Interpretable Regression Models for Transparent Decision Support in Healthcare Diagnostics
Abstract
Deep learning models have demonstrated exceptional predictive capabilities in healthcare diagnostics, yet their black-box nature limits clinical adoption due to a lack of interpretability and trust. This study addresses this limitation by developing a hybrid decision-support framework that integrates deep representation learning with interpretable regression modeling. Using the MIMIC-IV dataset, sourced from U.S. intensive care units and comprising patient demographics, vital signs, and laboratory data, we train a deep neural network to learn 128-dimensional patient embeddings that capture underlying physiological patterns. These embeddings are then used as inputs to interpretable regression models, namely Logistic Regression and Generalized Additive Models, to predict hospital mortality while maintaining transparency. SHAP-based interpretability analysis is employed to quantify and visualize the contribution of each embedding dimension and clinical feature to model predictions. Experimental results show that the hybrid model achieves competitive performance relative to standalone deep models, while providing clear feature-level explanations through regression coefficients and SHAP importance rankings. The findings demonstrate that deep–interpretable hybrid architectures can bridge the performance–explainability divide, offering a viable pathway for deploying transparent, trustworthy AI systems in clinical diagnostics. This integration not only enhances predictive reliability but also strengthens clinician confidence through evidence-based, interpretable decision support.
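The abstract describes a two-stage pipeline: a deep encoder that produces 128-dimensional patient embeddings, an interpretable regression model fitted on those embeddings to predict hospital mortality, and SHAP attributions over the embedding dimensions. The sketch below illustrates one plausible realization of that pipeline, assuming PyTorch for the encoder, scikit-learn's LogisticRegression for the interpretable stage, and the shap library's LinearExplainer for attributions. The class and function names (EmbeddingNet, train_encoder, hybrid_predict_and_explain), the network depth, and all hyperparameters are illustrative assumptions, not the authors' implementation; the GAM stage and MIMIC-IV preprocessing are omitted.

```python
# Minimal sketch of the hybrid deep-embedding + interpretable-regression pipeline.
# Assumptions: X is an (n_patients x n_features) standardized feature matrix and
# y holds binary mortality labels, both already extracted from MIMIC-IV.
import numpy as np
import torch
import torch.nn as nn
import shap
from sklearn.linear_model import LogisticRegression


class EmbeddingNet(nn.Module):
    """Deep encoder mapping raw clinical features to 128-d patient embeddings."""

    def __init__(self, n_features: int, emb_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, emb_dim), nn.ReLU(),
        )
        # Mortality logit head, used only to supervise the embedding during training.
        self.head = nn.Linear(emb_dim, 1)

    def forward(self, x):
        z = self.encoder(x)
        return self.head(z), z


def train_encoder(X: np.ndarray, y: np.ndarray, epochs: int = 20) -> EmbeddingNet:
    """Train the encoder end-to-end on the mortality objective (full-batch for brevity)."""
    net = EmbeddingNet(X.shape[1])
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    xb = torch.tensor(X, dtype=torch.float32)
    yb = torch.tensor(y, dtype=torch.float32).unsqueeze(1)
    for _ in range(epochs):
        opt.zero_grad()
        logits, _ = net(xb)
        loss = loss_fn(logits, yb)
        loss.backward()
        opt.step()
    return net


def hybrid_predict_and_explain(X: np.ndarray, y: np.ndarray):
    """Fit the interpretable stage on learned embeddings and compute SHAP values."""
    net = train_encoder(X, y)
    with torch.no_grad():
        _, Z = net(torch.tensor(X, dtype=torch.float32))
    Z = Z.numpy()
    # Interpretable stage: logistic regression over the 128 embedding dimensions,
    # whose coefficients give direct feature-level (embedding-level) explanations.
    clf = LogisticRegression(max_iter=1000).fit(Z, y)
    # SHAP attributions quantifying each embedding dimension's contribution.
    explainer = shap.LinearExplainer(clf, Z)
    shap_values = explainer.shap_values(Z)
    return clf, shap_values
```

In this sketch the encoder is trained with its own prediction head and then frozen, so that the logistic regression (and, analogously, a GAM) sees fixed embeddings; this keeps the second stage fully transparent, at the cost of attributing predictions to embedding dimensions rather than raw clinical variables unless the embeddings are further mapped back to the inputs.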