Explainable Artificial Intelligence for Credit Risk Assessment: Balancing Transparency and Predictive Performance
Abstract
Credit risk assessment is a cornerstone of financial decision-making, guiding loan approvals, interest rate determination, and capital allocation strategies. While machine learning and deep learning models have demonstrated superior predictive accuracy compared to traditional statistical techniques, their black-box nature often undermines trust, interpretability, and regulatory compliance. This study explores the integration of Explainable Artificial Intelligence (XAI) into credit risk modeling, with the dual goal of enhancing transparency while maintaining strong predictive performance. We propose a hybrid framework that combines gradient boosting and neural network models with post-hoc interpretability tools such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), alongside inherently interpretable models such as decision trees and logistic regression. By evaluating the trade-offs among accuracy, fairness, and explainability on benchmark credit datasets, we demonstrate that XAI methods can provide actionable insights into borrower default risk without substantially compromising predictive power. Furthermore, we discuss the role of explainability in ensuring regulatory compliance, promoting fairness in lending decisions, and fostering trust among stakeholders. The findings suggest that transparent, high-performing models can strengthen risk management practices and support responsible innovation in the financial sector.
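To illustrate the kind of post-hoc attribution the abstract refers to, the sketch below computes exact Shapley values by brute-force coalition enumeration for a toy linear credit-scoring function. The model, feature names, weights, and baseline (dataset-mean) values are all hypothetical, chosen only to show the mechanics; in practice one would use the SHAP library against a trained gradient-boosting or neural model rather than exhaustive enumeration, which scales exponentially in the number of features.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear credit-scoring model (higher score = higher default risk).
# Weights and baseline means are illustrative assumptions, not fitted values.
WEIGHTS = {"utilization": 2.0, "late_payments": 1.5, "income": -0.8}
BASELINE = {"utilization": 0.3, "late_payments": 0.5, "income": 1.0}

def score(x):
    """Risk score: weighted sum of features."""
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def coalition_value(subset, x):
    """Score with features outside `subset` imputed by their baseline mean."""
    z = {f: (x[f] if f in subset else BASELINE[f]) for f in WEIGHTS}
    return score(z)

def shapley_values(x):
    """Exact Shapley attribution for each feature of applicant `x`."""
    feats = list(WEIGHTS)
    n = len(feats)
    phi = {}
    for f in feats:
        others = [g for g in feats if g != f]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Weight of a coalition of size k in the Shapley formula.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (coalition_value(set(s) | {f}, x)
                              - coalition_value(set(s), x))
        phi[f] = total
    return phi

applicant = {"utilization": 0.9, "late_payments": 3.0, "income": 0.6}
phi = shapley_values(applicant)
# Local accuracy: attributions sum to score(applicant) - score(baseline).
```

For a linear model each attribution reduces to `weight * (feature - baseline)`, so the output can be checked by hand; the enumeration approach, however, is model-agnostic and works unchanged for any `score` function.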
Article information
Journal
Journal of Economics, Finance and Accounting Studies
Volume (Issue)
7 (6)
Pages
14-27
Published
Copyright
Open access

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.