Explainability Metrics in Reinforcement-Based Therapy Systems
Abstract
Learning systems built on reinforcement learning (RL) are transforming personalized behavioral interventions by adapting dynamically to patient responses. However, the opacity of their decision-making continues to limit clinical trust and ethical deployment. In this study, we present the Explainability Metric Framework (EMF), a methodology for quantitatively evaluating interpretability in RL-based therapy models. The proposed metrics, Policy Transparency (PT), Reward Attribution (RA), and Clinical Alignment (CA), assess model understandability from algorithmic, therapeutic, and human-centered perspectives. Using a hybrid Actor-Critic architecture with SHAP-based interpretability modules, we demonstrate a 27% improvement in explainability and a 19% reduction in action misclassification compared with classic RL baselines. By conforming to the principles of Human-Centered AI and the NIST AI RMF, this work contributes to transparent and ethically governed AI in behavioral healthcare.
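The abstract does not give formal definitions for the PT and RA metrics, so the following is only an illustrative sketch of how such scores might be computed: a hypothetical Policy Transparency score based on the normalized entropy of the policy's action distribution, and a hypothetical Reward Attribution vector that normalizes per-feature attribution magnitudes (such as SHAP values). The function names and formulas are assumptions for illustration, not the paper's actual definitions.

```python
import numpy as np

def policy_transparency(action_probs: np.ndarray) -> float:
    """Hypothetical PT score: 1 minus the normalized entropy of the
    policy's action distribution, so a near-deterministic (easier to
    interpret) policy scores close to 1 and a uniform policy scores 0."""
    p = np.clip(action_probs, 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p))
    max_entropy = np.log(len(p))
    return float(1.0 - entropy / max_entropy)

def reward_attribution(attributions: np.ndarray) -> np.ndarray:
    """Hypothetical RA vector: normalize per-feature attribution
    magnitudes (e.g. SHAP values) so they sum to 1, giving each
    feature's relative share of the explained reward signal."""
    mags = np.abs(attributions)
    return mags / mags.sum()

# A peaked (confident) policy is scored as more transparent
# than a uniform (uninformative) one.
peaked = np.array([0.9, 0.05, 0.05])
uniform = np.array([1 / 3, 1 / 3, 1 / 3])
print(policy_transparency(peaked) > policy_transparency(uniform))  # True
```

In practice the attribution vector would come from an explainer such as SHAP applied to the actor network's inputs; here it is taken as a given array to keep the sketch self-contained.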
Article information
Journal
Frontiers in Computer Science and Artificial Intelligence
Volume (Issue)
1 (2)
Pages
24-30
Published
Copyright
Open access

This work is licensed under a Creative Commons Attribution 4.0 International License.
