Research Article

Explainability Metrics in Reinforcement-Based Therapy Systems

Authors

  • Ankur Singh, Master of Science in Computer Science, University of North America

Abstract

Learning systems that employ reinforcement learning (RL) are transforming personalized behavioral interventions through their ability to adapt dynamically to patient responses. Nonetheless, the opacity of their decision-making continues to limit clinical trust and ethical deployment. In this study, we present the Explainability Metric Framework (EMF), a methodology for quantitatively evaluating interpretability in RL-based therapy models. The proposed measures, Policy Transparency (PT), Reward Attribution (RA), and Clinical Alignment (CA), assess model understandability from algorithmic, therapeutic, and human-centered perspectives. Using a hybrid Actor-Critic architecture with SHAP-based interpretability modules, we demonstrate a 27% improvement in explainability and a 19% reduction in action misclassification compared with classic RL baselines. By aligning with the principles of Human-Centered AI and the NIST AI RMF, this work contributes to transparent and ethically governed AI in behavioral healthcare.

Article information

Journal

Frontiers in Computer Science and Artificial Intelligence

Volume (Issue)

1 (2)

Pages

24-30

Published

2024-12-28

How to Cite

Singh, A. (2024). Explainability Metrics in Reinforcement-Based Therapy Systems. Frontiers in Computer Science and Artificial Intelligence, 1(2), 24-30. https://al-kindipublisher.com/index.php/fcsai/article/view/11462


Keywords:

Reinforcement Learning, Explainable AI, Behavioral Therapy, Human-Centered AI, SHAP Analysis, Clinical Transparency, Trustworthy AI