Explainable Trust-Centric Artificial Intelligence for Integrated Healthcare, Financial Security, and Cyber-Risk Management
Abstract
The rapid deployment of artificial intelligence across healthcare, finance, and cybersecurity has intensified concerns about transparency, trust, and ethical accountability in automated decision-making systems. Although predictive models perform strongly within isolated domains, their real-world adoption remains constrained by limited explainability and insufficient alignment with human judgment. This research proposes an Explainable Trust-Centric Artificial Intelligence (ETC-AI) framework that unifies behavioral analytics, explainable machine learning, and governance-aware risk modeling across healthcare, financial security, and public systems. Drawing on advances in autism behavioral monitoring, cloud-based IoT architectures, cybersecurity for connected medical devices, financial fraud detection, and ethical AI for welfare systems, the framework operationalizes trust as a measurable and adaptive system property. Through cross-domain simulation and analytical evaluation, the study demonstrates improved interpretability, reduced false alerts, and enhanced decision confidence among human stakeholders. The findings support a shift toward explainable, trust-centric AI architectures capable of responsibly managing risk across interconnected socio-technical domains.
Article information
Journal
Frontiers in Computer Science and Artificial Intelligence
Volume (Issue)
4 (5)
Pages
01-06
Published
2025
Copyright
Copyright (c) 2025
Open access

This work is licensed under a Creative Commons Attribution 4.0 International License.
