From Competence to Calibration: Modeling Cognitive Trust in Human-AI Collaborative Systems
Abstract
Human-AI collaboration is rapidly becoming embedded across domains: decision support, operations, creative work, quality assurance, and more. Yet effective collaboration is often limited by the human's cognitive trust in the AI system, that is, the belief that the system is capable, reliable, understandable, and aligned with the user's goals. This paper provides a conceptual model of how cognitive trust forms, evolves, and influences behavior in human-AI collaboration. We synthesize trust antecedents (perceived competence, integrity, transparency, reliability, user disposition, and contextual risk) with dual cognitive processing mechanisms (heuristic and systematic evaluation) to explain how users appraise an AI partner, adjust their level of trust, and then act (compliance, delegation, reliance). We also integrate a feedback loop by which collaboration outcomes reshape cognitive trust and future appraisal. The contribution is two-fold: academically, we build a theory-driven framework linking psychological trust theory to AI system design; practically, we map design implications for AI systems that require calibrated trust rather than unthinking over-trust or skeptical under-trust. We conclude by offering a set of testable propositions for subsequent empirical validation and by highlighting implications for system designers, quality assurance professionals, and organizations adopting AI collaboration. The model opens the door for future studies on how to measure cognitive trust in AI settings, how to support trust calibration, and how to build AI systems that align with human cognitive patterns of trust formation.
