Governance Readiness Beyond Predictive Performance: An Empirical Benchmark for Higher-Education Early Warning Systems
Abstract
Higher-education early warning systems (EWS) are predominantly evaluated on predictive discrimination, yet institutional deployment requires that models simultaneously satisfy calibration, explanation stability, subgroup fairness, and operational feasibility. This paper proposes and empirically evaluates a structured governance-readiness benchmark comprising four measurable domains, applied to the 2020 Beginning Postsecondary Students Longitudinal Study (BPS:20/22), a nationally representative complex survey of approximately 22,320 first-time undergraduates representing roughly 3.3 million students nationally. The study compares a gradient-boosted classifier (XGBoost) and an institution-aware multilevel logistic regression under a survey-weighted evaluation protocol with balanced repeated replication (BRR) variance estimation and bootstrap explanation-stability testing. BPS:20/22 serves as a national governance benchmark; it does not represent a real-time campus intervention system. Findings indicate that the model with higher discrimination (AUC: 0.814 vs. 0.772; BRR 95% CIs: [0.801, 0.827] and [0.758, 0.786]) exhibits substantially lower explanation stability (rank-stability ρ_S: 0.713 vs. 0.893), larger subgroup disparities across race/ethnicity and income dimensions, and lower operational efficiency under the constrained alert threshold. In four of five institutional priority scenarios summarizing the predictive, calibration, stability, and fairness dimensions, the governance-readiness composite favors the institution-aware model. Operational actionability, evaluated as a parallel deployment constraint, points to the same conclusion: selecting models on accuracy alone under discrimination-centered evaluation may underestimate accountability risk.
The paper concludes by recommending Algorithmic Impact Statements as a minimum governance disclosure practice and provides a reproducible benchmark framework, subject to restricted-use data access, for institutional EWS review.
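The rank-stability metric ρ_S reported above can be computed as the mean Spearman rank correlation between full-sample feature importances and those from each bootstrap replicate. The sketch below illustrates that computation only; the function name, the use of SciPy's `spearmanr`, and the toy importance vectors are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from scipy.stats import spearmanr  # Spearman rank correlation


def rank_stability(full_importances, bootstrap_importances):
    """Mean Spearman rank correlation (rho_S) between the full-sample
    feature-importance vector and each bootstrap replicate's vector.

    Values near 1.0 indicate the feature ranking is stable under
    resampling; lower values signal explanation instability.
    """
    rhos = [
        spearmanr(full_importances, boot).correlation
        for boot in bootstrap_importances
    ]
    return float(np.mean(rhos))


# Hypothetical example: three features, two bootstrap replicates whose
# importance *ranks* match the full sample exactly, so rho_S = 1.0.
full = [0.50, 0.10, 0.30]
boots = [[0.45, 0.12, 0.28], [0.60, 0.05, 0.20]]
print(rank_stability(full, boots))
```

In practice each bootstrap replicate's importances would come from refitting the model (or re-running the explanation method) on a resampled training set, and a survey-weighted analysis would resample consistently with the BRR replicate weights.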
Article information
Journal
Frontiers in Computer Science and Artificial Intelligence
Volume (Issue)
4 (5)
Pages
49-65
Published
Copyright
Copyright (c) 2025. License: https://creativecommons.org/licenses/by/4.0/
Open access

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
