Bias Mitigation in Federated Healthcare Cloud Models
Abstract
Federated learning (FL) offers a privacy-preserving approach to training predictive models across multiple hospitals without sharing raw patient data. However, heterogeneity in hospital data, arising from differences in demographics, diagnostic practices, and treatment procedures, can introduce algorithmic bias and produce unfair or inequitable predictions. This study aims to detect and mitigate bias in federated healthcare cloud models so that decisions are made equitably across patient groups. Bias detection relies on fairness metrics such as demographic parity and equalized odds, combined with model performance audits across institutions. Mitigation techniques include data reweighting, training with fairness constraints, and post-processing calibration to balance predictive accuracy against fairness objectives. Secure aggregation methods are also examined to preserve privacy while enabling collaborative fairness auditing in cloud environments. Experimental evidence shows that incorporating bias mitigation substantially improves fairness without reducing overall model utility, making federated learning a more trustworthy option for real-world healthcare settings.
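The abstract names demographic parity and equalized odds as the fairness metrics used for bias detection. The sketch below is a minimal, illustrative Python implementation of those two metrics, not the authors' code; the group labels, predictions, and outcomes are hypothetical placeholders.

```python
# Illustrative sketch of the fairness metrics named in the abstract:
# demographic parity difference and equalized-odds gaps for a binary
# classifier and a binary sensitive attribute. Data here is synthetic.
import numpy as np


def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)


def equalized_odds_gaps(y_true, y_pred, group):
    """Gaps in true-positive and false-positive rates between the two groups."""
    gaps = {}
    for label, name in [(1, "tpr_gap"), (0, "fpr_gap")]:
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps[name] = abs(rate_a - rate_b)
    return gaps


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, n)    # hypothetical sensitive attribute
    y_true = rng.integers(0, 2, n)   # hypothetical ground-truth outcomes
    y_pred = rng.integers(0, 2, n)   # hypothetical binary predictions
    print("Demographic parity diff:", demographic_parity_difference(y_pred, group))
    print("Equalized odds gaps:", equalized_odds_gaps(y_true, y_pred, group))
```

In a federated setting, each institution could compute these per-group rates locally and share only the aggregated statistics, which is consistent with the collaborative fairness auditing described above.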
Article information
Journal
Journal of Medical and Health Studies
Volume (Issue)
6 (4)
Pages
83-95
Published
Copyright
Open access

This work is licensed under a Creative Commons Attribution 4.0 International License.