Preventing Discriminatory Decision-making in Evolving Data Streams
Bias in machine learning has rightly received significant attention over the
last decade. However, most fair machine learning (fair-ML) work to address bias
in decision-making systems has focused solely on the offline setting. Despite
the wide prevalence of online systems in the real world, work on identifying
and correcting bias in the online setting is severely lacking. The unique
challenges of the online environment make addressing bias more difficult than
in the offline setting. First, Streaming Machine Learning (SML) algorithms must
deal with the constantly evolving real-time data stream. Second, they need to
adapt to changing data distributions (concept drift) to make accurate
predictions on new incoming data.
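As a rough sketch of what drift monitoring can look like in practice (a simple two-window error-rate heuristic of our own devising, not the paper's method; production SML systems typically rely on established detectors such as ADWIN or DDM):

```python
# A minimal sketch of concept-drift detection, assuming a naive
# two-window error-rate heuristic; real streaming systems typically
# use detectors such as ADWIN or DDM instead.
from collections import deque

class SimpleDriftDetector:
    """Flags drift when the error rate over a recent window exceeds
    the error rate over a longer reference window by a margin."""

    def __init__(self, recent=100, reference=1000, margin=0.1):
        self.recent = deque(maxlen=recent)
        self.reference = deque(maxlen=reference)
        self.margin = margin

    def update(self, error):
        """error: 1 if the model mispredicted this sample, else 0."""
        self.recent.append(error)
        self.reference.append(error)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough recent evidence yet
        recent_rate = sum(self.recent) / len(self.recent)
        reference_rate = sum(self.reference) / len(self.reference)
        return recent_rate - reference_rate > self.margin
```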
Adding fairness constraints to this already complicated task is not straightforward. In this work, we focus on the
challenges of achieving fairness in biased data streams while accounting for
the presence of concept drift, processing one sample at a time. We present Fair
Sampling over Stream, a novel fair rebalancing approach capable of
being integrated with SML classification algorithms.
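The paper's exact algorithm is not reproduced here; the following is a minimal sketch of stream-side fair rebalancing under a naive oversampling heuristic, with all names hypothetical:

```python
# A minimal sketch of fair rebalancing on a data stream, assuming a
# naive oversampling heuristic; Fair Sampling over Stream itself is
# defined in the paper and may work quite differently.
from collections import defaultdict

class StreamRebalancer:
    """Tracks (sensitive group, label) counts one sample at a time and
    suggests how often to replay each sample to an online learner, so
    that under-represented combinations are oversampled."""

    def __init__(self):
        self.counts = defaultdict(int)

    def sample_weight(self, group, label):
        self.counts[(group, label)] += 1
        seen = self.counts[(group, label)]
        most = max(self.counts.values())
        # Replay rare group-label combinations proportionally more often.
        return max(1, round(most / seen))

# Usage with any online classifier exposing learn_one(x, y):
#   rebalancer = StreamRebalancer()
#   for x, y in stream:                      # one sample at a time
#       for _ in range(rebalancer.sample_weight(x["group"], y)):
#           model.learn_one(x, y)
```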
Furthermore, we devise the first unified performance-fairness metric, Fairness
Bonded Utility (FBU), to efficiently evaluate and compare the
performance-fairness trade-offs of different bias mitigation methods. FBU
reduces the comparison of multiple techniques to one unified and intuitive
evaluation, allowing model designers to easily choose among them.
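FBU's precise definition is given in the paper; purely as a hypothetical illustration of collapsing performance and fairness into a single number, one could blend accuracy with a demographic-parity gap:

```python
# A hypothetical combined performance-fairness score; this is NOT the
# paper's FBU definition, only an illustration of the unified-metric idea.
def unified_score(y_true, y_pred, groups, alpha=0.5):
    """Blend accuracy with (1 - demographic parity difference).
    Assumes binary 0/1 predictions and one sensitive attribute."""
    n = len(y_true)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n

    # Positive-prediction rate for each sensitive group.
    rates = {}
    for g in set(groups):
        idx = [i for i in range(n) if groups[i] == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    dp_gap = max(rates.values()) - min(rates.values())

    return alpha * accuracy + (1 - alpha) * (1 - dp_gap)
```

Under this toy weighting, a higher score means a better joint outcome; the real metric should be taken from the paper.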
Overall, extensive evaluations show that our methods surpass those of other
fair online techniques previously reported in the literature.
Beyond Accuracy: A Critical Review of Fairness in Machine Learning for Mobile and Wearable Computing
The field of mobile, wearable, and ubiquitous computing (UbiComp) is
undergoing a revolutionary integration of machine learning. Devices can now
diagnose diseases, predict heart irregularities, and unlock the full potential
of human cognition. However, the underlying algorithms are not immune to biases
with respect to sensitive attributes (e.g., gender, race), leading to
discriminatory outcomes. The research communities of HCI and AI-Ethics have
recently started to explore ways of reporting information about datasets to
surface and, eventually, counter those biases. The goal of this work is to
explore the extent to which the UbiComp community has adopted such ways of
reporting and highlight potential shortcomings. Through a systematic review of
papers published in the Proceedings of the ACM Interactive, Mobile, Wearable
and Ubiquitous Technologies (IMWUT) journal over the past 5 years (2018-2022),
we found that progress on algorithmic fairness within the UbiComp community
lags behind. Our findings show that only a small portion (5%) of published
papers adheres to modern fairness reporting, while the overwhelming majority
focuses on accuracy or error metrics alone. In light of these findings, our
work provides practical guidelines for the design and development of ubiquitous
technologies that strive not only for accuracy, but also for fairness.
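As a minimal illustration of what fairness reporting beyond aggregate accuracy can look like (a hypothetical helper of our own, not the paper's guidelines):

```python
# A minimal sketch of per-group fairness reporting, assuming binary 0/1
# predictions and a single sensitive attribute; the metrics shown are
# illustrative, not the paper's recommended reporting checklist.
def report_by_group(y_true, y_pred, groups):
    """Print accuracy and positive-prediction rate per sensitive group."""
    for g in sorted(set(groups)):
        idx = [i for i in range(len(y_true)) if groups[i] == g]
        accuracy = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
        positive_rate = sum(y_pred[i] for i in idx) / len(idx)
        print(f"group={g}: n={len(idx)} "
              f"accuracy={accuracy:.3f} positive_rate={positive_rate:.3f}")
```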