Mitigating Gender Bias in Machine Learning Data Sets
Artificial Intelligence has the capacity to amplify and perpetuate societal
biases and presents profound ethical implications for society. Gender bias has
been identified in the context of employment advertising and recruitment tools,
due to their reliance on underlying language processing and recommendation
algorithms. Attempts to address such issues have involved testing learned
associations, integrating concepts of fairness to machine learning and
performing more rigorous analysis of training data. Mitigating bias when
algorithms are trained on textual data is particularly challenging given the
complex way gender ideology is embedded in language. This paper proposes a
framework for the identification of gender bias in training data for machine
learning. The work draws upon gender theory and sociolinguistics to
systematically indicate levels of bias in textual training data and associated
neural word embedding models, thus highlighting pathways for both removing bias
from training data and critically assessing its impact.
Comment: 10 pages, 5 figures, 5 tables. Presented at the Bias2020 workshop
(part of the ECIR Conference) - http://bias.disim.univaq.i
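The embedding-level bias this abstract refers to is often probed by projecting word vectors onto a gender axis. The sketch below illustrates that generic probe with toy vectors; the `gender_bias` helper and the vectors are assumptions for illustration, not the paper's actual framework:

```python
import numpy as np

# Toy word vectors, assumed for illustration only; a real analysis would
# load embeddings trained on the corpus under study (e.g. word2vec, GloVe).
vecs = {
    "he":       np.array([ 1.0, 0.2, 0.1]),
    "she":      np.array([-1.0, 0.2, 0.1]),
    "engineer": np.array([ 0.8, 0.5, 0.0]),
    "nurse":    np.array([-0.7, 0.6, 0.1]),
}

def gender_bias(word):
    """Project a unit word vector onto the he-she axis.
    Positive values lean 'male', negative lean 'female'."""
    axis = vecs["he"] - vecs["she"]
    axis = axis / np.linalg.norm(axis)
    v = vecs[word] / np.linalg.norm(vecs[word])
    return float(v @ axis)

print(gender_bias("engineer"))  # positive in this toy example
print(gender_bias("nurse"))     # negative in this toy example
```

Words whose projections cluster by gender despite being semantically neutral (occupations, traits) are one indicator of bias in the underlying training text.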
Towards Socially Responsible AI: Cognitive Bias-Aware Multi-Objective Learning
Human society has a long history of suffering from cognitive biases leading
to social prejudices and mass injustice. The prevalent existence of cognitive
biases in large volumes of historical data can pose a threat of being
manifested as unethical and seemingly inhuman predictions as outputs of AI
systems trained on such data. To alleviate this problem, we propose a
bias-aware multi-objective learning framework that given a set of identity
attributes (e.g. gender, ethnicity etc.) and a subset of sensitive categories
of the possible classes of prediction outputs, learns to reduce the frequency
of predicting certain combinations of them, e.g. predicting stereotypes such as
`most blacks use abusive language', or `fear is a virtue of women'. Our
experiments conducted on an emotion prediction task with balanced class priors
show that a set of baseline bias-agnostic models exhibits cognitive biases
with respect to gender, e.g. predicting that women are prone to fear whereas
men are prone to anger. In contrast, our proposed bias-aware multi-objective
learning methodology is shown to reduce such biases in the predicted emotions.
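One way to read the abstract's objective is standard cross-entropy plus a penalty on the probability mass assigned to sensitive (identity group, class) combinations. The sketch below is a minimal illustration under that assumption; the function names, shapes, and penalty form are not the paper's implementation:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def bias_aware_loss(logits, labels, groups, sensitive_pairs, lam=1.0):
    """Cross-entropy plus a penalty discouraging sensitive predictions.

    logits: (n, k) class scores; labels: (n,) gold class indices;
    groups: (n,) identity attribute per example (e.g. a gender code);
    sensitive_pairs: iterable of (group, class) combinations to discourage;
    lam: weight trading off accuracy against the bias penalty.
    All names and shapes here are illustrative assumptions.
    """
    p = softmax(logits)
    n = len(labels)
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    penalty = 0.0
    for g, c in sensitive_pairs:
        mask = (groups == g)
        if mask.any():
            # Average probability of predicting class c for group g.
            penalty += p[mask, c].mean()
    return ce + lam * penalty
```

With `lam = 0` this reduces to ordinary cross-entropy; increasing `lam` pushes the model away from the listed stereotype combinations, which matches the multi-objective trade-off the abstract describes.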
Improving fairness in machine learning systems: What do industry practitioners need?
The potential for machine learning (ML) systems to amplify social inequities
and unfairness is receiving increasing popular and academic attention. A surge
of recent work has focused on the development of algorithmic tools to assess
and mitigate such unfairness. If these tools are to have a positive impact on
industry practice, however, it is crucial that their design be informed by an
understanding of real-world needs. Through 35 semi-structured interviews and an
anonymous survey of 267 ML practitioners, we conduct the first systematic
investigation of commercial product teams' challenges and needs for support in
developing fairer ML systems. We identify areas of alignment and disconnect
between the challenges faced by industry practitioners and solutions proposed
in the fair ML research literature. Based on these findings, we highlight
directions for future ML and HCI research that will better address industry
practitioners' needs.
Comment: To appear in the 2019 ACM CHI Conference on Human Factors in
Computing Systems (CHI 2019)