Towards Socially Responsible AI: Cognitive Bias-Aware Multi-Objective Learning
Human society has a long history of suffering from cognitive biases that lead
to social prejudice and mass injustice. The prevalence of cognitive biases in
large volumes of historical data poses the threat that AI systems trained on
such data will manifest them as unethical and seemingly inhumane predictions.
To alleviate this problem, we propose a bias-aware multi-objective learning
framework that, given a set of identity attributes (e.g., gender, ethnicity)
and a subset of sensitive categories among the possible prediction classes,
learns to reduce the frequency of predicting certain combinations of them,
e.g. stereotypes such as `most blacks use abusive language' or `fear is a
virtue of women'. Our experiments, conducted on an emotion prediction task
with balanced class priors, show that baseline bias-agnostic models exhibit
cognitive biases with respect to gender, such as women being more prone to
fear whereas men are more prone to anger. In contrast, our proposed
bias-aware multi-objective learning methodology is shown to reduce such
biases in the predicted emotions.
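The framework's core idea, jointly optimizing task accuracy against a penalty on stereotype-consistent predictions, can be sketched as follows. This is a minimal PyTorch illustration rather than the paper's actual objective; the function name bias_aware_loss, the single protected group, the single sensitive class, and the trade-off weight lam are all assumptions introduced here.

```python
import torch
import torch.nn.functional as F

def bias_aware_loss(logits, labels, group_mask, sensitive_class, lam=0.1):
    # Standard task objective: cross-entropy over all emotion classes.
    task_loss = F.cross_entropy(logits, labels)

    # Probability assigned to the sensitive class (e.g. "fear")
    # for every example in the batch.
    probs = F.softmax(logits, dim=-1)[:, sensitive_class]

    # Penalize the gap between the mean sensitive-class probability
    # inside vs. outside the protected group (e.g. "female").
    zero = logits.new_zeros(())
    in_group = probs[group_mask].mean() if group_mask.any() else zero
    out_group = probs[~group_mask].mean() if (~group_mask).any() else zero
    bias_penalty = (in_group - out_group).abs()

    # lam trades off task accuracy against bias reduction.
    return task_loss + lam * bias_penalty

# Toy usage with random logits standing in for a real model's output.
logits = torch.randn(8, 6, requires_grad=True)  # 8 examples, 6 emotions
labels = torch.randint(0, 6, (8,))
group_mask = torch.tensor([True, False] * 4)    # hypothetical group indicator
loss = bias_aware_loss(logits, labels, group_mask, sensitive_class=2)
loss.backward()
```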
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
We survey 146 papers analyzing "bias" in NLP systems, finding that their
motivations are often vague, inconsistent, and lacking in normative reasoning,
despite the fact that analyzing "bias" is an inherently normative process. We
further find that these papers' proposed quantitative techniques for measuring
or mitigating "bias" are poorly matched to their motivations and do not engage
with the relevant literature outside of NLP. Based on these findings, we
describe the beginnings of a path forward by proposing three recommendations
that should guide work analyzing "bias" in NLP systems. These recommendations
rest on a greater recognition of the relationships between language and social
hierarchies, encouraging researchers and practitioners to articulate their
conceptualizations of "bias"---i.e., what kinds of system behaviors are
harmful, in what ways, to whom, and why, as well as the normative reasoning
underlying these statements---and to center work around the lived experiences
of members of communities affected by NLP systems, while interrogating and
reimagining the power relations between technologists and such communities.
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
Language is increasingly being used to define rich visual recognition
problems with supporting image collections sourced from the web. Structured
prediction models are used in these tasks to take advantage of correlations
between co-occurring labels and visual input but risk inadvertently encoding
social biases found in web corpora. In this work, we study data and models
associated with multilabel object classification and visual semantic role
labeling. We find that (a) datasets for these tasks contain significant gender
bias and (b) models trained on these datasets further amplify existing bias.
For example, the activity cooking is over 33% more likely to involve females
than males in a training set, and a trained model further amplifies the
disparity to 68% at test time. We propose to inject corpus-level constraints
for calibrating existing structured prediction models and design an algorithm
based on Lagrangian relaxation for collective inference. Our method results in
almost no performance loss for the underlying recognition task but decreases
the magnitude of bias amplification by 47.5% and 40.5% for multilabel
classification and visual semantic role labeling, respectively.
Comment: 11 pages, published in EMNLP 2017
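The bias-amplification effect the abstract quantifies can be made concrete with a small sketch: compare how gender-skewed an activity's labels are in the training set versus in a model's test-time predictions. This simplifies the paper's corpus-level metric, and the counts below are illustrative stand-ins chosen to mirror the cooking example.

```python
from collections import Counter

def female_ratio(pairs, activity):
    # pairs: iterable of (activity, gender) tuples from labels or predictions.
    counts = Counter(g for a, g in pairs if a == activity)
    total = counts["female"] + counts["male"]
    return counts["female"] / total if total else 0.0

# A ~34-point female/male gap in training widens to 68 points in the
# model's predictions; the counts are illustrative, not the dataset's.
train_pairs = [("cooking", "female")] * 67 + [("cooking", "male")] * 33
pred_pairs  = [("cooking", "female")] * 84 + [("cooking", "male")] * 16

b_train = female_ratio(train_pairs, "cooking")  # 0.67
b_pred  = female_ratio(pred_pairs, "cooking")   # 0.84
print(f"train bias {b_train:.2f}, predicted bias {b_pred:.2f}, "
      f"amplification {b_pred - b_train:+.2f}")
```

A positive amplification value means the model exaggerates the training-set skew, which is exactly what the proposed corpus-level constraints are designed to suppress at inference time.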