CILIATE: Towards Fairer Class-based Incremental Learning by Dataset and Training Refinement
Due to the model aging problem, Deep Neural Networks (DNNs) need updates to adjust to new data distributions. The common practice leverages incremental learning (IL), e.g., Class-based Incremental Learning (CIL), which updates output labels, to update the model with new data and a limited number of old samples. This avoids heavyweight training from scratch with conventional methods and saves storage space by reducing the number of old samples to store. However, it also leads to poor fairness. In this paper, we show that CIL suffers from both dataset and algorithm bias, and existing solutions can only partially address the problem. We propose a novel framework, CILIATE, that fixes both dataset and algorithm bias in CIL. It features a novel differential-analysis-guided dataset and training refinement process that identifies unique and important samples overlooked by existing CIL methods and forces the model to learn from them. Through this process, CILIATE improves the fairness of CIL by 17.03%, 22.46%, and 31.79% over the state-of-the-art methods iCaRL, BiC, and WA, respectively, based on our evaluation on three popular datasets and widely used ResNet models.
Fairness-enhancing interventions in stream classification
The widespread use of automated data-driven decision support systems has raised many concerns regarding the accountability and fairness of the employed models in the absence of human supervision. Existing fairness-aware approaches tackle fairness as a batch learning problem and aim at learning a fair model which can then be applied to future instances of the problem. In many applications, however, the data arrives sequentially and its characteristics might evolve over time. In such a setting, it is counter-intuitive to "fix" a (fair) model over the data stream, as changes in the data might incur changes in the underlying model, thereby affecting its fairness. In this work, we propose fairness-enhancing interventions that modify the input data so that the outcome of any stream classifier applied to that data will be fair. Experiments on real and synthetic data show that our approach achieves good predictive performance and low discrimination scores over the course of the stream.
Comment: 15 pages, 7 figures. To appear in the proceedings of the 30th International Conference on Database and Expert Systems Applications, Linz, Austria, August 26 - 29, 201
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
We survey 146 papers analyzing "bias" in NLP systems, finding that their
motivations are often vague, inconsistent, and lacking in normative reasoning,
despite the fact that analyzing "bias" is an inherently normative process. We
further find that these papers' proposed quantitative techniques for measuring
or mitigating "bias" are poorly matched to their motivations and do not engage
with the relevant literature outside of NLP. Based on these findings, we
describe the beginnings of a path forward by proposing three recommendations
that should guide work analyzing "bias" in NLP systems. These recommendations
rest on a greater recognition of the relationships between language and social
hierarchies, encouraging researchers and practitioners to articulate their
conceptualizations of "bias"---i.e., what kinds of system behaviors are
harmful, in what ways, to whom, and why, as well as the normative reasoning
underlying these statements---and to center work around the lived experiences
of members of communities affected by NLP systems, while interrogating and
reimagining the power relations between technologists and such communities.
Legal Solutions in Health Reform: Insurance Discrimination on the Basis of Health Status: An Overview of Discrimination Practices, Federal Law, and Federal Reform Options
Provides an overview of the insurance industry's discriminatory practices based on health status in designing and administering health insurance and employee health benefit plans. Discusses current federal law and interim and long-term reform options.
Fairness in Credit Scoring: Assessment, Implementation and Profit Implications
The rise of algorithmic decision-making has spawned much research on fair
machine learning (ML). Financial institutions use ML for building risk
scorecards that support a range of credit-related decisions. Yet, the
literature on fair ML in credit scoring is scarce. The paper makes two
contributions. First, we provide a systematic overview of algorithmic options
for incorporating fairness goals in the ML model development pipeline. In this
scope, we also consolidate the space of statistical fairness criteria and
examine their adequacy for credit scoring. Second, we perform an empirical
study of different fairness processors in a profit-oriented credit scoring
setup using seven real-world data sets. The empirical results substantiate the
evaluation of fairness measures, identify more and less suitable options to
implement fair credit scoring, and clarify the profit-fairness trade-off in
lending decisions. Specifically, we find that multiple fairness criteria can be
approximately satisfied at once and identify separation as a proper criterion
for measuring the fairness of a scorecard. We also find fair in-processors to
deliver a good balance between profit and fairness. More generally, we show
that algorithmic discrimination can be reduced to a reasonable level at a
relatively low cost.
Comment: Preprint submitted to European Journal of Operational Research
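The separation criterion the abstract identifies as suitable for scorecards is commonly operationalized as equalized odds: predictions should have the same true-positive and false-positive rates in every group. A minimal sketch of such a check for a binary, two-group scorecard (the function names and group encoding are our own, not from the paper):

```python
def rates(y_true, y_pred, group, g):
    """True-positive and false-positive rates restricted to group g."""
    tp = fn = fp = tn = 0
    for yt, yp, gr in zip(y_true, y_pred, group):
        if gr != g:
            continue
        if yt == 1:
            tp += yp == 1
            fn += yp == 0
        else:
            fp += yp == 1
            tn += yp == 0
    return tp / (tp + fn), fp / (fp + tn)

def separation_gap(y_true, y_pred, group):
    """Largest absolute difference in TPR or FPR between the two groups;
    0 means the separation (equalized-odds) criterion holds exactly."""
    tpr0, fpr0 = rates(y_true, y_pred, group, 0)
    tpr1, fpr1 = rates(y_true, y_pred, group, 1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))
```

In a profit-oriented setting like the one the paper studies, a gap threshold on this quantity would then be traded off against expected lending profit.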
Fairness Continual Learning Approach to Semantic Scene Understanding in Open-World Environments
Continual semantic segmentation aims to learn new classes while maintaining the information from the previous classes. Although prior studies have shown impressive progress in recent years, the fairness concern in continual semantic segmentation needs to be better addressed. Meanwhile, fairness is one of the most vital factors in deploying deep learning models, especially in human-related or safety-critical applications. In this paper, we present a novel Fairness Continual Learning approach to the semantic segmentation problem. In particular, under the fairness objective, a new fairness continual learning framework is proposed based on class distributions. Then, a novel Prototypical Contrastive Clustering loss is proposed to address the significant challenges in continual learning, i.e., catastrophic forgetting and background shift. Our proposed loss has also been proven to be a novel, generalized learning paradigm of knowledge distillation commonly used in continual learning. Moreover, the proposed Conditional Structural Consistency loss further regularizes the structural constraints of the predicted segmentation. Our proposed approach achieves state-of-the-art performance on three standard scene understanding benchmarks, i.e., ADE20K, Cityscapes, and Pascal VOC, and promotes the fairness of the segmentation model.
The Exclusion of Race From Mandated Continuing Legal Education Requirements: A Critical Race Theory Analysis
The purpose of this paper is to critique the system of CLE using Critical Race Theory as an analytical lens in an effort to reveal possible reasons for the exclusion of bias and discrimination from CLE offerings in the legal profession.