Visual analysis of discrimination in machine learning
The growing use of automated decision-making in critical applications, such
as crime prediction and college admission, has raised questions about fairness
in machine learning. How can we decide whether different treatments are
reasonable or discriminatory? In this paper, we investigate discrimination in
machine learning from a visual analytics perspective and propose an interactive
visualization tool, DiscriLens, to support a more comprehensive analysis. To
reveal detailed information on algorithmic discrimination, DiscriLens
identifies a collection of potentially discriminatory itemsets based on causal
modeling and classification rules mining. By combining an extended Euler
diagram with a matrix-based visualization, we develop a novel set visualization
to facilitate the exploration and interpretation of discriminatory itemsets. A
user study shows that users can interpret the visually encoded information in
DiscriLens quickly and accurately. Use cases demonstrate that DiscriLens
provides informative guidance in understanding and reducing algorithmic
discrimination.
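As a rough illustration of what "potentially discriminatory itemsets" might look like in practice, the sketch below enumerates attribute-value combinations and flags those whose positive-decision rates differ strongly across a protected group. This is a minimal assumption-laden simplification, not DiscriLens itself: the toy records, attribute names, and the 0.2 disparity threshold are all invented for illustration, and the causal-modeling step of the paper is omitted.

```python
from itertools import combinations
from collections import defaultdict

# Toy records: (attributes, protected group, model decision).
# All names and the 0.2 threshold are illustrative assumptions.
records = [
    ({"degree": "BSc", "region": "north"}, "A", 1),
    ({"degree": "BSc", "region": "north"}, "B", 0),
    ({"degree": "BSc", "region": "south"}, "A", 1),
    ({"degree": "MSc", "region": "north"}, "B", 1),
    ({"degree": "BSc", "region": "north"}, "B", 0),
    ({"degree": "MSc", "region": "south"}, "A", 0),
]

def itemsets(attrs, max_len=2):
    """Enumerate attribute-value combinations (itemsets) up to max_len."""
    items = sorted(attrs.items())
    for k in range(1, max_len + 1):
        for combo in combinations(items, k):
            yield frozenset(combo)

# Aggregate decision counts per (itemset, protected group).
stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # itemset -> group -> [positives, total]
for attrs, group, decision in records:
    for iset in itemsets(attrs):
        cell = stats[iset][group]
        cell[0] += decision
        cell[1] += 1

# Flag itemsets whose positive-decision rates differ strongly across groups.
for iset, by_group in stats.items():
    rates = {g: pos / tot for g, (pos, tot) in by_group.items() if tot > 0}
    if len(rates) > 1 and max(rates.values()) - min(rates.values()) > 0.2:
        desc = ", ".join(f"{k}={v}" for k, v in sorted(iset))
        print(f"potentially discriminatory itemset: {{{desc}}} rates={rates}")
```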
The Grammar of Interactive Explanatory Model Analysis
The growing need for in-depth analysis of predictive models leads to a series
of new methods for explaining their local and global properties. Which of these
methods is the best? It turns out that this is an ill-posed question. One
cannot sufficiently explain a black-box machine learning model using a single
method that gives only one perspective. Isolated explanations are prone to
misunderstanding, which inevitably leads to wrong or simplistic reasoning. This
problem is known as the Rashomon effect and refers to diverse, even
contradictory interpretations of the same phenomenon. Surprisingly, the
majority of methods developed for explainable machine learning focus on a
single aspect of the model behavior. In contrast, we showcase the problem of
explainability as an interactive and sequential analysis of a model. This paper
presents how different Explanatory Model Analysis (EMA) methods complement each
other and why it is essential to juxtapose them. The introduced
process of Interactive EMA (IEMA) derives from the algorithmic side of
explainable machine learning and aims to embrace ideas developed in cognitive
sciences. We formalize the grammar of IEMA to describe potential human-model
dialogues. IEMA is implemented in a human-centered framework that adopts
interactivity, customizability and automation as its main traits. Combined,
these methods enhance the responsible approach to predictive modeling.
Comment: 17 pages, 10 figures, 3 tables
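The core idea of juxtaposing complementary explanation methods can be sketched with standard tooling: a global view (which features matter overall) followed by a local view (how one instance's prediction responds to a single feature). The sketch below is a minimal illustration of that sequence, not the paper's own framework, which the abstract does not name; the dataset, model, and instance choice are assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global explanation: permutation importance over the whole dataset.
pi = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = np.argsort(pi.importances_mean)[::-1][:3]
for i in top:
    print(f"global importance  {data.feature_names[i]}: {pi.importances_mean[i]:.3f}")

# Local explanation: a ceteris-paribus profile for one instance,
# varying the single most important feature while holding the rest fixed.
instance = X[0].copy()
feat = top[0]
grid = np.linspace(X[:, feat].min(), X[:, feat].max(), 5)
for v in grid:
    probe = instance.copy()
    probe[feat] = v
    p = model.predict_proba(probe.reshape(1, -1))[0, 1]
    print(f"local profile  {data.feature_names[feat]}={v:.2f} -> P(class 1)={p:.3f}")
```

Reading the two outputs together, rather than in isolation, is the kind of sequential human-model dialogue the IEMA grammar is meant to describe.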
Individual Fairness Guarantee in Learning with Censorship
Algorithmic fairness, studying how to make machine learning (ML) algorithms
fair, is an established area of ML. As ML technologies expand their application
domains, including ones with high societal impact, it becomes essential to take
fairness into consideration when building ML systems. Yet, despite its wide
range of socially sensitive applications, most work treats the issue of
algorithmic bias as an intrinsic property of supervised learning, i.e., the
class label is given as a precondition. Unlike prior fairness work, we study
individual fairness in learning with censorship where the assumption of
availability of the class label does not hold, while still requiring that
similar individuals are treated similarly. We argue that this perspective
represents a more realistic model of fairness research for real-world
application deployment, and show how learning with such a relaxed precondition
draws new insights that better explain algorithmic fairness. We also thoroughly
evaluate the performance of the proposed methodology on three real-world
datasets, and validate its superior performance in minimizing discrimination
while maintaining predictive performance.
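The requirement that "similar individuals are treated similarly" is commonly formalized as a Lipschitz-style condition on the predictor: the difference in scores should be bounded by the distance between individuals. The sketch below checks that condition on pairs of toy individuals; the distance metric, Lipschitz constant, and data are illustrative assumptions, and the censorship-aware learning method of the paper is not reproduced here.

```python
import numpy as np

# Toy feature vectors and model scores for six individuals.
# The Euclidean metric and Lipschitz constant L are illustrative assumptions.
X = np.array([[0.10, 0.20], [0.12, 0.19], [0.90, 0.80],
              [0.88, 0.82], [0.50, 0.50], [0.52, 0.48]])
scores = np.array([0.30, 0.75, 0.90, 0.88, 0.55, 0.53])  # f(x) per individual
L = 1.0  # maximum allowed change in score per unit of input distance

violations = []
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        d = np.linalg.norm(X[i] - X[j])   # how similar the two individuals are
        gap = abs(scores[i] - scores[j])  # how differently they are treated
        if gap > L * d:                   # Lipschitz condition violated
            violations.append((i, j, gap, d))

for i, j, gap, d in violations:
    print(f"individuals {i} and {j}: |f(xi)-f(xj)|={gap:.2f} exceeds L*d={L*d:.2f}")
```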
Algorithmic Fairness from a Non-ideal Perspective
Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a variety of algorithms in attempts to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to fair machine learning to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and the perfectly just world. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of proposed policies, naive applications of ideal thinking can lead to misguided interventions. In this paper, we demonstrate a connection between the fair machine learning literature and the ideal approach in political philosophy, and argue that the increasingly apparent shortcomings of proposed fair machine learning algorithms reflect broader troubles
faced by the ideal approach. We conclude with a critical discussion of the harms of misguided solutions, a
reinterpretation of impossibility results, and directions for future research.
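For concreteness, two of the standard parity metrics the abstract alludes to can be computed in a few lines. The sketch below shows a demographic parity gap and an equal opportunity gap on toy arrays; the data and group labels are illustrative assumptions, not results from the paper.

```python
import numpy as np

# Toy predictions, true labels, and protected-group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return abs(rates["a"] - rates["b"])

def equal_opportunity_gap(y_pred, y_true, group):
    """Difference in true-positive rates between the two groups."""
    tprs = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs[g] = y_pred[mask].mean()
    return abs(tprs["a"] - tprs["b"])

print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equal opportunity gap:", equal_opportunity_gap(y_pred, y_true, group))
```

The paper's point is precisely that closing such gaps, taken in isolation, amounts to ideal-theory reasoning: the metrics say nothing about how disparities arose or which interventions would actually help.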