The Disparate Effects of Strategic Manipulation
When consequential decisions are informed by algorithmic input, individuals
may feel compelled to alter their behavior in order to gain a system's
approval. Models of agent responsiveness, termed "strategic manipulation,"
analyze the interaction between a learner and agents in a world where all
agents are equally able to manipulate their features in an attempt to "trick" a
published classifier. In cases of real-world classification, however, an
agent's ability to adapt to an algorithm is not simply a function of her
personal interest in receiving a positive classification, but is bound up in a
complex web of social factors that affect her ability to pursue certain action
responses. In this paper, we adapt models of strategic manipulation to capture
dynamics that may arise in a setting of social inequality wherein candidate
groups face different costs to manipulation. We find that whenever one group's
costs are higher than the other's, the learner's equilibrium strategy exhibits
an inequality-reinforcing phenomenon wherein the learner erroneously admits
some members of the advantaged group, while erroneously excluding some members
of the disadvantaged group. We also consider the effects of interventions in
which a learner subsidizes members of the disadvantaged group, lowering their
costs in order to improve her own classification performance. Here we encounter
a paradoxical result: there exist cases in which providing a subsidy improves
only the learner's utility while actually making both candidate groups
worse off, even the group receiving the subsidy. Our results reveal the
potentially adverse social ramifications of deploying tools that attempt to
evaluate an individual's "quality" when agents' capacities to adaptively
respond differ.
Comment: 29 pages, 4 figures.
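A toy simulation may help make the mechanism concrete. The sketch below is not the paper's model: the quadratic cost function, the benefit value, the raised threshold, and the per-group cost multipliers are all illustrative assumptions. It only shows how a single published threshold interacts with unequal manipulation costs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: true quality x ~ N(0, 1); a candidate is "qualified" if
# x >= 0. An agent can raise her observed feature by g at cost c * g^2, where
# the per-group multipliers below are assumptions, not the paper's values.
COSTS = {"advantaged": 0.5, "disadvantaged": 2.0}
BENEFIT = 1.0  # utility of a positive classification

def best_response(x, threshold, cost):
    """Game the feature up to the threshold iff doing so is worth the benefit."""
    gap = max(threshold - x, 0.0)
    return max(x, threshold) if cost * gap ** 2 <= BENEFIT else x

def error_rates(threshold, cost, n=50_000):
    x = rng.normal(size=n)
    observed = np.array([best_response(xi, threshold, cost) for xi in x])
    admitted = observed >= threshold
    qualified = x >= 0.0
    false_admit = np.mean(admitted & ~qualified)   # unqualified but admitted
    false_reject = np.mean(~admitted & qualified)  # qualified but excluded
    return false_admit, false_reject

# A learner anticipating gaming publishes a threshold above 0. With one shared
# threshold, cheap-to-game candidates below the bar are erroneously admitted,
# while expensive-to-game candidates above the bar are erroneously excluded.
threshold = 1.0  # an illustrative raised threshold, not a computed equilibrium
for group, cost in COSTS.items():
    fa, fr = error_rates(threshold, cost)
    print(f"{group:>13}: false admits {fa:.3f}, false rejections {fr:.3f}")
```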
Robot Rights? Let's Talk about Human Welfare Instead
The 'robot rights' debate, and its related question of 'robot
responsibility', invokes some of the most polarized positions in AI ethics.
While some advocate for granting robots rights on a par with human beings,
others, in stark opposition, argue that robots are not deserving of rights but
are objects that should be our slaves. Grounded in post-Cartesian philosophical
foundations, we argue not just to deny robots 'rights', but to deny that
robots, as artifacts emerging out of and mediating human being, are the kinds
of things that could be granted rights in the first place. Once we see robots
as mediators of human being, we can understand how the 'robot rights' debate
is focused on first world problems, at the expense of urgent ethical concerns,
such as machine bias, machine elicited human labour exploitation, and erosion
of privacy, all impacting society's least privileged individuals. We conclude
that, if human being is our starting point and human welfare is the primary
concern, the negative impacts emerging from machinic systems, as well as the
lack of responsibility taken by the people designing, selling, and deploying
such machines, remain the most pressing ethical discussion in AI.
Comment: Accepted to the AIES 2020 conference in New York, February 2020. The
final version of this paper will appear in Proceedings of the 2020 AAAI/ACM
Conference on AI, Ethics, and Society.
Steps Towards Value-Aligned Systems
Algorithmic (including AI/ML) decision-making artifacts are an established
and growing part of our decision-making ecosystem. They are indispensable tools
for managing the flood of information needed to make effective decisions in a
complex world. The current literature is full of examples of how individual
artifacts violate societal norms and expectations (e.g., violations of fairness,
privacy, or safety norms). Against this backdrop, this discussion highlights an
under-emphasized perspective in the literature on assessing value misalignment
in AI-equipped sociotechnical systems. The research on value misalignment has a
strong focus on the behavior of individual tech artifacts. This discussion
argues for a more structured systems-level approach for assessing
value-alignment in sociotechnical systems. We rely primarily on the research on
fairness to make our arguments concrete, and we use the opportunity to
highlight how adopting a systems perspective improves our ability to explain
and address value misalignments. Our discussion ends with an exploration of
priority questions that demand attention if we are to assure the value
alignment of whole systems, not just individual artifacts.
Comment: Original version appeared in Proceedings of the 2020 AAAI/ACM
Conference on AI, Ethics, and Society (AIES '20), February 7-8, 2020, New
York, NY, USA. 5 pages, 2 figures. Corrected some typos in this version.
Fairness Testing: Testing Software for Discrimination
This paper defines software fairness and discrimination and develops a
testing-based method for measuring if and how much software discriminates,
focusing on causality in discriminatory behavior. Evidence of software
discrimination has been found in modern software systems that recommend
criminal sentences, grant access to financial products, and determine who is
allowed to participate in promotions. Our approach, Themis, generates efficient
test suites to measure discrimination. Given a schema describing valid system
inputs, Themis generates discrimination tests automatically and does not
require an oracle. We evaluate Themis on 20 software systems, 12 of which come
from prior work with explicit focus on avoiding discrimination. We find that
(1) Themis is effective at discovering software discrimination, (2)
state-of-the-art techniques for removing discrimination from algorithms fail in
many situations, at times discriminating against as much as 98% of an input
subdomain, (3) Themis optimizations are effective at producing efficient test
suites for measuring discrimination, and (4) Themis is more efficient on
systems that exhibit more discrimination. We thus demonstrate that fairness
testing is a critical aspect of the software development cycle in domains with
possible discrimination and provide initial tools for measuring software
discrimination.
Comment: Sainyam Galhotra, Yuriy Brun, and Alexandra Meliou. 2017. Fairness
Testing: Testing Software for Discrimination. In Proceedings of the 2017 11th
Joint Meeting of the European Software Engineering Conference and the ACM
SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE),
Paderborn, Germany, September 4-8, 2017 (ESEC/FSE '17).
https://doi.org/10.1145/3106237.3106277, ESEC/FSE 2017.
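The causal flavor of this testing can be sketched as follows: hold every input fixed, vary only the protected attribute, and flag inputs whose outcome changes. The harness below is a simplified illustration of that idea rather than Themis itself; the schema, the deliberately biased decision function, and the function names are invented.

```python
import random

# Hypothetical input schema in the spirit of the approach: given valid values
# for each attribute, tests can be generated with no oracle, because the
# system under test is compared against itself.
SCHEMA = {
    "race": ["green", "purple"],         # protected attribute under test
    "income": ["low", "medium", "high"],
    "savings": ["low", "high"],
}

def decision(applicant):
    """Stand-in for the software under test (deliberately biased for the demo)."""
    return applicant["income"] != "low" and applicant["race"] == "green"

def causal_discrimination_rate(system, schema, protected, samples=1000, seed=0):
    """Fraction of random inputs whose outcome flips when only `protected` flips."""
    rng = random.Random(seed)
    flipped = 0
    for _ in range(samples):
        applicant = {k: rng.choice(values) for k, values in schema.items()}
        outcomes = {system({**applicant, protected: v}) for v in schema[protected]}
        flipped += len(outcomes) > 1
    return flipped / samples

# Roughly two-thirds of inputs here flip with race, mirroring a finding like
# "discriminates against X% of an input subdomain".
print(f"{causal_discrimination_rate(decision, SCHEMA, 'race'):.2%}")
```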
POTs: Protective Optimization Technologies
Algorithmic fairness aims to address the economic, moral, social, and
political impact that digital systems have on populations through solutions
that can be applied by service providers. Fairness frameworks do so, in part,
by mapping these problems to a narrow definition and assuming the service
providers can be trusted to deploy countermeasures. Not surprisingly, these
decisions limit fairness frameworks' ability to capture a variety of harms
caused by systems.
We characterize fairness limitations using concepts from requirements
engineering and from social sciences. We show that the focus on algorithms'
inputs and outputs misses harms that arise from systems interacting with the
world; that the focus on bias and discrimination omits broader harms on
populations and their environments; and that relying on service providers
excludes scenarios where they are not cooperative or intentionally adversarial.
We propose Protective Optimization Technologies (POTs). POTs provide means
for affected parties to address the negative impacts of systems in the
environment, expanding avenues for political contestation. POTs intervene from
outside the system, do not require service providers to cooperate, and can
serve to correct, shift, or expose harms that systems impose on populations and
their environments. We illustrate the potential and limitations of POTs in two
case studies: countering road congestion caused by traffic-beating
applications, and recalibrating credit scoring for loan applicants.
Comment: Appears in Conference on Fairness, Accountability, and Transparency
(FAT* 2020). Bogdan Kulynych and Rebekah Overdorf contributed equally to this
work. Version v1/v2 by Seda Gürses, Rebekah Overdorf, and Ero Balsa was
presented at HotPETS 2018 and at PiMLAI 2018.
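As a cartoon of the traffic case study, the sketch below stages an intervention entirely outside the optimization system. All segment names and numbers are invented, and a real deployment would raise the effectiveness and ethics questions the paper discusses.

```python
# Toy "traffic-beating" optimizer: it routes drivers down whichever segment
# its users report as fastest. Residents of the shortcut street operate a POT
# from outside the system, without provider cooperation, by contributing slow
# traversal reports until the optimizer shifts traffic away from their street.

reports = {
    "highway": [30, 28, 32],           # reported speeds (km/h), invented
    "residential_shortcut": [45, 50],  # looks fast, so the app cuts through
}

def fastest_segment(reports):
    """Pick the segment with the highest mean reported speed."""
    return max(reports, key=lambda seg: sum(reports[seg]) / len(reports[seg]))

print(fastest_segment(reports))   # residential_shortcut: cut-through traffic

# The protective intervention: residents' own slow reports shift the optimum.
reports["residential_shortcut"] += [5, 5, 5, 5]
print(fastest_segment(reports))   # highway
```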
Crowdsourcing the Perception of Machine Teaching
Teachable interfaces can empower end-users to attune machine learning systems
to their idiosyncratic characteristics and environment by explicitly providing
pertinent training examples. While they facilitate control, their effectiveness
can be hindered by a lack of expertise or by misconceptions. We investigate how
users may conceptualize, experience, and reflect on their engagement in machine
teaching by deploying a mobile teachable testbed in Amazon Mechanical Turk.
Using a performance-based payment scheme, Mechanical Turk workers (N = 100) are
asked to train, test, and re-train a robust recognition model in real time
with a few snapshots taken in their environment. We find that participants
incorporate diversity in their examples, drawing parallels to how humans
recognize objects independently of size, viewpoint, location, and illumination.
Many of their misconceptions relate to consistency and model capabilities for
reasoning. With limited variation and edge cases in testing, the majority of
them do not change strategies on a second training attempt.
Comment: 10 pages, 8 figures, 5 tables, CHI 2020 conference.
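A minimal sketch of a teachable recognizer in this spirit: the end user supplies a few labeled examples, the "model" is a nearest-centroid classifier over feature vectors, and re-training is just recomputing centroids. The testbed's actual model and features are not specified here; the class and the toy embeddings are assumptions.

```python
import numpy as np

class TeachableClassifier:
    """End-user-trainable recognizer: teach with examples, predict by centroid."""

    def __init__(self):
        self.examples = {}  # label -> list of feature vectors

    def teach(self, label, features):
        self.examples.setdefault(label, []).append(np.asarray(features, float))

    def predict(self, features):
        x = np.asarray(features, float)
        centroids = {lbl: np.mean(vs, axis=0) for lbl, vs in self.examples.items()}
        return min(centroids, key=lambda lbl: np.linalg.norm(x - centroids[lbl]))

model = TeachableClassifier()
# Diverse teaching snapshots (varied viewpoint/illumination would vary features);
# the 2-D vectors stand in for embeddings from, e.g., a pretrained network.
model.teach("mug", [0.9, 0.1]); model.teach("mug", [0.8, 0.2])
model.teach("keys", [0.1, 0.9]); model.teach("keys", [0.2, 0.8])
print(model.predict([0.85, 0.15]))  # -> mug
```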
Bureaucracy as a Lens for Analyzing and Designing Algorithmic Systems
Scholarship on algorithms has drawn on the analogy between algorithmic systems
and bureaucracies to diagnose shortcomings in algorithmic decision-making. We
extend the analogy further by drawing on Michel Crozier's theory of
bureaucratic organizations to analyze the relationship between algorithmic and
human decision-making power. We present algorithms as analogous to impartial
bureaucratic rules for controlling action, and argue that discretionary
decision-making power in algorithmic systems accumulates at locations where
uncertainty about the operation of algorithms persists. This key point of our
essay connects with Alkhatib and Bernstein's theory of 'street-level
algorithms', and highlights that the role of human discretion in algorithmic
systems is to accommodate uncertain situations which inflexible algorithms
cannot handle. We conclude by discussing how the analysis and design of
algorithmic systems could seek to identify and cultivate important sources of
uncertainty, to enable the human discretionary work that enhances systemic
resilience in the face of algorithmic errors.
Peer reviewed
Replication Data for: Unintended Consequences of Geographic Targeting
This dataset was used for the paper published on September 1, 2015, in
Technology Science: http://techscience.org/a/2015090103