Allocating Limited Resources to Protect a Massive Number of Targets using a Game Theoretic Model
Resource allocation is the process of optimally deploying scarce resources. In the area of security, allocating limited resources to protect a massive number of targets is especially challenging. This paper addresses this resource allocation problem by constructing a game-theoretic model in which a defender and an attacker are the players and their interaction is formulated as a trade-off between protecting targets and consuming resources. The action cost, an essential component of resource consumption, is incorporated into the proposed model. Additionally, a bounded-rationality behavior model (Quantal Response, QR), which simulates an adversarial human attacker, is introduced to refine the proposed model. To validate the model, we compare different utility functions and resource allocation strategies. The comparison results suggest that the proposed resource allocation strategy outperforms the alternatives in terms of both utility and resource effectiveness.

Comment: 14 pages, 12 figures, 41 references
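To make the Quantal Response component concrete, here is a minimal sketch of how a QR attacker and the resulting defender utility are often modeled. The helper names, the value of the rationality parameter lam, the linear action cost, and the assumption that a covered target yields the attacker a proportionally reduced payoff are all illustrative choices, not the paper's actual utility functions:

```python
import numpy as np

def qr_attack_probs(attacker_utils, lam=0.8):
    # Quantal Response (QR): each target is attacked with probability
    # proportional to exp(lam * attacker utility). lam -> 0 gives a
    # uniformly random attacker; lam -> infinity, a perfect best response.
    logits = lam * np.asarray(attacker_utils, dtype=float)
    logits -= logits.max()              # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

def defender_expected_utility(coverage, values, cost_per_unit, lam=0.8):
    # coverage[i]: share of defensive resources on target i (assumed form)
    # values[i]:   value of target i to both players (assumed symmetric)
    coverage = np.asarray(coverage, dtype=float)
    values = np.asarray(values, dtype=float)
    # Assumption: an attack succeeds in proportion to the uncovered
    # fraction, so the attacker's expected payoff is the uncovered value.
    attacker_utils = (1.0 - coverage) * values
    q = qr_attack_probs(attacker_utils, lam)
    expected_loss = np.sum(q * attacker_utils)    # expected damage taken
    action_cost = cost_per_unit * coverage.sum()  # cost of resources used
    return -expected_loss - action_cost

# Example: 5 targets, one unit of resources spread in proportion to value.
values = np.array([10.0, 8.0, 5.0, 2.0, 1.0])
coverage = values / values.sum()
print(defender_expected_utility(coverage, values, cost_per_unit=0.5))
```

The appeal of QR over a perfectly rational best-response attacker is that the single parameter lam interpolates between fully random and fully rational behavior, which is what lets it stand in for a boundedly rational human adversary.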
POTs: Protective Optimization Technologies
Algorithmic fairness aims to address the economic, moral, social, and
political impact that digital systems have on populations through solutions
that can be applied by service providers. Fairness frameworks do so, in part,
by mapping these problems to a narrow definition and assuming the service
providers can be trusted to deploy countermeasures. Not surprisingly, these
decisions limit fairness frameworks' ability to capture a variety of harms
caused by systems.
We characterize fairness limitations using concepts from requirements
engineering and from social sciences. We show that the focus on algorithms'
inputs and outputs misses harms that arise from systems interacting with the
world; that the focus on bias and discrimination omits broader harms on
populations and their environments; and that relying on service providers
excludes scenarios where they are not cooperative or intentionally adversarial.
We propose Protective Optimization Technologies (POTs). POTs provide means
for affected parties to address the negative impacts of systems in the
environment, expanding avenues for political contestation. POTs intervene from
outside the system, do not require service providers to cooperate, and can
serve to correct, shift, or expose harms that systems impose on populations and
their environments. We illustrate the potential and limitations of POTs in two
case studies: countering road congestion caused by traffic-beating
applications, and recalibrating credit scoring for loan applicants.

Comment: Appears in the Conference on Fairness, Accountability, and Transparency (FAT* 2020). Bogdan Kulynych and Rebekah Overdorf contributed equally to this work. Version v1/v2 by Seda Gürses, Rebekah Overdorf, and Ero Balsa was presented at HotPETS 2018 and at PiMLAI 2018
Ethical Challenges in Data-Driven Dialogue Systems
The use of dialogue systems as a medium for human-machine interaction is an
increasingly prevalent paradigm. A growing number of dialogue systems use
conversation strategies learned from large datasets. There are well-documented instances where interactions with these systems have resulted in biased or even offensive conversations due to the data-driven training process.
Here, we highlight potential ethical issues that arise in dialogue systems
research, including: implicit biases in data-driven systems, the rise of
adversarial examples, potential sources of privacy violations, safety concerns,
special considerations for reinforcement learning systems, and reproducibility
concerns. We also suggest areas stemming from these issues that deserve further
investigation. Through this initial survey, we hope to spur research leading to
robust, safe, and ethically sound dialogue systems.

Comment: In submission to the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society
Society-in-the-Loop: Programming the Algorithmic Social Contract
Recent rapid advances in Artificial Intelligence (AI) and Machine Learning
have raised many questions about the regulatory and governance mechanisms for
autonomous machines. Many commentators, scholars, and policy-makers now call
for ensuring that algorithms governing our lives are transparent, fair, and
accountable. Here, I propose a conceptual framework for the regulation of AI
and algorithmic systems. I argue that we need tools to program, debug and
maintain an algorithmic social contract, a pact between various human
stakeholders, mediated by machines. To achieve this, we can adapt the concept
of human-in-the-loop (HITL) from the fields of modeling and simulation, and
interactive machine learning. In particular, I propose an agenda I call
society-in-the-loop (SITL), which combines the HITL control paradigm with
mechanisms for negotiating the values of various stakeholders affected by AI
systems, and monitoring compliance with the agreement. In short, 'SITL = HITL + Social Contract.'

Comment: (in press), Ethics and Information Technology, 2017