Preference-Informed Fairness
We study notions of fairness in decision-making systems when individuals have
diverse preferences over the possible outcomes of the decisions. Our starting
point is the seminal work of Dwork et al., which introduced a notion of
individual fairness (IF): given a task-specific similarity metric, every pair
of individuals who are similarly qualified according to the metric should
receive similar outcomes. We show that when individuals have diverse
preferences over outcomes, requiring IF may unintentionally lead to
less-preferred outcomes for the very individuals that IF aims to protect. A
natural alternative to IF is the classic notion of fair division, envy-freeness
(EF): no individual should prefer another individual's outcome over their own.
Although EF allows for solutions where all individuals receive a
highly preferred outcome, EF may also be overly restrictive. For instance, if
many individuals agree on the best outcome, then if any individual receives
this outcome, they all must receive it, regardless of each individual's
underlying qualifications for the outcome.
We introduce and study a new notion of preference-informed individual
fairness (PIIF) that is a relaxation of both individual fairness and
envy-freeness. At a high level, PIIF requires that outcomes satisfy IF-style
constraints, but allows for deviations provided they are in line with
individuals' preferences. We show that PIIF can permit outcomes that are more
favorable to individuals than any IF solution, while providing considerably
more flexibility to the decision-maker than EF. In addition, we show how to
efficiently optimize any convex objective over the outcomes subject to PIIF for
a rich class of individual preferences. Finally, we demonstrate the broad
applicability of the PIIF framework by extending our definitions and algorithms
to the multiple-task targeted advertising setting introduced by Dwork and
Ilvento.
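The relaxation can be sketched with a simplified feasibility check (a stand-in for the paper's formal definition; the total-variation divergence, the utility representation, and the "no envy toward the other party" test are assumptions made here for illustration): a pair of individuals passes if their outcome distributions satisfy the IF constraint, or if the deviation is preference-aligned in the sense that the individual does not envy the other's outcome.

```python
import numpy as np

def tv(p, q):
    """Total variation distance between two outcome distributions."""
    return 0.5 * np.abs(p - q).sum()

def piif_ok(outcomes, metric, utils, tol=1e-9):
    """Simplified PIIF-style check (illustrative, not the paper's exact
    definition): every ordered pair (i, j) must either satisfy the IF
    constraint (outcome distributions no farther apart than the
    task-specific similarity metric allows) or deviate in a
    preference-aligned way (i does not envy j's outcome)."""
    n = len(outcomes)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if_ok = tv(outcomes[i], outcomes[j]) <= metric[i][j] + tol
            no_envy = utils[i] @ outcomes[i] >= utils[i] @ outcomes[j] - tol
            if not (if_ok or no_envy):
                return False
    return True

# Two similar individuals (metric distance 0.1) with opposite preferences.
# IF alone would force nearly identical outcomes; this relaxed check
# accepts giving each their preferred option, since neither envies the other.
outcomes = np.array([[0.9, 0.1],   # individual 0 mostly gets option A
                     [0.1, 0.9]])  # individual 1 mostly gets option B
metric = [[0.0, 0.1], [0.1, 0.0]]
utils = np.array([[1.0, 0.0],      # 0 prefers option A
                  [0.0, 1.0]])     # 1 prefers option B
print(piif_ok(outcomes, metric, utils))  # True: deviations are preference-aligned
```

If the utilities were swapped so that each individual preferred the other's outcome, both the IF test and the no-envy test would fail for some pair and the check would return False.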
Perturbing Inputs to Prevent Model Stealing
We show how perturbing inputs to a machine learning service (ML-service)
deployed in the cloud can protect against model stealing attacks. In our
formulation, there is an ML-service that receives inputs from users and returns
the output of the model. There is an attacker that is interested in learning
the parameters of the ML-service. We use the linear and logistic regression
models to illustrate how strategically adding noise to the inputs fundamentally
alters the attacker's estimation problem. We show that even with infinite
samples, the attacker would not be able to recover the true model parameters.
We focus on characterizing the trade-off between the error in the attacker's
estimate of the parameters and the error in the ML-service's output.
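The logistic case can be illustrated with a toy simulation (the noise model, parameters, and fitting procedure here are illustrative assumptions, not the paper's exact scheme): when the service evaluates its model on noise-perturbed copies of the attacker's queries, the attacker's maximum-likelihood fit on the clean queries converges to an attenuated version of the true weights, so even a large query budget does not recover the true parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical true logistic model held by the ML-service.
w_true = np.array([2.0, -1.5])
n, sigma = 50_000, 1.0

X = rng.normal(size=(n, 2))                # attacker's chosen queries
Z = X + sigma * rng.normal(size=X.shape)   # service perturbs the inputs
y = rng.random(n) < sigmoid(Z @ w_true)    # labels computed on perturbed inputs

# The attacker fits logistic regression of y on the *clean* queries X by
# gradient descent.  Input noise attenuates the recovered weights, so the
# estimate stays biased toward zero even with many samples.
w_hat = np.zeros(2)
for _ in range(2000):
    grad = X.T @ (sigmoid(X @ w_hat) - y) / n
    w_hat -= 0.5 * grad

print(np.linalg.norm(w_true), np.linalg.norm(w_hat))
```

The attacker's estimate points in roughly the right direction but is substantially shorter than the true weight vector, which is the errors-in-variables attenuation effect the abstract alludes to.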
TRIDEnT: Building Decentralized Incentives for Collaborative Security
Sophisticated mass attacks, especially when exploiting zero-day
vulnerabilities, have the potential to cause destructive damage to
organizations and critical infrastructure. To detect and contain such attacks
promptly, collaboration among defenders is critical. By correlating
real-time detection information (alerts) from multiple sources (collaborative
intrusion detection), defenders can detect attacks and take the appropriate
defensive measures in time. However, although the technical tools to facilitate
collaboration exist, real-world adoption of such collaborative security
mechanisms is still underwhelming. This is largely due to a lack of trust and
participation incentives for companies and organizations. This paper proposes
TRIDEnT, a novel collaborative platform that aims to enable and incentivize
parties to exchange network alert data, thus increasing their overall detection
capabilities. TRIDEnT allows parties that may be in a competitive relationship
to selectively advertise, sell, and acquire security alerts in the form of
(near) real-time peer-to-peer streams. To validate the basic principles behind
TRIDEnT, we present an intuitive game-theoretic model of alert sharing, which
is of independent interest, and show that collaboration is bound to take place
infinitely often. Furthermore, to demonstrate the feasibility of our approach,
we instantiate our design in a decentralized manner using Ethereum smart
contracts and provide a fully functional prototype.
Comment: 28 pages
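The repeated-game intuition behind sustained alert sharing can be sketched as follows (the payoff values and the grim-trigger strategy are illustrative assumptions, not TRIDEnT's actual model): one-shot sharing is a prisoner's dilemma in which withholding dominates, but when parties interact repeatedly and value future rounds enough, mutual sharing becomes an equilibrium.

```python
# Toy repeated alert-sharing game (payoffs are illustrative assumptions).
# R: both parties share alerts; T: withhold while the other shares;
# P: both withhold.  T > R > P gives the one-shot dilemma.
R, T, P = 3.0, 4.0, 1.0

def sharing_sustainable(delta):
    """Grim trigger: share until the partner withholds, then withhold
    forever.  Sharing is an equilibrium iff the discounted payoff of
    cooperating beats a one-shot deviation followed by punishment:
    R/(1-delta) >= T + delta*P/(1-delta)."""
    return R / (1 - delta) >= T + delta * P / (1 - delta)

critical = (T - R) / (T - P)   # closed-form threshold on the discount factor
print(critical)                # 1/3 with these payoffs
for delta in (0.2, 0.5, 0.9):
    print(delta, sharing_sustainable(delta))
```

With these payoffs, parties that discount the future heavily (delta = 0.2) defect, while more patient parties (delta >= 1/3) sustain sharing round after round, matching the abstract's claim that collaboration takes place infinitely often.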
Consideration Sets and Competitive Marketing
We study a market model in which competing firms use costly marketing devices to influence the set of alternatives which consumers perceive as relevant. Consumers in our model are boundedly rational in the sense that they have an imperfect perception of what is relevant to their decision problem. They apply well-defined preferences to a "consideration set", which is a function of the marketing devices employed by the firms. We examine the implications of this behavioral model in the context of a competitive market model, particularly on industry profits, vertical product differentiation, the use of marketing devices and consumers' conversion rates.
Keywords: consideration sets, marketing, industrial organization, advertising, default bias, inertia, product display, bounded rationality, limited attention, persuasion
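The behavioral rule can be sketched in a few lines (the products and utility values are hypothetical): the consumer maximizes a fixed, well-defined preference, but only over the consideration set that firms' marketing induces, so marketing changes choices without changing preferences.

```python
def choose(utility, consideration_set):
    """Boundedly rational choice: maximize a well-defined preference,
    but only over the alternatives the consumer actually considers."""
    return max(consideration_set, key=lambda a: utility[a])

# Hypothetical utilities over three products.
utility = {"A": 3, "B": 2, "C": 1}

print(choose(utility, {"A", "B", "C"}))  # "A": full attention
print(choose(utility, {"B", "C"}))       # "B": marketing kept A out of view
```

The same preference yields different market outcomes depending on which alternatives marketing places in view, which is the channel through which firms compete in the model.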
Distributed Information Retrieval using Keyword Auctions
This report motivates the need for large-scale distributed approaches to information retrieval, and proposes solutions based on keyword auctions.
DYNAMIC STRATEGIC INTERACTION: A SYNTHESIS OF MODELING METHODS
Research Methods / Statistical Methods