Mechanisms for Risk Averse Agents, Without Loss
Auctions in which agents' payoffs are random variables have received
increased attention in recent years. In particular, recent work in algorithmic
mechanism design has produced mechanisms employing internal randomization,
partly in response to limitations on deterministic mechanisms imposed by
computational complexity. For many of these mechanisms, which are often
referred to as truthful-in-expectation, incentive compatibility is contingent
on the assumption that agents are risk-neutral. These mechanisms have been
criticized on the grounds that this assumption is too strong, because "real"
agents are typically risk averse, and moreover their precise attitude towards
risk is typically unknown a priori. In response, researchers in algorithmic
mechanism design have sought the design of universally-truthful mechanisms ---
mechanisms for which incentive-compatibility makes no assumptions regarding
agents' attitudes towards risk.
We show that any truthful-in-expectation mechanism can be generically
transformed into a mechanism that is incentive compatible even when agents are
risk averse, without modifying the mechanism's allocation rule. The transformed
mechanism does not require reporting of agents' risk profiles. Equivalently,
our result can be stated as follows: Every (randomized) allocation rule that is
implementable in dominant strategies when players are risk neutral is also
implementable when players are endowed with an arbitrary and unknown concave
utility function for money.
Comment: Presented at the workshop on risk aversion in algorithmic game theory
and mechanism design, held in conjunction with EC 201
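A small numeric sketch of the risk-aversion issue the abstract describes: two payment schemes with equal expectation are interchangeable for a risk-neutral agent, but not for one with a concave utility for money. The utility function u(x) = sqrt(x) below is an illustrative assumption, not taken from the paper.

```python
import math

# A lottery paying 0 or 100 with equal probability has expectation 50, the
# same as a sure payment of 50. A risk-neutral agent is indifferent, but an
# agent with concave utility for money (here u(x) = sqrt(x), an illustrative
# choice) strictly prefers the sure payment -- which is why
# truthful-in-expectation guarantees can break down under risk aversion.

def expected_utility(lottery, u):
    """Expected utility of a lottery given as (payoff, probability) pairs."""
    return sum(p * u(x) for x, p in lottery)

u = math.sqrt                    # concave (risk-averse) utility for money
risky = [(0, 0.5), (100, 0.5)]   # expectation 50, but random
sure = [(50, 1.0)]               # expectation 50, deterministic

assert expected_utility(sure, u) > expected_utility(risky, u)
```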
Budget Feasible Mechanism Design: From Prior-Free to Bayesian
Budget feasible mechanism design studies procurement combinatorial auctions
where the sellers have private costs to produce items, and the
buyer (auctioneer) aims to maximize a social valuation function on subsets of
items, under the budget constraint on the total payment. One of the most
important questions in the field is "which valuation domains admit truthful
budget feasible mechanisms with `small' approximations (compared to the social
optimum)?" Singer showed that additive and submodular functions have such
constant approximations. Recently, Dobzinski, Papadimitriou, and Singer gave an
O(log^2 n)-approximation mechanism for subadditive functions; they also
remarked that: "A fundamental question is whether, regardless of computational
constraints, a constant-factor budget feasible mechanism exists for subadditive
functions."
We address this question from two viewpoints: prior-free worst case analysis
and Bayesian analysis. For the prior-free framework, we use an LP that
describes the fractional cover of the valuation function; it is also connected
to the concept of approximate core in cooperative game theory. We provide an
O(I)-approximation mechanism for subadditive functions, where I is the
worst-case integrality gap of this LP. This implies an O(log n)-approximation
for subadditive valuations, an O(1)-approximation for XOS valuations, and an
O(1)-approximation for any valuation class with constant I. XOS valuations are
an important class of functions lying between the submodular and subadditive
classes. We also give a polynomial-time O(log n / log log n)-approximation
mechanism for subadditive valuations.
For the Bayesian framework, we provide a constant approximation mechanism for
all subadditive functions, using the above prior-free mechanism for XOS
valuations as a subroutine. Our mechanism allows correlations in the
distribution of private information and is universally truthful.
Comment: to appear in STOC 201
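The valuation classes above can be made concrete with a small brute-force check. An XOS valuation is a pointwise maximum of additive ("clause") valuations, and every XOS valuation is subadditive, i.e. v(S ∪ T) ≤ v(S) + v(T) for all S, T. The toy valuation below is a hypothetical example, not from the paper:

```python
from itertools import combinations

def all_subsets(items):
    """All subsets of a collection, as frozensets."""
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_subadditive(v, items):
    """Brute-force check of v(S | T) <= v(S) + v(T) for all subsets S, T."""
    subs = all_subsets(items)
    return all(v(S | T) <= v(S) + v(T) + 1e-9 for S in subs for T in subs)

# A toy XOS valuation: the maximum over two additive "clause" valuations.
clauses = [{'a': 3, 'b': 1}, {'a': 1, 'b': 2, 'c': 2}]

def v_xos(S):
    return max(sum(c.get(i, 0) for i in S) for c in clauses)

assert is_subadditive(v_xos, ['a', 'b', 'c'])
```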
Hedging predictions in machine learning
Recent advances in machine learning make it possible to design efficient
prediction algorithms for data sets with huge numbers of parameters. This paper
describes a new technique for "hedging" the predictions output by many such
algorithms, including support vector machines, kernel ridge regression, kernel
nearest neighbours, and many other state-of-the-art methods. The hedged
predictions for the labels of new objects include quantitative measures of
their own accuracy and reliability. These measures are provably valid under the
assumption of randomness, traditional in machine learning: the objects and
their labels are assumed to be generated independently from the same
probability distribution. In particular, it becomes possible to control (up to
statistical fluctuations) the number of erroneous predictions by selecting a
suitable confidence level. Validity being achieved automatically, the remaining
goal of hedged prediction is efficiency: taking full account of the new
objects' features and other available information to produce as accurate
predictions as possible. This can be done successfully using the powerful
machinery of modern machine learning.
Comment: 24 pages; 9 figures; 2 tables; a version of this paper (with
discussion and rejoinder) is to appear in "The Computer Journal"
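The hedging described above is conformal prediction. A minimal sketch of its split (inductive) variant for regression, using a hypothetical 1-nearest-neighbour point predictor on synthetic data: absolute residuals on held-out calibration data become nonconformity scores, and a quantile of those scores yields intervals whose error rate is controlled by the chosen significance level eps (valid under the randomness assumption the abstract states).

```python
import math
import random

random.seed(0)
# Synthetic data under the randomness assumption: i.i.d. pairs (x, y).
data = [(x, 2 * x + random.gauss(0, 0.5))
        for x in (random.uniform(0, 10) for _ in range(200))]
train, calib = data[:100], data[100:]

def predict(x):
    # Hypothetical point predictor: 1-nearest-neighbour over the training set.
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Nonconformity scores: absolute residuals on the calibration set.
scores = sorted(abs(y - predict(x)) for x, y in calib)

def interval(x, eps=0.1):
    """Prediction interval that misses the true label with probability ~eps."""
    k = math.ceil((1 - eps) * (len(scores) + 1)) - 1
    q = scores[min(k, len(scores) - 1)]
    yhat = predict(x)
    return yhat - q, yhat + q

lo, hi = interval(5.0)
assert lo < predict(5.0) < hi
```

Lowering eps widens the interval: more confidence requires a larger quantile of the calibration residuals, which is exactly the validity/efficiency trade-off the abstract describes.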