Automated Mechanism Design
Mechanism design has traditionally been a manual endeavor. In 2002, Conitzer and Sandholm introduced the automated mechanism design (AMD) approach, where the mechanism is computationally created for the specific problem instance at hand. This has several advantages: 1) it can yield better mechanisms than the ones known to date, 2) it applies beyond the problem classes studied manually to date, 3) it can circumvent seminal economic impossibility results that hold for classes of problems but not for all instances, and 4) it shifts the burden of design from man to machine. In this talk I will give an overview of our results on AMD to date. I will cover problem representations and the computational complexity of different variants of the design problem. Initial applications include revenue-maximizing combinatorial auctions and (combinatorial) public goods problems. Algorithms for AMD will be discussed. To reduce the computational complexity of designing optimal combinatorial auctions, I introduce an incentive compatible, individually rational subfamily called Virtual Valuations Combinatorial Auctions. The auction mechanism's revenue can be boosted (starting, for example, from the VCG mechanism) by hill-climbing in this subspace. I will also present computational complexity and communication complexity results that motivate multi-stage and non-truth-promoting mechanisms. Finally, I present our first steps toward automatically designing multi-stage mechanisms.
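The hill-climbing idea can be illustrated on a toy instance. The sketch below is my own minimal example, not the algorithm from the talk: it hill-climbs the weight c of a two-bidder weighted Vickrey auction for a single item (an affine maximizer, so truthful for any fixed c; c = 1 recovers the plain Vickrey/VCG auction), accepting any perturbation that raises average revenue over sampled valuation profiles.

```python
import random

def revenue(c, profiles):
    # Weighted second-price auction for one item, two bidders:
    # the allocation maximizes max(v0, c*v1); bidder 0 wins iff
    # v0 >= c*v1 and pays c*v1, otherwise bidder 1 wins and pays v0/c
    # (the standard affine-maximizer payment rule).
    total = 0.0
    for v0, v1 in profiles:
        total += c * v1 if v0 >= c * v1 else v0 / c
    return total / len(profiles)

def hill_climb(profiles, steps=200, seed=0):
    # Start from c = 1 (plain VCG) and keep any local perturbation
    # of the weight that improves empirical revenue.
    rng = random.Random(seed)
    c = 1.0
    best = revenue(c, profiles)
    for _ in range(steps):
        cand = max(0.1, c + rng.uniform(-0.1, 0.1))
        r = revenue(cand, profiles)
        if r > best:
            c, best = cand, r
    return c, best
```

Because only improving moves are accepted, the final revenue is never below the VCG starting point; when one bidder's values are systematically higher, a weight away from 1 typically yields strictly more revenue.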
Limitations of Incentive Compatibility on Discrete Type Spaces
In the design of incentive compatible mechanisms, a common approach is to
enforce incentive compatibility as constraints in programs that optimize over
feasible mechanisms. Such constraints are often imposed on sparsified
representations of the type spaces, such as their discretizations or samples,
in order for the program to be manageable. In this work, we explore limitations
of this approach, by studying whether all dominant strategy incentive
compatible mechanisms on a set of discrete types can be extended to the
convex hull of that set.
Dobzinski, Fu and Kleinberg (2015) answered the question affirmatively for
all settings where types are single dimensional. It is not difficult to show
that the same holds when the set of feasible outcomes is downward closed. In
this work we show that the question has a negative answer for certain
non-downward-closed settings with multi-dimensional types. This result should
call for caution in the use of the said approach to enforcing incentive
compatibility beyond single-dimensional preferences and downward closed
feasible outcomes.
Comment: 11 pages, 2 figures, to be published in the Thirty-Fourth AAAI Conference
on Artificial Intelligence.
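As a concrete illustration of the approach the abstract critiques, the sketch below (my own minimal single-parameter example, not taken from the paper) checks the dominant-strategy incentive compatibility constraints of a mechanism specified only on a discrete type set:

```python
def is_dsic(types, alloc, pay):
    # Dominant-strategy IC on a discrete type set: no true type t may
    # prefer the (allocation, payment) pair assigned to a misreport r.
    # Single-parameter values: utility of type t from outcome x is t*x.
    for t in types:                      # true type
        u_truth = t * alloc[t] - pay[t]
        for r in types:                  # candidate misreport
            if t * alloc[r] - pay[r] > u_truth + 1e-9:
                return False
    return True

types = [1, 2, 3, 4, 5]
# Posted price of 3: buy (and pay 3) iff the value is at least 3 -- DSIC.
posted_alloc = {t: int(t >= 3) for t in types}
posted_pay = {t: 3 * int(t >= 3) for t in types}
# "Pay your report" variant: same allocation, winners pay their bid -- not DSIC.
fp_pay = {t: t * int(t >= 3) for t in types}
```

In this single-dimensional setting, passing the discrete constraints does extend to the convex hull (as Dobzinski, Fu and Kleinberg show); the paper's point is that this extension can fail for multi-dimensional types with non-downward-closed outcome sets.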
A General Theory of Sample Complexity for Multi-Item Profit Maximization
The design of profit-maximizing multi-item mechanisms is a notoriously
challenging problem with tremendous real-world impact. The mechanism designer's
goal is to field a mechanism with high expected profit on the distribution over
buyers' values. Unfortunately, if the set of mechanisms he optimizes over is
complex, a mechanism may have high empirical profit over a small set of samples
but low expected profit. This raises the question, how many samples are
sufficient to ensure that the empirically optimal mechanism is nearly optimal
in expectation? We uncover structure shared by a myriad of pricing, auction,
and lottery mechanisms that allows us to prove strong sample complexity bounds:
for any set of buyers' values, profit is a piecewise linear function of the
mechanism's parameters. We prove new bounds for mechanism classes not yet
studied in the sample-based mechanism design literature and match or improve
over the best known guarantees for many classes. The profit functions we study
are significantly different from well-understood functions in machine learning,
so our analysis requires a sharp understanding of the interplay between
mechanism parameters and buyer values. We strengthen our main results with
data-dependent bounds when the distribution over buyers' values is
"well-behaved." Finally, we investigate a fundamental tradeoff in sample-based
mechanism design: complex mechanisms often have higher profit than simple
mechanisms, but more samples are required to ensure that empirical and expected
profit are close. We provide techniques for optimizing this tradeoff
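The piecewise-linearity the abstract exploits is easiest to see in the simplest case. The sketch below (an illustrative single-item posted-price example of mine, not the paper's general construction) uses the fact that empirical profit as a function of the price is piecewise linear with breakpoints at the sampled values, so the empirically optimal price can be found by searching only the samples:

```python
def empirical_profit(price, samples):
    # Average profit of posting `price`: each sampled buyer whose value
    # is at least the price buys and pays the price.
    return price * sum(v >= price for v in samples) / len(samples)

def best_posted_price(samples):
    # Between consecutive sampled values the number of buyers is fixed,
    # so profit is linear in the price there; the empirical optimum is
    # therefore attained at one of the sampled values.
    return max(set(samples), key=lambda p: empirical_profit(p, samples))
```

With finitely many breakpoints per sample set, uniform convergence arguments over such piecewise-linear profit functions are what drive the sample complexity bounds the abstract describes.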
Instantiating the contingent bids model of truthful interdependent value auctions
Citation: Ito, Takayuki, and David C. Parkes. 2006. Instantiating the contingent bids model of truthful interdependent value auctions.
Methods for Boosting Revenue in Combinatorial Auctions
We study the recognized open problem of designing revenue-maximizing combinatorial auctions. It is unsolved even for two bidders and two items for sale. Rather than pursuing the pure economic approach of attempting to characterize the optimal auction, we explore techniques for automatically modifying existing mechanisms in a way that increases expected revenue. We introduce a general family of auctions, based on bidder weighting and allocation boosting, which we call virtual valuations combinatorial auctions (VVCA). All auctions in the family are based on the Vickrey-Clarke-Groves (VCG) mechanism, executed on virtual valuations that are linear transformations of the bidders' real valuations. The restriction to linear transformations is motivated by incentive compatibility. The auction family is parameterized by the coefficients in the linear transformations.
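A minimal sketch of the VCG-on-virtual-valuations idea, for two bidders and two items. This is my own simplified illustration using only the multiplicative weights of the linear transformations (full VVCA additionally allows bundle-specific additive boosts), with the standard affine-maximizer payment rule:

```python
from itertools import product

def weighted_vcg(vals, weights):
    # vals[i]: dict mapping bundle (frozenset of items) -> bidder i's value.
    # Allocation maximizes the weighted welfare sum_j weights[j]*v_j(S_j);
    # with weights all 1 this is plain VCG.
    items = ['A', 'B']          # two items for concreteness
    bidders = range(len(vals))
    def bundle(alloc, i):
        # alloc[k] is the bidder who receives items[k]
        return frozenset(it for it, owner in zip(items, alloc) if owner == i)
    def welfare(alloc, exclude=None):
        return sum(weights[j] * vals[j][bundle(alloc, j)]
                   for j in bidders if j != exclude)
    allocs = list(product(bidders, repeat=len(items)))
    best = max(allocs, key=welfare)
    payments = []
    for i in bidders:
        # Affine-maximizer payment: externality imposed on the others,
        # measured in weighted welfare and scaled back by 1/weights[i].
        others_best = max(welfare(a, exclude=i) for a in allocs)
        payments.append((others_best - welfare(best, exclude=i)) / weights[i])
    return best, payments
```

Tuning the weights (e.g. by the hill-climbing mentioned above) changes both the allocation and the payments while each fixed choice of weights remains incentive compatible.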