10 research outputs found
Settling the Sample Complexity of Single-parameter Revenue Maximization
This paper settles the sample complexity of single-parameter revenue
maximization by showing matching upper and lower bounds, up to a
poly-logarithmic factor, for all families of value distributions that have been
considered in the literature. The upper bounds are unified under a novel
framework, which builds on the strong revenue monotonicity by Devanur, Huang,
and Psomas (STOC 2016), and an information-theoretic argument. This is
fundamentally different from previous approaches, which rely on either
constructing an ε-net of the mechanism space, explicitly or implicitly
via statistical learning theory, or learning an approximately accurate version
of the virtual values. To our knowledge, this is the first time information-theoretic
arguments are used to show sample complexity upper bounds, instead
of lower bounds. Our lower bounds are also unified under a meta construction of
hard instances. Comment: 49 pages, accepted by STOC.
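To make the learning problem concrete, here is a minimal sketch of reserve-price learning from samples in the simplest single-parameter setting: one bidder, posted price, empirical revenue maximization. The estimator and names are illustrative assumptions, not the paper's general framework.

```python
import random

def empirical_reserve(samples):
    """Best posted price on the empirical distribution (single bidder).

    Sort values descending: posting the i-th highest value sells to i of
    the n samples, for empirical revenue v_(i) * i / n. Illustrative only;
    the paper's bounds cover general single-parameter settings.
    """
    n = len(samples)
    best_price, best_rev = 0.0, 0.0
    for i, v in enumerate(sorted(samples, reverse=True), start=1):
        if v * i / n > best_rev:
            best_price, best_rev = v, v * i / n
    return best_price

random.seed(0)
samples = [random.uniform(0, 1) for _ in range(10_000)]
# For Uniform[0,1] the monopoly reserve is 0.5; the estimate should be close.
print(empirical_reserve(samples))
```

With enough samples the empirical maximizer concentrates around the true monopoly price; how many samples suffice, as a function of the precision and the distribution family, is exactly the sample complexity question the paper settles.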
A Scalable Neural Network for DSIC Affine Maximizer Auction Design
Automated auction design aims to find empirically high-revenue mechanisms
through machine learning. Existing works on multi item auction scenarios can be
roughly divided into RegretNet-like and affine maximizer auctions (AMAs)
approaches. However, the former cannot strictly ensure dominant strategy
incentive compatibility (DSIC), while the latter faces scalability issues due to
the large number of allocation candidates. To address these limitations, we
propose AMenuNet, a scalable neural network that constructs the AMA parameters
(even including the allocation menu) from bidder and item representations.
AMenuNet is always DSIC and individually rational (IR) due to the properties of
AMAs, and it enhances scalability by generating candidate allocations through a
neural network. Additionally, AMenuNet is permutation equivariant, and its
number of parameters is independent of auction scale. We conduct extensive
experiments to demonstrate that AMenuNet outperforms strong baselines in both
contextual and non-contextual multi-item auctions, scales well to larger
auctions, generalizes well to different settings, and identifies useful
deterministic allocations. Overall, our proposed approach offers an effective
solution to automated DSIC auction design, with improved scalability and strong
revenue performance in various settings. Comment: NeurIPS 2023 (spotlight).
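The DSIC/IR guarantee of AMAs comes from their VCG-like structure: pick the allocation maximizing affinely weighted welfare plus a boost, and charge weighted externalities. The sketch below runs an AMA over an explicit menu; the fixed weights and boosts stand in for the quantities AMenuNet would learn and are illustrative assumptions.

```python
def run_ama(menu, values, weights, boosts):
    """Run an affine maximizer auction (AMA) over an explicit allocation menu.

    values[i][a] is bidder i's value for allocation index a; weights[i] and
    boosts[a] are the AMA parameters. Returns the chosen allocation index
    and each bidder's payment (weighted-VCG externality).
    """
    n, m = len(values), len(menu)

    def affine_welfare(a, exclude=None):
        total = boosts[a]
        for i in range(n):
            if i != exclude:
                total += weights[i] * values[i][a]
        return total

    a_star = max(range(m), key=affine_welfare)
    payments = []
    for i in range(n):
        best_without_i = max(affine_welfare(a, exclude=i) for a in range(m))
        payments.append((best_without_i - affine_welfare(a_star, exclude=i)) / weights[i])
    return a_star, payments

# Unit weights and zero boosts reduce to VCG: with values 3 vs 2 for the
# single item, bidder 0 wins and pays the second price.
print(run_ama(["item->0", "item->1"], [[3, 0], [0, 2]], [1, 1], [0, 0]))
```

The scalability issue the abstract mentions is visible here: the loop is linear in the menu size, which explodes combinatorially in multi-item settings unless, as in AMenuNet, the candidate allocations are themselves generated by a network.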
A Permutation-Equivariant Neural Network Architecture For Auction Design
Designing an incentive compatible auction that maximizes expected revenue is
a central problem in Auction Design. Theoretical approaches to the problem have
hit some limits in the past decades and analytical solutions are known for only
a few simple settings. Computational approaches to the problem through the use
of LPs have their own set of limitations. Building on the success of deep
learning, a new approach was recently proposed by Duetting et al. (2019) in
which the auction is modeled by a feed-forward neural network and the design
problem is framed as a learning problem. The neural architectures used in that
work are general purpose and do not take advantage of any of the symmetries the
problem could present, such as permutation equivariance. In this work, we
consider auction design problems that have permutation-equivariant symmetry and
construct a neural architecture that is capable of perfectly recovering the
permutation-equivariant optimal mechanism, which we show is not possible with
the previous architecture. We demonstrate that permutation-equivariant
architectures are not only capable of recovering previous results but also
have better generalization properties.
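Permutation equivariance here means that relabeling the bidders relabels the outputs the same way. A generic DeepSets-style sketch of such a layer (not the paper's exact architecture) makes the property easy to verify numerically:

```python
import numpy as np

def equivariant_layer(X, A, B):
    """A simple permutation-equivariant linear layer (DeepSets-style sketch).

    X has shape (n_bidders, d). Each row is transformed by A, plus a term
    depending only on the mean over bidders, so permuting the rows of X
    permutes the rows of the output identically.
    """
    return X @ A + np.ones((X.shape[0], 1)) @ (X.mean(axis=0, keepdims=True) @ B)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
A, B = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))
perm = rng.permutation(4)
# Equivariance: permuting bidders before the layer equals permuting after.
assert np.allclose(equivariant_layer(X[perm], A, B), equivariant_layer(X, A, B)[perm])
```

Stacking such layers keeps the whole network equivariant, and the parameter count is independent of the number of bidders, which is what allows an architecture like this to bake the problem's symmetry in rather than hoping a general-purpose network learns it.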
Strong Revenue (Non-)Monotonicity of Single-parameter Auctions
Consider Myerson's optimal auction with respect to an inaccurate prior, e.g.,
estimated from data, which is an underestimation of the true value
distribution. Can the auctioneer expect to get at least the optimal revenue
w.r.t. the inaccurate prior, given that the true value distribution is larger? This
so-called strong revenue monotonicity is known to be true for single-parameter
auctions when the feasible allocations form a matroid. We find that strong
revenue monotonicity fails to generalize beyond the matroid setting, and
further show that auctions in the matroid setting are the only downward-closed
auctions that satisfy strong revenue monotonicity. On the flip side, we recover
an approximate version of strong revenue monotonicity that holds for all
single-parameter auctions, even without downward-closedness. As applications,
we get sample complexity upper bounds for single-parameter auctions under
matroid constraints, downward-closed constraints, and general constraints. They
improve the state-of-the-art upper bounds and are tight up to logarithmic
factors.
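The single-bidder, single-item case is a rank-one matroid, so strong revenue monotonicity holds there, and it can be checked in closed form. The numbers below (uniform priors, the c/2 monopoly price for Uniform[0, c]) are a standard textbook instance chosen for illustration:

```python
def uniform_revenue(price, high):
    # Expected revenue of posting `price` to one buyer with value ~ U[0, high].
    return price * max(0.0, 1 - price / high)

# Inaccurate prior U[0, 0.8] underestimates the true U[0, 1] (first-order
# stochastic dominance). Myerson's price for U[0, c] is c/2.
price_hat = 0.8 / 2
opt_rev_hat = uniform_revenue(price_hat, 0.8)   # optimal revenue w.r.t. the prior
true_rev = uniform_revenue(price_hat, 1.0)      # what the seller actually gets
assert true_rev >= opt_rev_hat                  # strong revenue monotonicity
print(opt_rev_hat, true_rev)
```

Here the mechanism tuned to the underestimated prior earns 0.24 against the true distribution, comfortably above the prior's optimal revenue of 0.2. The paper's point is that beyond matroid settings this guarantee can fail exactly, though an approximate version survives.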
Learning Reserve Prices in Second-Price Auctions
This paper proves the tight sample complexity of Second-Price Auction with Anonymous Reserve, up to a logarithmic factor, for each of the value distribution families studied in the literature: [0,1]-bounded, [1,H]-bounded, regular, and monotone hazard rate (MHR). Remarkably, the setting-specific tight sample complexity poly(ε^{-1}) depends on the precision ε ∈ (0, 1), but not on the number of bidders n ≥ 1. Further, in the two bounded-support settings, our learning algorithm allows correlated value distributions.
In contrast, the tight sample complexity Θ̃(n) · poly(ε^{-1}) of Myerson Auction proved by Guo, Huang and Zhang (STOC 2019) has a nearly-linear dependence on n ≥ 1, and holds only for independent value distributions in every setting.
We follow a framework similar to that of the Guo-Huang-Zhang work, but replace their information-theoretic arguments with a direct proof.
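The mechanism being learned is simple to state: screen out bids below a single anonymous reserve, then run a second-price auction among the survivors. A minimal sketch of that allocation and payment rule:

```python
def spa_anonymous_reserve(bids, reserve):
    """Second-Price Auction with Anonymous Reserve.

    Bidders below the reserve are screened out; the highest remaining
    bidder wins and pays the larger of the reserve and the second-highest
    surviving bid. Returns (winner_index or None, payment).
    """
    eligible = [(b, i) for i, b in enumerate(bids) if b >= reserve]
    if not eligible:
        return None, 0.0
    eligible.sort(reverse=True)
    winner = eligible[0][1]
    runner_up = eligible[1][0] if len(eligible) > 1 else reserve
    return winner, max(reserve, runner_up)

# With bids (5, 3, 1) and reserve 2, bidder 0 wins and pays 3.
print(spa_anonymous_reserve([5, 3, 1], 2))
```

The only learnable parameter is the scalar reserve, which is intuition for why the sample complexity can avoid any dependence on the number of bidders, unlike Myerson Auction, which must learn each bidder's distribution.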
Learning Reserve Prices in Second-Price Auctions
This paper proves the tight sample complexity of Second-Price Auction with
Anonymous Reserve, up to a logarithmic factor, for all value distribution
families that have been considered in the literature. Compared to Myerson
Auction, whose sample complexity was settled very recently in (Guo, Huang and
Zhang, STOC 2019), Anonymous Reserve requires far fewer samples to learn. We
follow a framework similar to that of the Guo-Huang-Zhang work, but replace
their information-theoretic argument with a direct proof.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Bayesian Auction Design and Approximation
We study two classes of problems within Algorithmic Economics: revenue guarantees of simple mechanisms, and social welfare guarantees of auctions. We develop new structural and algorithmic tools for addressing these problems, and obtain the following results:
In the k-unit model, four canonical mechanisms can be classified as: (i) the discriminating group, including Myerson Auction and Sequential Posted-Pricing, and (ii) the anonymous group, including Anonymous Reserve and Anonymous Pricing. We prove that any two mechanisms from the same group have an asymptotically tight revenue gap of 1 + Θ(1/√k), while any two mechanisms from different groups have an asymptotically tight revenue gap of Θ(log k).
In the single-item model, we prove a nearly-tight sample complexity of Anonymous Reserve for every value distribution family investigated in the literature: [0, 1]-bounded, [1, H]-bounded, regular, and monotone hazard rate (MHR).
Remarkably, the setting-specific sample complexity poly(ε⁻¹) depends on the precision ε ∈ (0, 1), but not on the number of bidders n ≥ 1. Further, in the two bounded-support settings, our algorithm allows correlated value distributions. These are in sharp contrast to the previous (nearly-tight) sample complexity results on Myerson Auction.
In the single-item model, we prove that the tight Price of Anarchy/Stability for First Price Auctions are both PoA = PoS = 1 - 1/e² ≈ 0.8647.
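The 1 - 1/e² bound is a worst case over equilibria and value distributions; in the textbook symmetric setting it is not attained. For n i.i.d. Uniform[0,1] bidders the classical symmetric equilibrium bid is b(v) = (n-1)/n · v, which is strictly increasing, so the highest-value bidder always wins and welfare is fully efficient. The simulation below (parameters chosen for illustration) checks this:

```python
import random

def first_price_winner(values, n):
    # Symmetric Bayes-Nash equilibrium for n i.i.d. U[0,1] bidders:
    # bid b(v) = (n - 1)/n * v (a classical closed form).
    bids = [(n - 1) / n * v for v in values]
    return max(range(n), key=lambda i: bids[i])

random.seed(0)
n, trials = 4, 1000
efficient = 0
for _ in range(trials):
    values = [random.random() for _ in range(n)]
    if first_price_winner(values, n) == values.index(max(values)):
        efficient += 1
print(efficient / trials)  # 1.0: the symmetric equilibrium is fully efficient
```

Since symmetric i.i.d. equilibria allocate efficiently, the worst-case welfare loss that the tight 1 - 1/e² bound captures must come from asymmetric value distributions, where equilibrium bids can invert the value order.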