11 research outputs found

    A Scalable Neural Network for DSIC Affine Maximizer Auction Design

    Automated auction design aims to find empirically high-revenue mechanisms through machine learning. Existing works on multi-item auction scenarios can be roughly divided into RegretNet-like approaches and affine maximizer auctions (AMAs). However, the former cannot strictly ensure dominant strategy incentive compatibility (DSIC), while the latter faces scalability issues due to the large number of allocation candidates. To address these limitations, we propose AMenuNet, a scalable neural network that constructs the AMA parameters (even including the allocation menu) from bidder and item representations. AMenuNet is always DSIC and individually rational (IR) due to the properties of AMAs, and it enhances scalability by generating candidate allocations through a neural network. Additionally, AMenuNet is permutation equivariant, and its number of parameters is independent of auction scale. We conduct extensive experiments to demonstrate that AMenuNet outperforms strong baselines in both contextual and non-contextual multi-item auctions, scales well to larger auctions, generalizes well to different settings, and identifies useful deterministic allocations. Overall, our proposed approach offers an effective solution to automated DSIC auction design, with improved scalability and strong revenue performance in various settings. Comment: NeurIPS 2023 (spotlight)
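
    To make the AMA machinery concrete, below is a minimal sketch of the auction rule that such learned parameters feed into, assuming the standard affine maximizer definition (bidder weights $w_i$, allocation boosts $\lambda(a)$, and a finite allocation menu); the function and variable names are illustrative and are not AMenuNet's actual interface.

```python
import numpy as np

def ama_outcome(values, weights, boosts):
    """Standard affine maximizer auction (illustrative sketch, not AMenuNet itself).

    values:  (n_bidders, n_candidates) array; values[i, a] is bidder i's
             reported value for candidate allocation a in the menu.
    weights: (n_bidders,) positive bidder weights w_i.
    boosts:  (n_candidates,) allocation boosts lambda(a).
    Returns the chosen candidate index and per-bidder payments.
    """
    # Affine welfare of each candidate: sum_i w_i v_i(a) + lambda(a)
    affine_welfare = weights @ values + boosts
    a_star = int(np.argmax(affine_welfare))

    payments = np.zeros(len(weights))
    for i in range(len(weights)):
        # Affine welfare with bidder i removed, over the same menu.
        others = affine_welfare - weights[i] * values[i]
        # Weighted-VCG payment: externality imposed on the others, scaled by 1/w_i.
        payments[i] = (others.max() - others[a_star]) / weights[i]
    return a_star, payments

# Toy example: 2 bidders, a menu of 3 candidate allocations.
values = np.array([[1.0, 0.2, 0.6],
                   [0.1, 0.9, 0.5]])
a_star, pay = ama_outcome(values, weights=np.array([1.0, 1.0]),
                          boosts=np.array([0.0, 0.0, 0.1]))
print(a_star, pay)
```

    The winning candidate maximizes affinely weighted welfare, and each payment is the weighted externality the bidder imposes on the others; this structure is what makes any choice of menu, weights, and boosts DSIC and IR, regardless of how a network produces them.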

    Corruption-Robust Lipschitz Contextual Search

    I study the problem of learning a Lipschitz function with corrupted binary signals. The learner tries to learn an $L$-Lipschitz function $f: [0,1]^d \rightarrow [0, L]$ that the adversary chooses. There is a total of $T$ rounds. In each round $t$, the adversary selects a context vector $x_t$ in the input space, and the learner makes a guess at the true function value $f(x_t)$ and receives a binary signal indicating whether the guess is high or low. In a total of $C$ rounds the signal may be corrupted, though the value of $C$ is unknown to the learner. The learner's goal is to incur a small cumulative loss. This work introduces the new algorithmic technique of agnostic checking as well as new analysis techniques. I design algorithms achieving the following guarantees: for the symmetric loss, regret $L \cdot O(C \log T)$ when $d = 1$ and $L \cdot O_d(C \log T + T^{(d-1)/d})$ when $d > 1$; for the pricing loss, regret $L \cdot \widetilde{O}(T^{d/(d+1)} + C \cdot T^{1/(d+1)})$. Comment: Accepted at ALT 202
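
    As a concrete illustration of the interaction protocol above (and not of the paper's agnostic-checking algorithm), the following sketch simulates the $d = 1$ setting with a naive, non-robust bisection learner; the target function, the corruption pattern, and the grid learner are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
L, T, C = 1.0, 500, 20                 # Lipschitz constant, rounds, corruption budget (d = 1 here)
f = lambda x: L * x                    # adversary's hidden L-Lipschitz function on [0, 1]
corrupted = set(rng.choice(T, size=C, replace=False))   # rounds with flipped signals

# Naive (non-robust) baseline learner: bucket [0, 1] into cells and keep a
# feasible value interval [lo, hi] per cell, bisecting on each binary signal.
n_cells = 20
lo, hi = np.zeros(n_cells), np.full(n_cells, L)

total_loss = 0.0
for t in range(T):
    x_t = rng.random()                         # adversary's context (here drawn uniformly at random)
    cell = min(int(x_t * n_cells), n_cells - 1)
    guess = (lo[cell] + hi[cell]) / 2          # learner's guess for f(x_t)
    signal_high = guess >= f(x_t)              # binary feedback: is the guess high?
    if t in corrupted:
        signal_high = not signal_high          # adversary corrupts this round's signal
    if signal_high:
        hi[cell] = guess                       # shrink the interval (wrongly, if corrupted)
    else:
        lo[cell] = guess
    total_loss += abs(guess - f(x_t))          # symmetric loss

print(f"cumulative symmetric loss over {T} rounds: {total_loss:.2f}")
```

    With $C = 0$ the per-cell intervals shrink toward the true values; even a few flipped signals can lock this naive learner into intervals that exclude the truth, which is the failure mode corruption-robust algorithms are designed to avoid.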

    Contextual Search in the Presence of Irrational Agents

    We study contextual search, a generalization of binary search in higher dimensions, which captures settings such as feature-based dynamic pricing. Standard game-theoretic formulations of this problem assume that agents act in accordance with a specific behavioral model. In practice, however, some agents may not subscribe to the dominant behavioral model or may act in ways that are seemingly arbitrarily irrational. Existing algorithms heavily depend on the behavioral model being (approximately) accurate for all agents and have poor performance in the presence of even a few such arbitrarily irrational agents. We initiate the study of contextual search when some of the agents can behave in ways inconsistent with the underlying behavioral model. In particular, we provide two algorithms, one built on robustifying multidimensional binary search methods and one on translating the setting to a proxy setting appropriate for gradient descent. Our techniques draw inspiration from learning theory, game theory, high-dimensional geometry, and convex analysis. Comment: Compared to the first version, titled "Corrupted Multidimensional Binary Search: Learning in the Presence of Irrational Agents", this version provides a broader scope of behavioral models of irrationality, specifies how the results apply to different loss functions, and discusses the power and limitations of additional algorithmic approaches.
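
    The fragility that motivates this work can be seen in a small sketch (an illustration only, not one of the paper's algorithms): a naive version-space pricer for feature-based dynamic pricing that bisects the range of values consistent with past feedback. A single irrational response is enough to eliminate the true parameter from the version space for good; the candidate grid, the round of the deviation, and all names below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([0.6, 0.3])                            # hidden parameter; agent t's value is <theta, x_t>
candidates = np.vstack([theta, rng.random((2000, 2))])  # finite version space (contains theta)

for t in range(30):
    x_t = rng.random(2)
    x_t /= np.linalg.norm(x_t)                     # context (feature vector) for round t
    vals = candidates @ x_t
    price = (vals.min() + vals.max()) / 2          # bisect the induced range of plausible values
    rational_buy = theta @ x_t >= price            # a rational agent buys iff value >= price
    buy = rational_buy if t != 10 else (not rational_buy)  # round 10: one irrational agent
    # Eliminate candidates inconsistent with the observed purchase decision.
    candidates = candidates[vals >= price] if buy else candidates[vals < price]
    if not (np.abs(candidates - theta).max(axis=1) < 1e-12).any():
        print(f"round {t}: the true parameter was eliminated by a single irrational response")
        break
```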

    Online Learning in Multi-unit Auctions

    We consider repeated multi-unit auctions with uniform pricing, which are widely used in practice for allocating goods such as carbon licenses. In each round, $K$ identical units of a good are sold to a group of buyers that have valuations with diminishing marginal returns. The buyers submit bids for the units, and then a price $p$ is set per unit so that all the units are sold. We consider two variants of the auction, where the price is set to the $K$-th highest bid and the $(K+1)$-st highest bid, respectively. We analyze the properties of this auction in both the offline and online settings. In the offline setting, we consider the problem that one player $i$ is facing: given access to a data set that contains the bids submitted by competitors in past auctions, find a bid vector that maximizes player $i$'s cumulative utility on the data set. We design a polynomial time algorithm for this problem, by showing it is equivalent to finding a maximum-weight path on a carefully constructed directed acyclic graph. In the online setting, the players run learning algorithms to update their bids as they participate in the auction over time. Based on our offline algorithm, we design efficient online learning algorithms for bidding. The algorithms have sublinear regret, under both full information and bandit feedback structures. We complement our online learning algorithms with regret lower bounds. Finally, we analyze the quality of the equilibria in the worst case through the lens of the core solution concept in the game among the bidders. We show that the $(K+1)$-st price format is susceptible to collusion among the bidders; meanwhile, the $K$-th price format does not have this issue.
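
    For reference, here is a minimal sketch of the two uniform-pricing rules compared above, under the simplifying assumption that each buyer submits one bid per unit and that more than $K$ bids are submitted in total; the function and bidder names are illustrative, not the paper's implementation.

```python
def uniform_price_outcome(bids, K, rule="K+1"):
    """Uniform-price multi-unit auction sketch (illustrative).

    bids: dict mapping bidder name -> list of per-unit bids (diminishing marginal bids).
    K:    number of identical units for sale.
    rule: "K" prices at the K-th highest bid, "K+1" at the (K+1)-st highest bid
          (assumes more than K bids are submitted overall).
    Returns (units won per bidder, price per unit).
    """
    all_bids = sorted(((b, name) for name, bs in bids.items() for b in bs), reverse=True)
    winners = all_bids[:K]                                   # the top K bids each win one unit
    price = all_bids[K - 1][0] if rule == "K" else all_bids[K][0]
    won = {name: sum(1 for _, n in winners if n == name) for name in bids}
    return won, price

# Two bidders, K = 3 units, diminishing marginal bids.
bids = {"alice": [10, 7, 2], "bob": [9, 4, 1]}
for rule in ("K", "K+1"):
    won, price = uniform_price_outcome(bids, K=3, rule=rule)
    print(rule, won, price)
```

    On this toy instance the $K$-th price rule charges 7 per unit (the lowest winning bid) while the $(K+1)$-st price rule charges 4 (the highest losing bid), with the same allocation under both rules.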