    A General Theory of Sample Complexity for Multi-Item Profit Maximization

    The design of profit-maximizing multi-item mechanisms is a notoriously challenging problem with tremendous real-world impact. The mechanism designer's goal is to field a mechanism with high expected profit on the distribution over buyers' values. Unfortunately, if the set of mechanisms he optimizes over is complex, a mechanism may have high empirical profit over a small set of samples but low expected profit. This raises the question: how many samples are sufficient to ensure that the empirically optimal mechanism is nearly optimal in expectation? We uncover structure shared by a myriad of pricing, auction, and lottery mechanisms that allows us to prove strong sample complexity bounds: for any set of buyers' values, profit is a piecewise linear function of the mechanism's parameters. We prove new bounds for mechanism classes not yet studied in the sample-based mechanism design literature and match or improve over the best known guarantees for many classes. The profit functions we study are significantly different from well-understood functions in machine learning, so our analysis requires a sharp understanding of the interplay between mechanism parameters and buyer values. We strengthen our main results with data-dependent bounds when the distribution over buyers' values is "well-behaved." Finally, we investigate a fundamental tradeoff in sample-based mechanism design: complex mechanisms often have higher profit than simple mechanisms, but more samples are required to ensure that empirical and expected profit are close. We provide techniques for optimizing this tradeoff.
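
    The piecewise linearity that drives these bounds is easiest to see in the simplest setting. Below is a minimal sketch, assuming a single item sold at a posted price to buyers drawn from a hypothetical Uniform[0,1] distribution (not the paper's general multi-item construction): for a fixed sample of values, the profit of price p is p times the fraction of sampled values at least p, a piecewise linear function of p, so an empirically optimal price can be found among the sampled values themselves.

        # Minimal sketch (single item, posted price): for fixed sampled values,
        # profit is piecewise linear in the price, and some sampled value is an
        # empirically optimal price. The Uniform[0,1] distribution is an
        # illustrative assumption, not part of the paper.
        import numpy as np

        def empirical_profit(price, sampled_values):
            """Average profit of posting `price` against the sampled values."""
            sampled_values = np.asarray(sampled_values, dtype=float)
            return np.mean(np.where(sampled_values >= price, price, 0.0))

        def empirically_optimal_price(sampled_values):
            """Search only the sampled values; one of them maximizes empirical profit."""
            candidates = sorted(set(sampled_values))
            return max(candidates, key=lambda p: empirical_profit(p, sampled_values))

        rng = np.random.default_rng(0)
        sample = rng.uniform(0.0, 1.0, size=500)
        p_hat = empirically_optimal_price(sample)
        print(p_hat, empirical_profit(p_hat, sample))   # close to 0.5 and 0.25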

    Settling the Sample Complexity of Single-parameter Revenue Maximization

    This paper settles the sample complexity of single-parameter revenue maximization by showing matching upper and lower bounds, up to a poly-logarithmic factor, for all families of value distributions that have been considered in the literature. The upper bounds are unified under a novel framework, which builds on the strong revenue monotonicity of Devanur, Huang, and Psomas (STOC 2016) and an information-theoretic argument. This is fundamentally different from previous approaches, which rely either on constructing an ε-net of the mechanism space, explicitly or implicitly via statistical learning theory, or on learning an approximately accurate version of the virtual values. To our knowledge, this is the first time information-theoretic arguments are used to show sample complexity upper bounds, rather than lower bounds. Our lower bounds are also unified under a meta construction of hard instances.
    Comment: 49 pages, accepted by STOC
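
    As background for what these bounds measure, here is a toy simulation under an assumed single-bidder Uniform[0,1] setting (not taken from the paper): it tracks how quickly the expected revenue of the empirically best reserve price approaches the true optimum of 1/4 as the number of samples grows. It illustrates the quantity that sample complexity bounds control, not the paper's strong-revenue-monotonicity framework.

        # Toy simulation: gap between the true optimal revenue (1/4 for a single
        # Uniform[0,1] bidder) and the expected revenue of the reserve price that
        # is best on m samples. Purely illustrative; not the paper's method.
        import numpy as np

        def empirically_best_reserve(samples):
            # Posting the i-th largest sampled value sells to an i/m fraction of
            # the sample, so the best sampled reserve is found in O(m log m) time.
            v = np.sort(samples)[::-1]
            m = len(v)
            return v[int(np.argmax(v * np.arange(1, m + 1) / m))]

        def expected_revenue_uniform(r):
            # Exact expected revenue of reserve r against one Uniform[0,1] bidder.
            return r * (1.0 - r)

        rng = np.random.default_rng(1)
        for m in (10, 100, 1000, 10000):
            gaps = [0.25 - expected_revenue_uniform(
                        empirically_best_reserve(rng.uniform(size=m)))
                    for _ in range(200)]
            print(m, float(np.mean(gaps)))   # the gap shrinks as m grows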

    Reducing Revenue to Welfare Maximization: Approximation Algorithms and other Generalizations

    It was recently shown in [http://arxiv.org/abs/1207.5518] that revenue optimization can be computationally efficiently reduced to welfare optimization in all multi-dimensional Bayesian auction problems with arbitrary (possibly combinatorial) feasibility constraints and independent additive bidders with arbitrary (possibly combinatorial) demand constraints. This reduction provides a poly-time solution to the optimal mechanism design problem in all auction settings where welfare optimization can be solved efficiently, but it is fragile to approximation and cannot provide solutions in settings where welfare maximization can only be tractably approximated. In this paper, we extend the reduction to accommodate approximation algorithms, providing an approximation-preserving reduction from (truthful) revenue maximization to (not necessarily truthful) welfare maximization. The mechanisms output by our reduction choose allocations via black-box calls to welfare approximation on randomly selected inputs, thereby also generalizing our earlier structural results on optimal multi-dimensional mechanisms to approximately optimal mechanisms. Unlike [http://arxiv.org/abs/1207.5518], our results here are obtained through novel uses of the Ellipsoid algorithm and other optimization techniques over non-convex regions.
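
    The multi-dimensional reduction itself is intricate, but the single-parameter intuition behind "revenue as welfare" is classical: by Myerson's lemma, expected revenue equals expected virtual welfare, so maximizing revenue amounts to maximizing welfare with respect to the virtual values φ(v) = v - (1-F(v))/f(v). The sketch below only checks this identity numerically for a hypothetical Uniform[0,1] bidder; it is background intuition, not the paper's ellipsoid-based construction.

        # Background check of Myerson's revenue = virtual welfare identity for a
        # single Uniform[0,1] bidder; illustrative only, not the paper's reduction.
        import numpy as np

        rng = np.random.default_rng(2)
        v = rng.uniform(size=1_000_000)

        phi = 2.0 * v - 1.0                      # virtual value for Uniform[0,1]
        allocate = phi >= 0.0                    # maximize virtual welfare
        payment = np.where(allocate, 0.5, 0.0)   # Myerson payment = allocation threshold

        print(float(np.mean(payment)))           # expected revenue, about 0.25
        print(float(np.mean(phi * allocate)))    # expected virtual welfare, about 0.25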

    On k-Column Sparse Packing Programs

    We consider the class of packing integer programs (PIPs) that are column sparse, i.e., there is a specified upper bound k on the number of constraints that each variable appears in. We give an (ek+o(k))-approximation algorithm for k-column sparse PIPs, improving on recent results of k^2·2^k and O(k^2). We also show that the integrality gap of our linear programming relaxation is at least 2k-1; it is known that k-column sparse PIPs are Ω(k/log k)-hard to approximate. We also extend our result (at the loss of a small constant factor) to the more general case of maximizing a submodular objective over k-column sparse packing constraints.
    Comment: 19 pages, v3: additional detail
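
    A standard way to obtain O(k)-type guarantees for column-sparse packing is to scale a fractional LP solution down by a factor on the order of k, round independently, and then alter the rounded solution to repair any violated constraints. The sketch below shows that generic scale-and-alter template under assumed nonnegative data; the factor 2k and the arbitrary repair rule are illustrative simplifications, not the paper's (ek+o(k)) algorithm or its analysis.

        # Generic scale-and-alter rounding for a column-sparse packing program.
        # Assumes A, b are nonnegative and x_frac is a feasible fractional solution;
        # the scaling factor 2k and the repair rule are illustrative choices only.
        import numpy as np

        def round_column_sparse(A, b, x_frac, k, seed=0):
            A, b, x_frac = np.asarray(A, float), np.asarray(b, float), np.asarray(x_frac, float)
            rng = np.random.default_rng(seed)
            # Scale down by ~2k and round each coordinate independently.
            x = (rng.random(len(x_frac)) < x_frac / (2.0 * k)).astype(float)
            while True:
                violated = np.flatnonzero(A @ x > b)
                if violated.size == 0:
                    return x                      # always returns a feasible 0/1 vector
                row = violated[0]
                in_row = np.flatnonzero((A[row] > 0) & (x > 0.5))
                x[in_row[0]] = 0.0                # alteration: drop an item from this row

        # Tiny example: 2 constraints, each of the 3 variables appears in at most k=2.
        A = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
        print(round_column_sparse(A, b=[1.0, 1.0], x_frac=[0.5, 0.5, 0.5], k=2))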

    Show Me the Money: Dynamic Recommendations for Revenue Maximization

    Recommender Systems (RS) play a vital role in applications such as e-commerce and on-demand content streaming. Research on RS has mainly focused on the customer perspective, i.e., accurate prediction of user preferences and maximization of user utilities. As a result, most existing techniques are not explicitly built for revenue maximization, the primary business goal of enterprises. In this work, we explore and exploit a novel connection between RS and the profitability of a business. As recommendations can be seen as an information channel between a business and its customers, it is interesting and important to investigate how to make strategic dynamic recommendations that lead to the maximum possible revenue. To this end, we propose a novel model that takes into account a variety of factors including prices, valuations, saturation effects, and competition amongst products. Under this model, we study the problem of finding revenue-maximizing recommendation strategies over a finite time horizon. We show that this problem is NP-hard, but that approximation guarantees can be obtained for a slightly relaxed version by establishing an elegant connection to matroid theory. Given the prohibitively high complexity of the approximation algorithm, we also design intelligent heuristics for the original problem. Finally, we conduct extensive experiments on two real and synthetic datasets and demonstrate the efficiency, scalability, and effectiveness of our algorithms, and show that they significantly outperform several intuitive baselines.
    Comment: Conference version published in PVLDB 7(14). To be presented at the VLDB Conference 2015, in Hawaii. This version gives a detailed submodularity proof.
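
    To make the heuristic flavor concrete, here is a toy greedy strategy under a hypothetical model (the prices, adoption probabilities, and multiplicative saturation decay are assumptions for illustration); it is not the paper's model or its matroid-based approximation algorithm.

        # Toy greedy heuristic: at each step recommend the product with the largest
        # expected marginal revenue (price * adoption probability), then dampen
        # that product's adoption probability to mimic a saturation effect.
        def greedy_recommendations(prices, adopt_prob, horizon, saturation=0.5):
            adopt_prob = list(adopt_prob)
            plan, expected_revenue = [], 0.0
            for _ in range(horizon):
                best = max(range(len(prices)), key=lambda j: prices[j] * adopt_prob[j])
                plan.append(best)
                expected_revenue += prices[best] * adopt_prob[best]
                adopt_prob[best] *= saturation
            return plan, expected_revenue

        # Example: three products; the plan rotates once saturation kicks in.
        print(greedy_recommendations(prices=[10.0, 8.0, 3.0],
                                     adopt_prob=[0.3, 0.35, 0.9],
                                     horizon=4))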

    Lottery pricing equilibria

    We extend the notion of Combinatorial Walrasian Equilibrium, as defined by Feldman et al. [2013], to settings with budgets. When agents have budgets, the maximum social welfare as traditionally defined is not a suitable benchmark, since it is overly optimistic. This motivated the liquid welfare of [Dobzinski and Paes Leme 2014] as an alternative. Observing that no Combinatorial Walrasian Equilibrium guarantees a non-zero fraction of the maximum liquid welfare in the absence of randomization, we instead work with randomized allocations and extend the notions of liquid welfare and Combinatorial Walrasian Equilibrium accordingly. Our generalization of the Combinatorial Walrasian Equilibrium prices lotteries over bundles of items rather than the bundles themselves, and we term it a lottery pricing equilibrium. Our results are two-fold. First, we exhibit an efficient algorithm which turns a randomized allocation with liquid expected welfare W into a lottery pricing equilibrium with liquid expected welfare (3-√5)/2 · W (≈ 0.3819·W). Next, given access to a demand oracle and an α-approximate oblivious rounding algorithm for the configuration linear program for the welfare maximization problem, we show how to efficiently compute a randomized allocation which is (a) supported on polynomially many deterministic allocations and (b) obtains [nearly] an α fraction of the optimal liquid expected welfare. In the case of subadditive valuations, combining both results yields an efficient algorithm which computes a lottery pricing equilibrium obtaining a constant fraction of the optimal liquid expected welfare. © Copyright 2016 ACM
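
    The liquid welfare benchmark underlying these results caps each agent's contribution at their budget. A minimal sketch of the deterministic definition of Dobzinski and Paes Leme is shown below; the additive valuations in the example are an illustrative assumption, and the paper's extension to randomized allocations is not reproduced here.

        # Liquid welfare of a deterministic allocation: each agent contributes its
        # value for its bundle, capped at its budget (Dobzinski and Paes Leme).
        def liquid_welfare(values, budgets, allocation):
            """values[i] maps a set of items to agent i's value; budgets[i] is the
            budget; allocation[i] is the bundle handed to agent i."""
            return sum(min(values[i](frozenset(allocation[i])), budgets[i])
                       for i in range(len(budgets)))

        # Example with additive valuations: agent 0 values each item at 5 but has
        # budget 6, so its contribution is capped at 6.
        item_values = [{"a": 5, "b": 5}, {"a": 2, "b": 4}]
        values = [lambda S, w=w: sum(w[x] for x in S) for w in item_values]
        print(liquid_welfare(values, budgets=[6, 10], allocation=[["a", "b"], []]))  # 6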

    Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization

    Data-driven algorithm design, that is, choosing the best algorithm for a specific application, is a crucial problem in modern data science. Practitioners often optimize over a parameterized algorithm family, tuning parameters based on problems from their domain. These procedures have historically come with no guarantees, though a recent line of work studies algorithm selection from a theoretical perspective. We advance the foundations of this field in several directions: we analyze online algorithm selection, where problems arrive one-by-one and the goal is to minimize regret, and private algorithm selection, where the goal is to find good parameters over a set of problems without revealing sensitive information contained therein. We study important algorithm families, including SDP-rounding schemes for problems formulated as integer quadratic programs, and greedy techniques for canonical subset selection problems. In these cases, the algorithm's performance is a volatile and piecewise Lipschitz function of its parameters, since tweaking the parameters can completely change the algorithm's behavior. We give a general sufficient condition, dispersion, defining a family of piecewise Lipschitz functions that can be optimized online and privately, which includes the functions measuring the performance of the algorithms we study. Intuitively, a set of piecewise Lipschitz functions is dispersed if no small region contains many of the functions' discontinuities. We present general techniques for online and private optimization of the sum of dispersed piecewise Lipschitz functions. We improve over the best-known regret bounds for a variety of problems, prove regret bounds for problems not previously studied, and give matching lower bounds. We also give matching upper and lower bounds on the utility loss due to privacy. Moreover, we uncover dispersion in auction design and pricing problems.
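
    Under dispersion, an exponentially weighted forecaster over the parameter space achieves low regret even though each round's utility is only piecewise Lipschitz. The sketch below is a discretized version of that idea over a one-dimensional parameter range; the grid size, learning rate, and toy utility functions are illustrative assumptions rather than the paper's algorithm or tuned parameters.

        # Discretized exponentially weighted forecaster over a 1-D parameter range.
        # Each round supplies a bounded piecewise Lipschitz utility function; the
        # learner samples a parameter, earns its utility, and reweights the grid.
        import numpy as np

        def exp_weights_online(utility_fns, lo=0.0, hi=1.0, grid=1000, eta=0.1, seed=0):
            rng = np.random.default_rng(seed)
            params = np.linspace(lo, hi, grid)
            log_w = np.zeros(grid)
            total = 0.0
            for u in utility_fns:                     # problems arrive one by one
                probs = np.exp(log_w - log_w.max())
                probs /= probs.sum()
                total += u(params[rng.choice(grid, p=probs)])
                log_w += eta * np.array([u(p) for p in params])   # full-information update
            return total

        # Toy rounds: threshold utilities whose discontinuities are spread out
        # (dispersed) across the interval.
        thresholds = np.random.default_rng(1).uniform(size=200)
        rounds = [(lambda p, t=t: 1.0 if p >= t else 0.5 * p) for t in thresholds]
        print(exp_weights_online(rounds))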