7,749 research outputs found

    Learning Theory and Algorithms for Revenue Optimization in Second-Price Auctions with Reserve

    Second-price auctions with reserve play a critical role for modern search engines and popular online sites, since the revenue of these companies often directly depends on the outcome of such auctions. The choice of the reserve price is the main mechanism through which the auction revenue can be influenced in these electronic markets. We cast the problem of selecting the reserve price to optimize revenue as a learning problem and present a full theoretical analysis dealing with the complex properties of the corresponding loss function. We further give novel algorithms for solving this problem and report the results of several experiments on both synthetic and real data demonstrating their effectiveness.
    Comment: Accepted at ICML 201
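
    To make the objective concrete, here is a minimal sketch (not the paper's algorithm) of the per-auction revenue of a second-price auction with reserve, written as a function of the reserve price r and the two highest bids, together with a naive grid search for an empirically good reserve. The bid distribution, grid, and function names are illustrative assumptions.

```python
# A minimal sketch (not the paper's algorithm): the revenue of a single
# second-price auction with reserve r, given the highest bid b1 and the
# second-highest bid b2, plus a naive grid search for the reserve that
# maximizes average revenue on a sample of past bid pairs.
import numpy as np

def revenue(r, b1, b2):
    """Revenue of one second-price auction with reserve price r."""
    if r > b1:          # reserve exceeds the highest bid: no sale
        return 0.0
    if r > b2:          # reserve lies between the two top bids: winner pays r
        return r
    return b2           # otherwise the winner pays the second-highest bid

def empirical_best_reserve(bids, grid):
    """Pick the grid point with the highest average revenue on observed bids."""
    avg = [np.mean([revenue(r, b1, b2) for b1, b2 in bids]) for r in grid]
    return grid[int(np.argmax(avg))]

# Illustrative synthetic data: (highest, second-highest) bid pairs.
rng = np.random.default_rng(0)
raw = np.sort(rng.lognormal(mean=0.0, sigma=0.5, size=(1000, 2)), axis=1)
bids = [(b[1], b[0]) for b in raw]           # (b1, b2) with b1 >= b2
grid = np.linspace(0.0, 3.0, 301)
print("empirical best reserve:", empirical_best_reserve(bids, grid))
```

    Note that for a fixed bid pair the revenue jumps from r down to 0 at r = b1; this discontinuity is the source of the complex loss-function properties the abstract refers to, and it is why the problem is not a standard convex learning task.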

    Lowest Unique Bid Auctions

    We consider a class of auctions (Lowest Unique Bid Auctions) that have achieved considerable success on the Internet. Bids are made in cents (of euro), and each bidder can bid on as many numbers as she wants. The lowest unique bid wins the auction. Every bid has a fixed cost, and once a participant makes a bid, she learns whether her bid was unique and whether it was the lowest unique bid. Information is updated in real time, but each bidder sees only the information relevant to her own bids. We show that the observed behavior in these auctions differs considerably from what theory would prescribe if all bidders were fully rational. We show that the seller makes money, which would not be the case with rational bidders, and that some bidders win the auctions quite often. We describe a possible strategy for these bidders.
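
    As a companion to the description above, the following sketch simulates the auction rule itself under stated assumptions (the per-bid fee, bidder strategies, and bid ranges are made up for illustration); it is not a model of the behavioral findings, only of the mechanism.

```python
# A minimal sketch of the lowest-unique-bid rule (not the paper's model):
# each bidder submits a set of integer bids in cents, pays a fixed fee per bid,
# and the lowest bid made by exactly one bidder wins the item.
from collections import Counter
import random

BID_FEE_CENTS = 50  # illustrative fixed cost per bid; an assumption

def lowest_unique_bid(bids_by_bidder):
    """Return (winning_bid, winner) or (None, None) if no bid is unique."""
    counts = Counter(b for bids in bids_by_bidder.values() for b in bids)
    unique = sorted(b for b, c in counts.items() if c == 1)
    if not unique:
        return None, None
    winning = unique[0]
    winner = next(name for name, bids in bids_by_bidder.items() if winning in bids)
    return winning, winner

def seller_profit(bids_by_bidder, item_cost_cents, winning_bid):
    """Seller collects the per-bid fees plus the winning bid, minus the item cost."""
    n_bids = sum(len(b) for b in bids_by_bidder.values())
    return n_bids * BID_FEE_CENTS + (winning_bid or 0) - item_cost_cents

# One illustrative round: bidders pick a few small random bids in cents.
random.seed(1)
bids = {f"bidder{i}": {random.randint(1, 20) for _ in range(3)} for i in range(10)}
bid, who = lowest_unique_bid(bids)
print("winner:", who, "with bid (cents):", bid)
print("seller profit (cents):", seller_profit(bids, item_cost_cents=500, winning_bid=bid))
```

    The seller's profit comes almost entirely from the accumulated bid fees rather than from the winning bid itself, which is why the paper's observation that the seller makes money is informative about bidder (ir)rationality.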

    Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization

    Data-driven algorithm design, that is, choosing the best algorithm for a specific application, is a crucial problem in modern data science. Practitioners often optimize over a parameterized algorithm family, tuning parameters based on problems from their domain. These procedures have historically come with no guarantees, though a recent line of work studies algorithm selection from a theoretical perspective. We advance the foundations of this field in several directions: we analyze online algorithm selection, where problems arrive one by one and the goal is to minimize regret, and private algorithm selection, where the goal is to find good parameters over a set of problems without revealing sensitive information contained therein. We study important algorithm families, including SDP-rounding schemes for problems formulated as integer quadratic programs, and greedy techniques for canonical subset selection problems. In these cases, the algorithm's performance is a volatile and piecewise Lipschitz function of its parameters, since tweaking the parameters can completely change the algorithm's behavior. We give a general sufficient condition, dispersion, defining a family of piecewise Lipschitz functions that can be optimized online and privately, which includes the functions measuring the performance of the algorithms we study. Intuitively, a set of piecewise Lipschitz functions is dispersed if no small region contains many of the functions' discontinuities. We present general techniques for online and private optimization of sums of dispersed piecewise Lipschitz functions. We improve over the best-known regret bounds for a variety of problems, prove regret bounds for problems not previously studied, and give matching lower bounds. We also give matching upper and lower bounds on the utility loss due to privacy. Moreover, we uncover dispersion in auction design and pricing problems.
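
    To give a concrete, if simplified, picture of the dispersion condition and of online optimization over a one-dimensional parameter, here is a rough sketch using posted-price revenue curves p -> p * 1[p <= v]: each buyer value v contributes one discontinuity, dispersion roughly asks that no small interval contains many such discontinuities, and a discretized exponential-weights learner stands in for the online algorithms. The price grid, learning rate, and value distribution are assumptions; the paper's actual algorithms and guarantees are far more general.

```python
# A rough illustration (not the paper's algorithm): posted-price revenue curves
# are piecewise Lipschitz in the price p, with one discontinuity per buyer value v.
# Below: the discontinuity count of the worst eps-interval (a proxy for how
# dispersed the functions are), and a simple exponential-weights learner over a
# fixed price grid as a stand-in for online optimization of the revenue sum.
import numpy as np

def max_discontinuities_in_interval(values, eps):
    """Largest number of discontinuities (buyer values) in any interval of width eps."""
    v = np.sort(np.asarray(values))
    # For each left endpoint v[i], count how many values fall in [v[i], v[i] + eps].
    right = np.searchsorted(v, v + eps, side="right")
    return int(np.max(right - np.arange(len(v))))

def exp_weights_pricing(values, grid, eta=0.5):
    """Exponential weights over a fixed price grid; returns total revenue earned."""
    w = np.ones(len(grid))
    rng = np.random.default_rng(0)
    total = 0.0
    for v in values:                      # buyers arrive one by one
        p = rng.choice(grid, p=w / w.sum())
        total += p if p <= v else 0.0     # revenue of the posted price
        payoff = np.where(grid <= v, grid, 0.0)
        w *= np.exp(eta * payoff)         # full-information update
        w /= w.max()                      # normalize for numerical stability
    return total

rng = np.random.default_rng(1)
values = rng.uniform(0.0, 1.0, size=500)  # smooth value distribution => well dispersed
print("worst 0.01-interval discontinuity count:",
      max_discontinuities_in_interval(values, eps=0.01))
grid = np.linspace(0.01, 1.0, 100)
print("exp-weights revenue:", round(exp_weights_pricing(values, grid), 2))
```

    When buyer values come from a smooth distribution, their induced discontinuities are unlikely to cluster in any small interval, which is the intuition the abstract points to when it says dispersion arises in auction design and pricing problems.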