    A Game-theoretic Machine Learning Approach for Revenue Maximization in Sponsored Search

    Sponsored search is an important monetization channel for search engines, in which an auction mechanism is used to select the ads shown to users and to determine the prices charged to advertisers. Several works in the literature investigate how to design an auction mechanism that optimizes the revenue of the search engine; however, because of the unrealistic assumptions they rely on, their practical value is unclear. In this paper, we propose a novel \emph{game-theoretic machine learning} approach, which naturally combines machine learning and game theory and learns the auction mechanism within a bilevel optimization framework. In particular, we first learn a Markov model from historical data to describe how advertisers change their bids in response to an auction mechanism; then, for any given auction mechanism, we use the learnt model to predict its corresponding future bid sequences. Next, we learn the auction mechanism through empirical revenue maximization on the predicted bid sequences. We show that the empirical revenue converges as the prediction period approaches infinity, and that a Genetic Programming algorithm can effectively optimize this empirical revenue. Our experiments indicate that the proposed approach produces a much more effective auction mechanism than several baselines.
    Comment: Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013)
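
    The abstract describes a bilevel pipeline: an inner Markov model predicts future bid sequences under a candidate mechanism, and an outer search maximizes empirical revenue over mechanisms. The following is a heavily simplified sketch of that structure, not the paper's implementation: the bid dynamics are a hand-written one-step update rather than a learned Markov model, the mechanism space is reduced to a single reserve price, and plain random search stands in for Genetic Programming. All names are illustrative.

        import random

        def bid_update(bid, reserve, rng):
            # Stand-in for the learned Markov model: each advertiser drifts
            # toward the reserve price with Gaussian noise. In the paper this
            # transition would be estimated from historical bid data.
            return max(0.0, bid + rng.gauss(0.3 * (reserve - bid), 0.05))

        def empirical_revenue(reserve, initial_bids, horizon=1000, seed=0):
            # Inner level: simulate the predicted bid sequence under the given
            # mechanism and average per-round revenue; the abstract's convergence
            # result concerns this average as the horizon grows to infinity.
            rng = random.Random(seed)
            bids = list(initial_bids)
            total = 0.0
            for _ in range(horizon):
                bids = [bid_update(b, reserve, rng) for b in bids]
                ranked = sorted(bids, reverse=True)
                if ranked and ranked[0] >= reserve:
                    second = ranked[1] if len(ranked) > 1 else 0.0
                    total += max(second, reserve)  # second price with a reserve
            return total / horizon

        def optimize_mechanism(initial_bids, trials=200, seed=1):
            # Outer level: search the mechanism space for maximal predicted
            # revenue (random search here; the paper uses Genetic Programming).
            rng = random.Random(seed)
            candidates = [rng.uniform(0.0, 2.0) for _ in range(trials)]
            return max(candidates, key=lambda r: empirical_revenue(r, initial_bids))

        print(optimize_mechanism([0.5, 0.8, 1.2]))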

    Rare decays $B_s\to l^+l^-$ and $B\to K l^+l^-$ in the topcolor-assisted technicolor model

    We examine the rare decays $B_s\to l^+l^-$ and $B\to K l^+l^-$ in the framework of the topcolor-assisted technicolor (TC2) model. The contributions of the new particles predicted by this model to these rare decay processes are evaluated. We find that their branching ratios are larger than the standard model predictions by one order of magnitude in a wide range of the parameter space. The longitudinal polarization asymmetry of leptons in $B_s \to l^+l^-$ can approach $\mathcal{O}(10^{-2})$. The forward-backward asymmetry of leptons in $B \to K l^+l^-$ is not large enough to be measured in future experiments. We also discuss the branching ratios and the asymmetry observables related to these rare decay processes in the littlest Higgs model with T-parity.
    Comment: 29 pages, 9 figures, corrected typos, the version to appear in PR

    Complexity growth rates for AdS black holes in massive gravity and $f(R)$ gravity

    The "complexity = action" duality states that the quantum complexity is equal to the action of the stationary AdS black holes within the Wheeler-DeWitt patch at late time approximation. We compute the action growth rates of the neutral and charged black holes in massive gravity and the neutral, charged and Kerr-Newman black holes in f(R)f(R) gravity to test this conjecture. Besides, we investigate the effects of the massive graviton terms, higher derivative terms and the topology of the black hole horizon on the complexity growth rate.Comment: 11 pages, no figur

    Generalized Second Price Auction with Probabilistic Broad Match

    Generalized Second Price (GSP) auctions are widely used by search engines today to sell their ad slots. Most search engines support broad match between queries and bid keywords when executing GSP auctions; however, it has been shown that the GSP auction with the standard broad-match mechanism currently in use (denoted SBM-GSP) has several theoretical drawbacks (e.g., its theoretical properties are known only for the single-slot case and the full-information setting, and even in this simple setting the worst-case social welfare can be rather bad). To address this issue, we propose a novel broad-match mechanism, which we call the Probabilistic Broad-Match (PBM) mechanism. Unlike SBM, which pools the ads bidding on all the keywords matched to a given query into one GSP auction, the GSP with PBM (denoted PBM-GSP) randomly samples a keyword according to a predefined probability distribution and runs the GSP auction only for the ads bidding on this sampled keyword; a toy sketch of this contrast follows below. We perform a comprehensive study of the theoretical properties of PBM-GSP. Specifically, we study its social welfare in the worst equilibrium, in both the full-information and Bayesian settings. The results show that PBM-GSP can generate larger welfare than SBM-GSP under mild conditions. Furthermore, we study the revenue guarantee for PBM-GSP in the Bayesian setting. To the best of our knowledge, this is the first work on broad-match mechanisms for GSP that goes beyond the single-slot case and the full-information setting.
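
    The sketch below illustrates the two mechanisms under simplifying assumptions not taken from the paper (one bid per ad, slots ranked purely by bid, each winner pays the next-highest bid; match_probs is a placeholder for the predefined sampling distribution):

        import random

        def run_gsp(bids, num_slots):
            # Generalized Second Price: rank ads by bid; the ad in slot i pays
            # the bid of the ad ranked just below it (0 if there is none).
            ranked = sorted(bids, reverse=True)
            winners = ranked[:num_slots]
            payments = [ranked[i + 1] if i + 1 < len(ranked) else 0.0
                        for i in range(len(winners))]
            return list(zip(winners, payments))

        def sbm_gsp(bids_by_keyword, matched_keywords, num_slots):
            # Standard broad match: pool the ads of ALL matched keywords
            # into a single GSP auction.
            pool = [b for kw in matched_keywords for b in bids_by_keyword[kw]]
            return run_gsp(pool, num_slots)

        def pbm_gsp(bids_by_keyword, match_probs, num_slots, rng=random):
            # Probabilistic broad match: sample ONE matched keyword from the
            # predefined distribution and auction only its bidders.
            keywords = list(match_probs)
            weights = [match_probs[kw] for kw in keywords]
            kw = rng.choices(keywords, weights=weights, k=1)[0]
            return run_gsp(bids_by_keyword[kw], num_slots)

        bids = {"shoes": [3.0, 1.5], "running shoes": [2.5, 2.0, 0.5]}
        print(sbm_gsp(bids, ["shoes", "running shoes"], num_slots=2))
        print(pbm_gsp(bids, {"shoes": 0.4, "running shoes": 0.6}, num_slots=2))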

    A Theoretical Analysis of NDCG Type Ranking Measures

    A central problem in ranking is to design a ranking measure for the evaluation of ranking functions. In this paper we study, from a theoretical perspective, the widely used Normalized Discounted Cumulative Gain (NDCG)-type ranking measures. Although there are extensive empirical studies of NDCG, little is known about its theoretical properties. We first show that, whatever the ranking function is, the standard NDCG, which adopts a logarithmic discount, converges to 1 as the number of items to rank goes to infinity. At first sight this result is very surprising: it seems to imply that NDCG cannot differentiate good from bad ranking functions, contradicting the empirical success of NDCG in many applications. In order to gain a deeper understanding of ranking measures in general, we propose a notion referred to as consistent distinguishability. This notion captures the intuition that a ranking measure should have the following property: for every pair of substantially different ranking functions, the ranking measure can decide which one is better in a consistent manner on almost all datasets. We show that NDCG with logarithmic discount has consistent distinguishability even though it converges to the same limit for all ranking functions. We next characterize the set of feasible discount functions for NDCG according to the concept of consistent distinguishability. Specifically, we show that whether NDCG has consistent distinguishability depends on how fast the discount decays, and that 1/r is a critical point. We then turn to the cut-off version of NDCG, i.e., NDCG@k, and analyze the distinguishability of NDCG@k for various choices of k and the discount functions. Experimental results on real Web search datasets agree well with the theory.
    Comment: COLT 2013
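
    As a concrete reference for the quantities analyzed above, the following is a small self-contained NDCG@k computation with a pluggable discount function (the exponential gain $2^{rel}-1$ and the $\log_2(r+1)$ discount are the common conventions; the function names are ours):

        import math

        def dcg(gains, discount, k):
            # Discounted cumulative gain at cut-off k: gains weighted by
            # 1/eta(r) for ranks r = 1, 2, ..., k.
            return sum(g / discount(r) for r, g in enumerate(gains[:k], start=1))

        def ndcg(relevances, ranking, k=None, discount=lambda r: math.log2(r + 1)):
            # NDCG@k: DCG of the given ranking normalized by the DCG of the
            # ideal (descending-relevance) ordering, so values lie in [0, 1].
            k = len(ranking) if k is None else k
            gains = [2 ** relevances[i] - 1 for i in ranking]
            ideal = sorted((2 ** rel - 1 for rel in relevances), reverse=True)
            return dcg(gains, discount, k) / dcg(ideal, discount, k)

        rels = [3, 2, 3, 0, 1, 2]          # graded relevance labels
        perm = [1, 0, 2, 4, 3, 5]          # a candidate ranking (indices into rels)
        print(ndcg(rels, perm))                            # logarithmic discount
        print(ndcg(rels, perm, discount=lambda r: r))      # the critical 1/r decay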