
    Aggregation Bias in Sponsored Search Data: The Curse and the Cure

    Recently, there has been significant interest in studying consumer behavior in sponsored search advertising (SSA). Researchers have typically used daily data from search engines containing measures such as average bid, average ad position, total impressions, clicks, and cost for each keyword in the advertiser's campaign. A variety of random utility models have been estimated using such data, and the results have helped researchers explore the factors that drive consumer click and conversion propensities. However, virtually every analysis of this kind has ignored the intraday variation in ad position. We show that estimating random utility models on aggregated (daily) data without accounting for this variation leads to systematically biased estimates. Specifically, the impact of ad position on click-through rate (CTR) is attenuated, and the predicted CTR is higher than the actual CTR. We analytically demonstrate the existence of the bias and show its effect on the equilibrium of the SSA auction. Using a large data set from a major search engine, we measure the magnitude of the bias and quantify the losses suffered by the search engine and an advertiser using aggregate data. The search engine's revenue loss due to aggregation bias can be as high as 11%. We also present a few data summarization techniques that search engines can use to reduce or eliminate the bias.
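    To see the mechanism concretely, the following small simulation (a hypothetical illustration with assumed parameter values, not the authors' model or data) generates impression-level clicks from a logit model in ad position and then re-estimates the position effect from daily aggregates that record only the mean position and total clicks. Because the link function is nonlinear, the aggregate estimate is typically pulled toward zero and the implied CTR is overstated, matching the pattern described in the abstract.

```python
# Hypothetical illustration of aggregation bias in the position effect on CTR.
# Impression-level clicks follow a logit model in ad position; re-estimating the
# same model from daily aggregates (mean position, total clicks) attenuates the
# position coefficient because the logit link is nonlinear.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ALPHA, BETA = 1.0, -0.6            # assumed true intercept and position effect
n_days, imps_per_day = 200, 500

pos_rows, click_rows, daily = [], [], []
for _ in range(n_days):
    # intraday variation: position fluctuates around a day-specific level
    pos = np.clip(rng.normal(rng.uniform(1, 6), 1.5, imps_per_day), 1, 10)
    p = 1 / (1 + np.exp(-(ALPHA + BETA * pos)))
    clicks = rng.binomial(1, p)
    pos_rows.append(pos)
    click_rows.append(clicks)
    daily.append((pos.mean(), clicks.sum(), imps_per_day))

# impression-level fit: uses the intraday position of every impression
X_imp = sm.add_constant(np.concatenate(pos_rows))
fit_imp = sm.GLM(np.concatenate(click_rows), X_imp,
                 family=sm.families.Binomial()).fit()

# daily-aggregate fit: total clicks vs. mean position, ignoring intraday variation
daily = np.array(daily)
X_day = sm.add_constant(daily[:, 0])
y_day = np.column_stack([daily[:, 1], daily[:, 2] - daily[:, 1]])  # [clicks, non-clicks]
fit_day = sm.GLM(y_day, X_day, family=sm.families.Binomial()).fit()

print(f"true position effect : {BETA:.3f}")
print(f"impression-level fit : {fit_imp.params[1]:.3f}")
print(f"daily-aggregate fit  : {fit_day.params[1]:.3f}  (attenuated toward zero)")
```

    One natural summarization in the spirit of the abstract's remedies would be to report clicks and impressions broken out by position (or by intraday time slice) rather than as daily averages, so that the impression-level likelihood can be approximated from the summary; this is offered here only as an illustrative reading of what such techniques could look like.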

    Improving Estimation in Functional Linear Regression with Points of Impact: Insights into Google AdWords

    The functional linear regression model with points of impact is a recent augmentation of the classical functional linear model with many practically important applications. In this work, however, we demonstrate that the existing data-driven procedure for estimating the parameters of this regression model can be very unstable and inaccurate. A particularly problematic aspect is its tendency to omit relevant points of impact, which results in omitted-variable bias. We explain the theoretical reason for this problem and propose a new sequential estimation algorithm that leads to significantly improved estimation results. Our estimation algorithm is compared with the existing estimation procedure in an in-depth simulation study. Its applicability is demonstrated using data from Google AdWords, today's most important platform for online advertisements. The R package FunRegPoI and additional R code are provided in the online supplementary material.
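    For reference, the points-of-impact specification discussed in this abstract is usually written as a classical functional linear model augmented with a finite number of scalar effects at unknown locations; the display below is a generic sketch of that model (the notation and the unit integration domain are assumptions, not taken from the paper):

        Y_i = \int_0^1 \beta(t)\, X_i(t)\, dt + \sum_{r=1}^{S} \beta_r\, X_i(\tau_r) + \varepsilon_i, \qquad i = 1, \dots, n,

    where X_i is the functional predictor, \beta(\cdot) the coefficient function, \tau_1, \dots, \tau_S the points of impact with scalar coefficients \beta_r, and \varepsilon_i a noise term. Omitting one of the \tau_r from the fitted model is what produces the omitted-variable bias referred to above, since the excluded scalar term X_i(\tau_r) is correlated with the retained parts of the trajectory X_i.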

    Deep Character-Level Click-Through Rate Prediction for Sponsored Search

    Predicting the click-through rate of an advertisement is a critical component of online advertising platforms. In sponsored search, the click-through rate estimates the probability that a displayed advertisement is clicked by a user after she submits a query to the search engine. Commercial search engines typically rely on machine learning models trained with a large number of features to make such predictions. This inevitably requires a lot of engineering effort to define, compute, and select the appropriate features. In this paper, we propose two novel approaches (one working at the character level and the other at the word level) that use deep convolutional neural networks to predict the click-through rate of a query-advertisement pair. Specifically, the proposed architectures only consider the textual content appearing in a query-advertisement pair as input, and produce a click-through rate prediction as output. By comparing the character-level model with the word-level model, we show that language representation can be learned from scratch at the character level when trained on enough data. Through extensive experiments using billions of query-advertisement pairs from a popular commercial search engine, we demonstrate that both approaches significantly outperform a baseline model built on well-selected text features and a state-of-the-art word2vec-based approach. Finally, by combining the predictions of the deep models introduced in this study with the prediction of the production model of the same commercial search engine, we significantly improve the accuracy and the calibration of the click-through rate prediction of the production system. Comment: SIGIR 2017, 10 pages.
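    As a concrete (and purely illustrative) sketch of the character-level idea, the PyTorch code below embeds the raw characters of a query and an ad, applies one-dimensional convolutions with max-pooling over time, and maps the concatenated text representations to a click probability. The character vocabulary, layer sizes, and the way the two text towers are combined are assumptions made for illustration, not the architecture reported in the paper.

```python
# Minimal sketch (assumed architecture, not the paper's exact model): a
# character-level CNN that maps the raw text of a query-ad pair to a CTR estimate.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789 -_.,!?"   # assumed character vocabulary
CHAR2ID = {c: i + 1 for i, c in enumerate(CHARS)}        # id 0 is reserved for padding
MAX_LEN = 128                                            # assumed fixed input length

def encode(text: str) -> torch.Tensor:
    """Map a string to a fixed-length tensor of character ids (truncate and pad)."""
    ids = [CHAR2ID.get(c, 0) for c in text.lower()[:MAX_LEN]]
    return torch.tensor(ids + [0] * (MAX_LEN - len(ids)), dtype=torch.long)

class CharCNNTower(nn.Module):
    """Embeds characters and max-pools convolutional features into one text vector."""
    def __init__(self, emb_dim=16, n_filters=64, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.emb = nn.Embedding(len(CHARS) + 1, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes)
        self.out_dim = n_filters * len(kernel_sizes)

    def forward(self, char_ids):                        # (batch, MAX_LEN)
        x = self.emb(char_ids).transpose(1, 2)          # (batch, emb_dim, MAX_LEN)
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)                  # (batch, out_dim)

class CharLevelCTR(nn.Module):
    """Concatenates query and ad towers and predicts a click probability."""
    def __init__(self):
        super().__init__()
        self.query_tower, self.ad_tower = CharCNNTower(), CharCNNTower()
        self.head = nn.Sequential(
            nn.Linear(self.query_tower.out_dim + self.ad_tower.out_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, query_ids, ad_ids):
        z = torch.cat([self.query_tower(query_ids), self.ad_tower(ad_ids)], dim=1)
        return torch.sigmoid(self.head(z)).squeeze(1)   # predicted CTR in (0, 1)

model = CharLevelCTR()
q = encode("cheap flights to rome").unsqueeze(0)
a = encode("book low-cost flights - compare airlines today").unsqueeze(0)
print(model(q, a))  # untrained output, roughly 0.5 before any fitting
```

    Such a model would be trained with a binary cross-entropy loss on logged click data; the point of the sketch is simply that nothing beyond the raw query and ad text enters the input, which is the feature-engineering saving the abstract emphasizes.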