15 research outputs found

    Characterizing Optimal Adword Auctions

    We present a number of models for the adword auctions used for pricing advertising slots on search engines such as Google, Yahoo!, etc. We begin with a general problem formulation which allows the privately known valuation per click to be a function of both the identity of the advertiser and the slot. We present a compact characterization of the set of all deterministic incentive compatible direct mechanisms for this model. This new characterization allows us to conclude that there are incentive compatible mechanisms for this auction with a multi-dimensional type space that are not affine maximizers. Next, we discuss two interesting special cases: slot-independent valuation, and slot-independent valuation up to a privately known slot and zero thereafter. For both of these special cases, we characterize revenue-maximizing and efficiency-maximizing mechanisms and show that these mechanisms can be computed with a worst-case computational complexity of O(n^2 m^2) and O(n^2 m^3) respectively, where n is the number of bidders and m is the number of slots. Next, we characterize optimal rank-based allocation rules and propose a new mechanism that we call the customized rank-based allocation. We report the results of a numerical study comparing the revenue and efficiency of the proposed mechanisms. The numerical results suggest that the customized rank-based allocation rule is significantly superior to the rank-based allocation rules. Comment: 29 pages; this work was presented at a) the Second Workshop on Sponsored Search Auctions, Ann Arbor, MI, b) the INFORMS Annual Meeting, Pittsburgh, and c) the Decision Sciences Seminar, Fuqua School of Business, Duke University
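
    The rank-based rules the abstract compares are not spelled out here; as background, a minimal sketch of a generic rank-based allocation (the bid values and function names are illustrative, and this is not the paper's customized rule):

```python
# Illustrative sketch of a generic rank-based allocation rule for a slot
# auction: bidders are ranked by bid and the top m bidders receive the m
# slots in order. Background for the abstract above, not the paper's
# "customized" rule, whose details are not given here.

def rank_based_allocation(bids, num_slots):
    """Return a list of (bidder_index, slot_index) pairs.

    bids      -- per-click bids, one float per bidder
    num_slots -- number of ad slots (slot 0 is assumed best)
    """
    # Rank bidders by bid, highest first; ties broken by bidder index.
    ranking = sorted(range(len(bids)), key=lambda i: -bids[i])
    return [(bidder, slot) for slot, bidder in enumerate(ranking[:num_slots])]

# Example: 4 bidders, 2 slots.
print(rank_based_allocation([0.30, 0.55, 0.10, 0.42], 2))
# [(1, 0), (3, 1)]
```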

    Dynamic Online-Advertising Auctions as Stochastic Scheduling

    We study dynamic models of online-advertising auctions on the Internet: advertisers compete for space on a web page over multiple time periods, and the web page displays ads in differentiated slots based on their bids and other considerations. The complex interactions between the advertisers and the website (which owns the web page) are modeled as a dynamic game. Our goal is to derive ad-slot placement and pricing strategies which maximize the expected revenue of the website. We show that the problem can be transformed into a scheduling problem familiar to queueing theorists. When only one advertising slot is available on a web page, we derive the optimal revenue-maximizing solution by making connections to the familiar cμ rule used in queueing theory. More generally, we show that a cμ-like rule can serve as a good suboptimal solution, while the optimal solution itself may be computed using dynamic programming techniques.
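
    The cμ rule the abstract refers to is the classic queueing-theory priority index; a minimal sketch of that rule, with hypothetical advertiser data standing in for queue classes:

```python
# Minimal sketch of the classic c-mu priority rule from queueing theory,
# which the paper connects to slot allocation: among waiting classes,
# always serve the one with the largest (holding cost c) x (service rate mu)
# index. The advertiser interpretation and the numbers are illustrative.

def c_mu_priority(classes):
    """Pick the class to serve next under the c-mu rule.

    classes -- dict mapping class name to (c, mu):
               c  = holding cost per unit time,
               mu = service rate (1 / mean service time)
    """
    return max(classes, key=lambda k: classes[k][0] * classes[k][1])

# Example: advertiser B has the highest c*mu index and is served first.
advertisers = {"A": (2.0, 0.5), "B": (1.5, 1.0), "C": (3.0, 0.4)}
print(c_mu_priority(advertisers))  # "B" (index 1.5 vs 1.0 and 1.2)
```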

    On the Benefits of Keyword Spreading in Sponsored Search Auctions: An Experimental Analysis

    Sellers of goods or services wishing to participate in sponsored search auctions must define a pool of keywords that are matched online to the queries submitted by users to a search engine. Sellers must also define the value of their bid to the search engine for showing their advertisements in case of a query-keyword match. In order to optimize its revenue, a seller might decide to substitute a keyword with a high cost, and thus likely to be the object of intense competition, with sets of related keywords that collectively have lower cost while capturing an equivalent volume of user clicks. This technique is called keyword spreading and has recently attracted the attention of several researchers in the area of sponsored search auctions. In this paper we describe an experimental benchmark that, through large-scale realistic simulations, allows us to pinpoint the potential benefits and drawbacks of keyword spreading for the players using this technique, for those not using it, and for the search engine itself. Experimental results reveal that keyword spreading is generally beneficial (or non-damaging) to all parties involved.
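
    The abstract does not prescribe a spreading heuristic; as an illustration only, a greedy sketch that swaps one expensive keyword for cheaper related ones, assuming hypothetical per-keyword click and cost estimates:

```python
# Illustrative greedy sketch of keyword spreading: replace one expensive
# keyword with cheaper related keywords until their combined click volume
# matches the original. The data and the greedy criterion (clicks per unit
# cost) are assumptions for illustration; the paper evaluates the technique
# experimentally rather than prescribing this heuristic.

def spread_keyword(target_clicks, candidates):
    """Greedily pick substitutes until their click volume covers the target.

    candidates -- dict: keyword -> (expected_clicks, expected_cost)
    Returns (chosen keywords, total clicks, total cost).
    """
    chosen, clicks, cost = [], 0.0, 0.0
    # Most clicks per unit of cost first.
    for kw in sorted(candidates, key=lambda k: -candidates[k][0] / candidates[k][1]):
        if clicks >= target_clicks:
            break
        chosen.append(kw)
        clicks += candidates[kw][0]
        cost += candidates[kw][1]
    return chosen, clicks, cost

# Example: cover the 100 clicks of a pricey head keyword with tail keywords.
tail = {"running shoes men": (60, 30.0),
        "jogging sneakers": (50, 20.0),
        "marathon footwear": (30, 25.0)}
print(spread_keyword(100, tail))
# (['jogging sneakers', 'running shoes men'], 110, 50.0)
```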

    Optimal slot restriction and slot supply strategy in a keyword auction


    MECHANISM DESIGN WITH GENERAL UTILITIES

    This thesis studies mechanism design from an optimization perspective. Our main contribution is to characterize fundamental structural properties of optimization problems arising in mechanism design and to exploit them to design general frameworks and techniques for efficiently solving the underlying problems. Not only do our characterizations allow for efficient computation, they also reveal qualitative characteristics of optimal mechanisms which are important even from a non-computational standpoint. Furthermore, most of our techniques are widely applicable to optimization problems outside of mechanism design, such as online algorithms or stochastic optimization. Our frameworks can be summarized as follows. When the input to an optimization problem (e.g., a mechanism design problem) comes from independent sources (e.g., independent agents), the complexity of the problem can be exponentially reduced by (i) decomposing the problem into smaller subproblems, each one involving one input source, (ii) simultaneously optimizing the subproblems subject to a certain relaxation of the coupling constraints, and (iii) combining the solutions of the subproblems in a certain way to obtain an (approximately) optimal solution for the original problem. We use our proposed framework to construct optimal or approximately optimal mechanisms for several settings previously considered in the literature and to improve upon the best previously known results. We also present applications of our techniques to non-mechanism-design problems such as the online stochastic generalized assignment problem, which itself captures online and stochastic versions of various other problems such as resource allocation and job scheduling.
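
    The decompose/relax/combine scheme in steps (i)-(iii) can be made concrete with a toy Lagrangian relaxation; everything below (agents, options, the candidate price grid) is an illustrative assumption, not the thesis's actual construction:

```python
# A minimal sketch of the decompose / relax / recombine idea described
# above, using Lagrangian relaxation of a single coupling constraint.
# Agents, options, and the shared budget are hypothetical toys.

def solve_subproblem(options, price):
    """One agent's subproblem: pick the option with the best priced-out value.

    options -- list of (value, resource_usage) pairs for this agent
    price   -- Lagrange multiplier on the shared resource
    """
    return max(options, key=lambda o: o[0] - price * o[1])

def decompose_and_combine(agents, budget, prices):
    """Try candidate prices; keep the best combined feasible solution."""
    best = None
    for lam in prices:
        picks = [solve_subproblem(opts, lam) for opts in agents]  # steps (i)+(ii)
        usage = sum(u for _, u in picks)
        value = sum(v for v, _ in picks)
        if usage <= budget and (best is None or value > best[0]):
            best = (value, lam, picks)                            # step (iii)
    return best

# Example: two agents share a budget of 3 resource units.
agents = [[(5, 2), (3, 1)], [(6, 3), (4, 1)]]
print(decompose_and_combine(agents, budget=3, prices=[0.0, 1.0, 2.0, 3.0]))
# (9, 2.0, [(5, 2), (4, 1)])  -- each subproblem solved independently
```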

    Analysis and application of information diffusion in microblogs

    Microblog services (such as Twitter and Sina Weibo) have become an important platform for Internet content sharing. As the information in microblogs is widely used in public opinion mining, viral marketing, and political campaigns, understanding how information diffuses over microblogs, and explaining the process through which some tweets become popular, are important. The analysis of information diffusion in microblogs involves collecting data from the microblog, modeling information spreading, and using the resulting models. Dealing with the huge amount of data flowing through microblogs is by itself a challenge, so designing an efficient and unbiased sampling algorithm for microblogs is essential. Besides, the retweeting process in microblogs is complex because of the ephemerality of information, the topology of the microblog network, and the particular features (such as the number of followers) of publishers and retweeters. Two traditional models have been used for information diffusion: the Independent Cascades and Linear Threshold models. However, neither of them can describe the retweeting process in microblogs completely and accurately, so the analysis and design of new models to characterize information diffusion in microblogs is necessary. Moreover, a comprehensive description of the correlation between information diffusion in microblogs and the search trends of keywords on search engines is lacking, although some work has found preliminary relationships.

    This work presents a complete analysis of information diffusion in microblogs. The contributions and innovations of this thesis are as follows:

    1) There are two popular unbiased Online Social Network (OSN) sampling algorithms, the Metropolis-Hastings Random Walk (MHRW) and the Unbiased Sampling for Directed Social Graphs (USDSG) method. However, both are likely to yield considerable self-sampling probabilities when applied to microblogs. To solve this problem, I have modeled the process of OSN sampling as a Markov process and deduced the sufficient and necessary conditions for unbiased sampling. Based on these unbiasedness conditions, I propose an efficient and unbiased sampling algorithm, the Unbiased Sampling method with Dummy Edges (USDE), which strongly reduces the self-sampling probabilities of MHRW. The experimental evaluation demonstrates that the average node degree of samples from MHRW and USDSG is 2-4 times as high as the ground truth, while USDE approximates the ground truth once sampling repetitions are removed. Moreover, the average sampling time per node of USDE is only half that of MHRW and USDSG. A background sketch of the MHRW baseline is given after this abstract.

    2) A second contribution targets the shortcomings of the Independent Cascades (IC) and Linear Threshold (LT) models in characterizing the retweeting process in microblogs. I achieve this by introducing a Galton-Watson with Killing (GWK) model which accurately accounts for all three important factors: the ephemerality of information, the topology of the network, and the features of publishers and retweeters. We have validated the applicability of the GWK model on two datasets, from Sina Weibo and Twitter, and showed that the GWK model can fit 82% of information receivers and 90% of the maximum numbers of hops in the real retweeting process. Besides, the GWK model is useful for revealing the endogenous and exogenous factors which affect the popularity of tweets.

    3) Motivated by the correlation between the popularity and trendiness of topics in microblogs and search trends, I have developed an economic analysis of the market involving a third-party ad broker, a popular arrangement in current search engine marketing, and find that augmenting adwords with trending and popular topics from Twitter enables the broker to achieve, on average, a fourfold larger return on investment than a non-augmented strategy, while still maintaining the same level of risk.
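
    USDE itself is not specified in this abstract; for context, a minimal sketch of the MHRW baseline it improves on, showing where the self-sampling the abstract mentions comes from (the example graph is hypothetical):

```python
# Background sketch of Metropolis-Hastings Random Walk (MHRW) node
# sampling, the baseline USDE improves on. The acceptance step makes the
# stationary distribution uniform over nodes, but each rejection
# re-samples the current node -- the self-sampling the abstract refers to.
# The graph is a hypothetical undirected adjacency dict.

import random

def mhrw_sample(graph, start, steps):
    """Walk `steps` steps; return the multiset of visited nodes."""
    samples, u = [], start
    for _ in range(steps):
        w = random.choice(graph[u])              # propose a uniform neighbor
        if random.random() < min(1.0, len(graph[u]) / len(graph[w])):
            u = w                                # accept the move
        samples.append(u)                        # a rejection re-samples u
    return samples

# Example: a star graph; moves into the high-degree hub are often rejected,
# so low-degree leaves are not under-represented in the long run.
graph = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(mhrw_sample(graph, "hub", 10))
```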

    Marketing Agencies and Collusive Bidding in Online Ad Auctions

    The transition of the advertising market from traditional media to the Internet has induced a proliferation of marketing agencies specialized in bidding in the auctions that are used to sell ad space on the web. We analyze how collusive bidding can emerge from bid delegation to a common marketing agency and how this can undermine the revenues and allocative efficiency of both the Generalized Second Price auction (GSP, used by Google, Microsoft Bing, and Yahoo!) and the VCG mechanism (used by Facebook). We find that, despite its well-known susceptibility to collusion, the VCG mechanism outperforms the GSP auction both in terms of revenues and efficiency.
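
    The two mechanisms compared here have standard textbook payment rules; a hedged sketch computing both for a toy position auction with separable click-through rates (all numbers are made up, and no collusion is modeled):

```python
# Textbook-style comparison of GSP and VCG payments in a position auction
# with separable click-through rates (CTRs). Bids and CTRs are invented;
# bids are taken at face value under GSP and assumed truthful under VCG.

def position_auction_payments(bids, ctrs):
    """Return per-slot expected payments (gsp, vcg) for the top bidders.

    bids -- per-click bids, sorted in decreasing order
    ctrs -- slot CTRs, decreasing; len(ctrs) slots are for sale
    """
    k = len(ctrs)
    # GSP: the winner of slot i pays the next-highest bid per click.
    gsp = [bids[i + 1] * ctrs[i] if i + 1 < len(bids) else 0.0
           for i in range(min(k, len(bids)))]
    # VCG: the winner of slot i pays the externality it imposes on the
    # bidders below, i.e. their displaced value over CTR differences.
    padded = ctrs + [0.0]
    vcg = []
    for i in range(min(k, len(bids))):
        vcg.append(sum(bids[j] * (padded[j - 1] - padded[j])
                       for j in range(i + 1, min(k + 1, len(bids)))))
    return gsp, vcg

# Example: 3 bidders, 2 slots with CTRs 0.5 and 0.3.
print(position_auction_payments([1.0, 0.8, 0.5], [0.5, 0.3]))
# GSP: [0.40, 0.15]; VCG: [0.31, 0.15]
```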

    A Dynamic Model of Sponsored Search Advertising

    Sponsored search advertising is ascendant: Jupiter Research reports expenditures rose 28% in 2007 to $8.9B and will continue to rise at a 15% CAGR, making it one of the major trends to affect the marketing landscape. Yet little, if any, empirical research focuses upon search engine marketing strategy by integrating the behavior of the various agents in sponsored search advertising (i.e., searchers, advertisers, and the search engine platform). The dynamic structural model we propose serves as a foundation to explore these and other sponsored search advertising phenomena. Fitting the model to a proprietary data set provided by an anonymous search engine, we conduct several policy simulations to illustrate the benefits of our approach. First, we explore how information asymmetries between search engines and advertisers can be exploited to enhance platform revenues. This has consequences for the pricing of market intelligence. Second, we assess the effect of allowing advertisers to bid not only on keywords, but also on consumers' search histories and demographics, thereby creating a more targeted model of advertising. Third, we explore several different auction pricing mechanisms and assess the role of each on engine and advertiser profits and revenues. Finally, we consider the role of consumer search tools such as sorting on consumer and advertiser behavior and engine revenues. One key finding is that the estimated advertiser value for a click on its sponsored link averages about 24 cents. Given the typical $22 retail price of the software products advertised on the considered search engine, this implies a conversion rate (sales per click) of about 1.1%, well within common estimates of 1-2% (gamedaily.com). Hence our approach appears to yield valid estimates of advertiser click valuations. Another finding is that customers appear to be segmented by their clicking frequency, with frequent clickers placing a greater emphasis on the position of the sponsored advertising link. Estimation of the policy simulations is in progress.
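
    The conversion-rate inference is a one-line calculation; spelled out with the abstract's own figures:

```latex
% Back-of-envelope from the abstract's figures: a truthful advertiser's
% value per click equals (value per sale) x (conversion rate), so
\[
\text{conversion rate} \approx
  \frac{\$0.24 \text{ per click}}{\$22 \text{ per sale}} \approx 1.1\%,
\]
% which falls inside the quoted 1--2% industry range.
```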