You Must Have Clicked on this Ad by Mistake! Data-Driven Identification of Accidental Clicks on Mobile Ads with Applications to Advertiser Cost Discounting and Click-Through Rate Prediction
In the cost per click (CPC) pricing model, an advertiser pays an ad network
only when a user clicks on an ad; in turn, the ad network gives a share of that
revenue to the publisher where the ad was impressed. Still, advertisers may be
unsatisfied with ad networks charging them for "valueless" clicks, or so-called
accidental clicks. [...] Charging advertisers for such clicks is detrimental in
the long term as the advertiser may decide to run their campaigns on other ad
networks. In addition, machine-learned click models trained to predict which ad
will bring the highest revenue may overestimate an ad's click-through rate and,
as a consequence, negatively impact revenue for both the ad network and the
publisher. In this work, we propose a data-driven method to detect accidental
clicks from the perspective of the ad network. We collect observations of time
spent by users on a large set of ad landing pages - i.e., dwell time. We notice
that the majority of per-ad dwell-time distributions fit a mixture of
distributions, where each component may correspond to a particular type of
click, the first being accidental. We then estimate dwell-time thresholds
for accidental clicks from that component. Using our method to identify
accidental clicks, we then propose a technique that smoothly discounts the
advertiser's cost of accidental clicks at billing time. Experiments conducted
on a large dataset of ads served on Yahoo mobile apps confirm that our
thresholds are stable over time, and revenue loss in the short term is
marginal. We also compare the performance of an existing machine-learned click
model trained on all ad clicks with that of the same model trained only on
non-accidental clicks. There, we observe an increase in both ad click-through
rate (+3.9%) and revenue (+0.2%) on ads served by the Yahoo Gemini network when
using the latter. [...
Using Search Queries to Understand Health Information Needs in Africa
The lack of comprehensive, high-quality health data in developing nations
creates a roadblock for combating the impacts of disease. One key challenge is
understanding the health information needs of people in these nations. Without
understanding people's everyday needs, concerns, and misconceptions, health
organizations and policymakers lack the ability to effectively target education
and programming efforts. In this paper, we propose a bottom-up approach that
uses search data from individuals to uncover and gain insight into health
information needs in Africa. We analyze Bing searches related to HIV/AIDS,
malaria, and tuberculosis from all 54 African nations. For each disease, we
automatically derive a set of common search themes or topics, revealing a
wide-spread interest in various types of information, including disease
symptoms, drugs, concerns about breastfeeding, as well as stigma, beliefs in
natural cures, and other topics that may be hard to uncover through traditional
surveys. We expose the different patterns that emerge in health information
needs by demographic groups (age and sex) and country. We also uncover
discrepancies in the quality of content returned by search engines to users by
topic. Combined, our results suggest that search data can help illuminate
health information needs in Africa and inform discussions on health policy and
targeted education efforts both on- and offline. Comment: Extended version of an ICWSM 2019 paper
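A toy illustration of surfacing common search themes by term frequency; the queries and stopword list below are invented, and the paper derives topics automatically, at much larger scale and with more robust methods.

```python
import re
from collections import Counter

# Invented sample of health-related queries, for illustration only.
queries = [
    "hiv symptoms in men", "can i breastfeed with hiv",
    "natural cure for malaria", "malaria drugs side effects",
    "tb symptoms cough", "tuberculosis drugs resistance",
    "hiv natural cure", "malaria symptoms fever",
]
stopwords = {"in", "for", "with", "can", "i", "the", "a", "of"}

def themes(qs, top=5):
    """Count frequent non-stopword terms as crude theme labels."""
    terms = Counter()
    for q in qs:
        terms.update(t for t in re.findall(r"[a-z]+", q.lower())
                     if t not in stopwords)
    return terms.most_common(top)

print(themes(queries))
```

Splitting the query set by demographic group or country before counting would give the kind of per-group comparison the paper reports.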
Studying Ransomware Attacks Using Web Search Logs
Cyber attacks are increasingly becoming prevalent and causing significant
damage to individuals, businesses and even countries. In particular, ransomware
attacks have grown significantly over the last decade. We present the first
study mining insights about ransomware attacks by analyzing query logs from the
Bing web search engine. We first extract ransomware-related queries and then build a
machine learning model to identify queries where users are seeking support for
ransomware attacks. We show that user search behavior and characteristics are
correlated with ransomware attacks. We also analyse temporal and geographic
trends and validate our findings against publicly available information.
Lastly, we conduct a case study on 'Nemty', a popular ransomware, to show that
it is possible to derive accurate insights about cyber attacks through query
log analysis. Comment: To appear in the proceedings of SIGIR 202
Generalized Team Draft Interleaving
Interleaving is an online evaluation method that compares two ranking
functions by mixing their results and interpreting the users' click feedback.
An important property of an interleaving method is its sensitivity, i.e. the
ability to obtain reliable comparison outcomes with few user interactions.
Several methods have been proposed so far to improve interleaving sensitivity,
which can be roughly divided into two areas: (a) methods that optimize the
credit assignment function (how the click feedback is interpreted), and (b)
methods that achieve higher sensitivity by controlling the interleaving policy
(how often a particular interleaved result page is shown).
In this paper, we propose an interleaving framework that generalizes the
previously studied interleaving methods in two aspects. First, it achieves a
higher sensitivity by performing a joint data-driven optimization of the
credit assignment function and the interleaving policy. Second, we formulate
the framework to be general w.r.t. the search domain where the interleaving
experiment is deployed, so that it can be applied in domains with grid-based
presentation, such as image search. In order to simplify the optimization, we
additionally introduce a stratified estimate of the experiment outcome. This
stratification is also useful on its own, as it reduces the variance of the
outcome and thus increases the interleaving sensitivity.
We perform an extensive experimental study using large-scale document and
image search datasets obtained from a commercial search engine. The
experiments show that our proposed framework achieves marked improvements in
sensitivity over effective baselines on both datasets.
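For reference, classic team-draft interleaving, which this paper generalizes, can be sketched as below; the document IDs and click positions in the usage example are made up.

```python
import random

def team_draft_interleave(a, b, rng):
    """Classic team-draft interleaving: teams alternate picks, with a
    coin flip breaking ties in how many results each has contributed."""
    lists, idx, count = {"A": a, "B": b}, {"A": 0, "B": 0}, {"A": 0, "B": 0}
    interleaved, teams, seen = [], [], set()
    while idx["A"] < len(a) or idx["B"] < len(b):
        if count["A"] != count["B"]:
            t = "A" if count["A"] < count["B"] else "B"
        else:
            t = rng.choice("AB")
        if idx[t] >= len(lists[t]):      # chosen team exhausted: use the other
            t = "B" if t == "A" else "A"
        i = idx[t]
        while i < len(lists[t]) and lists[t][i] in seen:
            i += 1                       # skip results already shown
        if i < len(lists[t]):
            doc = lists[t][i]
            interleaved.append(doc); teams.append(t)
            seen.add(doc); count[t] += 1
            idx[t] = i + 1
        else:
            idx[t] = i                   # team ran out of fresh results
    return interleaved, teams

def credit(teams, clicked_positions):
    """Credit assignment: each click counts for the contributing team."""
    wins = {"A": 0, "B": 0}
    for pos in clicked_positions:
        wins[teams[pos]] += 1
    return wins

rng = random.Random(42)
mixed, teams = team_draft_interleave(["d1", "d2", "d3"], ["d3", "d4", "d5"], rng)
print(mixed, teams)
print(credit(teams, clicked_positions=[0, 2]))
```

The paper's framework replaces both this fixed credit function and the uniform coin flip with jointly optimized, data-driven counterparts.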
User Acquisition and Engagement in Digital News Media
Generating revenue has been a major issue for the news industry and journalism over the past decade. The vast availability of free online news sources forces online news media agencies to confront user acquisition and engagement as more pressing issues than ever before. Although digital news media agencies seek sustainable relationships with their users, their current business models do not satisfy this demand. They need to understand and predict how much an article can engage a reader, a crucial step in attracting readers, and then maximize that engagement with suitable strategies. Moreover, news media companies need effective algorithmic tools to identify users who are prone to subscription. Last but not least, online news agencies need to make smarter decisions in the way they deliver articles to users to maximize the potential benefits.
In this dissertation, we take the first steps towards achieving these goals and investigate these challenges from a data mining/machine learning perspective. First, we investigate the problem of understanding and predicting article engagement in terms of dwell time, one of the most important factors in digital news media. In particular, we design exploratory data models that study the textual elements (e.g., events, emotions) involved in article stories and find their relationships with engagement patterns. For the prediction task, we design a framework that predicts article dwell time with a deep neural network architecture, which exploits the interactions among important elements (i.e., augmented features) in the article content, as well as the neural representation of the content, to achieve better performance.
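As a minimal, purely illustrative stand-in for this setup, one might concatenate hand-crafted "augmented" features with a content vector and fit a simple learner; the features, synthetic data, and linear model below are assumptions, whereas the dissertation uses a deep neural architecture.

```python
# Stand-in for dwell-time prediction: concatenate "augmented" article
# features with a (toy) content vector and fit a linear model by SGD.
import random

random.seed(0)

def make_example():
    # Hypothetical inputs: [n_events, emotion_score] + 3-dim content vector.
    aug = [random.random(), random.random()]
    content = [random.random() for _ in range(3)]
    x = aug + content                                  # feature concatenation
    true_w = [2.0, -1.0, 0.5, 0.5, 0.5]
    y = sum(wi * xi for wi, xi in zip(true_w, x))      # synthetic dwell time
    return x, y

data = [make_example() for _ in range(200)]
w = [0.0] * 5
lr = 0.1
for _ in range(300):                   # plain SGD on squared error
    for x, y in data:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        g = 2 * (pred - y)
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]

print([round(wi, 2) for wi in w])      # recovers the synthetic weights
```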
In the second part of the dissertation, we address the problem of identifying valuable visitors who are likely to subscribe in the future. We suggest that the decision to subscribe is not a sudden, instantaneous action, but an informed decision based on positive experiences with the newspaper. As such, we propose effective engagement measures and show that they are effective in building a predictive model for subscription. We design a model that predicts not only the potential subscribers but also when a user would subscribe.
In the last part of this thesis, we consider the paywall problem in online newspapers. The traditional paywall method offers a non-subscribed reader a fixed number of free articles in a period of time (e.g., a month), and then directs the user to the subscription page for further reading. We argue that there is no direct relationship between the number of paywalls presented to readers and the number of subscriptions, and that this artificial barrier, if not used well, may disengage potential subscribers and thus fail to serve its purpose of increasing revenue. We propose an adaptive paywall mechanism to balance the benefit of showing an article against that of displaying the paywall (i.e., terminating the session). We first define notions of cost and utility, which we use to define an objective function for optimal paywall decision making. Then, we model the problem as a stochastic sequential decision process. Finally, we propose an efficient policy function for paywall decision making.
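A one-step sketch of such a cost/utility trade-off might look like this; the probabilities, values, and the simple decision rule are hypothetical, and the thesis models the full problem as a stochastic sequential decision process rather than a single comparison.

```python
def show_paywall(p_sub, engagement, sub_value=1.0, article_value=0.05):
    """One-step expected-utility rule: paywall pays off when the chance of
    converting now outweighs the engagement value of a free article,
    penalized by the risk of disengaging a reader who would not convert."""
    u_paywall = p_sub * sub_value - (1 - p_sub) * engagement * article_value
    u_article = article_value
    return u_paywall > u_article

# Low-propensity reader: keep serving articles; high-propensity: paywall.
print(show_paywall(p_sub=0.02, engagement=0.9))  # False
print(show_paywall(p_sub=0.40, engagement=0.9))  # True
```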
All the proposed models are evaluated on real datasets from The Globe and Mail, a major Canadian newspaper. However, the proposed techniques are not tied to any particular dataset or strict requirements; rather, they are designed around data and settings that are available and common to most newspapers. Therefore, the models are general and can be applied by any online newspaper to improve user engagement and acquisition.
Scalable Semantic Matching of Queries to Ads in Sponsored Search Advertising
Sponsored search represents a major source of revenue for web search engines.
This popular advertising model brings a unique possibility for advertisers to
target users' immediate intent communicated through a search query, usually by
displaying their ads alongside organic search results for queries deemed
relevant to their products or services. However, due to the large number of
unique queries, it is challenging for advertisers to identify all such relevant
queries. For this reason search engines often provide a service of advanced
matching, which automatically finds additional relevant queries for advertisers
to bid on. We present a novel advanced matching approach based on the idea of
semantic embeddings of queries and ads. The embeddings were learned using a
large data set of user search sessions, consisting of search queries, clicked
ads and search links, while utilizing contextual information such as dwell time
and skipped ads. To address the large-scale nature of our problem, both in
terms of data and vocabulary size, we propose a novel distributed algorithm for
training of the embeddings. Finally, we present an approach for overcoming a
cold-start problem associated with new ads and queries. We report results of
editorial evaluation and online tests on actual search traffic. The results
show that our approach significantly outperforms baselines in terms of
relevance, coverage, and incremental revenue. Lastly, we open-source learned
query embeddings to be used by researchers in computational advertising and
related fields. Comment: 10 pages, 4 figures, 39th International ACM SIGIR Conference on
Research and Development in Information Retrieval, SIGIR 2016, Pisa, Italy
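To illustrate how learned embeddings drive advanced matching, here is a toy cosine-similarity sketch; the vectors, names, and threshold are invented, whereas the paper learns query and ad embeddings from large-scale session data.

```python
import math

# Hand-made toy embeddings standing in for vectors learned from sessions.
vecs = {
    "cheap flights":        [0.9, 0.1, 0.0],
    "discount air tickets": [0.8, 0.2, 0.1],
    "hotel deals":          [0.2, 0.9, 0.1],
    "ad:airline-sale":      [0.85, 0.15, 0.05],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) *
                  math.sqrt(sum(y * y for y in v)))

def advanced_match(ad, queries, threshold=0.9):
    """Return candidate queries similar enough to the ad to bid on."""
    return [q for q in queries if cosine(vecs[ad], vecs[q]) >= threshold]

print(advanced_match("ad:airline-sale",
                     ["cheap flights", "discount air tickets", "hotel deals"]))
```

New ads with no embedding would miss from `vecs` entirely, which is the cold-start problem the paper addresses separately.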
Studying Interaction Methodologies in Video Retrieval
So far, several approaches have been studied to bridge the Semantic Gap, the bottleneck in image and video retrieval. However, no approach has been successful enough to increase retrieval performance significantly. One reason is the lack of understanding of the user's interests, a major precondition for adapting results to a user. This is partly due to the lack of appropriate interfaces and the missing knowledge of how to interpret users' actions with these interfaces. In this paper, we propose to study the importance of various implicit indicators of relevance. Furthermore, we propose to investigate how this implicit feedback can be combined with static user profiles towards an adaptive video retrieval model.