Pricing average price advertising options when underlying spot market prices are discontinuous
Advertising options have been recently studied as a special type of
guaranteed contracts in online advertising, which are an alternative sales
mechanism to real-time auctions. An advertising option is a contract which
gives its buyer a right but not obligation to enter into transactions to
purchase page views or link clicks at one or multiple pre-specified prices in a
specific future period. Unlike typical guaranteed contracts, the option buyer
pays a lower upfront fee while gaining greater flexibility and more control
over advertising. Many studies on advertising options so far have been
restricted to the situations where the option payoff is determined by the
underlying spot market price at a specific time point and the price evolution
over time is assumed to be continuous. The former leads to a biased calculation
of option payoff and the latter is invalid empirically for many online
advertising slots. This paper addresses these two limitations by proposing a
new advertising option pricing framework. First, the option payoff is
calculated based on an average price over a specific future period. Therefore,
the option becomes path-dependent. The average price is measured by the power
mean, which contains several existing option payoff functions as its special
cases. Second, jump-diffusion stochastic models are used to describe the
movement of the underlying spot market price, which incorporate several
important statistical properties including jumps and spikes, non-normality, and
absence of autocorrelations. A general option pricing algorithm is obtained
based on Monte Carlo simulation. In addition, an explicit pricing formula is
derived for the case when the option payoff is based on the geometric mean.
This pricing formula is also a generalized version of several other option
pricing models discussed in related studies.
Comment: IEEE Transactions on Knowledge and Data Engineering, 201
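The Monte Carlo approach described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's calibrated model: the Merton-style compound-Poisson jump term in log-price, all parameter values, and the call-style payoff on the power mean are assumptions made here for demonstration.

```python
import numpy as np

def simulate_jump_diffusion(s0, mu, sigma, lam, jump_mu, jump_sigma,
                            T, n_steps, n_paths, rng):
    """Simulate spot-price paths with Gaussian diffusion plus compound-Poisson
    jumps in log-price (a Merton-style jump-diffusion)."""
    dt = T / n_steps
    log_s = np.full(n_paths, np.log(s0))
    paths = np.empty((n_paths, n_steps + 1))
    paths[:, 0] = s0
    for t in range(1, n_steps + 1):
        drift_diff = (mu - 0.5 * sigma ** 2) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)  # jumps arriving this step
        jumps = n_jumps * jump_mu \
            + np.sqrt(n_jumps) * jump_sigma * rng.standard_normal(n_paths)
        log_s = log_s + drift_diff + jumps
        paths[:, t] = np.exp(log_s)
    return paths

def power_mean(prices, p):
    """Power mean over each path; p=1 is the arithmetic mean, and the limit
    p -> 0 recovers the geometric mean."""
    if abs(p) < 1e-12:
        return np.exp(np.log(prices).mean(axis=1))
    return np.mean(prices ** p, axis=1) ** (1.0 / p)

def average_price_call(strike, r, T, paths, p):
    """Discounted expected payoff of a call written on the power-mean price."""
    payoff = np.maximum(power_mean(paths, p) - strike, 0.0)
    return np.exp(-r * T) * payoff.mean()

rng = np.random.default_rng(0)
paths = simulate_jump_diffusion(s0=1.0, mu=0.05, sigma=0.3, lam=2.0,
                                jump_mu=0.0, jump_sigma=0.1, T=1.0,
                                n_steps=250, n_paths=20000, rng=rng)
price = average_price_call(strike=1.0, r=0.05, T=1.0, paths=paths, p=1.0)
```

Since the geometric mean never exceeds the arithmetic mean path by path, the p -> 0 price is a lower bound on the p = 1 price under the same simulated paths, which provides a cheap sanity check on an implementation.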
Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements
Emotion evoked by an advertisement plays a key role in influencing brand
recall and eventual consumer choices. Automatic ad affect recognition has
several useful applications. However, the use of content-based feature
representations does not give insights into how affect is modulated by aspects
such as the ad scene setting, salient object attributes and their interactions.
Neither do such approaches inform us on how humans prioritize visual
information for ad understanding. Our work addresses these lacunae by
decomposing video content into detected objects, coarse scene structure, object
statistics and actively attended objects identified via eye-gaze. We measure
the importance of each of these information channels by systematically
incorporating related information into ad affect prediction models. Contrary to
the popular notion that ad affect hinges on the narrative and the clever use of
linguistic and social cues, we find that actively attended objects and the
coarse scene structure better encode affective information as compared to
individual scene objects or conspicuous background elements.
Comment: Accepted for publication in the Proceedings of the 20th ACM
International Conference on Multimodal Interaction, Boulder, CO, US
G-softmax: Improving Intra-class Compactness and Inter-class Separability of Features
Intra-class compactness and inter-class separability are crucial indicators
to measure the effectiveness of a model to produce discriminative features,
where intra-class compactness indicates how close the features with the same
label are to each other and inter-class separability indicates how far away the
features with different labels are. In this work, we investigate intra-class
compactness and inter-class separability of features learned by convolutional
networks and propose a Gaussian-based softmax (G-softmax) function
that can effectively improve intra-class compactness and inter-class
separability. The proposed function is simple to implement and can easily
replace the softmax function. We evaluate the proposed G-softmax
function on classification datasets (i.e., CIFAR-10, CIFAR-100, and Tiny
ImageNet) and on multi-label classification datasets (i.e., MS COCO and
NUS-WIDE). The experimental results show that the proposed
-softmax function improves the state-of-the-art models across all
evaluated datasets. In addition, analysis of the intra-class compactness and
inter-class separability demonstrates the advantages of the proposed function
over the softmax function, which is consistent with the performance
improvement. More importantly, we observe that high intra-class compactness
and inter-class separability are linearly correlated with average precision on
MS COCO and NUS-WIDE. This implies that improving intra-class compactness and
inter-class separability would lead to improved average precision.
Comment: 15 pages, published in TNNLS
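The two indicators this abstract defines can be made concrete with a small metric function. A hedged sketch, using illustrative definitions (mean distance to the class centroid for compactness, mean pairwise centroid distance for separability) that the paper may define differently:

```python
import numpy as np

def compactness_and_separability(features, labels):
    """features: (n, d) array; labels: (n,) integer class labels.
    Returns (intra, inter): lower intra means more compact classes,
    higher inter means more separable classes."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Intra-class compactness: average distance of each feature to its own
    # class centroid, averaged over classes.
    intra = float(np.mean([
        np.linalg.norm(features[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ]))
    # Inter-class separability: average pairwise distance between centroids.
    inter = float(np.mean([
        np.linalg.norm(centroids[i] - centroids[j])
        for i in range(len(classes))
        for j in range(i + 1, len(classes))
    ]))
    return intra, inter
```

On features from a well-trained discriminative model one would expect intra to shrink and inter to grow relative to a plain-softmax baseline, mirroring the correlation with average precision reported in the abstract.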
Peer Review in the Age of Generative AI
Rapid advances in artificial intelligence (AI), including recent generative forms, are significantly impacting our lives and work. A key aspect of our work as IS researchers is the publishing of research articles, for which peer review serves as the primary means of quality control. While there have been debates about whether and to what extent AI can replace researchers in various domains, including IS, we lack an in-depth understanding of how AI can impact the peer review process. Considering the high volume of submissions and limited reviewer resources, there is a pressing need to use AI to augment the review process. At the same time, advances in AI have been accompanied by concerns about biases introduced by AI tools and the ethics of using them, among other issues such as hallucinations. The critical issues to understand are thus: how AI can augment and potentially automate the review process, what the pitfalls are in doing so, and what the implications are for IS research and peer review practice. I offer my views on these issues in this opinion piece.
Video Storytelling: Textual Summaries for Events
Bridging vision and natural language is a longstanding goal in computer
vision and multimedia research. While earlier works focus on generating a
single-sentence description for visual content, recent works have studied
paragraph generation. In this work, we introduce the problem of video
storytelling, which aims at generating coherent and succinct stories for long
videos. Video storytelling introduces new challenges, mainly due to the
diversity of the story and the length and complexity of the video. We propose
novel methods to address the challenges. First, we propose a context-aware
framework for multimodal embedding learning, where we design a Residual
Bidirectional Recurrent Neural Network to leverage contextual information from
past and future. Second, we propose a Narrator model to discover the underlying
storyline. The Narrator is formulated as a reinforcement learning agent which
is trained by directly optimizing the textual metric of the generated story. We
evaluate our method on the Video Story dataset, a new dataset that we have
collected to enable the study. We compare our method with multiple
state-of-the-art baselines, and show that our method achieves better
performance in terms of quantitative measures and a user study.
Comment: Published in IEEE Transactions on Multimedia
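The Residual Bidirectional Recurrent Neural Network mentioned above can be illustrated at the level of a single layer's forward pass in plain NumPy. This is a hedged sketch: the tanh recurrences, the output projection Wo, and the additive residual connection are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def birnn_residual_forward(x, Wf, Uf, Wb, Ub, Wo):
    """x: (T, d) sequence of clip features. Runs a tanh recurrence forward
    and backward over time, projects the concatenated hidden states back to
    d dims, and adds the input as a residual so context from past and future
    refines, rather than replaces, the original features."""
    T, d = x.shape
    h = Wf.shape[0]  # hidden size
    hf = np.zeros((T, h))
    hb = np.zeros((T, h))
    prev = np.zeros(h)
    for t in range(T):                 # forward direction (past context)
        prev = np.tanh(Wf @ x[t] + Uf @ prev)
        hf[t] = prev
    prev = np.zeros(h)
    for t in reversed(range(T)):       # backward direction (future context)
        prev = np.tanh(Wb @ x[t] + Ub @ prev)
        hb[t] = prev
    # Project concatenated states to d dims and add the residual input.
    return x + np.concatenate([hf, hb], axis=1) @ Wo

rng = np.random.default_rng(0)
T, d, h = 8, 16, 32
out = birnn_residual_forward(
    rng.standard_normal((T, d)),
    *(rng.standard_normal(s) * 0.1
      for s in [(h, d), (h, h), (h, d), (h, h), (2 * h, d)])
)
```

The residual keeps the output in the same feature space as the input, which is what lets such a layer be stacked or dropped into an existing multimodal embedding pipeline.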
Multi-keyword multi-click advertisement option contracts for sponsored search
In sponsored search, advertisement (abbreviated ad) slots are usually sold by
a search engine to an advertiser through an auction mechanism in which
advertisers bid on keywords. In theory, auction mechanisms have many desirable
economic properties. However, keyword auctions have a number of limitations
including: the uncertainty in payment prices for advertisers; the volatility in
the search engine's revenue; and the weak loyalty between advertiser and search
engine. In this paper we propose a special ad option that alleviates these
problems. In our proposal, an advertiser can purchase an option from a search
engine in advance by paying an upfront fee, known as the option price. The
advertiser then has the right, but not the obligation, to purchase clicks on
any of a pre-specified set of keywords at fixed costs-per-click (CPCs), up to
a specified number of clicks in a specified period of time. The proposed
option is closely related to a
special exotic option in finance that contains multiple underlying assets
(multi-keyword) and is also multi-exercisable (multi-click). This novel
structure has many benefits: advertisers can have reduced uncertainty in
advertising; the search engine can improve the advertisers' loyalty as well as
obtain a stable and increased expected revenue over time. Since the proposed ad
option can be implemented in conjunction with the existing keyword auctions,
the option price and corresponding fixed CPCs must be set such that there is no
arbitrage between the two markets. Option pricing methods are discussed and our
experimental results validate the proposed pricing methods. Compared to
keyword auctions, a search engine can obtain an increased expected revenue by
selling an ad option.
Comment: Chen, Bowei, Wang, Jun, Cox, Ingemar J., and Kankanhalli, Mohan S.
(2015) Multi-keyword multi-click advertisement option contracts for sponsored
search. ACM Transactions on Intelligent Systems and Technology, 7(1),
pp. 1-29. ISSN: 2157-690
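A heavily simplified valuation of such a contract can be sketched with Monte Carlo. Everything here is an assumption made for illustration: independent geometric Brownian motion spot CPCs per keyword, exercise of all clicks at maturity on the single most favourable keyword, and risk-neutral discounting; the paper's actual model and exercise rules differ in detail.

```python
import numpy as np

def price_ad_option(s0, sigma, fixed_cpc, m_clicks, r, T, n_paths, rng):
    """s0, sigma, fixed_cpc: one entry per keyword (the multi-asset part);
    m_clicks is the number of exercisable clicks (the multi-exercise part)."""
    s0, sigma, fixed_cpc = (np.asarray(a, dtype=float)
                            for a in (s0, sigma, fixed_cpc))
    z = rng.standard_normal((n_paths, len(s0)))
    # Risk-neutral GBM terminal spot CPC for each keyword on each path.
    s_T = s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    # Saving per click if exercised on each keyword; the advertiser exercises
    # only when the spot CPC exceeds the fixed CPC, on the best keyword.
    best_saving = np.maximum(s_T - fixed_cpc, 0.0).max(axis=1)
    return np.exp(-r * T) * m_clicks * best_saving.mean()
```

In this simplified setting, tying the option price to the discounted expected saving is what rules out arbitrage between the option market and the spot keyword auctions: if the upfront fee were lower, buying the option and reselling clicks at spot prices would yield a riskless expected profit.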
Evaluating Content-centric vs User-centric Ad Affect Recognition
Despite the fact that advertisements (ads) often include strongly emotional
content, very little work has been devoted to affect recognition (AR) from ads.
This work explicitly compares content-centric and user-centric ad AR
methodologies, and evaluates the impact of enhanced AR on computational
advertising via a user study. Specifically, we (1) compile an affective ad
dataset capable of evoking coherent emotions across users; (2) explore the
efficacy of content-centric convolutional neural network (CNN) features for
encoding emotions, and show that CNN features outperform low-level emotion
descriptors; (3) examine user-centered ad AR by analyzing Electroencephalogram
(EEG) responses acquired from eleven viewers, and find that EEG signals encode
emotional information better than content descriptors; (4) investigate the
relationship between objective AR and subjective viewer experience while
watching an ad-embedded online video stream based on a study involving 12
users. To our knowledge, this is the first work to (a) expressly compare
user-centered vs content-centered AR for ads, and (b) study the relationship
between the modeling of ad emotions and its impact on a real-life advertising
application.
Comment: Accepted at the ACM International Conference on Multimodal
Interaction (ICMI) 201