Automatic Understanding of Image and Video Advertisements
There is more to images than their objective physical content: for example,
advertisements are created to persuade a viewer to take a certain action. We
propose the novel problem of automatic advertisement understanding. To enable
research on this problem, we create two datasets: an image dataset of 64,832
image ads, and a video dataset of 3,477 ads. Our data contains rich annotations
encompassing the topic and sentiment of the ads, questions and answers
describing what actions the viewer is prompted to take and the reasoning that
the ad presents to persuade the viewer ("What should I do according to this ad,
and why should I do it?"), and symbolic references ads make (e.g. a dove
symbolizes peace). We also analyze the most common persuasive strategies ads
use, and the capabilities that computer vision systems should have to
understand these strategies. We present baseline classification results for
several prediction tasks, including automatically answering questions about the
messages of the ads.
Comment: To appear in CVPR 2017; data available on http://cs.pitt.edu/~kovashka/ad
Finding Answers from the Word of God: Domain Adaptation for Neural Networks in Biblical Question Answering
Question answering (QA) has significantly benefitted from deep learning
techniques in recent years. However, domain-specific QA remains a challenge due
to the significant amount of data required to train a neural network. This
paper studies the answer sentence selection task in the Bible domain and answer
questions by selecting relevant verses from the Bible. For this purpose, we
create a new dataset BibleQA based on bible trivia questions and propose three
neural network models for our task. We pre-train our models on a large-scale QA
dataset, SQuAD, and investigate the effect of transferring weights on model
accuracy. Furthermore, we also measure the model accuracies with different
answer context lengths and different Bible translations. We find that transfer
learning yields a noticeable improvement in model accuracy. We achieve
relatively good results with shorter context lengths, whereas longer context
lengths decrease model accuracy. We also find that using a more modern Bible
translation in the dataset has a positive effect on the task.
Comment: The paper has been accepted at IJCNN 201
Soft Seeded SSL Graphs for Unsupervised Semantic Similarity-based Retrieval
Semantic similarity-based retrieval plays an increasingly important role in
many IR systems such as modern web search, question answering, and similar
document retrieval. Improvements in the retrieval of semantically similar
content matter greatly to applications like Quora, Stack Overflow, and Siri.
We propose a novel unsupervised model for semantic similarity-based
content retrieval, where we construct semantic flow graphs for each query, and
introduce the concept of "soft seeding" in graph based semi-supervised learning
(SSL) to convert this into an unsupervised model.
We demonstrate the effectiveness of our model on an equivalent question
retrieval problem on the Stack Exchange QA dataset, where our unsupervised
approach significantly outperforms the state-of-the-art unsupervised models,
and produces comparable results to the best supervised models. Our research
provides a method to tackle semantic similarity based retrieval without any
training data, and allows seamless extension to different domain QA
communities, as well as to other semantic equivalence tasks.
Comment: Published in Proceedings of the 2017 ACM Conference on Information
and Knowledge Management (CIKM '17)
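The "soft seeding" idea above, giving seed nodes a confidence-weighted prior rather than clamping them to hard labels, can be illustrated with a toy graph-based label-propagation loop. The graph, edge weights, and update rule below are simplified assumptions of ours, not the paper's actual construction.

```python
# Toy label propagation with "soft" seeds: every node blends the signal
# propagated from its neighbours with its own seed prior, instead of
# clamping seed nodes to a hard label. Graph and update rule are
# illustrative assumptions, not the paper's semantic flow graphs.

def propagate(adj, seed_scores, alpha=0.8, iters=50):
    """adj: {node: [(neighbour, weight), ...]}; seed_scores: soft priors in [0, 1]."""
    scores = dict(seed_scores)
    for _ in range(iters):
        new_scores = {}
        for node, nbrs in adj.items():
            total_w = sum(w for _, w in nbrs)
            nbr_avg = sum(scores[n] * w for n, w in nbrs) / total_w
            # Soft seeding: blend propagated signal with the node's prior.
            new_scores[node] = alpha * nbr_avg + (1 - alpha) * seed_scores[node]
        scores = new_scores
    return scores

# Query node "q" seeds the graph with full confidence; candidate questions
# "a".."c" start at 0 and acquire relevance through their connections to "q".
adj = {
    "q": [("a", 1.0), ("b", 0.5)],
    "a": [("q", 1.0), ("c", 0.7)],
    "b": [("q", 0.5)],
    "c": [("a", 0.7)],
}
seeds = {"q": 1.0, "a": 0.0, "b": 0.0, "c": 0.0}
relevance = propagate(adj, seeds)
```

Candidates directly connected to the query end up with higher relevance than those reachable only through intermediaries, which is the behaviour an equivalent-question retriever needs.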
On the Generation of Medical Question-Answer Pairs
Question answering (QA) has achieved promising progress recently. However,
answering a question in real-world scenarios like the medical domain is still
challenging, due to the requirement of external knowledge and the insufficient
quantity of high-quality training data. In light of these challenges, we
study the task of generating medical QA pairs in this paper. With the insight
that each medical question can be considered as a sample from the latent
distribution of questions given answers, we propose an automated medical QA
pair generation framework, consisting of an unsupervised key phrase detector
that explores unstructured material for validity, and a generator that involves
a multi-pass decoder to integrate structural knowledge for diversity. A series
of experiments was conducted on a real-world dataset collected from the
National Medical Licensing Examination of China. Both automatic evaluation and
human annotation demonstrate the effectiveness of the proposed method. Further
investigation shows that, by incorporating the generated QA pairs for training,
significant improvement in terms of accuracy can be achieved for the
examination QA system.
Comment: AAAI 202
The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race
Recent studies in social media spam and automation provide anecdotal evidence
of the rise of a new generation of spambots, the so-called social
spambots. Here, for the first time, we extensively study this novel phenomenon
on Twitter and we provide quantitative evidence that a paradigm-shift exists in
spambot design. First, we measure Twitter's current capabilities for detecting
the new social spambots. Next, we assess human performance in
discriminating between genuine accounts, social spambots, and traditional
spambots. Then, we benchmark several state-of-the-art techniques proposed by
the academic literature. Results show that neither Twitter, nor humans, nor
cutting-edge applications are currently capable of accurately detecting the new
social spambots. Our results call for new approaches capable of turning the
tide in the fight against this rising phenomenon. We conclude by reviewing the
latest literature on spambot detection and we highlight an emerging common
research trend based on the analysis of collective behaviors. Insights derived
from both our extensive experimental campaign and survey shed light on the most
promising directions of research and lay the foundations for the arms race
against the novel social spambots. Finally, to foster research on this novel
phenomenon, we make publicly available to the scientific community all the
datasets used in this study.
Comment: To appear in Proc. 26th WWW, 2017, Companion Volume (Web Science
Track, Perth, Australia, 3-7 April, 2017)
Survey on Evaluation Methods for Dialogue Systems
In this paper we survey the methods and concepts developed for the evaluation
of dialogue systems. Evaluation is a crucial part during the development
process. Often, dialogue systems are evaluated by means of human evaluations
and questionnaires. However, this tends to be very cost- and time-intensive.
Thus, much work has gone into finding methods that reduce the involvement of
human labour. In this survey, we present the main concepts and methods. For
this, we differentiate between the various classes of dialogue systems
(task-oriented dialogue systems, conversational dialogue systems, and
question-answering dialogue systems). We cover each class by introducing the
main technologies developed for the dialogue systems and then by presenting
the evaluation methods for that class.
Look before you Hop: Conversational Question Answering over Knowledge Graphs Using Judicious Context Expansion
Fact-centric information needs are rarely one-shot; users typically ask follow-up questions to explore a topic. In such a conversational setting, the user's inputs are often incomplete, with entities or predicates left out and with ungrammatical phrasing. This poses a huge challenge to question answering (QA) systems that typically rely on cues in full-fledged interrogative sentences. As a solution, we develop CONVEX: an unsupervised method that can answer incomplete questions over a knowledge graph (KG) by maintaining conversation context using entities and predicates seen so far and automatically inferring missing or ambiguous pieces for follow-up questions. The core of our method is a graph exploration algorithm that judiciously expands a frontier to find candidate answers for the current question. To evaluate CONVEX, we release ConvQuestions, a crowdsourced benchmark with 11,200 distinct conversations from five different domains. We show that CONVEX: (i) adds conversational support to any stand-alone QA system, and (ii) outperforms state-of-the-art baselines and question completion strategies.
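The context-maintenance idea above, keeping the entities seen so far and expanding a frontier around them in the KG to resolve an incomplete follow-up, can be sketched with a toy example. The miniature graph, one-hop expansion, and scoring rule are simplified assumptions of ours, not the actual CONVEX algorithm.

```python
# Toy sketch of conversational context expansion: keep entities seen so far
# as the conversation context, expand one hop around them in the knowledge
# graph, and score candidate answers by connectivity plus a bonus when the
# connecting predicate mentions a cue word from the (incomplete) follow-up.
# The KG and scoring are illustrative, not the paper's method.

# A tiny KG as (subject, predicate, object) triples.
kg = [
    ("Avatar", "directed_by", "James Cameron"),
    ("Avatar", "released", "2009"),
    ("James Cameron", "born_in", "Canada"),
    ("Titanic", "directed_by", "James Cameron"),
]

def neighbours(node):
    """All nodes one hop away from `node`, paired with the linking predicate."""
    out = []
    for s, p, o in kg:
        if s == node:
            out.append((p, o))
        if o == node:
            out.append((p, s))
    return out

def answer_followup(context, cue):
    """Expand the frontier one hop from every context entity; prefer
    candidates whose connecting predicate mentions the follow-up's cue."""
    scores = {}
    for node in context:
        for pred, cand in neighbours(node):
            if cand in context:
                continue
            bonus = 1 if cue in pred else 0
            scores[cand] = scores.get(cand, 0) + 1 + bonus
    return max(scores, key=scores.get)

# Turn 1 establishes the context entity; turn 2 is an incomplete follow-up
# ("who directed it?") whose subject must come from the context.
context = {"Avatar"}
answer = answer_followup(context, "directed")
context.add(answer)  # grow the context for the next turn
```

Adding each answer back into the context is what lets later follow-ups ("where was he born?") resolve against entities introduced earlier in the conversation.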