119,605 research outputs found
Active word learning under uncertain input conditions
This paper presents an analysis of phoneme durations of emotional speech in two languages: Dutch and Korean. The analyzed corpus of emotional speech has been specifically developed for the purpose of cross-linguistic comparison, and is more balanced than any similar corpus available so far: a) it contains expressions by both Dutch and Korean actors and is based on judgments by both Dutch and Korean listeners; b) the same elicitation technique and recording procedure were used for recordings of both languages; and c) the phonetics of the carrier phrase were constructed to be permissible in both languages. The carefully controlled phonetic content of the carrier phrase allows for analysis of the role of specific phonetic features, such as phoneme duration, in emotional expression in Dutch and Korean. In this study, the joint effect of language and emotion on phoneme duration is presented.
Dealing with uncertain input in word learning
In this paper we investigate a computational model of word learning that is embedded in a cognitively and ecologically plausible framework. Multi-modal stimuli from four different speakers form a varied source of experience. The model incorporates active learning, attention to the communicative setting, and the clarity of the visual scene. The model's ability to learn associations between speech utterances and visual concepts is evaluated during training to investigate the influence of active learning under conditions of uncertain input. The results show the importance of shared attention in word learning and the model's robustness against noise.
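A common way to model the utterance–concept association learning described above is cross-situational learning over co-occurrence counts. The sketch below is a minimal illustration of that general idea, not the paper's actual model; the episode format, the `shared_attention` flag, and the attention weighting are assumptions made for this example.

```python
from collections import defaultdict

def cross_situational_learner(episodes):
    """Accumulate word-referent co-occurrence evidence across episodes.

    Each episode is (words, referents, shared_attention); this format is a
    hypothetical stand-in for the paper's multi-modal stimuli.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents, shared_attention in episodes:
        # Assumption for illustration: shared attention doubles the
        # evidential weight of a co-occurrence.
        weight = 2 if shared_attention else 1
        for w in words:
            for r in referents:
                counts[w][r] += weight
    # Map each word to its most strongly associated referent.
    return {w: max(refs, key=refs.get) for w, refs in counts.items()}

episodes = [
    (["ball"], ["BALL", "DOG"], True),
    (["dog"],  ["DOG"],         True),
    (["ball"], ["BALL", "CUP"], False),
]
lexicon = cross_situational_learner(episodes)
print(lexicon["ball"])  # prints "BALL": it co-occurs in both "ball" episodes
```

Across episodes the spurious pairings (ball/DOG, ball/CUP) each occur once, while ball/BALL accumulates evidence, which is the core robustness-to-noise property the abstract evaluates.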
Facial Expression Recognition from World Wild Web
Recognizing facial expressions in a wild setting has remained a challenging task in computer vision. The World Wide Web is a good source of facial images, most of which are captured in uncontrolled conditions. In fact, the Internet is a World Wild Web of facial images with expressions. This paper presents the results of a new study on collecting, annotating, and analyzing wild facial expressions from the web. Three search engines were queried using 1250 emotion-related keywords in six different languages, and the retrieved images were mapped by two annotators to six basic expressions and neutral. Deep neural networks and noise modeling were used in three different training scenarios to determine how accurately facial expressions can be recognized when trained on noisy images collected from the web using query terms (e.g. "happy face", "laughing man", etc.). The results of our experiments show that deep neural networks can recognize wild facial expressions with an accuracy of 82.12%.
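One crude way to reduce label noise in web-collected data like this is to keep only the images on which the two annotators agree. The sketch below illustrates that filtering idea only; it is not the paper's actual noise-modeling method, and the data format and function name are assumptions.

```python
def filter_by_agreement(annotations):
    """Keep only images whose two annotators assigned the same label.

    `annotations` maps image id -> (label_a, label_b); this is a
    hypothetical format for illustration, not the paper's dataset schema.
    """
    kept = {img: a for img, (a, b) in annotations.items() if a == b}
    dropped = len(annotations) - len(kept)
    return kept, dropped

annotations = {
    "img1": ("happy", "happy"),
    "img2": ("sad", "neutral"),   # disagreement: discarded as likely noise
    "img3": ("surprise", "surprise"),
}
kept, dropped = filter_by_agreement(annotations)
print(sorted(kept))  # prints ['img1', 'img3']
print(dropped)       # prints 1
```

Agreement filtering trades dataset size for label quality; the noise-modeling scenarios in the abstract instead aim to learn from the noisy labels directly.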
Information Extraction, Data Integration, and Uncertain Data Management: The State of The Art
Information extraction, data integration, and uncertain data management are different areas of research that have received considerable attention over the last two decades. Much research has tackled these areas individually. However, information extraction systems should be integrated with data integration methods to make use of the extracted information. Handling uncertainty in the extraction and integration process is an important issue for enhancing the quality of the data in such integrated systems. This article presents the state of the art of the mentioned areas of research, shows their common ground, and discusses how to integrate information extraction and data integration under the umbrella of uncertainty management.
Learning how to Active Learn: A Deep Reinforcement Learning Approach
Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate. This is usually done using heuristic selection methods; however, the effectiveness of such methods is limited, and the performance of heuristics varies between datasets. To address these shortcomings, we introduce a novel formulation by reframing active learning as a reinforcement learning problem and explicitly learning a data selection policy, where the policy takes the role of the active learning heuristic. Importantly, our method allows the selection policy learned using simulation on one language to be transferred to other languages. We demonstrate our method using cross-lingual named entity recognition, observing uniform improvements over traditional active learning.

Comment: To appear in EMNLP 2017.
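For context, the heuristic baseline that this paper's learned policy replaces can be sketched as pool-based margin (uncertainty) sampling: pick the unlabeled examples the current classifier is least sure about. Everything below (the toy probabilities and function names) is illustrative, not the paper's method.

```python
def margin_uncertainty(probs):
    """Uncertainty of one prediction: 1 minus the top-1 vs top-2 margin."""
    top = sorted(probs, reverse=True)
    return 1.0 - (top[0] - top[1])

def select_for_annotation(pool_probs, budget):
    """The heuristic selection step a learned policy would replace:
    pick the `budget` most uncertain unlabeled examples."""
    ranked = sorted(pool_probs,
                    key=lambda i: margin_uncertainty(pool_probs[i]),
                    reverse=True)
    return ranked[:budget]

# Toy class-probability outputs of the current classifier on the pool.
pool_probs = {
    "x1": [0.95, 0.03, 0.02],  # confident -> low annotation priority
    "x2": [0.51, 0.49, 0.00],  # near tie  -> most uncertain
    "x3": [0.70, 0.20, 0.10],
}
print(select_for_annotation(pool_probs, 2))  # prints ['x2', 'x3']
```

A reinforcement-learning formulation would instead train a policy whose input describes the candidate examples and whose reward is the downstream classifier's accuracy, so the selection rule is learned rather than fixed in advance.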
Online Influence Maximization (Extended Version)
Social networks are commonly used for marketing purposes. For example, free samples of a product can be given to a few influential social network users (or "seed nodes"), with the hope that they will convince their friends to buy it. One way to formalize marketers' objective is through influence maximization (or IM), whose goal is to find the best seed nodes to activate under a fixed budget, so that the number of people who are influenced in the end is maximized. Recent solutions to IM rely on the probability that one user influences another. However, this probability information may be unavailable or incomplete. In this paper, we study IM in the absence of complete information on influence probabilities. We call this problem Online Influence Maximization (OIM), since we learn influence probabilities at the same time we run influence campaigns. To solve OIM, we propose a multiple-trial approach, where (1) some seed nodes are selected based on existing influence information; (2) an influence campaign is started with these seed nodes; and (3) users' feedback is used to update the influence information. We adopt the Explore-Exploit strategy, which can select seed nodes using either the current influence probability estimates (exploit) or the confidence bounds on those estimates (explore). Any existing IM algorithm can be used in this framework. We also develop an incremental algorithm that can significantly reduce the overhead of handling users' feedback information. Our experiments show that our solution is more effective than traditional IM methods under partial information.

Comment: 13 pages. To appear in KDD 2015. Extended version.
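The three-step multiple-trial loop above can be sketched with a UCB-style optimistic estimate of each edge's influence probability. This is a toy model under strong assumptions (a flat edge list, campaigns that only sample a seed's outgoing edges, a single seed per round); the function names and graph are hypothetical, not the paper's algorithm.

```python
import math
import random

def ucb_estimate(successes, trials, total_rounds):
    """Optimistic (explore) estimate of an edge's influence probability."""
    if trials == 0:
        return 1.0  # untried edges are maximally optimistic
    mean = successes / trials
    bonus = math.sqrt(2 * math.log(total_rounds + 1) / trials)
    return min(1.0, mean + bonus)

def run_oim(edges, true_probs, rounds, seed_budget=1):
    """Minimal multiple-trial loop: select seeds, run campaign, update."""
    stats = {e: [0, 0] for e in edges}  # edge -> [successes, trials]
    random.seed(0)
    for t in range(rounds):
        # (1) select seeds by the sum of optimistic edge estimates
        score = lambda u: sum(ucb_estimate(*stats[(a, b)], t)
                              for (a, b) in edges if a == u)
        sources = {a for a, _ in edges}
        seeds = sorted(sources, key=score, reverse=True)[:seed_budget]
        # (2) run the campaign; (3) update estimates from observed feedback
        for (a, b) in edges:
            if a in seeds:
                stats[(a, b)][1] += 1
                stats[(a, b)][0] += random.random() < true_probs[(a, b)]
    return stats

edges = [("u", "v"), ("u", "w"), ("z", "v")]
true_probs = {("u", "v"): 0.8, ("u", "w"): 0.6, ("z", "v"): 0.1}
stats = run_oim(edges, true_probs, rounds=200)
```

With these toy numbers, node "u" (two promising outgoing edges) is repeatedly chosen as the seed, and the success/trial counters converge toward the unknown true probabilities, which is the learning-while-campaigning behavior OIM formalizes.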