Sentiment Recognition in Egocentric Photostreams
Lifelogging is the process of collecting a rich source of information about
people's daily lives. In this paper, we introduce the problem of sentiment
analysis in egocentric events, focusing on whether the moments captured in the
images recall positive, neutral, or negative feelings to the observer. We
propose a method for classifying the sentiment of egocentric pictures based on
global and semantic image features extracted by Convolutional Neural Networks.
We carried out experiments on an egocentric dataset, which we organized into 3
classes on the basis of the sentiment recalled to the user (positive, negative,
or neutral).
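The abstract above describes classifying CNN-extracted image features into three sentiment classes. As a hedged, minimal sketch (not the authors' code), the following stands in random vectors for the CNN features and fits a plain softmax classifier over the three classes; all names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Illustrative stand-ins: in the paper, X would hold global and semantic
# image features extracted by a CNN; here we use random vectors.
rng = np.random.default_rng(0)
N, D, C = 90, 64, 3                      # images, feature dim, classes
X = rng.normal(size=(N, D))              # placeholder CNN feature vectors
y = rng.integers(0, C, size=N)           # labels: 0=negative, 1=neutral, 2=positive

W = np.zeros((D, C))
for _ in range(200):                     # plain gradient descent on cross-entropy
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)    # softmax probabilities
    grad = X.T @ (p - np.eye(C)[y]) / N  # gradient of mean cross-entropy
    W -= 0.5 * grad

pred = (X @ W).argmax(axis=1)
accuracy = (pred == y).mean()            # training accuracy on the toy data
```

In practice one would replace the random features with activations from a pretrained CNN and evaluate on a held-out split rather than the training set.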
Improving Distributed Representations of Tweets - Present and Future
Unsupervised representation learning for tweets is an important research
field that helps solve several business applications such as sentiment
analysis, hashtag prediction, paraphrase detection, and microblog ranking. A
good tweet representation learning model must handle the idiosyncratic nature
of tweets, which poses several challenges such as short length, informal words,
unusual grammar, and misspellings. However, there is a lack of prior work
surveying representation learning models with a focus on tweets. In this work,
we organize the models based on their objective functions, which aids
understanding of the literature. We also provide interesting future directions
that we believe are fruitful for advancing this field by building high-quality
tweet representation learning models.
Comment: To be presented in the Student Research Workshop (SRW) at ACL 201
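The survey above organizes tweet representation models by their training objectives. As a small illustrative sketch of that organizing idea (this grouping is my own example, not the paper's taxonomy), one could index well-known sentence/document models by objective:

```python
# Hypothetical grouping of representation-learning models by training
# objective, illustrating the survey's organizing principle.
models_by_objective = {
    "predict words in context": ["Paragraph Vector (doc2vec)"],
    "predict neighboring sentences": ["Skip-Thought"],
    "predict hashtags": ["Tweet2Vec"],
}

def models_with_objective(objective):
    """Return the models trained with the given objective, or [] if none."""
    return models_by_objective.get(objective, [])
```

Such a mapping makes it easy to compare models that share an objective and to spot objectives that lack tweet-specific variants.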
Research on multi-modal sentiment feature learning of social media content
Social media has become the main platform for public opinion exchange and
information transmission in modern society. Sentiment analysis of social media
has great application value for public opinion monitoring, commercial product
guidance, stock market prediction, and more. However, the multimodal nature of
social media content (text, images, etc.) confronts traditional single-modality
sentiment analysis methods with many limitations, and multimodal sentiment
analysis techniques have significant theoretical value for the understanding
and analysis of cross-media content. The key problem distinguishing multimodal
sentiment analysis from single-modality methods is how to jointly exploit
sentiment information of different modalities to obtain the overall sentiment
polarity, while also accounting for the properties of each individual modality
in sentiment expression. To address this problem, exploiting the correlation
and hierarchical abstraction that multimodal social media content exhibits in
sentiment expression, this work proposes a set of multimodal sentiment feature
learning and fusion methods for social media that realize multimodal sentiment
analysis; the main contents and contributions...
Degree: Master of Engineering. School and major: School of Information Science
and Technology, Pattern Recognition and Intelligent Systems. Student ID:
3152013115327
A Quantum-Inspired Multimodal Sentiment Analysis Framework
Multimodal sentiment analysis aims to capture the diversified sentiment information implied in data of different modalities (e.g., an image associated with a textual description or a set of textual labels). The key challenge is rooted in the "semantic gap" between different low-level content features and high-level semantic information. Existing approaches generally utilize a combination of multimodal features in a somewhat heuristic way; how to effectively employ and combine information from multiple sources remains an important yet largely unsolved problem. To address it, in this paper we propose a Quantum-inspired Multimodal Sentiment Analysis (QMSA) framework. The framework consists of a Quantum-inspired Multimodal Representation (QMR) model, which aims to fill the "semantic gap" and model the correlations between different modalities via a density matrix, and a Multimodal decision Fusion strategy inspired by Quantum Interference (QIMF) in the double-slit experiment, in which the sentiment label is analogous to a photon and the data modalities are analogous to slits. Extensive experiments are conducted on two large-scale datasets collected from the Getty Images and Flickr photo-sharing platforms. The experimental results show that our approach significantly outperforms a wide range of baselines and state-of-the-art methods.
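The density-matrix idea mentioned in the abstract can be sketched concretely. The following is a minimal illustration (assumed details, not the QMR model itself): each modality contributes a unit "state" vector, and a weighted mixture of their outer products forms a density matrix, which is by construction symmetric, positive semi-definite, and unit-trace.

```python
import numpy as np

# Placeholder modality embeddings; in QMR these would come from learned
# text and image representations. Dimensions and weights are assumptions.
rng = np.random.default_rng(1)
d = 8
text_vec = rng.normal(size=d)
image_vec = rng.normal(size=d)

# Normalize each modality to a unit "state" vector.
states = [v / np.linalg.norm(v) for v in (text_vec, image_vec)]
weights = [0.6, 0.4]                     # assumed mixture weights, summing to 1

# Density matrix: weighted mixture of rank-1 outer products.
rho = sum(w * np.outer(s, s) for w, s in zip(weights, states))

unit_trace = np.allclose(np.trace(rho), 1.0)   # Tr(rho) = sum of weights = 1
symmetric = np.allclose(rho, rho.T)            # Hermitian for real vectors
```

Off-diagonal entries of such a matrix encode cross-modality correlations, which is what makes the representation richer than a simple feature concatenation.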