6,785 research outputs found

    A Multi-modal Approach to Fine-grained Opinion Mining on Video Reviews

    Despite the recent advances in opinion mining for written reviews, few works have tackled the problem on other sources of reviews. In light of this issue, we propose a multi-modal approach for mining fine-grained opinions from video reviews that is able to determine the aspects of the item under review that are being discussed and the sentiment orientation towards them. Our approach works at the sentence level without the need for time annotations and uses features derived from the audio, video and language transcriptions of its contents. We evaluate our approach on two datasets and show that leveraging the video and audio modalities consistently provides increased performance over text-only baselines, providing evidence that these extra modalities are key to better understanding video reviews. Comment: Second Grand Challenge and Workshop on Multimodal Language, ACL 2020
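    The abstract describes the multimodal setup only at a high level. As a rough illustration of sentence-level late fusion (not the authors' architecture), the following PyTorch sketch projects per-sentence text, audio, and video features into a shared space, concatenates them, and feeds separate aspect and sentiment heads; all dimensions, layer choices, and names are hypothetical.

import torch
import torch.nn as nn

class LateFusionOpinionModel(nn.Module):
    """Concatenates projected per-sentence text, audio, and video features
    and predicts an aspect label and a sentiment label for each sentence."""

    def __init__(self, text_dim, audio_dim, video_dim, hidden_dim,
                 n_aspects, n_sentiments):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        self.aspect_head = nn.Linear(3 * hidden_dim, n_aspects)
        self.sentiment_head = nn.Linear(3 * hidden_dim, n_sentiments)

    def forward(self, text_feat, audio_feat, video_feat):
        fused = torch.cat([torch.relu(self.text_proj(text_feat)),
                           torch.relu(self.audio_proj(audio_feat)),
                           torch.relu(self.video_proj(video_feat))], dim=-1)
        return self.aspect_head(fused), self.sentiment_head(fused)

# Example with hypothetical feature sizes for a batch of 4 sentences.
model = LateFusionOpinionModel(text_dim=768, audio_dim=74, video_dim=35,
                               hidden_dim=128, n_aspects=10, n_sentiments=3)
aspect_logits, sentiment_logits = model(
    torch.randn(4, 768), torch.randn(4, 74), torch.randn(4, 35))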

    ํ† ํฐ ๋‹จ์œ„ ๋ถ„๋ฅ˜๋ชจ๋ธ์„ ์œ„ํ•œ ์ค‘์š” ํ† ํฐ ํฌ์ฐฉ ๋ฐ ์‹œํ€€์Šค ์ธ์ฝ”๋” ์„ค๊ณ„ ๋ฐฉ๋ฒ•

    Ph.D. dissertation -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2022. Advisor: ์ •๊ต๋ฏผ.
    With the development of the internet, a great volume of data has accumulated over time, so handling long sequential data has become a core problem in web services. For example, streaming services such as YouTube, Netflix and TikTok use a user's viewing-history sequence to recommend videos that the user may like. Such systems represent each viewed video as an item or token and predict which item or token will be viewed next. These tasks are defined as Token-Level Classification (TLC) tasks: given a sequence of tokens, TLC identifies the labels of the tokens in the required portion of the sequence. As described above, TLC can be applied to various recommendation systems. In addition, most Natural Language Processing (NLP) tasks can also be formulated as TLC problems. For example, a sentence and each word within it can be expressed as a token-level sequence; information extraction, in particular, can be cast as a TLC task that decides whether a specific word span in a sentence carries the target information. TLC datasets are characteristically very sparse and long, so extracting only the important information from a sequence and encoding it properly is a central problem. In this thesis, we propose methods that address two questions for TLC in recommendation systems and information extraction: 1) how to capture important tokens from a token sequence, and 2) how to encode a token sequence into a model. Since deep neural networks (DNNs) have shown outstanding performance in various web application tasks, we design RNN- and Transformer-based models for recommendation systems and information extraction. For recommendation, we design a BART-based system that captures the important portions of a token sequence through self-attention and considers both bidirectional and left-to-right information. For information extraction, we present relation-network-based models that focus on important parts such as the opinion target and its neighboring words.

    Deep Memory Networks for Attitude Identification

    We consider the task of identifying attitudes towards a given set of entities from text. Conventionally, this task is decomposed into two separate subtasks: target detection, which identifies whether each entity is mentioned in the text, either explicitly or implicitly, and polarity classification, which classifies the exact sentiment towards an identified entity (the target) as positive, negative, or neutral. Instead, we show that attitude identification can be solved with an end-to-end machine learning architecture in which the two subtasks are interleaved by a deep memory network. In this way, signals produced in target detection provide clues for polarity classification, and conversely, the predicted polarity provides feedback to the identification of targets. Moreover, the treatments of the targets also influence each other: the learned representations may share the same semantics for some targets but vary for others. The proposed deep memory network, AttNet, outperforms methods that do not consider the interactions between the subtasks or those among the targets, including conventional machine learning methods and state-of-the-art deep learning models. Comment: Accepted to WSDM'17
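    As a loose, single-hop illustration of how a target query can attend over a word memory and share the attended representation between target detection and polarity classification (a much-simplified stand-in for the multi-hop AttNet; all sizes and names are hypothetical):

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttitudeMemoryNet(nn.Module):
    """Single attention hop of a target query over the word memory; the
    attended representation is shared by the detection and polarity heads."""

    def __init__(self, vocab_size, n_targets, d=100):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, d)
        self.target_embed = nn.Embedding(n_targets, d)
        self.hop = nn.Linear(d, d)
        self.detect_head = nn.Linear(d, 2)        # target mentioned or not
        self.polarity_head = nn.Linear(2 * d, 3)  # positive / negative / neutral

    def forward(self, token_ids, target_ids):
        memory = self.word_embed(token_ids)                           # (B, L, d)
        query = self.target_embed(target_ids)                         # (B, d)
        scores = torch.bmm(memory, query.unsqueeze(-1)).squeeze(-1)   # (B, L)
        attn = F.softmax(scores, dim=-1)
        context = torch.bmm(attn.unsqueeze(1), memory).squeeze(1)     # (B, d)
        updated = torch.relu(self.hop(context) + query)
        # Both heads read the same attended representation, so detection
        # and polarity classification share signals.
        return self.detect_head(updated), self.polarity_head(
            torch.cat([updated, context], dim=-1))

# Example with hypothetical sizes: 2 texts of 30 tokens, one target each.
model = AttitudeMemoryNet(vocab_size=5000, n_targets=20)
detect_logits, polarity_logits = model(
    torch.randint(0, 5000, (2, 30)), torch.randint(0, 20, (2,)))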

    Weakly supervised aspect extraction for domain-specific texts

    Aspect extraction, identifying aspects of text segments from a pre-defined set of aspects, is one of the keystones of text understanding. It benefits numerous applications, including sentiment analysis and product review summarization. Most existing aspect extraction methods rely heavily on human-curated aspect annotations of massive text segments, making them expensive to apply in specific domains. Recent attempts that leverage clustering methods can alleviate this annotation effort, but they require domain-specific knowledge and effort to further filter, aggregate, and align the clustering results to the desired aspects. In this paper, we therefore explore extracting aspects from domain-specific raw texts with very limited supervision: only a few user-provided seed words per aspect. Specifically, our proposed neural model is equipped with multi-head attention and self-training. The multi-head attention is learned from the seed words to ensure that aspect-related words in text segments are weighted more highly than unrelated ones. The self-training mechanism provides additional pseudo labels beyond the limited supervision. Extensive experiments on real-world datasets demonstrate the superior performance of our proposed framework, as well as the effectiveness of both the attention module and the self-training mechanism. Case studies on the attention weights further shed light on the interpretability of our aspect extraction results.
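    The abstract outlines seed-word supervision, attention, and self-training without implementation detail. The sketch below illustrates only the self-training half, with an off-the-shelf TF-IDF plus logistic-regression classifier standing in for the paper's attention model; the seed-matching rule, the confidence threshold, and all names are assumptions.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def self_train_aspects(segments, seed_words, rounds=3, threshold=0.9):
    """Weak supervision from seed words, then self-training: confident
    predictions on unlabeled segments are promoted to pseudo labels."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(segments)

    # Initial weak labels: the first aspect whose seed words occur in the segment.
    labels = np.full(len(segments), -1)
    for i, seg in enumerate(segments):
        for a, words in enumerate(seed_words.values()):
            if any(w in seg.lower() for w in words):
                labels[i] = a
                break

    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        mask = labels != -1
        if len(set(labels[mask])) < 2:
            break  # need at least two aspects to train
        clf.fit(X[mask], labels[mask])
        proba = clf.predict_proba(X)
        # Promote confident predictions on still-unlabeled segments.
        confident = (proba.max(axis=1) >= threshold) & ~mask
        labels[confident] = clf.classes_[proba[confident].argmax(axis=1)]
    return clf, labels, list(seed_words)

# Hypothetical seed lexicon and unlabeled review segments.
seeds = {"battery": ["battery", "charge"], "screen": ["screen", "display"]}
docs = ["the battery drains fast", "lovely display", "screen is too dim",
        "needs a charge every day", "fits nicely in the hand"]
clf, labels, aspects = self_train_aspects(docs, seeds)
print([aspects[l] if l != -1 else None for l in labels])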

    Emotion AI-Driven Sentiment Analysis: A Survey, Future Research Directions, and Open Issues

    A core use of natural language processing is to analyze the sentiment of an author from context. Sentiment analysis (SA) seeks to determine the underlying emotion in that context accurately. It has been used in several subject areas such as stock market prediction, social media data on product reviews, psychology, the judiciary, forecasting, disease prediction, agriculture, etc. Many researchers have worked in these areas and produced significant results. These outcomes are beneficial in their respective fields, as they help to grasp an overall summary in a short time. Furthermore, SA helps in understanding actual feedback shared across different platforms such as Amazon, TripAdvisor, etc. The main objective of this survey is to analyze some of the essential studies done so far and to provide an overview of SA models in the area of emotion AI-driven SA. In addition, this paper offers a review of ontology-based SA and lexicon-based SA, along with machine learning models used to analyze the sentiment of a given context. Furthermore, this work also discusses different neural network-based approaches for analyzing sentiment. Finally, these different approaches were analyzed with sample data collected from Twitter. Among the four approaches considered in each domain, the aspect-based ontology method produced 83% accuracy among the ontology-based SAs, the term frequency approach produced 85% accuracy in the lexicon-based analysis, and the support vector machine-based approach achieved 90% accuracy among the machine learning-based approaches. Acknowledgement: Ministry of Education (MOE), Taiwan.
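    As a concrete toy example of the "support vector machine-based approach" category the survey scores at 90% accuracy, a TF-IDF plus linear-SVM text classifier can be assembled with scikit-learn as below; the data, labels, and settings are illustrative and are not the survey's experimental setup.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy labeled tweets; a real study would use a properly annotated corpus.
texts = ["love this product, works great",
         "terrible service, very disappointed",
         "best purchase I made this year",
         "awful quality, do not buy"]
labels = ["positive", "negative", "positive", "negative"]

# Term-frequency (TF-IDF) features feeding a linear support vector machine.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["really great experience"]))  # e.g. ['positive']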

    Identifying sources of opinions with conditional random fields and extraction patterns

    Recent systems have been developed for sentiment classification, opinion recognition, and opinion analysis (e.g., detecting polarity and strength). We pursue another aspect of opinion analysis: identifying the sources of opinions, emotions, and sentiments. We view this problem as an information extraction task and adopt a hybrid approach that combines Conditional Random Fields (Lafferty et al., 2001) and a variation of AutoSlog (Riloff, 1996a). While CRFs model source identification as a sequence tagging task, AutoSlog learns extraction patterns. Our results show that the combination of these two methods performs better than either one alone. The resulting system identifies opinion sources with 79.3% precision and 59.5% recall using a head noun matching measure, and 81.2% precision and 60.6% recall using an overlap measure.
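    The paper combines CRFs with AutoSlog extraction patterns; the sketch below shows only the CRF sequence-tagging half, using the third-party sklearn-crfsuite package as a stand-in CRF implementation with hand-rolled token features. The feature set and the toy BIO-labeled sentence are assumptions, not the authors' setup.

import sklearn_crfsuite  # third-party package: pip install sklearn-crfsuite

def token_features(sent, i):
    """Minimal per-token features for BIO tagging of opinion sources."""
    word = sent[i]
    return {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),
        "prev.lower": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# Toy training sentence with BIO labels marking the opinion source span.
sents = [["The", "president", "criticized", "the", "plan", "."]]
tags = [["B-SOURCE", "I-SOURCE", "O", "O", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, tags)
print(crf.predict(X))  # expected: [['B-SOURCE', 'I-SOURCE', 'O', 'O', 'O', 'O']]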