
    Lifelong Learning CRF for Supervised Aspect Extraction

    This paper makes a focused contribution to supervised aspect extraction. It shows that if the system has performed aspect extraction in many past domains and retained the results as knowledge, a Conditional Random Field (CRF) can leverage this knowledge in a lifelong learning manner to extract aspects in a new domain markedly better than a traditional CRF that does not use this prior knowledge. The key innovation is that even after CRF training, the model can still improve its extraction with experience from its applications.
    Comment: Accepted at ACL 2017. arXiv admin note: text overlap with arXiv:1612.0794
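    Purely as an illustration of the idea above, and not the paper's implementation, the sketch below shows one way retained knowledge could feed a CRF: aspect terms extracted in past domains are kept, terms recurring across several domains are treated as reliable, and each token gets an extra feature flagging membership in that knowledge base. The domain data, the min_domains threshold, and all function names are assumptions.

        from collections import Counter

        # Hypothetical past-domain results: aspect terms already extracted from earlier domains.
        PAST_DOMAIN_ASPECTS = [
            {"battery", "screen", "price"},          # e.g. laptops
            {"battery", "lens", "zoom", "price"},    # e.g. cameras
            {"screen", "battery", "keyboard"},       # e.g. tablets
        ]

        def build_lifelong_knowledge(past_aspects, min_domains=2):
            """Keep terms extracted as aspects in at least `min_domains` past
            domains; cross-domain frequency is used as a reliability signal."""
            counts = Counter(t for domain in past_aspects for t in domain)
            return {t for t, c in counts.items() if c >= min_domains}

        def token_features(tokens, i, knowledge):
            """Feature dict for one token, in the style used by CRF toolkits
            (e.g. sklearn-crfsuite); `in_knowledge` is the lifelong feature."""
            word = tokens[i]
            return {
                "word.lower": word.lower(),
                "prev.lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
                "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
                "in_knowledge": word.lower() in knowledge,
            }

        if __name__ == "__main__":
            knowledge = build_lifelong_knowledge(PAST_DOMAIN_ASPECTS)
            sentence = "The battery life is great but the price is steep".split()
            feats = [token_features(sentence, i, knowledge) for i in range(len(sentence))]
            print(knowledge)
            print(feats[1])  # features for "battery", including in_knowledge=True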

    A CONSTRUCTIVE FINE-GRAINED OPINION REVIEW SERVICE FOR EXTRACTION

    An opinion target is an item about which users express their opinions, generally a noun or noun phrase. Opinion target and opinion word extraction are not new tasks in opinion mining. In our work we propose a method based on partially-supervised alignment, which treats the identification of opinion relations as an alignment process. Our work focuses on recognizing opinion relations between opinion targets and opinion words. Candidates with high confidence are extracted as opinion targets. Compared with traditional unsupervised alignment, the proposed model obtains higher precision because of its use of partial supervision. Our representation captures opinion relations more precisely, especially long-span relations, than earlier techniques built on nearest-neighbour rules.
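    The sketch below loosely illustrates the confidence-ranking step described above; it is not the proposed partially-supervised model. Candidate targets are scored by aggregating alignment probabilities to opinion words, and only high-confidence candidates are kept. The pairs, probabilities, and threshold are all invented for the example.

        from collections import defaultdict

        # Toy (candidate target, opinion word, alignment probability) triples that some
        # earlier alignment step would have proposed; the scores are purely illustrative.
        ALIGNED_PAIRS = [
            ("battery life", "great", 0.9),
            ("battery life", "long", 0.8),
            ("screen", "bright", 0.7),
            ("screen", "ok", 0.3),
            ("box", "nice", 0.2),
        ]

        def candidate_confidence(pairs):
            """Aggregate alignment scores per candidate: a candidate aligned with
            many opinion words at high probability gets high confidence."""
            conf = defaultdict(float)
            for target, _opinion, p in pairs:
                conf[target] += p
            top = max(conf.values())
            return {t: s / top for t, s in conf.items()}  # normalise to [0, 1]

        def extract_targets(pairs, threshold=0.5):
            conf = candidate_confidence(pairs)
            return sorted(t for t, s in conf.items() if s >= threshold)

        if __name__ == "__main__":
            print(extract_targets(ALIGNED_PAIRS))  # ['battery life', 'screen']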

    Opinion Holder and Target Extraction for Verb-based Opinion Predicates – The Problem is Not Solved

    We offer a critical review of the current state of opinion role extraction involving opinion verbs. We argue that neither the currently available lexical resources nor the manually annotated text corpora are sufficient to study this task appropriately. We introduce a new corpus focusing on the opinion roles of opinion verbs from the Subjectivity Lexicon and show the potential benefits of this corpus. We also demonstrate that state-of-the-art classifiers perform rather poorly on this new dataset compared to the standard dataset for the task, showing that significant research remains to be done.

    Latent Opinions Transfer Network for Target-Oriented Opinion Words Extraction

    Target-oriented opinion words extraction (TOWE) is a new subtask of aspect-based sentiment analysis (ABSA), which aims to extract the corresponding opinion words for a given opinion target in a sentence. Recently, neural network methods have been applied to this task and achieve promising results. However, the difficulty of annotation causes the datasets of TOWE to be insufficient, which heavily limits the performance of neural models. By contrast, abundant review sentiment classification data are easily available at online review sites. These reviews contain substantial latent opinion information and semantic patterns. In this paper, we propose a novel model to transfer this opinion knowledge from resource-rich review sentiment classification datasets to the low-resource TOWE task. To address the challenges in the transfer process, we design an effective transformation method to obtain latent opinions and then integrate them into TOWE. Extensive experimental results show that our model achieves better performance than other state-of-the-art methods and significantly outperforms the base model without opinion knowledge transfer. Further analysis validates the effectiveness of our model.
    Comment: Accepted by the 34th AAAI Conference on Artificial Intelligence (AAAI 2020)
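    As a rough, hypothetical sketch of the transfer idea (not the paper's architecture), the code below reuses an encoder that would be pretrained on review sentiment classification, projects its hidden states into "latent opinion" features, and concatenates them with a task-specific encoder before BIO tagging of opinion words. Layer types, sizes, and the freezing choice are assumptions.

        import torch
        import torch.nn as nn

        class LatentOpinionTagger(nn.Module):
            """Illustrative TOWE tagger: a sentiment encoder (pretrained elsewhere on
            review classification) supplies latent opinion features that are fused
            with the task encoder's states before tagging."""

            def __init__(self, vocab_size=10000, emb=128, hid=128, tags=3):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb)
                # Would be loaded from a model pretrained on sentiment classification.
                self.sentiment_encoder = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
                self.towe_encoder = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
                self.opinion_proj = nn.Linear(2 * hid, hid)   # transform latent opinions
                self.tagger = nn.Linear(2 * hid + hid, tags)  # B/I/O over opinion words

            def forward(self, token_ids):
                x = self.embed(token_ids)
                with torch.no_grad():                          # keep pretrained opinions frozen
                    sent_states, _ = self.sentiment_encoder(x)
                latent_opinions = torch.tanh(self.opinion_proj(sent_states))
                towe_states, _ = self.towe_encoder(x)
                return self.tagger(torch.cat([towe_states, latent_opinions], dim=-1))

        if __name__ == "__main__":
            model = LatentOpinionTagger()
            logits = model(torch.randint(0, 10000, (2, 12)))  # 2 sentences, 12 tokens each
            print(logits.shape)                               # torch.Size([2, 12, 3])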

    Prompt Tuning Large Language Models on Personalized Aspect Extraction for Recommendations

    Existing aspect extraction methods mostly rely on explicit or ground-truth aspect information, or use data mining or machine learning approaches to extract aspects from implicit user feedback such as user reviews. It remains under-explored, however, how the extracted aspects can help generate more meaningful recommendations for users. Meanwhile, existing research on aspect-based recommendations often relies on separate aspect extraction models or assumes the aspects are given, without accounting for the fact that the optimal set of aspects could depend on the recommendation task at hand. In this work, we propose to combine aspect extraction with aspect-based recommendation in an end-to-end manner, achieving the two goals together in a single framework. For the aspect extraction component, we leverage recent advances in large language models and design a new prompt learning mechanism to generate aspects for the end recommendation task. For the aspect-based recommendation component, the extracted aspects are concatenated with the usual user and item features used by the recommendation model. The recommendation task mediates the learning of the user embeddings and item embeddings, which are used as soft prompts to generate aspects. Therefore, the extracted aspects are personalized and contextualized by the recommendation task. We showcase the effectiveness of our proposed method through extensive experiments on three industrial datasets, where our proposed framework significantly outperforms state-of-the-art baselines in both the personalized aspect extraction and aspect-based recommendation tasks. In particular, we demonstrate that it is necessary and beneficial to combine the learning of aspect extraction and aspect-based recommendation. We also conduct extensive ablation studies to understand the contribution of each design component in our framework.
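    A minimal, hypothetical sketch of the coupling described above, not the authors' framework: user and item embeddings act as soft prompts prepended to review token embeddings for aspect generation, and the pooled aspect representation is concatenated with the same embeddings for rating prediction. The tiny Transformer stands in for a prompt-tuned LLM; all dimensions and names are invented.

        import torch
        import torch.nn as nn

        class SoftPromptAspectRecommender(nn.Module):
            """Joint sketch: soft-prompted aspect generation plus aspect-based
            rating prediction, trained end to end."""

            def __init__(self, n_users=1000, n_items=1000, d=64, n_aspects=50):
                super().__init__()
                self.user_emb = nn.Embedding(n_users, d)
                self.item_emb = nn.Embedding(n_items, d)
                # Placeholder for a prompt-tuned LLM; only prompts and heads would train.
                layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
                self.lm = nn.TransformerEncoder(layer, num_layers=2)
                self.aspect_head = nn.Linear(d, n_aspects)  # which aspects to surface
                self.rating_head = nn.Linear(3 * d, 1)      # user + item + aspect features

            def forward(self, user_ids, item_ids, review_tokens):
                u, v = self.user_emb(user_ids), self.item_emb(item_ids)
                # Soft prompts: prepend user/item vectors to the review token embeddings.
                prompted = torch.cat([u.unsqueeze(1), v.unsqueeze(1), review_tokens], dim=1)
                hidden = self.lm(prompted)
                aspect_repr = hidden[:, 0]                  # pooled "aspect" summary
                aspect_logits = self.aspect_head(aspect_repr)
                rating = self.rating_head(torch.cat([u, v, aspect_repr], dim=-1))
                return aspect_logits, rating

        if __name__ == "__main__":
            model = SoftPromptAspectRecommender()
            tokens = torch.randn(4, 20, 64)                 # stand-in token embeddings
            aspects, rating = model(torch.arange(4), torch.arange(4), tokens)
            print(aspects.shape, rating.shape)              # (4, 50) (4, 1)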