
    Words are Malleable: Computing Semantic Shifts in Political and Media Discourse

    Recently, researchers have started to pay attention to the detection of temporal shifts in the meaning of words. However, most (if not all) of these approaches restrict their efforts to uncovering change over time, thus neglecting other valuable dimensions such as social or political variability. We propose an approach for detecting semantic shifts between different viewpoints, broadly defined as sets of texts that share a specific metadata feature, which can be a time period but also a social entity such as a political party. For each viewpoint, we learn a semantic space in which each word is represented as a low-dimensional neural embedding. The challenge is to compare the meaning of a word in one space to its meaning in another space and to measure the size of the semantic shift. We compare the effectiveness of a measure based on an optimal transformation between the two spaces with a measure based on the similarity of the word's neighbors in the respective spaces, and our experiments demonstrate that a combination of the two performs best. We show that semantic shifts occur not only over time but also across viewpoints within a short period. For evaluation, we demonstrate how this approach captures meaningful semantic shifts and can help improve other tasks such as contrastive viewpoint summarization and ideology detection (measured as classification accuracy) in political texts. We also show that the two laws of semantic change that were empirically shown to hold for temporal shifts also hold for shifts across viewpoints: frequent words are less likely to shift meaning, while words with many senses are more likely to do so. Comment: In Proceedings of the 26th ACM International Conference on Information and Knowledge Management (CIKM 2017).
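    The two shift measures mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes both viewpoint spaces share the same vocabulary indexing, and the neighbor count k and the equal-weight combination are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): measure a word's shift between two
# viewpoint-specific embedding matrices X and Y that share the same vocabulary order.
import numpy as np

def procrustes_shift(X, Y, word_idx):
    """Cosine distance of one word after aligning X to Y with an optimal
    orthogonal transformation (orthogonal Procrustes via SVD)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt                                  # rotation minimizing ||X R - Y||_F
    x, y = X[word_idx] @ R, Y[word_idx]
    return 1.0 - x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

def neighbor_shift(X, Y, word_idx, k=10):
    """1 minus the Jaccard overlap of the word's k nearest neighbors per space."""
    def topk(M):
        sims = M @ M[word_idx] / (np.linalg.norm(M, axis=1)
                                  * np.linalg.norm(M[word_idx]) + 1e-9)
        sims[word_idx] = -np.inf                # exclude the word itself
        return set(np.argsort(-sims)[:k])
    a, b = topk(X), topk(Y)
    return 1.0 - len(a & b) / len(a | b)

def combined_shift(X, Y, word_idx, alpha=0.5):
    """Weighted combination of the two measures (the weight is an assumption)."""
    return alpha * procrustes_shift(X, Y, word_idx) + \
           (1 - alpha) * neighbor_shift(X, Y, word_idx)
```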

    Better Document-level Sentiment Analysis from RST Discourse Parsing

    Discourse structure is the hidden link between surface features and document-level properties such as sentiment polarity. We show that the discourse analyses produced by Rhetorical Structure Theory (RST) parsers can improve document-level sentiment analysis, via composition of local information up the discourse tree. First, we show that reweighting discourse units according to their position in a dependency representation of the rhetorical structure can yield substantial improvements in lexicon-based sentiment analysis. Next, we present a recursive neural network over the RST structure, which offers significant improvements over classification-based methods. Comment: Published at Empirical Methods in Natural Language Processing (EMNLP 2015).
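    As a rough illustration of the reweighting idea (not the paper's model), the sketch below assumes an RST parser has already segmented a document into elementary discourse units (EDUs) and assigned each a depth in a dependency representation of the tree; the toy lexicon and the exponential decay are assumptions.

```python
# Toy lexicon-based scorer in which shallower discourse units (closer to the
# root nucleus) contribute more to the document-level polarity.
LEXICON = {"great": 1.0, "good": 0.5, "bad": -0.5, "terrible": -1.0}

def edu_score(text):
    return sum(LEXICON.get(tok.lower().strip(".,!?"), 0.0) for tok in text.split())

def document_polarity(edus, decay=0.5):
    """edus: list of (edu_text, depth) pairs from an RST dependency tree."""
    total = sum((decay ** depth) * edu_score(text) for text, depth in edus)
    return "positive" if total >= 0 else "negative"

# A satellite deep in the tree ("terrible") is outweighed by the nucleus ("great").
print(document_polarity([("the opening act was terrible,", 2),
                         ("but overall the show is great.", 0)]))  # -> positive
```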

    Building Contrastive Summaries of Subjective Text Via Opinion Ranking

    This article investigates methods to automatically compare entities from opinionated text to help users obtain important information from a large amount of data, a task known as “contrastive opinion summarization”. The task aims at generating contrastive summaries that highlight differences between entities given opinionated text (written about each entity individually) in which opinions have been previously identified. These summaries are made by selecting sentences from the input data, so the core of the problem is choosing the most relevant sentences in an appropriate manner. The proposed method uses a heuristic that makes decisions according to the opinions found in the input text and to traits that a summary is expected to present. The evaluation measures three characteristics that contrastive summaries are expected to have: representativity (presence of opinions that are frequent in the input), contrastivity (presence of opinions that highlight differences between entities) and diversity (presence of different opinions to avoid redundancy). The proposed method is compared to previously published methods and performs significantly better than them according to these measures. The main contributions of this work are: a comparative analysis of contrastive opinion summarization methods, a systematic way to evaluate summaries, a new method that outperforms previously known ones, and a new dataset for the task.
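    A hedged sketch of how the three criteria can drive sentence selection is given below. It is not the article's heuristic: opinions are assumed to be pre-extracted as (aspect, polarity) pairs per sentence, polarity is encoded as +1/-1, and the unweighted sum of the three terms is an assumption.

```python
from collections import Counter

def summary_for_entity(cands, other_opinions, k=2):
    """cands: {sentence: set of (aspect, polarity)} for one entity, polarity in {+1, -1};
    other_opinions: set of (aspect, polarity) pairs observed for the compared entity."""
    freq = Counter(o for ops in cands.values() for o in ops)
    picked, covered = [], set()
    for _ in range(k):
        def score(sent):
            ops = cands[sent]
            rep = sum(freq[o] for o in ops)                  # representativity
            con = sum(1 for (asp, pol) in ops
                      if (asp, -pol) in other_opinions)      # contrastivity
            div = len(ops - covered)                         # diversity
            return rep + con + div
        remaining = [s for s in cands if s not in picked]
        if not remaining:
            break
        best = max(remaining, key=score)
        picked.append(best)
        covered |= cands[best]
    return picked
```

    Running the selector once per entity yields the two halves of a contrastive summary; a greedy update of the covered-opinion set is what keeps redundant sentences out.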

    Generating comparative summaries of contradictory opinions in text


    Towards Argument-Aware Abstractive Summarization of Long Legal Opinions with Summary Reranking

    We propose a simple approach for the abstractive summarization of long legal opinions that considers the argument structure of the document. Legal opinions often contain complex and nuanced argumentation, making it challenging to generate a concise summary that accurately captures the main points. Our approach uses argument role information to generate multiple candidate summaries and then reranks these candidates based on their alignment with the document's argument structure. We demonstrate the effectiveness of our approach on a dataset of long legal opinions and show that it outperforms several strong baselines.
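    The rerank-by-argument-alignment step can be sketched roughly as follows; the generation of candidate summaries and the extraction of argument-role spans are assumed to come from existing models, and token overlap is a deliberately crude stand-in for the paper's alignment.

```python
def argument_coverage(candidate, argument_units):
    """Fraction of argument-role spans (e.g., issues, reasons, conclusions)
    whose content words overlap with the candidate summary."""
    cand_tokens = set(candidate.lower().split())
    hits = sum(1 for unit in argument_units
               if set(unit.lower().split()) & cand_tokens)
    return hits / max(len(argument_units), 1)

def rerank(candidate_summaries, argument_units):
    """Pick the candidate that best covers the document's argument structure."""
    return max(candidate_summaries,
               key=lambda c: argument_coverage(c, argument_units))
```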

    Basic tasks of sentiment analysis

    Subjectivity detection is the task of distinguishing objective sentences from subjective ones. Objective sentences are those that do not exhibit any sentiment, so a sentiment analysis engine should find and set aside objective sentences before further analysis, e.g., polarity detection. In subjective sentences, opinions can be expressed on one or multiple topics. Aspect extraction is a subtask of sentiment analysis that consists of identifying opinion targets in opinionated text, i.e., detecting the specific aspects of a product or service the opinion holder is praising or complaining about.
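    A toy end-to-end illustration of the two tasks (subjectivity filtering followed by aspect extraction) is given below; the lexicons and the keyword-matching heuristics are assumptions for illustration, not an established method.

```python
OPINION_WORDS = {"love", "hate", "great", "awful", "disappointing", "excellent"}
ASPECT_NOUNS = {"battery", "screen", "camera", "price", "service"}

def is_subjective(sentence):
    """Keep only sentences containing at least one opinion word."""
    return any(tok.lower().strip(".,!?") in OPINION_WORDS for tok in sentence.split())

def extract_aspects(sentence):
    """Return known aspect nouns mentioned in a subjective sentence."""
    return {tok.lower().strip(".,!?") for tok in sentence.split()} & ASPECT_NOUNS

doc = ["The phone ships in a blue box.",          # objective: filtered out
       "The battery life is great, but the camera is disappointing."]
for sent in filter(is_subjective, doc):
    print(extract_aspects(sent))                   # -> {'battery', 'camera'}
```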