8 research outputs found

    Análise de Sentimento pela ótica da abordagem multimodal

    Sentiment Analysis is an important field of study for Affective Computing and for the understanding of cognitive processes in general. Two aspects support this claim: the web has become a public arena for the dissemination of opinions, and communication, from the earliest age, is expressed through multiple languages. This article presents a study of web-based sentiment analysis over multimodal and multilingual objects. Building on theoretical assumptions about sentiment analysis and multimodality under a multilingual approach, it explores relevant aspects of these studies and compares current techniques developed on the topic.

    Abstractive Summarization of Voice Communications

    Abstractive summarization of conversations is a very challenging task that requires full understanding of the dialog turns, their roles, and their relationships in the conversation. We present an efficient system, derived from a fully-fledged text analysis system, that performs the necessary linguistic analysis of turns in conversations and provides useful argumentative labels to build synthetic abstractive summaries of conversations.
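
    The abstract does not disclose implementation details, so the sketch below only illustrates the general pipeline it describes: assign argumentative labels to dialog turns, then compose a short summary from the labeled turns. The labels, rules, and templates are illustrative assumptions, not the authors' system.

        from dataclasses import dataclass

        @dataclass
        class Turn:
            speaker: str
            text: str
            label: str = "OTHER"  # hypothetical argumentative roles: ISSUE, PROPOSAL, AGREEMENT

        def label_turn(turn: Turn) -> Turn:
            # Toy rule-based labeler standing in for the full linguistic analysis.
            lowered = turn.text.lower()
            if "?" in turn.text:
                turn.label = "ISSUE"
            elif any(w in lowered for w in ("we should", "let's", "propose")):
                turn.label = "PROPOSAL"
            elif any(w in lowered for w in ("agree", "sounds good", "ok")):
                turn.label = "AGREEMENT"
            return turn

        def summarize(turns):
            # Render labeled turns into a synthetic summary with simple templates.
            templates = {
                "ISSUE": "{speaker} raised the issue: {text}",
                "PROPOSAL": "{speaker} proposed: {text}",
                "AGREEMENT": "{speaker} agreed.",
            }
            parts = []
            for t in map(label_turn, turns):
                if t.label in templates:
                    parts.append(templates[t.label].format(speaker=t.speaker, text=t.text))
            return " ".join(parts)

        print(summarize([
            Turn("A", "Should we move the release to Friday?"),
            Turn("B", "We should move it, QA needs one more day."),
            Turn("A", "OK, I agree."),
        ]))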

    Clasificación de subjetividad utilizando técnicas de aprendizaje automático

    Subjectivity classification is an area of text mining that has received little attention in Spanish, yet its applications are extensive. Studying it allows a better understanding of the semantics of a text and the intention of its author, not to mention the implications of its use in business intelligence, where it can help identify customer needs and extract valuable metrics from their reviews. In this work we apply well-known subjectivity analysis techniques from English, adapted to Spanish, building along the way a database and a sentence-level subjectivity classifier.
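
    The abstract does not say which learning techniques or data were used, so the following is only a minimal sketch of a sentence-level subjectivity classifier for Spanish, assuming a bag-of-words representation and a Naive Bayes model, with toy training sentences.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Toy training data: a realistic system would use an annotated corpus.
        sentences = [
            "me encanta esta película, es maravillosa",     # subjective
            "el producto es terrible y no funciona",        # subjective
            "la reunión comienza a las diez de la mañana",  # objective
            "el informe tiene veinte páginas",              # objective
        ]
        labels = ["subjetiva", "subjetiva", "objetiva", "objetiva"]

        clf = make_pipeline(CountVectorizer(), MultinomialNB())
        clf.fit(sentences, labels)

        print(clf.predict(["esta película es terrible"]))  # expected: subjetiva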

    A Survey on Semantic Processing Techniques

    Semantic processing is a fundamental research domain in computational linguistics. In the era of powerful pre-trained language models and large language models, the advancement of research in this domain appears to be decelerating. However, the study of semantics is multi-dimensional in linguistics, and the research depth and breadth of computational semantic processing can be largely improved with new technologies. In this survey, we analyze five semantic processing tasks, i.e., word sense disambiguation, anaphora resolution, named entity recognition, concept extraction, and subjectivity detection. We study relevant theoretical research in these fields, advanced methods, and downstream applications. We connect the surveyed tasks with downstream applications because this may inspire future scholars to fuse these low-level semantic processing tasks with high-level natural language processing tasks. The review of theoretical research may also inspire new tasks and technologies in the semantic processing domain. Finally, we compare the different semantic processing techniques and summarize their technical trends, application trends, and future directions. (Published in Information Fusion, Volume 101, 2024, 101988, ISSN 1566-2535. The equal-contribution mark is missing from the published version due to publication policies; please contact Prof. Erik Cambria for details.)
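
    As a quick, self-contained illustration of one of the surveyed tasks (named entity recognition), not taken from the survey itself, the snippet below assumes spaCy and its small English model en_core_web_sm are installed.

        import spacy

        # Assumes: pip install spacy && python -m spacy download en_core_web_sm
        nlp = spacy.load("en_core_web_sm")
        doc = nlp("The survey appeared in Information Fusion in 2024.")

        for ent in doc.ents:
            print(ent.text, ent.label_)  # e.g. "Information Fusion" ORG, "2024" DATE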

    Discourse-level Relations For Opinion Analysis

    Opinion analysis deals with subjective phenomena such as judgments, evaluations, feelings, emotions, beliefs, and stances. The availability of public opinion over the Internet and in face-to-face conversations, coupled with the need to understand and mine these for end applications, has motivated a great amount of research in this field in recent times. Researchers have explored a wide array of knowledge resources for opinion analysis, from words and phrases to syntactic dependencies and semantic relations. In this thesis, we investigate a discourse-level treatment for opinion analysis. In order to realize the discourse-level analysis, we propose a new linguistic representational scheme designed to support interdependent interpretations of opinions in the discourse. We adapt and extend an existing subjectivity annotation scheme to capture discourse-level relations in a multi-party meeting corpus. Human inter-annotator agreement studies show that trained human annotators can recognize the elements of our linguistic scheme. Empirically, we test the impact of our discourse-level relations on fine-grained polarity classification. In this process, we also explore two different global inference models for incorporating discourse-based information to augment word-based information. Our results show that the discourse-level relations can augment and improve upon word-based methods for effective fine-grained opinion polarity classification. Further, in this thesis, we explore linguistically motivated features and a global inference paradigm for learning the discourse-level relations from the annotated data. We employ the ideas from our linguistic scheme for recognizing stances in dual-sided debates from the product and political domains. For product debates, we use web mining and rules to learn and employ elements of our discourse-level relations in an unsupervised fashion. For political debates, on the other hand, we take a more exploratory, supervised approach, and encode the building blocks of our discourse-level relations as features for stance classification. Our results show that the ideas behind the discourse-level relations can be learnt and employed effectively to improve overall stance recognition in product debates.
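
    The thesis's annotation scheme and global inference models are not reproduced here; the toy sketch below only illustrates the underlying idea of letting discourse-level relations between opinions augment word-based polarity evidence. The relation types and weights are invented for illustration.

        # Word-based evidence: a tiny prior-polarity lexicon (illustrative only).
        WORD_POLARITY = {"great": 1.0, "love": 1.0, "terrible": -1.0, "hate": -1.0}

        def word_score(text):
            return sum(WORD_POLARITY.get(w, 0.0) for w in text.lower().split())

        def discourse_adjust(score, relation, neighbor_score):
            # Discourse-based evidence: pull the score toward (or away from) the
            # polarity of a related opinion, depending on the relation type.
            if relation == "reinforcing":   # e.g. agreement on the same target
                return score + 0.5 * neighbor_score
            if relation == "contrasting":   # e.g. disagreement / target opposition
                return score - 0.5 * neighbor_score
            return score

        turn_a = "i love the new interface"
        turn_b = "yeah it is nice"          # weak on its own, but reinforces turn_a

        score_a = word_score(turn_a)
        score_b = discourse_adjust(word_score(turn_b), "reinforcing", score_a)
        print(score_a, score_b)  # the discourse relation strengthens turn_b's polarity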

    Multimodal subjectivity analysis of multiparty conversation

    We investigate the combination of several sources of information for the purpose of subjectivity recognition and polarity classification in meetings. We focus on features from two modalities, transcribed words and acoustics, and we compare the performance of three different textual representations: words, characters, and phonemes. Our experiments show that character-level features outperform word-level features for these tasks, and that a careful fusion of all features yields the best performance. © 2008 Association for Computational Linguistics
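
    The meeting corpus, acoustic features, phoneme representation, and fusion strategy from the paper are not reproduced here; the sketch below only shows how word-level and character-level textual features can be compared on toy utterances, assuming scikit-learn.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy utterances standing in for transcribed meeting turns.
        utterances = [
            "i really think this is a bad idea",        # subjective
            "that design is absolutely wonderful",      # subjective
            "the meeting is scheduled for monday",      # objective
            "the prototype weighs two hundred grams",   # objective
        ]
        labels = [1, 1, 0, 0]  # 1 = subjective, 0 = objective

        for name, analyzer, ngrams in [("word", "word", (1, 2)), ("char", "char_wb", (2, 5))]:
            clf = make_pipeline(
                TfidfVectorizer(analyzer=analyzer, ngram_range=ngrams),
                LogisticRegression(max_iter=1000),
            )
            clf.fit(utterances, labels)
            pred = clf.predict(["honestly i think this plan is bad"])[0]
            print(name, "features ->", "subjective" if pred else "objective")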