9 research outputs found

    A multimodal feature learning approach for sentiment analysis of social network multimedia

    Investigation of the use of a multimodal feature learning approach, based on neural network models such as Skip-gram and denoising autoencoders, to address sentiment analysis of micro-blogging content, such as Twitter short messages, that consists of a short text and, possibly, an image.
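
    A minimal sketch of the general idea, assuming gensim and PyTorch and using random placeholders for the image features (the original work's models and data are not reproduced here): Skip-gram word vectors and image features are concatenated and passed through a small denoising autoencoder, whose hidden representation can then feed any sentiment classifier.

        import numpy as np
        import torch
        import torch.nn as nn
        from gensim.models import Word2Vec

        posts = [["great", "sunset", "at", "the", "beach"],
                 ["worst", "airline", "service", "ever"]]
        w2v = Word2Vec(posts, vector_size=50, sg=1, min_count=1)      # sg=1 -> Skip-gram
        text_feats = np.stack([np.mean([w2v.wv[w] for w in p], axis=0) for p in posts])
        img_feats = np.random.randn(len(posts), 100).astype("float32")  # placeholder CNN image features

        x = torch.tensor(np.hstack([text_feats, img_feats]), dtype=torch.float32)
        enc = nn.Sequential(nn.Linear(x.shape[1], 64), nn.ReLU())     # encoder
        dec = nn.Linear(64, x.shape[1])                               # decoder
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
        for _ in range(200):                                          # denoising objective:
            noisy = x + 0.1 * torch.randn_like(x)                     # corrupt the input ...
            loss = nn.functional.mse_loss(dec(enc(noisy)), x)         # ... and reconstruct the clean version
            opt.zero_grad(); loss.backward(); opt.step()

        joint = enc(x)  # 64-d multimodal representation, fed to any sentiment classifier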

    Violence Detection in Social Media-Review

    Social media has become a vital part of humans' day-to-day life, and different users engage with it in different ways. With the increased usage of social media, many researchers have investigated its different aspects. Many recent examples show that social media content can generate violence in the user community. Violence in social media can be categorised into aggression in comments, cyber-bullying, and incidents such as protests and murders. Identifying violent content in social media is a challenging task: posts contain both visual and textual elements, and they may carry hidden meaning that depends on the users' context and other background information. This paper summarises the different categories of social media violence and the existing methods for detecting violent content. Keywords: machine learning, natural language processing, violence, social media, convolutional neural network

    Deep Sentiment Features of Context and Faces for Affective Video Analysis

    Given the huge quantity of hours of video available on video sharing platforms such as YouTube, Vimeo, etc., the development of automatic tools that help users find videos that fit their interests has attracted the attention of both the scientific and industrial communities. So far the majority of works have addressed semantic analysis, to identify objects, scenes and events depicted in videos, but more recently affective analysis of videos has started to gain more attention. In this work we investigate the use of sentiment-driven features to classify the induced sentiment of a video, i.e. the sentiment reaction of the user. Instead of using standard computer vision features such as CNN features or SIFT features trained to recognize objects and scenes, we exploit sentiment-related features such as the ones provided by Deep-SentiBank [4], and features extracted from models that exploit deep networks trained on face expressions. We experiment on two recently introduced datasets, LIRIS-ACCEDE [2] and MEDIAEVAL-2015, that provide sentiment annotations of a large set of short videos. We show that our approach not only outperforms the current state of the art in terms of valence and arousal classification accuracy, but it also uses a smaller number of features, thus requiring less video processing.
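
    As an illustration only, a sketch of this kind of pipeline with randomly generated placeholder features (the dimensions and the classifier below are assumptions, not taken from the paper): per-frame sentiment descriptors, e.g. Deep-SentiBank ANP scores and face-expression probabilities, are average-pooled into one video-level vector and used to train a valence classifier.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        n_videos, n_frames = 40, 30
        anp_dim, face_dim = 2089, 7                 # assumed ANP-score / expression dimensions

        def video_descriptor(frame_anp, frame_faces):
            # average pooling over frames, then concatenation of the two feature types
            return np.concatenate([frame_anp.mean(axis=0), frame_faces.mean(axis=0)])

        X = np.stack([video_descriptor(rng.random((n_frames, anp_dim)),
                                       rng.random((n_frames, face_dim)))
                      for _ in range(n_videos)])
        y = rng.integers(0, 2, n_videos)            # placeholder binary valence labels

        print(cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean())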

    A Deep Multi-Level Attentive network for Multimodal Sentiment Analysis

    Multimodal sentiment analysis has attracted increasing attention and has broad application prospects. Existing methods often focus on a single modality, which fails to capture social media content that spans multiple modalities. Moreover, in multimodal learning, most works have simply combined the two modalities without exploring the complicated correlations between them, which results in unsatisfactory performance for multimodal sentiment classification. Motivated by this status quo, we propose a Deep Multi-Level Attentive network, which exploits the correlation between image and text modalities to improve multimodal learning. Specifically, we generate a bi-attentive visual map along the spatial and channel dimensions to magnify the representational power of the CNN. We then model the correlation between image regions and word semantics by extracting the textual features related to the bi-attentive visual features through semantic attention. Finally, self-attention is employed to automatically fetch the sentiment-rich multimodal features for classification. We conduct extensive evaluations on four real-world datasets, namely MVSA-Single, MVSA-Multiple, Flickr, and Getty Images, which verify the superiority of our method. Comment: 11 pages, 7 figures
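
    A rough PyTorch sketch of the attention stack described in the abstract, with simplified and assumed layer shapes rather than the authors' implementation: channel and spatial attention produce the bi-attentive visual map, semantic attention weights word features by their affinity with the attended image regions, and self-attention over the fused sequence precedes the sentiment classifier.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BiAttentiveVisual(nn.Module):
            """Channel attention followed by spatial attention on a CNN feature map."""
            def __init__(self, channels):
                super().__init__()
                self.channel_fc = nn.Linear(channels, channels)
                self.spatial_conv = nn.Conv2d(channels, 1, kernel_size=1)

            def forward(self, fmap):                       # fmap: (B, C, H, W)
                c_att = torch.sigmoid(self.channel_fc(fmap.mean(dim=(2, 3))))  # (B, C)
                fmap = fmap * c_att[:, :, None, None]
                s_att = torch.sigmoid(self.spatial_conv(fmap))                 # (B, 1, H, W)
                fmap = fmap * s_att
                return fmap.flatten(2).transpose(1, 2)     # (B, H*W, C) region features

        class SemanticAttention(nn.Module):
            """Weight word features by their affinity with the attended image regions."""
            def __init__(self, dim):
                super().__init__()
                self.proj = nn.Linear(dim, dim)

            def forward(self, regions, words):             # (B, R, D), (B, T, D)
                scores = torch.bmm(self.proj(words), regions.transpose(1, 2))  # (B, T, R)
                weights = F.softmax(scores.max(dim=-1).values, dim=-1)         # (B, T)
                return words * weights.unsqueeze(-1)

        dim = 256
        regions = BiAttentiveVisual(dim)(torch.randn(2, dim, 7, 7))
        words = SemanticAttention(dim)(regions, torch.randn(2, 12, dim))
        fused = torch.cat([regions, words], dim=1)          # (B, R+T, D) multimodal sequence
        self_att = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        pooled = self_att(fused, fused, fused)[0].mean(dim=1)
        logits = nn.Linear(dim, 3)(pooled)                  # 3 sentiment classes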

    Transmedia Context and Twitter As Conditioning the Ecuadorian Government’s Action. The Case of the “Guayaquil Emergency” During the COVID-19 Pandemic

    Communication ecosystems have multiplexed and increased their capacity to act, to distort, and to fight. The COVID-19 pandemic and the Ecuadorian Government's response to it are clear examples of the power of the media to erode, to influence, and also to produce fake news. In this context, Twitter has become more than just a social platform, as it helped spread catastrophic pictures of the country, especially of Guayaquil. This article analyzes the tweets posted by the main domestic and global media and by the Ecuadorian government accounts since the outbreak of the pandemic in Ecuador, as well as the interrelations among them and their polarity scores. The aim is to show how the government changed its action plan by focusing on exogenous elements that had been excluded from its (pre)established strategy, which consisted in neglecting and deliberately minimizing a situation that turned out to be more serious than officially acknowledged and that was exposed by unofficial global media.

    Development of a conceptual architecture for the analysis of social network content on the topic of abortion using Python

    Objective: to develop a conceptual architecture for the analysis of social network content on the topic of abortion using Python. This degree project aims to present the complete data mining process and to carry out a detailed study of the opinions expressed on social networks about abortion. To that end, it first presents a theoretical framework on the knowledge extraction process and then moves to the experimental phase. For data collection, a sample was taken on Twitter from 16 August to 29 September 2018, together with the publications of two Facebook pages that were selected because they had the most followers during the sampling period. From these data, the analysis of the results made it possible to determine the positions for and against abortion in Ecuador. In the development of this work, data were taken from Facebook and mainly from Twitter, and the Google Maps platform was also used; the scripts were written in Python using the Spyder integrated development environment (IDE, Python 3.6), which is part of Anaconda, one of the most widely used platforms for programming in this language. The interpretations of the results are presented, taking into account guidelines proposed in previous work carried out in various parts of the world that addresses content analysis and also sentiment analysis to extract information on topics ranging from the popularity of an institution or commercial brand to people's reactions to a social or political event. On average, the results obtained in this research show a majority position against abortion rather than in favour.
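
    A minimal illustrative sketch, not the thesis code: once posts have been collected, a small hand-made Spanish keyword lexicon can assign each post a for/against/neutral position, and pandas can aggregate the positions. The keywords, example posts, and scoring rule below are placeholders for the real collection and analysis.

        import pandas as pd

        posts = pd.DataFrame({"text": [
            "apoyo el derecho a decidir",
            "defendamos la vida desde la concepción",
            "marcha en contra del aborto este sábado",
        ]})

        PRO = {"derecho", "decidir", "despenalización"}      # placeholder lexicon, "for"
        CONTRA = {"vida", "concepción", "en contra"}         # placeholder lexicon, "against"

        def polarity(text):
            # simple keyword count: positive score = "for", negative = "against"
            t = text.lower()
            score = sum(w in t for w in PRO) - sum(w in t for w in CONTRA)
            return "a favor" if score > 0 else "en contra" if score < 0 else "neutral"

        posts["posición"] = posts["text"].apply(polarity)
        print(posts["posición"].value_counts())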