
    Temporal Analysis of Sentiment Events – A Visual Realization and Tracking

    Abstract. In recent years, the extraction of temporal relations between events that express sentiments has drawn great attention from the Natural Language Processing (NLP) research community. In this work, we propose a method that uses the association and contribution of sentiments in determining event-event relations in texts. First, we employ a machine learning approach based on Conditional Random Fields (CRF) to solve Task C (identification of event-event relations) of TempEval-2007 within the TimeML framework, treating sentiment as a feature of an event. By incorporating the sentiment property, our system outperforms all the participating state-of-the-art systems of TempEval-2007. Evaluation results on the Task C test set yield F-scores of 57.2% under the strict evaluation scheme and 58.6% under the relaxed evaluation scheme. Both coarse-grained sentiments (positive or negative) and Ekman's six basic universal emotions (fine-grained sentiments) are assigned to the events. Thereafter, we analyze the temporal relations between events in order to track the sentiment events. Representing the temporal relations in a graph format provides a shallow visual realization path for tracking sentiments over events. Manual evaluation of the temporal relations of sentiment events identified in 20 documents is satisfactory from the standpoint of event-sentiment tracking.
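
    As a rough, hypothetical illustration of the kind of setup the abstract describes (sentiment treated as one feature of an event, with event-event relations learned by a CRF), the sketch below uses sklearn-crfsuite with invented feature names and toy data; it is not the authors' system or feature set.

```python
# Hypothetical sketch: event-pair features with a sentiment attribute, fed to a CRF.
# The feature names and toy data are illustrative, not from the paper.
import sklearn_crfsuite  # pip install sklearn-crfsuite

def pair_features(event1, event2):
    """Build a feature dict for one event-event pair, including sentiment."""
    return {
        "e1_class": event1["class"],          # TimeML event class, e.g. OCCURRENCE
        "e2_class": event2["class"],
        "e1_tense": event1["tense"],
        "e2_tense": event2["tense"],
        "e1_sentiment": event1["sentiment"],  # coarse polarity or an Ekman emotion
        "e2_sentiment": event2["sentiment"],
        "same_sentiment": event1["sentiment"] == event2["sentiment"],
    }

# One "sequence" per document: consecutive event pairs and their relation labels.
doc_events = [
    {"class": "OCCURRENCE", "tense": "PAST", "sentiment": "negative"},
    {"class": "STATE", "tense": "PRESENT", "sentiment": "negative"},
    {"class": "OCCURRENCE", "tense": "PAST", "sentiment": "positive"},
]
X = [[pair_features(a, b) for a, b in zip(doc_events, doc_events[1:])]]
y = [["BEFORE", "AFTER"]]  # gold relation labels for the two pairs

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, y)
print(crf.predict(X))
```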

    360 Quantified Self

    Wearable devices with a wide range of sensors have contributed to the rise of the Quantified Self movement, where individuals log everything from the number of steps they have taken, to their heart rate, to their sleeping patterns. Sensors do not, however, typically sense the social and ambient environment of their users, such as general lifestyle attributes or information about their social network. This means that the users themselves, and the medical practitioners privy to the wearable sensor data, have only a narrow view of the individual, limited mainly to certain aspects of their physical condition. In this paper we describe a number of use cases for how social media can be used to complement check-up and sensor data to gain a more holistic view of an individual's health, a perspective we call the 360 Quantified Self. Health-related information can be obtained from sources as diverse as food photo sharing, location check-ins, or profile pictures. Additionally, information from a person's ego network can shed light on the social dimension of wellbeing, which is widely acknowledged to be of utmost importance even though it is currently rarely used for medical diagnosis. We articulate a long-term vision describing the technical advances and the variety of data needed to achieve an integrated system encompassing Electronic Health Records (EHR), data from wearable devices, and information derived from social media data. Comment: QCRI Technical Report.
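
    A minimal, assumed sketch of the kind of per-user integration this vision implies (EHR, wearable, and social-media data merged into one "360" view); the field names and values are placeholders, not a schema from the paper.

```python
# Toy data structure for combining the three data streams the paper discusses.
# All field names (bmi, avg_daily_steps, ...) are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class HolisticProfile:
    user_id: str
    ehr: dict = field(default_factory=dict)       # clinical check-up data
    wearable: dict = field(default_factory=dict)  # e.g. steps, heart rate, sleep
    social: dict = field(default_factory=dict)    # e.g. food photos, check-ins, ego network

def merge_sources(user_id, ehr, wearable, social):
    """Combine the three data streams into one holistic view of the user."""
    return HolisticProfile(user_id=user_id, ehr=ehr, wearable=wearable, social=social)

profile = merge_sources(
    "u42",
    ehr={"bmi": 27.1},
    wearable={"avg_daily_steps": 6200, "avg_sleep_hours": 6.4},
    social={"food_photos_per_week": 5, "checkin_venues": ["gym", "fast_food"]},
)
print(profile)
```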

    A study of feature extraction techniques for classifying topics and sentiments from news posts

    Recently, many news channels have established their own Facebook pages, on which news posts are released on a daily basis. These posts contain temporal opinions about social events that may change over time due to external factors, and they can also serve as a monitor of significant events happening around the world. As a result, much text mining research has been conducted in the area of Temporal Sentiment Analysis, one of whose most challenging tasks is to detect and extract the key features from news posts that arrive continuously over time. Extracting these features is difficult because of the posts' complex properties, and because posts about a specific topic may grow or vanish over time, producing imbalanced datasets. This study therefore develops a comparative analysis of feature extraction techniques, examining several techniques (TF-IDF, TF, BTO, IG, Chi-square) with three different n-gram features (unigram, bigram, trigram) and an SVM classifier. The aim is to discover the optimal Feature Extraction Technique (FET) that achieves the best accuracy for both topic and sentiment classification. The analysis is conducted on three news channels' datasets. The experimental results for topic classification show that Chi-square with unigrams is the best FET compared to the other techniques. Furthermore, to overcome the problem of imbalanced data, this study combines the best FET with an oversampling technique. The evaluation results show an improvement in classifier performance, with higher accuracies of 93.37%, 92.89%, and 91.92% for BBC, Al-Arabiya, and Al-Jazeera, respectively, compared to those obtained on the original datasets. The same combination (Chi-square + unigram) is used for sentiment classification and obtains accuracies of 81.87%, 70.01%, and 77.36%. However, testing the identified optimal FET on unseen, randomly selected news posts shows relatively low accuracies for both topic and sentiment classification due to the change of topics and sentiments over time.
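
    A minimal sketch of the best-performing configuration reported above (unigram features, chi-square feature selection, an SVM classifier, and oversampling for class imbalance), using scikit-learn and imbalanced-learn; the toy posts, labels, and parameter values are illustrative assumptions, not the study's data or settings.

```python
# Unigram bag-of-words + chi-square selection + oversampling + linear SVM.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from imblearn.over_sampling import RandomOverSampler  # pip install imbalanced-learn

posts = [
    "Government announces new economic reform package",
    "Local team wins the championship final",
    "Markets fall after central bank raises interest rates",
    "Star striker injured ahead of the derby",
]
topics = ["politics", "sports", "politics", "sports"]

# Unigram term counts.
vectorizer = CountVectorizer(ngram_range=(1, 1))
X = vectorizer.fit_transform(posts)

# Balance the classes before training (trivial here, but the point of the step).
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X, topics)

# Keep the k terms most associated with the topic labels (chi-square test).
selector = SelectKBest(chi2, k=10)
X_sel = selector.fit_transform(X_bal, y_bal)

clf = LinearSVC().fit(X_sel, y_bal)
new_post = vectorizer.transform(["Coach praises the team"])
print(clf.predict(selector.transform(new_post)))
```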

    Psychopower and Ordinary Madness: Reticulated Dividuals in Cognitive Capitalism

    Despite the seemingly neutral vantage of using nature for widely-distributed computational purposes, neither post-biological nor post-humanist teleology simply concludes with the real "end of nature" as entailed in the loss of the specific ontological status embedded in the identifier "natural." As evinced by the ecological crises of the Anthropocene (of which the 2019 Brazil Amazon rainforest fires are only the most recent), our epoch has transfixed the "natural order" and imposed entropic artificial integration, producing living species that become "anoetic," made to serve as automated exosomatic residues, or digital flecks. I further develop Gilles Deleuze's description of control societies to upturn Foucauldian biopower, replacing its spatio-temporal bounds with the exographic excesses of psycho-power; culling and further detailing Bernard Stiegler's framework of transindividuation and hyper-control, I examine how becoming-subject is predictively facilitated within cognitive capitalism and what Alexander Galloway terms "deep digitality." Despite the loss of material vestiges qua virtualization (which I seek to trace in a historical review of industrialization to postindustrialization), the drive-based and reticulated "internet of things" facilitates a closed loop from within the brain to the outside environment, such that the aperture of thought is mediated and compressed. The human brain, understood through its material constitution, is susceptible to total datafication's laminated process of "becoming-mnemotechnical," and, as neuroplasticity is now a valid description for deep learning and neural nets, we are privy to the rebirth of the once-discounted metaphor of the "cybernetic brain." Probing algorithmic governmentality while posing noetic dreaming as both technical and pharmacological, I seek to analyze how spirit is blithely confounded with machine-thinking's gelatinous cognition, as prosthetic organ-adaptation becomes probabilistically molded, networked, and agentially inflected (rather than simply externalized).

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of both Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, often used by people to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, in turn, plays a key role in the action recognition and affective computing fields: the former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements. Both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) for the recognition of sign language and semaphoric hand gestures is proposed; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, in the last module, a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs) is provided. The performance of LSTM-RNNs is explored in depth, due to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets, well known in the state of the art, showing remarkable results compared to current literature methods.
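
    A simplified, hypothetical sketch of the general idea behind the second module (a stacked LSTM classifier over 2D skeleton sequences); the layer sizes, sequence length, and single-branch structure are assumptions, not the thesis architecture.

```python
# Stacked LSTM over per-frame 2D joint coordinates, predicting an action label.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_JOINTS = 18   # e.g. an OpenPose-style 2D skeleton
SEQ_LEN = 30      # frames per clip
NUM_ACTIONS = 5

model = models.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, NUM_JOINTS * 2)),  # (x, y) per joint per frame
    layers.LSTM(128, return_sequences=True),           # first LSTM layer keeps the sequence
    layers.LSTM(64),                                    # second layer summarises it
    layers.Dense(NUM_ACTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data just to show the expected tensor shapes.
x = np.random.rand(16, SEQ_LEN, NUM_JOINTS * 2).astype("float32")
y = np.random.randint(0, NUM_ACTIONS, size=(16,))
model.fit(x, y, epochs=1, batch_size=8, verbose=0)
print(model.predict(x[:1]).shape)  # -> (1, NUM_ACTIONS)
```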

    Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems

    Many interactive data systems combine visual representations of data with embedded algorithmic support for automation and data exploration. To effectively support transparent and explainable data systems, it is important for researchers and designers to know how users understand the system. We discuss the evaluation of users' mental models of system logic. Mental models are challenging to capture and analyze. While common evaluation methods aim to approximate the user's final mental model after a period of system usage, user understanding continuously evolves as users interact with a system over time. In this paper, we review many common mental model measurement techniques, discuss tradeoffs, and recommend methods for deeper, more meaningful evaluation of mental models when using interactive data analysis and visualization systems. We present guidelines for evaluating mental models over time that reveal the evolution of specific model updates and how they may map to the particular use of interface features and data queries. By asking users to describe what they know and how they know it, researchers can collect structured, time-ordered insight into a user's conceptualization process while also helping guide users to their own discoveries. Comment: 10 pages, submitted to the BELIV 2020 Workshop.

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them. Comment: Published in Journal of AI Research (JAIR), volume 61, pp 75-170. 118 pages, 8 figures, 1 table.

    Exploring Life in Concentration Camps through a Visual Analysis of Prisoners’ Diaries

    Diaries are private documentations of people's lives. They contain descriptions of events, thoughts, fears, and desires. While diaries are usually kept private, published ones, such as the diary of Anne Frank, show that they bear the potential to give personal insight into events and into their emotional impact on the authors. We present a visualization tool that provides insight into the Bergen-Belsen memorial's diary corpus, which consists of dozens of diaries written by concentration camp prisoners. We designed a calendar view that documents when authors wrote about concentration camp life. Different modes support quantitative and sentiment analyses, and we provide a solution for historians to create thematic concepts that can be used to search and filter for specific diary entries. The usage scenarios illustrate the importance of the tool for researchers and memorial visitors as well as for commemorating the Holocaust.
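
    A toy sketch of the kind of aggregation that could sit behind such a calendar and sentiment view: entries grouped by author and day, with a simple lexicon-based score. The diary lines and word lists are invented placeholders, not the Bergen-Belsen corpus or the tool's actual method.

```python
# Group diary entries by author and date and compute a net sentiment per day,
# i.e. the quantities a calendar view could encode as colour and size.
import pandas as pd

POSITIVE = {"hope", "relief", "friend"}
NEGATIVE = {"fear", "hunger", "cold", "sick"}

def score(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

entries = pd.DataFrame([
    {"author": "A", "date": "1944-11-02", "text": "Cold and hunger again today"},
    {"author": "A", "date": "1944-11-03", "text": "A letter from a friend brought hope"},
    {"author": "B", "date": "1944-11-02", "text": "Many are sick and fear is everywhere"},
])
entries["date"] = pd.to_datetime(entries["date"])
entries["sentiment"] = entries["text"].apply(score)

calendar = entries.groupby(["author", "date"]).agg(
    n_entries=("text", "size"), net_sentiment=("sentiment", "sum")
)
print(calendar)
```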

    Understanding the bi-directional relationship between analytical processes and interactive visualization systems

    Interactive visualizations leverage the human visual and reasoning systems to increase the scale of information with which we can effectively work, therefore improving our ability to explore and analyze large amounts of data. Interactive visualizations are often designed with target domains in mind, such as analyzing unstructured textual information, which is a main thrust of this dissertation. Since each domain has its own existing procedures for analyzing data, a good start to a well-designed interactive visualization system is to understand the domain experts' workflow and analysis processes. This dissertation underscores the importance of understanding domain users' analysis processes and incorporating such understanding into the design of interactive visualization systems. To meet this aim, I first introduce considerations guiding the gathering of general and domain-specific analysis processes in text analytics. Two interactive visualization systems are designed following these considerations. The first system is Parallel-Topics, a visual analytics system supporting analysis of large collections of documents by extracting semantically meaningful topics. Based on lessons learned from Parallel-Topics, this dissertation further presents a general visual text analysis framework, I-Si, which presents meaningful topical summaries and temporal patterns, with the capability to handle large-scale textual information. Both systems have been evaluated by expert users and deemed successful in addressing domain analysis needs. The second contribution lies in preserving domain users' analysis processes while they use interactive visualizations. Our research suggests that this preservation could serve multiple purposes. On the one hand, it could further improve the current system. On the other hand, users often need help in recalling and revisiting their complex and sometimes iterative analysis process with an interactive visualization system. This dissertation introduces multiple types of evidence available for capturing a user's analysis process within an interactive visualization and analyzes the cost/benefit ratios of the capturing methods. It concludes that tracking interaction sequences is the least intrusive and most feasible way to capture part of a user's analysis process. To validate this claim, a user study is presented to theoretically analyze the relationship between interactions and problem-solving processes. The results indicate that constraining the way a user interacts with a mathematical puzzle does have an effect on the problem-solving process. As later evidenced in an evaluative study, a fair amount of high-level analysis can be recovered by merely analyzing interaction logs.
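
    A minimal sketch of the interaction-sequence tracking idea described above: an append-only, timestamped log of interface events that can later be mined to reconstruct parts of an analysis session. The event names and fields are illustrative assumptions, not the dissertation's actual instrumentation.

```python
# Record timestamped interface events so the ordered action sequence can later
# be replayed or analysed to recover high-level steps of the analysis process.
import json
import time

class InteractionLogger:
    def __init__(self):
        self.events = []

    def log(self, action, **details):
        """Record one interaction with a wall-clock timestamp."""
        self.events.append({"t": time.time(), "action": action, **details})

    def dump(self, path):
        with open(path, "w") as f:
            json.dump(self.events, f, indent=2)

logger = InteractionLogger()
logger.log("search", query="protest coverage 2011")
logger.log("select_topic", topic_id=7)
logger.log("filter_time", start="2011-01", end="2011-03")
logger.dump("session_log.json")

print([e["action"] for e in logger.events])
```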