
    Data analytics 2016: proceedings of the fifth international conference on data analytics


    Sentiment analysis in SemEval: a review of sentiment identification approaches

    Social media platforms are becoming the foundations of social interactions, including messaging and opinion expression. In this regard, sentiment analysis techniques focus on providing solutions to ensure the retrieval and analysis of generated data, including sentiments, emotions, and discussed topics. International competitions such as the International Workshop on Semantic Evaluation (SemEval) have attracted many researchers and practitioners with a special research interest in building sentiment analysis systems. In our work, we study the top-ranking systems for each SemEval edition during the 2013-2021 period; a total of 658 teams participated in these editions, with interest increasing over the years. We analyze the proposed systems, marking the evolution of research trends with a focus on the main components of sentiment analysis systems: data acquisition, preprocessing, and classification. Our study shows an active use of preprocessing techniques, an evolution of feature engineering and word representation from lexicon-based approaches to word embeddings, and the dominance of neural networks and transformers in the classification phase, fostering the use of ready-to-use models. Moreover, we provide researchers with insights based on the systems studied, which will allow rapid prototyping of new systems and help practitioners build systems for future SemEval editions.
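
    The shift toward ready-to-use transformer models that the survey highlights can be illustrated with a minimal Python sketch; the Hugging Face pipeline API and the checkpoint name below are assumptions for illustration, not components used by any particular SemEval team.

        # Minimal sketch of a "ready-to-use" transformer sentiment classifier;
        # the checkpoint name is an illustrative assumption.
        from transformers import pipeline

        classifier = pipeline(
            "sentiment-analysis",
            model="cardiffnlp/twitter-roberta-base-sentiment-latest",  # assumed checkpoint
        )

        tweets = [
            "The new update is fantastic!",
            "Worst customer service I have ever experienced.",
        ]
        for tweet, result in zip(tweets, classifier(tweets)):
            print(f"{result['label']:>8}  {result['score']:.3f}  {tweet}")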

    Towards robust real-world historical handwriting recognition

    In this thesis, we make a bridge from the past to the future by using artificial-intelligence methods for text recognition in a historical Dutch collection of the Natuurkundige Commissie that explored Indonesia (1820-1850). In spite of the successes of systems like 'ChatGPT', reading historical handwriting is still quite challenging for AI. Whereas GPT-like methods work on digital texts, historical manuscripts are only available as extremely diverse collections of (pixel) images. Despite their strong results, current deep learning methods are very data greedy and time consuming; they depend heavily on human experts from the humanities for labeling and on machine-learning experts for designing the models. Ideally, the use of deep learning methods should require minimal human effort, have an algorithm observe the evolution of the training process, and avoid inefficient use of the already sparse amount of labeled data. We present several approaches towards dealing with these problems, aiming to improve the robustness of current methods and to improve autonomy in training. We applied our novel word and line text recognition approaches to nine data sets differing in time period, language, and difficulty: three locally collected historical Latin-based data sets from Naturalis, Leiden; four public Latin-based benchmark data sets for comparability with other approaches; and two Arabic data sets. Using ensemble voting of just five neural networks, we achieved a level of accuracy that required hundreds of neural networks in earlier studies. Moreover, we increased the speed of evaluation of each training epoch without the need for labeled data.
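
    A minimal sketch of the ensemble-voting idea follows, assuming five trained recognizers that each return a word string for a given word image; the `models` list and the `recognize` method are hypothetical stand-ins, not the thesis's actual code.

        # Hedged sketch of ensemble voting over word-level predictions from
        # five recognizers; `recognize` is a hypothetical interface.
        from collections import Counter

        def ensemble_vote(word_image, models):
            """Return the majority-vote word and the ensemble agreement ratio."""
            predictions = [model.recognize(word_image) for model in models]
            best_word, votes = Counter(predictions).most_common(1)[0]
            return best_word, votes / len(predictions)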

    Unsupervised Biomedical Named Entity Recognition

    Named entity recognition (NER) from text is an important task for several applications, including in the biomedical domain. Supervised machine learning based systems have been the most successful on the NER task; however, they require correct annotations in large quantities for training. Annotating text manually is very labor intensive and also needs domain expertise. The purpose of this research is to reduce the human annotation effort and the cost of annotation for building NER systems in the biomedical domain. The method developed in this work leverages the availability of resources like the UMLS (Unified Medical Language System), which contains a list of biomedical entities, and a large unannotated corpus to build an unsupervised NER system that does not require any manual annotations. The method that we developed in this research has two phases. In the first phase, a biomedical corpus is automatically annotated with some named entities using UMLS through unambiguous exact matching, which we call weakly-labeled data. In this data, positive examples are the entities in the text that exactly match in UMLS and have only one semantic type, which belongs to the desired entity class to be extracted (for example, diseases and disorders). Negative examples are the entities in the text that exactly match in UMLS but are of semantic types other than those that belong to the desired entity class. These examples are then used to train a machine learning classifier using features that represent the contexts in which they appeared in the text. The trained classifier is applied back to the text to gather more examples iteratively through the process of self-training. The trained classifier is then capable of classifying mentions in an unseen text as belonging to the desired entity class or not from the contexts in which they appear. Although the trained named entity detector is good at detecting the presence of entities of the desired class in text, it cannot determine their correct boundaries. In the second phase of our method, called "Boundary Expansion", the correct boundaries of the entities are determined. This method is based on a novel idea that utilizes machine learning and UMLS. Training examples for boundary expansion are gathered directly from UMLS and do not require any manual annotations. We also developed a new WordNet-based approach for boundary expansion. Our method was evaluated on three datasets: the SemEval 2014 Task 7 dataset, which has diseases and disorders as the desired entity class; the GENIA dataset, which has proteins, DNAs, RNAs, cell types, and cell lines as the desired entity classes; and the i2b2 dataset, which has problems, tests, and treatments as the desired entity classes. Our method performed well and obtained performance close to supervised methods on the SemEval dataset. On the other datasets, it outperformed an existing unsupervised method on most entity classes. The only requirements for our method to work well are a list of entity names with their semantic types and a large unannotated corpus. Given these, our method generalizes across different types of entities and different types of biomedical text. Being unsupervised, the method can be easily applied to new NER tasks without needing costly annotations.
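
    The first phase described above - weak labeling through unambiguous exact matches followed by self-training on contexts - can be sketched as follows; the dictionary standing in for UMLS lookup and the choice of a TF-IDF/logistic-regression context classifier are assumptions for illustration, not the authors' actual implementation.

        # Sketch of weak labeling plus self-training under simplifying
        # assumptions: a dict stands in for UMLS lookup, and TF-IDF with
        # logistic regression stands in for the context classifier.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        def weak_labels(mentions, umls_types, target_type="Disease"):
            """Keep only mentions that match UMLS unambiguously (one semantic type)."""
            contexts, labels = [], []
            for mention, context in mentions:
                types = umls_types.get(mention.lower())
                if types is not None and len(types) == 1:
                    contexts.append(context)
                    labels.append(1 if target_type in types else 0)
            return contexts, labels

        def self_train(contexts, labels, unlabeled, rounds=3, threshold=0.9):
            """Iteratively add confident predictions to the training set."""
            vectorizer = TfidfVectorizer()
            classifier = LogisticRegression(max_iter=1000)
            for _ in range(rounds):
                classifier.fit(vectorizer.fit_transform(contexts), labels)
                probs = classifier.predict_proba(vectorizer.transform(unlabeled))
                still_unlabeled = []
                for ctx, p in zip(unlabeled, probs):
                    if p.max() >= threshold:  # confident: adopt as a new example
                        contexts.append(ctx)
                        labels.append(int(classifier.classes_[p.argmax()]))
                    else:
                        still_unlabeled.append(ctx)
                unlabeled = still_unlabeled
            return vectorizer, classifier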

    Ensemble of classifiers based data fusion of EEG and MRI for diagnosis of neurodegenerative disorders

    The prevalence of Alzheimer's disease (AD), Parkinson's disease (PD), and mild cognitive impairment (MCI) is rising at an alarming rate as the average age of the population increases, especially in developing nations. The efficacy of new medical treatments critically depends on the ability to diagnose these diseases at the earliest stages. To facilitate the availability of early diagnosis in community hospitals, an accurate, inexpensive, and noninvasive diagnostic tool must be made available. As biomarkers, the event related potentials (ERP) of the electroencephalogram (EEG) - which have previously shown promise in automated diagnosis - in addition to volumetric magnetic resonance imaging (MRI), are relatively low cost and readily available tools that can be used for automated diagnosis. Sixteen-electrode EEG data were collected from 175 subjects afflicted with Alzheimer's disease, Parkinson's disease, or mild cognitive impairment, as well as from non-diseased (normal control) subjects. T2-weighted MRI volumetric data were also collected from 161 of these subjects. Feature extraction methods were used to separate diagnostic information from the raw data. The EEG signals were decomposed using the discrete wavelet transform in order to isolate informative frequency bands. The MR images were processed through segmentation software to provide volumetric data of various brain regions in order to quantify potential brain tissue atrophy. Both of these data sources were utilized in a pattern recognition based classification algorithm to serve as a diagnostic tool for Alzheimer's and Parkinson's disease. Support vector machine and multilayer perceptron classifiers were used to create a classification algorithm trained with the EEG and MRI data. Extracted features were used to train individual classifiers, each learning a particular subset of the training data, whose decisions were combined using decision-level fusion. Additionally, a severity analysis was performed to distinguish between various stages of AD as well as a cognitively normal state. The study found that EEG and MRI data hold complementary information for the diagnosis of AD as well as PD. The use of both data types with decision-level fusion improves diagnostic accuracy over that of each individual data source. For AD-only diagnosis, ERP data alone provided 78% diagnostic performance, MRI alone 89%, and ERP and MRI combined 94%. For PD-only diagnosis, ERP-only performance was 67%, MRI-only 70%, and combined performance 78%. MCI-only diagnosis exhibited a similar effect, with 71% ERP performance, 82% MRI performance, and 85% combined performance. Diagnosis among three subject groups showed the same trend: for PD, AD, and normal diagnosis, ERP-only performance was 43%, MRI-only 66%, and combined performance 71%. The severity analysis for mild AD, severe AD, and normal subjects showed the same combined effect.
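
    A minimal sketch of the two steps named above - wavelet decomposition of an EEG channel and decision-level fusion of classifier outputs - is given below; PyWavelets' wavedec, the energy features, and the weighted-average fusion rule are assumptions for illustration, not the study's exact pipeline.

        # Sketch under assumptions: pywt.wavedec stands in for the ERP
        # decomposition, and a weighted average of posteriors stands in for
        # the study's decision-level fusion rule.
        import numpy as np
        import pywt

        def dwt_band_features(eeg_channel, wavelet="db4", level=5):
            """Decompose one EEG channel into sub-bands; summarize each by its energy."""
            coeffs = pywt.wavedec(eeg_channel, wavelet, level=level)
            return np.array([np.sum(c ** 2) for c in coeffs])

        def fuse_decisions(posteriors, weights):
            """Decision-level fusion: weighted average of per-classifier class posteriors."""
            fused = np.average(np.asarray(posteriors), axis=0, weights=weights)
            return int(np.argmax(fused))

        epoch = np.random.randn(512)                     # stand-in for one ERP epoch
        print(dwt_band_features(epoch))                  # one energy value per sub-band
        print(fuse_decisions([[0.7, 0.3], [0.4, 0.6]], weights=[0.6, 0.4]))  # -> 0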

    CLOUD-BASED MACHINE LEARNING AND SENTIMENT ANALYSIS

    The role of a data scientist is becoming increasingly ubiquitous as companies and institutions see the need to gain additional insights from data to make better decisions and improve the quality of service delivered to customers. This thesis document contains three data science projects aimed at improving tools and techniques used in analyzing and evaluating data. The first research study applied cloud-based automated machine learning algorithms to a standard cybersecurity dataset to detect vulnerabilities in network traffic data. The performance of the algorithms was measured and compared using standard evaluation metrics. The second research study involved text mining of social media, specifically Reddit. We mined up to 100,000 comments in multiple subreddits and tested for hate speech via a custom-designed version of the Python VADER sentiment analysis package. Our work integrated standard sentiment analysis with Hatebase.org, and we demonstrate that our new method can better detect hate speech in social media. Following sentiment analysis and hate speech detection, in the third research project we applied statistical techniques to evaluate significant differences in text analytics, specifically the sentiment categories produced by lexicon-based software and cloud-based tools. We compared the three big cloud providers - AWS, Azure, and GCP - with the standard Python VADER sentiment analysis library. We utilized statistical analysis to determine significant differences between the cloud platforms and VADER, and demonstrated that each platform is unique in its scoring mechanism.
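
    A hedged sketch of the lexicon-extension idea follows, using the off-the-shelf vaderSentiment package; the placeholder terms and their valence scores below are illustrative assumptions, not Hatebase data or the thesis's custom code.

        # Sketch of extending VADER's lexicon with hate-speech terms, in the
        # spirit of the Hatebase integration described above; the terms and
        # scores are placeholders.
        from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

        analyzer = SentimentIntensityAnalyzer()
        # VADER valence scores run from -4 (extremely negative) to +4.
        placeholder_hate_terms = {"slurterm1": -3.8, "slurterm2": -3.5}
        analyzer.lexicon.update(placeholder_hate_terms)

        comment = "that subreddit is full of slurterm1 posters"
        scores = analyzer.polarity_scores(comment)
        # One possible rule: flag as hate speech when the compound score is
        # strongly negative and a lexicon hate term is present.
        print(scores["compound"])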

    Mapping (Dis-)Information Flow about the MH17 Plane Crash

    Digital media enables not only fast sharing of information, but also of disinformation. One prominent case of an event leading to the circulation of disinformation on social media is the MH17 plane crash. Studies analysing the spread of information about this event on Twitter have focused on small, manually annotated datasets, or used proxies for data annotation. In this work, we examine to what extent text classifiers can be used to label data for subsequent content analysis; in particular, we focus on predicting pro-Russian and pro-Ukrainian Twitter content related to the MH17 plane crash. Even though we find that a neural classifier improves over a hashtag-based baseline, labeling pro-Russian and pro-Ukrainian content with high precision remains a challenging problem. We provide an error analysis underlining the difficulty of the task and identify factors that might help improve classification in future work. Finally, we show how the classifier can facilitate the annotation task for human annotators.
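
    A minimal sketch of what such a hashtag-based baseline might look like is given below; the hashtag sets are invented placeholders, not the hashtags used in the paper.

        # Sketch of a hashtag-based labeling baseline; the hashtag sets are
        # illustrative placeholders, not the paper's actual lists.
        PRO_RUSSIAN_TAGS = {"#examplerussiantag"}        # placeholder
        PRO_UKRAINIAN_TAGS = {"#exampleukrainiantag"}    # placeholder

        def hashtag_label(tweet: str) -> str:
            """Label a tweet by which side's hashtags it contains, if only one."""
            tags = {tok.lower() for tok in tweet.split() if tok.startswith("#")}
            if tags & PRO_RUSSIAN_TAGS and not tags & PRO_UKRAINIAN_TAGS:
                return "pro-russian"
            if tags & PRO_UKRAINIAN_TAGS and not tags & PRO_RUSSIAN_TAGS:
                return "pro-ukrainian"
            return "unknown"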

    Cartoons as interdiscourse : a quali-quantitative analysis of social representations based on collective imagination in cartoons produced after the Charlie Hebdo attack

    The attacks against Charlie Hebdo in Paris at the beginning of 2015 urged many cartoonists - most of them professionals, but some laymen as well - to create cartoons in reaction to this tragedy. The main goal of this article is to show how traumatic events like this one can converge in a rather limited set of metaphors, ranging from easily recognizable topoi to rather vague interdiscourses that circulate in contemporary societies. To do so, we analyzed 450 cartoons produced in reaction to the Charlie Hebdo attacks, taking a quali-quantitative approach that draws on both discourse analysis and semiotics. We identified eight main themes and analyzed the five that are anchored in collective imagination (the pen against the sword, the journalist as a modern hero, etc.). Then, we studied the cartoons at the figurative, narrative, and thematic levels using Greimas' model of the semiotic square. This paper shows the ways in which these cartoons build upon a memory-based network of events from the recent past (particularly 9/11), and more generally on a collective imagination that can be linked to Western values.

    Advancing Pattern Recognition Techniques for Brain-Computer Interfaces: Optimizing Discriminability, Compactness, and Robustness

    In this dissertation, we formulate three central target criteria for the systematic advancement of pattern recognition in modern brain-computer interfaces (BCIs). Building on these, we develop a pattern recognition framework for BCIs that unites the three target criteria through a new optimization algorithm. Furthermore, we demonstrate the successful application of our approach to two innovative BCI paradigms for which no established pattern recognition methodology previously existed.

    Expressions of psychological stress on Twitter: detection and characterisation

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. Long-term psychological stress is a significant predictive factor for individual mental health, and short-term stress is a useful indicator of an immediate problem. Traditional psychology studies have relied on surveys to understand reasons for stress in general and in specific contexts. The popularity and ubiquity of social media make it a potential data source for identifying and characterising aspects of stress. Previous studies of stress in social media have focused on users responding to stressful personal life events. However, prior social media research has not explored expressions of stress in other important domains, including travel and politics. This thesis detects and analyses expressions of psychological stress in social media. So far, TensiStrength is the only existing lexicon for stress and relaxation scores in social media. Using a word-vector based word sense disambiguation method, the TensiStrength lexicon was modified to include the stress scores of the different senses of the same word. On a dataset of 1000 tweets containing ambiguous stress-related words, the accuracy of the modified TensiStrength increased by 4.3%. This thesis also reports the characteristics of a multiple-domain stress dataset of 12000 tweets, 3000 each for airlines, personal events, UK politics, and London traffic. A two-step method for identifying stressors in tweets was implemented. The first step used LDA topic modelling and k-means clustering to find a set of types of stressors (e.g., delay, accident). Second, three word-vector based methods - maximum-word similarity, context-vector similarity, and cluster-vector similarity - were used to detect the stressors in each tweet. The cluster-vector similarity method was found to identify the stressors in tweets in all four domains better than machine learning classifiers, based on the performance metrics of accuracy, precision, recall, and F-measure. Swearing and sarcasm were also analysed in high-stress and no-stress datasets from the four domains, using a convolutional neural network and a multilayer perceptron, respectively. The presence of swearing and sarcasm was higher in the high-stress tweets than in the no-stress tweets in all domains. The stressors in each domain with higher percentages of swearing or sarcasm were identified. Furthermore, the distribution of the temporal classes (past, present, future, and atemporal) in high-stress tweets was found using an ensemble classifier; the distribution depended on the domain and the stressors. This study contributes a modified and improved lexicon for the identification of stress scores in social media texts. The two-step method to identify stressors follows a general framework that can be used for domains other than those studied. The presence of swearing, sarcasm, and the temporal classes of high-stress tweets belonging to different domains are found and compared with findings from traditional psychology for the first time. The algorithms and knowledge may be useful for travel, political, and personal life systems that need to identify stressful events in order to take appropriate action. This work was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 636160-2, the Optimum project (www.optimumproject.eu).
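
    A minimal sketch of the cluster-vector similarity idea follows; the embedding lookup, the example clusters, and the threshold are assumptions for illustration, not the thesis's actual models or data.

        # Sketch of cluster-vector similarity for stressor detection, under
        # assumptions: `vectors` is any word-embedding lookup (e.g. a
        # pretrained word2vec model) mapping words to NumPy arrays.
        import numpy as np

        def mean_vector(words, vectors):
            """Average the embeddings of the words that have a vector."""
            found = [vectors[w] for w in words if w in vectors]
            return np.mean(found, axis=0) if found else None

        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def detect_stressor(tweet_tokens, clusters, vectors, threshold=0.5):
            """Assign the stressor whose cluster centroid is closest to the tweet."""
            tweet_vec = mean_vector(tweet_tokens, vectors)
            if tweet_vec is None:
                return None
            scored = []
            for name, words in clusters.items():
                centroid = mean_vector(words, vectors)
                if centroid is not None:
                    scored.append((name, cosine(tweet_vec, centroid)))
            if not scored:
                return None
            best, score = max(scored, key=lambda x: x[1])
            return best if score >= threshold else None

        # Illustrative clusters (placeholders, not the thesis's clusters):
        clusters = {"delay": ["delay", "late", "waiting"],
                    "accident": ["crash", "collision", "injured"]}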