1,209 research outputs found

    Thumbs up? Sentiment Classification using Machine Learning Techniques

    Full text link
    We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging. Comment: To appear in EMNLP-2002.
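    The classifier setup this abstract describes can be illustrated with a short sketch: bag-of-words features fed to Naive Bayes and a linear SVM via scikit-learn. This is not the authors' implementation, and the tiny `reviews`/`labels` lists are hypothetical stand-ins for a real movie-review corpus.

```python
# Minimal sketch: topic-style classifiers applied to sentiment polarity.
# Placeholder data only; not the corpus or feature set used in the paper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

reviews = [
    "a gripping, well acted thriller",
    "smart writing and a terrific cast",
    "dull plot and wooden dialogue",
    "a tedious, forgettable mess",
]
labels = [1, 1, 0, 0]  # 1 = positive review, 0 = negative review

# Binary unigram presence features as a simple text representation.
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(reviews)

for clf in (MultinomialNB(), LinearSVC()):
    clf.fit(X, labels)
    test = vectorizer.transform(["surprisingly funny and well acted"])
    print(type(clf).__name__, clf.predict(test))
```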

    Morphological Disambiguation from Stemming Data

    Full text link
    Morphological analysis and disambiguation is an important task and a crucial preprocessing step in natural language processing of morphologically rich languages. Kinyarwanda, a morphologically rich language, currently lacks tools for automated morphological analysis. While linguistically curated finite state tools can be easily developed for morphological analysis, the morphological richness of the language allows many ambiguous analyses to be produced, requiring effective disambiguation. In this paper, we propose learning to morphologically disambiguate Kinyarwanda verbal forms from a new stemming dataset collected through crowd-sourcing. Using feature engineering and a feed-forward neural network-based classifier, we achieve about 89% non-contextualized disambiguation accuracy. Our experiments reveal that inflectional properties of stems and morpheme association rules are the most discriminative features for disambiguation.
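    To illustrate the non-contextualized setup, the sketch below scores candidate morphological analyses with a feed-forward classifier over engineered features and keeps the highest-scoring one. The feature names, the DictVectorizer/MLPClassifier pipeline, and the toy data are illustrative assumptions, not the authors' Kinyarwanda system or dataset.

```python
# Sketch: rank candidate analyses of a verbal form with a feed-forward classifier.
# Feature names and data are hypothetical placeholders.
from sklearn.feature_extraction import DictVectorizer
from sklearn.neural_network import MLPClassifier

# Each candidate analysis is described by hand-engineered features
# (e.g., stem inflection class, presence of an object marker, tense).
train_features = [
    {"stem_class": "VI", "has_object_marker": True,  "tense": "PAST"},
    {"stem_class": "VT", "has_object_marker": False, "tense": "PRES"},
    {"stem_class": "VI", "has_object_marker": False, "tense": "PAST"},
    {"stem_class": "VT", "has_object_marker": True,  "tense": "PRES"},
]
train_labels = [1, 0, 0, 1]  # 1 = correct analysis for its form, 0 = spurious

vec = DictVectorizer()
X = vec.fit_transform(train_features)

# Small feed-forward network; one hidden layer is enough for a sketch.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, train_labels)

# At disambiguation time, score every candidate analysis of a form and keep the best.
candidates = [
    {"stem_class": "VI", "has_object_marker": True,  "tense": "PAST"},
    {"stem_class": "VT", "has_object_marker": False, "tense": "PAST"},
]
scores = clf.predict_proba(vec.transform(candidates))[:, 1]
print("best candidate index:", scores.argmax())
```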

    CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison

    Full text link
    Large, labeled datasets have driven deep learning methods to achieve expert-level performance on a variety of medical imaging tasks. We present CheXpert, a large dataset that contains 224,316 chest radiographs of 65,240 patients. We design a labeler to automatically detect the presence of 14 observations in radiology reports, capturing uncertainties inherent in radiograph interpretation. We investigate different approaches to using the uncertainty labels for training convolutional neural networks that output the probability of these observations given the available frontal and lateral radiographs. On a validation set of 200 chest radiographic studies which were manually annotated by 3 board-certified radiologists, we find that different uncertainty approaches are useful for different pathologies. We then evaluate our best model on a test set composed of 500 chest radiographic studies annotated by a consensus of 5 board-certified radiologists, and compare the performance of our model to that of 3 additional radiologists in the detection of 5 selected pathologies. On Cardiomegaly, Edema, and Pleural Effusion, the model ROC and PR curves lie above all 3 radiologist operating points. We release the dataset to the public as a standard benchmark to evaluate performance of chest radiograph interpretation models. The dataset is freely available at https://stanfordmlgroup.github.io/competitions/chexpert. Comment: Published in AAAI 2019.
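    As a rough illustration of the training setup described above, the sketch below wires a DenseNet-121 backbone to a 14-way multi-label head and applies one simple uncertainty policy (treating uncertain labels as positive) before the binary cross-entropy loss. The backbone choice, the -1/0/1 label encoding, and the random tensors standing in for radiographs are assumptions for illustration, not the released CheXpert training code.

```python
# Sketch: multi-label chest-radiograph classifier with one uncertainty policy.
# Random tensors stand in for images; labels use a hypothetical {-1, 0, 1} encoding.
import torch
import torch.nn as nn
from torchvision.models import densenet121

NUM_OBSERVATIONS = 14

# Backbone with a 14-way multi-label head; a sigmoid per observation at inference.
model = densenet121()
model.classifier = nn.Linear(model.classifier.in_features, NUM_OBSERVATIONS)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch: 2 images, labels in {1, 0, -1} = {positive, negative, uncertain}.
images = torch.randn(2, 3, 320, 320)
raw_labels = torch.randint(-1, 2, (2, NUM_OBSERVATIONS)).float()

# One possible uncertainty policy: treat uncertain (-1) as positive before the loss.
targets = torch.where(raw_labels < 0, torch.ones_like(raw_labels), raw_labels)

logits = model(images)
loss = criterion(logits, targets)
loss.backward()
optimizer.step()

probs = torch.sigmoid(logits)  # probability of each of the 14 observations
print(probs.shape)
```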

    The impact of pretrained language models on negation and speculation detection in cross-lingual medical text: Comparative study

    Get PDF
    Background: Negation and speculation are critical elements in natural language processing (NLP)-related tasks, such as information extraction, as these phenomena change the truth value of a proposition. In the clinical narrative, which is informal, these linguistic phenomena are used extensively to indicate hypotheses, impressions, or negative findings. Previous state-of-the-art approaches addressed negation and speculation detection using rule-based methods, but in the last few years, models based on machine learning and deep learning exploiting morphological, syntactic, and semantic features represented as sparse and dense vectors have emerged. However, although such named entity recognition (NER) methods employ a broad set of features, they are limited to existing pretrained models for a specific domain or language. Objective: As a fundamental subsystem of any information extraction pipeline, a system for cross-lingual and domain-independent negation and speculation detection was introduced, with special focus on the biomedical scientific literature and clinical narrative. In this work, detection of negation and speculation was treated as a sequence-labeling task in which cues and the scopes of both phenomena are recognized as a sequence of nested labels in a single step. Methods: We proposed the following two approaches for negation and speculation detection: (1) bidirectional long short-term memory (Bi-LSTM) and conditional random field using character, word, and sense embeddings to deal with the extraction of semantic, syntactic, and contextual patterns and (2) bidirectional encoder representations from transformers (BERT) with fine-tuning for NER. Results: The approach was evaluated for English and Spanish on biomedical and review text, specifically the BioScope corpus, the IULA corpus, and the SFU Spanish Review corpus, with F-measures of 86.6%, 85.0%, and 88.1%, respectively, for NeuroNER and 86.4%, 80.8%, and 91.7%, respectively, for BERT. Conclusions: These results show that these architectures perform considerably better than previous rule-based and conventional machine learning-based systems. Moreover, our analysis shows that pretrained word embeddings, and particularly contextualized embeddings for biomedical corpora, help in understanding the complexities inherent to biomedical text. This work was supported by the Research Program of the Ministry of Economy and Competitiveness, Government of Spain (DeepEMR Project TIN2017-87548-C2-1-R).
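    The second approach mentioned above (BERT with fine-tuning for NER) can be sketched as token classification with the Hugging Face transformers library. The tag set, the multilingual checkpoint name, and the toy input below are illustrative assumptions, not the authors' exact configuration or corpora.

```python
# Sketch: negation/speculation detection cast as token classification with BERT.
# Tag set and checkpoint are hypothetical choices for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical flat tag set covering cues and scopes for both phenomena.
tags = ["O", "B-NEG_CUE", "I-NEG_CUE", "B-NEG_SCOPE", "I-NEG_SCOPE",
        "B-SPEC_CUE", "I-SPEC_CUE", "B-SPEC_SCOPE", "I-SPEC_SCOPE"]

checkpoint = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=len(tags))

# Toy input; labels set to all "O" here purely to exercise the loss computation.
encoding = tokenizer("No evidence of pneumonia", return_tensors="pt")
labels = torch.zeros_like(encoding["input_ids"])

outputs = model(**encoding, labels=labels)
outputs.loss.backward()  # a fine-tuning step would follow with an optimizer

predictions = outputs.logits.argmax(dim=-1)
print(predictions.shape)  # (1, sequence_length): one tag index per wordpiece
```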

    Sentiment Classification Using a Sense Enriched Lexicon-based Approach

    Get PDF
    The prominent approach in sentiment polarity classification is the lexicon-based approach, which relies on a dictionary to assign scores to subjective words. Most existing work uses the score of the most dominant sense in this process instead of the contextually appropriate sense. The use of Word Sense Disambiguation (WSD) has been less investigated in sentiment classification tasks. This paper investigates the effect of integrating WSD into a lexicon-based approach for sentiment polarity classification and compares it with existing lexicon-based approaches and state-of-the-art supervised approaches. The lexicon used in this work is SentiWordNet v2.0. The proposed approach, called the Sense Enriched Lexicon-based Approach (SELSA), uses a word sense disambiguation module to identify the correct sense of subjective words. Instead of using the score of the most frequent sense, it uses the score of the contextually appropriate sense only. For comparison with supervised approaches, the authors investigate Naïve Bayes (NB) and Support Vector Machine (SVM) classifiers, which have performed well in earlier research. The performance of these classifiers is evaluated using Word2vec, Hashing Vectorizer, and bi-gram features, and the best-performing classifier-feature combination is used for comparison. All evaluations are done on the Movie Review dataset. SELSA achieves an accuracy of 96.25%, which is significantly better than the accuracy obtained by a SentiWordNet-based approach without WSD on the same dataset. The performance of the proposed algorithm is also compared with the best-performing supervised classifier investigated in this work and with earlier reported works on the same dataset. The results reveal that the SVM classifier performs better than the SentiWordNet-based approach without WSD. However, after incorporating WSD, the performance of the proposed lexicon-based approach improves significantly and surpasses the best-performing supervised classifier (SVM with bi-gram features).
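    The core idea, scoring each subjective word with the SentiWordNet entry of its contextually disambiguated sense rather than its most frequent sense, can be sketched with NLTK's SentiWordNet interface and simplified Lesk as a stand-in WSD module. The paper's own lexicon version (SentiWordNet v2.0) and disambiguation component are not reproduced here.

```python
# Sketch: lexicon-based polarity with WSD, using NLTK's SentiWordNet and Lesk
# as stand-ins for the components described in the paper.
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import sentiwordnet as swn, wordnet as wn
from nltk.wsd import lesk

for resource in ("punkt", "punkt_tab", "averaged_perceptron_tagger",
                 "averaged_perceptron_tagger_eng", "wordnet", "sentiwordnet"):
    nltk.download(resource, quiet=True)

PENN_TO_WN = {"J": wn.ADJ, "N": wn.NOUN, "V": wn.VERB, "R": wn.ADV}

def sentence_polarity(sentence):
    tokens = word_tokenize(sentence)
    score = 0.0
    for word, tag in pos_tag(tokens):
        wn_pos = PENN_TO_WN.get(tag[0])
        if wn_pos is None:
            continue
        # WSD step: pick the sense that fits the sentence context, not the first sense.
        sense = lesk(tokens, word, pos=wn_pos)
        if sense is None:
            continue
        senti = swn.senti_synset(sense.name())
        score += senti.pos_score() - senti.neg_score()
    return "positive" if score >= 0 else "negative"

print(sentence_polarity("The plot was dull but the acting was brilliant."))
```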