
    Tension Analysis in Survivor Interviews: A Computational Approach

    Tension is an emotional experience that can occur in different contexts. It can originate from a conflict of interest or from uneasiness during an interview. In some contexts, such experiences are associated with negative emotions such as fear or distress, and people tend to adopt hedging strategies in these situations to avoid criticism or evade questions. In this thesis, we analyze several survivor interview transcripts to determine the characteristics that play crucial roles in tension situations. We discuss the key components of tension experiences and propose a natural language processing model that combines these components to identify tension points in text-based oral history interviews. We validate the efficacy of the model and its components through experiments on standard datasets. The model provides a framework for future research on tension phenomena in oral history interviews.
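
    To make the component-combination idea above concrete, here is a minimal, hypothetical Python sketch: it scores each interview sentence on hedging-cue density and negative-emotion vocabulary and flags sentences where both components fire. The lexicons, thresholds, and function names are illustrative assumptions, not the thesis's actual model.

```python
# Hypothetical sketch only: lexicons, thresholds, and names are
# illustrative assumptions, not the model proposed in the thesis.

HEDGE_CUES = {"maybe", "perhaps", "possibly", "might", "somewhat", "suppose"}
NEGATIVE_EMOTIONS = {"fear", "afraid", "distress", "worried", "anxious"}

def component_scores(sentence: str) -> tuple[float, float]:
    """Return (hedging-cue density, negative-emotion density) for a sentence."""
    tokens = [t.strip(".,!?") for t in sentence.lower().split()]
    if not tokens:
        return 0.0, 0.0
    hedge = sum(t in HEDGE_CUES for t in tokens) / len(tokens)
    emotion = sum(t in NEGATIVE_EMOTIONS for t in tokens) / len(tokens)
    return hedge, emotion

def is_tension_point(sentence: str, hedge_min: float = 0.05,
                     emotion_min: float = 0.05) -> bool:
    """Flag a sentence as a candidate tension point when both components fire."""
    hedge, emotion = component_scores(sentence)
    return hedge >= hedge_min and emotion >= emotion_min

print(is_tension_point("I suppose I was afraid, maybe, I do not remember"))  # True
```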

    Order in NP conjuncts in spoken English and Japanese

    In the emerging field of cross-linguistic studies on language production, one particularly interesting line of inquiry concerns possible differences between English and Japanese in the ordering of words and phrases. Previous research suggests that the two languages differ in whether meaning or form is accessed during linearization, an assumption based on observations of language-specific effects of length on phrase order (short-before-long in English, long-before-short in Japanese). We contribute to the cross-linguistic exploration of such differences by investigating the variables underlying the internal order of NP conjuncts in spoken English and Japanese. Our quantitative analysis shows that similar influences underlie the ordering process in both languages; thus, we find no evidence for the aforementioned difference in accessing meaning versus form in this syntactic phenomenon. With regard to length, Japanese also exhibits a short-before-long preference. However, this tendency is significantly weaker in Japanese than in English, which we explain through an attenuating influence of the typical Japanese phrase structure pattern on the universal effect of short phrases being more accessible. We propose that a similar interaction between entrenched long-before-short schemas and universal accessibility effects is responsible for the varying effects of length in Japanese.
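
    The length analysis described above can be illustrated with a short sketch: for each attested conjunct pair, check whether the shorter conjunct comes first, and compare the resulting short-before-long rate across the two languages. The data below are invented for illustration, and length is measured in words here; the study itself may use a different length metric.

```python
# Illustrative sketch: invented data, word-count length metric.

def short_before_long_rate(conjunct_pairs: list[tuple[str, str]]) -> float:
    """Fraction of conjunct pairs whose first conjunct is the shorter one
    (length in words; equal-length pairs are excluded)."""
    decided = [(a, b) for a, b in conjunct_pairs
               if len(a.split()) != len(b.split())]
    if not decided:
        return 0.0
    hits = sum(len(a.split()) < len(b.split()) for a, b in decided)
    return hits / len(decided)

english_pairs = [("salt", "black pepper"), ("cats", "very small dogs"),
                 ("a long and winding road", "a car")]
print(short_before_long_rate(english_pairs))  # 2/3 on this toy sample
```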

    Predicting speculation: a simple disambiguation approach to hedge detection in biomedical literature

    Background: This paper presents a novel approach to the problem of hedge detection, which involves identifying so-called hedge cues for labeling sentences as certain or uncertain. This is the classification problem for Task 1 of the CoNLL-2010 Shared Task, which focuses on hedging in the biomedical domain. We propose to view hedge detection as a simple disambiguation problem, restricted to words that have previously been observed as hedge cues. As the feature space for the classifier is still very large, we also perform experiments with dimensionality reduction using the method of random indexing. Results: The SVM-based classifiers developed in this paper achieve the best published results so far for sentence-level uncertainty prediction on the CoNLL-2010 Shared Task test data. We also show that random indexing can successfully reduce the dimensionality of the original feature space by several orders of magnitude without sacrificing classifier performance. Conclusions: This paper introduces a simplified approach to detecting speculation or uncertainty in text, focusing on the biomedical domain. Evaluated at the sentence level, our SVM-based classifiers achieve the best published results so far. We also show that the feature space can be aggressively compressed using random indexing while maintaining comparable classifier performance.
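
    As an illustration of the two ingredients named above, the sketch below restricts classification to sentences containing a previously observed cue word, and compresses a bag-of-words feature space with scikit-learn's SparseRandomProjection (a close relative of random indexing) before training a linear SVM. The cue list and toy sentences are assumptions for demonstration, not the CoNLL-2010 data or the paper's exact pipeline.

```python
# Toy illustration, not the paper's exact pipeline or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import SparseRandomProjection
from sklearn.svm import LinearSVC

KNOWN_CUES = {"may", "might", "suggest", "possible", "appear"}

sentences = [
    "These results suggest a possible regulatory role.",
    "The protein binds DNA.",
    "This may indicate an interaction.",
    "The gene is located on chromosome seven.",
]
labels = [1, 0, 1, 0]  # 1 = uncertain, 0 = certain

# Bag of words -> low-dimensional random projection -> linear SVM.
clf = make_pipeline(
    CountVectorizer(),
    SparseRandomProjection(n_components=8, dense_output=True, random_state=0),
    LinearSVC(),
)
clf.fit(sentences, labels)

def predict_uncertain(sentence: str) -> int:
    """Disambiguation step: sentences without a previously observed cue
    are labeled certain outright; only cue-bearing sentences reach the SVM."""
    if not any(tok in KNOWN_CUES for tok in sentence.lower().split()):
        return 0
    return int(clf.predict([sentence])[0])
```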

    Recognizing speculative language in research texts

    This thesis studies the use of sequential supervised learning methods on two tasks related to the detection of hedging in scientific articles: hedge cue identification and hedge cue scope detection. Both tasks are addressed with a learning methodology that uses an iterative, error-based approach to improve classification performance, incorporating expert knowledge into the learning process through knowledge rules. Results are promising: for the first task, we improved baseline results by 2.5 points of F-score by incorporating cue co-occurrence information, while for scope detection, incorporating syntactic information and rules for syntactic scope pruning improved classification performance from an F-score of 0.712 to 0.835. Compared with state-of-the-art methods, these results are very competitive, suggesting that the approach of improving classifiers based solely on the errors committed on a held-out corpus could be successfully applied to other, similar tasks. Additionally, this thesis presents a class schema that represents sentence analyses in a single structure, including the results of different linguistic analyses. This allows us to better manage the iterative process of classifier improvement, where a different attribute set for learning is used in each iteration. We also propose storing attributes in a relational model, instead of the traditional text-based structures, to facilitate the analysis and manipulation of learning data.
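
    A minimal sketch of the cue-identification task as sequential token classification with BIO tags, plus one toy knowledge rule of the kind the thesis injects, might look as follows. The features, training pairs, and rule are illustrative assumptions, not the thesis's actual setup.

```python
# Illustrative sketch: features, data, and the knowledge rule are assumptions.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens, i):
    """Simple contextual features for the token at position i."""
    return {
        "word": tokens[i].lower(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
        "suffix3": tokens[i][-3:].lower(),
    }

train = [
    (["These", "results", "may", "indicate", "binding"],
     ["O", "O", "B-CUE", "O", "O"]),
    (["We", "suggest", "that", "it", "interacts"],
     ["O", "B-CUE", "O", "O", "O"]),
]

X = [token_features(toks, i) for toks, _ in train for i in range(len(toks))]
y = [tag for _, tags in train for tag in tags]

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, y)

def tag_sentence(tokens):
    """Predict BIO cue tags, then apply a toy knowledge rule."""
    tags = list(clf.predict([token_features(tokens, i) for i in range(len(tokens))]))
    # Toy rule: a hedging adverb right after a predicted cue joins the cue span.
    for i in range(1, len(tags)):
        if tags[i - 1].endswith("-CUE") and tokens[i].lower() in {"perhaps", "possibly"}:
            tags[i] = "I-CUE"
    return tags
```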

    A unified framework to identify and extract uncertainty cues, holders, and scopes in one fell swoop

    Uncertainty refers to the aspects of language that express hypotheses and speculations, where propositions are held as (un)certain, (im)probable, or (im)possible. Automatic uncertainty analysis is crucial for several Natural Language Processing (NLP) applications that need to distinguish between factual (i.e., certain) and nonfactual (i.e., negated or uncertain) information. Typically, a comprehensive automatic uncertainty analyzer has three machine learning models, for uncertainty detection, attribution, and scope extraction. To date, and to the best of my knowledge, research on automatic uncertainty analysis has focused only on uncertainty detection and scope extraction, and has typically tackled each task with a different machine learning approach. Furthermore, it has been restricted to specific languages, particularly English, and to specific linguistic genres, including biomedical and newswire texts, Wikipedia articles, and product reviews. In this research project, I attempt to address these limitations. First, I develop a machine learning model for uncertainty attribution, the task typically neglected in automatic uncertainty analysis. Second, I propose a unified framework to identify and extract uncertainty cues, holders, and scopes in one fell swoop by casting each task as a supervised token sequence labeling problem. Third, I work on the Arabic language, in contrast to English, the most commonly studied language in the literature. Finally, I work on the understudied linguistic genre of tweets. This research project results in a novel NLP tool, a comprehensive automatic uncertainty analyzer for Arabic tweets, with practical impact on NLP applications that rely on automatic uncertainty analysis. The tool yields an F1 score of 0.759, averaged across its three machine learning models. Furthermore, through this research, the research community and I gain insights into (1) the challenges presented by Arabic, an agglutinative, morphologically rich language with flexible word order, in contrast to English; (2) the challenges that the linguistic genre of tweets poses for automatic uncertainty analysis; and (3) the types of challenges that my proposed unified framework successfully addresses and the performance gains it brings.
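
    The "one fell swoop" framing can be illustrated by encoding cues, holders, and scopes in a single BIO tag set, so that one sequence labeler predicts all three layers at once. The example below is a hypothetical sketch (in English for readability, whereas the project targets Arabic tweets), showing the unified tag set and a decoder that recovers the spans.

```python
# Hypothetical sketch of a unified BIO label space; the example sentence,
# tags, and decoder are illustrative, not the project's actual model.

UNIFIED_TAGS = ["O",
                "B-CUE", "I-CUE",
                "B-HOLDER", "I-HOLDER",
                "B-SCOPE", "I-SCOPE"]

tokens = ["Ahmed", "thinks", "the", "match", "was", "postponed"]
tags   = ["B-HOLDER", "B-CUE", "B-SCOPE", "I-SCOPE", "I-SCOPE", "I-SCOPE"]

def decode(tokens, tags):
    """Group BIO tags back into (label, span-text) pairs."""
    spans, cur_label, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_toks.append(tok)
        else:
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = None, []
    if cur_label:
        spans.append((cur_label, " ".join(cur_toks)))
    return spans

print(decode(tokens, tags))
# [('HOLDER', 'Ahmed'), ('CUE', 'thinks'), ('SCOPE', 'the match was postponed')]
```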

    Making decisions based on context: models and applications in cognitive sciences and natural language processing

    It is known that humans are capable of making decisions based on context and of generalizing what they have learned. This dissertation considers two related problem areas and proposes models that take context information into account; by including context, the proposed models exhibit strong performance in each of the problem areas considered. The first problem area focuses on a context association task studied in cognitive science, which evaluates the ability of a learning agent to associate specific stimuli with an appropriate response in particular spatial contexts. Four neural circuit models are proposed to model how stimulus and context information are processed to produce a response. The neural networks are trained by modifying the strength of neural connections (weights) using principles of Hebbian learning. Such learning is considered biologically plausible, in contrast to backpropagation techniques, which lack a solid neurophysiological basis. A series of theoretical results is established for the neural circuit models, guaranteeing convergence to an optimal configuration when all stimulus-context pairs are provided during training. Among the models, one based on ideas from recommender systems and trained with a primal-dual update rule achieves perfect performance in learning and generalizing the mapping from context-stimulus pairs to correct responses. The second problem area focuses on clinical natural language processing (NLP), in particular the development of deep learning models for analyzing radiology reports. Four NLP tasks are considered: anatomy named entity recognition, negation detection, incidental finding detection, and clinical concept extraction. A hierarchical Recurrent Neural Network (RNN) is proposed for anatomy named entity recognition and is then used to produce features for incidental finding detection of pulmonary nodules. A clinical context word embedding model is obtained and used with an RNN for clinical concept extraction. Finally, feature-enriched RNN and transformer-based models with contextual word embeddings are proposed for negation detection. All these models take (clinical) context information into account. Evaluated on different datasets, the models achieve strong performance, largely outperforming the state of the art.
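
    A minimal sketch of the Hebbian training principle mentioned above: strengthen connections between co-active (context, stimulus) inputs and the clamped correct response, with no backpropagation. The dimensions, input encoding, and learning rate below are illustrative assumptions, not the dissertation's actual circuit models.

```python
# Illustrative Hebbian sketch; dimensions and encoding are assumptions.
import numpy as np

n_inputs, n_responses = 8, 3
W = np.zeros((n_responses, n_inputs))  # response-by-input weight matrix

def hebbian_step(W, x, correct_response, lr=0.1):
    """Outer-product Hebbian update: strengthen weights between active
    inputs and the clamped correct response; no error backpropagation."""
    y = np.zeros(W.shape[0])
    y[correct_response] = 1.0  # clamp the correct response during training
    W += lr * np.outer(y, x)   # in-place, so the caller's W is updated

def respond(W, x):
    """After training, the response is the most strongly driven unit."""
    return int(np.argmax(W @ x))

# One training pair: first half of x encodes context, second half the stimulus.
x = np.zeros(n_inputs)
x[[1, 5]] = 1.0
for _ in range(5):
    hebbian_step(W, x, correct_response=2)
assert respond(W, x) == 2
```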