
    Using Description Logics for Recognising Textual Entailment

    The aim of this paper is to show how the Recognising Textual Entailment (RTE) task can be handled using Description Logics (DLs). To do this, we propose a representation of natural language semantics in DLs inspired by existing representations in first-order logic. Our most significant contribution, however, is the definition of two novel inference tasks, A-Box saturation and subgraph detection, which are crucial for our approach to RTE.
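    The two inference services are only named in the abstract, so the following is a minimal, hypothetical sketch of the subgraph-detection intuition: text and hypothesis are both represented as ABoxes, and entailment is approximated by searching for a mapping of hypothesis individuals onto text individuals that preserves every concept and role assertion. Saturation (e.g. adding Person(a) from Man(a) and Man ⊑ Person) is assumed to have been applied to the text ABox beforehand; all function and individual names are illustrative, not the paper's actual API.

```python
# Minimal sketch of subgraph detection over DL ABoxes for RTE.
# Concept assertions are (Concept, individual) pairs; role assertions are
# (role, subject, object) triples.  The hypothesis is "detected" in the
# saturated text ABox if its individuals can be mapped onto text
# individuals so that every hypothesis assertion is preserved.
from itertools import permutations


def individuals(concepts, roles):
    """Collect every individual mentioned in the assertions."""
    inds = {x for (_, x) in concepts}
    for (_, x, y) in roles:
        inds.update((x, y))
    return inds


def subgraph_detect(text_concepts, text_roles, hyp_concepts, hyp_roles):
    """Try to embed the hypothesis ABox into the (saturated) text ABox."""
    t_inds = sorted(individuals(text_concepts, text_roles))
    h_inds = sorted(individuals(hyp_concepts, hyp_roles))
    for image in permutations(t_inds, len(h_inds)):
        mapping = dict(zip(h_inds, image))
        ok_concepts = all((c, mapping[x]) in text_concepts
                          for (c, x) in hyp_concepts)
        ok_roles = all((r, mapping[x], mapping[y]) in text_roles
                       for (r, x, y) in hyp_roles)
        if ok_concepts and ok_roles:
            return mapping
    return None


# "A man is sleeping" entails "A person is sleeping", assuming saturation
# already added Person(a) to the text ABox from Man(a) and Man ⊑ Person.
text_concepts = {("Man", "a"), ("Person", "a"), ("Sleep", "e")}
text_roles = {("agent", "e", "a")}
hyp_concepts = {("Person", "x"), ("Sleep", "v")}
hyp_roles = {("agent", "v", "x")}
print(subgraph_detect(text_concepts, text_roles, hyp_concepts, hyp_roles))
# -> {'v': 'e', 'x': 'a'}
```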

    A Knowledge-Based Model for Polarity Shifters

    [EN] Polarity shifting can be considered one of the most challenging problems in the context of Sentiment Analysis. Polarity shifters, also known as contextual valence shifters (Polanyi and Zaenen 2004), are treated as linguistic contextual items that can increase, reduce or neutralise the prior polarity of a word, called the focus, included in an opinion. The automatic detection of such items enhances the performance and accuracy of computational systems for opinion mining, but this challenge remains open, mainly for languages other than English. From a symbolic approach, we aim to advance the automatic processing of the polarity shifters that affect the opinions expressed in tweets, both in English and Spanish. To this end, we describe a novel knowledge-based model to deal with three dimensions of contextual shifters: negation, quantification, and modality (or irrealis). This work is part of the project grant PID2020-112827GB-I00, funded by MCIN/AEI/10.13039/501100011033, and the SMARTLAGOON project [101017861], funded by Horizon 2020 - European Union Framework Programme for Research and Innovation. Blázquez-López, Y. (2022). A Knowledge-Based Model for Polarity Shifters. Journal of Computer-Assisted Linguistic Research 6:87-107. https://doi.org/10.4995/jclr.2022.18807
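    The model itself is only summarised above, so here is a small, hypothetical sketch of how a knowledge-based shifter component of this kind might operate: the focus word gets a prior polarity from a lexicon, and negation, quantification, and modality cues in its context flip, scale, or neutralise that value. The lexicons and the context-window heuristic are invented for illustration and are not the paper's resources.

```python
# Hypothetical sketch of a rule-based polarity-shifter component.
# A focus word carries a prior polarity from a lexicon; negators flip it,
# quantifiers scale it, and modality/irrealis cues neutralise it.
PRIOR = {"good": 1.0, "great": 1.5, "bad": -1.0, "terrible": -1.5}
NEGATORS = {"not", "never", "no"}
QUANTIFIERS = {"very": 1.5, "extremely": 2.0, "slightly": 0.5, "barely": 0.3}
MODALS = {"might", "could", "would", "if"}   # irrealis cues


def shifted_polarity(tokens, focus_index, window=3):
    """Adjust the prior polarity of tokens[focus_index] using nearby shifters."""
    polarity = PRIOR.get(tokens[focus_index].lower(), 0.0)
    context = tokens[max(0, focus_index - window):focus_index]
    for tok in (t.lower() for t in context):
        if tok in MODALS:
            return 0.0                      # irrealis: suspend the opinion
        if tok in QUANTIFIERS:
            polarity *= QUANTIFIERS[tok]    # intensify or attenuate
        if tok in NEGATORS:
            polarity = -polarity            # flip
    return polarity


tweet = "the camera is not very good".split()
print(shifted_polarity(tweet, tweet.index("good")))   # -1.5: negated, intensified
```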

    An NLP Analysis of Health Advice Giving in the Medical Research Literature

    Health advice – clinical and policy recommendations – plays a vital role in guiding medical practices and public health policies. Whether or not authors should give health advice in medical research publications is a controversial issue. The proponents of actionable research advocate for the more efficient and effective transmission of scientific evidence into practice. The opponents are concerned about the quality of health advice in individual research papers, especially in observational studies. Arguments both for and against giving advice in individual studies indicate a strong need for identifying and accessing health advice, whether for practical use or for quality evaluation. However, current information services do not support the direct retrieval of health advice, and, compared to other natural language processing (NLP) applications, health advice has not been computationally modeled as a language construct. A new information service for directly accessing health advice could reduce information barriers and provide external assessment in science communication.
    This dissertation built an annotated corpus of scientific claims that distinguishes health advice according to its occurrence and strength, and developed NLP-based prediction models to identify health advice in the PubMed literature. Using the annotated corpus and prediction models, the study answered research questions about the practice of advice giving in the medical research literature. To test and demonstrate the potential use of the prediction model, it was applied to retrieve health advice regarding the use of hydroxychloroquine (HCQ) as a treatment for COVID-19 from LitCovid, a large COVID-19 research literature database curated by the National Institutes of Health. An evaluation of sentences extracted from both abstracts and discussions showed that BERT-based pre-trained language models performed well at detecting health advice. The health advice prediction model may be combined with existing health information services to provide more convenient navigation of a large volume of health literature.
    Findings from the study also show that researchers are careful not to give advice solely in abstracts, and that they tend to give weaker and less specific advice in abstracts than in discussions. In addition, the study found that health advice has appeared consistently in the abstracts of observational studies over the past 25 years. In the sample, 41.2% of the studies offered health advice in their conclusions, which is lower than earlier estimates based on analyses of much smaller, manually processed samples. In the abstracts of observational studies, lower-impact journals are more likely to give health advice than higher-impact ones, suggesting the significance of journals' role as gatekeepers of science communication.
    For the natural language processing, information science, and public health communities, this work advances knowledge of the automated recognition of health advice in the scientific literature. The corpus and code developed for the study have been made publicly available to facilitate future efforts in health advice retrieval and analysis. Furthermore, the study discusses the ways in which researchers give health advice in medical research articles, knowledge of which could be an essential step towards curbing potential exaggeration in global science communication. It also contributes to ongoing discussions of the integrity of scientific output. The study calls for caution in advice giving in the medical research literature, especially in abstracts alone, and for open access to medical research publications, so that health researchers and practitioners can fully review the advice in scientific outputs and its implications. Journal editors and reviewers, given their gatekeeping role in science communication, need more evaluative strategies to increase the overall quality of health advice in research articles.
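    The dissertation itself does not include code here; the snippet below is a hedged sketch of how a fine-tuned BERT sentence classifier of the kind described could be applied with the Hugging Face transformers pipeline. The checkpoint name and the label set (none / weak / strong advice) are placeholders standing in for the released model and its occurrence/strength scheme.

```python
from transformers import pipeline

# "my-org/health-advice-bert" is a placeholder for a BERT model fine-tuned on
# an advice-strength corpus like the one described above; it is not a real
# released checkpoint.
classifier = pipeline("text-classification", model="my-org/health-advice-bert")

sentences = [
    "Hydroxychloroquine should not be used for COVID-19 outside clinical trials.",
    "We observed a modest association between dose and outcome.",
]

for sentence, result in zip(sentences, classifier(sentences)):
    # Each result is a dict such as {"label": "strong_advice", "score": 0.97}.
    print(f"{result['label']:>14}  {result['score']:.2f}  {sentence}")
```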

    Negative Statements Considered Useful

    Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs store only positive information and abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements that are not true. Negative statements would be important for overcoming current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches to compiling negative statements. (i) In peer-based statistical inference, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In query-log-based text extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.1M statements for 100K popular Wikidata entities.
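    As a rough illustration of the peer-based statistical inference, the sketch below collects statements held by an entity's highly related peers but absent for the entity itself and ranks them by peer frequency. The toy knowledge base, peer list, and ranking score are illustrative simplifications, not the paper's features or data.

```python
# Peer-based inference: statements that most peers share but the target
# entity lacks become candidate negative statements, ranked by frequency.
from collections import Counter


def candidate_negations(entity, kb, peers):
    """Rank (property, value) statements that peers have but `entity` lacks."""
    own = set(kb[entity])
    counts = Counter(stmt for peer in peers for stmt in kb[peer] if stmt not in own)
    return [(stmt, n / len(peers)) for stmt, n in counts.most_common()]


# Toy knowledge base: each entity maps to a set of (property, value) statements.
kb = {
    "Stephen_Hawking": {("occupation", "physicist"), ("educated_at", "Oxford")},
    "Albert_Einstein": {("occupation", "physicist"), ("award", "Nobel_Prize")},
    "Richard_Feynman": {("occupation", "physicist"), ("award", "Nobel_Prize")},
    "Paul_Dirac":      {("occupation", "physicist"), ("award", "Nobel_Prize")},
}

peers = ["Albert_Einstein", "Richard_Feynman", "Paul_Dirac"]  # highly related peers
for statement, score in candidate_negations("Stephen_Hawking", kb, peers):
    print(f"{score:.2f}  NOT {statement}")   # 1.00  NOT ('award', 'Nobel_Prize')
```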

    Sentiment Analysis: An Overview from Linguistics

    Sentiment analysis is a growing field at the intersection of linguistics and computer science that attempts to automatically determine the sentiment, or positive/negative opinion, contained in text. Sentiment can be characterized as positive or negative evaluation expressed through language. Common applications of sentiment analysis include automatically determining whether a review posted online (of a movie, a book, or a consumer product) is positive or negative towards the item being reviewed. Sentiment analysis is now a common tool in the repertoire of social media analysis carried out by companies, marketers and political analysts. Research on sentiment analysis extracts information from positive and negative words in text, from the context of those words, and from the linguistic structure of the text. This brief survey examines in particular the contributions that linguistic knowledge can make to the problem of automatically determining sentiment.

    DEEPEN: A negation detection system for clinical text incorporating dependency relation into NegEx

    In Electronic Health Records (EHRs), much of the valuable information regarding patients’ conditions is embedded in free-text format. Natural language processing (NLP) techniques have been developed to extract clinical information from free text. One challenge in clinical NLP is that the meaning of clinical entities is heavily affected by modifiers such as negation. The negation detection algorithm NegEx applies a simple approach that has been shown to be powerful in clinical NLP. However, because it does not consider the contextual relationships between words within a sentence, NegEx fails to correctly capture the negation status of concepts in complex sentences. Incorrect negation assignment could cause inaccurate diagnosis of patients’ conditions or contaminated study cohorts. We developed a negation algorithm called DEEPEN to decrease NegEx’s false positives by taking into account the dependency relationships between negation words and concepts within a sentence, using the Stanford dependency parser. The system was developed and tested using EHR data from Indiana University (IU), and it was further evaluated on a Mayo Clinic dataset to assess its generalizability. The evaluation results demonstrate that DEEPEN, which incorporates dependency parsing into NegEx, can reduce the number of incorrect negation assignments for patients with positive findings and therefore improve the identification of patients with the target clinical findings in EHRs.
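    DEEPEN itself builds on the Stanford dependency parser and a richer rule set; the sketch below uses spaCy as a stand-in to show the core idea, marking a concept as negated only when a negation cue attaches to the concept head or one of its syntactic governors rather than merely preceding it in the surface string. The sentences, hand-picked spans, and governor heuristic are illustrative simplifications, and the expected outputs assume the parses produced by en_core_web_sm.

```python
import spacy

nlp = spacy.load("en_core_web_sm")


def is_negated(doc, concept_span):
    """Mark the concept negated if a negation cue attaches to its head
    or to one of the head's ancestors in the dependency graph."""
    head = concept_span.root
    governors = {head} | set(head.ancestors)
    return any(tok.dep_ == "neg" and tok.head in governors for tok in doc)


doc = nlp("The scan does not show pneumonia. "
          "The patient, who does not smoke, reports chest pain.")

pneumonia = doc[5:6]      # "pneumonia"   (hand-picked spans; a real pipeline
chest_pain = doc[16:18]   # "chest pain"   would take these from an NER step)

print(is_negated(doc, pneumonia))    # True:  "not" governs "show" -> "pneumonia"
print(is_negated(doc, chest_pain))   # False: "not" sits inside the relative
                                     # clause, off the path to "chest pain"
```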

    The impact of pretrained language models on negation and speculation detection in cross-lingual medical text: Comparative study

    Background: Negation and speculation are critical elements in natural language processing (NLP) tasks such as information extraction, as these phenomena change the truth value of a proposition. In informal clinical narrative, these linguistic phenomena are used extensively to indicate hypotheses, impressions, or negative findings. Previous state-of-the-art approaches addressed negation and speculation detection with rule-based methods, but in the last few years, models based on machine learning and deep learning exploiting morphological, syntactic, and semantic features represented as sparse and dense vectors have emerged. However, although such named entity recognition (NER) methods employ a broad set of features, they are limited to existing pretrained models for a specific domain or language. Objective: As a fundamental subsystem of any information extraction pipeline, a system for cross-lingual and domain-independent negation and speculation detection was introduced, with special focus on the biomedical scientific literature and clinical narrative. In this work, detection of negation and speculation was treated as a sequence-labeling task in which the cues and scopes of both phenomena are recognized as a sequence of nested labels in a single step. Methods: We proposed two approaches for negation and speculation detection: (1) a bidirectional long short-term memory (Bi-LSTM) network with a conditional random field, using character, word, and sense embeddings to capture semantic, syntactic, and contextual patterns, and (2) bidirectional encoder representations from transformers (BERT) fine-tuned for NER. Results: The approach was evaluated for English and Spanish on biomedical and review text, specifically the BioScope corpus, the IULA corpus, and the SFU Spanish Review corpus, with F-measures of 86.6%, 85.0%, and 88.1%, respectively, for NeuroNER and 86.4%, 80.8%, and 91.7%, respectively, for BERT. Conclusions: These results show that these architectures perform considerably better than previous rule-based and conventional machine learning-based systems. Moreover, our analysis shows that pretrained word embeddings, and particularly contextualized embeddings for biomedical corpora, help capture the complexities inherent to biomedical text. This work was supported by the Research Program of the Ministry of Economy and Competitiveness, Government of Spain (DeepEMR Project TIN2017-87548-C2-1-R).
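    As a hedged sketch of the single-step nested-label formulation with BERT, the snippet below sets up a transformers token-classification head over a combined cue/scope label set for negation and speculation. The label scheme and base checkpoint are assumptions, and the classification head is freshly initialised, so real use would require fine-tuning on BioScope/IULA/SFU-style annotations first.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Assumed combined label scheme: cue and scope tags for both phenomena.
LABELS = ["O",
          "B-NEG_CUE", "I-NEG_CUE", "B-NEG_SCOPE", "I-NEG_SCOPE",
          "B-SPEC_CUE", "I-SPEC_CUE", "B-SPEC_SCOPE", "I-SPEC_SCOPE"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
    label2id={label: i for i, label in enumerate(LABELS)},
)

sentence = "No evidence of infection was found, which may suggest recovery."
encoding = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():                     # head is untrained: predictions are
    logits = model(**encoding).logits     # random until the model is fine-tuned

predicted = [LABELS[i] for i in logits.argmax(-1)[0].tolist()]
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
for token, label in zip(tokens, predicted):
    print(f"{token:>12}  {label}")        # fine-tune on cue/scope annotations
                                          # before trusting these labels
```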

    Modality and Negation in Event Extraction


    Enhancing automatic extraction of biomedical relations using different linguistic features extracted from text

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 08-06-2017. Extracting biomedical relations from texts is a relatively new but rapidly growing research field in natural language processing (NLP). Due to the increasing number of biomedical research publications and the key role of databases of biomedical relations in biological and medical research, extracting biomedical relations from scientific articles and text resources is of utmost importance. Drug-drug interactions (DDIs) are, in particular, a widespread concern in medicine, and thus extracting this kind of interaction automatically from texts is in high demand in BioNLP. A drug-drug interaction usually occurs when one drug alters the activity level of another drug. According to reports prepared by the U.S. Food and Drug Administration (FDA) and other acknowledged studies [1], over 2 million life-threatening DDIs occur in the United States every year. Many academic researchers and pharmaceutical companies have developed relational and structural databases where DDIs are recorded. Nevertheless, the most up-to-date and valuable information is still found only in unstructured text documents, including scientific publications and technical reports. In this thesis, three complementary, linguistically driven feature sets are studied: negation, clause dependency, and neutral candidates. The ultimate aim of this research is to enhance the performance of the DDI extraction task by combining the extracted features with well-established kernel methods...
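    The thesis combines linguistically driven features with kernel methods; the sketch below is a toy stand-in for that setup using scikit-learn's SVC, concatenating bag-of-words features with crude indicator features for negation, clause dependency, and neutral candidates. The cue lists, proxies, sentences, and labels are placeholders, not the thesis pipeline or data.

```python
# Toy kernel-based DDI classifier with added linguistic indicator features.
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

NEGATION_CUES = {"no", "not", "without", "lack"}


def linguistic_features(sentence):
    """Crude proxies for the three feature sets: negation, clause dependency,
    and neutral-candidate signals."""
    tokens = sentence.lower().split()
    negated = any(t in NEGATION_CUES for t in tokens)
    same_clause = "," not in sentence and " which " not in sentence
    neutral = " may " in sentence or " unknown " in sentence
    return [int(negated), int(same_clause), int(neutral)]


sentences = [
    "Aspirin increases the anticoagulant effect of warfarin.",
    "Ibuprofen does not alter the pharmacokinetics of metformin.",
    "Warfarin and aspirin were both administered, which was well tolerated.",
]
labels = [1, 0, 0]   # 1 = interaction asserted, 0 = no interaction

vectorizer = CountVectorizer()
X_text = vectorizer.fit_transform(sentences)
X_ling = csr_matrix([linguistic_features(s) for s in sentences])
clf = SVC(kernel="rbf").fit(hstack([X_text, X_ling]), labels)

test = "Aspirin does not affect the metabolism of metformin."
X_test = hstack([vectorizer.transform([test]),
                 csr_matrix([linguistic_features(test)])])
print(clf.predict(X_test))   # toy model: output is illustrative only
```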