
    Extraction of events and temporal expressions from clinical narratives

    This paper addresses the task of event and timex extraction from clinical narratives in the context of the i2b2 2012 challenge. State-of-the-art approaches for event extraction use a multi-class classifier to determine event types, but such approaches consider each event in isolation. In this paper, we present a sentence-level inference strategy that enforces consistency constraints on the attributes of events that appear close to one another. Our approach is general and can be used for other tasks as well. We also design novel features, such as clinical descriptors drawn from medical ontologies, which encode useful information about the underlying concepts. For timex extraction, we adapt a state-of-the-art system, HeidelTime, to clinical narratives and develop several complementary rules; we also give a robust algorithm for date extraction. On the event extraction task, we achieved an overall F1 score of 0.71 for determining event spans along with their attributes. On the timex extraction task, we achieved an F1 score of 0.79 for determining the spans of temporal expressions. We present a detailed error analysis of our system and point out factors that could improve its accuracy.
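
    The rule set and the date-extraction algorithm themselves are not given in the abstract, so the snippet below is only a minimal sketch of how regex-based rules for clinical dates, of the kind that could complement a rule-based tagger such as HeidelTime, might look. All patterns and names are illustrative assumptions, not the paper's.

```python
import re

# Illustrative date patterns; the paper's actual rule set is not reproduced here.
DATE_PATTERNS = [
    r"\b\d{4}-\d{1,2}-\d{1,2}\b",             # 2012-03-15
    r"\b\d{1,2}/\d{1,2}/(?:\d{4}|\d{2})\b",   # 03/15/2012 or 3/15/12
    r"\b(?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2},?\s+\d{4}\b",
]

def extract_dates(text):
    """Return (character span, matched string) pairs for date-like expressions."""
    spans = []
    for pattern in DATE_PATTERNS:
        for m in re.finditer(pattern, text, flags=re.IGNORECASE):
            spans.append((m.span(), m.group()))
    return sorted(spans)

if __name__ == "__main__":
    note = "Admission Date: 03/15/2012. Chest X-ray repeated on March 17, 2012."
    print(extract_dates(note))
```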

    Clinical Temporal Relation Extraction with Probabilistic Soft Logic Regularization and Global Inference

    There has been a steady need in the medical community to precisely extract the temporal relations between clinical events. In particular, temporal information can facilitate a variety of downstream applications such as case report retrieval and medical question answering. Existing methods either require expensive feature engineering or are incapable of modeling the global relational dependencies among the events. In this paper, we propose a novel method, Clinical Temporal Relation Extraction with Probabilistic Soft Logic Regularization and Global Inference (CTRL-PG), to tackle the problem at the document level. Extensive experiments on two benchmark datasets, I2B2-2012 and TB-Dense, demonstrate that CTRL-PG significantly outperforms baseline methods for temporal relation extraction. Comment: 10 pages, 4 figures, 7 tables; accepted by AAAI 2021.
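
    The abstract names probabilistic soft logic (PSL) regularization but gives no formulas, so the sketch below only illustrates the general idea: a Lukasiewicz-style soft transitivity constraint over predicted BEFORE probabilities, added as a penalty to a classifier's loss. The function, tensor names, and toy values are assumptions, not CTRL-PG's actual code.

```python
import torch

def transitivity_penalty(p_ab, p_bc, p_ac):
    """Distance to satisfaction of the soft rule
       BEFORE(a, b) AND BEFORE(b, c) -> BEFORE(a, c)
    under the Lukasiewicz t-norm used in probabilistic soft logic:
       max(0, p_ab + p_bc - 1 - p_ac).
    Inputs are predicted probabilities of BEFORE for the pairs
    (a, b), (b, c), and (a, c), as tensors of the same shape."""
    return torch.clamp(p_ab + p_bc - 1.0 - p_ac, min=0.0)

# Hypothetical usage: the averaged penalty would be added to the
# classification loss as a regularization term.
p_ab = torch.tensor([0.9])
p_bc = torch.tensor([0.8])
p_ac = torch.tensor([0.3])
print(transitivity_penalty(p_ab, p_bc, p_ac).mean())  # tensor(0.4000)
```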

    Algorithms and Software for Identifying Temporal Constructions in Semi-Structured Electronic Medical Texts

    This work aims to make the analysis of electronic medical records (EMRs) more efficient by developing tools for the automatic extraction of temporal constructions from medical documentation. The resulting tools will make it possible to place these constructions on a timeline and present them in a form convenient for medical staff.
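
    As a rough illustration of the timeline idea described above, the sketch below normalizes already-extracted date strings and orders them chronologically; the date formats and function names are hypothetical, not taken from the thesis.

```python
from datetime import datetime

# Illustrative formats only; real EMR text would need a richer normalizer.
FORMATS = ["%d.%m.%Y", "%Y-%m-%d", "%d/%m/%Y"]

def to_timeline(mentions):
    """mentions: (date string, event text) pairs extracted from an EMR.
    Returns the events ordered chronologically."""
    points = []
    for raw, event in mentions:
        for fmt in FORMATS:
            try:
                points.append((datetime.strptime(raw, fmt), event))
                break
            except ValueError:
                continue
    return sorted(points)

print(to_timeline([("14.02.2020", "admission"), ("2020-02-20", "discharge")]))
```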

    Improving Syntactic Parsing of Clinical Text Using Domain Knowledge

    Syntactic parsing is one of the fundamental tasks of Natural Language Processing (NLP), yet few studies have explored syntactic parsing in the medical domain. This dissertation systematically investigated methods to improve the performance of syntactic parsing of clinical text, including (1) constructing two clinical treebanks of discharge summaries and progress notes by developing annotation guidelines that handle missing elements in clinical sentences; (2) retraining four state-of-the-art parsers (the Stanford, Berkeley, Charniak, and Bikel parsers) on the clinical treebanks and comparing their performance to identify better parsing approaches; and (3) developing new methods that use semantic information to reduce syntactic ambiguity caused by Prepositional Phrase (PP) attachment and coordination. Our evaluation showed that the clinical treebanks greatly improved the performance of existing parsers; the Berkeley parser achieved the best F-1 score of 86.39% on the MiPACQ treebank. For PP attachment, our proposed methods improved accuracy by 2.35% on the MiPACQ corpus and 1.77% on the i2b2 corpus. For coordination, our method achieved precisions of 94.9% and 90.3% on the MiPACQ and i2b2 corpora, respectively. To further demonstrate the effectiveness of the improved parsing approaches, we applied the outputs of our parsers to two external NLP tasks: semantic role labeling and temporal relation extraction. The experimental results showed that both tasks benefited from the parse tree information produced by our optimized parsers, with F-measure improvements of 3.26% for semantic role labeling and 1.5% for temporal relation extraction.
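
    The dissertation's PP-attachment method is only summarized above; the toy sketch below illustrates the general idea of choosing between verb and noun attachment by comparing semantic association scores. The scores, examples, and function name are invented for illustration and could in practice come from co-occurrence counts or a medical ontology.

```python
# Hypothetical association scores between a head word and a preposition.
ASSOCIATION = {
    ("treated", "with"): 0.9,   # verb attachment: "treated ... with antibiotics"
    ("pain", "with"): 0.2,
    ("pain", "in"): 0.8,        # noun attachment: "pain in the abdomen"
    ("noted", "in"): 0.4,
}

def attach_pp(verb, noun, prep):
    """Attach the PP to whichever candidate head (verb or noun) is more
    strongly associated with the preposition."""
    verb_score = ASSOCIATION.get((verb, prep), 0.0)
    noun_score = ASSOCIATION.get((noun, prep), 0.0)
    return "verb" if verb_score >= noun_score else "noun"

print(attach_pp("treated", "infection", "with"))  # -> verb
print(attach_pp("noted", "pain", "in"))           # -> noun
```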

    Named Entity Recognition in Chinese Clinical Text

    Objective: Named entity recognition (NER) is one of the fundamental tasks in natural language processing (NLP). In the medical domain, there have been a number of studies on NER in English clinical notes; however, very little NER research has been done on clinical notes written in Chinese. The goal of this study is to develop corpora, methods, and systems for NER in Chinese clinical text. Materials and methods: To study entities in Chinese clinical text, we started by building annotated clinical corpora in Chinese. We developed an NER annotation guideline in Chinese by extending the one used in the 2010 i2b2 NLP challenge. We randomly selected 400 admission notes and 400 discharge summaries from Peking Union Medical College Hospital (PUMCH) in China. For each note, four types of entities (clinical problems, procedures, labs, and medications) were annotated according to the developed guideline. In addition, an annotation tool was developed to assist two MD students in annotating the Chinese clinical documents. A comparison of entity distributions between Chinese and English clinical notes (646 English and 400 Chinese discharge summaries) was performed using the annotated corpora to identify important features for NER. In the NER study, two-thirds of the 400 notes were used for training the NER systems and one-third for testing. We investigated the effects of different types of features, including bag-of-characters, word segmentation, part-of-speech, and section information, with different machine learning (ML) algorithms, including Conditional Random Fields (CRF), Support Vector Machines (SVM), Maximum Entropy (ME), and Structural Support Vector Machines (SSVM), on the Chinese clinical NER task. All classifiers were trained on the training set and evaluated on the test set, and micro-averaged precision, recall, and F-measure were reported. Results: Our evaluation on the independent test set showed that most feature types were beneficial to Chinese NER systems, although the improvements were limited. By combining word segmentation and section information, the system achieved the highest performance, indicating that these two types of features are complementary. When the same types of optimized features were used, CRF and SSVM outperformed SVM and ME. More specifically, SSVM reached the highest performance among the four algorithms, with F-measures of 93.51% and 90.01% for admission notes and discharge summaries, respectively. Conclusions: In this study, we created large annotated datasets of Chinese admission notes and discharge summaries and then systematically evaluated different types of features (e.g., syntactic, semantic, and segmentation information) and four ML algorithms (CRF, SVM, SSVM, and ME) for clinical NER in Chinese. To the best of our knowledge, this is one of the earliest comprehensive efforts in Chinese clinical NER research, and we believe it will provide valuable insights for NLP research on Chinese clinical text. Our results suggest that both word segmentation and section information improve NER in Chinese clinical text, and that SSVM, a recent sequential labeling algorithm, outperformed CRF and the other classification algorithms. Our best system achieved F-measures of 90.01% and 93.52% on Chinese discharge summaries and admission notes, respectively, indicating a promising start for Chinese clinical NLP research.
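
    The study does not state which CRF toolkit was used, so the sketch below only shows, as an assumption, how character, segmentation, and section features of the kinds listed above could be fed to a CRF via the sklearn-crfsuite package; the toy sentence and labels are illustrative.

```python
import sklearn_crfsuite  # assumed toolkit; any CRF implementation would do

def char_features(sentence, i, section):
    """Character-level features with word-segmentation and section information,
    loosely mirroring the feature types described above."""
    char, seg = sentence[i]
    return {
        "char": char,
        "seg": seg,                # segmentation tag, e.g. B/I/E/S
        "section": section,        # e.g. "chief_complaint"
        "prev_char": sentence[i - 1][0] if i > 0 else "<BOS>",
        "next_char": sentence[i + 1][0] if i < len(sentence) - 1 else "<EOS>",
    }

# Toy training pair; real data would be annotated admission/discharge notes.
sent = [("腹", "B"), ("痛", "E")]  # "abdominal pain"
X_train = [[char_features(sent, i, "chief_complaint") for i in range(len(sent))]]
y_train = [["B-problem", "I-problem"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```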

    A Model for Knowledge Discovery Based on Semantic and Temporal Association between Textual Elements

    Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia e Gestão do Conhecimento, Florianópolis, 2016. The increasing complexity of organizational activities, the rapid expansion of the Internet, and advances in the knowledge society are among the drivers of an unprecedented volume of digital data. This growing mass of data holds great potential for pattern analysis and knowledge discovery, and analyzing the relationships present in this immense volume of information can provide new and possibly unexpected insights. This research found a shortage of studies that adequately consider the semantics and temporality of relationships between textual elements, characteristics regarded as important for knowledge discovery. This work therefore proposes a knowledge discovery model that relies on a high-level ontology to represent relationships and on Latent Semantic Indexing (LSI) to determine the strength of association between terms that are not directly related. Both the representation of domain knowledge and the computation of the associative strength between terms take into account the time at which the relationships occur. The model was evaluated with two types of experiments: one on document classification and one on semantic and temporal association between terms. The results show that the model (i) has the potential to be applied to knowledge-intensive tasks such as classification and (ii) can display curves of the associative strength between two terms over time, supporting the generation of hypotheses and, consequently, knowledge discovery.
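
    As a rough sketch of the LSI-based association described above (the thesis's time-aware formulation is not reproduced here), the code below computes, for each time window, the cosine similarity between two terms' vectors in a truncated-SVD space built from TF-IDF. The toy corpus, parameters, and function name are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def term_association(docs_per_window, term_a, term_b, k=2):
    """For each time window, project terms into a k-dimensional LSI space and
    return the cosine similarity between the two terms' vectors."""
    scores = []
    for docs in docs_per_window:
        tfidf = TfidfVectorizer()
        X = tfidf.fit_transform(docs)               # documents x terms
        svd = TruncatedSVD(n_components=k).fit(X)
        vocab = tfidf.vocabulary_
        if term_a not in vocab or term_b not in vocab:
            scores.append(0.0)
            continue
        va = svd.components_[:, vocab[term_a]]      # term vector in LSI space
        vb = svd.components_[:, vocab[term_b]]
        cos = float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))
        scores.append(cos)
    return scores

windows = [
    ["knowledge discovery from text", "temporal text mining"],
    ["semantic association between terms", "knowledge discovery over time"],
]
print(term_association(windows, "knowledge", "discovery"))
```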