
    Normalisation of imprecise temporal expressions extracted from text

    Information extraction systems and techniques have been widely used to deal with the increasing amount of unstructured data available nowadays. Time is among the different kinds of information that may be extracted from such unstructured data sources, including text documents. However, the inability to correctly identify and extract temporal information from text makes it difficult to understand how the extracted events are organised in chronological order. Furthermore, in many situations, the meaning of temporal expressions (timexes) is imprecise, as in “less than 2 years” and “several weeks”, and cannot be accurately normalised, leading to interpretation errors. Although some approaches can represent imprecise timexes, they are designed for specific scenarios and are difficult to generalise. This paper presents a novel methodology to analyse and normalise imprecise temporal expressions by representing temporal imprecision in the form of membership functions, based on human interpretation of time in two different languages (Portuguese and English). Each resulting model is a generalisation of probability distributions in the form of trapezoidal and hexagonal fuzzy membership functions. We use an adapted F1-score to guide the choice of the best models for each kind of imprecise timex, and a weighted F1-score (F1_3D) as a complementary metric to identify relevant differences when comparing two normalisation models. We apply the proposed methodology to three distinct classes of imprecise timexes, and the resulting models give distinct insights into the way each kind of temporal expression is interpreted.
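    As an illustration of the kind of normalisation model described above, a trapezoidal fuzzy membership function can be written as a short piecewise-linear function. This is only a sketch: the breakpoints a, b, c, d below are hypothetical values for an expression such as “several weeks” (measured in days), not the parameters fitted in the paper.

```python
def trapezoidal_membership(x, a, b, c, d):
    """Degree (0..1) to which a duration x is compatible with an imprecise timex.

    Breakpoints satisfy a <= b <= c <= d: membership rises from 0 at a to 1 at b,
    stays at 1 up to c, and falls back to 0 at d.
    """
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)  # rising edge
    return (d - x) / (d - c)      # falling edge

# Hypothetical model for "several weeks", with x measured in days.
for days in (7, 14, 21, 35, 60):
    print(days, round(trapezoidal_membership(days, 10, 14, 28, 42), 2))
```

    The hexagonal models mentioned in the abstract follow the same idea with additional breakpoints, allowing the rising and falling edges of the membership function to bend.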

    Normalisation of imprecise temporal expressions extracted from text

    Advisor: Prof. Dr. Marcos Didonet Del Fabro. Co-advisor: Prof. Dr. Angus Roberts. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defence: Curitiba, 05/04/2016. Includes references: f. 95-105. Abstract: Information Extraction systems and techniques are able to deal with the increasing amount of unstructured data available nowadays. Time is amongst the different kinds of information that may be extracted from such unstructured data sources, including text documents. Time describes changes which happen through the occurrence of events, and provides a way to record, order, and measure the duration of such occurrences. The inability to identify and extract temporal information from text makes it difficult to understand how the events are organized in chronological order. Moreover, in many situations, the meaning of temporal expressions is imprecise and cannot be accurately described, leading to interpretation errors. Existing solutions provide alternative ways of representing imprecise temporal expressions, though they are specific and hard to generalise. Furthermore, the analysis of temporal data may be particularly inefficient in the presence of spelling errors. Existing approaches use string similarity methods to search for valid words within a text; however, they are not rich enough to process misspellings in an efficient way. In this thesis, we present a methodology to analyse and normalise imprecise temporal expressions, in which, after collecting and pre-processing data on how people interpret vague descriptions of time in text, we compare different techniques in order to create and select the most appropriate normalisation model for each kind of imprecise expression. We also compare how a rule-based system and a machine learning approach perform at identifying temporal expressions in text, and we analyse the process of producing gold standards, identifying possible sources of issues and giving some recommendations to be considered in future manual annotation efforts. Finally, we propose a phonetic map and evaluate how encoding phonetic information could be used to assist similarity search methods and improve the quality of the extracted information.
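    The phonetic-map idea in the final contribution can be sketched as follows. This is a toy illustration, not the phonetic map proposed in the thesis: words are reduced to a coarse phonetic key so that likely misspellings fall into the same bucket, and a string similarity measure is then applied only within that bucket.

```python
from collections import defaultdict
from difflib import SequenceMatcher

CONSONANT_MAP = {"k": "c", "z": "s", "w": "v"}
VOWELS = set("aeiouy")

def phonetic_key(word):
    """Collapse vowels and a few commonly confused consonants into canonical
    forms and drop 'h', so that many misspellings share a key with the
    intended word (illustrative rules only)."""
    key = []
    for ch in word.lower():
        if ch == "h":
            continue
        if ch in VOWELS:
            ch = "a"
        key.append(CONSONANT_MAP.get(ch, ch))
    return "".join(key)

def build_index(vocabulary):
    index = defaultdict(list)
    for term in vocabulary:
        index[phonetic_key(term)].append(term)
    return index

def best_match(misspelled, index):
    """Search the bucket with the same phonetic key; fall back to all terms."""
    candidates = index.get(phonetic_key(misspelled), [])
    if not candidates:
        candidates = [t for bucket in index.values() for t in bucket]
    return max(candidates, default=None,
               key=lambda t: SequenceMatcher(None, misspelled, t).ratio())

index = build_index(["weeks", "years", "months", "yesterday"])
print(best_match("weaks", index))  # -> "weeks"
```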

    Natural language processing for mimicking clinical trial recruitment in critical care: a semi-automated simulation based on the LeoPARDS trial

    Clinical trials often fail to recruit an adequate number of appropriate patients. Identifying eligible trial participants is resource-intensive when relying on manual review of clinical notes, particularly in critical care settings where the time window is short. Automated review of electronic health records (EHR) may help, but much of the information is in free text rather than a computable form. We applied natural language processing (NLP) to free text EHR data using the CogStack platform to simulate recruitment into the LeoPARDS study, a clinical trial aiming to reduce organ dysfunction in septic shock. We applied an algorithm to identify eligible patients using a moving 1-hour time window, and compared patients identified by our approach with those actually screened and recruited for the trial, for the period for which data were available. We manually reviewed records of a random sample of patients identified by the algorithm but not screened in the original trial. Our method identified 376 patients, including 34 patients with EHR data available who were actually recruited to LeoPARDS in our centre. The sensitivity of CogStack for identifying patients screened was 90% (95% CI 85%, 93%). Of the 203 patients identified by both manual screening and CogStack, the index date matched in 95 (47%) and CogStack was earlier in 94 (47%). In conclusion, analysis of EHR data using NLP could effectively replicate recruitment in a critical care trial, and identify some eligible patients at an earlier stage, potentially improving trial recruitment if implemented in real time.
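    A minimal sketch of the moving-window screening step described above is given below. The criterion names and example timestamps are hypothetical, not the LeoPARDS eligibility rules: each timestamped finding comes either from structured data or from NLP-annotated free text, and a patient is flagged at the first window start in which all criteria co-occur within one hour.

```python
from datetime import datetime, timedelta

# Illustrative criteria only, not the LeoPARDS protocol.
CRITERIA = {"suspected_infection", "on_vasopressors", "adequate_fluid_resuscitation"}

def first_eligible_time(observations, window=timedelta(hours=1)):
    """Return the earliest observation time at which every criterion is seen
    within the same moving window, or None if the patient never qualifies.
    Each observation is a dict with a 'time' and a 'criterion' key."""
    events = sorted(observations, key=lambda o: o["time"])
    for i, start in enumerate(events):
        window_end = start["time"] + window
        seen = {e["criterion"] for e in events[i:] if e["time"] <= window_end}
        if CRITERIA <= seen:
            return start["time"]
    return None

obs = [
    {"time": datetime(2016, 5, 1, 10, 5), "criterion": "suspected_infection"},
    {"time": datetime(2016, 5, 1, 10, 40), "criterion": "on_vasopressors"},
    {"time": datetime(2016, 5, 1, 10, 55), "criterion": "adequate_fluid_resuscitation"},
]
print(first_eligible_time(obs))  # 2016-05-01 10:05:00
```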

    Formulaic sequences in Early Modern English: A corpus-assisted historical pragmatic study

    This doctoral project identifies formulaic sequences (hereinafter FS, plural FSs) in Early Modern English (hereinafter EModE) and investigates the functions they serve in communication and in different text types, namely EModE dialogues and letters. The study makes three main contributions. Firstly, it provides solid arguments and further evidence that FSs are constructions in the sense of Construction Grammar rather than exceptions in the traditional grammar-dictionary model. Within this theoretical framework, I propose a new working definition of FSs that is inclusive, descriptive, and methodologically neutral. The study also argues that there are fundamental differences between FSs and lexical bundles (LBs), although the latter are often treated as an alternative term for FSs or as sub-groups of FSs. Nevertheless, after a thorough review of the characteristics of the two multi-word units, the study argues that, despite the differences, LBs can be upgraded to FSs as long as they fulfil certain semantic, syntactic, and pragmatic criteria. This forms the foundation of the study's methodological design. Secondly, the study enhances the corpus-assisted approach to the identification of FSs, especially in EModE texts. The approach consists of three steps: preparation, identification, and generalisation. The identification step is further conducted in two phases: automatic generation of LBs from a corpus and manual identification of FSs from those LBs. In the preparation step, the dissertation critically discusses how spelling variation in EModE texts should be dealt with in investigations of FSs, and I design a series of criteria for the two-phase identification of FSs. On the one hand, I disagree with previous research that excludes two-word LBs from examination, arguing that many of them are formulaic and cannot be captured from longer LBs, and that the workload of processing the large number of two-word LBs is in fact manageable. On the other hand, the study contributes an easy-to-follow flow chart demonstrating the procedure for manually identifying FSs from LBs and listing the criteria that guide the decision-making process. Thirdly, the study provides a systematic and comprehensive account of FSs in EModE dialogues and letters, especially how their forms are conventionally mapped to their functions. Data analysis covers aspects such as degree of fixedness, grammatical structure, distribution across function categories, multi-functional FSs, and genre-specific FSs. The general findings suggest that EModE dialogues and letters share many similarities regarding the form and function of FSs and the general trends of distribution across function categories. However, notable differences between the two text types can also be observed. In terms of form, the distinction lies in word choice in the realisations of certain FSs; in terms of meaning/function, it lies in the kinds of functions that rely on FSs the most or the least and in common function combinations. More importantly, the study observes two types of relationships among FSs themselves and between FSs and the discourse, namely horizontal networks and vertical networks, which reflects the complexity of FSs and their identity as constructions. Specifically, the three types of horizontal networks are embedding, attaching, and joining. A pair of new concepts is proposed to describe the vertical networks: superordinate FSs and subordinate FSs. As a result of the vertical networks, three types of functional deviation are observed: function extension, function shifting, and function specification.
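    The automatic phase of the two-phase identification, the generation of lexical bundles, can be sketched as a simple recurrent n-gram count. The frequency and dispersion thresholds below are illustrative, not the study's own cut-offs, and the toy "letters" are invented examples.

```python
from collections import Counter
import re

def lexical_bundles(texts, n=2, min_freq=3, min_texts=2):
    """Return n-word sequences that occur at least min_freq times overall
    and appear in at least min_texts different texts (illustrative thresholds)."""
    freq, spread = Counter(), Counter()
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        grams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        freq.update(grams)          # total occurrences across the corpus
        spread.update(set(grams))   # number of texts each bundle appears in
    return sorted(b for b in freq if freq[b] >= min_freq and spread[b] >= min_texts)

# Invented mini-corpus standing in for EModE letters.
letters = [
    "i pray you let me know how you do",
    "i pray you send me word how you do",
    "let me know how you do i pray you",
]
print(lexical_bundles(letters))  # ['how you', 'i pray', 'pray you', 'you do']
```

    The manual phase would then filter such bundles against the semantic, syntactic, and pragmatic criteria described above to decide which of them qualify as FSs.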

    Accessing natural history: Discoveries in data cleaning, structuring, and retrieval


    Programmiersprachen und Rechenkonzepte

    Since 1984, the GI special interest group "Programmiersprachen und Rechenkonzepte", which emerged from the former groups 2.1.3 "Implementierung von Programmiersprachen" and 2.1.4 "Alternative Konzepte für Sprachen und Rechner", has regularly held a spring workshop at the Physikzentrum Bad Honnef. The meeting serves primarily as an opportunity to get to know one another, exchange experiences, hold discussions, and deepen mutual contacts.

    Formal concept matching and reinforcement learning in adaptive information retrieval

    The superiority of the human brain in information retrieval (IR) tasks seems to come firstly from its ability to read and understand the concepts, ideas or meanings central to documents, in order to reason out the usefulness of documents to information needs, and secondly from its ability to learn from experience and be adaptive to the environment. In this work we attempt to incorporate these properties into the development of an IR model to improve document retrieval. We investigate the applicability of concept lattices, which are based on the theory of Formal Concept Analysis (FCA), to the representation of documents. This allows the use of more elegant representation units, as opposed to keywords, in order to better capture the concepts/ideas expressed in natural language text. We also investigate the use of a reinforcement learning strategy to learn and improve document representations, based on the information present in query statements and user relevance feedback. Features or concepts of each document/query, formulated using FCA, are weighted separately with respect to the documents they occur in, and organised into separate concept lattices according to a subsumption relation. Furthermore, each concept lattice is encoded in a two-layer neural network structure known as a Bidirectional Associative Memory (BAM), for efficient manipulation of the concepts in the lattice representation. This avoids implementation drawbacks faced by other FCA-based approaches. Retrieval of a document for an information need is based on concept matching between the concept lattice representations of a document and a query. The learning strategy works by making the similarity of relevant documents stronger and that of non-relevant documents weaker for each query, depending on the users' relevance judgements on retrieved documents. Our approach is radically different from existing FCA-based approaches in the following respects: concept formulation; weight assignment to object-attribute pairs; the representation of each document in a separate concept lattice; and the encoding of concept lattices in BAM structures. Furthermore, in contrast to the traditional relevance feedback mechanism, our learning strategy makes use of relevance feedback information to enhance document representations, thus making the document representations dynamic and adaptive to user interactions. The results obtained on the CISI, CACM and ASLIB Cranfield collections are presented and compared with published results. In particular, the performance of the system is shown to improve significantly as the system learns from experience.
    The School of Computing, University of Plymouth, UK
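    As a minimal sketch of the FCA step underlying this representation (using a toy document-term context, and omitting the thesis's weighting scheme and BAM encoding), a formal concept is a pair of an object set and an attribute set that determine each other through the two derivation operators:

```python
from itertools import combinations

# Toy formal context: documents (objects) x index terms (attributes).
CONTEXT = {
    "doc1": {"retrieval", "lattice", "concept"},
    "doc2": {"retrieval", "feedback"},
    "doc3": {"lattice", "concept", "feedback"},
}
ALL_ATTRS = set().union(*CONTEXT.values())

def intent(objects):
    """Attributes shared by every object in the set (derivation operator ')."""
    sets = [CONTEXT[o] for o in objects]
    return set.intersection(*sets) if sets else set(ALL_ATTRS)

def extent(attributes):
    """Objects possessing every attribute in the set (derivation operator ')."""
    return {o for o, attrs in CONTEXT.items() if attributes <= attrs}

def formal_concepts():
    """Enumerate all formal concepts (A, B) with A = extent(B) and B = intent(A)."""
    concepts = set()
    for r in range(len(ALL_ATTRS) + 1):
        for attrs in combinations(sorted(ALL_ATTRS), r):
            objs = extent(set(attrs))
            concepts.add((frozenset(objs), frozenset(intent(objs))))
    return concepts

for objs, attrs in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(objs), sorted(attrs))
```

    Ordering these concepts by inclusion of their extents yields the concept lattice; in the thesis, such a lattice is built per document and then encoded in a BAM for retrieval.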