
    A processing framework for temporal analysis and its application to instructional texts

    Temporal analysis is the task of determining the temporal structure of a given text. Such a structure represents the order of the events and states mentioned in the text on a time line. The main contribution of this thesis is a new processing framework for temporal analysis. The framework is computational and has been implemented in a system called taste for the temporal analysis of instructional texts; in particular, taste has been successfully tested on nine cookery recipes. Amongst the more important ideas explored in this thesis are the following:
    • We integrate qualitative information (as expressed by temporal connectives like before and after) and quantitative information (as expressed in phrases like for 20 minutes and 20 minutes before) into the temporal analysis framework. Previous work has considered only qualitative information and ignored the quantitative kind.
    • We propose a new approach to the problem of integrating the current event or state into the preceding discourse. This problem has been identified as important for solving the temporal analysis task.
    • We show how information from the environment surrounding a text can affect the temporal analysis of instructional texts. In particular, we show that different temporal structures for the same text can be derived in different environments. Note that the environment information is in addition to the information usually considered in temporal analysis, such as tense and aspect, temporal connectives and real-world knowledge. An example of environment information in the domain of cookery recipes is the availability of resources for carrying out an action.
    • We incorporate techniques developed in the field of temporal reasoning into the temporal analysis task, and we analyse the complexity of the temporal reasoning algorithms needed in the temporal analysis of instructional texts (a minimal constraint-based sketch of this idea is given below).
    • We propose a novel ontology for representing the composite and repetitive events that are mentioned in cookery recipes.
    Finally, the thesis ends with some suggestions for extending the work reported here.
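    The abstract describes the framework only at a high level. As a minimal sketch of how qualitative ordering constraints and quantitative durations can live in one representation, the following simple temporal network encodes both kinds of information as bounds on time differences and checks their consistency; all class, method and event names are our own illustration, not taken from taste.

```python
# Minimal simple-temporal-network sketch: qualitative ordering ("A before B")
# and quantitative offsets ("for 20 minutes") both become [lower, upper] bounds
# on time differences, here in minutes. All names are illustrative only.
import math

class TemporalNetwork:
    def __init__(self, events):
        self.events = list(events)
        n = len(self.events)
        # dist[i][j] is the current upper bound on t(events[j]) - t(events[i])
        self.dist = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]

    def add_constraint(self, a, b, lower=0.0, upper=math.inf):
        """Constrain lower <= t(b) - t(a) <= upper."""
        i, j = self.events.index(a), self.events.index(b)
        self.dist[i][j] = min(self.dist[i][j], upper)
        self.dist[j][i] = min(self.dist[j][i], -lower)

    def consistent(self):
        """Floyd-Warshall propagation; a negative cycle means inconsistent constraints."""
        n, d = len(self.events), self.dist
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    d[i][j] = min(d[i][j], d[i][k] + d[k][j])
        return all(d[i][i] >= 0 for i in range(n))

net = TemporalNetwork(["preheat", "mix", "bake_start", "bake_end"])
net.add_constraint("preheat", "bake_start")            # qualitative: "before"
net.add_constraint("mix", "bake_start")                # qualitative: "before"
net.add_constraint("bake_start", "bake_end", 20, 20)   # quantitative: "for 20 minutes"
print(net.consistent())                                # True
```

    Running the example reports the constraint set as consistent; adding a further requirement that bake_end precede preheat would introduce a negative cycle and the check would fail, which is the kind of reasoning whose complexity the thesis analyses.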

    Systemic Strategies to Improve the Readability of the English Version of Indonesian Children Stories

    The paper discusses language exploitation in children's story books and offers several systemic strategies to improve the quality of language exploitation so that the books achieve better readability. Thirty children's story books, classified as narratives by their publishers, were randomly selected for the analysis. The books are targeted at children from five to twelve years old. The analysis of the text structure shows that all the stories have the three obligatory discourse units, namely orientation, complication, and resolution. Meanwhile, seen from the perspective of lexicogrammatical exploitation, most of the books contain various grammatical mistakes and difficult words.

    A Web-Based Tool for Analysing Normative Documents in English

    Our goal is to use formal methods to analyse normative documents written in English, such as privacy policies and service-level agreements. This requires the combination of a number of different elements, including information extraction from natural language, formal languages for model representation, and an interface for property specification and verification. We have worked on a collection of components for this task: a natural language extraction tool, a suitable formalism for representing such documents, an interface for building models in this formalism, and methods for answering queries asked of a given model. In this work, each of these components is brought together in a web-based tool, providing a single interface for analysing normative texts in English. Through the use of a running example, we describe each component and demonstrate the workflow established by our tool.
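    The authors' components are not reproduced in the abstract. The sketch below is only a rough, stand-alone illustration of the described workflow (extract normative clauses from English text, build a simple formal model, then query it); every name and regex pattern here is our own assumption rather than part of the actual tool.

```python
# Hypothetical pipeline sketch, not the authors' implementation: extract
# normative clauses from English text, build a small formal model of
# (party, modality, action) triples, then answer queries against it.
import re
from dataclasses import dataclass

@dataclass
class Clause:
    party: str
    modality: str   # "obligation", "permission" or "prohibition"
    action: str

MODALITIES = [
    (r"\bmust not\b|\bshall not\b", "prohibition"),
    (r"\bmust\b|\bshall\b", "obligation"),
    (r"\bmay\b", "permission"),
]

def extract_clauses(text):
    """Very rough information extraction: split sentences, detect modal verbs."""
    clauses = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        for pattern, modality in MODALITIES:
            m = re.search(pattern, sentence, flags=re.IGNORECASE)
            if m:
                party = sentence[:m.start()].strip(" ,;")
                action = sentence[m.end():].strip(" ,;.")
                clauses.append(Clause(party, modality, action))
                break
    return clauses

def query(model, modality):
    """Return all (party, action) pairs with the given modality."""
    return [(c.party, c.action) for c in model if c.modality == modality]

policy = ("The provider must notify the customer within 24 hours. "
          "The customer may terminate the agreement. "
          "The provider must not share personal data.")
model = extract_clauses(policy)
print(query(model, "obligation"))
# [('The provider', 'notify the customer within 24 hours')]
```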

    EliXR-TIME: A Temporal Knowledge Representation for Clinical Research Eligibility Criteria.

    Effective clinical text processing requires accurate extraction and representation of temporal expressions. Multiple temporal information extraction models have been developed, but the need to extract and represent temporal expressions in eligibility criteria (e.g., for eligibility determination) remains. We identified the temporal knowledge representation requirements of eligibility criteria by reviewing 100 temporal criteria. We developed EliXR-TIME, a frame-based representation designed to support semantic annotation of temporal expressions in eligibility criteria by reusing applicable classes from well-known clinical temporal knowledge representations. We used EliXR-TIME to analyze a training set of 50 new temporal eligibility criteria. We evaluated EliXR-TIME on an additional random sample of 20 eligibility criteria with temporal expressions that have no overlap with the training data, yielding 92.7% (76/82) inter-coder agreement on sentence chunking and 72% (72/100) agreement on semantic annotation. We conclude that this knowledge representation can facilitate semantic annotation of the temporal expressions in eligibility criteria.
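    EliXR-TIME's actual classes are not listed in the abstract, so the frame below is only a hypothetical illustration of what a frame-based representation of a temporal eligibility criterion can look like; none of the slot names are taken from EliXR-TIME or from the clinical knowledge representations it reuses.

```python
# Hypothetical frame for a temporal eligibility criterion. The class and slot
# names below are our own illustration, NOT the actual EliXR-TIME schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TemporalCriterionFrame:
    clinical_event: str                    # e.g. a condition, treatment or lab test
    temporal_relation: str                 # e.g. "within", "before", "after"
    duration_value: Optional[float] = None
    duration_unit: Optional[str] = None
    anchor_event: str = "study enrollment"

# "Myocardial infarction within 6 months prior to enrollment"
criterion = TemporalCriterionFrame(
    clinical_event="myocardial infarction",
    temporal_relation="within",
    duration_value=6,
    duration_unit="month",
)
print(criterion)
```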

    Quantifying origin and character of long-range correlations in narrative texts

    In natural language, using short sentences is considered efficient for communication. However, a text composed exclusively of such sentences looks technical and is boring to read, while a text composed of long sentences demands significantly more effort for comprehension. Studying the characteristics of sentence length variability (SLV) in a large corpus of world-famous literary texts shows that an appealing and aesthetic optimum appears somewhere in between and involves a self-similar, cascade-like alternation of sentences of various lengths. A related quantitative observation is that the power spectra S(f) of the SLV thus characterized universally develop a convincing 1/f^β scaling with an average exponent β ≈ 1/2, close to what has previously been identified in musical compositions or in brain waves. An overwhelming majority of the studied texts simply obey such fractal attributes, but especially spectacular in this respect are hypertext-like, "stream of consciousness" novels. In addition, the latter appear to develop structures characteristic of irreducibly interwoven sets of fractals called multifractals. Scaling of S(f) in the present context implies the existence of long-range correlations in texts, and the appearance of multifractality indicates that these correlations even carry a nonlinear component. A distinct role of the full stops in inducing the long-range correlations is evidenced by the fact that the above quantitative characteristics manifest themselves in the variation of full-stop recurrence times along texts, i.e. in the SLV, but to a much lesser degree in the recurrence times of the most frequent words. In this latter case the nonlinear correlations, and thus multifractality, disappear completely for all the texts considered. Treated as one extra word, the full stops nevertheless appear to obey the Zipfian rank-frequency distribution. Comment: 28 pages, 8 figures, accepted for publication in Information Science
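    As a minimal sketch of the kind of measurement described above, the following code builds the sentence-length series of a plain-text file and estimates the spectral exponent β from a log-log fit of its power spectrum. The sentence splitting, detrending and fitting choices are our assumptions, not the authors' exact procedure, and novel.txt is a placeholder file name.

```python
# Sketch of the measurement described above: sentence-length variability (SLV)
# series and its power spectrum S(f) ~ 1/f^beta. The preprocessing and fitting
# choices here are assumptions, not the authors' procedure.
import re
import numpy as np

def sentence_lengths(text):
    """Number of words in each sentence, using full stops / ? / ! as delimiters."""
    sentences = re.split(r"[.!?]+", text)
    return np.array([len(s.split()) for s in sentences if s.strip()], dtype=float)

def spectral_exponent(lengths):
    """Estimate beta from a least-squares fit of log S(f) versus log f."""
    series = lengths - lengths.mean()
    spectrum = np.abs(np.fft.rfft(series)) ** 2
    freqs = np.fft.rfftfreq(len(series))
    mask = freqs > 0                      # drop the DC component
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
    return -slope                         # S(f) ~ f^(-beta)

with open("novel.txt", encoding="utf-8") as f:   # placeholder: any plain-text novel
    beta = spectral_exponent(sentence_lengths(f.read()))
print(f"estimated beta ≈ {beta:.2f}")
```

    The reported average of β ≈ 1/2 refers to the authors' corpus and methodology; this crude estimator is only meant to make the measured quantity concrete.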

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. This position paper gives examples of neurocognitive inspirations and promising directions in this area.

    Automatic case acquisition from texts for process-oriented case-based reasoning

    This paper introduces a method for the automatic acquisition of a rich case representation from free text for process-oriented case-based reasoning. Case engineering is among the most complicated and costly tasks in implementing a case-based reasoning system. This is especially so for process-oriented case-based reasoning, where more expressive case representations are generally used and, in our opinion, actually required for satisfactory case adaptation. In this context, the ability to acquire cases automatically from procedural texts is a major step forward towards reasoning about processes. We therefore detail a methodology that makes case acquisition from processes described as free text possible, with special attention given to assembly instruction texts. This methodology extends the techniques we used to extract actions from cooking recipes. We argue that techniques taken from natural language processing are required for this task, and that they give satisfactory results. An evaluation based on our implemented prototype extracting workflows from recipe texts is provided. Comment: In press, publication expected in 201
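    The authors' natural-language-processing pipeline is not reproduced in the abstract; the toy sketch below only conveys the basic idea of turning procedural text into an action sequence by taking the leading imperative verb of each step as the action and the rest of the step as its arguments. All names are hypothetical, and a real system would need the NLP techniques the paper argues for.

```python
# Toy illustration of acquiring a workflow from procedural text: one action per
# step, taken as the leading imperative verb plus its arguments. This regex-level
# sketch is our own simplification, not the authors' extraction method.
import re

def extract_actions(recipe_text):
    """Return (verb, arguments) pairs, one per imperative recipe step."""
    actions = []
    for step in re.split(r"(?<=[.!?])\s+|\n+", recipe_text.strip()):
        words = step.strip(" .").split()
        if not words:
            continue
        verb, args = words[0].lower(), " ".join(words[1:])
        actions.append((verb, args))
    return actions

recipe = ("Preheat the oven to 180 degrees. "
          "Mix the flour with the eggs. "
          "Bake the batter for 20 minutes.")
for verb, args in extract_actions(recipe):
    print(f"{verb}({args})")
# preheat(the oven to 180 degrees)
# mix(the flour with the eggs)
# bake(the batter for 20 minutes)
```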