
    Text adaptation using formal concept analysis

    The original publication is available at www.springerlink.com. This paper addresses the issue of adapting cases represented by plain text with the help of formal concept analysis and natural language processing technologies. The actual cases represent recipes, in which we classify ingredients according to the culinary techniques applied to them. The complex nature of linguistic anaphora in recipe texts makes usual text-mining techniques inefficient, so a stronger approach, using syntactic and dynamic semantic analysis to build a formal representation of a recipe, had to be used. This representation is useful for various applications but, in this paper, we show how one can extract ingredient–action relations from it in order to use formal concept analysis and select an appropriate replacement sequence of culinary actions to use in adapting the recipe text.
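The ingredient–action relations the abstract mentions can be read as a formal context (ingredients as objects, culinary actions as attributes), from which formal concept analysis derives a lattice of concepts. The following is a minimal sketch of that derivation on an invented toy context; the data and names are illustrative assumptions, not the paper's actual representation.

```python
from itertools import combinations

# Toy formal context (illustrative, not from the paper): ingredients as
# objects, the culinary actions applied to them as attributes.
context = {
    "carrot": {"peel", "chop", "boil"},
    "potato": {"peel", "chop", "boil", "mash"},
    "onion":  {"peel", "chop", "fry"},
}
all_attrs = set().union(*context.values())

def close(attrs):
    """Derive in both directions: attributes -> extent -> intent."""
    extent = {o for o, a in context.items() if attrs <= a}
    intent = set.intersection(*(context[o] for o in extent)) if extent else set(all_attrs)
    return frozenset(extent), frozenset(intent)

# Enumerate formal concepts by closing every attribute subset
# (exponential in general, but fine for a toy context of this size).
concepts = set()
for r in range(len(all_attrs) + 1):
    for subset in combinations(sorted(all_attrs), r):
        concepts.add(close(set(subset)))

for extent, intent in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```

A concept such as ({carrot, potato}, {peel, chop, boil}) groups the ingredients that share a sequence of actions, which is the kind of grouping the adaptation step can exploit when choosing a replacement ingredient.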

    Semi-automatic annotation process for procedural texts: An application on cooking recipes

    Taaable is a case-based reasoning system that adapts cooking recipes to user constraints. Within it, the preparation part of recipes is formalised as a graph. This graph is a semantic representation of the sequence of instructions composing the cooking process and is used to compute the procedure adaptation, conjointly with the textual adaptation. It is composed of cooking actions and ingredients, among others, represented as vertices, and of semantic relations between those, represented as arcs, and is built automatically thanks to natural language processing. The result of the automatic annotation process is often a disconnected graph, representing an incomplete annotation, or may contain errors. Therefore, a validating and correcting step is required. In this paper, we present an existing graphic tool named Kcatos, designed for representing and editing decision trees, and show how it has been adapted and integrated into WikiTaaable, the semantic wiki in which the knowledge used by Taaable is stored. This interface provides the wiki users with a way to correct the case representation of the cooking process, improving at the same time the quality of the knowledge about cooking procedures stored in WikiTaaable.
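The abstract notes that a disconnected annotation graph signals an incomplete annotation. That check is a plain connected-components computation over the action/ingredient graph; the sketch below illustrates it on an invented mini-annotation (the edge data and relation semantics are assumptions for illustration only).

```python
from collections import defaultdict

# Hypothetical mini-annotation of "Peel the potatoes. Boil them. Mash."
# Vertices are actions and ingredients; arcs are semantic relations.
edges = [
    ("peel", "potato"),   # action -> ingredient acted upon
    ("boil", "potato"),
    ("mash", "potato"),
    ("whisk", "egg"),     # stray fragment: not linked to the rest
]

def components(edge_list):
    """Connected components of the annotation graph, viewed as undirected."""
    adj = defaultdict(set)
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

comps = components(edges)
# More than one component flags the annotation for human review.
print(len(comps), "components")
```

In a validation interface like the one described, flagged graphs would be the ones surfaced to wiki users for correction.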

    Local flavors and regional markers: The Low Countries and their commercially driven and proximity-focused film remake practice

    The practice of Dutch-Flemish film remaking that came into existence in the new millennium quickly appeared to be of great importance in the film industries of Flanders and The Netherlands – and consequently of Europe. Inspired by methods used in television (format) studies, this article conducts a systematic comparative film analysis of nine Dutch-Flemish remakes together with their nine source films. Considering the remake as a prism that aids in dissecting different formal, transtextual, and cultural codes, and subsequently embedding the practice in its specific socio-cultural and industrial context, we found several similarities and differences between the Dutch and Flemish film versions and showed how these can be made sense of. More generally, we distilled two encompassing principles that govern the remake practice: even though a great deal of the remake process can be explained through the concept of localization – or, more precisely, through the concepts of ‘manufacturing proximity’ and ‘banal aboutness’ – we found that it should certainly not be limited to these processes, as both (trans)textual elements, such as the mechanism of ‘filling in the gaps’, and contextual elements were found to play a role.

    Automatic case acquisition from texts for process-oriented case-based reasoning

    This paper introduces a method for the automatic acquisition of a rich case representation from free text for process-oriented case-based reasoning. Case engineering is among the most complicated and costly tasks in implementing a case-based reasoning system. This is especially so for process-oriented case-based reasoning, where more expressive case representations are generally used and, in our opinion, actually required for satisfactory case adaptation. In this context, the ability to acquire cases automatically from procedural texts is a major step forward in order to reason about processes. We therefore detail a methodology that makes case acquisition from processes described as free text possible, with special attention given to assembly instruction texts. This methodology extends the techniques we used to extract actions from cooking recipes. We argue that techniques taken from natural language processing are required for this task, and that they give satisfactory results. An evaluation based on our implemented prototype extracting workflows from recipe texts is provided. Comment: In press, publication expected in 201
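The core move described here, turning procedural free text into an ordered action sequence, can be caricatured with a much simpler heuristic than the paper's NLP pipeline: treat the leading word of each imperative sentence as the action. This is only a toy sketch under that assumption; the recipe string and function name are invented for illustration.

```python
import re

# Toy procedural text (invented example).
RECIPE = "Peel the potatoes. Boil them for 20 minutes. Mash with butter."

def extract_actions(text):
    """Naively take the first word of each sentence as the cooking action.

    Real acquisition (as in the paper) needs syntactic analysis to handle
    anaphora, coordination, and non-initial verbs; this is illustration only.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    return [s.split()[0].lower() for s in sentences]

print(extract_actions(RECIPE))  # ['peel', 'boil', 'mash']
```

The gap between this heuristic and robust extraction (e.g. "them" referring back to the potatoes) is precisely why the authors argue NLP techniques are required.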

    Cloud service localisation

    The essence of cloud computing is the provision of software and hardware services to a range of users in different locations. The aim of cloud service localisation is to facilitate the internationalisation and localisation of cloud services by allowing their adaptation to different locales. We address lingual localisation by providing service-level language translation techniques to adapt services to different languages, and regulatory localisation by providing standards-based mappings to achieve regulatory compliance with regionally varying laws, standards and regulations. The aim is to support and enforce the explicit modelling of aspects particularly relevant to localisation, and runtime support consisting of tools and middleware services to automate the deployment based on models of locales, driven by the two localisation dimensions. We focus here on an ontology-based conceptual information model that integrates locale specification in a coherent way.
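The two localisation dimensions, lingual and regulatory, can be pictured as a locale model that resolves both a translated service artefact and the applicable regulation. The sketch below is a deliberately minimal illustration of that idea; all locale codes, regulation names, and structures are assumptions, not the paper's ontology.

```python
# Hypothetical locale model combining the two localisation dimensions.
# Names and values are illustrative only.
LOCALES = {
    "de-DE": {"language": "de", "regulation": "GDPR"},
    "en-US": {"language": "en", "regulation": "CCPA"},
}

# Service-level message catalogue, keyed by language.
MESSAGES = {
    "de": {"greeting": "Willkommen"},
    "en": {"greeting": "Welcome"},
}

def localise(service_key, locale):
    """Resolve a service message and the applicable regulation for a locale."""
    model = LOCALES[locale]
    text = MESSAGES[model["language"]][service_key]
    return text, model["regulation"]

print(localise("greeting", "de-DE"))  # ('Willkommen', 'GDPR')
```

An ontology-based model, as the abstract proposes, would replace these flat dictionaries with typed concepts and relations, but the lookup driven by locale stays the same in spirit.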

    The systematic guideline review: method, rationale, and test on chronic heart failure

    Background: Evidence-based guidelines have the potential to improve healthcare. However, their de novo development requires substantial resources, especially for complex conditions, and adaptation may be biased by contextually influenced recommendations in source guidelines. In this paper we describe a new approach to guideline development, the systematic guideline review (SGR) method, and its application in the development of an evidence-based guideline for family physicians on chronic heart failure (CHF). Methods: A systematic search for guidelines was carried out. Evidence-based guidelines on CHF management in adults in ambulatory care published in English or German between the years 2000 and 2004 were included. Guidelines on acute or right heart failure were excluded. Eligibility was assessed by two reviewers, the methodological quality of selected guidelines was appraised using the AGREE instrument, and a framework of relevant clinical questions for diagnostics and treatment was derived. Data were extracted into evidence tables, systematically compared by means of a consistency analysis, and synthesized in a preliminary draft. The most relevant primary sources were re-assessed to verify the cited evidence. Evidence and recommendations were summarized in a draft guideline. Results: Of the 16 included guidelines, five were of good quality. A total of 35 recommendations were systematically compared: 25/35 were consistent, 9/35 inconsistent, and 1/35 un-rateable (derived from a single guideline). Of the 25 consistencies, 14 were based on consensus, seven on evidence, and four differed in grading. Major inconsistencies were found in 3/9 of the inconsistent recommendations. We re-evaluated the evidence for 17 recommendations (evidence-based, differing evidence levels and minor inconsistencies); the majority were congruent. Incongruity was found where the stated evidence could not be verified in the cited primary sources, or where the evaluation in the source guidelines focused on treatment benefits and underestimated the risks. The draft guideline was completed in 8.5 man-months. The main limitation of this study was the lack of a second reviewer. Conclusion: The systematic guideline review, including framework development, consistency analysis and validation, is an effective, valid, and resource-saving approach to the development of evidence-based guidelines.

    Transductive Learning with String Kernels for Cross-Domain Text Classification

    For many text classification tasks, there is a major problem posed by the lack of labeled data in a target domain. Although classifiers for a target domain can be trained on labeled text data from a related source domain, the accuracy of such classifiers is usually lower in the cross-domain setting. Recently, string kernels have obtained state-of-the-art results in various text classification tasks such as native language identification or automatic essay scoring. Moreover, classifiers based on string kernels have been found to be robust to the distribution gap between different domains. In this paper, we formally describe an algorithm composed of two simple yet effective transductive learning approaches to further improve the results of string kernels in cross-domain settings. By adapting string kernels to the test set without using the ground-truth test labels, we report significantly better accuracy rates in cross-domain English polarity classification. Comment: Accepted at ICONIP 2018. arXiv admin note: substantial text overlap with arXiv:1808.0840
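A string kernel of the kind the abstract refers to measures text similarity through shared character n-grams rather than words, which is part of why it transfers across domains. Below is a minimal spectrum-kernel sketch for a single n-gram length; the paper's kernels (and its transductive adaptation steps) are more elaborate, so treat this purely as an illustration of the underlying similarity measure.

```python
from collections import Counter

def ngram_profile(text, n=3):
    """Count the character n-grams of a string (single n, unnormalised)."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def string_kernel(a, b, n=3):
    """Spectrum kernel: dot product of the two n-gram count vectors."""
    pa, pb = ngram_profile(a, n), ngram_profile(b, n)
    return sum(pa[g] * pb[g] for g in pa.keys() & pb.keys())

# Character n-grams make the kernel insensitive to word choice to a degree,
# which helps when source and target domains use different vocabulary.
print(string_kernel("great movie, loved it", "loved this great film"))
```

In practice the kernel matrix over labeled and unlabeled examples is normalised and fed to a kernel classifier; the transductive trick in the paper adapts that matrix using the unlabeled test documents, never their labels.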

    Institutions for Climate Adaptation: An Inventory of Institutions in the Netherlands that are Relevant for Climate Change

    One of the goals of project IC12, a research project of the Climate changes Spatial Planning programme, is to assess whether the formal institutions operating in the Netherlands are improving or hampering adaptive capacity. In order to answer this research question, the most important documents referring to those institutions need to be evaluated. This document presents an initial inventory of these adaptation institutions – i.e. policy plans, laws and directives, reports and other documents that seemed relevant to the question at hand.