
    Improving institutional memory on challenges and methods for estimation of pig herd antimicrobial exposure based on data from the Danish Veterinary Medicines Statistics Program (VetStat)

    With the increasing occurrence of antimicrobial resistance, more attention has been directed towards surveillance of both human and veterinary antimicrobial use. Since the early 2000s, several research papers on Danish pig antimicrobial usage have been published, based on data from the Danish Veterinary Medicines Statistics Program (VetStat). VetStat was established in 2000 as a national database containing detailed information on purchases of veterinary medicine. This paper presents a critical set of challenges originating from static system features, which researchers must address when estimating antimicrobial exposure in Danish pig herds. Most of the challenges presented are followed by at least one robust solution; a further set of challenges that require awareness from the researcher, but for which no immediate solution is available, is also presented. The selection of challenges and solutions was based on consensus within a cross-institutional group of researchers working on projects using VetStat data. No quantitative data quality evaluations were performed, as the frequency of errors and inconsistencies in a dataset varies with the period covered by the data. Instead, this paper focuses on clarifying how VetStat data may be translated into an estimate of antimicrobial exposure at herd level, by suggesting uniform methods of extracting and editing data in order to obtain reliable and comparable estimates of pig antimicrobial consumption for research purposes.
    Comment: 25 pages, including two Appendices (pages not numbered). Title page, including abstract, is on page 1. Body of text, including references, abbreviation list and disclaimers for conflict of interest and funding, is on pages 2-18. Two figures are embedded in the text on pages 3 and 5. Appendix 1 starts on page 19, and Appendix 2 on page 2
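
    As a rough illustration of the kind of translation the paper addresses, the sketch below converts purchase records into a herd-level exposure figure expressed as Animal Daily Doses (ADD) per 100 animals per day, a metric commonly used for pig herds; the column names, standard weights and doses are assumptions for illustration, not the VetStat schema or the paper's method.

        # Hypothetical sketch: converting purchase records of veterinary antimicrobials
        # into a herd-level exposure estimate expressed as Animal Daily Doses (ADD)
        # per 100 animals per day. Column names, standard doses and weights are
        # illustrative assumptions, not the VetStat schema or the paper's exact method.

        STANDARD_WEIGHT_KG = {"weaner": 15, "finisher": 50, "sow": 200}  # assumed standard weights

        def exposure_add_per_100(purchases, herd_size, age_group, period_days):
            """Estimate ADDs per 100 animals per day for one herd and age group.

            purchases: list of dicts with 'amount_active_mg' (mg of active compound
            purchased) and 'add_dose_mg_per_kg' (assumed standard daily dose per kg).
            """
            weight = STANDARD_WEIGHT_KG[age_group]
            total_adds = 0.0
            for p in purchases:
                dose_per_animal_mg = p["add_dose_mg_per_kg"] * weight  # one daily dose for a standard animal
                total_adds += p["amount_active_mg"] / dose_per_animal_mg
            # Scale to doses per 100 animals per day over the observation period.
            return total_adds / (herd_size * period_days) * 100

        # Example: 2 kg of active compound dosed at 10 mg/kg/day, 500 finishers, one year.
        purchases = [{"amount_active_mg": 2_000_000, "add_dose_mg_per_kg": 10.0}]
        print(round(exposure_add_per_100(purchases, 500, "finisher", 365), 2))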

    Historical collaborative geocoding

    Recent developments in digital technology have provided large data sets that can be accessed and used increasingly easily. These data sets often contain indirect localisation information, such as historical addresses. Historical geocoding is the process of transforming indirect localisation information into direct localisation that can be placed on a map, which enables spatial analysis and cross-referencing. Many efficient geocoders exist for current addresses, but they do not deal with the temporal aspect and are based on a strict hierarchy (..., city, street, house number) that is hard or impossible to use with historical data. Indeed, historical data are full of uncertainties (temporal aspect, semantic aspect, spatial precision, confidence in the historical source, ...) that cannot be resolved, as there is no way to go back in time to check. We propose an open source, open data, extensible solution for geocoding that is based on building gazetteers composed of geohistorical objects extracted from historical topographical maps. Once the gazetteers are available, geocoding a historical address is a matter of finding the geohistorical object in the gazetteers that best matches the historical address. The matching criteria are customisable and span several dimensions (fuzzy semantic, fuzzy temporal, scale, spatial precision, ...). As the goal is to facilitate historical work, we also propose web-based user interfaces that help geocode addresses (one at a time or in batch mode) and display the results over current or historical topographical maps, so that they can be checked and collaboratively edited. The system is tested on the city of Paris for the 19th and 20th centuries; it shows a high return rate and is fast enough to be used interactively.
    Comment: WORKING PAPER
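
    To make the matching step concrete, the sketch below scores gazetteer entries against a historical address along two of the dimensions mentioned (fuzzy semantic and fuzzy temporal) and returns the highest-scoring entry; the scoring functions, weights and sample entries are illustrative assumptions rather than the geocoder's actual implementation.

        # Illustrative sketch of multi-criteria matching between a historical address
        # and gazetteer entries; scoring functions, weights and sample entries are
        # assumptions, not the geocoder's actual algorithm.
        from difflib import SequenceMatcher

        def semantic_score(query_name, entry_name):
            # Fuzzy string similarity in [0, 1]; a real system might use trigram or edit distance.
            return SequenceMatcher(None, query_name.lower(), entry_name.lower()).ratio()

        def temporal_score(query_year, valid_from, valid_to, fuzziness_years=10):
            # 1.0 inside the entry's validity interval, decaying linearly outside it.
            if valid_from <= query_year <= valid_to:
                return 1.0
            gap = min(abs(query_year - valid_from), abs(query_year - valid_to))
            return max(0.0, 1.0 - gap / fuzziness_years)

        def best_match(query, gazetteer, weights=None):
            # Rank gazetteer entries by a weighted combination of per-dimension scores.
            weights = weights or {"semantic": 0.6, "temporal": 0.4}
            scored = [(weights["semantic"] * semantic_score(query["name"], e["name"])
                       + weights["temporal"] * temporal_score(query["year"], e["from"], e["to"]), e)
                      for e in gazetteer]
            return max(scored, key=lambda pair: pair[0])

        gazetteer = [
            {"name": "Rue de la Paix", "from": 1814, "to": 1900, "geom": (2.3310, 48.8687)},
            {"name": "Rue Napoleon", "from": 1806, "to": 1814, "geom": (2.3310, 48.8687)},
        ]
        print(best_match({"name": "rue de la paix", "year": 1870}, gazetteer))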

    Learning signals of adverse drug-drug interactions from the unstructured text of electronic health records.

    Drug-drug interactions (DDIs) account for 30% of all adverse drug reactions, which are the fourth leading cause of death in the US. Current methods for post-marketing surveillance primarily use spontaneous reporting systems for learning DDI signals and validate those signals using the structured portions of Electronic Health Records (EHRs). We demonstrate a fast, annotation-based approach that uses standard odds ratios to identify signals of DDIs directly from the textual portion of EHRs and which, to our knowledge, is the first effort of its kind. We developed a gold standard of 1,120 DDIs spanning 14 adverse events and 1,164 drugs. Our evaluations on this gold standard, using millions of clinical notes from the Stanford Hospital, confirm that identifying DDI signals from clinical text is feasible (AUROC = 81.5%). We conclude that the text in EHRs contains valuable information for learning DDI signals and has enormous utility in drug surveillance and clinical decision support.
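
    To illustrate the signal statistic, the sketch below computes a standard odds ratio with a 95% confidence interval from a 2x2 table of patients exposed or not exposed to a drug pair versus whether the adverse event is mentioned in their notes; the counts and the continuity correction are illustrative assumptions, not figures from the study.

        import math

        def odds_ratio_with_ci(a, b, c, d, z=1.96):
            """Odds ratio and 95% confidence interval from a 2x2 contingency table.

            a: exposed to the drug pair, adverse event mentioned
            b: exposed to the drug pair, no adverse event mentioned
            c: not co-exposed, adverse event mentioned
            d: not co-exposed, no adverse event mentioned
            """
            if 0 in (a, b, c, d):  # Haldane-Anscombe continuity correction
                a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
            or_ = (a * d) / (b * c)
            se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
            lo = math.exp(math.log(or_) - z * se)
            hi = math.exp(math.log(or_) + z * se)
            return or_, lo, hi

        # Hypothetical patient counts derived from annotated clinical notes.
        or_, lo, hi = odds_ratio_with_ci(a=40, b=960, c=55, d=8945)
        print(f"OR={or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # treat as a signal if the lower bound exceeds 1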

    Design and Implementation of a Method Base Management System for a Situational CASE Environment

    Situational method engineering focuses on the configuration of system development methods (SDMs) tuned to the situation of the project at hand. Situational methods are assembled from parts of existing SDMs, so-called method fragments, that are selected to match the project situation. The complex task of selecting appropriate method fragments and assembling them into a method requires effective automated support. The paper describes the architecture of a tool prototype offering such support. We present the structure of its central repository, a method base containing method fragments. The functions to store, select and assemble these method fragments are offered by a stratified method base management system tool component, which is described as well.
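
    As an illustration of what a method base might look like, the sketch below models method fragments tagged with the project situations they suit, together with store, select and assemble operations; the field names and selection rule are assumptions for illustration, not the prototype's actual design.

        # Hypothetical sketch of a method base: field names and the selection rule
        # are assumptions for illustration, not the prototype's actual design.
        from dataclasses import dataclass, field

        @dataclass
        class MethodFragment:
            # One reusable piece of a system development method stored in the method base.
            name: str
            layer: str                                         # e.g. "process" or "product"
            situation_tags: set = field(default_factory=set)   # project characteristics it suits

        @dataclass
        class MethodBase:
            fragments: list = field(default_factory=list)

            def store(self, fragment):
                self.fragments.append(fragment)

            def select(self, project_situation):
                # Keep fragments whose tags are all present in the project's characteristics.
                return [f for f in self.fragments if f.situation_tags <= project_situation]

            def assemble(self, project_situation):
                # Naive assembly: order the selected fragments by layer; a real tool
                # would also check completeness and consistency rules.
                return sorted(self.select(project_situation), key=lambda f: f.layer)

        mb = MethodBase()
        mb.store(MethodFragment("Prototype UI design", "process", {"unclear_requirements"}))
        mb.store(MethodFragment("Formal data model", "product", {"stable_requirements"}))
        print([f.name for f in mb.assemble({"unclear_requirements", "small_team"})])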

    Towards a New Science of a Clinical Data Intelligence

    In this paper we define Clinical Data Intelligence as the analysis of data generated in the clinical routine with the goal of improving patient care. We define a science of Clinical Data Intelligence as a data analysis that permits the derivation of scientific, i.e., generalizable and reliable results. We argue that a science of Clinical Data Intelligence is sensible in the context of a Big Data analysis, i.e., with data from many patients and with complete patient information. We discuss that Clinical Data Intelligence requires the joint efforts of knowledge engineering, information extraction (from textual and other unstructured data), and statistics and statistical machine learning. We describe some of our main results as conjectures and relate them to a recently funded research project involving two major German university hospitals.
    Comment: NIPS 2013 Workshop: Machine Learning for Clinical Data Analysis and Healthcare, 2013

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    The Application of the Hermeneutic Process to Qualitative Safety Data: A Case Study using Data from the CIRAS project

    This article describes the new qualitative methodology developed for use in CIRAS (Confidential Incident Reporting and Analysis System), the confidential database set up for the UK railways by the University of Strathclyde. CIRAS is a project in which qualitative safety data are disidentified and then stored and analysed in a central database. Due to the confidential nature of the data provided, conventional (positivist) methods of checking their accuracy are not applicable; therefore a new methodology was developed: the Applied Hermeneutic Methodology (AHM). Based on Paul Ricoeur's 'hermeneutic arc', this methodology uses appropriate computer software to provide a method of analysis that can be shown to be reliable (in the sense that consensus in interpretations between different interpreters can be demonstrated). Moreover, given that the classifiers of the textual elements can be represented in numeric form, AHM crosses the 'qualitative-quantitative divide'. It is suggested that this methodology is more rigorous and philosophically coherent than existing methodologies and that it has implications for all areas of the social sciences where qualitative texts are analysed.
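
    One standard way to quantify consensus between interpreters once classifications are in categorical or numeric form is an inter-rater agreement statistic such as Cohen's kappa; the sketch below computes it on hypothetical codings of ten incident reports, as an illustration of the idea rather than the measure used in AHM.

        from collections import Counter

        def cohens_kappa(coder_a, coder_b):
            """Cohen's kappa for two interpreters' categorical codings of the same texts."""
            n = len(coder_a)
            observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
            counts_a, counts_b = Counter(coder_a), Counter(coder_b)
            # Agreement expected by chance if both coders assigned categories
            # independently at their observed rates.
            expected = sum(counts_a[c] * counts_b[c] for c in set(coder_a) | set(coder_b)) / n ** 2
            return (observed - expected) / (1 - expected)

        # Hypothetical codings of ten incident reports by two interpreters.
        coder_a = ["fatigue", "comms", "comms", "equipment", "fatigue",
                   "comms", "equipment", "fatigue", "comms", "comms"]
        coder_b = ["fatigue", "comms", "fatigue", "equipment", "fatigue",
                   "comms", "equipment", "comms", "comms", "comms"]
        print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.68 on this example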