17 research outputs found

    ETL steps are represented with ontologies.

    Components and processes involved in the extraction, transformation and loading of data are represented with ontologies. The mappings (1) and (2) illustrate “simple” and “complex” mappings, respectively.
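
    To make the distinction concrete, here is a minimal Python sketch, with entirely hypothetical names, of how such mappings could be written down as subject-predicate-object triples: a simple mapping links a source element directly to a target element, while a complex mapping routes source elements through an intermediate transformation node.

        # Hypothetical vocabulary; the paper's actual ontology terms are not shown here.
        # A "simple" mapping links a source element directly to a target element.
        simple_mapping = [
            ("src:Weight", "map:mapsTo", "tgt:BodyWeight"),
        ]

        # A "complex" mapping routes the source elements through an intermediate
        # transformation node before they reach the target element.
        complex_mapping = [
            ("src:Gleason1", "map:isOperand1Of", "map:AddNode"),
            ("src:Gleason2", "map:isOperand2Of", "map:AddNode"),
            ("map:AddNode",  "map:mapsTo",       "tgt:GleasonSum"),
        ]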

    Command type definitions describe how to process mapping nodes from the mapping ontology.

    All intermediate nodes in the mapping ontology are connected to a command type definition. They contain SQL code fragments, which describe how to filter and transform the facts data derived from operands 1 and 2 (OP1 and OP2).
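
    As a rough illustration of this mechanism, the following Python sketch stores an SQL fragment per command type and fills its OP1/OP2 slots with the SQL compiled for the two operands; the placeholder syntax and names are assumptions, not the paper's actual format.

        # Assumed names and placeholder syntax: a command type definition holds
        # an SQL fragment with slots for the results of its two operands.
        COMMAND_TYPES = {
            "ADD": ("SELECT key, (OP1.value + OP2.value) AS value "
                    "FROM ({OP1}) OP1 JOIN ({OP2}) OP2 USING (key)"),
        }

        def expand(command, op1_sql, op2_sql):
            """Fill the OP1/OP2 slots with the SQL compiled for the operands."""
            return COMMAND_TYPES[command].format(OP1=op1_sql, OP2=op2_sql)

        sql = expand("ADD",
                     "SELECT key, value FROM facts WHERE concept = 'Gleason1'",
                     "SELECT key, value FROM facts WHERE concept = 'Gleason2'")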

    Overloading internal data model properties.

    This real-world example illustrates how semantic relationships between source data elements are stored explicitly in the ontologies and how they can be processed: stating, for example, that Gleason1 hasDateStartValueColumn DateBiops tells the export software to use the data entry in the Value column of DateBiops as the DateStartValue in Gleason1. Gleason2 is processed the same way.
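
    A small Python sketch of how an exporter might act on such a triple; the data layout below is invented for illustration.

        # Hypothetical source layout: data element -> column name -> entry.
        source_rows = {
            "DateBiops": {"Value": "2014-03-17"},
            "Gleason1":  {"Value": "3"},
        }

        ontology = [("Gleason1", "hasDateStartValueColumn", "DateBiops")]

        export_record = {"concept": "Gleason1",
                         "Value": source_rows["Gleason1"]["Value"]}
        for subj, pred, obj in ontology:
            if subj == "Gleason1" and pred == "hasDateStartValueColumn":
                # Use the entry in the Value column of DateBiops as the
                # DateStartValue of the exported Gleason1 record.
                export_record["DateStartValue"] = source_rows[obj]["Value"]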

    Overview of the approach.

    The illustration gives an overview of our approach by combining several of the previous figures in simplified form. The upper part (blue box) represents a mapping, which is visible to the user. The parts in the middle are internal ontology concepts that are hidden from the user. The SQL code in the lower part has been compiled automatically from the ontologies above.
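
    The flow from user-visible mapping to generated SQL can be pictured with a short recursive compiler in Python; the node structure and SQL fragments below are assumptions made for this sketch, not the system's actual internals.

        # Assumed node structure: leaves are source data elements, inner nodes
        # are internal command concepts that stay hidden from the user.
        SQL_FRAGMENTS = {
            "ADD": ("SELECT key, (OP1.value + OP2.value) AS value "
                    "FROM ({OP1}) OP1 JOIN ({OP2}) OP2 USING (key)"),
        }

        def compile_node(node):
            """Recursively compile a mapping node tree into one SQL statement."""
            if node["type"] == "source":
                return ("SELECT key, value FROM facts "
                        "WHERE concept = '{}'".format(node["concept"]))
            op1 = compile_node(node["operand1"])
            op2 = compile_node(node["operand2"])
            return SQL_FRAGMENTS[node["type"]].format(OP1=op1, OP2=op2)

        mapping = {"type": "ADD",
                   "operand1": {"type": "source", "concept": "Gleason1"},
                   "operand2": {"type": "source", "concept": "Gleason2"}}
        print(compile_node(mapping))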

    Cascading of mapping nodes.

    Cascaded mapping nodes allow the definition of arbitrary data transformations. The illustration has to be read from right to left, hasOperand1 before hasOperand2. Paraphrased, it means: if no data for Gleason 3 (left side) exist, add the Gleason 1 and Gleason 2 data and export them as Gleason 3 (right side) records. Details of the NOTEXISTS, ADD and IF nodes' semantics, and why NOTEXISTS requires a second operand, are given in Tables F and G in S1 File (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0116656#pone.0116656.s001).
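
    Read literally, the paraphrase corresponds to something like the following Python sketch; the real NOTEXISTS/ADD/IF semantics are defined in S1 File, so this is only a plain-English reading of the caption.

        def export_gleason3(facts):
            """facts: dict mapping a concept name to a list of numeric values."""
            if not facts.get("Gleason3"):               # NOTEXISTS: no Gleason 3 data
                g1 = facts.get("Gleason1", [])
                g2 = facts.get("Gleason2", [])
                # ADD: combine the Gleason 1 and Gleason 2 values pairwise ...
                return [a + b for a, b in zip(g1, g2)]  # ... exported as Gleason 3
            return facts["Gleason3"]                    # IF: existing data wins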

    Acceptance by laypersons and medical professionals of the personalized eHealth platform, eHealthMonitor

    Introduction and background: eHealth services are often not accepted because of factors such as eHealth literacy or trust. In this study, eHealthMonitor was evaluated in three European countries (Germany, Greece, and Poland) by medical professionals and laypersons with respect to numerous acceptance factors.

    Methods: Questionnaires were created on the basis of factors from the literature and with the help of previously validated scales. A qualitative survey was conducted in Germany, Poland, and Greece.

    Results: The eHealth literacy of all participants was medium to high. Laypersons mostly agreed that they could easily become skillful with eHealthMonitor and that other people thought they should use it. Among medical professionals, many were afraid that eHealthMonitor could violate their privacy or the privacy of their patients. Overall, the participants thought that eHealthMonitor was a good concept and that they would use it.

    Discussion and conclusion: The main hindrances to the use of eHealthMonitor were trust issues, including data privacy. In the future, more research on the linkage of all measured factors is needed, for example to address the question of whether highly educated people tend to mistrust eHealth information more than people with lower levels of education.

    Towards a Computable Data Corpus of Temporal Correlations between Drug Administration and Lab Value Changes

    Background: The analysis of electronic health records for automated detection of adverse drug reactions is an approach to solving the problems that arise from traditional methods such as spontaneous reporting or manual chart review. Algorithms addressing this task should be modeled on the criteria for a standardized case causality assessment defined by the World Health Organization. One of these criteria is the temporal relationship between drug intake and the occurrence of a reaction or a laboratory test abnormality. However, appropriate data that would allow for developing or validating related algorithms are not publicly available.

    Methods: In order to provide such data, retrospective routine data on drug administrations and temporally corresponding laboratory observations from a university clinic were extracted, transformed and evaluated by experts with regard to a reasonable temporal relationship between drug administration and lab value alteration.

    Results: The result is a data corpus of 400 episodes of normalized laboratory parameter values in temporal context with drug administrations. Each episode has been manually classified as to whether it contains data that might indicate a temporal correlation between the drug administration and the change of the lab value course, whether such a change is not observable, or whether a decision between those two options is not possible given the data. In addition, each episode has been assigned a concordance value indicating how difficult it is to assess. This is the first open data corpus of a computable ground truth of temporal correlations between drug administrations and lab value alterations.

    Discussion: The main purpose of this data corpus is to provide data for further research and a ground truth that allows the outcome of other assessments of this data to be compared with the assessments made by human experts. It can serve as a contribution towards systematic, computerized ADR detection in retrospective data. With this lab value curve data as a basis, algorithms for detecting temporal relationships can be developed, and with the classification made by human experts, these algorithms can be validated immediately. Due to the normalization of the lab value data, the corpus supports a generic approach rather than one restricted to specific or solitary drug/lab value combinations.
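
    A sketch of what a machine-readable episode and a validation loop against the expert ground truth might look like; the field names and label values are assumptions made for illustration, not the corpus's actual schema.

        # Assumed schema: each episode pairs a normalized lab value course with
        # a drug administration and the experts' classification.
        episodes = [
            {"id": 1,
             "lab_values": [0.9, 1.0, 1.8, 2.4],   # normalized course
             "drug_given_at": 1,                   # index into the course
             "expert_label": "correlated",         # or "no_change" / "undecidable"
             "concordance": 0.85},                 # how easy the episode is to assess
        ]

        def validate(algorithm):
            """Fraction of episodes where the algorithm matches the experts."""
            hits = sum(algorithm(e) == e["expert_label"] for e in episodes)
            return hits / len(episodes)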