
    Cross-patient, multiple use of patient data for clinical research using archetypes

    In routine care as well as in clinical trials, more and more data are processed electronically. Nevertheless, an exchange of data between the two domains is often not yet established, so data have to be captured multiple times. This redundant data entry is time-consuming and can lead to inconsistencies between the hospital information system (HIS) and the study data management system (SDMS). Although data exchange between research and care would often be technically feasible, it usually still fails due to a lack of semantic interoperability. Archetypes are an innovative concept for designing flexible and easily extensible electronic health records. They enable semantic interoperability between systems that use the same archetypes. The archetype concept has meanwhile also been adopted in international standards (ISO 13606). The openEHR specifications define a model for electronic health records that is compatible with ISO 13606 but goes beyond it. So far, archetypes have mainly been developed and used for information systems in routine care rather than for clinical research. The aim of this work was therefore to develop generic approaches, based on the openEHR specifications and archetypes, that enable the multiple use of care data in research, and to assess their feasibility. A preliminary study found that 35% of the data items to be collected in the examined trial could have been taken from the investigated HIS, provided the data were available there electronically and in sufficiently structured form. In a second step, openSDMS, a prototype of an archetype-based integrated electronic health record and study data management system, was provided. Requirements were derived from the preliminary study and the implementation of openSDMS, and a reference architecture based on openEHR archetypes was developed that supports the use of HIS data in clinical trials. It covers the integration of archetype-based as well as conventional hospital information systems. Core components of this architecture are archetype-based semantic annotations of study data and import and export modules that use the Archetype Query Language (AQL). The presented reference architecture enables the transition from multiple capture to multiple use of data in research and care. To realize the developed reference architecture, suitable archetypes are also needed for research data. Therefore, archetypes were specified for documenting all data elements of the four CDASH domains 'Common Identifier Variables', 'Common Timing Variables', 'Adverse Events', and 'Prior and Concomitant Medications' (study data). For this purpose, a total of 23 data items were newly defined based on archetypes, for which three existing archetypes were specialized and two were newly developed. To define CDASH-compliant electronic case report forms for the considered domains, four openEHR templates were designed based on the specified archetypes. Furthermore, 71 data items in 16 archetypes were defined for documenting study metadata. All newly designed archetypes were described in both English and German and can now be used as a reference information model for research data. In addition, all data items defined by the provided archetypes were mapped to the models established in clinical research: BRIDG, CDASH and ODM
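
    The abstract above describes import and export modules that retrieve care data for research via the Archetype Query Language (AQL). As a minimal illustrative sketch only, the following Python snippet runs an AQL query through the standard openEHR REST API; the server URL, the choice of the blood-pressure archetype, and the result handling are assumptions for illustration, not components of the work described above.

import requests

# Assumed openEHR repository endpoint (e.g. an EHRbase instance); adjust as needed.
OPENEHR_BASE = "https://ehr.example.org/ehrbase/rest/openehr/v1"

# AQL: systolic/diastolic blood pressure observations documented in routine care,
# using the paths of the published openEHR blood-pressure archetype.
AQL = """
SELECT c/context/start_time/value AS recorded_at,
       o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
       o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
FROM EHR e
  CONTAINS COMPOSITION c
  CONTAINS OBSERVATION o[openEHR-EHR-OBSERVATION.blood_pressure.v2]
"""

def export_blood_pressure():
    """Run the AQL query and return rows ready for mapping into a study data set."""
    response = requests.post(
        f"{OPENEHR_BASE}/query/aql",          # openEHR REST API ad-hoc query endpoint
        json={"q": AQL},
        headers={"Accept": "application/json"},
        timeout=30,                           # authentication omitted for brevity
    )
    response.raise_for_status()
    result = response.json()                  # result set with "columns" and "rows"
    names = [col["name"] for col in result["columns"]]
    return [dict(zip(names, row)) for row in result["rows"]]

if __name__ == "__main__":
    for row in export_blood_pressure():
        print(row)

    In the reference architecture sketched above, rows returned by such a query would then be mapped, via the archetype-based semantic annotations, onto the corresponding study data elements (e.g. CDASH variables) in the SDMS.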

    Abstracts from the 8th International Conference on cGMP Generators, Effectors and Therapeutic Implications

    This work was supported by a restricted research grant of Bayer AG

    Resiliency in Numerical Algorithm Design for Extreme Scale Simulations

    This work is based on the seminar titled “Resiliency in Numerical Algorithm Design for Extreme Scale Simulations”, held March 1–6, 2020, at Schloss Dagstuhl and attended by all the authors. Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements? One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge
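
    To make the checkpoint/rollback avenue concrete, here is a minimal, hedged sketch of application-level checkpointing with rollback on a detected error. The checkpoint cost, mean time between failures, fault model and file layout are assumptions, and the checkpoint period uses the classic Young/Daly estimate sqrt(2 * C * MTBF) rather than anything prescribed by the seminar.

import math
import os
import pickle
import time

CHECKPOINT_FILE = "state.ckpt"   # assumed path on reliable storage
CHECKPOINT_COST = 30.0           # assumed seconds to write one checkpoint (C)
MTBF = 4 * 3600.0                # assumed mean time between failures, in seconds
# Young/Daly estimate for a near-optimal checkpoint period.
CHECKPOINT_INTERVAL = math.sqrt(2.0 * CHECKPOINT_COST * MTBF)


class DetectedSoftError(Exception):
    """Raised when an error detector (e.g. a checksum) flags corrupted data."""


def compute_step(step):
    # Stand-in for one unit of the real numerical computation.
    return 1.0 / (step + 1)


def save_checkpoint(state):
    with open(CHECKPOINT_FILE, "wb") as f:
        pickle.dump(state, f)


def load_checkpoint():
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "value": 0.0}   # fresh start if no checkpoint exists


def run(total_steps=10_000):
    state = load_checkpoint()
    last_ckpt = time.monotonic()
    while state["step"] < total_steps:
        try:
            state["value"] += compute_step(state["step"])
            state["step"] += 1
            if time.monotonic() - last_ckpt >= CHECKPOINT_INTERVAL:
                save_checkpoint(state)          # periodic checkpoint
                last_ckpt = time.monotonic()
        except DetectedSoftError:
            state = load_checkpoint()           # roll back to the last good state
    return state


if __name__ == "__main__":
    print(run())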

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled “Resiliency in Numerical Algorithm Design for Extreme Scale Simulations”, held March 1–6, 2020, at Schloss Dagstuhl and attended by all the authors. Advanced supercomputing is characterized by very high computation speeds at the cost of involving an enormous amount of resources and costs. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically
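
    As a back-of-the-envelope check of the figures quoted above (roughly a million kWh, about 100k Euro, about 10^23 floating-point operations), the short Python sketch below redoes the arithmetic; the electricity price of 0.10 EUR/kWh and a sustained rate of one exaflop per second are assumptions.

# Sanity check of the abstract's figures; price and sustained flop rate are assumed.
POWER_MW = 20              # system power draw predicted for exascale systems
HOURS = 48                 # runtime of the large-scale computation
FLOP_RATE = 1e18           # assumed sustained rate: one exaflop per second
PRICE_EUR_PER_KWH = 0.10   # assumed electricity price

energy_kwh = POWER_MW * 1_000 * HOURS        # 20 MW * 48 h = 960,000 kWh ~ a million kWh
cost_eur = energy_kwh * PRICE_EUR_PER_KWH    # ~ 96,000 EUR ~ 100k Euro
total_flops = FLOP_RATE * HOURS * 3600       # ~ 1.7e23 ~ 10^23 operations

print(f"{energy_kwh:,.0f} kWh, ~{cost_eur:,.0f} EUR, {total_flops:.1e} flop")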

    Clinical phenotype and course of PDE6A-associated retinitis pigmentosa disease, characterized in preparation for a gene supplementation trial

    IMPORTANCE Treatment trials require sound knowledge of the natural course of disease. OBJECTIVE To assess clinical features, genetic findings, and genotype-phenotype correlations in patients with retinitis pigmentosa (RP) associated with biallelic sequence variations in the PDE6A gene in preparation for a gene supplementation trial. DESIGN, SETTING, AND PARTICIPANTS This prospective, longitudinal, observational cohort study was conducted from January 2001 to December 2019 in a single center (Centre for Ophthalmology of the University of Tübingen, Germany) with patients recruited multinationally from 12 collaborating European tertiary referral centers. Patients with retinitis pigmentosa, sequence variants in PDE6A, and the ability to provide informed consent were included. EXPOSURES Comprehensive ophthalmological examinations; validation of compound heterozygosity and biallelism by familial segregation analysis, allelic cloning, or assessment of next-generation sequencing-read data, where possible. MAIN OUTCOMES AND MEASURES Genetic findings and clinical features describing the entire cohort and comparing patients harboring the 2 most common disease-causing variants in a homozygous state (c.304C>A;p.(R102S) and c.998+1G>A;p.?). RESULTS Fifty-seven patients (32 female patients [56%]; mean [SD] age, 40 [14] years) from 44 families were included. All patients completed the study. Thirty patients were homozygous for disease-causing alleles. Twenty-seven patients were heterozygous for 2 different PDE6A variants each. The most frequently observed alleles were c.304C>A;p.(R102S), c.998+1G>A;p.?, and c.2053G>A;p.(V685M). The mean (SD) best-corrected visual acuity was 0.43 (0.48) logMAR (Snellen equivalent, 20/50). The median visual field area with object III4e was 660 square degrees (5th and 95th percentiles, 76 and 11 019 square degrees; 25th and 75th percentiles, 255 and 3923 square degrees). Dark-adapted and light-adapted full-field electroretinography showed no responses in 88 of 108 eyes (81.5%). Sixty-nine of 108 eyes (62.9%) showed additional findings on optical coherence tomography imaging (eg, cystoid macular edema or macular atrophy). The variant c.998+1G>A;p.? led to a more severe phenotype when compared with the variant c.304C>A;p.(R102S). CONCLUSIONS AND RELEVANCE Seventeen of the PDE6A variants found in these patients appeared to be novel. Regarding the clinical findings, disease was highly symmetrical between the right and left eyes and visual impairment was mild or moderate in 90% of patients, providing a window of opportunity for gene therapy