9 research outputs found

    Ways to Cancer Counseling Centers: How do People Become Aware of these Centers?

    No full text
    Schranz M, Bayer O, Xyländer M, et al. Ways to Cancer Counseling Centers: How do People Become Aware of these Centers? In: Oncology Research and Treatment. Vol 43. Basel: Karger; 2020: 180

    Development of interoperable, domain-specific extensions for the German Corona Consensus (GECCO) COVID-19 research dataset using an interdisciplinary, consensus-based workflow

    No full text
    Background: The COVID-19 pandemic has spurred large-scale, inter-institutional research efforts. To enable these efforts, researchers must agree on dataset definitions that not only cover all elements relevant to the respective medical specialty but are also syntactically and semantically interoperable. To this end, the German Corona Consensus (GECCO) dataset was previously developed as a harmonized, interoperable collection of the most relevant data elements for COVID-19-related patient research. Because GECCO was designed as a compact core dataset across all medical fields, focused research within particular medical domains demands extension modules containing the data elements most relevant to the research performed in those individual specialties.
    Objective: To (i) specify a workflow for developing interoperable dataset definitions that involves close collaboration between medical experts and information scientists, and (ii) apply the workflow to develop dataset definitions covering the data elements most relevant to COVID-19-related patient research in immunization, pediatrics, and cardiology.
    Methods: We developed a workflow to create dataset definitions that are (i) as relevant as possible in content to a specific field of study and (ii) universally usable across computer systems, institutions, and countries, i.e., interoperable. We then gathered medical experts from three specialties (immunization, pediatrics, and cardiology) to select the data elements most relevant to COVID-19-related patient research in each specialty. We mapped the data elements to international standardized vocabularies and created data exchange specifications using HL7 FHIR. All steps were performed in close interdisciplinary collaboration between medical domain experts and medical information scientists. The profiles and vocabulary mappings were syntactically and semantically validated in a two-stage process.
    Results: We created GECCO extension modules for the immunization, pediatrics, and cardiology domains with respect to pandemic-related research needs. The data elements in each module were selected by medical experts from the respective specialty according to the consensus-based workflow developed here, ensuring that the contents align with the respective research needs. We defined dataset specifications for a total of 48 (immunization), 150 (pediatrics), and 52 (cardiology) data elements that complement the GECCO core dataset. We created and published implementation guides, example implementations, and dataset annotations for each extension module.
    Conclusions: The GECCO extension modules presented here, which contain the data elements most relevant to COVID-19-related patient research in immunization, pediatrics, and cardiology, were defined in an interdisciplinary, iterative, consensus-based workflow that may serve as a blueprint for developing further dataset definitions. The extension modules provide a standardized and harmonized definition of specialty-related datasets that can help enable inter-institutional and cross-country COVID-19 research in these specialties.
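    As a hedged illustration of what such an HL7 FHIR data exchange specification yields in practice, the sketch below prints a minimal FHIR R4 Immunization resource instance of the kind an immunization extension-module profile might constrain. The profile URL, SNOMED CT coding, and patient reference are illustrative placeholders, not the published GECCO definitions.

        // Hypothetical sketch: a minimal FHIR R4 Immunization resource of the
        // kind a GECCO immunization extension profile might constrain. The
        // profile URL and codes are illustrative placeholders only.
        #include <cstdio>

        int main() {
            const char* resource = R"({
              "resourceType": "Immunization",
              "meta": { "profile": ["https://example.org/fhir/StructureDefinition/gecco-immunization"] },
              "status": "completed",
              "vaccineCode": {
                "coding": [{
                  "system": "http://snomed.info/sct",
                  "code": "1119349007",
                  "display": "SARS-CoV-2 mRNA vaccine"
                }]
              },
              "patient": { "reference": "Patient/example" },
              "occurrenceDateTime": "2021-04-01"
            })";
            std::puts(resource);  // payload as it would travel over a FHIR REST interface
        }

    In the published modules, instances like this would be validated against the FHIR profiles and vocabulary mappings described above.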

    Online data compression in the ALICE O2 facility

    Get PDF
    The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Two main aspects of the data handling concept are partial reconstruction of raw data organized in so-called time frames and, based on that information, reduction of the data rate without significant loss of physics information. A production solution for data compression has been in operation for the ALICE Time Projection Chamber (TPC) in the ALICE High Level Trigger online system since 2011. The solution is based on the reconstruction of space points from raw data. These so-called clusters are the input for the reconstruction of particle trajectories. Instead of the raw data, the clusters are stored after transformation of the required parameters into an optimized format and subsequent lossless data compression. With this approach, an average data reduction factor of 4.4 has been achieved. For Run 3, not only a significantly higher reduction is required but also improvements in the implementation of the actual algorithms. The innermost operations of the processing loop effectively need to be called up to O(10^11) times per second to cope with the data rate. This can only be achieved in a parallel scheme and makes these operations candidates for optimization. The potential of template programming and static dispatch in a polymorphic implementation has been studied as an alternative to the commonly used dynamic dispatch at runtime. In this contribution we report on the development of a specific programming technique to efficiently combine the compile-time and runtime domains and present results for the speedup of the algorithm.
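    To make the dispatch comparison concrete, here is a minimal sketch, not ALICE O2 code: the same per-cluster transformation is invoked once through a virtual interface (dynamic dispatch, an indirect call the compiler cannot easily inline) and once through a template parameter (static dispatch, resolved and inlinable at compile time). All names such as PadTransform are illustrative assumptions; in the scheme described above, the concrete template instantiation would typically be selected once at runtime, outside the hot loop.

        // Contrast of dynamic vs. static dispatch for a hot inner loop.
        // Illustrative sketch only; types and the transform formula are made up.
        #include <cstdio>
        #include <vector>

        // --- Dynamic dispatch: one virtual (indirect) call per cluster ---
        struct ITransform {
            virtual ~ITransform() = default;
            virtual float transform(float raw) const = 0;
        };
        struct PadTransform : ITransform {
            float transform(float raw) const override { return raw * 0.5f + 1.0f; }
        };

        float sumDynamic(const ITransform& t, const std::vector<float>& clusters) {
            float s = 0.0f;
            for (float c : clusters) s += t.transform(c);  // indirect call, hard to inline
            return s;
        }

        // --- Static dispatch: the concrete type is a template parameter, so
        // the call is resolved at compile time and can be inlined. ---
        struct PadTransformStatic {
            float transform(float raw) const { return raw * 0.5f + 1.0f; }
        };

        template <typename Transform>
        float sumStatic(const Transform& t, const std::vector<float>& clusters) {
            float s = 0.0f;
            for (float c : clusters) s += t.transform(c);  // direct call, inlinable
            return s;
        }

        int main() {
            std::vector<float> clusters(1000, 2.0f);
            PadTransform dyn;
            PadTransformStatic sta;
            std::printf("dynamic: %f\n", sumDynamic(dyn, clusters));
            std::printf("static:  %f\n", sumStatic(sta, clusters));
        }

    Combining the compile-time and runtime domains, as the abstract puts it, then amounts to a runtime switch that picks a fully instantiated template specialization once, before entering the per-cluster loop.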

    Effects of zolpidem on sedation, anxiety, and memory in the plus-maze discriminative avoidance task

    No full text