Additional file 3 of Leveraging the EHR4CR platform to support patient inclusion in academic studies: challenges and lessons learned
HEGP-CDW Data Completeness for each Medical Concept Individualized During the Normalization Process (DOCX 28 kB)
Additional file 2 of Leveraging the EHR4CR platform to support patient inclusion in academic studies: challenges and lessons learned
Normalized Criteria for the DERENEDIAB, aXa and EWING 2008 studies (DOCX 49 kB)
Additional file 1 of Leveraging the EHR4CR platform to support patient inclusion in academic studies: challenges and lessons learned
Free Text Eligibility Criteria for the DERENEDIAB, aXa and EWING 2008 studies (DOCX 202 kB)
Completeness page of the web-application.
Calculated completeness of the study data, i.e., whether all items have been completed for each subject. The hierarchical structure of the metadata is displayed similarly to the analysis page. Colored bars indicate the completeness of each metadata element.
Table illustrating the five different categories the application distinguishes and their calculated statistics and charts.
ODM Data Analysis—A tool for the automatic validation, monitoring and generation of generic descriptive statistics of patient data
Introduction: A required step when presenting the results of clinical studies is the declaration of participants' demographic and baseline characteristics, as required by FDAAA 801. The common workflow for this task is to export the clinical data from the electronic data capture system in use and import it into statistical software such as SAS or IBM SPSS. This software requires trained users, who must implement the analysis individually for each item. These expenditures can become an obstacle for small studies. The objective of this work is to design, implement and evaluate an open-source application, called ODM Data Analysis, for the semi-automatic analysis of clinical study data.

Methods: The system requires clinical data in the CDISC Operational Data Model (ODM) format. After the file is uploaded, its syntax and the data-type conformity of the collected data are validated. The completeness of the study data is determined, and basic statistics, including illustrative charts for each item, are generated. Datasets from four clinical studies were used to evaluate the application's performance and functionality.

Results: The system is implemented as an open-source web application (available at https://odmanalysis.uni-muenster.de) and is also provided as a Docker image, which enables easy distribution and installation on local systems. Study data is stored in the application only while the calculations are performed, which complies with data protection requirements. Analysis times are below half an hour, even for larger studies with over 6000 subjects.

Discussion: Medical experts have confirmed the usefulness of this application for gaining an overview of their collected study data for monitoring purposes and for generating descriptive statistics without further user interaction. The semi-automatic analysis has its limitations and cannot replace the complex analyses of statisticians, but it can serve as a starting point for their examination and reporting.
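The completeness calculation described above (the fraction of subjects with a value recorded for each item) can be sketched against a CDISC ODM file. This is a minimal illustration, not the tool's actual implementation: the function name `item_completeness` and the example OIDs (`I.AGE`, `I.SEX`) are invented for the sketch, and it assumes the ODM v1.3 default namespace with values stored in `ItemData/@Value`.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# ODM v1.3 default namespace (assumption: the file uses this namespace).
NS = {"odm": "http://www.cdisc.org/ns/odm/v1.3"}

def item_completeness(odm_xml: str) -> dict:
    """Per-item completeness: fraction of subjects that have a
    non-empty Value for each ItemOID anywhere in their SubjectData."""
    root = ET.fromstring(odm_xml)
    subjects = root.findall(".//odm:SubjectData", NS)
    counts = defaultdict(int)
    for subj in subjects:
        # Count each ItemOID at most once per subject.
        seen = set()
        for item in subj.findall(".//odm:ItemData", NS):
            oid = item.get("ItemOID")
            if oid and item.get("Value"):
                seen.add(oid)
        for oid in seen:
            counts[oid] += 1
    n = len(subjects) or 1
    return {oid: c / n for oid, c in counts.items()}

# Hypothetical two-subject file; subject 002 is missing a value for I.AGE.
EXAMPLE = """<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
 <ClinicalData StudyOID="S1" MetaDataVersionOID="v1">
  <SubjectData SubjectKey="001">
   <StudyEventData StudyEventOID="E1"><FormData FormOID="F1">
    <ItemGroupData ItemGroupOID="G1">
     <ItemData ItemOID="I.AGE" Value="42"/>
     <ItemData ItemOID="I.SEX" Value="F"/>
    </ItemGroupData>
   </FormData></StudyEventData>
  </SubjectData>
  <SubjectData SubjectKey="002">
   <StudyEventData StudyEventOID="E1"><FormData FormOID="F1">
    <ItemGroupData ItemGroupOID="G1">
     <ItemData ItemOID="I.SEX" Value="M"/>
    </ItemGroupData>
   </FormData></StudyEventData>
  </SubjectData>
 </ClinicalData>
</ODM>"""

print(item_completeness(EXAMPLE))  # I.AGE is 50% complete, I.SEX is 100%
```

A real validator would additionally check each value against the data type declared for its ItemDef in the ODM metadata section, which this sketch omits.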
PDF page of the web-application.
This page consists of a PDF viewer to view the generated PDF in the browser. It contains all calculated statistics and can be downloaded for later use.
Cover page of the web-application.
The image shows the first page presented to the user after starting ODM-DA. Besides allowing the upload of an ODM file for analysis, it provides the download of a test file and a link to the user manual. In addition, the different analysis options can be specified.
Schematic overview of the application’s workflow.
The user can upload an ODM file via the web-based front-end. After the file is parsed, its content is temporarily stored in a database. The calculated statistics and charts are presented on the result pages and are also generated as a PDF. The PDF can be downloaded via the front-end and is deleted from the server, along with the database content, when the session ends.
Table of dataset characteristics used for the benchmark.
