93 research outputs found

    A Query Integrator and Manager for the Query Web

    We introduce two concepts: the Query Web, a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save, and reuse queries over any web-accessible source, including queries saved in other installations of QI. The saved queries may be in any language (e.g. SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. These use cases illustrate the current range of applications of QI and lead us to speculate about a potential evolution from smaller groups of interconnected queries into a larger query network layered over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now must manually navigate multiple information sources to answer specific questions.
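The chaining condition described above — any query language, as long as results come back as XML — can be sketched in a few lines of Python. The query name, XML shape, and invocation function below are hypothetical stand-ins, not QI's actual API:

```python
# Illustrative sketch of query chaining over XML results. The saved-query
# name, XML structure, and run_saved_query() helper are all hypothetical.
import xml.etree.ElementTree as ET

def run_saved_query(name):
    """Stand-in for invoking a saved QI query; returns canned XML here."""
    canned = {
        "list_brain_regions":
            "<regions><region id='FMA:61819'>Frontal lobe</region></regions>",
    }
    return canned[name]

def chain(first_query, xpath):
    """Run one query, then extract values from its XML result so they can
    feed a second query -- the language of either query is irrelevant."""
    root = ET.fromstring(run_saved_query(first_query))
    return [elem.get("id") for elem in root.findall(xpath)]

ids = chain("list_brain_regions", "region")
```

Because only the XML result format is shared, the second query in a chain could equally be SPARQL against a triple store or XQuery against a document source.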

    SEINE: Methods for Electronic Data Capture and Integrated Data Repository Synthesis with Patient Registry Use Cases

    Integrated Data Repositories (IDRs) allow clinical research to leverage electronic health records (EHRs) and other data sources, while Electronic Data Capture (EDC) applications often support manually maintained patient registries. Using i2b2 and REDCap (IDR and EDC platforms, respectively), we have developed methods that combine IDR and EDC strengths by supporting: 1) data delivery from the IDR as ready-to-use registries, exploiting the annotation and data collection capabilities unique to EDC applications; and 2) integration of EDC-managed registries into data repositories, allowing investigators to use hypothesis generation and cohort discovery methods. This round-trip integration can lower the lag between cohort discovery and establishing a registry. Investigators can also periodically augment their registry cohort as the IDR is enriched with additional data elements, data sources, and patients. We describe our open-source automated methods and provide three example registry use cases: triple negative breast cancer, vertiginous syndrome, and cancer distress.
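The first direction of the round trip — delivering an IDR cohort extract as a ready-to-use registry — amounts to reshaping patient-level facts into one flat EDC record per patient. A minimal sketch, with entirely hypothetical field and concept names (the actual SEINE mappings are not given in the abstract):

```python
# Hedged sketch: reshape an IDR cohort extract (one row per clinical fact)
# into flat REDCap-style registry records (one record per patient).
# patient_id / concept / value are illustrative column names.
def cohort_to_registry(cohort_rows):
    """Map IDR patient facts to one flat EDC record per patient."""
    registry = {}
    for row in cohort_rows:
        rec = registry.setdefault(row["patient_id"],
                                  {"record_id": row["patient_id"]})
        rec[row["concept"]] = row["value"]  # one EDC field per concept
    return list(registry.values())

rows = [
    {"patient_id": "P1", "concept": "er_status", "value": "negative"},
    {"patient_id": "P1", "concept": "her2_status", "value": "negative"},
]
records = cohort_to_registry(rows)
```

Rerunning such a transformation as the IDR gains new data elements or patients is what lets investigators periodically augment an established registry cohort.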

    The Community-Acquired Pneumonia Organization (CAPO) Cloud-Based Research Platform (the CAPO-Cloud): Facilitating Data Sharing in Clinical Research

    Background: Pneumonia is a costly and deadly respiratory disease that afflicts millions every year. Advances in pneumonia care require significant research investment and collaboration among pneumonia investigators. Despite the importance of data sharing for clinical research, it remains difficult to share datasets with old and new investigators. We present CAPO-Cloud, a web-based pneumonia research platform intended to facilitate data sharing and make data more accessible to new investigators. Methods: The first two use cases for CAPO-Cloud are the automatic subsetting and constraining of the CAPO database and the automatic summarization of the database in aggregate. We use the REDCap data capture software and the R programming language to support these use cases. Results: CAPO-Cloud allows CAPO investigators to access the CAPO clinical database and explore subsets of the data by demographics, comorbidities, and geographic regions. It also allows them to summarize these subsets, or the entire CAPO database, in aggregate while preserving privacy restrictions. Discussion: CAPO-Cloud demonstrates the viability of a research platform combining data capture, data quality, hypothesis generation, data exploration, and data sharing in one interface. Future use cases for the software include automated univariate and bivariate hypothesis testing and principal component analysis.
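The two use cases — constraining the database to a subset, then summarizing in aggregate — can be illustrated compactly. This sketch is in Python rather than the R the paper uses, and the column names are hypothetical:

```python
# Illustrative Python analogue (the platform itself uses R) of the two
# CAPO-Cloud use cases. "region" and "comorbidity" are invented columns.
def subset(rows, **constraints):
    """Keep only rows matching every key=value constraint."""
    return [r for r in rows
            if all(r.get(k) == v for k, v in constraints.items())]

def summarize(rows, field):
    """Aggregate counts for one field -- the privacy-preserving output
    exposes only tallies, never row-level patient data."""
    counts = {}
    for r in rows:
        counts[r[field]] = counts.get(r[field], 0) + 1
    return counts

db = [
    {"region": "latam", "comorbidity": "copd"},
    {"region": "latam", "comorbidity": "none"},
    {"region": "europe", "comorbidity": "copd"},
]
latam = subset(db, region="latam")
by_comorbidity = summarize(latam, "comorbidity")
```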

    A Comparison of Neuroelectrophysiology Databases

    As data sharing has become more prevalent, three pillars - archives, standards, and analysis tools - have emerged as critical components of effective data sharing and collaboration. This paper compares four freely available intracranial neuroelectrophysiology data repositories: the Data Archive for the BRAIN Initiative (DABI), the Distributed Archives for Neurophysiology Data Integration (DANDI), OpenNeuro, and Brain-CODE. These archives provide researchers with tools to store, share, and reanalyze neurophysiology data, though the means of accomplishing these objectives differ. The Brain Imaging Data Structure (BIDS) and Neurodata Without Borders (NWB) standards are used by these archives to make data more accessible to researchers through a common format. While many tools are available to reanalyze data on and off the archives' platforms, this article features the Reproducible Analysis and Visualization of Intracranial EEG (RAVE) toolkit, developed specifically for the analysis of intracranial signal data and integrated with the discussed standards and archives. Neuroelectrophysiology data archives improve how researchers can aggregate, analyze, distribute, and parse these data, which can lead to more significant findings in neuroscience research. (25 pages, 8 figures, 1 table)

    Program Evaluation Dashboard Design and Development for the Missouri Telehealth Network Show-Me ECHO Program

    Background: The Show-Me ECHO Program is a state-funded telehealth project, established in 2014, that connects interdisciplinary teams of experts with rural and isolated primary care providers (PCPs) and other professionals using videoconferencing and interactive case-based learning, in an effort to develop advanced skills and best practices and ultimately improve patient care access, quality, and efficiency. Since inception, the Show-Me ECHO program has expanded rapidly to over 40 ECHO topics, impacting all 114 Missouri counties and over 2,300 health and community organizations. This exponential growth highlights a crucial need for adept program evaluation, reporting tools, and resources to facilitate systematic examination of the implementation, quality, impact, and value of the program. Objective: The objective of this project is to design and build data dashboards that support macro-evaluation and management of the Missouri Telehealth Network's Show-Me ECHO program and contribute to program improvement activities. Methods: A stakeholder identification and needs analysis was completed to ensure comprehensive measurement of program performance metrics. Show-Me ECHO program administrative data, clinic information, attendance records for participants and facilitators, case presentation metrics, didactic presentations, and more were extracted from MTN data repositories for the 2014-2021 period and analyzed for dashboard development. Data cleaning and preprocessing were conducted in a combination of Excel, Python, and Tableau. The dashboards and other data visualizations were created in Tableau. Results: Data extraction generated a total of 70,910 observations across three reports ('Clinic Data', 'Didactic Presentation Data', and 'Patient Presentation Data').
Three preliminary dashboards ("Show-Me ECHO Project Reach and Attendance", "Show-Me ECHO Project Overview", and "ECHO Clinic Performance Report") were established to provide Missouri Telehealth Network (MTN) teams and stakeholders detailed insight into the growth and performance of the Show-Me ECHO project and to support the development and management of action plans. Conclusions: The constructed MTN dashboards support organizational efforts to establish a single unified approach to monitor program progress, identify and prioritize efforts and resource allocation, identify specific Missouri counties that may benefit from interventions and ECHO clinic expansions, and provide appropriate performance metrics that can be shared with both decision makers and relevant stakeholders. Future considerations for dashboard expansion include incorporating PCP self-efficacy and knowledge surveys and claims data analysis to enable further tracking of provider and patient outcomes. A feasibility assessment of implementing the dashboards at other superhubs for benchmarking and program outcome comparison studies should also be considered.
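The preprocessing step — consolidating the three extracted reports into one observation table before visualization — can be sketched as below. Report names mirror the abstract; the row fields are hypothetical:

```python
# Minimal sketch of merging separately extracted reports into one
# observation table for the dashboards. Row fields are invented examples.
def merge_reports(**reports):
    """Tag each row with its source report, then concatenate all rows."""
    merged = []
    for name, rows in reports.items():
        for row in rows:
            merged.append({"report": name, **row})
    return merged

obs = merge_reports(
    clinic=[{"clinic": "Asthma ECHO", "attendees": 14}],
    didactic=[{"topic": "Spirometry"}],
)
```

In the project itself this role is played by Excel, Python, and Tableau working together; the sketch only shows the shape of the consolidated table that the Tableau dashboards would consume.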

    Integrated IT tools for the management and exploitation of European paediatric transplantation data

    This thesis was developed with the aim of building and implementing new informatics tools to facilitate research and international benchmarking in paediatric transplantation. Two different studies were created for that purpose. The first consisted of an analysis of the current paediatric transplantation activity in the European Union, using Google My Maps to obtain a general geographical distribution by country and Data Studio to complement it with more complete results. The second was the construction of a patient registry for TransplantChild, the European Reference Network for paediatric transplantation, so that all transplanted children in the member hospitals are registered and their information is stored together for analysis. The results give an accurate picture of the unequal distribution of transplantation across the continent and allowed the identification of the most expert and specialized centres in Europe. It was also possible to recognize potential cases in which a child needed to be moved to receive a transplant, and to propose solutions for them. For the registry, two platforms, one for data collection and one for data exploitation, were integrated; the latter was developed from scratch in this project using Python and Flask. It was concluded that implementing these tools can promote paediatric transplantation research and support benchmarking across countries and hospitals, an indirect way to improve transplant success rates and, ultimately, patients' survival and quality of life.
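The registry's two integrated roles — collecting records and exploiting them for cross-hospital benchmarking — can be sketched without the web layer. This is a framework-free illustration (the thesis implements it with Python and Flask), and all hospital names, fields, and counts below are invented:

```python
# Hedged sketch of the registry's two sides: data collection (register)
# and data exploitation (benchmark). All names and fields are hypothetical.
class TransplantRegistry:
    def __init__(self):
        self.records = []

    def register(self, hospital, organ, year):
        """Data-collection side: store one transplanted child's record."""
        self.records.append({"hospital": hospital,
                             "organ": organ, "year": year})

    def benchmark(self, organ):
        """Exploitation side: transplant counts per hospital for one
        organ, the raw material for cross-country benchmarking."""
        counts = {}
        for r in self.records:
            if r["organ"] == organ:
                counts[r["hospital"]] = counts.get(r["hospital"], 0) + 1
        return counts

reg = TransplantRegistry()
reg.register("Hospital A", "liver", 2020)
reg.register("Hospital A", "liver", 2021)
reg.register("Hospital B", "liver", 2021)
liver_counts = reg.benchmark("liver")
```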

    The CAMH Neuroinformatics Platform: A Hospital-Focused Brain-CODE Implementation

    Investigations of mental illness have been enriched by the advent and maturation of neuroimaging technologies and by the rapid pace and increased affordability of molecular sequencing techniques. However, the increased volume, variety, and velocity of research data present a considerable technical and analytic challenge to curate, federate, and interpret. Aggregation of high-dimensional datasets across brain disorders can increase sample sizes and may help identify underlying causes of brain dysfunction, but additional barriers exist to effective data harmonization and integration for their combined use in research. To help realize the potential of multi-modal data integration for the study of mental illness, the Centre for Addiction and Mental Health (CAMH) constructed a centralized data capture, visualization, and analytics environment, the CAMH Neuroinformatics Platform, based on the Ontario Brain Institute (OBI) Brain-CODE architecture, towards the curation of a standardized, consolidated psychiatric hospital-wide research dataset directly coupled to high-performance computing resources.

    FIMED: Flexible management of biomedical data

    Background and objectives: In the last decade, clinical trial management systems have become an essential support tool for data management and analysis in clinical research. However, these tools have design limitations: they currently cannot adapt to continuous changes in trial practice arising from the heterogeneous and dynamic nature of clinical research data, and they are usually proprietary solutions provided by vendors for specific tasks. In this work, we propose FIMED, a software solution for the flexible management of clinical data from multiple trials, moving towards personalized medicine, which can improve the quality and ease of clinical researchers' work in clinical trials. Methods: The tool allows a dynamic and incremental design of patient profiles in the context of clinical trials, providing a flexible user interface that hides the complexity of the underlying databases. Clinical researchers can define personalized data schemas according to their needs and clinical study specifications. FIMED thus allows clinical data from multiple trials to be incorporated and analysed separately. Results: The efficiency of the software has been demonstrated through a real-world use case for a clinical assay in melanoma, which has been anonymized for the user demonstration. FIMED currently provides three data analysis and visualization components for the exploration of gene expression data: heatmap visualization, clustered-heatmap visualization, and gene regulatory network inference and visualization. An instance of this tool is freely available on the web at https://khaos.uma.es/fimed; it can be accessed with the demo user account "researcher" and the password "demo". Funding for open access charge: Universidad de Málaga / CBUA. This work has been partially funded by the Spanish Ministry of Science and Innovation via Grant PID2020-112540RB-C41 (AEI/FEDER, UE) and the Andalusian PAIDI program with grant P18-RT2799.
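The dynamic, incremental schema design that the FIMED abstract describes can be sketched as a schema object that researchers extend at runtime. The schema format and field names here are assumptions for illustration, not FIMED's actual data model:

```python
# Illustrative sketch of flexible, researcher-defined data schemas:
# fields are added at runtime and patient profiles validated against
# them, with no fixed database table to migrate. Field names are invented.
class FlexibleSchema:
    def __init__(self):
        self.fields = {}

    def add_field(self, name, ftype):
        """Incrementally extend the trial's schema with a new field."""
        self.fields[name] = ftype

    def validate(self, profile):
        """Check that every declared field is present in the profile
        with a value of the declared type."""
        return all(isinstance(profile.get(k), t)
                   for k, t in self.fields.items())

schema = FlexibleSchema()
schema.add_field("age", int)
schema.add_field("braf_status", str)
ok = schema.validate({"age": 61, "braf_status": "V600E"})
```

Storing such schemas as data (rather than as fixed table columns) is one common way a tool can let each clinical study define its own patient profile.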