1,178 research outputs found

    From access and integration to mining of secure genomic data sets across the grid

    The UK Department of Trade and Industry (DTI) funded BRIDGES project (Biomedical Research Informatics Delivered by Grid Enabled Services) has developed a Grid infrastructure to support cardiovascular research. This includes the provision of a compute Grid and a data Grid infrastructure with security at its heart. In this paper we focus on the BRIDGES data Grid. A primary aim of the BRIDGES data Grid is to help control the complexity of access to and integration of a myriad of genomic data sets through simple Grid-based tools. We outline these tools and how they are delivered to end-user scientists. We also describe how these tools are to be extended in the BBSRC-funded Grid Enabled Microarray Expression Profile Search (GEMEPS) project to support a richer vocabulary of search capabilities for mining microarray data sets. As with BRIDGES, fine-grained Grid security underpins GEMEPS.

    Challenging Problems in Data Mining and Data Warehousing

    Data mining is a process used by companies to turn raw data into useful information. By using software to look for patterns in large batches of data, businesses can learn more about their customers, develop more effective marketing strategies, increase sales, and decrease costs. It depends on effective data collection and warehousing as well as computer processing. Data mining is used to analyze patterns and relationships in data based on what users request; for example, data mining software can be used to create classes of information. When companies centralize their data into one database or program, it is known as data warehousing. With a data warehouse, an organization may spin off segments of the data for particular users to utilize; in other cases, analysts may begin with the type of data they want and create a data warehouse based on those specifications. Regardless of how businesses and other entities organize their data, they use it to support management's decision-making processes.
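    The "classes of information" step described above can be sketched in a few lines. This is a minimal, hypothetical example: the transaction records and the segment rule (assign each customer to their highest-spend category) are illustrative, not taken from the paper.

```python
from collections import defaultdict

# Hypothetical raw transaction records: (customer_id, category, amount)
transactions = [
    ("c1", "books", 20.0),
    ("c1", "books", 35.0),
    ("c2", "garden", 15.0),
    ("c2", "books", 10.0),
    ("c3", "garden", 40.0),
]

# Mining step: summarize spending per customer per category,
# then classify each customer by their dominant category.
totals = defaultdict(lambda: defaultdict(float))
for cust, cat, amt in transactions:
    totals[cust][cat] += amt

segments = {cust: max(cats, key=cats.get) for cust, cats in totals.items()}
print(segments)  # {'c1': 'books', 'c2': 'garden', 'c3': 'garden'}
```

    A real warehouse would compute such aggregates over millions of rows, but the pattern — aggregate, then classify — is the same.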

    Blockchain-Supported Food Supply Chain Reference Architecture

    Department of Management Engineering. Food security issues have increased rapidly in recent years due to numerous food frauds, tragic incidents, and overall growth in the scale of food supply chain networks. Since its recent evolution, Blockchain technology promises a high potential to guarantee and trace the originality of products in a supply chain network. The main purpose of this research work is to build a general Blockchain-supported food supply chain reference architecture model, along with supplementary guidelines, which could be applied in real-life supply chain cases with or without customization, or inspire the design of supply chain systems. A case-driven, bottom-up approach is used to create the reference architecture, with the BOAT framework as a base tool to align the case details. A total of three food supply chain cases were utilized for the development of the reference architecture; the third case study, a Mongolian meat trade supply chain, was examined with the proposed solution and finally evaluated by local experts. I believe this reference framework will help fellow researchers and industry practitioners use it as base knowledge without starting from scratch, because the current literature is extremely lacking in this field. Overall, I expect this work will contribute to the current literature as follows: 1. To expand the implementation mechanism of Blockchain solutions in general supply chain cases, especially the food supply chain. 2. To provide a practical exemplary implementation of real-life case scenarios. 3. To provide a detailed analysis of the benefits and weaknesses of using Blockchain in the food supply chain.

    Addendum to Informatics for Health 2017: Advancing both science and practice

    This article presents presentation and poster abstracts that were mistakenly omitted from the original publication.

    Non-invasive lightweight integration engine for building EHR from autonomous distributed systems

    [EN] In this paper we describe Pangea-LE, a message-oriented lightweight data integration engine that allows homogeneous and concurrent access to clinical information from dispersed and heterogeneous data sources. The engine extracts the information and passes it to the requesting client applications in a flexible XML format. The XML response message can be formatted on demand by appropriate Extensible Stylesheet Language (XSL) transformations in order to meet the needs of client applications. We also present a real deployment in a hospital where Pangea-LE collects and generates an XML view of all the available patient clinical information. The information is presented to healthcare professionals in an Electronic Health Record (EHR) viewer Web application with patient search and EHR browsing capabilities. Deployment in a real setting has been a success due to the non-invasive nature of Pangea-LE, which respects the existing information systems. This work was partially funded by the Spanish Ministry of Science and Technology (MEC-TSI2004-06475-102-01) and the Spanish Ministry of Health (PI052245).
    Angulo Fernández, C.; Crespo Molina, P. M.; Maldonado Segura, J. A.; Moner Cano, D.; Pérez Cuesta, D.; Abad, I.; Mandingorra Gimenez, J.... (2007). Non-invasive lightweight integration engine for building EHR from autonomous distributed systems. International Journal of Medical Informatics, 76(Supplement 3), 417-424. https://doi.org/10.1016/j.ijmedinf.2007.05.002
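    The idea of reshaping a generic XML response into a client-specific view can be sketched as follows. This is a minimal illustration using Python's standard library rather than a real XSL stylesheet (Pangea-LE applies XSL transformations server-side), and the element names and schema are hypothetical, not the engine's actual message format.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML response in the spirit of an integration engine's output;
# element names are illustrative only.
response = """
<ehrResponse>
  <patient id="p001">
    <demographics><name>Jane Doe</name><dob>1970-01-01</dob></demographics>
    <labResults><result code="GLU" value="5.4" units="mmol/L"/></labResults>
  </patient>
</ehrResponse>
"""

# A client that only needs lab data can derive a trimmed view from the
# generic response, analogous to applying a purpose-built XSL stylesheet.
root = ET.fromstring(response)
view = ET.Element("labView", {"patient": root.find("patient").get("id")})
for result in root.iter("result"):
    view.append(result)

print(ET.tostring(view, encoding="unicode"))
```

    Keeping the canonical response generic and pushing per-client formatting into transformations is what lets one engine serve many applications without changing the underlying systems.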

    An Evaluation of the Use of a Clinical Research Data Warehouse and I2b2 Infrastructure to Facilitate Replication of Research

    Replication of clinical research is requisite for forming effective clinical decisions and guidelines. While rerunning a clinical trial may be unethical and prohibitively expensive, the adoption of EHRs and the infrastructure for distributed research networks provide access to clinical data for observational and retrospective studies. Herein I demonstrate a means of using these tools to validate existing results and extend the findings to novel populations. I describe the process of evaluating published risk models as well as local data and infrastructure to assess the replicability of a study. I use an example of a risk model that could not be replicated, as well as a study of in-hospital mortality risk that I replicated using UNMC's clinical research data warehouse. In these examples and other studies we have participated in, some elements are commonly missing or under-developed. One such missing element is a consistent and computable phenotype for pregnancy status based on data recorded in the EHR. I survey local clinical data, identify a number of variables correlated with pregnancy, and demonstrate the data required to identify the temporal bounds of a pregnancy episode. Another common obstacle to replicating risk models is the necessity of linking to alternative data sources while maintaining data in a de-identified database. I demonstrate a pipeline for linking clinical data to socioeconomic variables and indices obtained from the American Community Survey (ACS). While these data are location-based, I provide a method for storing them in a HIPAA-compliant fashion so as not to identify a patient's location.
    While full and efficient replication of all clinical studies remains a future goal, the replication demonstrated here, the initial development of a computable phenotype for pregnancy, and the incorporation of location-based data into a de-identified data warehouse show how EHR data and a research infrastructure may be used to facilitate this effort.
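    The linkage idea described above — attaching ACS-derived socioeconomic indices to de-identified patients without storing a geographic identifier — can be sketched as follows. All identifiers, tract codes, and index values here are hypothetical, and the real pipeline's geocoding and security arrangements are not shown; this only illustrates the separation of the linkage step from the warehouse.

```python
# Step 1 (secured linkage environment, never enters the warehouse):
# patient addresses are geocoded to census tracts.
address_to_tract = {"patient_007": "31055006800"}  # illustrative tract code

# Step 2: public ACS indices keyed by tract (illustrative values).
acs_by_tract = {"31055006800": {"median_income": 54300, "pct_poverty": 14.2}}

# Step 3: the de-identified warehouse row carries only a surrogate key
# and the derived indices -- the tract code itself is discarded, so no
# geographic identifier smaller than a state is stored.
patient_key = "patient_007"
warehouse_row = {
    "patient_key": patient_key,
    **acs_by_tract[address_to_tract[patient_key]],
}
print(warehouse_row)
```

    The key design choice is that the tract-to-index join happens outside the warehouse, so analysts see socioeconomic context but can never recover a patient's location from the stored row.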