
    An Ontology-Based Framework for Clinical Research Databases

    The Ontology-Based eXtensible data model (OBX) was developed to serve as a framework for the development of a clinical research database in the Immunology Database and Analysis Portal (ImmPort) system. OBX was designed around the logical structure provided by the Basic Formal Ontology (BFO) and the Ontology for Biomedical Investigations (OBI). By using the logical structure provided by these two well-formulated ontologies, we have found that a relatively simple, extensible data model can represent the complex domain of clinical research. In addition, the common framework provided by the BFO should make it straightforward to utilize OBX database data dictionaries based on reference and application ontologies from the OBO Foundry.
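    As an illustration of the kind of logical structure BFO contributes, the sketch below models a clinical assay as a planned process with a participating specimen. The class names are simplified from BFO's continuant/occurrent split and the attributes are hypothetical; this is not the actual OBX schema.

```python
# Minimal sketch of a BFO/OBI-style subsumption hierarchy (illustrative only;
# real BFO/OBI terms carry IRIs and far richer axioms).

class Entity:                       # BFO root: entity
    pass

class Continuant(Entity):           # things that persist through time
    pass

class Occurrent(Entity):            # things that happen or unfold
    pass

class MaterialEntity(Continuant):   # e.g. a blood specimen
    def __init__(self, label):
        self.label = label

class PlannedProcess(Occurrent):    # OBI-style planned process, e.g. an assay
    def __init__(self, label, inputs):
        self.label = label
        self.inputs = inputs        # material entities participating in it

specimen = MaterialEntity("blood specimen 001")
assay = PlannedProcess("flow cytometry assay", inputs=[specimen])

# Subsumption lets generic queries cover specific types:
print(isinstance(assay, Occurrent))      # True
print(isinstance(specimen, Continuant))  # True
```

    The payoff of anchoring a data model to such a hierarchy is that a query phrased against a general class (all occurrents) automatically covers every specialized process type added later.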

    Innovative Evaluation System – IESM: An Architecture for the Database Management System for Mobile Application

    Although mobile applications have developed rapidly in recent years, especially in academic environments such as the student response systems [1-8] used in universities and other educational institutions, no effective and scalable Database Management System has been reported that supports fast and reliable data storage and retrieval for them. This paper presents a database management architecture for an Innovative Evaluation System based on mobile learning applications. The need for a relatively stable, independent and extensible data model for faster data storage and retrieval is analyzed and investigated. The paper concludes by calling for further investigation into high-throughput support for multimedia data such as video clips, images and documents.

    The design and implementation of an infrastructure for multimedia digital libraries

    We develop an infrastructure for managing, indexing and serving multimedia content in digital libraries. This infrastructure follows the model of the Web, and is thereby distributed in nature. We discuss the design of the Librarian, the component that manages metadata about the content. The management of metadata has been separated from the media servers that manage the content itself. Also, the extraction of the metadata is largely independent of the Librarian. We introduce our extensible data model and the daemon paradigm that are the core pieces of this architecture. We evaluate our initial implementation using a relational database. We conclude with a discussion of the lessons we learned in building this system, and proposals for improving the flexibility, reliability, and performance of the system.
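    An extensible metadata model of the kind described above is often realized on a relational database with the entity-attribute-value pattern: extraction daemons add new attributes as rows rather than as schema changes. The sketch below uses illustrative table and column names, not the Librarian's actual schema.

```python
import sqlite3

# Entity-attribute-value sketch of an extensible metadata store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE media_object (id INTEGER PRIMARY KEY, uri TEXT)")
conn.execute("""CREATE TABLE metadata (
    object_id INTEGER REFERENCES media_object(id),
    attribute TEXT,
    value     TEXT)""")

# A daemon that extracts, say, image dimensions simply inserts rows;
# supporting a brand-new attribute requires no schema migration.
conn.execute("INSERT INTO media_object VALUES (1, 'http://example.org/img1')")
conn.executemany("INSERT INTO metadata VALUES (?, ?, ?)",
                 [(1, "width", "640"), (1, "height", "480"),
                  (1, "dominant_color", "blue")])

rows = conn.execute(
    "SELECT attribute, value FROM metadata WHERE object_id = 1").fetchall()
meta = dict(rows)
print(meta["width"], meta["height"])  # 640 480
```

    The trade-off of this pattern is that values lose static typing and queries become join-heavy, which matches the paper's motivation for evaluating the approach on a concrete relational implementation.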

    XGAP: a uniform and extensible data model and software platform for genotype and phenotype experiments.

    We present an extensible software model for the genotype and phenotype community, XGAP. Readers can download a standard XGAP (http://www.xgap.org) or auto-generate a custom version using MOLGENIS, with programming interfaces to R software and web services, or user interfaces for biologists. XGAP has simple load formats for any type of genotype, epigenotype, transcript, protein, metabolite or other phenotype data. Current functionality includes tools ranging from eQTL analysis in mouse to genome-wide association studies in humans.
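    The "simple load formats" mentioned above are, in essence, tab-delimited annotated matrices of subjects by traits or markers. The snippet below parses a hypothetical phenotype matrix; the header layout is illustrative and simplified, not XGAP's exact specification.

```python
import csv
import io

# Hypothetical tab-delimited phenotype matrix: rows are traits,
# columns are individuals.
raw = ("trait\tmouse1\tmouse2\tmouse3\n"
       "weight_g\t21.5\t24.1\t19.8\n"
       "glucose_mgdl\t140\t155\t132\n")

reader = csv.reader(io.StringIO(raw), delimiter="\t")
header = next(reader)
matrix = {row[0]: dict(zip(header[1:], map(float, row[1:])))
          for row in reader}

print(matrix["weight_g"]["mouse2"])  # 24.1
```

    Because the matrix layout is the same for genotypes, transcripts or metabolites, one loader can serve every data type, which is the appeal of a uniform format.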

    Ontology-based knowledge representation of experiment metadata in biological data mining

    According to the PubMed resource from the U.S. National Library of Medicine, over 750,000 scientific articles were published in the ~5000 biomedical journals worldwide in the year 2007 alone. The vast majority of these publications include results from hypothesis-driven experimentation in overlapping biomedical research domains. Unfortunately, the sheer volume of information being generated by the biomedical research enterprise has made it virtually impossible for investigators to stay aware of the latest findings in their domain of interest, let alone to assimilate and mine data from related investigations for purposes of meta-analysis. While computers have the potential to assist investigators in the extraction, management and analysis of these data, information contained in the traditional journal publication is still largely unstructured: free-text descriptions of study design, experimental application and results interpretation, making it difficult for computers to access the content being conveyed without significant manual intervention. In order to circumvent these roadblocks and make the most of the output from the biomedical research enterprise, a variety of related standards in knowledge representation are being developed, proposed and adopted in the biomedical community. In this chapter, we will explore the current status of efforts to develop minimum information standards for the representation of a biomedical experiment, ontologies composed of shared vocabularies assembled into subsumption hierarchical structures, and extensible relational data models that link the information components together in a machine-readable and human-usable framework for data mining purposes.

    A digital repository with an extensible data model for biobanking and genomic analysis management

    Motivation: Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations with increasing data sharing among institutions. A single standardization is not feasible, and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobank management. Results: We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data is described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web-based digital repository with a data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples from over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing.
    Conclusions: Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata for information sharing in specific research projects and purposes. This approach can significantly improve interdisciplinary research collaboration and allows tracking of patients' clinical records, sample management information, and genomic data. The web interface allows operators to easily manage, query, and annotate the files without dealing with the technicalities of the data grid.
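    The two-level process/event hierarchy described above can be sketched as a JSON document. All field names and values here are illustrative, not the repository's actual schema.

```python
import json

# Sketch: a process (research study) groups sequential events (analysis
# steps); each event carries user-defined metadata and may reference files
# held on the grid storage. Names and paths are hypothetical.
process = {
    "type": "process",
    "label": "example cohort study",
    "events": [
        {
            "type": "event",
            "label": "blood sample collection",
            "metadata": {"patient_id": "P-042", "volume_ml": 5},
            "files": [],
        },
        {
            "type": "event",
            "label": "microarray analysis",
            "metadata": {"platform": "hypothetical-array-v2"},
            "files": ["grid://storage/arrays/P-042.cel"],
        },
    ],
}

# Round-trip through JSON, as a repository storing the model would.
restored = json.loads(json.dumps(process))
print(len(restored["events"]))  # 2
```

    Because metadata is an open key-value map per event, operators can introduce new event types without schema migrations, which is exactly the flexibility the abstract claims for loosely structured metadata.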

    SmartQC: An Extensible DLT-Based Framework for Trusted Data Workflows in Smart Manufacturing

    Recent developments in Distributed Ledger Technology (DLT), including blockchain, offer new opportunities in the manufacturing domain by providing mechanisms to automate trust services (digital identity, trusted interactions, and auditable transactions) and, when combined with other advanced digital technologies (e.g. machine learning), can provide a secure backbone for trusted data flows between independent entities. This paper presents a DLT-based architectural pattern and technology solution known as SmartQC that aims to provide an extensible and flexible approach to integrating DLT into existing workflows and processes. SmartQC offers an opportunity to make processes more time-efficient, reliable, and robust by providing two key features: i) data integrity through immutable ledgers and ii) automation of business workflows leveraging smart contracts. The paper presents the system architecture, the extensible data model, and the application of SmartQC in the context of example smart manufacturing applications.
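    The first key feature, data integrity through immutable ledgers, rests on hash chaining: each record embeds the hash of its predecessor, so altering any earlier record invalidates every later link. The toy sketch below shows only this chaining; a real DLT adds consensus, identity, and smart contracts on top, and none of the names here come from SmartQC itself.

```python
import hashlib
import json

def record_hash(record):
    """Stable SHA-256 digest of a ledger record."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(ledger, payload):
    """Append a payload, chaining it to the previous record's hash."""
    prev = record_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"prev": prev, "payload": payload})

def verify(ledger):
    """Re-derive every link; any tampering breaks the chain."""
    return all(ledger[i]["prev"] == record_hash(ledger[i - 1])
               for i in range(1, len(ledger)))

ledger = []
append(ledger, {"step": "quality check", "batch": "B-17", "result": "pass"})
append(ledger, {"step": "shipment", "batch": "B-17"})
print(verify(ledger))                    # True

ledger[0]["payload"]["result"] = "fail"  # tamper with history
print(verify(ledger))                    # False
```

    This is why an auditable manufacturing record is hard to falsify retroactively: rewriting one quality-check entry would require rewriting every subsequent record, which distributed replication makes detectable.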

    Spacialist – A Virtual Research Environment for the Spatial Humanities

    Many archaeological research projects generate data and tools that become unusable or abandoned after the funding period ends. To counter this unsustainable practice, the Spacialist project was tasked with creating a virtual research environment that offers an integrated, web-based user interface to record, browse, analyze, and visualize all spatial, graphical, textual and statistical data from archaeological or cultural heritage research projects. Spacialist is developed as an open-source software platform composed of modules providing the required functionality to end users. It builds on controlled multi-language vocabularies and an abstract, extensible data model to facilitate data recording and analysis, as well as interoperability with other projects and infrastructures. Development of Spacialist is driven by an interdisciplinary team in collaboration with various pilot projects in different areas of archaeology. To support the complete research lifecycle, the platform is being integrated with the University's research-data archive, guaranteeing long-term availability of project data.