
    A network approach for managing and processing big cancer data in clouds

    Translational cancer research requires integrative analysis of multiple levels of big cancer data to identify and treat cancer. Such data is decentralised, growing, and continually being updated, and the content held on different information sources partially overlaps, creating redundancies as well as contradictions and inconsistencies. To address these issues, we develop a data network model and technology for constructing and managing big cancer data. To support data processing and analysis under this network approach, we employ a semantic content network and adopt the CELAR cloud platform. A prototype implementation shows that the CELAR cloud can satisfy the on-demand needs of various data resources for the management and processing of big cancer data.
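
    The abstract's core idea, linking partially overlapping records from decentralised sources so that redundancies and contradictions become visible, can be pictured with a small sketch. All source names, record fields, and the linking rule below are invented for illustration; the paper's actual CELAR-based network model is not reproduced here.

```python
# Minimal sketch: link overlapping records from decentralised cancer
# data sources and flag entities whose linked records contradict
# each other. Sources, fields, and values are illustrative.
from collections import defaultdict

# Each source holds partially overlapping records keyed by a shared
# entity identifier (here, a gene symbol).
sources = {
    "atlas_a": {"TP53": {"status": "mutated"}, "EGFR": {"status": "amplified"}},
    "atlas_b": {"TP53": {"status": "wild-type"}, "KRAS": {"status": "mutated"}},
}

def build_network(sources):
    """Link records from different sources that describe the same entity."""
    network = defaultdict(list)  # entity -> [(source, record), ...]
    for source_name, records in sources.items():
        for entity, record in records.items():
            network[entity].append((source_name, record))
    return network

def find_conflicts(network):
    """Yield entities whose linked records disagree on the same field."""
    for entity, linked in network.items():
        if len({record["status"] for _, record in linked}) > 1:
            yield entity, linked

for entity, linked in find_conflicts(build_network(sources)):
    print(entity, "is inconsistent across sources:", linked)  # TP53
```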

    Repository-based plasmid design

    There has been an explosion in the amount of commercially available DNA in sequence repositories over the last decade: the number of plasmids increased from 12,000 to over 300,000 among three of the largest repositories, iGEM, Addgene, and DNASU. A remaining challenge in biodesign is how to use these and other repository-based sequences effectively, correctly, and seamlessly. This work describes an approach to plasmid design where a plasmid is specified simply as a DNA sequence or a list of features. The proposed software then finds the most cost-effective combination of synthetic and PCR-prepared repository fragments to build the plasmid via Gibson assembly®. It locates existing DNA sequences in both user-specified and public DNA databases: iGEM, Addgene, and DNASU. The application is introduced and characterized against all post-2005 iGEM composite parts and all Addgene vectors submitted in 2018, and is found to reduce costs by 34% versus a purely synthetic plasmid design approach. The described software will improve current plasmid assembly workflows by shortening design times, improving build quality, and reducing costs.
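
    The cost optimization at the heart of this approach can be sketched as a small dynamic program: every position of the target plasmid is reached either by synthesizing one more base or by ending a matching repository fragment there. The prices, fragment, and simple exact-match rule below are assumptions for illustration; the published tool's scoring and its handling of Gibson assembly overlaps are considerably richer.

```python
# Hedged sketch: cover a target sequence with per-base synthesis or
# flat-cost PCR-prepared repository fragments, minimizing total cost.

SYNTH_COST_PER_BP = 0.10  # assumed synthesis price per base
PCR_PREP_COST = 5.00      # assumed flat cost to PCR-prepare a repository part

def min_build_cost(target, repo_fragments):
    """best[i] = cheapest way to build target[:i]."""
    n = len(target)
    best = [0.0] + [float("inf")] * n
    for i in range(1, n + 1):
        # Option 1: synthesize one more base onto the cheapest prefix.
        best[i] = min(best[i], best[i - 1] + SYNTH_COST_PER_BP)
        # Option 2: end with a repository fragment matching target here.
        for frag in repo_fragments:
            k = len(frag)
            if k <= i and target[i - k:i] == frag:
                best[i] = min(best[i], best[i - k] + PCR_PREP_COST)
    return best[n]

target = "ATGC" * 50          # 200 bp toy target
repo = ["ATGC" * 30]          # one 120 bp repository match
print(min_build_cost(target, repo))  # 13.0, versus 20.0 fully synthetic
```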

    Integration of Biological Sources: Exploring the Case of Protein Homology

    Data integration is a key issue in the domain of bioinformatics, which deals with huge amounts of heterogeneous biological data that grows and changes rapidly. This paper serves as an introduction to the field of bioinformatics and the biological concepts it deals with, and an exploration of the integration problems a bioinformatics scientist faces. We examine ProGMap, an integrated protein homology system used by bioinformatics scientists at Wageningen University, and several use cases related to protein homology. A key issue we identify is the huge manual effort required to unify source databases into a single resource. Uncertain databases can contain several possible worlds, and it has been proposed that they can be used to significantly reduce initial integration efforts. We propose several directions for future work where uncertain databases can be applied to bioinformatics, with the goal of furthering the cause of bioinformatics integration.
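
    The possible-worlds idea mentioned above can be made concrete with a toy example: each automatically generated cross-database mapping is kept with a confidence score instead of being manually resolved, and queries are answered by aggregating over all worlds. The identifiers and probabilities below are invented; ProGMap itself is not modeled here.

```python
# Toy possible-worlds semantics over uncertain identifier mappings.
from itertools import product

# Candidate cross-database mappings with independent confidence scores,
# as an automatic matcher might produce them (identifiers invented).
uncertain_mappings = [
    (("uniprot:P1", "genbank:G1"), 0.9),
    (("uniprot:P1", "genbank:G2"), 0.4),
]

def possible_worlds(mappings):
    """Yield (world, probability) for every subset of uncertain tuples."""
    for choices in product([True, False], repeat=len(mappings)):
        world = [pair for (pair, _), keep in zip(mappings, choices) if keep]
        prob = 1.0
        for (_, p), keep in zip(mappings, choices):
            prob *= p if keep else 1.0 - p
        yield world, prob

# Probability that uniprot:P1 maps to at least one GenBank entry:
print(sum(p for world, p in possible_worlds(uncertain_mappings) if world))
# 0.94 = 1 - (1 - 0.9) * (1 - 0.4)
```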

    Synthetic biology and microdevices : a powerful combination

    Recent developments demonstrate that the combination of microbiology with micro- and nanoelectronics is a successful approach to developing new miniaturized sensing devices and other technologies. In the last decade, there has been a shift from the optimization of the abiotic components, for example the chip, to the improvement of the processing capabilities of cells through genetic engineering. The synthetic biology approach will not only give rise to systems with new functionalities, but will also improve the robustness and speed of their response towards applied signals. To this end, the development of new genetic circuits has to be guided by computational design methods that make it possible to tune and optimize the circuit response. As the successful design of genetic circuits is highly dependent on the quality and reliability of their composing elements, intensive characterization of standard biological parts will be crucial for an efficient rational design process in the development of new genetic circuits. Microengineered devices can thereby offer a new analytical approach for the study of complex biological parts and systems. By summarizing recent techniques for creating new synthetic circuits and for integrating biology with microdevices, this review aims to emphasize the power of combining synthetic biology with microfluidics and microelectronics.
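
    As a minimal illustration of what such computational design methods do, the sketch below integrates a one-gene circuit model (production under Hill-type repression, first-order decay) and scans a single parameter until the steady-state output hits a target. All parameter values and the target are placeholders, not measured part characteristics.

```python
# Tune a toy repressed gene, dP/dt = beta / (1 + (R/K)^n) - gamma * P,
# by scanning the repression threshold K. Forward Euler integration.

def steady_output(beta, K, n_hill, gamma, repressor, t_end=50.0, dt=0.01):
    """Integrate the ODE to (near) steady state and return P."""
    p = 0.0
    for _ in range(int(t_end / dt)):
        production = beta / (1.0 + (repressor / K) ** n_hill)
        p += dt * (production - gamma * p)
    return p

target = 5.0  # desired steady-state output (placeholder)
best_K = min(
    (k / 10 for k in range(1, 101)),  # scan K from 0.1 to 10.0
    key=lambda K: abs(steady_output(beta=10.0, K=K, n_hill=2,
                                    gamma=1.0, repressor=2.0) - target),
)
print("K giving output closest to target:", best_K)  # ~2.0 analytically
```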

    TinkerCell: Modular CAD Tool for Synthetic Biology

    Synthetic biology brings together concepts and techniques from engineering and biology. In this field, computer-aided design (CAD) is necessary in order to bridge the gap between computational modeling and biological data. An application named TinkerCell has been created to serve as a CAD tool for synthetic biology. TinkerCell is a visual modeling tool that supports a hierarchy of biological parts. Each part in this hierarchy carries a set of attributes that define it, such as its sequence or rate constants. Models constructed from these parts can be analyzed using various C and Python programs that are hosted by TinkerCell via an extensive C and Python API. TinkerCell supports the notion of modules, which are networks with interfaces; such modules can be connected to each other, forming larger modular networks. Because TinkerCell associates the parameters and equations in a model with their respective parts, parts can be loaded from databases along with their parameters and rate equations. The modular network design can be used to exchange modules as well as to test the concept of modularity in biological systems. The flexible modeling framework, along with the C and Python API, allows TinkerCell to serve as a host to numerous third-party algorithms. TinkerCell is a free and open-source project under the Berkeley Software Distribution license. Downloads, documentation, and tutorials are available at www.tinkercell.com.
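
    The part/module abstraction described above is easy to picture as a data structure: parts carry their own attributes, and modules are part networks exposed through named interface ports. The sketch below is a schematic reimagining under those assumptions, not TinkerCell's actual C/Python API.

```python
# Schematic part/module data model (names and fields invented).
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    sequence: str
    attributes: dict = field(default_factory=dict)  # e.g. rate constants

@dataclass
class Module:
    name: str
    parts: list = field(default_factory=list)
    interface: dict = field(default_factory=dict)   # port name -> Part

def connect(upstream, downstream, out_port, in_port):
    """Wire one module's output port to another's input port."""
    return (upstream.interface[out_port], downstream.interface[in_port])

promoter = Part("pTet", "TCCCTATCAGTGATAGAGA", {"strength": 0.5})
reporter = Part("GFP", "ATGGTGAGCAAGGGC", {"k_translation": 2.0})
sensor = Module("sensor", [promoter], interface={"out": promoter})
output = Module("output", [reporter], interface={"in": reporter})
print(connect(sensor, output, "out", "in"))  # two modules, one network
```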

    Infectious Disease Ontology

    Technological developments have resulted in tremendous increases in the volume and diversity of the data and information that must be processed in the course of biomedical and clinical research and practice. Researchers are at the same time under ever greater pressure to share data and to take steps to ensure that data resources are interoperable. The use of ontologies to annotate data has proven successful in supporting these goals and in providing new possibilities for the automated processing of data and information. In this chapter, we describe different types of vocabulary resources and emphasize those features of formal ontologies that make them most useful for computational applications. We describe current uses of ontologies and discuss future goals for ontology-based computing, focusing on its use in the field of infectious diseases. We review the largest and most widely used vocabulary resources relevant to the study of infectious diseases and conclude with a description of the Infectious Disease Ontology (IDO) suite of interoperable ontology modules that together cover the entire infectious disease domain.
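
    A small dependency-free sketch shows why this kind of formal annotation pays off for automated processing: a toy is-a hierarchy lets a query phrased with a broad term retrieve records annotated with narrower ones, with no string matching. The class names echo IDO-style terms, but the hierarchy and records are invented.

```python
# Toy subsumption reasoning over an invented is-a hierarchy.
IS_A = {  # child term -> parent term
    "influenza": "viral infectious disease",
    "viral infectious disease": "infectious disease",
    "tuberculosis": "bacterial infectious disease",
    "bacterial infectious disease": "infectious disease",
}

def subsumed_by(term, ancestor):
    """True if `term` equals `ancestor` or is a descendant of it."""
    while term is not None:
        if term == ancestor:
            return True
        term = IS_A.get(term)
    return False

records = [
    {"patient": "A", "annotation": "influenza"},
    {"patient": "B", "annotation": "tuberculosis"},
]

# Broad queries work without any per-record string matching:
print([r for r in records
       if subsumed_by(r["annotation"], "viral infectious disease")])  # A only
print([r for r in records
       if subsumed_by(r["annotation"], "infectious disease")])        # A and B
```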

    Public or private economies of knowledge: The economics of diffusion and appropriation of bioinformatics tools

    The past three decades have witnessed a period of great turbulence in the economies of biological knowledge, during which there has been great uncertainty as to how and where boundaries could be drawn between public and private knowledge, especially with regard to the explosive growth in biological databases and their related bioinformatic tools. This paper will focus on some of the key software tools developed in relation to bio-databases. It will argue that bioinformatic tools are particularly economically unstable, and that there is a continuing tension and competition between their public and private modes of production, appropriation, distribution, and use. The paper adopts an "instituted economic process" approach and will elaborate on processes of making knowledge public in the creation of "public goods". The question is one of continuously creating and sustaining new institutions of the commons. We believe this is critical to an understanding of the division and interdependency between public and private economies of knowledge.