29 research outputs found

    StrainInfo: from microbial information to microbiological knowledge

    The challenges faced by living stock collections in the USA

    Citation: McCluskey, K., Boundy-Mills, K., Dye, G., Ehmke, E., Gunnell, G. F., Kiaris, H., . . . Grotewold, E. (2017). The challenges faced by living stock collections in the USA. eLife, 6, 8. doi:10.7554/eLife.24611
    Many discoveries in the life sciences have been made using material from living stock collections. These collections provide a uniform and stable supply of living organisms and related materials that enhance the reproducibility of research and minimize the need for repetitive calibration. While collections differ in many ways, they all require expertise in maintaining living organisms and good logistical systems for keeping track of stocks and fulfilling requests for specimens. Here, we review some of the contributions made by living stock collections to research across all branches of the tree of life, and outline the challenges they face.

    Data shopping in an open marketplace: Introducing the Ontogrator web application for marking up data using ontologies and browsing using facets

    In the future, we hope to see an open and thriving data market in which users can find and select data from a wide range of data providers. In such an open access market, data are products that must be packaged accordingly. Increasingly, eCommerce sellers present heterogeneous product lines to buyers using faceted browsing. Using this approach we have developed the Ontogrator platform, which allows for rapid retrieval of data in a way that would be familiar to any online shopper. Using Knowledge Organization Systems (KOS), especially ontologies, Ontogrator uses text mining to mark up data and faceted browsing to help users navigate, query and retrieve data. Ontogrator offers the potential to impact scientific research in two major ways: 1) by significantly improving the retrieval of relevant information; and 2) by significantly reducing the time required to compose standard database queries and assemble information for further research. Here we present a pilot implementation developed in collaboration with the Genomic Standards Consortium (GSC) that includes content from the StrainInfo, GOLD, CAMERA, Silva and PubMed databases. This implementation demonstrates the power of ontogration and highlights that the usefulness of this approach is fully dependent on both the quality of data and the KOS (ontologies) used. Ideally, the use and further expansion of this collaborative system will help to surface issues associated with the underlying quality of annotation and could lead to a systematic means for accessing integrated data resources.
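
    The abstract describes the core mechanics of Ontogrator: records are marked up with ontology terms and then retrieved through faceted browsing. As a rough illustration of that idea only, the Python sketch below builds an inverted index over invented, ontology-style tags and intersects facet selections; the record identifiers, facet names and terms are placeholders and do not reflect Ontogrator's actual data model or code.

# Illustrative faceted-browsing sketch; records, facets and terms are
# invented and do not reflect Ontogrator's actual data model.
from collections import defaultdict

# Each record is marked up with ontology-style terms, grouped by facet.
records = {
    "strain-001": {"taxonomy": {"Lactobacillus"}, "habitat": {"dairy"}},
    "strain-002": {"taxonomy": {"Escherichia"}, "habitat": {"gut"}},
    "strain-003": {"taxonomy": {"Lactobacillus"}, "habitat": {"gut"}},
}

# Inverted index: (facet, term) -> ids of records carrying that term.
index = defaultdict(set)
for record_id, facets in records.items():
    for facet, terms in facets.items():
        for term in terms:
            index[(facet, term)].add(record_id)

def browse(selections):
    """Return ids of records matching every selected (facet, term) pair."""
    hits = set(records)
    for selection in selections:
        hits &= index[selection]
    return sorted(hits)

# A shopper-style query: Lactobacillus strains from gut samples.
print(browse([("taxonomy", "Lactobacillus"), ("habitat", "gut")]))
# -> ['strain-003']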

    Web scraping technologies in an API world

    Web services are the de facto standard in biomedical data integration. However, there are data integration scenarios that cannot be fully covered by Web services. A number of Web databases and tools do not support Web services, and existing Web services do not cover all possible user data demands. As a consequence, Web data scraping, one of the oldest techniques for extracting Web contents, is still in a position to offer a valid and valuable service to a wide range of bioinformatics applications, ranging from simple extraction robots to online meta-servers. This article reviews existing scraping frameworks and tools, identifying their strengths and limitations in terms of extraction capabilities. The main focus is set on showing how straightforward it is today to set up a data scraping pipeline, with minimal programming effort, to address a number of practical needs. For exemplification purposes, we introduce a biomedical data extraction scenario where the desired data sources, well known in clinical microbiology and similar domains, do not yet offer programmatic interfaces. Moreover, we describe the operation of WhichGenes and PathJam, two bioinformatics meta-servers that use scraping as a means to cope with gene set enrichment analysis.
    This work was partially funded by (i) the [TIN2009-14057-C03-02] project from the Spanish Ministry of Science and Innovation, the Plan E from the Spanish Government, and the European Union through the European Regional Development Fund (ERDF); (ii) the Portugal-Spain cooperation action sponsored by the Foundation of Portuguese Universities [E 48/11] and the Spanish Ministry of Science and Innovation [AIB2010PT-00353]; and (iii) the Agrupamento INBIOMED [2012/273] from the DXPCTSUG (Dirección Xeral de Promoción Científica e Tecnolóxica do Sistema Universitario de Galicia) of the Galician Government and the European Union through the ERDF ("a way of making Europe"). H. L. F. was supported by a pre-doctoral fellowship from the University of Vigo.
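
    To make the "minimal programming effort" claim concrete, the sketch below shows one common way to assemble a small scraping pipeline in Python with the requests and BeautifulSoup libraries. The target URL and table markup are placeholders, not the clinical-microbiology sources or the WhichGenes/PathJam internals discussed in the article, and any real scraping should respect the terms of use of the sites involved.

# Minimal scraping pipeline sketch; the URL and markup are placeholders only.
import requests
from bs4 import BeautifulSoup

URL = "https://example.org/strain-catalogue"   # hypothetical catalogue page
HEADERS = {"User-Agent": "demo-scraper/0.1"}   # identify the robot politely

def fetch_table_rows(url):
    """Download a page and return the text cells of each HTML table row."""
    response = requests.get(url, headers=HEADERS, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    rows = []
    for tr in soup.select("table tr"):
        cells = [cell.get_text(strip=True) for cell in tr.find_all(["td", "th"])]
        if cells:
            rows.append(cells)
    return rows

if __name__ == "__main__":
    for row in fetch_table_rows(URL):
        print("\t".join(row))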

    A genome-based species taxonomy of the Lactobacillus genus complex

    There are more than 200 published species within the Lactobacillus genus complex (LGC), the majority of which have sequenced type strain genomes available. Although genome-based species delimitation cutoffs are accepted as the gold standard by the community, these are seldom actually checked for new or already published species. In addition, the availability of genome data is revealing inconsistencies in the species-level classification of many strains. We constructed a de novo species taxonomy for the LGC based on 2,459 publicly available genomes, using a 94% core nucleotide identity cutoff. We reconciled these de novo species with published species and subspecies names by (i) identifying genomes of type strains and (ii) comparing 16S rRNA genes of the genomes with 16S rRNA genes of type strains. We found that genomes within the LGC could be divided into 239 de novo species that were discontinuous and exclusive. Comparison of these de novo species to published species led to the identification of nine sets of published species that can be merged and one species that can be split. Further, we found at least eight de novo species that constitute new, unpublished species. Finally, we reclassified 74 genomes on the species level and identified for the first time the species of 98 genomes. Overall, the current state of LGC species taxonomy is largely consistent with genome-based species delimitation cutoffs. There are, however, exceptions that should be resolved to evolve toward a taxonomy where species share a consistent diversity in terms of sequence divergence.
    This study was supported by the Research Foundation Flanders (grant 11A0618N), the Flanders Innovation and Entrepreneurship Agency (grants IWT-SB 141198 and IWT/50052), and the University of Antwerp (grant FFB150344).
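
    The 94% core nucleotide identity cutoff is the key operational step of the study. The sketch below shows one simple way such a cutoff can partition genomes into candidate species: link genome pairs at or above the threshold and take connected components. The pairwise identities and genome names are invented, and the published pipeline derives core-genome identities from the 2,459 assemblies rather than from a hand-written table.

# Species delimitation sketch; the pairwise identities below are invented.
CUTOFF = 0.94  # 94% core nucleotide identity, the cutoff used in the study

# Hypothetical pairwise core nucleotide identities (symmetric pairs).
identities = {
    ("genomeA", "genomeB"): 0.97,
    ("genomeA", "genomeC"): 0.88,
    ("genomeB", "genomeC"): 0.89,
    ("genomeC", "genomeD"): 0.95,
}
genomes = sorted({g for pair in identities for g in pair})

# Union-find over genomes: link pairs at or above the cutoff, then read the
# connected components off as candidate (de novo) species.
parent = {g: g for g in genomes}

def find(g):
    while parent[g] != g:
        parent[g] = parent[parent[g]]  # path compression
        g = parent[g]
    return g

for (a, b), identity in identities.items():
    if identity >= CUTOFF:
        parent[find(a)] = find(b)  # merge the two clusters

clusters = {}
for g in genomes:
    clusters.setdefault(find(g), []).append(g)

for members in clusters.values():
    print("candidate species:", members)
# -> ['genomeA', 'genomeB'] and ['genomeC', 'genomeD'] as separate species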

    Microbiological Common Language (MCL): a standard for electronic information exchange in the Microbial Commons

    Although Biological Resource Centers (BRCs) traditionally have open catalogs of their holdings, it is quite cumbersome to access meta-information about microorganisms electronically due to the variety of access methods used by those catalogs. We therefore propose the Microbiological Common Language (MCL), aimed at standardizing the electronic exchange of meta-information about microorganisms. Its applications range from representing the online catalog of a single collection to accessing the results of StrainInfo integration, as well as ad hoc use in other contexts. The abstract model of the standard precisely defines its elements, which enables implementation using a variety of representation technologies; XML and RDF/XML implementations are readily available. MCL is an open standard, and input from the microbiological community is therefore greatly encouraged.
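
    Because MCL provides both XML and RDF/XML representations, a consumer of the standard ultimately parses structured catalog records. The sketch below parses a purely hypothetical XML record with Python's standard library to illustrate that kind of exchange; the element names and values are invented and are not the actual MCL schema.

# Purely illustrative: element names are invented and do NOT follow the
# actual MCL schema; they only show meta-information exchanged as XML.
import xml.etree.ElementTree as ET

record = """
<strain>
  <strainNumber>DEMO 001</strainNumber>
  <taxonName>Examplobacter demonstrans</taxonName>
  <isolationSource>invented sample, for illustration only</isolationSource>
</strain>
"""

root = ET.fromstring(record)
info = {child.tag: (child.text or "").strip() for child in root}
print(info["strainNumber"], "-", info["taxonName"])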