
    Automatic categorization of diverse experimental information in the bioscience literature

    Background: Curation of information from bioscience literature into biological knowledge databases is a crucial way of capturing experimental information in a computable form. During the biocuration process, a critical first step is to identify, from all published literature, the papers that contain results for a specific data type the curator is interested in annotating. This step normally requires curators to manually examine many papers to ascertain which few contain information of interest, and is thus usually time-consuming. We developed an automatic method, based on the Support Vector Machine (SVM) machine learning method, for identifying papers containing these curation data types among a large pool of published scientific papers. This classification system is completely automatic and can be readily applied to diverse experimental data types. It has been in production use for automatic categorization of 10 different experimental data types in the biocuration process at WormBase for the past two years, and it is in the process of being adopted in the biocuration processes at FlyBase and the Saccharomyces Genome Database (SGD). We anticipate that this method can be readily adopted by various databases in the biocuration community, thereby greatly reducing the time spent on an otherwise laborious and demanding task. We also developed a simple, readily automated procedure that uses training papers of similar data types from different bodies of literature, such as C. elegans and D. melanogaster, to identify papers with any of these data types for a single database. This approach is significant because for some data types, especially those of low occurrence, a single corpus often does not have enough training papers to achieve satisfactory performance.

    Results: We successfully tested the method on ten data types from WormBase, fifteen data types from FlyBase and three data types from Mouse Genome Informatics (MGI). It is being used in the curation workflow at WormBase for automatic association of newly published papers with ten data types including RNAi, antibody, phenotype, gene regulation, mutant allele sequence, gene expression, gene product interaction, overexpression phenotype, gene interaction, and gene structure correction.

    Conclusions: Our methods are applicable to a variety of data types with training sets containing several hundred to a few thousand documents. The method is completely automatic and can thus be readily incorporated into different workflows at different literature-based databases. We believe that the work presented here can contribute greatly to the tremendous task of automating the important yet labor-intensive biocuration effort.
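
To make the classification scheme concrete, the following is a minimal sketch of SVM-based paper triage for a single data type, written with scikit-learn. The TF-IDF features, LinearSVC settings and example texts are illustrative assumptions, not the production WormBase pipeline, which uses its own feature extraction and thresholds.

```python
# Minimal sketch of SVM-based document triage for one curation data type.
# Overall scheme: vectorize full-text papers, train a binary classifier,
# then rank newly published papers by their decision score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Hypothetical training corpus: paper texts labelled 1 if they contain the
# target data type (e.g. RNAi results), 0 otherwise.
train_texts = ["...full text of a curated RNAi paper...",
               "...full text of a paper without RNAi data..."]
train_labels = [1, 0]

classifier = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english", ngram_range=(1, 2)),
    LinearSVC(C=1.0),
)
classifier.fit(train_texts, train_labels)

# New papers are scored and flagged for the curator when the decision
# function exceeds a threshold tuned on held-out data (0.0 here).
new_papers = ["...full text of a newly published paper..."]
scores = classifier.decision_function(new_papers)
flagged = [text for text, score in zip(new_papers, scores) if score > 0.0]
```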

    UIMA in the Biocuration Workflow: A coherent framework for cooperation between biologists and computational linguists

    As collaborating partners, Barcelona Media Innovation Centre and GRIB (Universitat Pompeu Fabra) seek to combine strengths from Computational Linguistics and Biomedicine to produce a robust Text Mining system to generate data that will help biocurators in their daily work. The first version of this system will focus on the discovery of relationships between genes, SNPs (Single Nucleotide Polymorphisms) and diseases from the literature.

A first challenge we faced during the setup of this project is that most current tools supporting the curation workflow are complex, ad hoc applications, which can make interoperability and the sharing of results between research groups from different and unrelated fields of expertise difficult. Often, biologists (even computer-savvy ones) are hard pressed to use and adapt sophisticated Natural Language Processing systems, and computational linguists are challenged by the intricacies of biology when applying their processing pipelines to elicit knowledge from texts. The flow of knowledge (needed to develop a usable, practical tool) to and from the parties involved in the development of such systems is not always easy or straightforward.

The modular and versatile architecture of UIMA (Unstructured Information Management Architecture) provides a framework to address these challenges. UIMA is a component architecture and software framework implementation (including a UIMA SDK) for developing applications that analyse large volumes of unstructured information, and it has been increasingly adopted by a significant part of the BioNLP community that needs industrial-grade, robust applications to exploit the whole bibliome. The use of UIMA to develop Text Mining applications useful for curation purposes allows the combination of diverse expertise beyond the individual know-how of biologists, computer scientists or linguists in isolation. Good synergy and circulation of knowledge between these experts is fundamental to the development of a successful curation tool.
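
UIMA itself is a Java component framework, so the toy Python sketch below only illustrates the modular idea the abstract relies on: independent analysis components that each add typed annotations to a shared document representation, so that components built by linguists and biologists can be chained or swapped without rewriting the rest of the pipeline. All class and entity names here are hypothetical, not the UIMA SDK API.

```python
# Toy illustration (not the UIMA SDK) of the modular annotator idea:
# each component reads a shared document object and adds typed annotations.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    begin: int
    end: int
    type: str          # e.g. "Gene", "SNP", "Disease", "Relation"

@dataclass
class Document:
    text: str
    annotations: list[Annotation] = field(default_factory=list)

class GeneAnnotator:
    """Hypothetical dictionary-based gene mention annotator."""
    GENES = {"BRCA1", "TP53"}

    def process(self, doc: Document) -> None:
        for gene in self.GENES:
            start = doc.text.find(gene)
            if start != -1:
                doc.annotations.append(Annotation(start, start + len(gene), "Gene"))

def run_pipeline(doc: Document, annotators) -> Document:
    # The framework's job is simply to push the shared document through
    # each analysis component in order.
    for annotator in annotators:
        annotator.process(doc)
    return doc

doc = run_pipeline(Document("TP53 mutations are linked to several cancers."),
                   [GeneAnnotator()])
print(doc.annotations)
```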

    Systematic analysis of primary sequence domain segments for the discrimination between class C GPCR subtypes

    G-protein-coupled receptors (GPCRs) are a large and diverse super-family of eukaryotic cell membrane proteins that play an important physiological role as transmitters of extracellular signals. In this paper, we investigate Class C, a member of this super-family that has attracted much attention in pharmacology. The limited knowledge about the complete 3D crystal structure of Class C receptors makes it necessary to use their primary amino acid sequences for analytical purposes. Here, we provide a systematic analysis of distinct receptor sequence segments with regard to their ability to differentiate between seven Class C GPCR subtypes according to their topological location in the extracellular, transmembrane, or intracellular domains. We build on results from previous research that provided preliminary evidence of the potential use of separated domains of complete Class C GPCR sequences as the basis for subtype classification. The use of the extracellular N-terminus domain alone was shown to result in a minor decrease in subtype discrimination in comparison with the complete sequence, despite discarding much of the sequence information. In this paper, we describe the use of Support Vector Machine-based classification models to evaluate the subtype-discriminating capacity of the specific topological sequence segments.
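
As an illustration of the classification setup described above, the sketch below trains an SVM on amino-acid-composition features computed from a single sequence segment (for example, the extracellular N-terminus). The feature choice, kernel parameters and example sequences are assumptions for illustration; the paper evaluates its own sequence transformations and segment definitions.

```python
# Minimal sketch: classify Class C GPCR subtypes from one sequence segment
# using simple amino acid composition features and an SVM.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(segment: str) -> np.ndarray:
    """Relative frequency of each of the 20 amino acids in the segment."""
    counts = np.array([segment.count(aa) for aa in AMINO_ACIDS], dtype=float)
    return counts / max(len(segment), 1)

# Hypothetical training data: N-terminal segments with their subtype labels
# (the paper distinguishes seven Class C subtypes).
segments = ["MKTLLLLAVVLPLSAA", "MGAGARALLLALLLAR"]
subtypes = ["mG", "GB"]

X = np.vstack([composition(s) for s in segments])
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X, subtypes)

# Predict the subtype of a new, hypothetical N-terminal segment.
print(clf.predict(composition("MKWVLPLLLSALAGAQ").reshape(1, -1)))
```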

    The Data Management Skills Support Initiative: synthesising postgraduate training in research data management

    This paper will describe the efforts and findings of the JISC Data Management Skills Support Initiative (‘DaMSSI’). DaMSSI was co-funded by the JISC Managing Research Data programme and the Research Information Network (RIN), in partnership with the Digital Curation Centre, to review, synthesise and augment the training offerings of the JISC Research Data Management Training Materials (‘RDMTrain’) projects.

    DaMSSI tested the effectiveness of the Society of College, National and University Libraries’ Seven Pillars of Information Literacy model (SCONUL, 2011), and Vitae’s Researcher Development Framework (‘Vitae RDF’) for consistently describing research data management (‘RDM’) skills and skills development paths in UK HEI postgraduate courses.

    With the collaboration of the RDMTrain projects, we mapped individual course modules to these two models and identified basic generic data management skills alongside discipline-specific requirements. A synthesis of the training outputs of the projects was then carried out, which further investigated the generic versus discipline-specific considerations and other successful approaches to training that had been identified as a result of the projects’ work. In addition, we produced a series of career profiles to help illustrate the fact that data management is an essential component, in obvious and not-so-obvious ways, of a wide range of professions.

    We found that both models had potential for consistently and coherently describing data management skills training and embedding this within broader institutional postgraduate curricula. However, we feel that additional discipline-specific references to data management skills could also be beneficial for effective use of these models. Our synthesis work identified that the majority of core skills were generic across disciplines at the postgraduate level, with the discipline-specific approach showing its value in engaging the audience and providing context for the generic principles.

    Findings were fed back to SCONUL and Vitae to help in the refinement of their respective models, and we are working with a number of other projects, such as the DCC and the EC-funded Digital Curator Vocational Education Europe (DigCurV2) initiative, to investigate ways to take forward the training profiling work we have begun.

    Text mining meets community curation: a newly designed curation platform to improve author experience and participation at WormBase

    Biological knowledgebases rely on expert biocuration of the research literature to maintain up-to-date collections of data organized in machine-readable form. To enter information into knowledgebases, curators need to follow three steps: (i) identify papers containing relevant data, a process called triaging; (ii) recognize named entities; and (iii) extract and curate data in accordance with the underlying data models. WormBase (WB), the authoritative repository for research data on Caenorhabditis elegans and other nematodes, uses text mining (TM) to semi-automate its curation pipeline. In addition, WB engages its community, via an Author First Pass (AFP) system, to help recognize entities and classify data types in their recently published papers. In this paper, we present a new WB AFP system that combines TM and AFP into a single application to enhance community curation. The system employs string-searching algorithms and statistical methods (e.g. support vector machines (SVMs)) to extract biological entities and classify data types, and it presents the results to authors in a web form where they validate the extracted information rather than entering it de novo, as the previous form required. With this new system, we lessen the burden on authors while at the same time receiving valuable feedback on the performance of our TM tools. The new user interface also links out to specific structured data submission forms, e.g. for phenotype or expression pattern data, giving authors the opportunity to contribute a more detailed curation that can be incorporated into WB with minimal curator review. Our approach is generalizable and could be applied to additional knowledgebases that would like to engage their user community in assisting with curation. In the five months following the launch of the new system, the response rate has been comparable with that of the previous AFP version, but the quality and quantity of the data received have greatly improved.
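
The sketch below illustrates the entity-extraction step of such a pipeline in its simplest form: matching curated entity names against a paper's text and presenting the hits to the author for validation. The exact-match lookup and the gene and allele lists are illustrative assumptions; the production AFP system relies on dedicated string-searching algorithms and the SVM classifiers described above.

```python
# Minimal sketch of dictionary-based entity extraction for author validation.
import re

# Hypothetical excerpts from curated WormBase entity lists.
GENE_NAMES = {"daf-2", "daf-16", "unc-22"}
ALLELE_NAMES = {"e1370"}

def extract_entities(paper_text: str) -> dict[str, set[str]]:
    # Tokenize on alphanumeric runs (keeping hyphens) and intersect with
    # the curated name lists.
    tokens = set(re.findall(r"[A-Za-z0-9\-]+", paper_text))
    return {
        "genes": GENE_NAMES & tokens,
        "alleles": ALLELE_NAMES & tokens,
    }

paper = "We show that daf-2(e1370) mutants require daf-16 for lifespan extension."
candidates = extract_entities(paper)

# These candidates would be pre-filled in the AFP web form for the author
# to confirm, amend, or reject before curator review.
print(candidates)  # e.g. {'genes': {'daf-2', 'daf-16'}, 'alleles': {'e1370'}}
```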

    Integrative biological simulation praxis: Considerations from physics, philosophy, and data/model curation practices

    Integrative biological simulations have a varied and controversial history in the biological sciences. From computational models of organelles, cells, and simple organisms, to physiological models of tissues, organ systems, and ecosystems, a diverse array of biological systems have been the target of large-scale computational modeling efforts. Nonetheless, these research agendas have yet to decisively prove their value among the broader community of theoretical and experimental biologists. In this commentary, we examine a range of philosophical and practical issues relevant to understanding the potential of integrative simulations. We discuss the role of theory and modeling in different areas of physics and suggest that certain sub-disciplines of physics provide useful cultural analogies for imagining the future role of simulations in biological research. We examine philosophical issues related to modeling that consistently arise in discussions about integrative simulations and suggest a pragmatic viewpoint that balances a belief in philosophy with recognition of the relative infancy of our philosophical understanding. Finally, we discuss community workflow and publication practices that would allow research to be readily discoverable and amenable to incorporation into simulations. We argue that there are aligned incentives in the widespread adoption of practices that will advance the needs of integrative simulation efforts as well as other contemporary trends in the biological sciences, ranging from open science and data sharing to improving reproducibility.

    Metacuration Standards and Minimum Information about a Bioinformatics Investigation

    Many bioinformatics databases published in journals are here this year and gone the next. There is generally (i) no requirement, mandatory or otherwise, by reviewers, editors or publishers for full disclosure of how databases are built and how they are maintained; (ii) no standardized requirement for data in public-access databases to be kept as backup for release and access when a project ends, when funds expire or a website terminates; and (iii) in the case of proprietary resources, no requirement for data to be kept in escrow for release under stated conditions, such as when a published database disappears due to company closure. Consequently, many of the biological databases published in the past twenty years are easily lost, even though the publications describing or referencing these databases and web services remain. Given the volume of publications today, even if it were practically possible for reviewers to re-create databases as described in a manuscript, there is usually insufficient disclosure and raw data for this to be done, even when sufficient time and resources are available. Consequently, verification and validation are assumed, and the claims of a paper are accepted as true and correct at face value. A solution to this growing problem is to experiment with some kind of minimum standards of reporting, such as the Minimum Information About a Bioinformatics Investigation (MIABi), and standardized requirements for data deposition and escrow to enable persistence and reproducibility. With the easy availability of cloud computing, such a level of reproducibility can become a reality in the near term. Through standards in meta-curation and minimum standards of reporting that uphold the tenets of scientific reproducibility, verifiability, sustainability and continuity of data resources, the knowledge preserved will underpin tomorrow's scientific research. Other issues include the disambiguation of authors or database names, and unique identifiers to support non-repudiability, possibly in multiple languages. The International Conference on Bioinformatics and its publications are now attempting to address these issues, and this presentation will highlight some of the current efforts.