
    Text mining meets community curation: a newly designed curation platform to improve author experience and participation at WormBase

    Biological knowledgebases rely on expert biocuration of the research literature to maintain up-to-date collections of data organized in machine-readable form. To enter information into knowledgebases, curators need to follow three steps: (i) identify papers containing relevant data, a process called triaging; (ii) recognize named entities; and (iii) extract and curate data in accordance with the underlying data models. WormBase (WB), the authoritative repository for research data on Caenorhabditis elegans and other nematodes, uses text mining (TM) to semi-automate its curation pipeline. In addition, WB engages its community, via an Author First Pass (AFP) system, to help recognize entities and classify data types in their recently published papers. In this paper, we present a new WB AFP system that combines TM and AFP into a single application to enhance community curation. The system employs string-searching algorithms and statistical methods (e.g. support vector machines (SVMs)) to extract biological entities and classify data types, and it presents the results to authors in a web form where they validate the extracted information rather than enter it de novo, as the previous form required. With this new system, we lessen the burden on authors while at the same time receiving valuable feedback on the performance of our TM tools. The new user interface also links out to specific structured data submission forms, e.g. for phenotype or expression pattern data, giving authors the opportunity to contribute a more detailed curation that can be incorporated into WB with minimal curator review. Our approach is generalizable and could be applied to other knowledgebases that would like to engage their user community in assisting with curation. In the five months following the launch of the new system, the response rate has been comparable to that of the previous AFP version, but the quality and quantity of the data received have greatly improved.
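
    The pipeline the abstract outlines, dictionary-based string matching for entities followed by an SVM that flags likely data types, can be illustrated with a minimal sketch. The gene dictionary, training snippets, and labels below are invented for the example and are not the actual WB AFP code; the sketch assumes scikit-learn is installed.

```python
# A minimal sketch (not WormBase's code) of the two TM steps described above:
# dictionary-based entity matching plus an SVM data-type classifier.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# String-searching step: flag known entity names in a passage.
known_genes = ["daf-16", "unc-54", "lin-12"]  # hypothetical gene dictionary
gene_pattern = re.compile(r"\b(" + "|".join(map(re.escape, known_genes)) + r")\b")

def find_entities(text: str) -> set:
    """Return the set of dictionary entities mentioned in a passage."""
    return set(gene_pattern.findall(text.lower()))

# Statistical step: an SVM predicts which data type a passage likely reports.
train_texts = [
    "worms showed an uncoordinated phenotype after RNAi",
    "GFP expression was observed in the pharynx",
]
train_labels = ["phenotype", "expression"]  # toy labels

classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(train_texts, train_labels)

passage = "daf-16 mutants showed a strong dauer phenotype"
print(find_entities(passage))         # {'daf-16'}
print(classifier.predict([passage]))  # ['phenotype'] on this toy model
```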

    Textpresso Central: a customizable platform for searching, text mining, viewing, and curating biomedical literature

    Background: The biomedical literature continues to grow at a rapid pace, making the challenge of knowledge retrieval and extraction ever greater. Tools that provide a means to search and mine the full text of literature thus represent an important way by which the efficiency of these processes can be improved.

    Results: We describe the next generation of the Textpresso information retrieval system, Textpresso Central (TPC). TPC builds on the strengths of the original system by expanding the full-text corpus to include the PubMed Central Open Access Subset (PMC OA), as well as the WormBase C. elegans bibliography. In addition, TPC allows users to create a customized corpus by uploading and processing documents of their choosing. TPC is UIMA compliant, to facilitate compatibility with external processing modules, and takes advantage of Lucene indexing and search technology for efficient handling of millions of full-text documents. As in Textpresso, TPC searches can be performed using keywords and/or categories (semantically related groups of terms), but to provide better context for interpreting and validating queries, search results may now be viewed as highlighted passages in the context of the full text. To facilitate biocuration efforts, TPC also allows users to select text spans from the full text and annotate them, create customized curation forms for any data type, and send the resulting annotations to external curation databases. As an example of such a curation form, we describe integration of TPC with the Noctua curation tool developed by the Gene Ontology (GO) Consortium.

    Conclusion: Textpresso Central is an online literature search and curation platform that enables biocurators and biomedical researchers to search and mine the full text of literature by integrating keyword and category searches with viewing search results in the context of the full text. It also allows users to create customized curation interfaces, use those interfaces to make annotations linked to supporting evidence statements, and then send those annotations to any database in the world.
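
    The keyword-plus-category search that TPC inherits from Textpresso can be illustrated without Lucene or UIMA: expand a category into its member terms, then require both the keyword and at least one category term to appear. The two-document corpus and the category lexicon below are invented for the example.

```python
# A toy illustration of combined keyword and category search, in the spirit of
# Textpresso; not TPC's actual Lucene/UIMA implementation.
corpus = {
    1: "ced-3 regulates apoptosis in the hermaphrodite germline",
    2: "unc-54 encodes a muscle myosin heavy chain",
}

# A category is a semantically related group of terms (invented example).
categories = {
    "cell death": {"apoptosis", "engulfment", "corpse"},
}

def search(keyword=None, category=None):
    """Return ids of documents matching the keyword AND any category term."""
    hits = []
    for doc_id, text in corpus.items():
        words = set(text.lower().split())
        if keyword is not None and keyword.lower() not in words:
            continue
        if category is not None and not (categories[category] & words):
            continue
        hits.append(doc_id)
    return hits

print(search(keyword="ced-3", category="cell death"))  # [1]
```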

    Preliminary evaluation of the CellFinder literature curation pipeline for gene expression in kidney cells and anatomical parts

    Biomedical literature curation is the process of automatically and/or manually deriving knowledge from scientific publications and recording it into specialized databases for structured delivery to users. It is a slow, error-prone, complex, costly and, yet, highly important task. Previous experience has shown that text mining can assist in many of its phases, especially the triage of relevant documents and the extraction of named entities and biological events. Here, we present the curation pipeline of the CellFinder database, a repository of cell research, which includes data derived from literature curation and microarrays to identify cell types, cell lines, organs and so forth, and especially patterns in gene expression. The curation pipeline is based on freely available tools in all text mining steps, as well as manual validation of the extracted data. Preliminary results are presented for a data set of 2376 full texts, from which >4500 gene expression events in cells or anatomical parts have been extracted. Validation of half of these data resulted in a precision of ~50%, which indicates that we are on the right track with our pipeline for the proposed task. However, evaluation of the methods shows that there is still room for improvement in named-entity recognition and that a larger and more robust corpus is needed to achieve better performance for event extraction. Database URL: http://www.cellfinder.org
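
    A rule of the kind that recovers "gene expressed in anatomical part" events, together with the precision calculation the abstract reports, can be sketched as follows. The pattern, sentences, and validation counts are invented for illustration and are not CellFinder's actual components.

```python
# A minimal sketch of rule-based gene expression event extraction and the
# precision computation from manual validation; all values are illustrative.
import re

EVENT = re.compile(r"([\w-]+) (?:is|was) expressed in (?:the )?([\w ]+?)[.,]")

text = "Nephrin was expressed in the podocytes. CD24 is expressed in tubule cells."
events = EVENT.findall(text)
print(events)  # [('Nephrin', 'podocytes'), ('CD24', 'tubule cells')]

# Manual validation of a sample: precision = correct events / extracted events.
validated_correct, validated_total = 1130, 2250  # invented counts (~50%)
print(f"precision = {validated_correct / validated_total:.2f}")  # 0.50
```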

    Organizing knowledge to enable personalization of medicine in cancer

    Interpretation of the clinical significance of genomic alterations remains the most severe bottleneck preventing the realization of personalized medicine in cancer. We propose a knowledge commons to facilitate collaborative contributions and open discussion of clinical decision-making based on genomic events in cancer.

    HDNetDB: A Molecular Interaction Database for Network-Oriented Investigations into Huntington’s Disease

    Huntington's disease (HD) is a progressive and fatal neurodegenerative disorder caused by an expanded CAG repeat in the huntingtin gene. Although HD is monogenic, its molecular manifestation appears highly complex and involves multiple cellular processes. The recent application of high-throughput platforms such as microarrays and mass spectrometry has indicated multiple pathogenic routes. The massive data generated by these techniques, together with the complexity of the pathogenesis, however, pose considerable challenges to researchers. Network-based methods can provide valuable tools to consolidate newly generated data with existing knowledge, and to decipher the interwoven molecular mechanisms underlying HD. To facilitate research on HD in a network-oriented manner, we have developed HDNetDB, a database that integrates molecular interactions with many HD-relevant datasets. It allows users to obtain, visualize and prioritize molecular interaction networks using HD-relevant gene expression, phenotypic and other types of data obtained from human samples or model organisms. We illustrated several HDNetDB functionalities through a case study and identified proteins that constitute potential cross-talk between HD and the unfolded protein response (UPR). HDNetDB is publicly accessible at http://hdnetdb.sysbiolab.eu.

    Funding: CHDI Foundation [A-2666]; Portuguese Fundacao para a Ciencia e a Tecnologia [SFRH/BPD/70718/2010, SFRH/BPD/96890/2013, IF/00881/2013, UID/BIM/04773/2013 - CBMR, UID/Multi/04326/2013 - CCMAR].
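
    The network-oriented workflow the database supports, overlaying expression data on an interaction network and prioritizing candidates, can be sketched with networkx. The interactions and fold-changes below are invented toy data; HDNetDB's own datasets and scoring are richer than this.

```python
# A hedged sketch of network-based prioritization around the huntingtin gene
# (HTT); edges and fold-changes are invented, not HDNetDB content.
import networkx as nx

network = nx.Graph()
network.add_edges_from([("HTT", "HAP1"), ("HTT", "CREBBP"), ("HAP1", "KALRN")])

# Hypothetical HD expression dataset: log2 fold-changes per gene.
fold_change = {"HAP1": 2.1, "CREBBP": -1.8, "KALRN": 0.3}

# Prioritize direct HTT interactors by absolute dysregulation.
ranked = sorted(network.neighbors("HTT"),
                key=lambda gene: abs(fold_change.get(gene, 0.0)),
                reverse=True)
print(ranked)  # ['HAP1', 'CREBBP']
```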

    The fully automated construction of metabolic pathways using text mining and knowledge-based constraints

    Understanding metabolic pathways is one of the most important fields in bioscience in the post-genomic era, but curating metabolic pathways requires considerable manpower. As a result, databases lack reliable, experimentally verified metabolic pathways and are forced to predict all but the most immediately useful pathways by inheriting annotations from other organisms in which the pathway has been curated. Owing to this lack of curated data, there has been no large-scale study assessing the accuracy of current methods for inheriting metabolic pathway annotations. In this thesis I describe the development of the Literature Metabolic Pathway Extraction Tool (LiMPET), a text-mining tool designed for the automated extraction of metabolic pathways from article abstracts and full-text open-access articles. I propose the use of LiMPET by metabolic pathway curators to increase the rate of curation, and by individual researchers interested in a particular pathway. The mining of metabolic pathways from the literature has been largely neglected by the text-mining community. The work described in this thesis shows the tractability of the problem, however, and it is my hope that it attracts more research into the area.
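
    The general idea, mining conversion statements from text and chaining them into a pathway while a knowledge-based constraint filters out implausible links, can be sketched in a few lines. The sentences, metabolite list, and pattern are invented for the example and are not LiMPET's actual rules.

```python
# A toy sketch of pathway extraction with a knowledge-based constraint;
# all sentences, metabolites, and the pattern itself are illustrative.
import re

sentences = [
    "Glucose is converted to glucose-6-phosphate by hexokinase.",
    "Glucose-6-phosphate is converted to fructose-6-phosphate.",
    "Glucose is converted to sunlight.",  # nonsense the constraint should block
]

known_metabolites = {"glucose", "glucose-6-phosphate", "fructose-6-phosphate"}
step = re.compile(r"([\w-]+) is converted to ([\w-]+)", re.IGNORECASE)

pathway = []
for sentence in sentences:
    match = step.search(sentence)
    if match is None:
        continue
    substrate, product = match.group(1).lower(), match.group(2).lower()
    # Knowledge-based constraint: both ends must be known metabolites.
    if substrate in known_metabolites and product in known_metabolites:
        pathway.append((substrate, product))

print(pathway)
# [('glucose', 'glucose-6-phosphate'), ('glucose-6-phosphate', 'fructose-6-phosphate')]
```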

    Research Data Management Practices And Impacts on Long-term Data Sustainability: An Institutional Exploration

    With the 'data deluge' leading to an institutionalized research environment for data management, U.S. academic faculty have increasingly faced pressure to deposit research data into open online data repositories, which, in turn, is engendering a new set of practices to adapt formal mandates to local circumstances. When these practices involve reorganizing workflows to align the goals of local and institutional stakeholders, we might call them 'data articulations.' This dissertation uses interviews to establish a grounded understanding of the data articulations behind deposit in three studies: (1) a phenomenological study of genomics faculty data management practices; (2) a grounded theory study developing a theory of data deposit as articulation work in genomics; and (3) a comparative case study of genomics and social science researchers to identify factors associated with the institutionalization of research data management (RDM). The findings of this research offer an in-depth understanding of the data management and deposit practices of academic research faculty and surface institutional factors associated with data deposit. Additionally, the studies led to a theoretical framework of data deposit to open research data repositories. The empirical insights into the impacts of institutionalization of RDM and data deposit on long-term data sustainability update our knowledge of the effects of increasing guidelines for RDM. The work also contributes to the body of data management literature through the development of the data articulation framework, which can be applied and further validated by future work. In terms of practice, the studies offer recommendations for data policymakers, data repositories, and researchers on defining strategies and initiatives to leverage data reuse and employ computational approaches to support data management and deposit.