15 research outputs found

    Integrating protein-protein interactions and text mining for protein function prediction

    Background: Functional annotation of proteins remains a challenging task. Currently the scientific literature serves as the main source for as-yet-uncurated functional annotations, but curation work is slow and expensive, and automatic techniques that support it still lack reliability. We developed a method to identify conserved protein interaction graphs and to predict missing protein functions from orthologs in these graphs. To enhance the precision of the results, we also implemented a procedure that validates all predictions against findings reported in the literature. Results: Using this procedure, more than 80% of the GO annotations available in UniProtKB/Swiss-Prot for proteins with highly conserved orthologs could be verified automatically. For a subset of proteins we predicted new GO annotations that were not available in UniProtKB/Swiss-Prot. All predictions were correct (100% precision) according to verification by a trained curator. Conclusion: Our method of integrating conserved interaction graphs (CCSs) and literature mining is thus a highly reliable approach to predicting GO annotations for weakly characterized proteins with orthologs.
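    To make the pipeline concrete, here is a minimal Python sketch of the two-step idea, assuming illustrative inputs: GO terms are transferred from orthologs that sit in a conserved interaction graph, and a candidate annotation is kept only if a literature-validation check supports it. The names ortholog_map, go_annotations, and literature_supports are assumptions for illustration, not the authors' actual interfaces.

    # Hypothetical sketch: transfer GO terms across orthologs in a conserved
    # interaction graph, keeping only literature-validated predictions.
    def predict_go_annotations(protein, ortholog_map, go_annotations,
                               literature_supports):
        """Transfer GO terms from orthologs; keep literature-validated ones."""
        predictions = set()
        for ortholog in ortholog_map.get(protein, []):
            for go_term in go_annotations.get(ortholog, set()):
                # Validate each candidate against the literature before
                # accepting it, mirroring the precision-oriented step.
                if literature_supports(protein, go_term):
                    predictions.add(go_term)
        return predictions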

    Semi-automated curation of protein subcellular localization: a text mining-based approach to Gene Ontology (GO) Cellular Component curation

    Background: Manual curation of experimental data from the biomedical literature is an expensive and time-consuming endeavor. Nevertheless, most biological knowledge bases still rely heavily on manual curation for data extraction and entry. Text mining software that can semi- or fully automate information retrieval from the literature would thus provide a significant boost to manual curation efforts. Results: We employ the Textpresso category-based information retrieval and extraction system (http://www.textpresso.org), developed by WormBase, to explore how Textpresso might improve the efficiency with which we manually curate C. elegans proteins to the Gene Ontology's Cellular Component Ontology. Using a training set of sentences that describe results of localization experiments in the published literature, we generated three new curation task-specific categories (Cellular Components, Assay Terms, and Verbs) containing words and phrases associated with reports of experimentally determined subcellular localization. We compared the results of manual curation to those of Textpresso queries that searched the full text of articles for sentences containing terms from each of the three new categories plus the name of a previously uncurated C. elegans protein, and found that Textpresso searches identified curatable papers with recall and precision rates of 79.1% and 61.8%, respectively (F-score of 69.5%), when compared to manual curation. Within those documents, Textpresso identified relevant sentences with recall and precision rates of 30.3% and 80.1% (F-score of 44.0%). From returned sentences, curators were able to make 66.2% of all possible experimentally supported GO Cellular Component annotations with 97.3% precision (F-score of 78.8%). Measuring the relative efficiencies of Textpresso-based versus manual curation, we find that Textpresso has the potential to increase curation efficiency by at least 8-fold, and perhaps as much as 15-fold, given differences in individual curatorial speed. Conclusion: Textpresso is an effective tool for improving the efficiency of manual, experimentally based curation. Incorporating a Textpresso-based Cellular Component curation pipeline at WormBase has allowed us to transition from strictly manual curation of this data type to a more efficient pipeline of computer-assisted validation. Continued development of curation task-specific Textpresso categories will provide an invaluable resource for genomics databases that rely heavily on manual curation.
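    As an illustration of the category-based query logic described above, the sketch below accepts a sentence only if it mentions the target protein and contains at least one term from each of the three categories. The term lists are tiny placeholders, not Textpresso's actual category vocabularies.

    # Placeholder stand-ins for the three curation task-specific categories.
    CATEGORIES = {
        "cellular_component": {"nucleus", "mitochondria", "plasma membrane"},
        "assay_term": {"gfp", "immunostaining", "antibody"},
        "verb": {"localizes", "expressed", "detected"},
    }

    def matches_query(sentence, protein):
        """True if the sentence names the protein and hits every category."""
        text = sentence.lower()
        if protein.lower() not in text:
            return False
        # Require at least one hit from every category, like the combined query.
        return all(any(term in text for term in terms)
                   for terms in CATEGORIES.values())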

    The High Throughput Sequence Annotation Service (HT-SAS) – the shortcut from sequence to true Medline words

    Background: Advances in the high-throughput technologies available to modern biology have created an increasing flood of experimentally determined facts. Ordering, managing and describing these raw results is the first step that allows facts to become knowledge. Currently there are limited ways to annotate such data automatically, especially using information deposited in the published literature. Results: To aid researchers in describing results from high-throughput experiments, we developed HT-SAS, a web service for automatic annotation of proteins using general English words. For each protein, a pool of Medline abstracts linked to homologous proteins is gathered using the UniProt-Medline link. Overrepresented words are detected using a binomial-statistics approximation. We tested our automatic approach on a protein test set from SGD to determine its accuracy and usefulness. We also applied the service to improve the annotations of Plasmodium berghei proteins expressed exclusively during the blood stage. Conclusion: Using HT-SAS we created new annotations, or enriched already established ones, for over 20% of the blood-stage Plasmodium berghei proteins deposited in PlasmoDB. Our tests show that this approach to information extraction provides highly specific keywords, often even when the number of abstracts is limited. Our service should be useful for manual curators, as a complement to manually curated information sources, and for researchers working with protein datasets, especially from poorly characterized organisms.
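    The word-scoring step lends itself to a short sketch: for each word in the abstracts pooled for a protein, a one-sided binomial test compares the observed count against the word's background frequency in Medline. The background frequencies, the default rate for unseen words, and the significance cutoff below are assumptions.

    from scipy.stats import binom

    def overrepresented_words(word_counts, n_words, background_freq, alpha=0.01):
        """Return words whose counts exceed the binomial expectation."""
        keywords = {}
        for word, k in word_counts.items():
            p = background_freq.get(word, 1e-6)  # assumed background rate
            # P(X >= k) under Binomial(n_words, p); sf(k - 1) is the upper tail.
            p_value = binom.sf(k - 1, n_words, p)
            if p_value < alpha:
                keywords[word] = p_value
        return keywords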

    Instant-on scientific data warehouses: Lazy ETL for data-intensive research

    In the dawning era of data-intensive research, scientific discovery deploys data analysis techniques similar to those that drive business intelligence. As in classical Extract, Transform and Load (ETL) processes, data is loaded entirely from external data sources (repositories) into a scientific data warehouse before it can be analyzed. This process is both time- and resource-intensive, and may be largely unnecessary if only a subset of the data is of interest to a particular user. To overcome this problem, we propose a novel technique to lower the cost of data loading: Lazy ETL. Data is extracted and loaded transparently, on the fly, and only for the data items a query actually requires. Extensive experiments demonstrate a significant reduction in the time from source-data availability to query answer compared to state-of-the-art solutions. In addition to reducing the cost of bootstrapping a scientific data warehouse, our approach also reduces the cost of loading newly arriving data.
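    A toy Python sketch of the lazy-loading principle, under simplifying assumptions (sources are CSV files with identifier-safe headers, mapped to table names, and the caller names the tables a query touches). The actual Lazy ETL technique operates inside the database engine; this only illustrates extracting and loading data on first use.

    import csv
    import sqlite3

    class LazyWarehouse:
        """Loads a source table only the first time a query touches it."""

        def __init__(self, sources):
            self.conn = sqlite3.connect(":memory:")
            self.sources = sources  # table name -> CSV path (assumed layout)
            self.loaded = set()

        def _ensure_loaded(self, table):
            if table in self.loaded:
                return
            with open(self.sources[table], newline="") as f:
                rows = list(csv.reader(f))
            header, data = rows[0], rows[1:]
            self.conn.execute(f"CREATE TABLE {table} ({', '.join(header)})")
            marks = ", ".join("?" for _ in header)
            self.conn.executemany(
                f"INSERT INTO {table} VALUES ({marks})", data)
            self.loaded.add(table)

        def query(self, sql, tables):
            # Extract and load only the tables this query actually needs.
            for table in tables:
                self._ensure_loaded(table)
            return self.conn.execute(sql).fetchall()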

    Mining Host-Pathogen Interactions


    A Comprehensive Benchmark of Kernel Methods to Extract Protein–Protein Interactions from Literature

    The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein–protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed: convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings, and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmark of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, it also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied, and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study shows that three kernels are clearly superior to the other methods.
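    The cross-corpus protocol is easy to mimic: train on one corpus, test on a different one for every ordered pair, so no information can leak between training and test data. The sketch below uses a simple bag-of-words classifier as a stand-in for the kernel methods benchmarked here; corpus names and labels are illustrative.

    from itertools import permutations
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import f1_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    def cross_corpus_f1(corpora):
        """corpora: dict name -> (sentences, labels); returns pairwise F-scores."""
        results = {}
        for train, test in permutations(corpora, 2):
            X_train, y_train = corpora[train]
            X_test, y_test = corpora[test]
            # Train on one corpus, evaluate on another: no leakage possible.
            model = make_pipeline(TfidfVectorizer(), LinearSVC())
            model.fit(X_train, y_train)
            results[(train, test)] = f1_score(y_test, model.predict(X_test))
        return results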

    Extracting biomedical relations from biomedical literature

    Master's thesis in Bioinformatics and Computational Biology, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2018. Science, and the biomedical field especially, is witnessing a growth in knowledge at a rate with which clinicians and researchers struggle to keep up. Scientific evidence spread across multiple types of scientific publications, the richness of mentions of etiology, molecular mechanisms, anatomical sites, and other biomedical terminology that is not uniform across different writings, among other constraints, have encouraged the application of text mining methods to the systematic reviewing process. This work aims to test the positive impact that text mining tools, together with controlled vocabularies (as a way of organizing knowledge to support later information collection), have on the systematic reviewing process, through a system capable of creating a classification model whose training is based on a controlled vocabulary (MeSH) and that can be applied to a variety of biomedical literature.
    For that purpose, this project was divided into two distinct tasks: the creation of a system, consisting of a tool that searches PubMed for scientific articles and saves them according to pre-defined labels, plus another tool that classifies a set of articles; and the analysis of the results obtained by the created system when applied to two different practical cases. The system was evaluated through a series of tests using datasets whose classifications were previously known, allowing the obtained results to be confirmed. Afterwards, the system was tested on two independently created datasets that had been manually curated by researchers working in the field of study. This last form of evaluation achieved precision scores ranging from 68% to 81%. The results emphasize the use of text mining tools, along with controlled vocabularies such as MeSH, as a way to create more complex and comprehensive queries that improve performance on classification problems such as those addressed in this work.
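    A rough sketch of the two tools described above, under assumptions: PubMed is queried through Biopython's Entrez interface, each article inherits as its label the MeSH-based query that retrieved it, and a bag-of-words classifier is trained on those labels. The queries, e-mail address, and model choice are illustrative, not the thesis's actual configuration.

    from Bio import Entrez
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    Entrez.email = "curator@example.org"  # placeholder address

    def fetch_abstracts(mesh_query, retmax=100):
        """Retrieve abstract texts for a MeSH-based PubMed query."""
        handle = Entrez.esearch(db="pubmed", term=mesh_query, retmax=retmax)
        ids = Entrez.read(handle)["IdList"]
        handle = Entrez.efetch(db="pubmed", id=",".join(ids),
                               rettype="abstract", retmode="text")
        return handle.read().split("\n\n")

    # Save articles under pre-defined MeSH labels, then train a classifier.
    texts, labels = [], []
    for label in ["Neoplasms[MeSH Terms]", "Malaria[MeSH Terms]"]:
        for abstract in fetch_abstracts(label):
            texts.append(abstract)
            labels.append(label)

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)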

    Systematic analysis of protein complexes involved in the human RNA polymerase II machinery

    Genomes encode most of the functions necessary for cell growth and differentiation. Gene transcription, RNA processing, and chromatin remodeling are central processes in the interpretation of the information contained in genomic DNA. Although many of the protein complexes forming the cellular machinery that interprets mammalian genomes have been studied, a number of additional complexes remain to be identified and characterized. Using proteomic approaches, Dr. Benoit Coulombe's laboratory purified many components of the RNAPII transcription machinery by tandem affinity purification (TAP), a procedure that allows the isolation of protein complexes as they likely exist in live mammalian cells and the identification of interaction partners by mass spectrometry. High-confidence interactions were selected computationally and used to draw the map of a network connecting many components of the mRNA transcription machinery. By applying this procedure, our lab identified, for the first time, a group of proteins that interacts both physically and functionally with human RNAPII and whose properties suggest a role in the assembly of multi-subunit complexes, acting as RNAPII-specific scaffolding proteins and chaperones. The aim of my project was to continue the characterization of this network of protein complexes involving transcription factors, thus furthering our survey of protein complexes in whole-cell extracts.
    Eight novel RNAPII interaction partners (PIH1D1, GPN3, WDR92, PFDN2, KIAA0406, PDRG1, CCT4 and CCT5) were purified using the TAP method, and their interaction partners were identified by mass spectrometry. Over the years, our lab's analysis of transcriptional regulation and mechanisms has contributed novel and important knowledge that provides a better understanding of mRNA synthesis. This knowledge is paramount to the development of therapeutics that will target transcriptional mechanisms.
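    The computational selection step mentioned above can be illustrated with a short sketch that keeps only high-confidence interactions and assembles them into a network map. The scoring scheme, threshold, use of networkx, and the example scores are all assumptions for illustration.

    import networkx as nx

    def build_interaction_network(interactions, min_score=0.75):
        """interactions: iterable of (bait, prey, confidence) triples."""
        graph = nx.Graph()
        for bait, prey, score in interactions:
            if score >= min_score:  # retain high-confidence pairs only
                graph.add_edge(bait, prey, confidence=score)
        return graph

    # Example with made-up confidence scores for illustration:
    edges = [("POLR2A", "GPN3", 0.9), ("POLR2A", "WDR92", 0.8),
             ("GPN3", "PDRG1", 0.6)]
    network = build_interaction_network(edges)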