    Updates in metabolomics tools and resources: 2014-2015

    Data processing and interpretation represent the most challenging and time-consuming steps in high-throughput metabolomic experiments, regardless of the analytical platform (MS- or NMR spectroscopy-based) used for data acquisition. Improved instrumentation in metabolomics generates increasingly complex datasets, creating the need for more and better processing and analysis software and in silico approaches to understand the resulting data. However, a comprehensive source of information describing the utility of the most recently developed and released metabolomics resources—in the form of tools, software, and databases—is currently lacking. Thus, here we provide an overview of freely available, open-source tools, algorithms, and frameworks to make both new and established metabolomics researchers aware of recent developments, in an attempt to advance and facilitate data processing workflows in metabolomics research. The major topics include tools and resources for data processing, data annotation, and data visualization in MS- and NMR-based metabolomics. Most of the tools described in this review are dedicated to untargeted metabolomics workflows; however, some more specialized tools are described as well. All tools and resources described, including their analytical and computational platform dependencies, are summarized in an overview table.

    Proteomics in cardiovascular disease: recent progress and clinical implication and implementation

    Introduction: Although multiple efforts have been initiated to shed light on the molecular mechanisms underlying cardiovascular disease, it remains one of the major causes of death worldwide. Proteomic approaches are unequivocally powerful tools that may provide deeper understanding of the molecular mechanisms associated with cardiovascular disease and improve its management. Areas covered: Cardiovascular proteomics is an emerging field, and significant progress has been made during the past few years with the aim of defining novel candidate biomarkers and obtaining insight into molecular pathophysiology. To summarize recent progress in the field, a literature search was conducted in PubMed and Web of Science. As a result, 704 studies from PubMed and 320 studies from Web of Science were retrieved. Findings from original research articles using proteomics technologies for the discovery of biomarkers for cardiovascular disease in humans are summarized in this review. Expert commentary: Proteins associated with cardiovascular disease represent pathways in inflammation, wound healing and coagulation, proteolysis and extracellular matrix organization, and handling of cholesterol and LDL. Future research in the field should aim to increase proteome coverage and to integrate proteomics with other omics data to facilitate both drug development and clinical implementation of findings.

    Distributed computing and data storage in proteomics: many hands make light work, and a stronger memory

    Modern-day proteomics generates ever more complex data, causing the requirements for storing and processing such data to outgrow the capacity of most desktop computers. To cope with the increased computational demands, distributed architectures have gained substantial popularity in recent years. In this review, we provide an overview of current techniques for distributed computing, along with examples of how these techniques are currently being employed in the field of proteomics. We thus underline the benefits of distributed computing in proteomics, while also pointing out the potential issues and pitfalls involved.
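
    The review surveys distributed-computing techniques rather than any single tool, so the following is a purely illustrative sketch (assuming Python, with toy data and a placeholder processing step standing in for real spectra and real analysis) of the core pattern of spreading work across processes:

        # Minimal sketch: spread per-run spectrum processing across worker processes.
        # The toy data and the summing step are placeholders for real spectra and a
        # genuinely expensive analysis step; nothing here is a specific tool's API.
        from concurrent.futures import ProcessPoolExecutor

        def total_ion_current(run):
            """Sum all peak intensities in one toy run (stand-in for a costly step)."""
            run_id, spectra = run
            return run_id, sum(sum(intensities) for intensities in spectra)

        if __name__ == "__main__":
            # Three toy "runs", each a list of spectra given as intensity lists.
            runs = [("run_01", [[10.0, 250.0], [80.0]]),
                    ("run_02", [[5.0, 5.0, 120.0]]),
                    ("run_03", [[300.0], [42.0, 7.0]])]
            with ProcessPoolExecutor(max_workers=3) as pool:
                for run_id, tic in pool.map(total_ion_current, runs):
                    print(run_id, tic)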

    Immunoreactivity of anti-gelsolin antibodies: implications for biomarker validation

    Background: Proteomic-based discovery of biomarkers for disease has recently come under scrutiny for a variety of issues; one prominent issue is the lack of orthogonal validation for biomarkers following discovery. Validation by ELISA or Western blot requires the use of antibodies, which for many potential biomarkers are under-characterized and may lead to misleading or inconclusive results. Gelsolin is one such biomarker candidate in HIV-associated neurocognitive disorders. Methods: Samples from human (plasma and CSF), monkey (plasma), monocyte-derived macrophages (supernatants), and commercial gelsolin (recombinant and purified) were quantitated using Western blot assays and a variety of anti-gelsolin antibodies. Plasma and CSF were used for immunoaffinity purification of gelsolin, which was identified in eight bands by tandem mass spectrometry. Results: Immunoreactivity of gelsolin within samples and between antibodies varied greatly. In several instances, multiple bands (corresponding to different gelsolin forms) were identified by one antibody but not by another. Moreover, in some instances immunoreactivity depended on the source of gelsolin, e.g. plasma or CSF. Additionally, some smaller forms of gelsolin were identified by mass spectrometry but not by any antibody. Recombinant gelsolin was used as a reference sample. Conclusions: Orthogonal validation using specific monoclonal or polyclonal antibodies may reject biomarker candidates from further studies based on misleading or even false quantitation of proteins that circulate in various forms in body fluids.

    MetaboAnalyst 5.0: narrowing the gap between raw spectra and functional insights.

    Since its first release over a decade ago, the MetaboAnalyst web-based platform has become widely used for comprehensive metabolomics data analysis and interpretation. Here we introduce MetaboAnalyst version 5.0, aiming to narrow the gap from raw data to functional insights for global metabolomics based on high-resolution mass spectrometry (HRMS). Three modules have been developed to help achieve this goal, including: (i) an LC-MS Spectra Processing module which offers an easy-to-use pipeline that can perform automated parameter optimization and resumable analysis to significantly lower the barriers to LC-MS1 spectra processing; (ii) a Functional Analysis module which expands the previous MS Peaks to Pathways module to allow users to intuitively select any peak groups of interest and evaluate their enrichment of potential functions as defined by metabolic pathways and metabolite sets; (iii) a Functional Meta-Analysis module to combine multiple global metabolomics datasets obtained under complementary conditions or from similar studies to arrive at comprehensive functional insights. There are many other new functions, including weighted joint-pathway analysis, data-driven network analysis, batch effect correction, merging technical replicates, improved compound name matching, and more. The web interface, graphics and underlying codebase have also been refactored to improve performance and user experience. At the end of an analysis session, users can now easily switch to other compatible modules for more streamlined data analysis. MetaboAnalyst 5.0 is freely available at https://www.metaboanalyst.ca.
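
    As a deliberately simplified illustration of two of the smaller functions named above, the toy Python sketch below averages technical replicates and applies a crude per-batch mean-centering as a stand-in for batch effect correction; it is not MetaboAnalyst's code or API, and the feature names, samples and intensities are invented:

        # Toy illustration (not MetaboAnalyst code): average technical replicates,
        # then apply a very crude batch correction by per-batch mean-centering.
        import pandas as pd

        # Peak intensities: rows = injections, columns = metabolite features (invented).
        data = pd.DataFrame(
            {"feat_a": [100., 104., 200., 196., 130., 126., 230., 234.],
             "feat_b": [50., 52., 20., 22., 70., 68., 40., 38.]},
            index=["s1_r1", "s1_r2", "s2_r1", "s2_r2",
                   "s3_r1", "s3_r2", "s4_r1", "s4_r2"])
        sample_of_injection = ["s1", "s1", "s2", "s2", "s3", "s3", "s4", "s4"]

        # 1) Merge technical replicates: one averaged profile per biological sample.
        merged = data.groupby(sample_of_injection).mean()

        # 2) Crude batch correction: s1/s2 were acquired in batch 1, s3/s4 in batch 2;
        #    shift each batch so its mean matches the overall mean (real tools use
        #    better-behaved statistical models than this).
        batch = pd.Series({"s1": "b1", "s2": "b1", "s3": "b2", "s4": "b2"})
        corrected = merged - merged.groupby(batch).transform("mean") + merged.mean()
        print(corrected)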

    Corra: Computational framework and tools for LC-MS discovery and targeted mass spectrometry-based proteomics

    BACKGROUND: Quantitative proteomics holds great promise for identifying proteins that are differentially abundant between populations representing different physiological or disease states. A range of computational tools is now available for both isotopically labeled and label-free liquid chromatography mass spectrometry (LC-MS) based quantitative proteomics. However, these tools are generally not comparable to each other in terms of functionality, user interfaces, and information input/output, and they do not readily facilitate appropriate statistical data analysis. These limitations, along with the array of choices, present a daunting prospect for biologists, and other researchers not trained in bioinformatics, who wish to use LC-MS-based quantitative proteomics. RESULTS: We have developed Corra, a computational framework and tools for discovery-based LC-MS proteomics. Corra extends and adapts existing algorithms used for LC-MS-based proteomics, as well as statistical algorithms originally developed for microarray data analysis, making them appropriate for LC-MS data. Corra also adapts software engineering technologies (e.g. Google Web Toolkit, distributed processing) so that computationally intense data processing and statistical analyses can run on a remote server, while the user controls and manages the process from their own computer via a simple web interface. Corra also allows the user to output significantly differentially abundant LC-MS-detected peptide features in a form compatible with subsequent sequence identification via tandem mass spectrometry (MS/MS). We present two case studies to illustrate the application of Corra to commonly performed LC-MS-based biological workflows: a pilot biomarker discovery study of glycoproteins isolated from human plasma samples relevant to type 2 diabetes, and a study in yeast to identify in vivo targets of the protein kinase Ark1 via phosphopeptide profiling. CONCLUSION: The Corra computational framework leverages computational innovation to enable biologists and other researchers to process, analyze and visualize LC-MS data, tasks that would otherwise require a complex and far less user-friendly suite of tools. Corra enables appropriate statistical analyses, with controlled false-discovery rates, ultimately to inform subsequent targeted identification of differentially abundant peptides by MS/MS. For the user not trained in bioinformatics, Corra represents a complete, customizable, free and open-source computational platform enabling LC-MS-based proteomic workflows and, as such, addresses an unmet need in the LC-MS proteomics field.
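
    To give a feel for the microarray-style statistics mentioned above, here is a generic sketch (assuming Python with NumPy/SciPy and simulated intensities) of testing each peptide feature for differential abundance and controlling the false-discovery rate with the Benjamini-Hochberg procedure; it illustrates the general approach only and is not Corra's code:

        # Generic illustration (not Corra's code): test each LC-MS peptide feature for
        # differential abundance between two groups and control the false-discovery
        # rate with the Benjamini-Hochberg procedure, as microarray-style analyses do.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_features = 200
        case = rng.normal(10.0, 1.0, size=(n_features, 6))       # 6 case runs
        control = rng.normal(10.0, 1.0, size=(n_features, 6))    # 6 control runs
        case[:20] += 2.0                                          # spike in 20 true changes

        # Per-feature two-sample t-test on (simulated) log-scale intensities.
        _, pvals = stats.ttest_ind(case, control, axis=1)

        def benjamini_hochberg(p):
            """Return BH-adjusted p-values (q-values) in the original feature order."""
            p = np.asarray(p)
            order = np.argsort(p)
            ranked = p[order] * len(p) / np.arange(1, len(p) + 1)
            ranked = np.minimum.accumulate(ranked[::-1])[::-1]    # enforce monotonicity
            q = np.empty_like(p, dtype=float)
            q[order] = np.clip(ranked, 0, 1)
            return q

        qvals = benjamini_hochberg(pvals)
        print("features significant at 5% FDR:", int((qvals < 0.05).sum()))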

    The Path to Clinical Proteomics Research: Integration of Proteomics, Genomics, Clinical Laboratory and Regulatory Science

    Better biomarkers are urgently needed for cancer detection, diagnosis, and prognosis. While the genomics community is making significant advances in understanding the molecular basis of disease, proteomics will delineate the functional units of the cell, namely proteins, together with their intricate interaction networks and the signaling pathways underlying disease. Great progress has been made in characterizing thousands of proteins qualitatively and quantitatively in complex biological systems by utilizing multi-dimensional sample fractionation strategies, mass spectrometry and protein microarrays. Comparative/quantitative analysis of high-quality clinical biospecimens (e.g., tissue and biofluids) across the human cancer proteome landscape has the potential to reveal protein/peptide biomarkers responsible for this disease through their altered expression levels, post-translational modifications, and different forms of protein variants. Despite technological advances in proteomics, major hurdles still exist in every step of the biomarker development pipeline. The National Cancer Institute's Clinical Proteomic Technologies for Cancer initiative (NCI-CPTC) has taken a critical step to close the gap between biomarker discovery and qualification by introducing a pre-clinical "verification" stage in the pipeline, partnering with clinical laboratory organizations to develop and implement common standards, and developing regulatory science documents with the US Food and Drug Administration to educate the proteomics community on analytical evaluation requirements for multiplex assays, in order to ensure the safety and effectiveness of these tests for their intended use.

    OpenMS - A Framework for Quantitative HPLC/MS-Based Proteomics

    In this talk we describe the freely available software library OpenMS, which is currently under development at the Freie Universität Berlin and the Eberhard Karls Universität Tübingen. We give an overview of the goals and problems in differential proteomics with HPLC/MS and then describe in detail the approaches for signal processing, peak detection and data reduction currently implemented in OpenMS. After this, we describe methods to identify the differential expression of peptides and propose strategies to avoid MS/MS identification of peptides of interest. We give an overview of the capabilities and design principles of OpenMS and demonstrate its ease of use. Finally, we describe projects in which OpenMS has already been or will be deployed, thereby demonstrating its versatility.
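
    To make the peak-detection step concrete, here is a toy centroiding example (assuming Python with NumPy/SciPy and a synthetic profile-mode spectrum); it illustrates the general idea of picking local maxima above the noise floor and is not OpenMS's algorithm or API:

        # Toy peak picking (centroiding) on a synthetic profile-mode spectrum.
        # This is a conceptual illustration only, not OpenMS's algorithm or API.
        import numpy as np
        from scipy.signal import find_peaks

        mz = np.linspace(400.0, 401.0, 2000)                        # m/z axis
        signal = (1000 * np.exp(-((mz - 400.25) / 0.005) ** 2)      # two Gaussian peaks
                  + 600 * np.exp(-((mz - 400.58) / 0.005) ** 2))
        noisy = signal + np.random.default_rng(1).normal(0.0, 5.0, mz.size)

        # Keep only local maxima that rise clearly above the noise floor.
        idx, _ = find_peaks(noisy, height=50, prominence=50)
        for i in idx:
            print(f"centroid m/z {mz[i]:.4f}, intensity {noisy[i]:.0f}")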