2 research outputs found

    What is Normalization? The Strategies Employed in Top-Down and Bottom-Up Proteome Analysis Workflows.

    The accurate quantification of changes in protein abundance is one of the main applications of proteomics. Accuracy can be compromised by bias and error introduced at many points in the experimental process, and normalization strategies are crucial for overcoming this bias and returning the sample to its regular biological condition, or normal state. Much work has been published on normalizing data post-acquisition, with many algorithms and statistical processes available. However, many other sources of bias arising during experimental design and sample handling remain unaddressed. This article aims to cast light on these potential sources of bias and on where normalization could be applied to return the sample to its normal state. Throughout, we suggest solutions where possible, but in some cases none are available. We therefore see this article as a starting point for discussion of the definition of normalization and the issues surrounding the concept as it applies to the proteomic analysis of biological samples. Specifically, we discuss a wide range of normalization techniques that can be applied at each stage of the sample preparation and analysis process.
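
    To make the post-acquisition case concrete, below is a minimal sketch of median normalization, one common example of the many published algorithms the abstract alludes to (it is not a method prescribed by the article). The intensity matrix, its dimensions, and the simulated loading bias are all hypothetical, and the sketch assumes NumPy is available.

    import numpy as np

    # Hypothetical log2 protein-intensity matrix: rows = proteins, columns = samples.
    # Shape and values are illustrative only, not data from the article.
    rng = np.random.default_rng(0)
    intensities = rng.normal(loc=20.0, scale=2.0, size=(1000, 6))
    intensities[:, 3:] += 0.8  # simulate a systematic loading bias in samples 4-6

    def median_normalize(x):
        """Shift each sample (column) so that all samples share the same median.

        Assumes most proteins are unchanged between samples, so differences in
        column medians reflect technical bias (e.g. unequal sample loading).
        """
        col_medians = np.median(x, axis=0)
        return x - col_medians + np.median(col_medians)

    normalized = median_normalize(intensities)
    print(np.median(intensities, axis=0).round(2))  # biased column medians
    print(np.median(normalized, axis=0).round(2))   # aligned column medians

    Note that this corrects only a column-wise technical offset; the biases the article highlights in experimental design and sample handling occur upstream of acquisition and cannot be recovered by such post-acquisition adjustments.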

    Advanced bioinformatics methods for practical applications in proteomics

    Mass spectrometry (MS)-based proteomics has undergone rapid advancements in recent years, creating challenging problems for bioinformatics. We focus on four aspects where bioinformatics plays a crucial role (and where proteomics is needed for clinical application): peptide-spectrum matching (PSM) under the new data-independent acquisition (DIA) paradigm, resolving missing proteins (MPs), dealing with biological and technical heterogeneity in data, and statistical feature selection (SFS). DIA is a brute-force strategy that provides greater width and depth but, because it indiscriminately captures spectra such that signal from multiple peptides is mixed, obtaining good PSMs is difficult. We consider two strategies: simplification of DIA spectra to pseudo-data-dependent acquisition spectra or, alternatively, brute-force search of each DIA spectrum against known reference libraries. The MP problem arises when proteins are never (or inconsistently) detected by MS. When a protein is observed in at least one sample, imputation methods can be used to estimate its approximate expression level. If it is never observed at all, network/protein complex-based contextualization provides an independent prediction platform. Data heterogeneity is a difficult problem with two dimensions: technical (batch effects), which should be removed, and biological (including demography and disease subpopulations), which should be retained. Simple normalization is seldom sufficient, while batch effect-correction algorithms may introduce errors. Batch effect-resistant normalization methods are a viable alternative. Finally, SFS is vital for practical applications. While many methods exist, there is no single best method, and both upstream processing (e.g. normalization) and downstream processing (e.g. multiple-testing correction) are performance confounders. We also discuss signal detection when class effects are weak.
    Funding: a Singapore Ministry of Education tier-2 grant (MOE2012-T2-1-061) to L. W.
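
    As an illustration of the SFS and multiple-testing points above, the sketch below runs a per-protein two-sample t-test followed by a Benjamini-Hochberg FDR adjustment. This is a generic pipeline, not the authors' specific method; the expression matrix, class sizes, and 5% threshold are hypothetical, and the sketch assumes NumPy and SciPy are available.

    import numpy as np
    from scipy import stats

    # Hypothetical log2 expression matrix: rows = proteins, columns = samples.
    # First 10 samples are class A, last 10 are class B (illustrative only).
    rng = np.random.default_rng(1)
    expr = rng.normal(size=(500, 20))
    expr[:25, 10:] += 1.5  # spike a class effect into the first 25 proteins

    # Upstream choice: a per-protein two-sample t-test (one SFS method among many).
    t_stat, p_vals = stats.ttest_ind(expr[:, :10], expr[:, 10:], axis=1)

    def benjamini_hochberg(p):
        """Benjamini-Hochberg FDR adjustment: the downstream multiple-testing
        correction the abstract flags as a performance confounder."""
        n = len(p)
        order = np.argsort(p)
        scaled = p[order] * n / np.arange(1, n + 1)
        # Enforce monotonicity from the largest p-value downward.
        scaled = np.minimum.accumulate(scaled[::-1])[::-1]
        adjusted = np.empty(n)
        adjusted[order] = np.clip(scaled, 0.0, 1.0)
        return adjusted

    q_vals = benjamini_hochberg(p_vals)
    print(f"{(q_vals < 0.05).sum()} proteins selected at 5% FDR")

    Swapping either step (a different test upstream, a different correction downstream) changes which proteins are selected, which is exactly the confounding effect the abstract warns about.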
