
    Selection of adequate optimization criteria in chromatographic separations

    Computer-assisted optimization of chromatographic separations is still a fruitful activity. In fact, advances in computerized data handling should make the application of systematic optimization strategies much easier. However, in most contemporary applications, the optimization criterion is not considered to be a key issue (Vanbel, J Pharm Biomed, 21:603–610, 1999). In this paper, an update on the importance of selecting adequate criteria in chromatographic separations is presented.
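
    As context for what an "optimization criterion" can look like in practice, the sketch below (not from the paper) scores a hypothetical separation by the worst resolution among adjacent peak pairs, one commonly used criterion; the peak times and widths are made-up values.

        # Minimal sketch (not from the paper): one common optimization criterion is the
        # minimum resolution Rs over all adjacent peak pairs, Rs = 2*(t2 - t1)/(w1 + w2).
        # Peak retention times and baseline widths below are hypothetical.

        def resolution(t1, w1, t2, w2):
            """Resolution between two adjacent peaks (t = retention time, w = baseline width)."""
            return 2.0 * (t2 - t1) / (w1 + w2)

        def min_resolution_criterion(peaks):
            """Worst-pair criterion: the smallest resolution among adjacent peaks."""
            peaks = sorted(peaks)  # sort by retention time
            return min(resolution(t1, w1, t2, w2)
                       for (t1, w1), (t2, w2) in zip(peaks, peaks[1:]))

        # Hypothetical separation: three peaks (retention time in min, baseline width in min)
        peaks = [(4.2, 0.30), (4.9, 0.32), (6.1, 0.35)]
        print(min_resolution_criterion(peaks))  # ~2.26; values >= 1.5 indicate baseline separation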

    Baitmet, a computational approach for GC–MS library-driven metabolite profiling

    Current computational tools for gas chromatography–mass spectrometry (GC–MS) metabolomics profiling do not focus on metabolite identification, which remains the bottleneck of the entire workflow and still relies on manual data reviewing. The advent of metabolomics has fostered the development of public metabolite repositories containing mass spectra and retention indices, two orthogonal properties needed for metabolite identification. Such libraries can be used for library-driven compound profiling of the large datasets produced in metabolomics, a complementary approach to current GC–MS non-targeted data analysis solutions that can help assess metabolite identities more efficiently. Results: This paper introduces Baitmet, an integrated open-source computational tool written in R enclosing a complete workflow to perform high-throughput library-driven GC–MS profiling of complex samples. Baitmet's capabilities were assayed in a metabolomics study involving 182 human serum samples, in which a set of 61 metabolites was profiled given a reference library. Conclusions: Baitmet allows high-throughput and wide-scope interrogation of the metabolic composition of complex samples analyzed by GC–MS via freely available spectral data.
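
    Baitmet itself is an R package, so the sketch below is only a language-agnostic illustration (written in Python) of the library-driven idea the abstract describes: candidate peaks are matched against a reference library using the two orthogonal properties, a retention-index window and a spectral similarity score. Function names, tolerances, and the data layout are assumptions, not Baitmet's API.

        # Hedged sketch of the general library-driven idea (not Baitmet's implementation).
        # Each library entry carries a retention index (RI) and a reference spectrum;
        # candidate peaks are kept if their RI falls inside a tolerance window and their
        # spectrum is similar enough to the reference (cosine score).
        import numpy as np

        def cosine_score(spec_a, spec_b):
            """Spectral similarity between two intensity vectors on a common m/z grid."""
            a, b = np.asarray(spec_a, float), np.asarray(spec_b, float)
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def profile_against_library(peaks, library, ri_tol=15.0, min_score=0.7):
            """peaks: list of dicts with 'ri' and 'spectrum'; library: dict name -> (ri, spectrum)."""
            hits = []
            for name, (ref_ri, ref_spec) in library.items():
                for peak in peaks:
                    if abs(peak["ri"] - ref_ri) <= ri_tol:                  # orthogonal property 1: RI
                        score = cosine_score(peak["spectrum"], ref_spec)    # orthogonal property 2: spectrum
                        if score >= min_score:
                            hits.append((name, peak["ri"], round(score, 3)))
            return hits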

    Bayesian approach for peak detection in two-dimensional chromatography

    A new method for peak detection in two-dimensional chromatography is presented. In a first step, the method starts with a conventional one-dimensional peak detection algorithm to detect modulated peaks. In a second step, a sophisticated algorithm is constructed to decide which of the individual one-dimensional peaks originated from the same compound and should therefore be merged into a two-dimensional peak. The merging algorithm is based on Bayesian inference. The user sets prior information about certain parameters (e.g., second-dimension retention time variability, first-dimension band broadening, chromatographic noise). On the basis of these priors, the algorithm calculates the probability of myriads of peak arrangements (i.e., ways of merging one-dimensional peaks) and finds which of them holds the highest value. Uncertainty in each parameter can be accounted for by conveniently adapting its probability distribution function, which in turn may change the final decision on the most probable peak arrangement. It has been demonstrated that the Bayesian approach presented in this paper follows the chromatographer's intuition. The algorithm has been applied and tested with LC × LC and GC × GC data and takes around 1 min to process chromatograms with several thousand peaks.
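
    A minimal sketch of the merging step described above, under simplified assumptions (not the authors' code): two modulated one-dimensional peaks are assigned to the same compound when their second-dimension retention times agree within a Gaussian prior on the allowed shift, compared against a flat "unrelated peaks" alternative. All parameter values are illustrative.

        # Hedged sketch: posterior probability that two modulated 1-D peaks from consecutive
        # modulations stem from the same compound, based only on their 2nd-dimension times.
        from math import exp, sqrt, pi

        def same_compound_probability(t2_a, t2_b, sigma_shift=0.05, modulation_period=5.0,
                                      prior_same=0.5):
            """t2_a, t2_b: second-dimension retention times (s) of two candidate 1-D peaks.
            sigma_shift: prior on how much the 2nd-D time may drift between modulations.
            modulation_period: length of the 2nd-D axis (s), used for the 'unrelated' model."""
            d = t2_b - t2_a
            like_same = exp(-0.5 * (d / sigma_shift) ** 2) / (sigma_shift * sqrt(2 * pi))
            like_diff = 1.0 / modulation_period   # unrelated peaks: uniform over the 2nd-D axis
            return prior_same * like_same / (prior_same * like_same + (1 - prior_same) * like_diff)

        print(same_compound_probability(2.31, 2.33))  # small shift -> probability close to 1
        print(same_compound_probability(2.31, 3.10))  # large shift -> probability close to 0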

    Probabilistic model for untargeted peak detection in LC-MS using Bayesian statistics

    We introduce a novel Bayesian probabilistic peak detection algorithm for liquid chromatography–mass spectrometry (LC-MS). The final probabilistic result allows the user to make a final decision about which points in a chromatogram are affected by a chromatographic peak and which ones are only affected by noise. The use of probabilities contrasts with the traditional approach, in which a binary answer is given based on a threshold. By contrast, with the Bayesian peak detection presented here, the probability values can be further propagated into other preprocessing steps, increasing (or decreasing) the importance of chromatographic regions in the final results. The present work is based on the use of the statistical theory of component overlap of Davis and Giddings (Davis, J. M.; Giddings, J. C. Anal. Chem. 1983, 55, 418–424) as the prior probability in the Bayesian formulation. The algorithm was tested on LC-MS Orbitrap data and was able to successfully distinguish chemical noise from actual peaks without any data preprocessing.
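
    The sketch below illustrates, under illustrative assumptions, how a statistical-overlap-style prior can enter such a Bayesian formulation: the prior that a region contains a peak comes from the expected peak saturation, and the likelihoods compare the observed intensity under "peak" and "noise-only" models. It is not the published algorithm, and all numbers are made up.

        # Hedged sketch of the probabilistic idea: prior from expected peak saturation
        # (in the spirit of Davis & Giddings), likelihoods from simple intensity models.
        from math import exp, sqrt, pi

        def gaussian_pdf(x, mu, sigma):
            return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

        def peak_probability(intensity, expected_peaks=150, peak_capacity=1500,
                             noise_mu=100.0, noise_sigma=30.0,
                             peak_mu=1000.0, peak_sigma=400.0):
            prior_peak = expected_peaks / peak_capacity   # prior that this region holds a peak
            l_peak = gaussian_pdf(intensity, peak_mu, peak_sigma)
            l_noise = gaussian_pdf(intensity, noise_mu, noise_sigma)
            num = prior_peak * l_peak
            return num / (num + (1 - prior_peak) * l_noise)

        print(peak_probability(120.0))   # intensity consistent with noise -> low probability
        print(peak_probability(900.0))   # intensity consistent with a peak -> high probability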

    Use of Bayesian Statistics for Pairwise Comparison of Megavariate Data Sets: Extracting Meaningful Differences between GCxGC-MS Chromatograms Using Jensen-Shannon Divergence

    A new method for comparison of GCxGC-MS chromatograms is proposed. The method is aimed at spotting the differences between two GCxGC-MS injections in order to highlight compositional differences between two samples or to flag compounds present in only one of them. The method is based on the Jensen-Shannon divergence (JS) combined with Bayesian hypothesis testing. To make the method robust against misalignment in both time dimensions, a moving-window approach is proposed. Using a Bayesian framework, we provide a probabilistic visual map (i.e., a log likelihood ratio map) of the significant differences between two data sets, thereby avoiding a deterministic (i.e., "yes" or "no") decision. This approach proved to be a versatile tool in GCxGC-MS data analysis, especially when the differences are embedded in a complex matrix. We tested the approach by spotting contamination in diesel samples.
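
    A minimal sketch of the moving-window Jensen-Shannon comparison on two aligned one-dimensional traces; the window length, step size, and the omission of the Bayesian hypothesis-testing layer are simplifying assumptions, not the published implementation.

        # Hedged sketch: each window of the two traces is normalized to a probability
        # distribution and scored with the Jensen-Shannon divergence; large values flag
        # regions where the two injections differ.
        import numpy as np

        def js_divergence(p, q, eps=1e-12):
            """Jensen-Shannon divergence (natural log) between two discrete distributions."""
            p = np.asarray(p, float) + eps
            q = np.asarray(q, float) + eps
            p, q = p / p.sum(), q / q.sum()
            m = 0.5 * (p + q)
            kl = lambda a, b: float(np.sum(a * np.log(a / b)))
            return 0.5 * kl(p, m) + 0.5 * kl(q, m)

        def moving_window_js(chrom_a, chrom_b, window=50, step=25):
            """Slide a window over two aligned intensity traces and score each position."""
            scores = []
            for start in range(0, min(len(chrom_a), len(chrom_b)) - window + 1, step):
                scores.append((start, js_divergence(chrom_a[start:start + window],
                                                    chrom_b[start:start + window])))
            return scores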

    A New Bayesian Approach for Estimating the Presence of a Suspected Compound in Routine Screening Analysis

    A novel method for compound identification in liquid chromatography–high resolution mass spectrometry (LC-HRMS) is proposed. The method, based on Bayesian statistics, accommodates all the uncertainties involved, from instrumentation up to data analysis, in a single model yielding the probability that the compound of interest is present or absent in the sample. This approach differs from classical methods in two ways. First, it is probabilistic (instead of deterministic); hence, it computes the probability that the compound is (or is not) present in a sample. Second, it addresses the hypothesis "the compound is present", as opposed to answering the question "the compound feature is present". This second difference implies a shift in the way data analysis is tackled, since the probability of interfering compounds (i.e., isomers and isobaric compounds) is also taken into account.
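
    A minimal sketch of the presence/absence reasoning, with illustrative probabilities (not values from the paper): Bayes' rule combines a prior that the suspect is in the sample with the chance of seeing a matching feature when it is truly present versus when the match is caused by an interfering isomer/isobaric compound or noise.

        # Hedged sketch of the reasoning, not the authors' model; all numbers are assumptions.
        def compound_presence_probability(feature_detected,
                                          prior_present=0.1,
                                          p_feature_if_present=0.95,
                                          p_feature_if_absent=0.02):
            """P(compound present | a matching feature was or was not observed)."""
            if feature_detected:
                num = prior_present * p_feature_if_present
                den = num + (1 - prior_present) * p_feature_if_absent
            else:
                num = prior_present * (1 - p_feature_if_present)
                den = num + (1 - prior_present) * (1 - p_feature_if_absent)
            return num / den

        print(compound_presence_probability(True))    # matching feature found -> ~0.84
        print(compound_presence_probability(False))   # no matching feature    -> ~0.006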