    Overlooked persistent and bioaccumulative pollutants in Lake Geneva: their measurement, occurrence, and concentration distribution in the water column and sediments

    Persistent and bioaccumulative pollutants (PBPs) are continually introduced into the environment as part of the massive ongoing chemical production that began several decades ago. PBPs span several chemical families, including industrial compounds, personal care products, agricultural chemicals, and pharmaceuticals. Most PBPs are neutral, and more than 60% of them are halogenated. However, the concentration distributions in the environment, partitioning properties, and environmental fate and behavior of many PBPs have not been investigated. Recent studies have reported on the lack of information regarding the occurrence, fate, and behavior of these PBPs in the environment (Howard and Muir 2006, 2010, 2013). The authors emphasized the need for measurements of these PBPs in different environmental compartments in order to better understand their environmental fate and behavior.

    In this thesis, I investigated the occurrence of legacy and novel PBPs in a deep aquatic system, Lake Geneva. Measuring these compounds in environmental samples is challenging because of their trace-level concentrations and the complexity of the samples, which manifests as matrix effects. I employed comprehensive two-dimensional gas chromatography (GC×GC) to tackle these challenges. Throughout the thesis I use "novel PBPs" to mean PBPs that are neutral, organic, non-legacy, and that have not been measured in the environment; this terminology is similar to that adopted by Howard and Muir (2010). I report results for several water column and sediment samples that were analyzed for a suite of 69 PBPs, including novel PBPs, PBDEs, PCBs, OCPs, and halogenated benzenes. This led to the first reported detection and quantification of several novel PBPs (i.e., 4-bromobiphenyl (4BBP), tribromobenzene (TBB), and pentachlorothiophenol (PCTP)) in a lake environment. Results for several legacy PBPs, including PBDEs and PCBs, are also reported.

    In Chapter 2 of the thesis I developed an analytical protocol for the detection, quantification, and identity confirmation of trace-level PBPs in environmental samples. This method took advantage of the separation power of GC×GC combined with highly sensitive detectors, including electron capture negative chemical ionization time-of-flight mass spectrometry (ENCI-TOFMS), the micro electron capture detector (µECD), and the flame ionization detector (FID). Chapter 2 evaluates the effectiveness of GC×GC-ECD for the detection and quantification of trace-level PBPs in the lake environment. In particular, I investigate automated baseline correction and peak delineation algorithms for their ability to remove matrix effects and quantify trace-level PBPs in complex environmental samples. Using a suite of chemometric tests, I systematically assessed different baseline correction and peak delineation algorithms for the confidence and accuracy of target analyte quantification. The chemometric tests showed the crucial importance of the baseline correction algorithm for accurate peak integration: an aggressive baseline correction method systematically produced the best results, indicating better matrix effect removal. The analytical protocol was also validated using a certified reference material. The validated procedure led to the successful detection and quantification of 18 trace-level target analytes, including 7 PAHs in a light diesel fuel and 11 chlorinated hydrocarbons.
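
    The baseline correction step discussed above can be sketched in code. The snippet below uses asymmetric least squares (AsLS) smoothing, a common chemometric baseline estimator, applied slice by slice to a 2D chromatogram. The thesis does not name this exact algorithm, so treat it as an illustrative stand-in; all names and parameter values are assumptions.

    ```python
    # Illustrative AsLS (Eilers-type) baseline correction; not the
    # specific algorithm evaluated in the thesis.
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
        """Estimate a baseline for a single detector trace y.
        lam: smoothness penalty (larger = stiffer baseline);
        p: asymmetry (small p keeps the baseline under the peaks)."""
        L = len(y)
        D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(L, L - 2))
        w = np.ones(L)
        for _ in range(n_iter):
            W = sparse.spdiags(w, 0, L, L)
            Z = (W + lam * D @ D.T).tocsc()
            z = spsolve(Z, w * y)
            # reweight: points above the fit (peaks) get small weight
            w = p * (y > z) + (1 - p) * (y < z)
        return z

    # A GCxGC chromatogram can be corrected one second-dimension slice
    # at a time before peak delineation and integration.
    chrom = np.random.rand(300, 120)  # placeholder 2D chromatogram
    corrected = chrom - np.apply_along_axis(als_baseline, 1, chrom)
    ```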

    Statistical variable selection: an alternative prioritization strategy during the nontarget analysis of LC-HR-MS data

    Liquid chromatography coupled to high-resolution mass spectrometry (LC-HR-MS) has been one of the main analytical tools for the analysis of small polar organic pollutants in the environment. LC-HR-MS typically produces a large amount of data for a single chromatogram, so the analyst is required to prioritize features prior to nontarget structural elucidation. In the present study, we combined F-ratio statistical variable selection with an apex detection algorithm in order to perform prioritization in data sets produced via LC-HR-MS. The approach was validated using semisynthetic data, a combination of real environmental data and the artificially added signals of 31 alkanes. We evaluated the performance of this method as a function of four false detection probabilities, namely 0.01, 0.02, 0.05, and 0.1%. We generated 100 different semisynthetic data sets for each F-ratio and evaluated each data set using this method. This design of experiment created a population of 30 000 true positives and 32 000 true negatives for each F-ratio, which was considered sufficiently large to fully validate this method for the analysis of LC-HR-MS data. The effects of both the F-ratio and the signal-to-noise ratio (S/N) on the performance of the suggested approach were evaluated through normalized statistical tests. We also compared this method to the pixel-by-pixel and peak-list approaches. More than 92% of the features present in the final feature list produced via the F-ratio method were also present in the conventional peak list generated by MZmine. However, this method was the only evaluated approach that successfully classified the samples and thus achieved prioritization. The application potential and limitations of the suggested method are discussed.
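
    The core of F-ratio variable selection can be sketched as a one-way ANOVA applied feature-wise: features whose between-class variance significantly exceeds their within-class variance at a chosen false detection probability are retained. The sketch below is a minimal illustration under that reading; the paper's binning, alignment, and apex detection steps are omitted, and all names and the toy data are hypothetical.

    ```python
    # Minimal feature-wise F-ratio (one-way ANOVA) selection sketch;
    # the paper's binning, alignment, and apex detection are omitted.
    import numpy as np
    from scipy.stats import f as f_dist

    def f_ratio_select(X, labels, alpha=0.001):
        """X: (samples x features) intensities; labels: class per sample.
        Returns a mask of features kept at false detection probability alpha."""
        labels = np.asarray(labels)
        classes = np.unique(labels)
        k, n = len(classes), X.shape[0]
        grand = X.mean(axis=0)
        ss_between = sum(
            (labels == c).sum() * (X[labels == c].mean(axis=0) - grand) ** 2
            for c in classes)
        ss_within = sum(
            ((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum(axis=0)
            for c in classes)
        f_ratio = (ss_between / (k - 1)) / (ss_within / (n - k))
        f_crit = f_dist.ppf(1.0 - alpha, k - 1, n - k)
        return f_ratio > f_crit

    # e.g. two sample classes, alpha = 0.1% as in the paper's largest setting
    X = np.random.rand(20, 500)
    mask = f_ratio_select(X, ["a"] * 10 + ["b"] * 10, alpha=0.001)
    ```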

    Examining the Relevance of the Microplastic-Associated Additive Fraction in Environmental Compartments

    Plastic contamination is ubiquitous in the environment and has been related to increasing global plastic usage since the 1950s. Considering the omnipresence of additives in plastics, the risk posed by this contamination is related not only to the physical effects of plastic particles but also to their additive content. Until now, most routine environmental monitoring programs involving additives have not considered the fraction of these additives that remains associated with the plastic into which they were incorporated during production. Understanding environmental additive speciation is essential for addressing the risk additives pose through their bioavailability and plastic-associated transport. Here, we present and apply a theoretical framework for sampling and analytical procedures to characterize the speciation of hydrophobic nonionized additives in environmental compartments. We show that this simple framework can help develop sampling and sample treatment procedures to quantify plastic-associated additives and understand additive distribution between plastics and organic matter. When the framework was applied to concrete cases, internal consistency checks against the model allowed plastic-associated additives in a sample to be identified. In other cases, the plastic-to-organic-carbon ratio and the additive concentration in the matrix were the key factors affecting the ability to identify plastic-associated additives. The effect of additive dissipation through diffusion out of plastic particles is also considered.
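
    The speciation reasoning can be made concrete with a simple equilibrium-partitioning calculation: an additive distributes between the plastic and organic-carbon phases in proportion to each phase's mass and partition coefficient. The snippet below is a hedged illustration of that idea only; the function name, symbols, and numbers are assumptions, not values from the paper.

    ```python
    # Equilibrium partitioning of a hydrophobic, nonionized additive
    # between plastic and organic carbon; illustrative values only.

    def plastic_associated_fraction(m_plastic, m_oc, log_k_plastic, log_k_oc):
        """Fraction of the additive associated with plastic at equilibrium.
        m_plastic, m_oc: phase masses; log_k_*: log10 partition coefficients."""
        k_p, k_oc = 10.0 ** log_k_plastic, 10.0 ** log_k_oc
        return (k_p * m_plastic) / (k_p * m_plastic + k_oc * m_oc)

    # With little plastic relative to organic carbon, most of the additive
    # ends up matrix-associated even for a plastic-philic compound (~3% here).
    print(plastic_associated_fraction(m_plastic=1e-4, m_oc=1e-2,
                                      log_k_plastic=5.0, log_k_oc=4.5))
    ```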

    Metabolome-Based Classification of Snake Venoms by Bioinformatic Tools

    Snakebite is considered a neglected tropical disease, and one of the most intricate ones. The variability found in snake venom is what makes it immensely complex to study, and this variability is present in both the large and the small molecules found in venom. This study examined the variability of the venom's small molecules (i.e., in the mass range of 100–1000 Da) between two main families of venomous snakes, Elapidae and Viperidae, and produced a model able to classify unknown samples by means of specific features that can be extracted from their LC–MS data and output in a comprehensive list. The developed model also provided further insight into the composition of snake venom by clustering similarly composed venoms and highlighting the most relevant metabolites of each group. The model was built with support vector machines and used 20 features, which were merged into 10 principal components. All samples from the first and second validation data subsets were correctly classified. Biological hypotheses for the observed variation in the identified metabolites are also given.
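
    As a rough sketch, the classification setup described above (20 LC–MS-derived features compressed into 10 principal components feeding a support vector machine) maps naturally onto a standard scikit-learn pipeline. The scaling step, kernel choice, and placeholder data below are assumptions, not details from the paper.

    ```python
    # Sketch of the described classifier: 20 features -> 10 PCs -> SVM.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(40, 20))              # placeholder feature matrix
    y_train = np.repeat(["Elapidae", "Viperidae"], 20)

    model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
    model.fit(X_train, y_train)
    print(model.predict(rng.normal(size=(3, 20))))   # classify unknown venoms
    ```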

    Wastewater-based estimation of the prevalence of gout in Australia

    Allopurinol, a first-line gout treatment drug in Australia, was assessed as a wastewater-based epidemiology (WBE) biomarker of gout via quantification of its urinary metabolite, oxypurinol, in wastewater. The in-sewer stability of oxypurinol was examined using laboratory-scale sewer reactors. Wastewater from 75 wastewater treatment plants across Australia, covering approximately 52% (12.2 million) of the country's population, was collected on the 2016 census day. Oxypurinol was quantified in the wastewater samples and population-weighted mass loads were calculated. Pearson and Spearman rank-order correlations were applied to investigate links between allopurinol, other selected wastewater biomarkers, and socioeconomic indicators. Oxypurinol was shown to be stable under sewer conditions and suitable as a WBE biomarker, and it was detected in all wastewater samples. The estimated consumption of allopurinol ranged from 1.9 to 32 g/day/1000 people, equating to 4.8 to 80 DDD/day/1000 people. The prevalence of gout across all tested sewer catchments was between 0.5% and 8%, with a national median of 2.9%. No significant positive correlation was observed between allopurinol consumption and alcohol consumption, mean age of the catchment population, remoteness, or higher socioeconomic status, but there was a significant positive correlation with the use of selective analgesic drugs. Wastewater analysis can thus be used to study gout prevalence and can provide additional insights into population-level risk factors when triangulated with other biomarkers.
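
    The arithmetic chain from wastewater loads to prevalence can be reproduced from the numbers in the abstract: dividing the allopurinol consumption estimate by the WHO defined daily dose of 400 mg gives the reported DDD rates, and assuming one DDD per patient per day gives the reported prevalences. The excretion correction factor below is a placeholder, not the study's value.

    ```python
    # WBE back-calculation sketch; the excretion correction is hypothetical,
    # the DDD for allopurinol (400 mg, WHO ATC M04AA01) is not.
    MW_ALLOPURINOL, MW_OXYPURINOL = 136.11, 152.11   # g/mol
    EXCRETION_FRACTION = 0.75                        # placeholder value
    DDD_G = 0.4                                      # defined daily dose, g

    def consumption_g_day_1000(oxypurinol_load_g, population):
        """Allopurinol consumption (g/day/1000 people) from a daily
        population-weighted oxypurinol load (g/day)."""
        allopurinol_g = (oxypurinol_load_g * (MW_ALLOPURINOL / MW_OXYPURINOL)
                         / EXCRETION_FRACTION)
        return allopurinol_g / (population / 1000.0)

    # The abstract's range checks out: 1.9 and 32 g/day/1000 people give
    # 4.8 and 80 DDD/day/1000, i.e. prevalences of ~0.5% and 8%.
    for c in (1.9, 32.0):
        print(f"{c:5.1f} g/day/1000 -> {c / DDD_G:5.1f} DDD/day/1000 "
              f"-> prevalence {c / DDD_G / 1000.0:.1%}")
    ```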

    From Centroided to Profile Mode: Machine Learning for Prediction of Peak Width in HRMS Data

    Centroiding is one of the major approaches used to reduce the size of the data generated by high-resolution mass spectrometry. During centroiding, performed either during acquisition or as a pre-processing step, each mass peak profile is represented by a single value (i.e., the centroid). While effective in reducing the data size, centroiding also reduces the information density present in the mass peak profile, and the consequences of each step of the centroiding process for the final results may not be completely clear. Here, we present Cent2Prof, a package containing two algorithms that enable the conversion of centroided data to mass peak profile data and vice versa. The centroiding algorithm uses the resolution-based mass peak width parameter as a first guess and self-adjusts to fit the data. In addition to the m/z values, the centroiding algorithm generates the measured mass peak widths at half height, which can be used during feature detection and identification. The mass peak profile prediction algorithm employs a random-forest model for the prediction of mass peak widths, which is subsequently used for mass profile reconstruction. The centroiding results were compared to the outputs of the MZmine-implemented centroiding algorithm. Our algorithm resulted in false detection rates ≤5%, whereas the MZmine algorithm resulted in a 30% false positive rate and a 3% false negative rate. The error in profile prediction was ≤56% independent of mass, ionization mode, and intensity, which is six times more accurate than resolution-based estimates.
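
    As a minimal illustration of the two quantities a centroider extracts from a profile-mode mass peak (the centroid m/z and the width at half height), consider the sketch below. Cent2Prof's actual algorithms (resolution-based first guess, self-adjustment, and the random-forest width model) are more involved; the function here is a hypothetical toy stand-in.

    ```python
    # Centroid m/z and peak width at half height for one profile peak;
    # a toy stand-in, not the Cent2Prof implementation.
    import numpy as np

    def centroid_peak(mz, intensity):
        """mz, intensity: profile points covering a single mass peak."""
        center = np.average(mz, weights=intensity)       # centroid m/z
        above = mz[intensity >= intensity.max() / 2.0]
        fwhm = above.max() - above.min()                 # width at half height
        return center, fwhm

    # Synthetic Gaussian peak at m/z 301.1400 (sigma = 0.002); the grid-
    # limited FWHM estimate approaches 2.355 * sigma = 0.0047.
    mz = np.linspace(301.12, 301.16, 401)
    inten = 1e6 * np.exp(-0.5 * ((mz - 301.14) / 0.002) ** 2)
    print(centroid_peak(mz, inten))
    ```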

    Assessing sample extraction efficiencies for the analysis of complex unresolved mixtures of organic pollutants: a comprehensive non-target approach

    The comprehensive assessment of extraction recoveries of organic analytes from complex samples such as oil field produced water (PW) is a challenging task, and a targeted approach is usually used for the recovery assessment and determination of compounds in this type of analysis. Here we suggest a more comprehensive and less biased approach for the extraction recovery assessment of complex samples, combining conventional targeted analysis with a non-targeted approach. Three generic extraction methods were selected for evaluation: liquid-liquid extraction (Lq), solid-phase extraction using HLB cartridges (HLB), and solid-phase extraction using a combination of ENV+ and C8 cartridges (ENV). For analysis, the PW was divided into three parts: non-spiked, spiked at level 1, and spiked at level 2. The spiked samples were used for the targeted evaluation of the extraction recoveries of 65 added target analytes comprising alkanes, phenols, and polycyclic aromatic hydrocarbons, producing absolute recoveries. The non-spiked samples were used for the non-targeted approach, which combined the F-ratio method with an apex detection algorithm. Targeted analysis showed that the ENV cartridges and the Lq method performed better than the HLB cartridges, producing absolute recoveries of 53.1 ± 15.2% for ENV and 46.8 ± 13.2% for Lq versus 19.7 ± 6.7% for HLB. The ENV and Lq methods produced statistically similar analyte recoveries, whereas both differed from the recoveries obtained via the HLB method. The non-targeted approach captured unique features specific to each extraction method: it generated 26 unique features (mass spectral ions) that differed significantly between samples and were relevant in differentiating the extracts of the three methods. Using the combination of these targeted and non-targeted methods, we evaluated the extraction recoveries of the three extraction methods for the analysis of PW.
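
    The absolute-recovery arithmetic behind the targeted numbers above is straightforward background-corrected percent recovery. The sketch below is illustrative only; the function name and peak areas are invented.

    ```python
    # Background-corrected absolute recovery for a spiked analyte;
    # the areas below are invented for illustration.

    def absolute_recovery(spiked_area, nonspiked_area, expected_area):
        """Recovery (%) of the spiked amount, after subtracting the signal
        already present in the non-spiked produced water."""
        return 100.0 * (spiked_area - nonspiked_area) / expected_area

    # e.g. a PAH whose spike should contribute 1.0e6 area counts
    print(absolute_recovery(spiked_area=6.3e5, nonspiked_area=1.0e5,
                            expected_area=1.0e6))   # -> 53.0
    ```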

    Studying Venom Toxin Variation Using Accurate Masses from Liquid Chromatography–Mass Spectrometry Coupled with Bioinformatic Tools

    This study provides a new methodology for the rapid analysis of numerous venom samples in an automated fashion. Here, we use LC-MS (liquid chromatography–mass spectrometry) for venom separation and toxin analysis at the accurate mass level, combined with new in-house written bioinformatic scripts, to obtain high-throughput results. This analytical methodology was validated using 31 venoms from all members of a monophyletic clade of Australian elapids: brown snakes (Pseudonaja spp.) and taipans (Oxyuranus spp.). In a previous study, we revealed extensive venom variation within this clade, but the data were processed manually and MS peaks were integrated in a time-consuming and labour-intensive approach. By comparing the manual approach to our new automated approach, we now present a faster and more efficient pipeline for analysing venom variation. Pooled venom separations with post-column toxin fractionation were performed for subsequent high-throughput venomics, yielding toxin IDs correlated to accurate masses for all fractionated toxins. This workflow adds another dimension to the field of venom analysis by providing opportunities to rapidly perform in-depth studies on venom variation. Our pipeline opens new possibilities for studying animal venoms as evolutionary model systems and for investigating venom variation to aid in the development of better antivenoms.

    Inter-laboratory mass spectrometry dataset based on passive sampling of drinking water for non-target analysis

    Non-target analysis (NTA) employing high-resolution mass spectrometry is a commonly applied approach for the detection of novel chemicals of emerging concern in complex environmental samples. NTA typically results in large and information-rich datasets that require computer-aided (ideally automated) strategies for their processing and interpretation. Such strategies do, however, raise the challenge of reproducibility between and within different processing workflows. An effective strategy to mitigate such problems is the implementation of inter-laboratory studies (ILS) with the aim of evaluating different workflows and agreeing on harmonized/standardized quality control procedures. Here we present the data generated during such an ILS. The study was organized through the NORMAN Network and included 21 participants from 11 countries. A set of samples based on the passive sampling of drinking water, pre- and post-treatment, was shipped to all participating laboratories for analysis using one pre-defined method and one locally (i.e., in-house) developed method. The data generated represent a valuable resource (i.e., a benchmark) for future development of algorithms and workflows for NTA experiments.

    The NORMAN Association and the European Partnership for Chemicals Risk Assessment (PARC): let’s cooperate! [Commentary]

    The Partnership for Chemicals Risk Assessment (PARC) is currently under development as a joint research and innovation programme to strengthen the scientific basis for chemical risk assessment in the EU. The plan is to bring chemical risk assessors and managers together with scientists to accelerate method development and the production of necessary data and knowledge, and to facilitate the transition to next-generation evidence-based risk assessment, a non-toxic environment, and the European Green Deal. The NORMAN Network is an independent, well-established, and competent network of more than 80 organisations in the field of emerging substances, and it has enormous potential to contribute to the implementation of the PARC partnership. NORMAN stands ready to provide expert advice to PARC, drawing on its long experience in the development, harmonisation, and testing of advanced tools in relation to chemicals of emerging concern, and in support of a European early warning system to unravel the risks of contaminants of emerging concern (CECs) and close the gap between research and innovation and regulatory processes. In this commentary we highlight the tools developed by NORMAN that we consider most relevant to supporting the PARC initiative: (i) a joint data space and cutting-edge research tools for the risk assessment of contaminants of emerging concern; (ii) a collaborative European framework to improve data quality and comparability; (iii) advanced data analysis tools for a European early warning system; and (iv) support for national and European chemical risk assessment through harnessing, combining, and sharing evidence and expertise on CECs. By combining the extensive knowledge and experience of the NORMAN Network with the financial and policy-related strengths of the PARC initiative, a large step can be taken towards the goal of a non-toxic environment.