
    Examples of the effects of different averaging methods on carbon dioxide fluxes calculated using the eddy correlation method

    Three hours of high-frequency vertical wind speed and carbon dioxide concentration data recorded over tropical forest in Brazil are presented and discussed in relation to various detrending techniques used in eddy correlation analysis. Running means with time constants of 100, 1000 and 1875 s, and a 30-minute linear detrend, as commonly used to determine fluxes, have been calculated for each case study and are presented. It is shown that, for different trends in the background concentration of carbon dioxide, the different methods can lead to the calculation of radically different fluxes over an hourly period. The examples emphasise the need for caution when interpreting eddy-correlation-derived fluxes, especially for short-term process studies. Keywords: eddy covariance; detrending; running mean; carbon dioxide; tropical forest.
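    The sensitivity of the computed flux to the detrending choice can be illustrated with a minimal sketch. The helper names are hypothetical; a first-order recursive (exponential) running mean is assumed, one common way such filters are implemented in eddy-covariance processing:

    ```python
    import numpy as np

    def running_mean_detrend(x, dt, tau):
        # Recursive (exponential) running mean with time constant tau [s],
        # sampled at interval dt [s]; returns the fluctuating part x'.
        alpha = dt / tau
        mean = np.empty(len(x))
        mean[0] = x[0]
        for i in range(1, len(x)):
            mean[i] = (1.0 - alpha) * mean[i - 1] + alpha * x[i]
        return np.asarray(x, dtype=float) - mean

    def eddy_flux(w, c, dt, tau):
        # Flux estimated as the covariance <w'c'> of the detrended
        # vertical wind w and scalar concentration c.
        return np.mean(running_mean_detrend(w, dt, tau) *
                       running_mean_detrend(c, dt, tau))
    ```

    Re-running `eddy_flux` on the same hour of data with tau = 100, 1000 and 1875 s shows directly how much of a slow background trend each choice folds into, or removes from, the flux estimate.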

    Selective complexation of divalent cations by a cyclic α,β-peptoid hexamer: a spectroscopic and computational study

    We describe the qualitative and quantitative analysis of the cation-complexation properties of a cyclic peptoid hexamer composed of alternating α- and β-peptoid monomers, which bear exclusively chiral (S)-phenylethyl side chains (spe) that have no noticeable chelating properties. The binding of a series of monovalent and divalent cations was assessed by 1H NMR, circular dichroism, fluorescence and molecular modelling. In contrast to previous studies on cation binding by 18-membered α-cyclopeptoid hexamers, the 21-membered cyclopeptoid cP1 did not complex monovalent cations (Na+, K+, Ag+) but showed selectivity for divalent cations (Ca2+, Ba2+, Sr2+ and Mg2+). Hexacoordinated C3-symmetric complexes were demonstrated for divalent cations with ionic radii around 1 Å (Ca2+ and Sr2+), while 5-coordination is preferred for divalent cations with larger (Ba2+) or smaller (Mg2+) ionic radii.

    Statistical HOmogeneous Cluster SpectroscopY (SHOCSY): an optimized statistical approach for clustering of ¹H NMR spectral data to reduce interference and enhance robust biomarkers selection.

    We propose a novel statistical approach to improve the reliability of ¹H NMR spectral analysis in complex metabolic studies. The Statistical HOmogeneous Cluster SpectroscopY (SHOCSY) algorithm aims to reduce the variation within biological classes by selecting subsets of homogeneous ¹H NMR spectra that contain specific spectroscopic metabolic signatures related to each biological class in a study. In SHOCSY, we used a clustering method to categorize the whole data set into a number of clusters of samples, each cluster showing a similar spectral feature and hence biochemical composition, and we then used an enrichment test to identify the associations between the clusters and the biological classes in the data set. We evaluated the performance of the SHOCSY algorithm using a simulated ¹H NMR data set to emulate renal tubule toxicity and further exemplified this method with a ¹H NMR spectroscopic study of hydrazine-induced liver toxicity in rats. The SHOCSY algorithm improved the predictive ability of the orthogonal partial least-squares discriminant analysis (OPLS-DA) model through the use of "truly" representative samples in each biological class (i.e., homogeneous subsets). This method ensures that the analyses are no longer confounded by idiosyncratic responders and thus improves the reliability of biomarker extraction. SHOCSY is a useful tool for removing irrelevant variation that interferes with the interpretation and predictive ability of models, and has widespread applicability to other spectroscopic data as well as other "omics" types of data.
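    The two-stage logic of the abstract (cluster, then test cluster–class association) can be sketched as follows. This is an illustrative outline under stated assumptions, not the published SHOCSY implementation: a minimal k-means stands in for the clustering step, and a one-sided Fisher's exact test stands in for the enrichment test; all function names are invented:

    ```python
    import numpy as np
    from scipy.stats import fisher_exact

    def kmeans(X, k, n_iter=50):
        # Minimal k-means with deterministic, evenly spaced initial centres.
        centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
        labels = np.zeros(len(X), dtype=int)
        for _ in range(n_iter):
            dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels

    def cluster_enrichment(labels, classes, k):
        # One-sided Fisher's exact test per (cluster, class) pair:
        # is class c over-represented among the spectra in cluster j?
        classes = np.asarray(classes)
        pvals = {}
        for j in range(k):
            in_j = labels == j
            for c in np.unique(classes):
                in_c = classes == c
                table = [[int(np.sum(in_j & in_c)), int(np.sum(in_j & ~in_c))],
                         [int(np.sum(~in_j & in_c)), int(np.sum(~in_j & ~in_c))]]
                pvals[(j, str(c))] = fisher_exact(table, alternative="greater")[1]
        return pvals
    ```

    In this sketch, spectra in clusters whose enrichment p-value passes a chosen threshold would form the homogeneous subset for each class before refitting the OPLS-DA model.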

    Development and Experimental Validation of a 20K Atlantic Cod (Gadus morhua) Oligonucleotide Microarray Based on a Collection of over 150,000 ESTs

    The collapse of Atlantic cod (Gadus morhua) wild populations strongly impacted the Atlantic cod fishery and led to the development of cod aquaculture. In order to improve aquaculture and broodstock quality, we need to gain knowledge of genes and pathways involved in Atlantic cod responses to pathogens and other stressors. The Atlantic Cod Genomics and Broodstock Development Project has generated over 150,000 expressed sequence tags (ESTs) from 42 cDNA libraries representing various tissues, developmental stages, and stimuli. We used this resource to develop an Atlantic cod oligonucleotide microarray containing 20,000 unique probes. Selection of sequences from the full range of cDNA libraries enables application of the microarray for a broad spectrum of Atlantic cod functional genomics studies. We included sequences that were highly abundant in suppression subtractive hybridization (SSH) libraries, which were enriched for transcripts responsive to pathogens or other stressors. These sequences represent genes that potentially play an important role in stress and/or immune responses, making the microarray particularly useful for studies of Atlantic cod gene expression responses to immune stimuli and other stressors. To demonstrate its value, we used the microarray to analyze the Atlantic cod spleen response to stimulation with formalin-killed, atypical Aeromonas salmonicida, resulting in a gene expression profile that indicates a strong innate immune response. These results were further validated by quantitative PCR analysis and comparison to results from previous analysis of an SSH library. This study shows that the Atlantic cod 20K oligonucleotide microarray is a valuable new tool for Atlantic cod functional genomics research.

    Gene and genon concept: coding versus regulation: A conceptual and information-theoretic analysis of genetic storage and expression in the light of modern molecular biology

    We analyse here the definition of the gene in order to distinguish, on the basis of modern insight in molecular biology, what the gene is coding for, namely a specific polypeptide, and how its expression is realized and controlled. Before the coding role of the DNA was discovered, a gene was identified with a specific phenotypic trait, from Mendel through Morgan up to Benzer. Subsequently, however, molecular biologists ventured to define a gene at the level of the DNA sequence in terms of coding. As is becoming ever more evident, the relations between information stored at DNA level and functional products are very intricate, and the regulatory aspects are as important and essential as the information coding for products. This approach thus led to a conceptual hybrid that confused coding, regulation and functional aspects. In this essay, we develop a definition of the gene that once again starts from the functional aspect. A cellular function can be represented by a polypeptide or an RNA. In the case of the polypeptide, its biochemical identity is determined by the mRNA prior to translation, and that is where we locate the gene. The steps from specific, but possibly separated, sequence fragments at DNA level to that final mRNA can then be analysed in terms of regulation. For that purpose, we coin the new term "genon". In that manner, we can clearly separate product and regulative information while keeping the fundamental relation between coding and function, without the need to introduce a conceptual hybrid. In mRNA, the program regulating the expression of a gene is superimposed onto and added to the coding sequence in cis: we call it the genon. The complementary external control of a given mRNA by trans-acting factors is incorporated in its transgenon. A consequence of this definition is that, in eukaryotes, the gene is, in most cases, not yet present at DNA level. Rather, it is assembled by RNA processing, including differential splicing, from various pieces, as steered by the genon. It emerges finally as an uninterrupted nucleic acid sequence at mRNA level just prior to translation, in faithful correspondence with the amino acid sequence to be produced as a polypeptide. After translation, the genon has fulfilled its role and expires. The distinction between the protein-coding information as materialised in the final polypeptide and the processing information represented by the genon allows us to set up a new information-theoretic scheme. The standard sequence information determined by the genetic code expresses the relation between coding sequence and product. Backward analysis asks from which coding region in the DNA a given polypeptide originates. The (more interesting) forward analysis asks in how many polypeptides, of how many different types, a given DNA segment is expressed. This concerns the control of the expression process, for which we have introduced the genon concept. Thus, the information-theoretic analysis can capture the complementary aspects of coding and regulation, of gene and genon.
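    The forward analysis described above (one DNA locus, several polypeptides, with the choice steered by the genon) can be made concrete with a purely illustrative toy model. Everything here is invented for illustration, including the three-letter "genetic code" and the fragment names; it is not the authors' formalism:

    ```python
    # Toy model of the gene/genon distinction: the "gene" (an uninterrupted
    # coding sequence) exists only at mRNA level, assembled from scattered
    # DNA fragments by a splicing program -- the genon.

    CODON = {"ATG": "M", "GCT": "A", "TGG": "W", "TAA": "*"}  # invented toy code

    dna_fragments = {"e1": "ATG", "e2": "GCT", "e3": "TGG", "e4": "TAA"}

    # Each genon is a cis-regulatory program: an ordered choice of fragments.
    # Two genons over the same locus model alternative splicing.
    genons = [("e1", "e2", "e4"), ("e1", "e3", "e4")]

    def express(genon):
        # The gene appears here, at mRNA level, just prior to translation.
        mrna = "".join(dna_fragments[e] for e in genon)
        peptide = ""
        for i in range(0, len(mrna), 3):
            aa = CODON[mrna[i:i + 3]]
            if aa == "*":  # stop codon: the genon has fulfilled its role
                break
            peptide += aa
        return mrna, peptide

    # Forward analysis: one DNA segment, several distinct polypeptides.
    products = {express(g)[1] for g in genons}
    ```

    Backward analysis runs the mapping the other way: given a polypeptide, it recovers which fragment choices could have produced it.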