
    Introducing a new breed of wine yeast: interspecific hybridisation between a commercial Saccharomyces cerevisiae wine yeast and Saccharomyces mikatae

    Interspecific hybrids are commonplace in agriculture and horticulture; bread wheat and grapefruit are but two examples. The benefits derived from interspecific hybridisation include the potential of generating advantageous transgressive phenotypes. This paper describes the generation of a new breed of wine yeast by interspecific hybridisation between a commercial Saccharomyces cerevisiae wine yeast strain and Saccharomyces mikatae, a species hitherto not associated with industrial fermentation environs. While commercially available wine yeast strains provide consistent and reliable fermentations, wines produced using single inocula are thought to lack the sensory complexity and rounded palate structure obtained from spontaneous fermentations. In contrast, interspecific yeast hybrids have the potential to deliver increased complexity to wine sensory properties and alternative wine styles through the formation of novel, and wider-ranging, yeast volatile fermentation metabolite profiles, whilst maintaining the robustness of the wine yeast parent. Screening of newly generated hybrids from a cross between a S. cerevisiae wine yeast and S. mikatae (closely related but ecologically distant members of the Saccharomyces sensu stricto clade) has identified progeny with robust fermentation properties and winemaking potential. Chemical analysis showed that, relative to the S. cerevisiae wine yeast parent, hybrids produced wines with different concentrations of volatile metabolites that are known to contribute to wine flavour and aroma, including flavour compounds associated with non-Saccharomyces species. The new S. cerevisiae × S. mikatae hybrids have the potential to produce complex wines akin to products of spontaneous fermentation while giving winemakers the safeguard of an inoculated ferment.
    Jennifer R. Bellon, Frank Schmid, Dimitra L. Capone, Barbara L. Dunn, Paul J. Chambers

    Robust probabilistic superposition and comparison of protein structures

    Background: Protein structure comparison is a central issue in structural bioinformatics. The standard dissimilarity measure for protein structures is the root mean square deviation (RMSD) of representative atom positions such as α-carbons. To evaluate the RMSD, the structures under comparison must be superimposed optimally so as to minimize the RMSD. How to evaluate optimal fits becomes a matter of debate if the structures contain regions which differ largely - a situation encountered in NMR ensembles and in proteins undergoing large-scale conformational transitions.

    Results: We present a probabilistic method for robust superposition and comparison of protein structures. Our method aims to identify the largest structurally invariant core. To do so, we model non-rigid displacements in protein structures with outlier-tolerant probability distributions. These distributions exhibit heavier tails than the Gaussian distribution underlying standard RMSD minimization and thus accommodate highly divergent structural regions. The drawback is that under a heavy-tailed model analytical expressions for the optimal superposition no longer exist. To circumvent this problem we work with a scale mixture representation, which implies a weighted RMSD. We develop two iterative procedures, an Expectation Maximization algorithm and a Gibbs sampler, to estimate the local weights, the optimal superposition, and the parameters of the heavy-tailed distribution. Applications demonstrate that heavy-tailed models capture differences between structures undergoing substantial conformational changes and can be used to assess the precision of NMR structures. By comparing Bayes factors we can automatically choose the most adequate model. Therefore our method is parameter-free.

    Conclusions: Heavy-tailed distributions are well-suited to describe large-scale conformational differences in protein structures. A scale mixture representation facilitates the fitting of these distributions and enables outlier-tolerant superposition.
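    To make the scale-mixture idea concrete, here is a minimal sketch (not the authors' implementation) of the weighted-RMSD EM loop for a Student-t displacement model: the E-step turns current residuals into per-residue weights, and the M-step performs a weighted Kabsch superposition. The isotropic scale, the fixed degrees of freedom nu, and all function names are illustrative assumptions.

    import numpy as np

    def kabsch_weighted(X, Y, w):
        """Rotation R and translation t minimizing the weighted RMSD
        between X and the transformed Y (standard weighted Kabsch)."""
        w = w / w.sum()
        xc = (w[:, None] * X).sum(axis=0)            # weighted centroids
        yc = (w[:, None] * Y).sum(axis=0)
        A = ((Y - yc) * w[:, None]).T @ (X - xc)     # 3x3 weighted covariance
        U, _, Vt = np.linalg.svd(A)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # guard against reflections
        return R, xc - R @ yc

    def t_em_superpose(X, Y, nu=1.0, n_iter=50):
        """EM for a Student-t (scale-mixture) displacement model: heavy tails
        down-weight divergent regions so the superposition converges on the
        structurally invariant core.  X, Y: (n, 3) matched coordinate arrays."""
        n = len(X)
        w = np.ones(n)
        for _ in range(n_iter):
            R, t = kabsch_weighted(X, Y, w)           # M-step: superposition
            r2 = ((X - (Y @ R.T + t)) ** 2).sum(axis=1)
            sigma2 = (w * r2).sum() / (3.0 * n)       # M-step: isotropic scale
            w = (nu + 3.0) / (nu + r2 / sigma2)       # E-step: expected precisions
        return R, t, w                                # low w flags divergent residues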

    Integrating Sequencing Technologies in Personal Genomics: Optimal Low Cost Reconstruction of Structural Variants

    The goal of human genome re-sequencing is to obtain an accurate assembly of an individual's genome. Recently, there has been great excitement in the development of many technologies for this (e.g. medium and short read sequencing from platforms such as 454 and SOLiD, and high-density oligo-arrays from Affymetrix and NimbleGen), with even more expected to appear. The costs and sensitivities of these technologies differ considerably from each other. As an important goal of personal genomics is to reduce the cost of re-sequencing to an affordable point, it is worthwhile to consider optimally integrating technologies. Here, we build a simulation toolbox that will help us optimally combine different technologies for genome re-sequencing, especially in reconstructing large structural variants (SVs). SV reconstruction is considered the most challenging step in human genome re-sequencing. (It is sometimes even harder than de novo assembly of small genomes because of the duplications and repetitive sequences in the human genome.) To this end, we formulate canonical problems that are representative of issues in reconstruction and are of small enough scale to be computationally tractable and simulatable. Using semi-realistic simulations, we show how we can combine different technologies to optimally solve the assembly at low cost. With mapability maps, our simulations efficiently handle the inhomogeneous repeat-containing structure of the human genome and the computational complexity of practical assembly algorithms. They quantitatively show how combining different read lengths is more cost-effective than using one length, how an optimal mixed sequencing strategy for reconstructing large novel SVs usually also gives accurate detection of SNPs/indels, how paired-end reads can improve reconstruction efficiency, and how adding in arrays is more efficient than just sequencing for disentangling some complex SVs. Our strategy should facilitate the sequencing of human genomes at maximum accuracy and low cost.
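    As a toy illustration of the cost-optimization idea (not the paper's simulation toolbox), the sketch below grid-searches the cheapest mix of two read-length technologies that reaches a target expected reconstructed fraction, using Poisson (Lander-Waterman-style) coverage and a one-number stand-in for a mapability map. All costs, depths, and mappable fractions are hypothetical.

    import numpy as np
    from itertools import product

    # Hypothetical technology parameters (illustrative, not the paper's numbers):
    # cost per sequenced base and the fraction of the genome uniquely mappable
    # at that read length (a one-number stand-in for a mapability map).
    SHORT = dict(cost_per_base=1e-9, mappable=0.80)    # cheap, repeat-blind
    MEDIUM = dict(cost_per_base=1e-8, mappable=0.92)   # pricier, resolves more repeats
    G = 3e9        # genome size (bases)
    TARGET = 0.90  # required expected reconstructed fraction

    def reconstructed_fraction(c_short, c_medium):
        """Poisson coverage: a base is recovered if at least one read that can
        be placed there covers it.  Short-mappable positions are assumed to be
        a subset of medium-mappable ones."""
        p_both = SHORT["mappable"]
        p_medium_only = MEDIUM["mappable"] - p_both
        return (p_both * (1 - np.exp(-(c_short + c_medium)))
                + p_medium_only * (1 - np.exp(-c_medium)))

    def cheapest_mix(depths=np.arange(0.0, 40.5, 0.5)):
        """Grid-search the (short, medium) depth mix hitting TARGET at least cost."""
        best = None
        for cs, cm in product(depths, depths):
            if reconstructed_fraction(cs, cm) < TARGET:
                continue
            cost = G * (cs * SHORT["cost_per_base"] + cm * MEDIUM["cost_per_base"])
            if best is None or cost < best[0]:
                best = (round(cost, 2), cs, cm)
        return best

    print(cheapest_mix())  # under these toy numbers, a mix beats either length alone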

    Early Treatment with Basal Insulin Glargine in People with Type 2 Diabetes: Lessons from ORIGIN and Other Cardiovascular Trials

    Dysglycemia results from a deficit in first-phase insulin secretion compounded by increased insulin insensitivity, exposing beta cells to chronic hyperglycemia and excessive glycemic variability. Initiation of intensive insulin therapy at diagnosis of type 2 diabetes mellitus (T2DM) to achieve normoglycemia has been shown to reverse glucotoxicity, resulting in recovery of residual beta-cell function. The United Kingdom Prospective Diabetes Study (UKPDS) 10-year post-trial follow-up reported reductions in cardiovascular outcomes and all-cause mortality in persons with T2DM who initially received intensive glucose control compared with standard therapy. In the cardiovascular outcome trial Outcome Reduction with an Initial Glargine Intervention (ORIGIN), a neutral effect on cardiovascular disease was observed in a population comprising people with prediabetes and T2DM. Worsening of glycemic control was prevented over the 6.7-year treatment period, with few serious hypoglycemic episodes, only moderate weight gain, and a lesser need for dual or triple oral treatment versus standard care. Several other studies have also highlighted the benefits of early insulin initiation as first-line or add-on therapy to metformin. The decision to introduce basal insulin to metformin must, however, be individualized based on a risk-benefit analysis. The landmark ORIGIN trial provides many lessons relating to the concept and application of early insulin therapy for the prevention and safe and effective induction and maintenance of glycemic control in type 2 diabetes.

    Attention-dependent modulation of cortical taste circuits revealed by granger causality with signal-dependent noise

    We show, for the first time, that in cortical areas, for example the insular, orbitofrontal, and lateral prefrontal cortex, there is signal-dependent noise in the fMRI blood-oxygen-level-dependent (BOLD) time series, with the variance of the noise increasing approximately linearly with the square of the signal. Classical Granger causal models are based on autoregressive models with time-invariant covariance structure, and thus do not take this signal-dependent noise into account. To address this limitation, here we describe a Granger causal model with signal-dependent noise, and a novel likelihood-ratio test for causal inferences. We apply this approach to the data from an fMRI study to investigate the source of the top-down attentional control of taste intensity and taste pleasantness processing. The Granger causality with signal-dependent noise analysis reveals effects not identified by classical Granger causal analysis. In particular, there is a top-down effect from the posterior lateral prefrontal cortex to the insular taste cortex during attention to intensity but not to pleasantness, and there is a top-down effect from the anterior and posterior lateral prefrontal cortex to the orbitofrontal cortex during attention to pleasantness but not to intensity. In addition, there is stronger forward effective connectivity from the insular taste cortex to the orbitofrontal cortex during attention to pleasantness than during attention to intensity. These findings indicate the importance of explicitly modeling signal-dependent noise in functional neuroimaging, and reveal some of the processes involved in a biased activation theory of selective attention.
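    The following sketch illustrates the kind of likelihood-ratio test described above under one simple choice of signal-dependent noise (variance growing linearly with the squared lagged signal). It is an assumption-laden toy, not the paper's estimator; the variance form, lag order, and all names are illustrative.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import chi2

    def nll(params, y, x):
        """Negative log-likelihood of y_t = a*y_{t-1} + c*x_{t-1} + e_t with
        signal-dependent noise: Var(e_t) = s0 * (1 + lam * y_{t-1}^2).
        s0 and lam are parameterized on the log scale to stay positive."""
        a, c, log_s0, log_lam = params
        mu = a * y[:-1] + c * x[:-1]
        var = np.exp(log_s0) * (1.0 + np.exp(log_lam) * y[:-1] ** 2)
        r = y[1:] - mu
        return 0.5 * np.sum(np.log(2.0 * np.pi * var) + r ** 2 / var)

    def granger_lr_test(y, x):
        """Likelihood-ratio test for 'x Granger-causes y' under the
        signal-dependent noise model; df = 1 (the coefficient on x)."""
        full = minimize(lambda p: nll(p, y, x), np.zeros(4), method="Nelder-Mead")
        red = minimize(lambda p: nll(np.insert(p, 1, 0.0), y, x),
                       np.zeros(3), method="Nelder-Mead")
        lr = 2.0 * (red.fun - full.fun)
        return lr, chi2.sf(lr, df=1)

    # Toy data: x drives y, and y's noise variance grows with its squared signal.
    rng = np.random.default_rng(0)
    T = 2000
    x = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        sd = np.sqrt(0.1 * (1.0 + y[t - 1] ** 2))
        y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + sd * rng.standard_normal()
    print(granger_lr_test(y, x))  # large LR, tiny p-value: x -> y detected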

    Search for Kaluza-Klein Graviton Emission in $p\bar{p}$ Collisions at $\sqrt{s}=1.8$ TeV using the Missing Energy Signature

    We report on a search for direct Kaluza-Klein graviton production in a data sample of 84 $\mathrm{pb}^{-1}$ of $p\bar{p}$ collisions at $\sqrt{s} = 1.8$ TeV, recorded by the Collider Detector at Fermilab. We investigate the final state of large missing transverse energy and one or two high-energy jets. We compare the data with the predictions from a $(3+1+n)$-dimensional Kaluza-Klein scenario in which gravity becomes strong at the TeV scale. At 95% confidence level (C.L.), for $n$ = 2, 4, and 6 we exclude an effective Planck scale below 1.0, 0.77, and 0.71 TeV, respectively.
    Comment: Submitted to PRL, 7 pages, 4 figures. Revision includes 5 figures.

    Measurement of the average time-integrated mixing probability of b-flavored hadrons produced at the Tevatron

    We have measured the number of like-sign (LS) and opposite-sign (OS) lepton pairs arising from double semileptonic decays of $b$- and $\bar{b}$-hadrons, pair-produced at the Fermilab Tevatron collider. The data samples were collected with the Collider Detector at Fermilab (CDF) during the 1992-1995 collider run by triggering on the existence of $\mu\mu$ and $e\mu$ candidates in an event. The observed ratio of LS to OS dileptons leads to a measurement of the average time-integrated mixing probability of all produced $b$-flavored hadrons which decay weakly, $\bar{\chi} = 0.152 \pm 0.007\,\mathrm{(stat.)} \pm 0.011\,\mathrm{(syst.)}$, that is significantly larger than the world average $\bar{\chi} = 0.118 \pm 0.005$.
    Comment: 47 pages, 10 figures, 15 tables. Submitted to Phys. Rev.
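    In the idealized limit where both leptons come directly from semileptonic $b$ decays with no background, a like-sign pair requires exactly one of the two hadrons to have mixed, so the LS/OS ratio $R$ determines $\bar{\chi}$ through $R = 2\bar{\chi}(1-\bar{\chi})/[\bar{\chi}^2 + (1-\bar{\chi})^2]$. A minimal sketch of the inversion follows; the real analysis must additionally correct for backgrounds and sequential $b \to c \to$ lepton decays.

    import numpy as np

    def chi_from_ratio(R):
        """Invert R = 2*chi*(1-chi) / (chi**2 + (1-chi)**2) for the
        time-integrated mixing probability chi, taking the physical
        root chi < 0.5.  Idealized: ignores backgrounds and sequential
        b -> c -> lepton decays."""
        f = R / (2.0 * (1.0 + R))                  # f = chi * (1 - chi)
        return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * f))

    # In this idealized limit, R ~ 0.347 corresponds to the paper's central value:
    print(chi_from_ratio(0.347))  # ~0.152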