    Diversity and biosynthetic potential of culturable microbes associated with toxic marine animals

    Tetrodotoxin (TTX) is a neurotoxin that has been reported from taxonomically diverse organisms across 14 different phyla. The biogenic origin of tetrodotoxin is still disputed; however, TTX biosynthesis by host-associated bacteria has been reported. An investigation into the culturable microbial populations from the TTX-associated blue-ringed octopus Hapalochlaena maculosa and sea slug Pleurobranchaea maculata revealed a surprisingly high microbial diversity. Although TTX was not detected among the cultured isolates, PCR screening identified some natural product biosynthesis genes putatively involved in its assembly. This study is the first to report on the microbial diversity of culturable communities from H. maculosa and P. maculata and common natural product biosynthesis genes from their microbiota. We also reassess the production of TTX reported from three bacterial strains isolated from the TTX-containing gastropod Nassarius semiplicatus.

    Residual Component Analysis

    Probabilistic principal component analysis (PPCA) seeks a low dimensional representation of a data set in the presence of independent spherical Gaussian noise, Σ = σ²I. The maximum likelihood solution for the model is an eigenvalue problem on the sample covariance matrix. In this paper we consider the situation where the data variance is already partially explained by other factors, e.g. covariates of interest, or temporal correlations, leaving some residual variance. We decompose the residual variance into its components through a generalized eigenvalue problem, which we call residual component analysis (RCA). We show that canonical covariates analysis (CCA) is a special case of our algorithm and explore a range of new algorithms that arise from the framework. We illustrate the ideas on a gene expression time series data set and the recovery of human pose from silhouette.
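    A minimal numerical sketch of the core idea (the data and the explained-variance term below are invented for illustration): given the sample covariance S and a covariance term Σ already accounted for by other factors, the residual components come from the generalized eigenvalue problem S v = λ Σ v, which reduces to ordinary PPCA when Σ = σ²I.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy data: 200 samples in 5 dimensions (purely illustrative).
Y = rng.standard_normal((200, 5))
S = np.cov(Y, rowvar=False)        # sample covariance
Sigma = 0.5 * np.eye(5)            # variance already explained; the spherical
                                   # choice sigma^2 * I recovers ordinary PPCA

# Residual components: generalized eigenvalue problem S v = lambda * Sigma v.
eigvals, eigvecs = eigh(S, Sigma)
order = np.argsort(eigvals)[::-1]  # eigh returns eigenvalues in ascending order
W = eigvecs[:, order[:2]]          # top-2 residual components
```

    Replacing the spherical Σ with, say, a temporal covariance leaves the same eigenproblem machinery intact, which is the point of the framework.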

    Bayesian inference via projections

    Bayesian inference often poses difficult computational problems. Even when off-the-shelf Markov chain Monte Carlo (MCMC) methods are available for the problem at hand, mixing issues might compromise the quality of the results. We introduce a framework for situations where the model space can be naturally divided into two components: (i) a baseline black-box probability distribution for the observed variables and (ii) constraints enforced on functionals of this probability distribution. Inference is performed by sampling from the posterior implied by the first component, and finding projections on the space defined by the second component. We discuss the implications of this separation in terms of priors, model selection, and MCMC mixing in latent variable models. Case studies include probabilistic principal component analysis, models of marginal independence, and an interpretable class of structured ordinal probit models.
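    A hedged toy sketch of the two-component idea (the baseline distribution and the linear constraint below are invented for illustration, not taken from the paper): draw from the unconstrained baseline, then project each sample onto the set where the constrained functional holds.

```python
import numpy as np

rng = np.random.default_rng(0)

# (i) baseline: draws from a "black-box" posterior over theta
# (here simply a 2-D Gaussian standing in for MCMC output).
samples = rng.multivariate_normal([0.2, 0.3], 0.1 * np.eye(2), size=1000)

# (ii) constraint on a functional of theta: enforce a . theta = b.
# For this toy linear case the Euclidean projection has a closed form:
# x - ((a . x - b) / ||a||^2) a
a, b = np.array([1.0, 1.0]), 1.0
projected = samples - np.outer(samples @ a - b, a) / (a @ a)
```

    The projected draws all satisfy the constraint exactly, while the baseline sampler never had to mix over the constrained space.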

    Residual Component Analysis: Generalising PCA for more flexible inference in linear-Gaussian models

    Probabilistic principal component analysis (PPCA) seeks a low dimensional representation of a data set in the presence of independent spherical Gaussian noise. The maximum likelihood solution for the model is an eigenvalue problem on the sample covariance matrix. In this paper we consider the situation where the data variance is already partially explained by other factors, for example sparse conditional dependencies between the covariates, or temporal correlations, leaving some residual variance. We decompose the residual variance into its components through a generalised eigenvalue problem, which we call residual component analysis (RCA). We explore a range of new algorithms that arise from the framework, including one that factorises the covariance of a Gaussian density into a low-rank and a sparse-inverse component. We illustrate the ideas on the recovery of a protein-signaling network, a gene expression time-series data set and the recovery of the human skeleton from motion capture 3-D cloud data.
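    A small sketch of the covariance factorisation mentioned above (dimensions and the sparse precision term are invented for illustration): the Gaussian's covariance is built as a low-rank part WWᵀ plus the inverse of a sparse precision matrix Λ, so conditional dependencies stay sparse while global variance is captured by few factors.

```python
import numpy as np

rng = np.random.default_rng(0)
d, q = 6, 2                              # ambient and latent dimensions (toy)

W = rng.standard_normal((d, q))          # low-rank factor

# Sparse precision (inverse-covariance) component: tridiagonal, so each
# variable is conditionally dependent only on its immediate neighbours.
Lam = 2.0 * np.eye(d)
for i in range(d - 1):
    Lam[i, i + 1] = Lam[i + 1, i] = -0.5

# Covariance = low-rank component + sparse-inverse component.
Sigma = W @ W.T + np.linalg.inv(Lam)
```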

    Context-Dependent Behavior of the Enterocin Iterative Polyketide Synthase: A New Model for Ketoreduction

    Heterologous expression and mutagenesis of the enterocin type II polyketide synthase (PKS) system suggest for the first time that the association of an extended set of proteins and substrates is needed for the effective production of the enterocin-wailupemycin polyketides. In the absence of its endogenous ketoreductase (KR) EncD in either the enterocin producer “Streptomyces maritimus” or the engineered host S. lividans K4-114, the enterocin minimal PKS is unable to produce benzoate-primed polyketides, even when complemented with the homologous actinorhodin KR ActIII or with EncD active site mutants. These data suggest that the enterocin PKS requires EncD to serve a catalytic and not just a structural role in the functional PKS enzyme complex. This strongly implies that EncD reduces the polyketide chain during elongation rather than after its complete assembly, as suggested for most type II PKSs.

    Alternariol 9-O

    Probabilistic Super-Resolution of Solar Magnetograms: Generating Many Explanations and Measuring Uncertainties

    Machine learning techniques have been successfully applied to super-resolution tasks on natural images, where visually pleasing results are sufficient. However, in many scientific domains this is not adequate, and estimates of errors and uncertainties are crucial. To address this issue, we propose a Bayesian framework that decomposes uncertainties into epistemic and aleatoric components. We test the validity of our approach by super-resolving images of the Sun's magnetic field and by generating maps measuring the range of possible high-resolution explanations compatible with a given low-resolution magnetogram.
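    A minimal sketch of one common way to realise such a decomposition (the stochastic forward passes below are simulated with random numbers, not produced by a real network): run T stochastic passes, then, by the law of total variance, take the spread of the predicted means as epistemic uncertainty and the average predicted noise variance as aleatoric uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ensemble of T stochastic forward passes (toy numbers): each pass
# yields a mean image and a predicted noise-variance map of size H x W.
T, H, W = 20, 8, 8
means = rng.standard_normal((T, H, W))
noise_vars = rng.uniform(0.1, 0.2, size=(T, H, W))

# Law of total variance:
#   epistemic = variance of the predicted means across passes
#   aleatoric = average of the predicted noise variances
epistemic = means.var(axis=0)
aleatoric = noise_vars.mean(axis=0)
total = epistemic + aleatoric
```

    The per-pixel maps make it possible to flag regions where many high-resolution explanations are equally compatible with the input.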

    Single-Frame Super-Resolution of Solar Magnetograms: Investigating Physics-Based Metrics & Losses

    Breakthroughs in our understanding of physical phenomena have traditionally followed improvements in instrumentation. Studies of the magnetic field of the Sun, and its influence on the solar dynamo and space weather events, have benefited from improvements in resolution and measurement frequency of new instruments. However, in order to fully understand the solar cycle, high-quality data across time-scales longer than the typical lifespan of a solar instrument are required. At the moment, discrepancies between measurement surveys prevent the combined use of all available data. In this work, we show that machine learning can help bridge the gap between measurement surveys by learning to super-resolve low-resolution magnetic field images and to translate between characteristics of contemporary instruments in orbit. We also introduce the notion of physics-based metrics and losses for super-resolution, to preserve the underlying physics and constrain the solution space of possible super-resolution outputs.
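    A hedged sketch of what a physics-based loss term could look like for magnetograms (the specific flux-matching terms below are an assumption chosen for illustration, not necessarily the paper's losses): penalise any mismatch in total signed and unsigned magnetic flux between the super-resolved output and the target.

```python
import numpy as np

def flux_loss(pred, target):
    """Hypothetical physics-based loss term: penalise mismatch in total
    signed and unsigned magnetic flux between prediction and target."""
    signed = abs(float(pred.sum() - target.sum()))
    unsigned = abs(float(np.abs(pred).sum() - np.abs(target).sum()))
    return signed + unsigned

# Toy 2x2 magnetograms (invented values).
truth = np.array([[1.0, -2.0], [0.5, 0.0]])
pred = np.array([[0.8, -1.9], [0.6, 0.1]])
loss = flux_loss(pred, truth)
```

    Such a term can be added to a standard pixel-wise loss so that optimisation cannot trade physical consistency for visual sharpness.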