
    Large deviation principles for the Ewens-Pitman sampling model

    Let $M_{l,n}$ be the number of blocks with frequency $l$ in the exchangeable random partition induced by a sample of size $n$ from the Ewens-Pitman sampling model. We show that, as $n$ tends to infinity, $n^{-1}M_{l,n}$ satisfies a large deviation principle, and we characterize the corresponding rate function. A conditional counterpart of this large deviation principle is also presented. Specifically, given an initial sample of size $n$ from the Ewens-Pitman sampling model, we consider an additional sample of size $m$. For any fixed $n$ and as $m$ tends to infinity, we establish a large deviation principle for the conditional number of blocks with frequency $l$ in the enlarged sample, given the initial sample. Interestingly, the conditional and unconditional large deviation principles coincide; that is, there is no long-lasting impact of the given initial sample. Potential applications of our results are discussed in the context of Bayesian nonparametric inference for discovery probabilities.
    Comment: 30 pages, 2 figures
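The Ewens-Pitman model can be simulated with the standard two-parameter Chinese-restaurant predictive rule, which makes the statistic $M_{l,n}$ easy to explore numerically. Below is a minimal Python sketch; the function name and parameter choices are illustrative, not taken from the paper.

```python
import random
from collections import Counter

def ewens_pitman_partition(n, alpha, theta, seed=0):
    """Sample a random partition of {1,...,n} via the two-parameter
    Chinese-restaurant predictive rule: a new block opens with probability
    (theta + k*alpha)/(theta + i), and an existing block of size n_j is
    joined with probability (n_j - alpha)/(theta + i)."""
    rng = random.Random(seed)
    blocks = []  # current block sizes
    for i in range(n):  # i items already placed
        k = len(blocks)
        if i == 0 or rng.random() < (theta + k * alpha) / (theta + i):
            blocks.append(1)  # open a new block
        else:
            # join an existing block, chosen proportionally to (size - alpha)
            r = rng.random() * (i - k * alpha)
            chosen = k - 1  # fallback guards against float round-off
            for j, size in enumerate(blocks):
                r -= size - alpha
                if r <= 0:
                    chosen = j
                    break
            blocks[chosen] += 1
    return blocks

n, alpha, theta, l = 5000, 0.5, 1.0, 1
blocks = ewens_pitman_partition(n, alpha, theta)
M = Counter(blocks)  # M[l] = number of blocks with frequency l
print(M[l] / n)      # the statistic n^{-1} M_{l,n} studied in the paper
```

Repeating this for growing $n$ shows $n^{-1}M_{l,n}$ concentrating, consistent with the large deviation behaviour described in the abstract.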

    Consolidated bioprocessing of starchy substrates into ethanol by industrial Saccharomyces cerevisiae strains secreting fungal amylases

    The development of a yeast strain that converts raw starch to ethanol in one step (called Consolidated Bioprocessing, CBP) could significantly reduce the commercial costs of starch-based bioethanol. An efficient amylolytic Saccharomyces cerevisiae strain suitable for industrial bioethanol production was developed in this study. Codon-optimized variants of the Thermomyces lanuginosus glucoamylase (TLG1) and Saccharomycopsis fibuligera α-amylase (SFA1) genes were δ-integrated into two S. cerevisiae yeast strains with promising industrial traits, i.e., strains M2n and MEL2. The recombinant M2n[TLG1-SFA1] and MEL2[TLG1-SFA1] yeasts displayed high enzyme activities on soluble and raw starch (up to 8118 and 4461 nkat/g dry cell weight, respectively) and produced about 64 g/L ethanol from 200 g/L raw corn starch in a bioreactor, corresponding to 55% of the theoretical maximum ethanol yield (g of ethanol/g of available glucose equivalent). Their starch-to-ethanol conversion efficiencies were even higher on natural sorghum and triticale substrates (62% and 73% of the theoretical yield, respectively). This is the first report of direct ethanol production from natural starchy substrates (without any pre-treatment or commercial enzyme addition) using industrial yeast strains co-secreting both a glucoamylase and an α-amylase.
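The reported ~55% figure can be sanity-checked with the usual stoichiometric conversion factors (assumptions on my part, not stated in the abstract: 1.111 g glucose per g starch from hydrolysis, and a maximum of 0.511 g ethanol per g glucose from fermentation):

```python
starch = 200.0               # g/L raw corn starch loaded into the bioreactor
glucose_per_starch = 1.111   # hydration of anhydroglucose units (162 -> 180 g/mol)
ethanol_per_glucose = 0.511  # stoichiometric maximum (2 EtOH + 2 CO2 per glucose)

theoretical = starch * glucose_per_starch * ethanol_per_glucose  # ~113.5 g/L
observed = 64.0              # g/L ethanol reported
pct = 100 * observed / theoretical
print(round(pct, 1))  # prints 56.4, close to the reported ~55%
```

The small gap from the stated 55% plausibly reflects the paper's "available glucose equivalent" basis differing slightly from total starch.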

    Negative index of refraction, perfect lenses and transformation optics -- some words of caution

    In this paper we show that a negative index of refraction is not a direct implication of transformation optics with orientation-reversing diffeomorphisms. Rather, a negative index appears due to a specific choice of sign freedom. Furthermore, we point out that the transformation-designed lens, which relies on the concept of spacetime folding, does not amplify evanescent modes, in contrast to the Pendry-Veselago lens. Instead, evanescent modes at the image point are produced by a duplicated source, and thus no imaging of the near field (perfect lensing) takes place.
    Comment: 13 pages, 3 figures, LaTeX

    A Berry-Esseen theorem for Pitman's $\alpha$-diversity

    This paper is concerned with the study of the random variable $K_n$ denoting the number of distinct elements in a random sample $(X_1, \dots, X_n)$ of exchangeable random variables driven by the two-parameter Poisson-Dirichlet distribution $PD(\alpha, \theta)$. For $\alpha \in (0,1)$, Theorem 3.8 in \cite{Pit(06)} shows that $\frac{K_n}{n^{\alpha}} \stackrel{\text{a.s.}}{\longrightarrow} S_{\alpha,\theta}$ as $n \rightarrow +\infty$, where $S_{\alpha,\theta}$ is a random variable distributed according to the so-called scaled Mittag-Leffler distribution. Our main result states that
    $$\sup_{x \geq 0} \Big| \mathbb{P}\Big[\frac{K_n}{n^{\alpha}} \leq x \Big] - \mathbb{P}\big[S_{\alpha,\theta} \leq x\big] \Big| \leq \frac{C(\alpha, \theta)}{n^{\alpha}}$$
    holds with an explicit constant $C(\alpha, \theta)$. The key ingredients of the proof are a novel probabilistic representation of $K_n$ as a compound distribution and new, refined versions of certain quantitative bounds for the Poisson approximation and the compound Poisson distribution.
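The almost-sure convergence $K_n/n^{\alpha} \to S_{\alpha,\theta}$ is easy to observe by Monte Carlo, sampling partitions from the two-parameter Chinese-restaurant predictive rule. A minimal sketch (function name and parameter choices are mine, not from the paper):

```python
import random

def number_of_blocks(n, alpha, theta, rng):
    """K_n: number of distinct blocks in a two-parameter CRP partition of size n."""
    sizes = []
    for i in range(n):
        k = len(sizes)
        # new block with probability (theta + k*alpha)/(theta + i)
        if i == 0 or rng.random() < (theta + k * alpha) / (theta + i):
            sizes.append(1)
        else:
            # join an existing block, chosen proportionally to (size - alpha)
            r = rng.random() * (i - k * alpha)
            for j in range(k):
                r -= sizes[j] - alpha
                if r <= 0:
                    sizes[j] += 1
                    break
            else:  # float round-off fallback
                sizes[-1] += 1
    return len(sizes)

rng = random.Random(1)
n, alpha, theta = 1000, 0.5, 1.0
vals = [number_of_blocks(n, alpha, theta, rng) / n ** alpha for _ in range(100)]
print(sum(vals) / len(vals))  # fluctuates around E[S_{alpha,theta}]
```

Plotting the empirical distribution of `vals` against the limit law would illustrate exactly the distributional distance the Berry-Esseen bound controls.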

    Emergence of Object Segmentation in Perturbed Generative Models

    We introduce a novel framework to build a model that can learn how to segment objects from a collection of images without any human annotation. Our method builds on the observation that the location of object segments can be perturbed locally relative to a given background without affecting the realism of a scene. Our approach is to first train a generative model of a layered scene. The layered representation consists of a background image, a foreground image and the mask of the foreground. A composite image is then obtained by overlaying the masked foreground image onto the background. The generative model is trained in an adversarial fashion against a discriminator, which forces the generative model to produce realistic composite images. To force the generator to learn a representation where the foreground layer corresponds to an object, we perturb the output of the generative model by introducing a random shift of both the foreground image and mask relative to the background. Because the generator is unaware of the shift before computing its output, it must produce layered representations that are realistic for any such random perturbation. Finally, we learn to segment an image by defining an autoencoder consisting of an encoder, which we train, and the pre-trained generator as the decoder, which we freeze. The encoder maps an image to a feature vector, which is fed as input to the generator to give a composite image matching the original input image. Because the generator outputs an explicit layered representation of the scene, the encoder learns to detect and segment objects. We demonstrate this framework on real images of several object categories.
    Comment: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Spotlight presentation
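The random-shift perturbation at the heart of the method is simple to write down. The NumPy sketch below is illustrative only: the paper applies the shift to generator outputs, which are replaced here by random arrays, and the wrap-around boundary of `np.roll` is my simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

def composite_with_shift(background, foreground, mask, max_shift=8, rng=rng):
    """Overlay (foreground, mask) onto background after a random 2-D shift.

    background, foreground: (H, W, 3) float arrays; mask: (H, W, 1) in [0, 1].
    The shift is applied jointly to the foreground and its mask, so the
    composite stays realistic only if the mask tightly covers an object.
    """
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    fg = np.roll(foreground, (dy, dx), axis=(0, 1))
    m = np.roll(mask, (dy, dx), axis=(0, 1))
    return m * fg + (1 - m) * background  # alpha-composite the layers

H, W = 32, 32
bg = rng.random((H, W, 3))
fg = rng.random((H, W, 3))
mask = (rng.random((H, W, 1)) > 0.5).astype(float)
img = composite_with_shift(bg, fg, mask)
print(img.shape)  # (32, 32, 3)
```

In training, the discriminator would see `img` for a shift the generator cannot anticipate, which is what pushes the mask to align with a movable object.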
