
    Multiscale stick-breaking mixture models

    We introduce a family of multiscale stick-breaking mixture models for Bayesian nonparametric density estimation. The Bayesian nonparametric literature is dominated by single-scale methods, with the exception of Pólya trees and allied approaches. Our proposal is based on a mixture specification exploiting an infinitely-deep binary tree of random weights that grows according to a multiscale generalization of a large class of stick-breaking processes; this multiscale stick-breaking is paired with specific stochastic processes generating sequences of parameters that induce stochastically ordered kernel functions. Properties of this family of multiscale stick-breaking mixtures are described. Focusing on a Gaussian specification, a Markov chain Monte Carlo algorithm for posterior computation is introduced. The performance of the method is illustrated by analyzing both synthetic and real data sets. The method is well-suited for data living in ℝ and is able to detect densities with varying degrees of smoothness and local features.
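    To make the construction concrete, one common form of the multiscale stick-breaking weights can be written as follows (a sketch in notation assumed here, following the multiscale stick-breaking of Canale and Dunson (2016) cited by the related entries below): each node (s, h) of the binary tree, at scale s = 0, 1, ... and position h = 1, ..., 2^s, receives a stopping probability S_{s,h} and a descend-to-the-right probability R_{s,h}, and its weight is

        \pi_{s,h} = S_{s,h} \prod_{r=0}^{s-1} (1 - S_{r,g_r}) \, T_{r,g_r},
        \qquad g_r = \lceil h / 2^{s-r} \rceil,

    where g_r indexes the ancestor of (s, h) at scale r, and T_{r,g_r} equals R_{r,g_r} if the path from that ancestor descends to the right and 1 - R_{r,g_r} otherwise. In the Gaussian specification the random density is then the mixture

        f(y) = \sum_{s=0}^{\infty} \sum_{h=1}^{2^s} \pi_{s,h} \, \mathrm{N}(y; \mu_{s,h}, \sigma^2_{s,h}),

    with the \mu_{s,h} and \sigma^2_{s,h} generated by the stochastic processes mentioned above.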

    msBP: An R package to perform Bayesian nonparametric inference using multiscale Bernstein polynomials mixtures

    msBP is an R package that implements the method for Bayesian multiscale nonparametric inference introduced by Canale and Dunson (2016). The method, based on mixtures of multiscale beta dictionary densities, overcomes the drawbacks of Pólya trees and inherits many of the advantages of Dirichlet process mixture models. The key idea is that an infinitely-deep binary tree is introduced, with a beta dictionary density assigned to each node of the tree. Using a multiscale stick-breaking characterization, stochastically decreasing weights are assigned to each node. The result is an infinite mixture model. The package msBP implements a set of basic functions for working with this family of priors, such as random density and number generation, creation and manipulation of binary tree objects, and generic functions to plot and print the results. In addition, it implements the Gibbs samplers for posterior computation used to perform the multiscale density estimation and multiscale testing of group differences described in Canale and Dunson (2016).
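    The msBP package itself is written in R; as a language-neutral illustration of the core construction only (not the package's API), the following is a minimal Python sketch of the multiscale stick-breaking weights, assuming, for illustration, stopping probabilities S[s][h] ~ Beta(1, a) and descend-to-the-right probabilities R[s][h] ~ Beta(b, b), with the tree truncated at a maximum scale where stopping is forced so that the finite collection of weights sums to one.

        import numpy as np

        def multiscale_stickbreaking_weights(max_scale, a=5.0, b=1.0, seed=None):
            """Draw node weights pi[s][h] on a binary tree truncated at `max_scale`.

            Illustrative assumption: S[s][h] ~ Beta(1, a) (stop at the node) and
            R[s][h] ~ Beta(b, b) (descend to the right child); S is forced to 1 at
            the last scale so that the finite collection of weights sums to one.
            """
            rng = np.random.default_rng(seed)
            S = [rng.beta(1.0, a, size=2**s) for s in range(max_scale + 1)]
            R = [rng.beta(b, b, size=2**s) for s in range(max_scale + 1)]
            S[max_scale][:] = 1.0

            reach = [np.empty(2**s) for s in range(max_scale + 1)]  # mass reaching each node
            pi = [np.empty(2**s) for s in range(max_scale + 1)]     # weight of each node
            reach[0][0] = 1.0
            pi[0][0] = S[0][0]
            for s in range(1, max_scale + 1):
                for h in range(2**s):                    # 0-based index within scale s
                    parent = h // 2
                    turn = R[s - 1][parent] if h % 2 else 1.0 - R[s - 1][parent]
                    reach[s][h] = reach[s - 1][parent] * (1.0 - S[s - 1][parent]) * turn
                    pi[s][h] = reach[s][h] * S[s][h]
            return pi

        weights = multiscale_stickbreaking_weights(max_scale=4, seed=0)
        print(sum(w.sum() for w in weights))             # ~1.0 up to floating point error

    In this sketch, smaller values of a concentrate mass at coarser scales, while b governs how evenly mass splits between the two children of a node.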

    Multiscale Bernstein polynomials for densities

    Our focus is on constructing a multiscale nonparametric prior for densities. The Bayes density estimation literature is dominated by single-scale methods, with the exception of Pólya trees, which favor overly spiky densities even when the truth is smooth. We propose a multiscale Bernstein polynomial family of priors, which produce smooth realizations that do not rely on hard partitioning of the support. At each level in an infinitely-deep binary tree, we place a beta dictionary density; within a scale the densities are equivalent to Bernstein polynomials. Using a stick-breaking characterization, stochastically decreasing weights are allocated to the finer-scale dictionary elements. A slice sampler is used for posterior computation, and properties are described. The method characterizes densities with locally varying smoothness, and can produce a sequence of coarse-to-fine density estimates. An extension for Bayesian testing of group differences is introduced and applied to DNA methylation array data.
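    Given node weights like those sketched after the msBP entry above, a truncated version of the resulting density can be evaluated directly. The Python sketch below is illustrative only: it assumes the beta dictionary density at node h = 1, ..., 2^s of scale s is Beta(h, 2^s - h + 1), so that within a scale the dictionary elements are the Bernstein polynomial basis densities of degree 2^s - 1 on [0, 1], and it reuses the hypothetical multiscale_stickbreaking_weights helper defined earlier.

        import numpy as np
        from scipy.stats import beta

        def msbp_density(y, weights):
            """Evaluate the truncated multiscale Bernstein polynomial mixture at `y`.

            `weights[s][h]` is the weight of node h = 0, ..., 2**s - 1 at scale s
            (0-based); its dictionary density is assumed to be Beta(h + 1, 2**s - h),
            a Bernstein polynomial basis element of degree 2**s - 1 on [0, 1].
            """
            y = np.asarray(y, dtype=float)
            f = np.zeros_like(y)
            for s, pi_s in enumerate(weights):
                for h, w in enumerate(pi_s):
                    f += w * beta.pdf(y, h + 1, 2**s - h)
            return f

        grid = np.linspace(0.0, 1.0, 201)
        dens = msbp_density(grid, multiscale_stickbreaking_weights(max_scale=4, seed=0))

    Truncating at a deeper maximum scale adds finer dictionary elements, which is what allows the prior to capture locally varying smoothness.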

    Multiscale Dictionary Learning for Estimating Conditional Distributions

    Nonparametric estimation of the conditional distribution of a response given high-dimensional features is a challenging problem. It is important to allow not only the mean but also the variance and shape of the response density to change flexibly with the features, which may be massive-dimensional. We propose a multiscale dictionary learning model, which expresses the conditional response density as a convex combination of dictionary densities, with the densities used and their weights dependent on the path through a tree decomposition of the feature space. A fast graph partitioning algorithm is applied to obtain the tree decomposition, with Bayesian methods then used to adaptively prune and average over different sub-trees in a soft probabilistic manner. The algorithm scales efficiently to approximately one million features. State-of-the-art predictive performance is demonstrated on toy examples and two neuroscience applications involving up to a million features.
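    The paper's tree decomposition comes from a fast graph-partitioning algorithm with Bayesian pruning and averaging over sub-trees; the Python sketch below strips all of that away and keeps only the central idea, using a hand-built binary tree over a single feature and fixed Gaussian dictionary densities, to show how the conditional density of the response becomes a convex combination of the dictionary densities on the root-to-leaf path selected by the feature value. All names and numbers here are illustrative assumptions, not the paper's algorithm.

        import numpy as np
        from scipy.stats import norm

        def path_nodes(x, splits):
            """Root-to-leaf path of feature value `x` through a binary tree whose
            thresholds at each depth are listed in `splits[depth]` (one per node)."""
            path, idx = [(0, 0)], 0
            for depth, thresholds in enumerate(splits):
                idx = 2 * idx + int(x > thresholds[idx])    # go right if above the threshold
                path.append((depth + 1, idx))
            return path

        def conditional_density(y, x, splits, node_params, node_weights):
            """f(y | x) as a convex combination of the Gaussian dictionary densities
            on the path of x; weights along any path are assumed to sum to one."""
            return sum(node_weights[(s, h)] * norm.pdf(y, *node_params[(s, h)])
                       for s, h in path_nodes(x, splits))

        # Toy depth-2 tree on one feature, with (mean, sd) dictionary densities per node.
        splits = [[0.0], [-1.0, 1.0]]
        node_params = {(0, 0): (0.0, 3.0),
                       (1, 0): (-2.0, 1.5), (1, 1): (2.0, 1.5),
                       (2, 0): (-3.0, 0.5), (2, 1): (-1.0, 0.5),
                       (2, 2): (1.0, 0.5),  (2, 3): (3.0, 0.5)}
        node_weights = {node: {0: 0.5, 1: 0.3, 2: 0.2}[node[0]] for node in node_params}
        print(conditional_density(0.4, x=1.7, splits=splits,
                                  node_params=node_params, node_weights=node_weights))

    Coarse nodes near the root are shared by many feature values while leaves specialize; in the paper, Bayesian averaging over sub-trees softens the hard partition used in this toy example.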

    A multiscale tribological study of nacre : Evidence of wear nanomechanisms controlled by the frictional dissipated power

    Sheet nacre is a hybrid biocomposite with a multiscale structure, including nanograins of CaCO3 (97 wt.%, 40 nm in size) and two organic matrices: (i) the “interlamellar” matrix, mainly composed of β-chitin and proteins, and (ii) the “intracrystalline” matrix, mainly composed of silk-fibroin-like proteins. This material is currently being studied for use in small prostheses, which motivates interest in its tribological behaviour. In this work, that behaviour is investigated by varying the frictional dissipated power from a few nW to several hundred mW, in order to probe the responses of nacre’s different components independently. Results reveal different dissipative mechanisms depending on the dissipated frictional power: organic thin-film lubrication, elastoplastic deformation of the tablets, stick-slip phenomena, and/or multiscale wear processes, including various thermo-mechanical processes (i.e., mineral phase transformation, melting of the organics, and friction-induced nanoshock processes over a large range). All these mechanisms are controlled by the multiscale structure of nacre, and especially by its two matrices and their orientation relative to the sliding direction.