9,179 research outputs found

    On the Equivalence Between Deep NADE and Generative Stochastic Networks

    Neural Autoregressive Distribution Estimators (NADEs) have recently been shown to be successful alternatives for modeling high-dimensional multimodal distributions. One issue with NADEs is that they rely on a particular order of factorization for $P(\mathbf{x})$. This issue has recently been addressed by a variant of NADE called Orderless NADE and its deeper version, Deep Orderless NADE. Orderless NADEs are trained with a criterion that stochastically maximizes $P(\mathbf{x})$ over all possible orders of factorization. Unfortunately, ancestral sampling from a deep NADE is very expensive, corresponding to running through a neural net that separately predicts each of the visible variables given some of the others. This work makes a connection between this criterion and the training criterion for Generative Stochastic Networks (GSNs). It shows that training a NADE in this way also trains a GSN, which defines a Markov chain associated with the NADE model. Based on this connection, we show an alternative way to sample from a trained Orderless NADE that allows a trade-off between computation time and sample quality: a 3- to 10-fold speedup (taking into account the waste due to correlations between consecutive samples of the chain) can be obtained without noticeably reducing the quality of the samples. This is achieved using a novel sampling procedure for GSNs, called annealed GSN sampling, which, like tempering methods, combines fast mixing (obtained thanks to steps at high noise levels) with accurate samples (obtained thanks to steps at low noise levels).
    Comment: ECML/PKDD 201
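    To make the sampling chain concrete, here is a minimal sketch of the annealed idea, assuming a hypothetical trained model exposing a `sample_conditional(x, idx)` method that resamples the indexed variables from the Orderless NADE's conditional; the schedule below (resampling fewer variables as the chain proceeds) stands in for the high-to-low noise schedule described in the abstract, not the paper's exact procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def annealed_gsn_sampling(model, dim, n_steps=100):
        """Sketch of annealed GSN sampling from an Orderless NADE: run the
        Markov chain induced by the model, annealing the 'noise level'
        (here, the number of variables resampled per step) from high to
        low, so early steps mix fast and late steps refine the sample."""
        x = rng.integers(0, 2, size=dim).astype(float)  # random start
        for t in range(n_steps):
            # Anneal: resample a large block early, a small block late.
            frac = 1.0 - t / n_steps
            k = max(1, int(frac * dim))
            idx = rng.choice(dim, size=k, replace=False)
            # Hypothetical API: draw x[idx] from P(x_idx | x_rest), which an
            # Orderless NADE can evaluate for any subset of variables.
            x = model.sample_conditional(x, idx)
        return x
    ```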

    Exact Bayesian curve fitting and signal segmentation.

    We consider regression models where the underlying functional relationship between the response and the explanatory variable is modeled as independent linear regressions on disjoint segments. We present an algorithm for perfect simulation from the posterior distribution of such a model, even allowing for an unknown number of segments and an unknown model order for the linear regressions within each segment. The algorithm is simple, scales well to large data sets, and avoids the convergence-diagnosis problem inherent in Markov chain Monte Carlo (MCMC) approaches to this problem. We demonstrate our algorithm on standard denoising problems, on a piecewise-constant AR model, and on a speech segmentation problem.
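    The abstract leaves the algorithmic details to the paper; as a rough, assumption-laden illustration of how exact posterior simulation of segmentations can work, the sketch below implements a standard backward recursion plus forward sampling for a changepoint model. Both `seg_marglik(t, s)` (the marginal likelihood of one segment, model order integrated out) and the geometric changepoint prior `p_cp` are assumptions for illustration, not the paper's specification.

    ```python
    import numpy as np

    def exact_changepoint_sampler(n, seg_marglik, p_cp, rng):
        """Draw one exact sample of segment boundaries from the posterior
        of a changepoint model over n observations.

        Backward pass: Q[t] = P(y[t..n-1] | a segment starts at t).
        Forward pass: sample each segment end from its exact conditional,
        so no MCMC convergence diagnosis is needed."""
        Q = np.zeros(n + 1)
        Q[n] = 1.0
        for t in range(n - 1, -1, -1):
            total = 0.0
            for s in range(t, n - 1):  # changepoint right after position s
                total += (seg_marglik(t, s)
                          * p_cp * (1 - p_cp) ** (s - t) * Q[s + 1])
            # Final segment running to the end of the data, no changepoint.
            total += seg_marglik(t, n - 1) * (1 - p_cp) ** (n - 1 - t)
            Q[t] = total

        cps, t = [], 0
        while t < n:
            probs = np.array(
                [seg_marglik(t, s) * p_cp * (1 - p_cp) ** (s - t) * Q[s + 1]
                 for s in range(t, n - 1)]
                + [seg_marglik(t, n - 1) * (1 - p_cp) ** (n - 1 - t)]
            )
            s = t + rng.choice(len(probs), p=probs / probs.sum())
            cps.append(int(s))
            t = s + 1
        return cps  # segment end indices, an exact posterior draw
    ```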

    On a fast bilateral filtering formulation using functional rearrangements

    We introduce an exact reformulation of a broad class of neighborhood filters, among them the bilateral filter, in terms of two functional rearrangements: the decreasing and the relative rearrangements. Independently of the image's spatial dimension (one-dimensional signal, image, volume of images, etc.), we reformulate these filters as integral operators defined on a one-dimensional space corresponding to the level-set measures. We prove the equivalence between the usual pixel-based version and the rearranged version of the filter. When restricted to the discrete setting, our reformulation of bilateral filters extends previous results on so-called fast bilateral filtering. In addition, we prove that the solution of the discrete setting, understood as a piecewise-constant interpolator, converges to the solution of the continuous setting. Finally, we numerically illustrate computational aspects of the rearranged formulation, concerning both approximation quality and execution time.
    Comment: 29 pages, Journal of Mathematical Imaging and Vision, 2015. arXiv admin note: substantial text overlap with arXiv:1406.712
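    For reference, here is the usual pixel-based bilateral filter on a 1D signal, i.e. the baseline the rearranged formulation reproduces exactly while working in a single dimension of level-set measures. This is a minimal sketch of the standard filter only, not the paper's rearranged algorithm; parameter names are illustrative.

    ```python
    import numpy as np

    def bilateral_filter_1d(f, spatial_sigma, range_sigma, radius):
        """Pixel-based bilateral filter on a 1D signal: each output value
        is a weighted average of neighbors, with Gaussian weights decaying
        in both spatial distance and intensity difference."""
        f = np.asarray(f, dtype=float)
        n = len(f)
        out = np.empty(n)
        for i in range(n):
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            d = np.arange(lo, hi) - i  # spatial offsets within the window
            w = (np.exp(-d**2 / (2 * spatial_sigma**2))
                 * np.exp(-(f[lo:hi] - f[i])**2 / (2 * range_sigma**2)))
            out[i] = np.sum(w * f[lo:hi]) / np.sum(w)
        return out
    ```

    The naive form above costs O(n * radius) in 1D and grows with the window volume in higher dimensions; the point of the rearranged formulation is that the equivalent computation reduces to one-dimensional integrals over level sets regardless of the signal's spatial dimension.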