
    Monotone deep Boltzmann machines

    Deep Boltzmann machines (DBMs), one of the first "deep" learning methods ever studied, are multi-layered probabilistic models governed by a pairwise energy function that describes the likelihood of all variables/nodes in the network. In practice, DBMs are often constrained, e.g., via the restricted Boltzmann machine (RBM) architecture (which does not permit intra-layer connections), in order to allow for more efficient inference. In this work, we revisit the generic DBM approach and ask: are there other restrictions on their design that would enable efficient (approximate) inference? In particular, we develop a new class of restricted model, the monotone DBM, which allows for arbitrary self-connections within each layer but restricts the weights in a manner that guarantees the existence and global uniqueness of a mean-field fixed point. To do this, we leverage tools from the recently proposed monotone Deep Equilibrium (DEQ) model and show that a particular choice of activation results in a fixed-point iteration that gives a variational mean-field solution. While this approach is still largely conceptual, it is the first architecture that allows for efficient approximate inference in fully general weight structures for DBMs. We apply this approach to simple deep convolutional Boltzmann architectures and demonstrate that it allows for tasks such as the joint completion and classification of images within a single deep probabilistic setting, while avoiding the pitfalls of mean-field inference in traditional RBMs.
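
    A minimal sketch of the kind of inference the abstract describes: a damped mean-field fixed-point iteration for a binary pairwise Boltzmann machine whose weight matrix is parameterized so that I - W is at least m*I, the monotonicity condition borrowed from monotone DEQs. The parameterization W = (1 - m)I - AᵀA, the damping factor, and all names below are illustrative assumptions; the paper's actual construction relies on operator-splitting methods with a particular activation, not this plain iteration.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def mean_field_fixed_point(A, b, m=0.05, alpha=0.25, tol=1e-8, max_iter=2000):
        # Symmetric weights satisfying I - W >= m*I (one illustrative
        # parameterization; the monotone DEQ literature uses richer ones).
        n = b.shape[0]
        W = (1.0 - m) * np.eye(n) - A.T @ A
        q = np.full(n, 0.5)  # initial mean-field marginals q_i = E[x_i]
        for _ in range(max_iter):
            # Damped mean-field update: q_i <- sigma(sum_j W_ij q_j + b_i).
            q_new = (1.0 - alpha) * q + alpha * sigmoid(W @ q + b)
            if np.max(np.abs(q_new - q)) < tol:
                break
            q = q_new
        return q

    rng = np.random.default_rng(0)
    n = 8
    marginals = mean_field_fixed_point(0.3 * rng.standard_normal((n, n)),
                                       rng.standard_normal(n))

    Under the monotone parameterization the fixed point is unique, so the returned marginals do not depend on the initialization; the damping here is only a simple stand-in for the splitting schemes that guarantee convergence in the monotone DEQ setting.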

    Analysis of the accuracy and convergence of equation-free projection to a slow manifold

    In [C.W. Gear, T.J. Kaper, I.G. Kevrekidis, and A. Zagaris, Projecting to a Slow Manifold: Singularly Perturbed Systems and Legacy Codes, SIAM J. Appl. Dyn. Syst. 4 (2005) 711-732], we developed a class of iterative algorithms within the context of equation-free methods to approximate low-dimensional, attracting, slow manifolds in systems of differential equations with multiple time scales. For user-specified values of a finite number of the observables, the m-th member of the class of algorithms (m = 0, 1, ...) iteratively finds an approximation of the appropriate zero of the (m+1)-st time derivative of the remaining variables and uses this root to approximate the location of the point on the slow manifold corresponding to these values of the observables. This article is the first of two in which the accuracy and convergence of the iterative algorithms are analyzed. Here, we work directly with explicit fast-slow systems, in which there is an explicit small parameter, epsilon, measuring the separation of time scales. We show that, for each m = 0, 1, ..., the fixed point of the iterative algorithm approximates the slow manifold up to and including terms of O(epsilon^m). Moreover, for each m, we identify explicitly the conditions under which the m-th iterative algorithm converges to this fixed point. Finally, we show that when the iteration is unstable (or converges slowly), it may be stabilized (or its convergence accelerated) by application of the Recursive Projection Method; alternatively, the Newton-Krylov Generalized Minimal Residual (GMRES) method may be used. In the subsequent article, we will consider the accuracy and convergence of the iterative algorithms for a broader class of systems, in which there need not be an explicit small parameter, to which the algorithms also apply.
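
    A sketch of the m = 0 member of the algorithm class, under stated assumptions: with the observables pinned at x_star, it root-finds the value of the remaining variables y at which the time derivative dy/dt vanishes, estimating that derivative equation-free from one short burst of a black-box simulator. The step callback, the forward-difference estimate, and the use of scipy's fsolve are illustrative choices, not the paper's exact construction.

    import numpy as np
    from scipy.optimize import fsolve

    def project_to_slow_manifold(step, x_star, y0, dt=1e-4):
        # m = 0 projection: find y with dy/dt = 0 at fixed observables x_star.
        # `step(x, y, dt)` is the legacy simulator, treated as a black box.
        def fast_residual(y):
            _, y1 = step(x_star, y, dt)       # one short burst of the full model
            return (y1 - y) / dt              # forward-difference estimate of dy/dt
        return fsolve(fast_residual, y0)

    # Toy fast-slow system: x' = y - x, eps * y' = x - y; slow manifold y = x.
    eps = 1e-3
    def euler_step(x, y, dt):
        return x + dt * (y - x), y + dt * (x - y) / eps

    y_on_manifold = project_to_slow_manifold(euler_step,
                                             x_star=np.array([0.7]),
                                             y0=np.array([0.0]))
    # y_on_manifold is approximately [0.7], the slow-manifold value y = x.

    Higher-order members of the class (m = 1, 2, ...) would instead zero an estimate of the (m+1)-st time derivative, tightening the approximation to O(epsilon^m) as the abstract states.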