368 research outputs found

    High-Order Stochastic Gradient Thermostats for Bayesian Learning of Deep Models

    Learning in deep models using Bayesian methods has generated significant attention recently, largely because modern Bayesian methods can yield scalable learning and inference while maintaining a measure of uncertainty in the model parameters. Stochastic gradient MCMC algorithms (SG-MCMC) are a family of diffusion-based sampling methods for large-scale Bayesian learning. In SG-MCMC, multivariate stochastic gradient thermostats (mSGNHT) augment each parameter of interest with a momentum and a thermostat variable so that the stationary distribution of the diffusion matches the target posterior. As the number of variables in a continuous-time diffusion increases, its numerical approximation error becomes a practical bottleneck, so a more accurate numerical integrator is desirable. To this end, we propose using an efficient symmetric splitting integrator in mSGNHT in place of the traditional Euler integrator. We demonstrate that the proposed scheme is more accurate, more robust, and faster to converge; these properties are desirable in Bayesian deep learning. Extensive experiments on two canonical models and their deep extensions demonstrate that the proposed scheme improves general Bayesian posterior sampling, particularly for deep models. Comment: AAAI 201
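    To make the integrator idea concrete, here is a minimal sketch of a single mSGNHT update using a symmetric (ABOBA-style) splitting: half steps for the parameter/thermostat and friction parts wrapped around a full stochastic-gradient step. The decomposition shown, the function names, and the constants are illustrative assumptions, not the paper's verbatim algorithm.

```python
# Hedged sketch: one symmetric-splitting update of multivariate SGNHT,
# replacing the single Euler sweep. Names and step details are illustrative.
import numpy as np

def msgnht_symmetric_split_step(theta, p, xi, grad_neg_log_post, h, D=1.0, rng=np.random):
    """One symmetric-splitting update of (theta, p, xi).

    theta : parameters; p : per-parameter momenta; xi : per-parameter thermostats.
    grad_neg_log_post(theta) returns a stochastic (minibatch) estimate of grad U(theta).
    """
    # A-step (half): move parameters and thermostats using the current momenta.
    theta = theta + 0.5 * h * p
    xi = xi + 0.5 * h * (p * p - 1.0)

    # B-step (half): thermostat friction, solved exactly as an exponential decay.
    p = np.exp(-0.5 * h * xi) * p

    # O-step (full): stochastic-gradient force plus injected Gaussian noise.
    p = p - h * grad_neg_log_post(theta) + np.sqrt(2.0 * D * h) * rng.standard_normal(p.shape)

    # Mirror the B- and A-steps so the composition is symmetric in time.
    p = np.exp(-0.5 * h * xi) * p
    xi = xi + 0.5 * h * (p * p - 1.0)
    theta = theta + 0.5 * h * p
    return theta, p, xi
```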

    Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks

    Effective training of deep neural networks suffers from two main issues. The first is that the parameter spaces of these models exhibit pathological curvature. Recent methods address this problem by using adaptive preconditioning for Stochastic Gradient Descent (SGD); these methods improve convergence by adapting to the local geometry of parameter space. A second issue is overfitting, which is typically addressed by early stopping; however, recent work has demonstrated that Bayesian model averaging mitigates this problem. The posterior can be sampled using Stochastic Gradient Langevin Dynamics (SGLD), but the rapidly changing curvature renders default SGLD methods inefficient. Here, we propose combining adaptive preconditioners with SGLD. In support of this idea, we provide theoretical results on asymptotic convergence and predictive risk. We also provide empirical results for logistic regression, feedforward neural networks, and convolutional neural networks, demonstrating that our preconditioned SGLD method gives state-of-the-art performance on these models. Comment: AAAI 201
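    As a rough illustration of the idea, the sketch below applies an RMSProp-style diagonal preconditioner to both the gradient term and the injected noise of an SGLD update. It is a simplified sketch: the correction term arising from the parameter dependence of the preconditioner is omitted, and all names and default values are assumptions.

```python
# Hedged sketch of an RMSProp-preconditioned SGLD update: an adaptive diagonal
# preconditioner rescales both the gradient step and the injected noise.
import numpy as np

def psgld_step(theta, v, grad_log_post, eps, alpha=0.99, lam=1e-5, rng=np.random):
    """One preconditioned SGLD update.

    theta : parameters; v : running average of squared gradients.
    grad_log_post(theta) returns a minibatch estimate of grad log p(theta | data).
    """
    g = grad_log_post(theta)

    # RMSProp-style running estimate of per-parameter gradient magnitude.
    v = alpha * v + (1.0 - alpha) * g * g

    # Diagonal preconditioner: large steps in flat directions, small in steep ones.
    G = 1.0 / (lam + np.sqrt(v))

    # Preconditioned gradient step plus preconditioned Gaussian noise.
    theta = theta + 0.5 * eps * G * g + np.sqrt(eps * G) * rng.standard_normal(theta.shape)
    return theta, v
```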

    Limited Angle Acousto-Electrical Tomography

    This paper considers the reconstruction problem in Acousto-Electrical Tomography, i.e., the problem of estimating a spatially varying conductivity in a bounded domain from measurements of the internal power densities resulting from different prescribed boundary conditions. Particular emphasis is placed on the limited angle scenario, in which the boundary conditions are supported only on a part of the boundary. The reconstruction problem is formulated as an optimization problem in a Hilbert space setting and solved using Landweber iteration. The resulting algorithm is implemented numerically in two spatial dimensions and tested on simulated data. The results quantify the intuition that features close to the measurement boundary are stably reconstructed, while features further away are reconstructed less well. Finally, the ill-posedness of the limited angle problem is quantified numerically using the singular value decomposition of the corresponding linearized problem. Comment: 23 pages
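    For readers unfamiliar with Landweber iteration, the following generic sketch shows the update sigma_{k+1} = sigma_k - tau * F'(sigma_k)^* (F(sigma_k) - y). The paper's actual forward operator maps conductivities to interior power densities and is not reproduced here, so the callables and the toy linear example are purely illustrative.

```python
# Hedged sketch of plain Landweber iteration for a nonlinear inverse problem
# F(sigma) = y, formulated with generic forward and adjoint-Jacobian callables.
def landweber(forward, adjoint_jacobian, y_meas, sigma0, tau=0.5, n_iter=200):
    """Iterate sigma <- sigma - tau * F'(sigma)^* (F(sigma) - y_meas).

    forward(sigma)             -> simulated data F(sigma)
    adjoint_jacobian(sigma, r) -> F'(sigma)^* applied to a residual r
    """
    sigma = sigma0.copy()
    for _ in range(n_iter):
        residual = forward(sigma) - y_meas
        sigma = sigma - tau * adjoint_jacobian(sigma, residual)
    return sigma

# Toy usage with a linear forward map F(sigma) = A @ sigma, whose adjoint is A.T @ r.
import numpy as np
A = np.array([[2.0, 0.0], [1.0, 1.0]])
y = np.array([2.0, 3.0])
sigma_rec = landweber(lambda s: A @ s,
                      lambda s, r: A.T @ r,
                      y, sigma0=np.zeros(2), tau=0.2, n_iter=500)  # converges to [1.0, 2.0]
```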

    On a variational problem of nematic liquid crystal droplets

    Let $\mu>0$ be a fixed constant. We prove that minimizers of the energy functional \begin{align*} E(u,\Omega):=\int_{\Omega}|\nabla u|^2+\mu P(\Omega) \end{align*} exist among pairs $(\Omega,u)$ such that $\Omega$ is an $M$-uniform domain with finite perimeter and fixed volume, and $u \in H^1(\Omega,\mathbb{S}^2)$ satisfies $u=\nu_{\Omega}$, the measure-theoretic outer unit normal, almost everywhere on the reduced boundary of $\Omega$. The uniqueness of optimal configurations in various settings is also obtained. In addition, we consider a general energy functional given by \begin{align*} E_f(u,\Omega):=\int_{\Omega} |\nabla u(x)|^2 \,dx + \int_{\partial^* \Omega} f\big(u(x)\cdot \nu_{\Omega}(x)\big) \,d\mathcal{H}^2(x), \end{align*} where $\partial^* \Omega$ is the reduced boundary of $\Omega$ and $f$ is a convex positive function on $\mathbb{R}$. We prove that minimizers of $E_f$ also exist among $M$-uniform, outer-minimizing domains $\Omega$ with fixed volume and $u \in H^1(\Omega,\mathbb{S}^2)$. Comment: 19 pages

    Harmonic maps on domains with piecewise Lipschitz continuous metrics

    For a bounded domain $\Omega$ equipped with a piecewise Lipschitz continuous Riemannian metric $g$, we consider harmonic maps from $(\Omega, g)$ into a compact Riemannian manifold $(N,h)\subset\mathbb{R}^k$ without boundary. We generalize the notion of stationary harmonic map and prove partial regularity. We also discuss the global Lipschitz and piecewise $C^{1,\alpha}$-regularity of harmonic maps from $(\Omega, g)$ into manifolds that support convex distance functions. Comment: 24 pages