
    Plug-and-Play Methods Provably Converge with Properly Trained Denoisers

    Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. An advantage of PnP is that one can use pre-trained denoisers when there is insufficient data for end-to-end training. Although PnP has recently been studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this paper, we theoretically establish convergence of PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. We then propose real spectral normalization, a technique for training deep learning-based denoisers to satisfy the proposed Lipschitz condition. Finally, we present experimental results validating the theory. Comment: Published in the International Conference on Machine Learning, 2019.
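    As a rough illustration of the kind of scheme described above, the sketch below runs a generic PnP-ADMM loop in which a pretrained denoiser stands in for the proximal operator of the prior. The function names (prox_data, denoiser), the update order, and the parameter defaults are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pnp_admm(prox_data, denoiser, x0, sigma, num_iters=50):
    """Generic plug-and-play ADMM loop (illustrative sketch, not the paper's exact scheme).

    prox_data : callable, proximal operator of the data-fidelity term
    denoiser  : callable taking (image, noise_level), e.g. a pretrained CNN or BM3D
    x0        : initial estimate (NumPy array)
    sigma     : noise level handed to the denoiser
    """
    x = x0.copy()
    y = x0.copy()
    u = np.zeros_like(x0)            # scaled dual variable
    for _ in range(num_iters):
        x = prox_data(y - u)         # data-fidelity (forward-model) step
        y = denoiser(x + u, sigma)   # prior step: the denoiser plays the prox role
        u = u + x - y                # dual (consensus) update
    return x
```

    In the paper's setting, the denoiser is additionally trained with real spectral normalization so that it satisfies the Lipschitz condition under which convergence is established.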

    System Identification Through Lipschitz Regularized Deep Neural Networks

    In this paper we use neural networks to learn governing equations from data. Specifically, we reconstruct the right-hand side of a system of ODEs $\dot{x}(t) = f(t, x(t))$ directly from observed, uniformly time-sampled data using a neural network. In contrast with other neural network-based approaches to this problem, we add a Lipschitz regularization term to our loss function. In synthetic examples we observed empirically that this regularization yields a smoother approximating function and better generalization than non-regularized models, on both trajectory and non-trajectory data, especially in the presence of noise. In contrast with sparse regression approaches, we do not need any prior knowledge of the ODE system, since neural networks are universal approximators. Since the model is applied component-wise, it can handle systems of any dimension, making it usable for real-world data.
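    The loss described above combines a data-fitting term on estimated derivatives with a Lipschitz penalty on the network. The PyTorch sketch below is a minimal illustration under stated assumptions: the gradient-norm surrogate for the Lipschitz constant, the network architecture, and the weight lam are not taken from the paper, whose abstract does not spell out these details.

```python
import torch

# Hypothetical network f_theta(t, x) approximating the right-hand side of x'(t) = f(t, x(t)).
net = torch.nn.Sequential(
    torch.nn.Linear(1 + 2, 64),   # input: time t and a 2-dimensional state x
    torch.nn.Tanh(),
    torch.nn.Linear(64, 2),
)

def loss_fn(t, x, x_dot, lam=1e-2):
    """Data-fitting term plus a Lipschitz-type penalty (illustrative surrogate).

    t     : sampled times, shape (N, 1)
    x     : observed states, shape (N, 2)
    x_dot : derivative estimates from the uniformly time-sampled trajectory, shape (N, 2)
    lam   : regularization weight (assumed value)
    """
    inp = torch.cat([t, x], dim=1).requires_grad_(True)
    pred = net(inp)
    fit = torch.mean((pred - x_dot) ** 2)

    # Gradient-norm surrogate for the Lipschitz constant of the network on the batch.
    grads = torch.autograd.grad(pred.sum(), inp, create_graph=True)[0]
    lip = grads.norm(dim=1).max()
    return fit + lam * lip
```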

    The Extended Regularized Dual Averaging Method for Composite Optimization

    We present a new algorithm, extended regularized dual averaging (XRDA), for solving composite optimization problems; it is a generalization of the regularized dual averaging (RDA) method. The main novelty of the method is that it allows more flexible control of the backward step size. For instance, the backward step size in RDA grows without bound, while in XRDA it can be kept bounded.
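    To make the notion of a "backward step size" concrete, it helps to recall the baseline RDA update in the form popularized by Xiao (2010); the identification below is a standard reading of that update and is offered only as context, not as a description of the XRDA iteration itself.

```latex
x_{t+1} \;=\; \operatorname*{arg\,min}_{x}\Big\{ \langle \bar g_t, x\rangle + \Psi(x) + \frac{\beta_t}{t}\cdot\tfrac12\|x\|_2^2 \Big\}
        \;=\; \operatorname{prox}_{(t/\beta_t)\,\Psi}\!\Big( -\tfrac{t}{\beta_t}\,\bar g_t \Big)
```

    Here $\bar g_t$ is the averaged (sub)gradient and $\Psi$ the regularizer, so the update is a backward (proximal) step on $\Psi$ with step size $t/\beta_t$. With the standard choice $\beta_t = \gamma\sqrt{t}$ this step size equals $\sqrt{t}/\gamma$ and grows without bound, which is precisely the behavior that XRDA relaxes by allowing the backward step size to remain bounded.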

    CLIP: Cheap Lipschitz Training of Neural Networks

    Despite the great success of deep neural networks (DNNs) in recent years, most neural networks still lack mathematical guarantees in terms of stability. For instance, DNNs are vulnerable to small or even imperceptible input perturbations, so-called adversarial examples, that can cause false predictions. This instability can have severe consequences in applications that affect human health and safety, e.g., biomedical imaging or autonomous driving. While bounding the Lipschitz constant of a neural network improves stability, most methods rely on restricting the Lipschitz constants of the individual layers, which gives a poor bound on the actual Lipschitz constant of the network. In this paper we investigate a variational regularization method named CLIP for controlling the Lipschitz constant of a neural network, which can easily be integrated into the training procedure. We mathematically analyze the proposed model, in particular discussing the impact of the chosen regularization parameter on the output of the network. Finally, we numerically evaluate our method on both a nonlinear regression problem and the MNIST and Fashion-MNIST classification databases, and compare our results with a weight regularization approach. Comment: 12 pages, 2 figures, accepted at SSVM 2021.
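    A minimal sketch of how a Lipschitz regularization term of this kind can be added to an ordinary training loss is given below. The finite-difference estimate of the Lipschitz constant on randomly perturbed mini-batch pairs, the weight lam, and the perturbation size eps are assumptions for illustration; the paper's variational formulation chooses the evaluation points differently and analyzes the regularization parameter in detail.

```python
import torch
import torch.nn.functional as F

def lipschitz_regularized_loss(model, x, y, lam=0.1, eps=0.1):
    """Cross-entropy plus a Lipschitz penalty estimated on perturbed input pairs.

    The difference-quotient estimate below only lower-bounds the true Lipschitz
    constant and serves as an illustrative surrogate for the CLIP regularizer.
    """
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Perturbed copies of the batch; the difference quotient on each pair
    # lower-bounds the Lipschitz constant of the network.
    x_pert = x + eps * torch.randn_like(x)
    num = (model(x_pert) - logits).flatten(1).norm(dim=1)
    den = (x_pert - x).flatten(1).norm(dim=1) + 1e-12
    lip_est = (num / den).max()

    return ce + lam * lip_est
```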