
    Unified Convergence Results on a Minimax Algorithm for Finding Multiple Critical Points in Banach Spaces


    Minimax methods for finding multiple saddle critical points in Banach spaces and their applications

    This dissertation studied computational theory and methods for finding multiple saddle critical points in Banach spaces. Two local minimax methods were developed for this purpose: one for unconstrained cases and the other for constrained cases. First, two local minimax characterizations of saddle critical points in Banach spaces were established. Based on these two characterizations, two local minimax algorithms were designed and their flow charts were presented. Convergence analysis of the algorithms was then carried out. Under certain assumptions, a subsequence convergence and a point-to-set convergence were obtained. Furthermore, a relation between the convergence rates of the functional value sequence and the corresponding gradient sequence was derived. Techniques to implement the algorithms were discussed. In numerical experiments, those techniques were successfully implemented to solve for multiple solutions of several quasilinear elliptic boundary value problems and multiple eigenpairs of the well-known nonlinear p-Laplacian operator. Numerical solutions were presented by their profiles for visualization. Several interesting phenomena of the solutions of quasilinear elliptic boundary value problems and the eigenpairs of the p-Laplacian operator were observed and remain open for further investigation. As a generalization of the above results, nonsmooth critical points were considered for locally Lipschitz continuous functionals. A local minimax characterization of nonsmooth saddle critical points was also established. To establish its version in Banach spaces, a new notion, the pseudo-generalized gradient, had to be introduced. Based on this characterization, a local minimax algorithm for finding multiple nonsmooth saddle critical points was proposed for further study.
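    The two-level structure described above (an inner maximization, or "peak selection", followed by an outer descent) can be illustrated on a toy finite-dimensional functional. The sketch below is a minimal Python illustration of that min-max structure on J(x, y) = x^2 - y^2, whose saddle point at the origin satisfies min over x of max over y of J; it is not the dissertation's Banach-space algorithm, and the step sizes and iteration counts are arbitrary assumptions.

```python
import numpy as np

# Toy functional with a saddle point at the origin: J(x, y) = x**2 - y**2.
# The saddle satisfies a local min-max characterization: min over x of max over y of J.
def J(x, y):
    return x**2 - y**2

def grad_J(x, y):
    return np.array([2.0 * x, -2.0 * y])

def peak_selection(x, y0=0.5, steps=200, lr=0.1):
    """Inner maximization: ascend J(x, .) in y for fixed x (the 'peak selection')."""
    y = y0
    for _ in range(steps):
        y += lr * grad_J(x, y)[1]
    return y

def local_minimax(x0=1.0, outer_steps=100, lr=0.1):
    """Outer minimization: descend x -> J(x, p(x)); at the peak the descent
    direction reduces to the partial gradient in x (an envelope argument)."""
    x = x0
    for _ in range(outer_steps):
        y = peak_selection(x)
        x -= lr * grad_J(x, y)[0]
    return x, peak_selection(x)

x_star, y_star = local_minimax()
print(x_star, y_star, J(x_star, y_star))  # all approximately 0: the saddle point
```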

    Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions

    Generative Adversarial Networks (GANs) are a novel class of deep generative models that have recently gained significant attention. GANs learn complex, high-dimensional distributions implicitly over images, audio, and other data. However, there exist major challenges in the training of GANs, i.e., mode collapse, non-convergence, and instability, due to inappropriate network architecture design, choice of objective function, and selection of optimization algorithm. Recently, to address these challenges, several solutions for better design and optimization of GANs have been investigated, based on re-engineered network architectures, new objective functions, and alternative optimization algorithms. To the best of our knowledge, no existing survey has particularly focused on the broad and systematic development of these solutions. In this study, we perform a comprehensive survey of the advancements in GANs design and optimization solutions proposed to handle GANs challenges. We first identify key research issues within each design and optimization technique and then propose a new taxonomy to structure solutions by key research issues. In accordance with the taxonomy, we provide a detailed discussion of the different GANs variants proposed within each solution and their relationships. Finally, based on the insights gained, we present promising research directions in this rapidly growing field. Comment: 42 pages, Figure 13, Table
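    As a concrete reference point for the training difficulties surveyed above, the sketch below is a minimal PyTorch illustration of the standard alternating generator/discriminator optimization (non-saturating GAN loss) on one-dimensional data; the architectures, learning rates, and target distribution are illustrative assumptions, not drawn from the survey.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch on 1-D data: the alternating min-max optimization from
# which the surveyed challenges (mode collapse, non-convergence, instability)
# arise. All hyperparameters below are illustrative assumptions.
real_sampler = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # assumed "real" data: N(2, 0.25)
noise = lambda n: torch.randn(n, 8)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real, fake = real_sampler(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step (non-saturating loss): push D(G(z)) toward 1.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(noise(1000)).mean().item())  # should drift toward 2.0 if training is stable
```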

    Atomic norm denoising with applications to line spectral estimation

    Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectral estimation that provides theoretical guarantees for the mean-squared-error (MSE) performance in the presence of noise and without knowledge of the model order. We propose an abstract theory of denoising with atomic norms and specialize this theory to provide a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials. We show that the associated convex optimization problem can be solved in polynomial time via semidefinite programming (SDP). We also show that the SDP can be approximated by an l1-regularized least-squares problem that achieves nearly the same error rate as the SDP but can scale to much larger problems. We compare both the SDP and l1-based approaches with classical line spectral analysis methods and demonstrate that the SDP outperforms the l1 optimization, which in turn outperforms the MUSIC, Cadzow's, and Matrix Pencil approaches in terms of MSE over a wide range of signal-to-noise ratios. Comment: 27 pages, 10 figures. A preliminary version of this work appeared in the Proceedings of the 49th Annual Allerton Conference in September 2011. Numerous numerical experiments added to this version in accordance with suggestions by anonymous reviewer
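    The l1-based approximation mentioned above can be sketched as follows: discretize the frequency axis on a fine grid, build a dictionary of sampled complex exponentials, and solve the resulting l1-regularized least-squares problem, here by a plain proximal gradient (ISTA) loop with complex soft-thresholding. The grid size, regularization weight, and signal parameters below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Gridded l1 approximation to atomic-norm line spectral estimation:
# noisy samples of a mixture of complex exponentials are regressed on a fine
# frequency dictionary with an l1 penalty, solved by proximal gradient (ISTA).
rng = np.random.default_rng(0)
n = 64
t = np.arange(n)
true_freqs, true_amps = np.array([0.10, 0.13, 0.40]), np.array([1.0, 0.8, 0.5])
x = sum(a * np.exp(2j * np.pi * f * t) for f, a in zip(true_freqs, true_amps))
y = x + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

grid = np.linspace(0.0, 1.0, 512, endpoint=False)     # fine frequency grid
A = np.exp(2j * np.pi * np.outer(t, grid))            # dictionary of sampled exponentials
lam = 2.0                                             # illustrative regularization weight
L = np.linalg.norm(A, 2) ** 2                         # Lipschitz constant of the gradient
c = np.zeros(grid.size, dtype=complex)

for _ in range(500):
    z = c - A.conj().T @ (A @ c - y) / L              # gradient step on 0.5*||Ac - y||^2
    c = np.maximum(np.abs(z) - lam / L, 0.0) * np.exp(1j * np.angle(z))  # complex soft-threshold

top = np.sort(grid[np.argsort(np.abs(c))[-6:]])
print(top)  # the largest coefficients should cluster near 0.10, 0.13, and 0.40
```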

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of l2-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solve the corresponding large-scale regularized optimization problem.
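    To make the forward-backward proximal splitting scheme mentioned in (iii) concrete, the sketch below applies it to a small group-sparsity regularized least-squares problem: an explicit gradient (forward) step on the data-fidelity term followed by an implicit proximal (backward) step on the regularizer, here block soft-thresholding. The problem sizes, group structure, and regularization weight are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Forward-backward proximal splitting for min_x 0.5*||Ax - y||^2 + lam*R(x),
# with R the group-sparsity prior (sum of group l2 norms). All parameters
# below are illustrative assumptions.
rng = np.random.default_rng(1)
m, n, g = 40, 60, 10                      # measurements, unknowns, group size
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:g] = rng.standard_normal(g)       # only the first group is active
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth part

def prox_group_l1(x, t):
    """Proximal map of t * (sum of group l2 norms): block soft-thresholding."""
    out = x.copy()
    for start in range(0, n, g):
        blk = x[start:start + g]
        nrm = np.linalg.norm(blk)
        out[start:start + g] = 0.0 if nrm <= t else (1 - t / nrm) * blk
    return out

x = np.zeros(n)
for _ in range(1000):
    grad = A.T @ (A @ x - y)                      # forward (explicit gradient) step
    x = prox_group_l1(x - grad / L, lam / L)      # backward (implicit proximal) step

active = [k // g for k in range(0, n, g) if np.linalg.norm(x[k:k + g]) > 0]
print(active, np.linalg.norm(x - x_true))  # the active group should be identified
```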