
    Efficient Reconstruction of Piecewise Constant Images Using Nonsmooth Nonconvex Minimization

    We consider the restoration of piecewise constant images, where the number of regions and their values are not fixed in advance and neighboring regions take clearly distinct values, from noisy data obtained at the output of a linear operator (e.g., a blurring kernel or a Radon transform). Thus we also address the generic problem of unsupervised segmentation in the context of linear inverse problems. The segmentation and restoration tasks are solved jointly by minimizing an objective function (an energy) composed of a quadratic data-fidelity term and a nonsmooth nonconvex regularization term. The pertinence of such an energy is ensured by the analytical properties of its minimizers. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonsmooth nonconvex minimization. Indeed, the existing methods are unsatisfactory since they (implicitly or explicitly) involve a smooth approximation of the regularization term and often get stuck in shallow local minima. The goal of this paper is to design a method that efficiently handles the nonsmooth nonconvex minimization. More precisely, we propose a continuation method in which one tracks the minimizers along a sequence of approximate nonsmooth energies {Jε}, the first of which is strictly convex and the last of which is the original energy to minimize. Knowing the importance of the nonsmoothness of the regularization term for the segmentation task, each Jε is nonsmooth and is expressed as the sum of an l1 regularization term and a smooth nonconvex function. Furthermore, the local minimization of each Jε is reformulated as the minimization of a smooth function subject to a set of linear constraints. The latter problem is solved by a modified primal-dual interior point method, which guarantees a descent direction at each step. Experimental results are presented and show the effectiveness and the efficiency of the proposed method. Comparison with simulated annealing methods further shows the advantage of our method.
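The continuation scheme can be sketched in a stripped-down setting. The code below is not the paper's algorithm (the paper reformulates each stage as a constrained smooth problem solved by a modified primal-dual interior point method); it is a minimal, hypothetical illustration of the same idea: a truncated-l1 penalty min(|t|, c) equals plain l1 for a large cap c (a convex-like first stage) and is graduated toward its nonconvex target as c shrinks, each stage warm-started from the last and solved by proximal-gradient steps that handle the l1 kink exactly. For simplicity the penalty here acts on the entries themselves rather than on differences through a linear operator, and all parameter values are arbitrary.

```python
import math

def soft(t, s):
    """Soft-thresholding: the exact prox of s*|.|."""
    return math.copysign(max(abs(t) - s, 0.0), t)

def minimize_stage(y, x, lam, c, step=0.5, iters=200):
    """Prox-gradient on J_c(x) = 0.5||x-y||^2 + lam*sum(min(|x_i|, c)).
    Split min(|t|,c) = |t| - max(|t|-c, 0): the second (nonconvex) part is
    handled through its a.e. gradient, the l1 part through its exact prox,
    so the kink at zero is never smoothed away."""
    for _ in range(iters):
        x = [soft(xi - step * ((xi - yi)
                               - (lam * math.copysign(1.0, xi) if abs(xi) > c else 0.0)),
                  step * lam)
             for xi, yi in zip(x, y)]
    return x

def continuation(y, lam, caps):
    x = list(y)                    # warm start from the data
    for c in caps:                 # J_c: l1-like (c large) -> truncated-l1 target (c small)
        x = minimize_stage(y, x, lam, c)
    return x

y = [5.0, 0.3, -4.0, 0.1]
x = continuation(y, lam=1.0, caps=[10.0, 5.0, 2.0, 1.0, 0.5])
# large entries are recovered without shrinkage bias (the penalty saturates),
# small entries are set exactly to zero
```

Note how the final stage leaves large entries unbiased, which is precisely what the saturating nonconvex penalty buys over plain l1.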

    Group-Sparse Signal Denoising: Non-Convex Regularization, Convex Optimization

    Convex optimization with sparsity-promoting convex regularization is a standard approach for estimating sparse signals in noise. In order to promote sparsity more strongly than convex regularization allows, it is also standard practice to employ non-convex regularization, at the price of non-convex optimization. In this paper, we take a third approach. We utilize a non-convex regularization term chosen such that the total cost function (consisting of data consistency and regularization terms) is convex. Therefore, sparsity is promoted more strongly than in the standard convex formulation, but without sacrificing the attractive aspects of convex optimization (unique minimum, robust algorithms, etc.). We use this idea to improve the recently developed 'overlapping group shrinkage' (OGS) algorithm for the denoising of group-sparse signals. The algorithm is applied to the problem of speech enhancement, with favorable results in terms of both SNR and perceptual quality.
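The design principle, a non-convex penalty tuned so that the total cost stays convex, is easiest to see in the scalar case. The sketch below is an illustration of that principle only, not the OGS algorithm: with data term 0.5*(y-x)^2 and a minimax-concave-type penalty phi(x; a) = |x| - a*x^2/2 for |x| <= 1/a (constant 1/(2a) beyond), the total cost is convex whenever lam*a <= 1, and the minimizer is the firm threshold, which returns exactly zero for small |y| and exactly y (no shrinkage bias) for large |y|.

```python
def mcp(x, a):
    """Minimax-concave-type penalty: |x| - a*x^2/2, capped at 1/(2a)."""
    ax = abs(x)
    return ax - a * ax * ax / 2 if ax <= 1.0 / a else 1.0 / (2 * a)

def cost(x, y, lam, a):
    """Scalar total cost: quadratic data term plus non-convex penalty."""
    return 0.5 * (y - x) ** 2 + lam * mcp(x, a)

def firm_threshold(y, lam, a):
    """Exact minimizer of cost(., y, lam, a); requires lam*a <= 1,
    the condition under which the total cost is convex."""
    assert lam * a <= 1.0
    ay = abs(y)
    if ay <= lam:
        return 0.0                                   # dead zone: exact zero
    if ay <= 1.0 / a:
        return (ay - lam) / (1.0 - lam * a) * (1 if y > 0 else -1)
    return y                                         # saturated: unbiased

print(firm_threshold(0.5, 1.0, 0.5))   # 0.0
print(firm_threshold(3.0, 1.0, 0.5))   # 3.0  (soft thresholding would give 2.0)
print(firm_threshold(1.5, 1.0, 0.5))   # 1.0
```

The middle branch interpolates linearly between the two regimes, which is what keeps the overall map continuous and the cost convex at the boundary |x| = 1/a.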

    On the Link between Gaussian Homotopy Continuation and Convex Envelopes

    The continuation method is a popular heuristic in computer vision for nonconvex optimization. The idea is to start from a simplified problem and gradually deform it to the actual task while tracking the solution. It was first used in computer vision under the name of graduated nonconvexity. Since then, it has been utilized explicitly or implicitly in various applications. In fact, state-of-the-art optical flow and shape estimation rely on a form of continuation. Despite its empirical success, there is little theoretical understanding of this method. This work provides some novel insights into this technique. Specifically, there are many ways to choose the initial problem and many ways to progressively deform it to the original task. However, here we show that when this process is constructed by Gaussian smoothing, it is optimal in a specific sense. In fact, we prove that Gaussian smoothing emerges from the best affine approximation to Vese's nonlinear PDE. The latter PDE evolves any function to its convex envelope, hence providing the optimal convexification.
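The convexifying effect of Gaussian smoothing can be reproduced on a toy objective. Everything below is illustrative only: the tilted double well, the quadrature grid, and the sigma schedule are arbitrary choices, not anything from the paper. Convolving f(x) = (x^2-1)^2 + 0.3x with a wide enough Gaussian gives a convex function (here the x^2 coefficient of the smoothed polynomial becomes 6*sigma^2 - 2, nonnegative once sigma^2 >= 1/3); shrinking sigma while tracking the minimizer ends in the global well, whereas plain gradient descent from the same start stays in the local one.

```python
import math

def f(x):
    """Tilted double well: local min near +0.96, global min near -1.03."""
    return (x * x - 1.0) ** 2 + 0.3 * x

# Gaussian smoothing f_sigma(x) = E[f(x + sigma*z)], z ~ N(0,1),
# approximated by a fixed deterministic grid with Gaussian weights.
_Z = [-3.0 + 0.1 * i for i in range(61)]
_W = [math.exp(-z * z / 2.0) for z in _Z]
_WSUM = sum(_W)

def f_smooth(x, sigma):
    if sigma == 0.0:
        return f(x)
    return sum(w * f(x + sigma * z) for w, z in zip(_W, _Z)) / _WSUM

def descend(x, sigma, step=0.05, iters=200, h=1e-4):
    """Plain gradient descent on the smoothed objective (central differences)."""
    for _ in range(iters):
        g = (f_smooth(x + h, sigma) - f_smooth(x - h, sigma)) / (2 * h)
        x -= step * g
    return x

def graduated(x, sigmas=(1.0, 0.7, 0.5, 0.3, 0.2, 0.1, 0.0)):
    for s in sigmas:
        x = descend(x, s)          # warm start each stage at the previous minimizer
    return x

x0 = 1.0                           # starts inside the *local* well
x_plain = descend(x0, 0.0)         # stuck near +0.96
x_grad = graduated(x0)             # tracks the smoothed minimizers into the global well near -1.03
```

The first stage (sigma = 1) is convex, so its minimizer is found from any start; each later stage only nudges that minimizer, which is the whole point of the homotopy.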

    On Graduated Optimization for Stochastic Non-Convex Problems

    The graduated optimization approach, also known as the continuation method, is a popular heuristic for solving non-convex problems that has received renewed interest over the last decade. Despite its popularity, very little is known in terms of theoretical convergence analysis. In this paper we describe a new first-order algorithm based on graduated optimization and analyze its performance. We characterize a parameterized family of non-convex functions for which this algorithm provably converges to a global optimum. In particular, we prove that the algorithm converges to an ε-approximate solution within O(1/ε²) gradient-based steps. We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate. Additionally, we discuss the setting of zero-order optimization and devise a variant of our algorithm which converges at a rate of O(d²/ε⁴).
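A minimal stochastic variant can be sketched as follows. Again this is a toy, not the paper's algorithm or its parameter choices: the objective, noise level, schedule, and sample counts are all assumptions. The only access to the objective is a noisy gradient oracle; smoothing is done by averaging the oracle at Gaussian perturbations of the query point (an unbiased estimate of the gradient of the Gaussian-smoothed objective), and the smoothing radius decreases stage by stage.

```python
import random

def grad_oracle(x, rng, noise=0.3):
    """Noisy gradient of the tilted double well f(x) = (x^2-1)^2 + 0.3x."""
    return 4.0 * x * (x * x - 1.0) + 0.3 + rng.gauss(0.0, noise)

def smoothed_grad(x, sigma, samples, rng):
    """Monte Carlo estimate of the gradient of the smoothed objective:
    E_z[f'(x + sigma*z)] = (f * N(0, sigma^2))'(x)."""
    return sum(grad_oracle(x + sigma * rng.gauss(0.0, 1.0), rng)
               for _ in range(samples)) / samples

def graduated_sgd(x, sigmas=(1.0, 0.6, 0.3, 0.1, 0.0), step=0.02,
                  iters=300, samples=50, seed=0):
    rng = random.Random(seed)
    for sigma in sigmas:           # coarse-to-fine smoothing schedule
        for _ in range(iters):
            x -= step * smoothed_grad(x, sigma, samples, rng)
    return x

x = graduated_sgd(1.0)   # started in the local well, ends near the global one
```

Averaging over perturbed queries plays the same role here as the exact Gaussian convolution does in the deterministic analysis; the sample count trades variance against oracle calls.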

    A Variational Approach to Image Restoration Problems

    Ph.D. dissertation, Department of Mathematical Sciences, Graduate School, Seoul National University, February 2013. Advisor: Myungjoo Kang.
    Image restoration has been an active research area in image processing and computer vision during the past several decades. We explore variational partial differential equation (PDE) models for the image restoration problem. We start our discussion by reviewing classical models, by which the works of this dissertation are highly motivated. The content of the dissertation is divided into two main subjects. The first topic is image denoising, where we propose a non-convex hybrid total variation model and apply an iterative reweighted algorithm to solve it. The second topic is image decomposition, in which we separate an image into a structural component and an oscillatory component using a local gradient constraint.
    Contents:
    Abstract
    1 Introduction
      1.1 Image restoration
      1.2 Brief overview of the dissertation
    2 Previous works
      2.1 Image denoising
        2.1.1 Fundamental model
        2.1.2 Higher order model
        2.1.3 Hybrid model
        2.1.4 Non-convex model
      2.2 Image decomposition
        2.2.1 Meyer's model
        2.2.2 Nonlinear filter
    3 Non-convex hybrid TV for image denoising
      3.1 Variational model with non-convex hybrid TV
        3.1.1 Non-convex TV model and non-convex HOTV model
        3.1.2 The proposed model: non-convex hybrid TV model
      3.2 Iterative reweighted hybrid total variation algorithm
      3.3 Numerical experiments
        3.3.1 Parameter values
        3.3.2 Comparison between the non-convex TV model and the non-convex HOTV model
        3.3.3 Comparison with other non-convex higher order regularizers
        3.3.4 Comparison between two non-convex hybrid TV models
        3.3.5 Comparison with Krishnan et al. [39]
        3.3.6 Comparison with state-of-the-art
    4 Image decomposition
      4.1 Local gradient constraint
        4.1.1 Texture estimator
      4.2 The proposed model
        4.2.1 Algorithm: anisotropic TV-L2
        4.2.2 Algorithm: isotropic TV-L2
        4.2.3 Algorithm: isotropic TV-L1
      4.3 Numerical experiments and discussion
    5 Conclusion and future works
    Abstract (in Korean)
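The iterative-reweighting idea can be sketched in 1-D. This is an illustration of the general mechanism, not the dissertation's hybrid model: the penalty, the parameters, and the test signal below are all arbitrary choices. A non-convex edge penalty rho(t) = (gamma^2/2)*log(1 + t^2/gamma^2) is minimized by repeatedly solving a weighted quadratic (tridiagonal) problem, with weights w = rho'(t)/t = 1/(1 + t^2/gamma^2) recomputed from the current differences: small differences get weight near 1 (strong smoothing), large jumps get tiny weight (edges preserved).

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def irls_tv(y, lam=2.0, gamma=0.5, outer=20):
    """min_x 0.5*||x-y||^2 + lam*sum_i rho(x[i+1]-x[i]) with the log penalty
    above, via half-quadratic (iteratively reweighted) minimization."""
    x = list(y)
    n = len(y)
    for _ in range(outer):
        d = [x[i + 1] - x[i] for i in range(n - 1)]
        w = [1.0 / (1.0 + (di / gamma) ** 2) for di in d]   # rho'(t)/t
        # normal equations (I + lam * D^T W D) x = y: a tridiagonal system
        diag = [1.0 + lam * ((w[i - 1] if i > 0 else 0.0) +
                             (w[i] if i < n - 1 else 0.0)) for i in range(n)]
        sub = [(-lam * w[i - 1] if i > 0 else 0.0) for i in range(n)]
        sup = [(-lam * w[i] if i < n - 1 else 0.0) for i in range(n)]
        x = thomas(sub, diag, sup, list(y))
    return x

y = [0.2, -0.1, 0.1, 0.0, 5.1, 4.9, 5.0, 5.2]
x = irls_tv(y)
# noise inside each flat region is smoothed away, the large jump survives
```

Each outer iteration is a cheap linear solve, which is what makes reweighting attractive compared with attacking the non-convex energy directly.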

    Efficient Algorithms for Mumford-Shah and Potts Problems

    In this work, we consider Mumford-Shah and Potts models and their higher order generalizations. Mumford-Shah and Potts models are among the most well-known variational approaches to edge-preserving smoothing and partitioning of images. Though their formulations are intuitive, their application is not straightforward as it corresponds to solving challenging, particularly non-convex, minimization problems. The main focus of this thesis is the development of new algorithmic approaches to Mumford-Shah and Potts models, which is to this day an active field of research. We start by considering the situation for univariate data. We find that switching to higher order models can overcome known shortcomings of the classical first order models when applied to data with steep slopes. Though the existing approaches to the first order models could be applied in principle, they are slow or become numerically unstable for higher orders. Therefore, we develop a new algorithm for univariate Mumford-Shah and Potts models of any order and show that it solves the models in a stable way in O(n^2). Furthermore, we develop algorithms for the inverse Potts model. The inverse Potts model can be seen as an approach to jointly reconstructing and partitioning images that are only available indirectly on the basis of measured data. Further, we give a convergence analysis for the proposed algorithms. In particular, we prove the convergence to a local minimum of the underlying NP-hard minimization problem. We apply the proposed algorithms to numerical data to illustrate their benefits. Next, we apply the multi-channel Potts prior to the reconstruction problem in multi-spectral computed tomography (CT). To this end, we propose a new superiorization approach, which perturbs the iterates of the conjugate gradient method towards better results with respect to the Potts prior. 
In numerical experiments, we illustrate the benefits of the proposed approach by comparing it to the existing Potts model approach from the literature as well as to existing total variation type methods. Hereafter, we consider the second order Mumford-Shah model for edge-preserving smoothing of images, which, similarly to the univariate case, improves upon the classical Mumford-Shah model for images with linear color gradients. Based on reformulations in terms of Taylor jets, i.e. specific fields of polynomials, we derive discrete second order Mumford-Shah models for which we develop an efficient algorithm using an ADMM scheme. We illustrate the potential of the proposed method by comparing it with existing methods for the second order Mumford-Shah model. Further, we illustrate its benefits in connection with edge detection. Finally, we consider the affine-linear Potts model for the image partitioning problem. As many images possess linear trends within homogeneous regions, the classical Potts model frequently leads to oversegmentation. The affine-linear Potts model accounts for this problem by allowing for linear trends within segments. We lift the corresponding minimization problem to the jet space and again develop an ADMM approach. In numerical experiments, we show that the proposed algorithm achieves lower energy values as well as faster runtimes than the comparison method, which is based on the iterative application of the graph cut algorithm (with α-expansion moves).
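For the univariate case, the classical first-order Potts model (squared data fit plus a penalty gamma per jump) can be solved exactly by the standard O(n^2) dynamic program, accumulating segment means and squared errors incrementally. The sketch below is that textbook recursion only; the thesis's contributions concern higher-order models and numerical stability, which this toy code does not address.

```python
def potts_1d(y, gamma):
    """Exact minimizer of sum_i (x_i - y_i)^2 + gamma * (#jumps of x)
    over piecewise constant x, via the O(n^2) dynamic program."""
    n = len(y)
    B = [0.0] * (n + 1)        # B[r]: optimal energy for the prefix y[0:r]
    B[0] = -gamma              # so the first segment pays no jump penalty
    left = [0] * (n + 1)       # left[r]: start (1-based) of the last segment
    for r in range(1, n + 1):
        best, arg = float("inf"), r
        mean, sse, k = 0.0, 0.0, 0
        for l in range(r, 0, -1):          # grow the last segment y[l-1:r] leftward
            k += 1
            delta = y[l - 1] - mean        # Welford update of mean and SSE
            mean += delta / k
            sse += delta * (y[l - 1] - mean)
            cand = B[l - 1] + gamma + sse
            if cand < best:
                best, arg = cand, l
        B[r], left[r] = best, arg
    x = [0.0] * n                          # backtrack and fill segment means
    r = n
    while r > 0:
        l = left[r]
        m = sum(y[l - 1:r]) / (r - l + 1)
        for i in range(l - 1, r):
            x[i] = m
        r = l - 1
    return x

x = potts_1d([1.0, 1.1, 0.9, 5.0, 5.2, 4.8], gamma=1.0)
# two segments recovered: means 1.0 and 5.0, with the jump between index 2 and 3
```

The inner loop reuses the running mean and sum of squared errors, which is exactly where the O(n^2) (rather than O(n^3)) complexity comes from.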