17 research outputs found

    Block-Simultaneous Direction Method of Multipliers: A proximal primal-dual splitting algorithm for nonconvex problems with multiple constraints

    We introduce a generalization of the linearized Alternating Direction Method of Multipliers to optimize a real-valued function $f$ of multiple arguments with potentially multiple constraints $g_\circ$ on each of them. The function $f$ may be nonconvex as long as it is convex in every argument, while the constraints $g_\circ$ need to be convex but not smooth. If $f$ is smooth, the proposed Block-Simultaneous Direction Method of Multipliers (bSDMM) can be interpreted as a proximal analog to inexact coordinate descent methods under constraints. Unlike alternative approaches for joint solvers of multiple-constraint problems, we do not require the linear operators $L$ of a constraint function $g(L\,\cdot)$ to be invertible or linked to each other. bSDMM is well suited for a range of optimization problems, in particular for data analysis, where $f$ is the likelihood function of a model and $L$ could be a transformation matrix describing e.g. finite differences or basis transforms. We apply bSDMM to the Non-negative Matrix Factorization task of a hyperspectral unmixing problem and demonstrate convergence and effectiveness of multiple constraints on both matrix factors. The algorithms are implemented in Python and released as an open-source package. Comment: 13 pages, 4 figures
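
    A minimal sketch of the update pattern described above, for a single argument $x$ with several constraints $g_j(L_j x)$: a gradient step on the smooth $f$, linearized coupling to the constraints, then a proximal step and dual update per constraint. This is not the authors' released package; the names (grad_f, prox_list, the step sizes) and the prox signature prox(v, t) are illustrative assumptions.

```python
import numpy as np

def sdmm_single_block(grad_f, prox_list, L_list, x0, step_f, step_g, n_iter=200):
    """SDMM-style sketch for min_x f(x) + sum_j g_j(L_j x):
    smooth f enters through grad_f, each convex g_j through its prox."""
    x = x0.copy()
    z = [L @ x for L in L_list]                 # auxiliary variables z_j ~ L_j x
    u = [np.zeros_like(zj) for zj in z]         # scaled dual variables
    for _ in range(n_iter):
        # linearized primal step: gradient of f plus a pull toward each constraint set
        g = grad_f(x)
        for L, zj, uj in zip(L_list, z, u):
            g = g + (1.0 / step_g) * (L.T @ (L @ x - zj + uj))
        x = x - step_f * g
        # proximal step on each constraint, then dual ascent
        for j, (L, prox) in enumerate(zip(L_list, prox_list)):
            Lx = L @ x
            z[j] = prox(Lx + u[j], step_g)      # assumed signature: prox(point, step)
            u[j] = u[j] + Lx - z[j]
    return x
```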

    A variational approach to Gibbs artifacts removal in MRI

    Gibbs ringing is a feature of MR images caused by the finite sampling of the acquisition space (k-space). It manifests itself as ringing patterns around sharp edges, which become increasingly significant for low-resolution acquisitions. In this paper, we model Gibbs artifact removal as a constrained variational problem where the data discrepancy, represented in denoising and convolutive form, is balanced against sparsity-promoting regularization functions such as Total Variation, Total Generalized Variation and the L1 norm of the wavelet transform. The efficacy of such models is evaluated by running a set of numerical experiments on both synthetic data and real acquisitions of brain images. The Total Generalized Variation penalty coupled with the convolutive data discrepancy term yields, in general, the best results on both synthetic and real data.
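
    As an illustration of the simplest of the three priors, the following sketch handles only the denoising-form model with the wavelet-L1 penalty, min_x 1/2||x - b||^2 + lam*||Wx||_1, via soft-thresholding of (approximately orthogonal) wavelet coefficients. PyWavelets (pywt) is an assumed dependency and lam, wavelet and level are illustrative choices; the convolutive data term and the TV/TGV penalties used in the paper require iterative solvers instead.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_l1_denoise(img, lam=0.05, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients of an orthogonal wavelet transform,
    i.e. the exact solution of min_x 0.5*||x - img||^2 + lam*||W x||_1 for orthogonal W."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
    new_coeffs = [coeffs[0]] + [tuple(soft(c) for c in detail) for detail in coeffs[1:]]
    rec = pywt.waverec2(new_coeffs, wavelet=wavelet)
    return rec[:img.shape[0], :img.shape[1]]    # crop possible 1-pixel padding
```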

    Image Reconstruction from Undersampled Confocal Microscopy Data using Multiresolution Based Maximum Entropy Regularization

    We consider the problem of reconstructing 2D images from randomly under-sampled confocal microscopy samples. The well-known and widely celebrated total variation regularization, which is the L1 norm of derivatives, turns out to be unsuitable for this problem; it is unable to handle noise and under-sampling together. This issue is linked with the phase transition phenomenon observed in compressive sensing research, which is essentially the breakdown of total variation methods when the sampling density falls below a certain threshold. The severity of this breakdown is determined by the so-called mutual incoherence between the derivative operators and the measurement operator. In our problem, the mutual incoherence is low, and hence total variation regularization gives serious artifacts in the presence of noise even when the sampling density is not very low. There have been very few attempts to develop regularization methods that perform better than total variation regularization for this problem. We develop a multi-resolution based regularization method that is adaptive to image structure. In our approach, the desired reconstruction is formulated as a series of coarse-to-fine multi-resolution reconstructions; for the reconstruction at each level, the regularization is constructed to be adaptive to the image structure, where the information for adaptation is obtained from the reconstruction at the coarser resolution level. This adaptation is achieved by using the maximum entropy principle, where the required adaptive regularization is determined as the maximizer of entropy subject to constraints given by the information extracted from the coarse reconstruction. We demonstrate the superiority of the proposed regularization method over existing ones using several reconstruction examples.
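
    The coarse-to-fine adaptation can be illustrated with spatially varying regularization weights derived from a coarser reconstruction: structure detected at the coarse level is penalized less at the finer level. The sketch below shows only that generic idea; the paper's actual weights come from a maximum-entropy problem constrained by the coarse reconstruction, which is not reproduced here, and the function name and parameters are illustrative.

```python
import numpy as np

def weights_from_coarse(coarse_recon, upsample=2, eps=1e-3):
    """Derive per-pixel regularization weights for the next finer level from the
    gradient magnitude of the coarse reconstruction: strong edges get small weights."""
    gy, gx = np.gradient(coarse_recon)
    edge = np.hypot(gx, gy)
    w = 1.0 / (1.0 + edge / (edge.mean() + eps))
    # nearest-neighbour upsampling of the weight map to the finer grid
    return np.kron(w, np.ones((upsample, upsample)))
```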

    Indefinite linearized augmented Lagrangian method for convex programming with linear inequality constraints

    The augmented Lagrangian method (ALM) is a benchmark for tackling convex optimization problems with linear constraints; ALM and its variants for linearly equality-constrained convex minimization models have been well studied in the literature. However, much less attention has been paid to ALM for efficiently solving the linearly inequality-constrained convex minimization model. In this paper, we exploit an enlightening reformulation of the most recent indefinite linearized (equality-constrained) ALM and present a novel indefinite linearized ALM scheme for efficiently solving the convex optimization problem with linear inequality constraints. The proposed method enjoys great advantages, especially for large-scale optimization, in two respects: first, it significantly simplifies the challenging key subproblem of the classical ALM by employing its linearized reformulation, while keeping the computational complexity low; second, we prove that a smaller proximity regularization term suffices for the convergence guarantee, which allows a larger step size and can greatly reduce the number of iterations required for convergence. Moreover, we establish a global convergence theory of the proposed scheme based on its equivalent compact prediction-correction expression, along with a worst-case $\mathcal{O}(1/N)$ convergence rate. Numerical results demonstrate that the proposed method achieves a faster convergence rate and higher numerical efficiency as the regularization term becomes smaller, which confirms the theoretical results presented in this study.
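
    A sketch of a linearized ALM loop for min f(x) subject to Ax <= b may help fix ideas: the augmented penalty is linearized at the current iterate so each step needs only the prox of f and one multiplication with A and A^T. The default proximal parameter r below is the classical conservative choice r > beta*||A||^2; the paper's contribution, a provably sufficient smaller (indefinite) choice, is not reproduced here, and prox_f with signature prox_f(v, t) is an assumption.

```python
import numpy as np

def linearized_alm_ineq(prox_f, A, b, x0, beta=1.0, r=None, n_iter=500):
    """Linearized ALM sketch for min f(x) s.t. Ax <= b, using the classical
    augmented Lagrangian for inequality constraints with scaled multipliers."""
    if r is None:
        r = 1.01 * beta * np.linalg.norm(A, 2) ** 2   # conservative proximal parameter
    x = x0.copy()
    lam = np.zeros(A.shape[0])                         # multipliers, kept nonnegative
    for _ in range(n_iter):
        # gradient of the augmented penalty at the current iterate (linearization point)
        slack = np.maximum(0.0, A @ x - b + lam / beta)
        x = prox_f(x - (beta / r) * (A.T @ slack), 1.0 / r)
        # multiplier update, projected onto the nonnegative orthant
        lam = np.maximum(0.0, lam + beta * (A @ x - b))
    return x, lam
```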

    Relaxed regularization for linear inverse problems

    We consider regularized least-squares problems of the form $\min_{x} \frac{1}{2}\Vert Ax - b\Vert_2^2 + \mathcal{R}(Lx)$. Recently, Zheng et al., 2019, proposed an algorithm called Sparse Relaxed Regularized Regression (SR3) that employs a splitting strategy by introducing an auxiliary variable $y$ and solves $\min_{x,y} \frac{1}{2}\Vert Ax - b\Vert_2^2 + \frac{\kappa}{2}\Vert Lx - y\Vert_2^2 + \mathcal{R}(y)$. By minimizing out the variable $x$ we obtain an equivalent problem $\min_{y} \frac{1}{2} \Vert F_{\kappa}y - g_{\kappa}\Vert_2^2+\mathcal{R}(y)$. In our work we view the SR3 method as a way to approximately solve the regularized problem. We analyze the conditioning of the relaxed problem in general and give an expression for the SVD of $F_{\kappa}$ as a function of $\kappa$. Furthermore, we relate the Pareto curve of the original problem to that of the relaxed problem and we quantify the error incurred by relaxation in terms of $\kappa$. Finally, we propose an efficient iterative method for solving the relaxed problem with inexact inner iterations. Numerical examples illustrate the approach. Comment: 25 pages, 14 figures, submitted to the SIAM Journal on Scientific Computing special issue for the Sixteenth Copper Mountain Conference on Iterative Methods
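
    The splitting lends itself to a simple alternating scheme: an exact linear solve in $x$ followed by a prox step in $y$, which amounts to proximal gradient on the reduced problem in $y$ with step $1/\kappa$. The sketch below assumes dense, moderately sized $A$ and $L$ (so the normal-equations matrix can be formed and factored directly) and a user-supplied prox_R(v, t); it is not the paper's inexact-inner-iteration method.

```python
import numpy as np

def sr3(A, L, b, prox_R, kappa=1.0, n_iter=200):
    """Alternating sketch for min_{x,y} 0.5*||Ax-b||^2 + kappa/2*||Lx-y||^2 + R(y):
    exact minimization in x, prox of R (with parameter 1/kappa) in y."""
    H = A.T @ A + kappa * (L.T @ L)          # normal-equations matrix, assumed invertible
    Atb = A.T @ b
    y = np.zeros(L.shape[0])
    for _ in range(n_iter):
        x = np.linalg.solve(H, Atb + kappa * (L.T @ y))
        y = prox_R(L @ x, 1.0 / kappa)       # e.g. soft-thresholding for R = lam*||.||_1
    return x, y
```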