
    A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

    Stochastic approximation techniques play an important role in solving many problems encountered in machine learning and adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori, or their direct computation is too costly, so they must be estimated online from the observed signals. For batch optimization of an objective function formed as the sum of a data-fidelity term and a penalization (e.g. a sparsity-promoting function), Majorize-Minimize (MM) methods have recently attracted much interest, since they are fast, highly flexible, and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case when the data-fidelity term corresponds to a least-squares criterion and the cost function is replaced by a sequence of stochastic approximations of it. In this context, we propose an online version of an MM subspace algorithm and study its convergence using suitable probabilistic tools. Simulation results illustrate the good practical performance of the proposed algorithm, associated with a memory-gradient subspace, when applied to both non-adaptive and adaptive filter identification problems.
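    The core idea of the abstract — online averaging of least-squares statistics combined with an MM step that exactly minimizes a quadratic majorant of the penalty — can be illustrated with a small sketch. This is a toy reweighted variant under simplified assumptions, not the paper's subspace algorithm; all dimensions, penalty parameters, and the smoothed-l1 penalty are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: identify a sparse filter x* from a stream of observations
# y_t = <h_t, x*> + noise, by minimizing a running least-squares fit plus
# a smoothed-l1 penalty lam * sum(sqrt(x_i^2 + delta)).
n, lam, delta = 20, 0.1, 1e-3          # illustrative values
x_true = np.zeros(n)
x_true[[2, 7, 13]] = [1.0, -0.5, 0.8]
x = np.zeros(n)
A = np.zeros((n, n))                   # running estimate of E[h h^T]
b = np.zeros(n)                        # running estimate of E[y h]

for t in range(1, 2001):
    h = rng.standard_normal(n)
    y = h @ x_true + 0.01 * rng.standard_normal()
    A += (np.outer(h, h) - A) / t      # online averaging of the statistics
    b += (y * h - b) / t
    # MM step: majorize the penalty by a quadratic tangent at the current
    # iterate; its curvature lam / sqrt(x_i^2 + delta) reweights the solve.
    w = lam / np.sqrt(x ** 2 + delta)
    x = np.linalg.solve(A + np.diag(w), b)

print(np.flatnonzero(np.abs(x) > 0.1))   # recovered support
```

    Each iteration solves the majorant exactly, so the update is a reweighted regularized least-squares step driven by the online statistics A and b.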

    A new sparse representation framework for compressed sensing MRI

    Compressed-sensing magnetic resonance imaging (MRI) via sparse representation (or transform) has recently attracted broad interest. The tight-frame (TF)-based sparse representation is a promising approach in compressed-sensing MRI. However, the conventional TF-based sparse representation has difficulty exploiting the sparsity of the whole image: the whole image usually contains different structures and textures, and a single kind of tight frame can only represent a particular kind of image content, so reconstructing a high-quality magnetic resonance (MR) image is a challenge. In this work, we propose a new sparse representation framework that fuses a double tight frame (DTF) into a mixed-norm regularization for MR image reconstruction from undersampled k-space data. In this framework, the MR image is decomposed into smooth and nonsmooth regions. For the smooth regions, a wavelet TF-based weighted L1-norm regularization is developed to reconstruct the piecewise-smooth information of the image. For the nonsmooth regions, we introduce a curvelet TF-based robust L1,a-norm regularization whose parameter preserves edge structural details and texture. To estimate a reasonable parameter, an adaptive parameter selection scheme is designed for the robust L1,a-norm regularization. Experimental results demonstrate that the proposed method achieves the best image reconstruction results when compared with other existing methods in terms of quantitative metrics and visual effect.

    Rational optimization for nonlinear reconstruction with approximate ℓ0 penalization

    Recovering a nonlinearly degraded signal in the presence of noise is a challenging problem. In this work, the problem is tackled by minimizing the sum of a nonconvex least-squares fit criterion and a penalty term. We assume that the nonlinearity of the model can be accounted for by a rational function. In addition, we suppose that the signal to be sought is sparse, so a rational approximation of the ℓ0 pseudo-norm constitutes a suitable penalization. The resulting composite cost function belongs to the broad class of semi-algebraic functions. To find a globally optimal solution to such an optimization problem, it can be transformed into a generalized moment problem, for which a hierarchy of semidefinite programming relaxations can be built. Global optimality comes at the expense of an increased dimension and, to overcome computational limitations on the number of involved variables, the structure of the problem has to be carefully exploited. A situation of practical interest is when the nonlinear model consists of a convolutive transform followed by a componentwise nonlinear rational saturation. We then propose a sparse relaxation able to deal with up to several hundred optimized variables. In contrast with the naive approach of linearizing the model, our experiments show that the proposed approach offers good performance.
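    The two rational ingredients of the abstract — a rational saturation in the model and a rational surrogate of the ℓ0 pseudo-norm — can be shown on a scalar toy. The saturation s(t) = t/(1+t²), the surrogate ψ(u) = u²/(u²+ε), and all parameter values here are illustrative assumptions, and a dense grid search stands in for the certified global optimization that the moment/SDP (Lasserre) hierarchy provides in higher dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observations z_i = s(h_i * x*) + noise, with a rational saturation and
# a rational l0 surrogate; the composite cost below is semi-algebraic
# and nonconvex in x.
s = lambda t: t / (1.0 + t ** 2)                 # rational saturation
psi = lambda u, eps=1e-2: u ** 2 / (u ** 2 + eps)  # rational l0 surrogate

x_star, lam = 0.8, 0.01
h = rng.uniform(0.2, 1.0, size=20)               # known mixing weights
z = s(h * x_star) + 0.01 * rng.standard_normal(20)

def cost(x):
    return np.sum((z - s(h * x)) ** 2) + lam * psi(x)

# Dense grid search: a stand-in for the moment/SDP global solver, viable
# only because the unknown is one-dimensional here.
grid = np.linspace(-3.0, 3.0, 6001)
x_hat = grid[np.argmin([cost(g) for g in grid])]
print(round(x_hat, 2))
```

    In the paper's setting the unknown is a vector, grid search is hopeless, and the semidefinite relaxation hierarchy is what certifies global optimality of the minimizer.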