
    Non-convex Fraction Function Penalty: Sparse Signals Recovered from Quasi-linear Systems

    The goal of compressed sensing is to reconstruct a sparse signal from a few linear measurements, far fewer than the dimension of the ambient space of the signal. However, many real-life applications in physics and the biomedical sciences carry strongly nonlinear structures, and the linear model is no longer suitable. Compared with compressed sensing in the linear setting, nonlinear compressed sensing is much more difficult (in fact an NP-hard combinatorial problem) because of the discrete and discontinuous nature of the ℓ_0-norm and the nonlinearity. To make sparse signal recovery tractable, we assume in this paper that most of the nonlinear models have a smooth quasi-linear structure, and we study a non-convex fraction function ρ_a in this quasi-linear compressed sensing setting. We propose an iterative fraction thresholding algorithm to solve the regularization problem (QP_a^λ) for all a > 0. By varying the parameter a > 0, our algorithm can obtain promising results, which is one of its advantages over other algorithms. Numerical experiments show that our method performs much better than some state-of-the-art methods.
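The iterative fraction thresholding scheme is not spelled out in the abstract; it follows the standard iterative-thresholding template sketched below, with soft thresholding standing in as a placeholder for the fraction-function threshold, and with the matrix A, step size mu, and problem sizes chosen purely for illustration:

```python
import numpy as np

def soft_threshold(z, t):
    # Placeholder shrinkage rule; the paper's fraction-function
    # threshold (not given in the abstract) would replace this.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def iterative_thresholding(A, b, lam, mu=0.1, iters=1000):
    # Generic scheme: gradient step on 0.5*||Ax - b||^2,
    # then a componentwise thresholding step.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - mu * (A.T @ (A @ x - b)), mu * lam)
    return x

# Toy demo: recover a 3-sparse signal from 40 linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 30, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = iterative_thresholding(A, b, lam=0.01)
```

Swapping a different shrinkage rule into `soft_threshold` changes which penalty the iteration minimizes; the outer gradient-plus-threshold loop stays the same.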

    Modified lp-norm regularization minimization for sparse signal recovery

    Among the many substitution models for the ℓ_0-norm minimization problem (P_0), ℓ_p-norm minimization (P_p) with 0 < p < 1 has been considered the most natural choice. However, the non-convex optimization problem (P_p) is much more computationally challenging, and is also NP-hard. Meanwhile, algorithms based on the proximal mapping of the regularized ℓ_p-norm minimization (P_p^λ) are limited to a few specific values of the parameter p. In this paper, we replace the ℓ_p-norm ||x||_p^p with a modified function Σ_{i=1}^n |x_i|/(|x_i| + ε_i)^{1-p}. As the parameter ε > 0 varies, this modified function interpolates the ℓ_p-norm ||x||_p^p. By this transformation, we translate the ℓ_p-norm regularization minimization (P_p^λ) into a modified ℓ_p-norm regularization minimization (P_p^{λ,ε}). We then develop the thresholding representation theory of the problem (P_p^{λ,ε}) and, based on it, propose an IT algorithm to solve it for all 0 < p < 1. Indeed, much better results can be obtained by choosing a proper p, which is one of the advantages of our algorithm over other methods. Numerical results also show that, for some proper p, our algorithm performs best in some sparse signal recovery problems compared with some state-of-the-art methods.
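The interpolation claim can be checked numerically by evaluating the modified penalty against ||x||_p^p directly; a minimal sketch, using a single scalar ε in place of the per-entry ε_i for simplicity:

```python
import numpy as np

def lp_quasi_norm(x, p):
    # ||x||_p^p = sum_i |x_i|^p (a quasi-norm for 0 < p < 1)
    return np.sum(np.abs(x) ** p)

def modified_lp(x, p, eps):
    # Modified penalty from the abstract: sum_i |x_i| / (|x_i| + eps)^(1-p),
    # with one scalar eps standing in for the per-entry eps_i.
    a = np.abs(x)
    return np.sum(a / (a + eps) ** (1.0 - p))

x = np.array([1.0, -0.5, 0.0, 2.0])
p = 0.5
# The gap to ||x||_p^p shrinks as eps -> 0.
gaps = [abs(modified_lp(x, p, e) - lp_quasi_norm(x, p))
        for e in (1.0, 1e-2, 1e-6)]
```

Note that a zero entry contributes zero to both expressions, so the modified function keeps the sparsity-promoting behaviour of ||x||_p^p while remaining Lipschitz for ε > 0.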

    Minimization of fraction function penalty in compressed sensing

    In this paper, we study the minimization problem of a non-convex sparsity-promoting penalty function P_a(x) = Σ_{i=1}^n p_a(x_i) = Σ_{i=1}^n a|x_i|/(1 + a|x_i|) in compressed sensing, called the fraction function. Firstly, we discuss the equivalence of ℓ_0 minimization and fraction function minimization. It is proved that there exists a constant a^{**} > 0 such that, whenever a > a^{**}, every solution to (FP_a) also solves (P_0); that the global minimizer of (FP_a) is unique and equivalent to (P_0) if the sensing matrix A satisfies a restricted isometry property (RIP); and, last but most important, that the optimal solution to the regularization problem (FP_a^λ) also solves (FP_a) if a certain condition is satisfied, similar to the regularization problem in convex optimization theory. Secondly, we study the properties of the optimal solution to the regularization problem (FP_a^λ), including the first-order and second-order optimality conditions and the lower and upper bounds on the absolute values of its nonzero entries. Finally, we derive the closed-form representation of the optimal solution to the regularization problem (FP_a^λ) for all positive values of the parameter a, and propose an iterative FP thresholding algorithm to solve it. We also provide a series of experiments to assess the performance of the FP algorithm; the results show that, compared with the soft thresholding algorithm and the half thresholding algorithm, the FP algorithm performs best in sparse signal recovery with and without measurement noise.
    Comment: 12 pages
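The penalty is simple to evaluate, and a direct computation illustrates the equivalence intuition: as a grows, P_a(x) approaches the number of nonzero entries of x. A minimal sketch on a toy vector (the values are from this example, not from the paper):

```python
import numpy as np

def fraction_penalty(x, a):
    # P_a(x) = sum_i a|x_i| / (1 + a|x_i|), as defined in the abstract.
    t = a * np.abs(x)
    return np.sum(t / (1.0 + t))

x = np.array([0.0, 0.1, -1.0, 5.0])        # three nonzero entries
p_small = fraction_penalty(x, a=1.0)       # a smooth, l1-like value
p_large = fraction_penalty(x, a=1000.0)    # approaches ||x||_0 = 3
```

Each term a|x_i|/(1 + a|x_i|) lies in [0, 1) and tends to 1 for any fixed nonzero x_i as a grows, which is what drives the equivalence with (P_0) for a above the threshold a^{**}.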

    Generalized singular value thresholding operator to affine matrix rank minimization problem

    It is well known that the affine matrix rank minimization problem is NP-hard, and all known algorithms for solving it exactly are doubly exponential, in theory and in practice, due to the combinatorial nature of the rank function. In this paper, a generalized singular value thresholding operator is derived to solve the affine matrix rank minimization problem. Numerical experiments show that our algorithm is effective in finding a low-rank matrix compared with some state-of-the-art methods.
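The generalized operator itself is not given in the abstract; the classical (convex) singular value thresholding operator it generalizes can be sketched as follows, with soft thresholding of the singular values standing in for the paper's nonconvex shrinkage rule:

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: shrink each singular value of M by tau.
    # The paper's generalized operator would replace this shrinkage rule.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shrinking singular values never increases the rank.
rng = np.random.default_rng(1)
L = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank <= 2
X = svt(L, tau=0.1)
```

Because only the singular values are modified, the operator acts spectrally: the singular vectors of the input are preserved, and small singular values are zeroed, promoting low rank.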

    Nonconvex fraction function recovery sparse signal by convex optimization algorithm

    In this paper, we generate a convex iterative FP thresholding algorithm to solve the problem (FP_a^λ). Two schemes are generated: convex iterative FP thresholding algorithm Scheme 1 and Scheme 2. A global convergence theorem is proved for Scheme 1. Under an adaptive rule, Scheme 2 is adaptive in both the choice of the regularization parameter λ and the parameter a. These are the advantages of our two schemes of the convex iterative FP thresholding algorithm over the two schemes of the iterative FP thresholding algorithm we proposed previously. Finally, we provide a series of numerical simulations to test the performance of Scheme 2, and the results show that it performs very well in recovering a sparse signal.

    A New Nonconvex Strategy to Affine Matrix Rank Minimization Problem

    The affine matrix rank minimization (AMRM) problem is to find a matrix of minimum rank that satisfies a given linear system constraint. It has many applications in important areas such as control, recommender systems, matrix completion, and network localization. However, the problem (AMRM) is NP-hard in general due to the combinatorial nature of the matrix rank function. Many alternative functions have been proposed to substitute for the matrix rank function, leading to corresponding alternative minimization problems that can be solved efficiently by popular convex or nonconvex optimization algorithms. In this paper, we propose a new nonconvex function, namely the TL_α^ε function (with α ≥ 0), to approximate the rank function, and translate the NP-hard problem (AMRM) into the TL_α^ε function affine matrix rank minimization (TLAMRM) problem. Firstly, we study the equivalence of the problems (AMRM) and (TLAMRM), and prove that the unique global minimizer of the problem (TLAMRM) also solves the NP-hard problem (AMRM) if the linear map A satisfies a restricted isometry property (RIP). Secondly, an iterative thresholding algorithm is proposed to solve the regularization problem (RTLAMRM) for all α ≥ 0. Finally, numerical results on low-rank matrix completion problems illustrate that our algorithm is able to recover a low-rank matrix, and extensive numerical results on image inpainting problems show that our algorithm performs best in finding a low-rank image compared with some state-of-the-art methods.

    Sparse Portfolio Selection via Non-convex Fraction Function

    In this paper, a continuous and non-convex sparsity-promoting fraction function is studied in two sparse portfolio selection models, with and without short-selling constraints. Firstly, we study the properties of the optimal solution to the problem (FP_{a,λ,η}), including the first-order and second-order optimality conditions and the lower and upper bounds on the absolute values of its nonzero entries. Secondly, we develop the thresholding representation theory of the problem (FP_{a,λ,η}). Based on it, we prove the existence of the resolvent operator of the gradient of P_a(x), calculate its analytic expression, and propose an iterative fraction penalty thresholding (IFPT) algorithm to solve the problem (FP_{a,λ,η}). Moreover, we prove that the value of the regularization parameter λ > 0 cannot be chosen too large; indeed, there exists λ̄ > 0 such that the optimal solution to the problem (FP_{a,λ,η}) is equal to zero for any λ > λ̄. Finally, inspired by the thresholding representation theory of the problem (FP_{a,λ,η}), we propose an iterative nonnegative fraction penalty thresholding (INFPT) algorithm to solve the nonnegativity-constrained problem (FP_{a,λ,η}^≥). Empirical results show that our methods, for some proper a > 0, are effective in finding sparse portfolio weights with and without short-selling constraints.
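The INFPT operator itself is not given in the abstract; a common way to realize a nonnegativity (no-short-selling) constraint is to compose a thresholding step with projection onto the nonnegative orthant, sketched here with soft thresholding standing in for the fraction penalty threshold:

```python
import numpy as np

def soft_threshold(z, t):
    # Placeholder shrinkage rule; the paper's fraction penalty
    # threshold would replace this.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def nonnegative_threshold(z, t):
    # Threshold, then project onto {w : w_i >= 0}; zeroed entries
    # correspond to assets excluded from the sparse portfolio.
    return np.maximum(soft_threshold(z, t), 0.0)

w = nonnegative_threshold(np.array([0.8, -0.3, 0.05]), 0.1)
# -> only the large long position survives: [0.7, 0.0, 0.0]
```

Both the short (negative) weight and the weight below the threshold are set to zero, so the constrained operator enforces sparsity and nonnegativity in a single componentwise step.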

    Twin-Load: Building a Scalable Memory System over the Non-Scalable Interface

    Commodity memory interfaces have difficulty scaling memory capacity to meet the needs of modern multicore and big-data systems. DRAM device density and maximum device count are constrained by technology, packaging, and signal integrity issues that limit total memory capacity. Synchronous DRAM protocols require data to be returned within a fixed latency, so memory extension methods over commodity DDRx interfaces fail to support scalable topologies. Current extension approaches either use slow PCIe interfaces or require expensive changes to the memory interface, which limits commercial adoptability. Here we propose twin-load, a lightweight asynchronous memory access mechanism over the synchronous DDRx interface. Twin-load uses two special loads to accomplish one access request to extended memory: the first serves as a prefetch command to the DRAM system, and the second asynchronously retrieves the required data. Twin-load requires no hardware changes on the processor side and only slight software modifications. We emulate the system on a prototype to demonstrate the feasibility of our approach. Twin-load has performance comparable to NUMA extended memory and outperforms a page-swapping PCIe-based system by several orders of magnitude. Twin-load thus enables instant capacity increases on commodity platforms; more importantly, our architecture opens opportunities for the design of novel, efficient, scalable, and cost-effective memory subsystems.
    Comment: submitted to PACT1

    Adaptive iterative singular value thresholding algorithm to low-rank matrix recovery

    The problem of recovering a low-rank matrix from linear constraints, known as the affine matrix rank minimization problem, has attracted extensive attention in recent years. In general, the affine matrix rank minimization problem is NP-hard. In our latest work, a non-convex fraction function was studied to approximate the rank function and translate the NP-hard affine matrix rank minimization problem into a transformed affine matrix rank minimization problem, and a scheme of iterative singular value thresholding algorithm was generated to solve the regularized transformed problem. However, one drawback of that iterative singular value thresholding algorithm is that the parameter a, which influences the behaviour of the non-convex fraction function in the regularized transformed problem, needs to be determined manually in every simulation; in fact, determining the optimal parameter a is not easy. In this paper, we instead generate an adaptive iterative singular value thresholding algorithm to solve the regularized transformed affine matrix rank minimization problem. Our new algorithm is adaptive in both the choice of the regularization parameter λ and the parameter a.

    Iterative thresholding algorithm based on non-convex method for modified lp-norm regularization minimization

    Recently, the ℓ_p-norm regularization minimization problem (P_p^λ) has attracted great attention in compressed sensing. However, the ℓ_p-norm ||x||_p^p in problem (P_p^λ) is nonconvex and non-Lipschitz for all p ∈ (0, 1), and few optimization theories and methods have been proposed to solve this problem; in fact, it is NP-hard for all p ∈ (0, 1) and λ > 0. In this paper, we study two modified ℓ_p regularization minimization problems to approximate the NP-hard problem (P_p^λ). Inspired by the good performance of the half algorithm and the 2/3 algorithm in some sparse signal recovery problems, two iterative thresholding algorithms are proposed to solve the problems (P_{p,1/2,ε}^λ) and (P_{p,2/3,ε}^λ), respectively. Numerical results show that our algorithms are effective in finding sparse signals in some sparse signal recovery problems for some proper p ∈ (0, 1).