
    A Parallel Best-Response Algorithm with Exact Line Search for Nonconvex Sparsity-Regularized Rank Minimization

    In this paper, we propose a convergent parallel best-response algorithm with exact line search for the nondifferentiable, nonconvex sparsity-regularized rank minimization problem. On the one hand, it exhibits faster convergence than subgradient algorithms and block coordinate descent algorithms. On the other hand, its convergence to a stationary point is guaranteed, whereas ADMM algorithms are only guaranteed to converge for convex problems. Furthermore, the exact line search in the proposed algorithm is performed efficiently in closed form, which avoids the meticulous choice of stepsizes that is a common bottleneck in subgradient algorithms and successive convex approximation algorithms. Finally, the proposed algorithm is tested numerically.
    Comment: Submitted to IEEE ICASSP 201
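
    The closed-form line search can be illustrated on a generic l1-regularized least-squares problem. The sketch below is a minimal illustration of the idea, assuming the objective (1/2)||Ax - y||^2 + lam*||x||_1 rather than the paper's rank-minimization formulation; all names are illustrative. Each coordinate's best response is computed in parallel by soft-thresholding, and the stepsize minimizes a differentiable upper bound of the objective along the update direction in closed form, so no stepsize tuning is needed.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise soft-thresholding: proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def parallel_best_response(A, y, x, lam, diag_AtA):
    # Jacobi (parallel) best response: every coordinate exactly minimizes
    # 0.5*||Ax - y||^2 + lam*|x_i| with all other coordinates held fixed.
    # diag_AtA is the diagonal of A^T A, e.g. np.sum(A * A, axis=0).
    grad = A.T @ (A @ x - y)
    return soft_threshold(x - grad / diag_AtA, lam / diag_AtA)

def exact_stepsize(A, y, x, x_br, lam):
    # Along x + gamma*d with d = x_br - x, the smooth part is an exact
    # quadratic in gamma, and the l1 term is upper-bounded by the chord
    # (1 - gamma)*||x||_1 + gamma*||x_br||_1, so the minimizer over
    # gamma in [0, 1] is available in closed form.
    d = x_br - x
    Ad = A @ d
    slope = (A @ x - y) @ Ad + lam * (np.linalg.norm(x_br, 1) - np.linalg.norm(x, 1))
    curv = Ad @ Ad
    if curv <= 1e-12:
        return 0.0
    return float(np.clip(-slope / curv, 0.0, 1.0))

# One iteration: x_br = parallel_best_response(A, y, x, lam, diag_AtA)
#                x    = x + exact_stepsize(A, y, x, x_br, lam) * (x_br - x)
```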

    Inexact Block Coordinate Descent Algorithms for Nonsmooth Nonconvex Optimization

    In this paper, we propose an inexact block coordinate descent algorithm for large-scale nonsmooth nonconvex optimization problems. At each iteration, a particular block variable is selected and updated by inexactly solving the original optimization problem with respect to that block variable. More precisely, a local approximation of the original optimization problem is solved. The proposed algorithm has several attractive features, namely, i) high flexibility, as the approximation function only needs to be strictly convex and does not have to be a global upper bound of the original function; ii) fast convergence, as the approximation function can be designed to exploit the problem structure at hand and the stepsize is calculated by line search; iii) low complexity, as the approximation subproblems are much easier to solve and the line search is carried out over a properly constructed differentiable function; iv) guaranteed convergence of a subsequence to a stationary point, even when the objective function does not have a Lipschitz continuous gradient. Interestingly, when the approximation subproblem is solved by a descent algorithm, convergence of a subsequence to a stationary point is still guaranteed even if the subproblem is solved inexactly by terminating the descent algorithm after a finite number of iterations. These features make the proposed algorithm suitable for large-scale problems where the dimension exceeds the memory and/or the processing capability of existing hardware. They are illustrated by several applications in signal processing and machine learning, such as network anomaly detection and phase retrieval.
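
    The sketch below is a minimal, hedged rendering of this scheme (not the paper's exact algorithm): blocks are selected cyclically, and each block update applies only a few proximal-gradient steps to a strictly convex local model, so the subproblem is deliberately solved inexactly. The line-search stepsize is omitted for brevity; grad_f, prox_g, the block partition, and the model weight c are assumptions supplied by the user.

```python
import numpy as np

def inexact_bcd(grad_f, prox_g, x0, blocks, c=1.0, inner_iters=3, outer_iters=100):
    # grad_f(x): gradient of the smooth part of the objective.
    # prox_g(v, t): proximal operator of the nonsmooth part, assumed
    #   separable across blocks.
    # blocks: list of index arrays partitioning the variables.
    # c: weight of the strictly convex quadratic model (should dominate
    #   the block curvature for the inner steps to descend).
    x = x0.copy()
    for it in range(outer_iters):
        k = blocks[it % len(blocks)]              # cyclic block selection
        for _ in range(inner_iters):              # inexact inner solve:
            g = grad_f(x)[k]                      # only a few descent
            x[k] = prox_g(x[k] - g / c, 1.0 / c)  # steps, not optimality
    return x

# Example (lasso): grad_f = lambda x: A.T @ (A @ x - y)
#                  prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
```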

    Successive convex approximation algorithms for sparse signal estimation with nonconvex regularizations

    In this paper, we propose a successive convex approximation framework for sparse optimization where the nonsmooth regularization function in the objective is nonconvex and can be written as the difference of two convex functions. The proposed framework is based on a nontrivial combination of the majorization-minimization framework and the successive convex approximation framework proposed in the literature for convex regularization functions. The proposed framework has several attractive features, namely, i) flexibility, as different choices of the approximate function lead to different types of algorithms; ii) fast convergence, as the problem structure can be better exploited by a proper choice of the approximate function and the stepsize is calculated by line search; iii) low complexity, as the approximate function is convex and the line search is carried out over a differentiable function; iv) guaranteed convergence to a stationary point. We demonstrate these features with two example applications in subspace learning, namely the network anomaly detection problem and the sparse subspace clustering problem. Customizing the proposed framework with a best-response type approximation, we obtain soft-thresholding algorithms with exact line search in which all elements of the unknown parameter are updated in parallel according to closed-form expressions. The attractive features of the proposed algorithms are illustrated numerically.
    Comment: Submitted to IEEE Journal of Selected Topics in Signal Processing, special issue on Robust Subspace Learning
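
    To make the difference-of-convex structure concrete, the sketch below assumes the capped-l1 penalty lam*min(|x_i|, theta) = lam*|x_i| - lam*max(|x_i| - theta, 0) as an illustrative regularizer; the paper's framework covers general DC regularizers, and all names here are assumptions. The concave correction is linearized at the current iterate (the majorization step), the resulting l1-regularized quadratic is solved by parallel soft-thresholding, and the stepsize is computed by the same closed-form line search as in the first sketch.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise soft-thresholding: proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sca_dc_step(A, y, x, lam, theta, diag_AtA):
    # DC split: lam*min(|t|, theta) = lam*|t| - g_minus(t), where
    # g_minus(t) = lam*max(|t| - theta, 0) is convex. Linearizing
    # g_minus at x majorizes the objective by an l1-regularized
    # quadratic, whose best response is parallel soft-thresholding.
    xi = lam * np.sign(x) * (np.abs(x) > theta)  # subgradient of g_minus
    grad = A.T @ (A @ x - y) - xi                # gradient of the smooth majorizer
    x_br = soft_threshold(x - grad / diag_AtA, lam / diag_AtA)
    # Closed-form exact line search over gamma in [0, 1], as in the
    # first sketch but with the linearized DC term in the smooth part.
    d = x_br - x
    Ad = A @ d
    slope = grad @ d + lam * (np.linalg.norm(x_br, 1) - np.linalg.norm(x, 1))
    curv = Ad @ Ad
    gamma = 0.0 if curv <= 1e-12 else float(np.clip(-slope / curv, 0.0, 1.0))
    return x + gamma * d
```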

    Extended Successive Convex Approximation for Phase Retrieval with Dictionary Learning

    Phase retrieval aims at reconstructing unknown signals from magnitude measurements of linear mixtures. In this paper, we consider the phase retrieval with dictionary learning problem, which includes the additional prior information that the measured signal admits a sparse representation over an unknown dictionary. The task is to jointly estimate the dictionary and the sparse representation from magnitude-only measurements. To this end, we study two complementary formulations and develop efficient parallel algorithms by extending the successive convex approximation framework using a smooth majorization. The first algorithm, termed compact-SCAphase, is preferable for less diverse mixture models; it employs a compact formulation that avoids auxiliary variables, is highly scalable, and has reduced parameter-tuning cost. The second algorithm, referred to as SCAphase, uses auxiliary variables and is favorable for highly diverse mixture models; it also allows simple incorporation of additional side constraints. The performance of both methods is evaluated on blind sparse channel estimation from subband magnitude measurements in a multi-antenna random access network. Simulation results demonstrate the efficiency of the proposed techniques compared to state-of-the-art methods.
    Comment: This work has been submitted to the IEEE Transactions on Signal Processing for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
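
    The smooth majorization at the heart of both algorithms can be illustrated on a plain amplitude-fit loss, in a simplified setting that drops the dictionary-learning part; all names are illustrative. For any anchor z_t, (|z| - b)^2 <= |z - b*z_t/|z_t||^2 with equality at z = z_t, so each iteration replaces the nonsmooth amplitude loss by a smooth least-squares fit to phase-completed measurements.

```python
import numpy as np

def mm_phase_step(A, b, x, eps=1e-12):
    # One majorization-minimization step for
    #   min_x  sum_i (|a_i^H x| - b_i)^2,   with b_i >= 0.
    # The majorizer fixes the phase of each linear measurement at the
    # current iterate and minimizes the resulting smooth least-squares
    # problem exactly.
    z = A @ x
    phase = z / np.maximum(np.abs(z), eps)  # current phase estimates
    target = b * phase                      # phase-completed measurements
    x_new, *_ = np.linalg.lstsq(A, target, rcond=None)
    return x_new
```

    Iterating this step yields a Gerchberg-Saxton-style alternating scheme; the algorithms above build on the same majorization but add successive convex approximation, parallel variable updates, and the joint dictionary estimate.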

    Algorithms for Multiclass Classification and Regularized Regression
