
    Structured Sparsity Promoting Functions: Theory and Applications

    Motivated by the minimax concave penalty based variable selection in high-dimensional linear regression, we introduce a simple scheme to construct structured semiconvex sparsity promoting functions from convex sparsity promoting functions and their Moreau envelopes. Properties of these functions are developed by leveraging their structure. In particular, we show that the behavior of the constructed function can be easily controlled by assumptions on the original convex function. We provide sparsity guarantees for the general family of functions via the proximity operator. Results on the Fenchel conjugate and Łojasiewicz exponent of these functions are also provided. We further study the behavior of the proximity operators of several special functions, including indicator functions of closed convex sets, piecewise quadratic functions, and linear combinations of the two. To demonstrate these properties, several concrete examples are presented and existing instances are featured as special cases. We explore the effect of these functions on the penalized least squares problem and discuss several algorithms for solving this problem that rely on the particular structure of our functions. We then apply these methods to the total variation denoising problem from signal processing.
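    The construction described above can be made concrete in the scalar case. A minimal sketch, assuming the scheme subtracts the Moreau envelope from the original convex function and taking f = lam*|.| as the convex sparsity promoter (the function names below are ours, not the paper's); the difference then recovers the minimax concave penalty that motivates the work.

```python
import numpy as np

def moreau_env_l1(x, lam, beta):
    """Moreau envelope of f(x) = lam*|x| with parameter beta:
    env(x) = min_y lam*|y| + (1/(2*beta))*(x - y)**2.
    This is the Huber function: quadratic near 0, linear in the tails.
    """
    ax = np.abs(x)
    return np.where(ax <= lam * beta,
                    ax ** 2 / (2 * beta),
                    lam * ax - lam ** 2 * beta / 2)

def semiconvex_penalty(x, lam, beta):
    """Difference f - env(f): for f = lam*|.| this is the minimax
    concave penalty (MCP). It matches lam*|x| near the origin and
    saturates at the constant lam**2 * beta / 2 for |x| >= lam*beta,
    so large coefficients are not biased toward zero.
    """
    return lam * np.abs(x) - moreau_env_l1(x, lam, beta)
```

    Since the subtracted envelope is smooth with a (1/beta)-Lipschitz gradient, the difference is semiconvex: adding a quadratic of curvature 1/beta restores convexity, which is the kind of easily controlled behavior the abstract refers to.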

    Design of Optimal Sparse Feedback Gains via the Alternating Direction Method of Multipliers

    We design sparse and block sparse feedback gains that minimize the variance amplification (i.e., the H_2 norm) of distributed systems. Our approach consists of two steps. First, we identify sparsity patterns of feedback gains by incorporating sparsity-promoting penalty functions into the optimal control problem, where the added terms penalize the number of communication links in the distributed controller. Second, we optimize feedback gains subject to structural constraints determined by the identified sparsity patterns. In the first step, the sparsity structure of feedback gains is identified using the alternating direction method of multipliers (ADMM), a powerful algorithm well suited to large optimization problems. This method alternates between promoting the sparsity of the controller and optimizing the closed-loop performance, which allows us to exploit the structure of the corresponding objective functions. In particular, we take advantage of the separability of the sparsity-promoting penalty functions to decompose the minimization problem into sub-problems that can be solved analytically. Several examples are provided to illustrate the effectiveness of the developed approach.
    Comment: To appear in IEEE Trans. Automat. Control
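    A minimal sketch of the two-block ADMM splitting described above, assuming the scaled-dual form and an l1 penalty on the gain entries; the closed-loop performance J is treated as a smooth black box minimized by a generic solver, whereas the paper's F-step exploits the structure of the H_2 cost (the function names are ours).

```python
import numpy as np
from scipy.optimize import minimize

def soft_threshold(V, kappa):
    """Elementwise shrinkage: the analytical solution of the separable
    sparsity-promoting subproblem with an l1 penalty."""
    return np.sign(V) * np.maximum(np.abs(V) - kappa, 0.0)

def admm_sparse_gain(J, F0, gamma, rho=1.0, iters=100):
    """ADMM skeleton for: minimize J(F) + gamma*||G||_1 subject to F = G.

    J  : callable giving the closed-loop performance (e.g., an H_2 cost)
         of a gain matrix; treated here as a smooth black box.
    F0 : initial feedback gain (e.g., the unstructured optimal gain).
    """
    F, G = F0.copy(), F0.copy()
    Lam = np.zeros_like(F0)                      # scaled dual variable
    for _ in range(iters):
        # F-step: improve closed-loop performance (smooth subproblem).
        obj = lambda f: (J(f.reshape(F0.shape))
                         + 0.5 * rho * np.sum((f.reshape(F0.shape) - G + Lam) ** 2))
        F = minimize(obj, F.ravel(), method="L-BFGS-B").x.reshape(F0.shape)
        # G-step: promote sparsity; separable, solved analytically.
        G = soft_threshold(F + Lam, gamma / rho)
        # Dual update on the consensus constraint F = G.
        Lam += F - G
    return G                     # zeros of G give the sparsity pattern
```

    The identified pattern is then fixed and the gains re-optimized subject to that structure (the abstract's second step), which this sketch omits.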

    A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal

    Unveiling meaningful geophysical information from seismic data requires dealing with both random and structured "noises". As their amplitude may exceed that of the signals of interest (primaries), additional prior information is especially important for efficient signal separation. We address here the problem of multiple reflections, caused by wave-field bouncing between layers. Since only approximate models of these phenomena are available, we propose a flexible framework for time-varying adaptive filtering of seismic signals, using sparse representations based on inaccurate templates. We recast the joint estimation of the adaptive filters and the primaries in a new convex variational formulation. This approach allows us to incorporate plausible knowledge about noise statistics, data sparsity, and slow filter variation in parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a constrained minimization problem that alleviates standard regularization issues in finding hyperparameters. The approach demonstrates good performance in low signal-to-noise ratio conditions, on both simulated and real field seismic data.
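    As a rough stand-in for the constrained primal-dual scheme described above, here is a standard Chambolle-Pock iteration on a simplified, unconstrained analysis-sparsity model; the model, step sizes, and names are our assumptions, and the paper's actual formulation jointly estimates filters and primaries under constraints.

```python
import numpy as np

def primal_dual_denoise(y, L, Lt, lam, sigma, tau, iters=200):
    """Chambolle-Pock iteration for
        min_x 0.5*||x - y||**2 + lam*||L x||_1,
    where L/Lt apply an analysis operator (e.g., a wavelet frame) and
    its adjoint. Convergence requires sigma*tau*||L||**2 < 1.
    """
    x = y.copy()
    xbar = x.copy()
    u = np.zeros_like(L(y))                       # dual variable
    for _ in range(iters):
        # Dual step: prox of the conjugate of lam*||.||_1 is the
        # projection onto the l-infinity ball of radius lam.
        u = np.clip(u + sigma * L(xbar), -lam, lam)
        # Primal step: prox of the quadratic data-fidelity term.
        x_new = (x - tau * Lt(u) + tau * y) / (1.0 + tau)
        # Extrapolation step.
        xbar = 2.0 * x_new - x
        x = x_new
    return x
```

    In the paper's constrained setting, a data-fidelity ball tied to the noise level replaces the penalty weight, which is what sidesteps hand-tuning of regularization hyperparameters; the corresponding prox step becomes a projection onto that constraint set.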

    Group-Sparse Signal Denoising: Non-Convex Regularization, Convex Optimization

    Convex optimization with sparsity-promoting convex regularization is a standard approach for estimating sparse signals in noise. In order to promote sparsity more strongly than convex regularization, it is also standard practice to employ non-convex optimization. In this paper, we take a third approach. We utilize a non-convex regularization term chosen such that the total cost function (consisting of data consistency and regularization terms) is convex. Therefore, sparsity is more strongly promoted than in the standard convex formulation, but without sacrificing the attractive aspects of convex optimization (unique minimum, robust algorithms, etc.). We use this idea to improve the recently developed 'overlapping group shrinkage' (OGS) algorithm for the denoising of group-sparse signals. The algorithm is applied to the problem of speech enhancement with favorable results in terms of both SNR and perceptual quality.
    Comment: 14 pages, 11 figures
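    The non-convex improvement builds on the convex OGS iteration, a majorization-minimization update for overlapping l2-norm groups. A minimal sketch of that convex baseline, assuming all length-K sliding windows as groups with zero-padding at the signal boundaries (the function name and conventions are ours); the paper's contribution replaces the group norm with a non-convex penalty tuned so the total cost stays convex.

```python
import numpy as np

def ogs_denoise(y, lam, K, iters=50, eps=1e-10):
    """Overlapping group shrinkage (convex baseline) via MM for
        min_x 0.5*||y - x||**2 + lam * sum_i ||x_{i:i+K}||_2 ,
    with every length-K window as an (overlapping) group.
    """
    x = y.copy()
    n = len(y)
    win = np.ones(K)
    for _ in range(iters):
        # l2 norm of every overlapping group (zero-padded at the ends).
        group_norm = np.sqrt(np.convolve(x ** 2, win, mode="full"))
        # r[i] = sum over the K groups containing sample i of 1/||group||;
        # eps guards against division by zero for all-zero groups.
        r = np.convolve(1.0 / (group_norm + eps), win,
                        mode="full")[K - 1:K - 1 + n]
        # Closed-form minimizer of the quadratic majorizer.
        x = y / (1.0 + lam * r)
    return x
```

    Each iteration majorizes every group norm by a quadratic around the current estimate, so the update reduces to a pointwise scaling of y; the non-convex variant proposed in the paper modifies this shrinkage rule while the overall cost remains convex.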