Minimization of multi-penalty functionals by alternating iterative thresholding and optimal parameter choices
Inspired by several recent developments in regularization theory,
optimization, and signal processing, we present and analyze a numerical
approach to multi-penalty regularization in spaces of sparsely represented
functions. The sparsity prior is motivated by the geometrical/structured
features widely expected of high-dimensional data, which may not be
well represented in the framework of typically more isotropic Hilbert spaces.
In this paper, we are particularly interested in regularizers which are able to
correctly model and separate the multiple components of additively mixed
signals. This situation is rather common as pure signals may be corrupted by
additive noise. To this end, we consider a regularization functional composed
of a data-fidelity term, in which signal and noise are additively mixed, a
non-smooth and non-convex sparsity promoting term, and a penalty term to model
the noise. We propose and analyze the convergence of an iterative alternating
algorithm based on simple iterative thresholding steps to perform the
minimization of the functional. By means of this algorithm, we explore the
effect of choosing different regularization parameters and penalization norms
in terms of the quality of recovering the pure signal and separating it from
additive noise. For a given fixed noise level, numerical experiments confirm a
significant improvement in performance compared to standard one-parameter
regularization methods. By using high-dimensional data analysis methods such as
Principal Component Analysis, we are able to show the correct geometrical
clustering of regularized solutions around the expected solution. Finally, for
the compressive sensing problems considered in our experiments, we provide a
guideline for the choice of regularization norms and parameters.
Comment: 32 pages
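The alternating scheme described in this abstract can be illustrated with a minimal sketch. Assumptions not taken from the paper: an identity forward operator, a convex l1 surrogate in place of the non-convex sparsity term, an l2 penalty for the noise component, and illustrative weights alpha and beta; all function names are hypothetical.

```python
import math

# Minimal sketch (not the paper's exact functional): alternate closed-form
# updates for  min_{u,v}  ||u + v - y||^2 + 2*alpha*||u||_1 + beta*||v||_2^2,
# i.e. an identity forward operator, a convex l1 surrogate for the sparsity
# term, and an l2 penalty modeling the noise component v.

def soft_threshold(x, t):
    # component-wise soft thresholding, the proximal map of the l1 norm
    return [math.copysign(max(abs(xi) - t, 0.0), xi) for xi in x]

def alternating_thresholding(y, alpha=0.5, beta=1.0, iters=100):
    n = len(y)
    u = [0.0] * n  # sparse signal component
    v = [0.0] * n  # noise component
    for _ in range(iters):
        # u-step: soft-threshold the residual y - v
        u = soft_threshold([y[i] - v[i] for i in range(n)], alpha)
        # v-step: closed-form ridge shrinkage of y - u
        v = [(y[i] - u[i]) / (1.0 + beta) for i in range(n)]
    return u, v
```

Both partial minimizations have closed-form solutions (a thresholding and a shrinkage), which is what keeps each alternating step cheap.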
Graph SLAM sparsification with populated topologies using factor descent optimization
Current solutions to the simultaneous localization and mapping (SLAM) problem approach it as the optimization of a graph of geometric constraints. Scalability is achieved by reducing the size of the graph, usually in two phases. First, some selected nodes in the graph are marginalized; then, the dense and non-relinearizable result is sparsified. The sparsified network has a new set of relinearizable factors and is an approximation to the original dense one. Sparsification is typically approached as a Kullback-Leibler divergence (KLD) minimization between the dense marginalization result and the new set of factors. For a simple topology of the new factors, such as a tree, there is a closed-form optimal solution. However, more populated topologies can achieve a much better approximation because more information can be encoded, although in that case iterative optimization is needed to solve the KLD minimization. The iterative optimization methods proposed by state-of-the-art sparsification approaches require parameter tuning, which strongly affects their convergence. In this paper, we propose factor descent and non-cyclic factor descent, two simple algorithms for SLAM sparsification that match the state-of-the-art methods without any parameters to be tuned. The proposed methods are compared against the state of the art with regard to accuracy and CPU time, on both synthetic and real-world datasets.
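The KLD objective that sparsification minimizes can be made concrete in a toy 2x2 setting. This is a hypothetical sketch of the objective only, not the factor descent algorithm; it assumes zero-mean Gaussians described by their information matrices.

```python
import math

# Toy sketch of the Kullback-Leibler divergence between two zero-mean
# Gaussians given by 2x2 information (inverse covariance) matrices:
#   D_KL(p_dense || p_approx)
#     = 0.5 * (tr(L_approx @ Sigma_dense) - d - ln det(L_approx @ Sigma_dense))
# where Sigma_dense = inverse(L_dense) and d = 2.

def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def kld(L_dense, L_approx):
    S = matmul2(L_approx, inv2(L_dense))  # L_approx @ Sigma_dense
    trace = S[0][0] + S[1][1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return 0.5 * (trace - 2.0 - math.log(det))
```

A tree-structured approximation admits a closed-form optimum of this objective; for more populated topologies, this is the quantity that iterative methods such as factor descent drive down.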
Fast iterative solution of reaction-diffusion control problems arising from chemical processes
PDE-constrained optimization, together with the development of preconditioned iterative methods for the efficient solution of the arising matrix systems, is a field of numerical analysis that has recently been attracting much attention. In this paper, we analyze and develop preconditioners for matrix systems that arise from the optimal control of reaction-diffusion equations, which themselves result from chemical processes. Important aspects in our solvers are saddle point theory, mass matrix representation and effective Schur complement approximation, as well as the outer (Newton) iteration to take account of the nonlinearity of the underlying PDEs.
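The Schur-complement reduction at the heart of such saddle-point solvers can be sketched on a tiny dense system. The 2x2/1x2 blocks and the direct inversion below are illustrative assumptions; practical solvers for these problems only approximate the Schur complement and use preconditioned Krylov iterations.

```python
# Toy sketch of the Schur-complement reduction behind saddle-point solvers:
#   K = [[A, B^T], [B, 0]],  K [x; y] = [f; g].
# Eliminating x gives  S y = B A^{-1} f - g  with  S = B A^{-1} B^T,
# then  A x = f - B^T y.

def solve_saddle_point(A, B, f, g):
    # A: 2x2 symmetric positive definite block, B: 1x2 block.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

    def apply2(M, v):  # 2x2 matrix times length-2 vector
        return [M[i][0] * v[0] + M[i][1] * v[1] for i in range(2)]

    Ainv_f = apply2(Ainv, f)
    Ainv_Bt = apply2(Ainv, B[0])                     # A^{-1} B^T
    S = B[0][0] * Ainv_Bt[0] + B[0][1] * Ainv_Bt[1]  # scalar Schur complement
    y = (B[0][0] * Ainv_f[0] + B[0][1] * Ainv_f[1] - g[0]) / S
    x = apply2(Ainv, [f[0] - B[0][0] * y, f[1] - B[0][1] * y])
    return x, y
```

The quality of the Schur complement approximation is exactly what determines how effective the resulting preconditioner is inside the outer Krylov (and Newton) iterations.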
Spectrum optimization in multi-user multi-carrier systems with iterative convex and nonconvex approximation methods
Several practical multi-user multi-carrier communication systems are
characterized by a multi-carrier interference channel system model where the
interference is treated as noise. For these systems, spectrum optimization is a
promising means to mitigate interference. This however corresponds to a
challenging nonconvex optimization problem. Existing iterative convex
approximation (ICA) methods consist in solving a series of improving convex
approximations and are typically implemented in a per-user iterative approach.
However, they do not take this iterative implementation into account in
their design. This paper proposes a novel class of iterative approximation
methods that focuses explicitly on the per-user iterative implementation, which
makes it possible to relax the problem significantly, dropping joint convexity
and even convexity requirements for the approximations. A systematic design framework is
proposed to construct instances of this novel class, where several new
iterative approximation methods are developed with improved per-user convex and
nonconvex approximations that are both tighter and simpler to solve (in
closed-form). As a result, these novel methods display a much faster
convergence speed and require a significantly lower computational cost.
Furthermore, a majority of the proposed methods can tackle the issue of getting
stuck in bad locally optimal solutions, and hence improve solution quality
compared to existing ICA methods.
Comment: 33 pages, 7 figures. This work has been submitted for possible publication.
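The iterative-approximation idea can be illustrated in one dimension with a hypothetical difference-of-convex toy problem (not the paper's per-user spectrum algorithm): minimize the nonconvex f(x) = x^4 - 2x^2 by linearizing its concave part at each iterate, leaving a convex surrogate with a closed-form minimizer.

```python
import math

# Hypothetical 1-D illustration of iterative approximation: f(x) = x^4 - 2x^2
# splits as convex x^4 minus convex 2x^2. Linearizing -2x^2 at x_k gives the
# convex surrogate x^4 - 4*x_k*x + const, whose closed-form minimizer
# (4x^3 = 4x_k) is x_{k+1} = cbrt(x_k).

def iterative_approximation(x0, iters=60):
    x = x0
    for _ in range(iters):
        x = math.copysign(abs(x) ** (1.0 / 3.0), x)  # minimizer of surrogate
    return x  # converges to a stationary point x = +/-1 from any nonzero start
```

Each surrogate upper-bounds f and touches it at x_k, so the objective decreases monotonically while every step remains a trivial closed-form solve, mirroring the closed-form per-user updates advocated in the abstract.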
Computational Methods for Sparse Solution of Linear Inverse Problems
The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
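One of the simplest greedy algorithms in this family, matching pursuit, can be sketched in a few lines (a toy pure-Python version; the dictionary atoms are assumed unit-norm and the function name is illustrative):

```python
# Toy matching pursuit: repeatedly select the dictionary atom most
# correlated with the residual and peel off its contribution.

def matching_pursuit(signal, atoms, steps=10):
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(steps):
        # inner product of every atom with the current residual
        corr = [sum(a[i] * residual[i] for i in range(len(residual)))
                for a in atoms]
        k = max(range(len(atoms)), key=lambda j: abs(corr[j]))
        coeffs[k] += corr[k]
        residual = [residual[i] - corr[k] * atoms[k][i]
                    for i in range(len(residual))]
    return coeffs, residual
```

On an orthonormal dictionary this loop recovers the signal exactly; on overcomplete dictionaries the same greedy selection yields the approximations whose performance and guarantees such surveys analyze.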