48 research outputs found

    Fast exact variable order affine projection algorithm

Variable order affine projection algorithms have recently been presented for use when not only the convergence speed of the algorithm has to be adjusted, but also its computational cost and its final residual error. These kinds of affine projection (AP) algorithms improve the steady-state performance of the standard AP algorithm by reducing the residual mean square error. Furthermore, they optimize computational cost by dynamically adjusting their projection order to the convergence speed requirements. The main cost of the standard AP algorithm is due to the matrix inversion that appears in the coefficient update equation, and most efforts to decrease the computational cost of these algorithms have focused on optimizing this matrix inversion. This paper deals with optimizing the computational cost of variable order AP algorithms by recursive calculation of the inverse signal matrix; thus, a fast exact variable order AP algorithm is proposed. Exact iterative expressions to calculate the inverse matrix when the projection order either increases or decreases are incorporated into a variable order AP algorithm, leading to a reduced-complexity implementation. Simulation results show that the proposed algorithm performs similarly to existing variable order AP algorithms while having a lower computational complexity. © 2012 Elsevier B.V. All rights reserved. Partially supported by TEC2009-13741, PROMETEO 2009/0013, GV/2010/027, ACOMP/2010/006 and UPV PAID-06-09. Ferrer Contreras, M.; Gonzalez, A.; Diego Antón, M.D.; Piñero Sipán, M.G. (2012). Fast exact variable order affine projection algorithm. Signal Processing, 92(9), 2308-2314. https://doi.org/10.1016/j.sigpro.2012.03.007
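
    As a rough illustration of the kind of recursion such algorithms exploit, here is a minimal sketch (the function names and the regularization term delta are illustrative assumptions, not the paper's notation) that updates the inverse of a delta-regularized Gram matrix when the projection order grows or shrinks by one, at O(N^2) cost per update instead of a full O(N^3) re-inversion:

    ```python
    import numpy as np

    def grow_inverse(R_inv, r, rho):
        """Inverse of the bordered matrix [[R, r], [r^T, rho]] given
        R_inv = inv(R), via the block (bordering) inversion lemma."""
        q = R_inv @ r                      # R^{-1} r
        s = rho - r @ q                    # Schur complement (a scalar)
        N = R_inv.shape[0]
        P = np.empty((N + 1, N + 1))
        P[:N, :N] = R_inv + np.outer(q, q) / s
        P[:N, N] = -q / s
        P[N, :N] = -q / s
        P[N, N] = 1.0 / s
        return P

    def shrink_inverse(P):
        """Inverse of the leading N x N block of R, given the full
        (N+1) x (N+1) inverse P: the exact step for an order decrease."""
        N = P.shape[0] - 1
        return P[:N, :N] - np.outer(P[:N, N], P[N, :N]) / P[N, N]

    # Consistency check against direct inversion of a regularized Gram matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 4))
    R = A.T @ A + 1e-3 * np.eye(4)         # delta-regularized signal matrix
    r = rng.standard_normal(4)
    rho = 5.0
    R_big = np.block([[R, r[:, None]], [r[None, :], np.array([[rho]])]])
    assert np.allclose(grow_inverse(np.linalg.inv(R), r, rho), np.linalg.inv(R_big))
    assert np.allclose(shrink_inverse(np.linalg.inv(R_big)), np.linalg.inv(R))
    ```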

    Low Complexity Regularization of Linear Inverse Problems

Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
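
    As a concrete instance of point (iii), here is a minimal sketch of forward-backward splitting for the l1 (sparsity) prior, i.e. min_x 0.5*||y - Ax||^2 + lam*||x||_1; the step-size rule, the value of lam, and the toy data are illustrative assumptions, not prescriptions from the chapter:

    ```python
    import numpy as np

    def soft_threshold(x, t):
        """Proximal operator of t * ||.||_1 (the 'backward' step)."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def forward_backward(A, y, lam, n_iter=500):
        """Forward-backward splitting for min_x 0.5*||y - Ax||^2 + lam*||x||_1."""
        x = np.zeros(A.shape[1])
        tau = 1.0 / np.linalg.norm(A, 2) ** 2       # step < 2 / Lipschitz(grad)
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)                # forward (gradient) step
            x = soft_threshold(x - tau * grad, tau * lam)  # backward (prox) step
        return x

    # Toy compressed-sensing example: a sparse signal from random measurements.
    rng = np.random.default_rng(1)
    n, m, k = 200, 80, 5
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x_true + 0.01 * rng.standard_normal(m)
    x_hat = forward_backward(A, y, lam=0.02)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])
    ```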

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both the hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Courbure discrète : théorie et applications

The present volume contains the proceedings of the 2013 Meeting on discrete curvature, held at CIRM, Luminy, France. The aim of this meeting was to bring together researchers from various backgrounds, ranging from mathematics to computer science, with a focus on both theory and applications. With 27 invited talks and 8 posters, the conference attracted 70 researchers from all over the world. The challenge of finding a common ground on the topic of discrete curvature was met with success, and these proceedings are a testimony of this work.

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages
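
    To make the tensor train (TT) format concrete, here is a minimal sketch of the classical TT-SVD decomposition by sequential truncated SVDs (the truncation rule and names are illustrative assumptions; the monograph covers far more sophisticated contraction and optimization schemes):

    ```python
    import numpy as np

    def tt_decompose(X, max_rank):
        """Tensor-train (TT) decomposition by sequential truncated SVDs
        (the classical TT-SVD scheme). Returns 3-way cores of shape
        (r_{k-1}, n_k, r_k) with boundary ranks r_0 = r_d = 1."""
        dims = X.shape
        cores, r_prev = [], 1
        M = X.reshape(dims[0], -1)
        for k in range(len(dims) - 1):
            M = M.reshape(r_prev * dims[k], -1)
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            r = min(max_rank, len(s))              # rank truncation
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            M = s[:r, None] * Vt[:r]               # carry the remainder forward
            r_prev = r
        cores.append(M.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_reconstruct(cores):
        """Contract the TT cores back into the full tensor."""
        T = cores[0]
        for G in cores[1:]:
            T = np.tensordot(T, G, axes=([-1], [0]))
        return T.reshape(T.shape[1:-1])            # drop the unit boundary ranks

    # With a large enough rank budget the decomposition is exact.
    X = np.random.default_rng(2).standard_normal((4, 5, 6, 7))
    assert np.allclose(tt_reconstruct(tt_decompose(X, max_rank=50)), X)
    ```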


    Filtrado adaptativo multicanal para control local de campo sonoro basado en algoritmos de proyección afín

This doctoral thesis has focused on the development and implementation of efficient multichannel algorithms, based on the affine projection algorithm, applied to active noise control. To address this question, different efficient affine projection algorithms were first studied, analyzed, and validated through simulation, concluding with the implementation, in an enclosure, of a real multichannel active noise control system running on a DSP and driven by these algorithms. In recent years, affine projection algorithms have been proposed as control algorithms in adaptive systems that aim to improve the convergence speed of LMS-based algorithms, offering an efficient, robust, and stable alternative to those algorithms, whose main limitation is precisely their convergence speed. Affine projection algorithms can be considered a natural extension of the NLMS algorithm, since the latter updates its coefficients based on a single data vector of the input signal, whereas affine projection algorithms update the coefficients of the adaptive filters using N data vectors of the input signal (N being the projection order). Much effort has been devoted to optimizing the computational efficiency of these algorithms as applied to the echo cancellation problem, giving rise to different efficient versions of the affine projection algorithm. However, when applying it to active noise control, it is necessary to reduce the computational complexity even further, bearing in mind that computational efficiency is generally achieved at the expense of degrading some other characteristic of the algorithm (usually the convergence speed). This work presents some alternatives to existing efficient versions that do not significantly degrade the performance of the algorithm, and analyzes how to further reduce its computational complexity. Ferrer Contreras, M. (2008). Filtrado adaptativo multicanal para control local de campo sonoro basado en algoritmos de proyección afín [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3796
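
    A minimal sketch of the standard affine projection update described above, which reduces to NLMS for projection order N = 1 (the step size, regularization, and the system-identification setup are illustrative assumptions; the efficient multichannel variants developed in the thesis go well beyond this):

    ```python
    import numpy as np

    def affine_projection(x, d, L, N, mu=0.5, delta=1e-4):
        """Standard affine projection algorithm (APA): the filter w is
        corrected using the N most recent input vectors at once.

        x: input signal, d: desired signal, L: filter length, N: projection order."""
        w = np.zeros(L)
        for n in range(L + N - 1, len(x)):
            # A holds the N most recent input vectors as columns (L x N).
            A = np.column_stack([x[n - j - L + 1:n - j + 1][::-1] for j in range(N)])
            e = d[n - N + 1:n + 1][::-1] - A.T @ w      # N a-priori errors
            # Update through the regularized N x N matrix inversion.
            w += mu * A @ np.linalg.solve(A.T @ A + delta * np.eye(N), e)
        return w

    # Toy system identification: recover an unknown FIR system from noisy data.
    rng = np.random.default_rng(3)
    h = rng.standard_normal(8)                          # unknown system
    x = rng.standard_normal(5000)
    d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
    w = affine_projection(x, d, L=8, N=4)
    print("coefficient error:", np.linalg.norm(w - h))
    ```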

    Advanced sparse optimization algorithms for interferometric imaging inverse problems in astronomy

In the quest to produce images of the sky at unprecedented resolution with high sensitivity, a new generation of astronomical interferometers has been designed. To meet the sensing capabilities of these instruments, the techniques used to recover the sought images from incompletely sampled Fourier domain measurements need to be reinvented. This goes hand in hand with the necessity to calibrate the unknown effects modulating the measurements, which adversely affect the image quality and limit its dynamic range. The contribution of this thesis consists in the development of advanced optimization techniques tailored to address these issues, ranging from radio interferometry (RI) to optical interferometry (OI). In the context of RI, we propose a novel convex optimization approach for full polarization imaging relying on sparsity-promoting regularizations. Unlike standard RI imaging algorithms, our method jointly solves for the Stokes images by enforcing the polarization constraint, which imposes a physical dependency between the images. These priors are shown to enhance the imaging quality in various numerical studies. The proposed imaging approach also benefits from its scalability in handling the huge amounts of data expected from the new instruments. To deal with the critical and challenging issue of calibrating direction-dependent effects, we further propose a non-convex optimization technique that unifies the calibration and imaging steps in a global framework, adapting the earlier developed imaging method for the imaging step. In contrast to existing RI calibration approaches, our method benefits from well-established convergence guarantees, even in the non-convex setting considered in this work, and its efficiency is demonstrated through several numerical experiments. Last but not least, inspired by the performance of these methodologies and drawing on ideas from them, we aim to solve the image recovery problem in OI, which poses its own set of challenges, primarily due to the partial loss of phase information. To this end, we propose a sparsity-regularized non-convex optimization algorithm that is equipped with convergence guarantees and is adaptable to both monochromatic and hyperspectral OI imaging. We validate it through simulation results.
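
    As a toy illustration of the sparsity-regularized Fourier inverse problem underlying RI imaging (the binary mask, the l1-plus-positivity prior in the image domain, and all parameter values are simplifying assumptions; the thesis employs far more elaborate measurement operators, priors, and constraints):

    ```python
    import numpy as np

    def recover_image(vis, mask, lam=0.02, n_iter=300):
        """Proximal-gradient recovery of an image x from incompletely sampled
        Fourier measurements vis = mask * FFT(x) + noise, with an l1 prior
        plus positivity promoting a sparse, point-source-like sky."""
        x = np.zeros(mask.shape)
        tau = 1.0  # safe step: the masked unitary FFT has operator norm <= 1
        for _ in range(n_iter):
            residual = mask * np.fft.fft2(x, norm="ortho") - vis
            grad = np.real(np.fft.ifft2(mask * residual, norm="ortho"))
            # Prox of lam*||.||_1 restricted to x >= 0 (sky brightness).
            x = np.maximum(x - tau * grad - tau * lam, 0.0)
        return x

    # Toy sky of point sources, observing ~30% of the Fourier plane.
    rng = np.random.default_rng(4)
    sky = np.zeros((64, 64))
    sky[rng.integers(0, 64, 10), rng.integers(0, 64, 10)] = rng.uniform(1.0, 3.0, 10)
    mask = rng.random((64, 64)) < 0.3
    vis = mask * (np.fft.fft2(sky, norm="ortho") + 0.01 * rng.standard_normal((64, 64)))
    x_hat = recover_image(vis, mask)
    print("relative error:", np.linalg.norm(x_hat - sky) / np.linalg.norm(sky))
    ```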

    Connected Attribute Filtering Based on Contour Smoothness
