    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international - Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building within walking distance of both hotels and town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    In pursuit of high resolution radar using pursuit algorithms

    Radar receivers typically employ matched filters designed to maximize the signal-to-noise ratio (SNR) in a single-target environment. In a multi-target environment, however, matched filter estimates of the target environment often contain spurious targets caused by radar signal sidelobes. As a result, matched filters are not suitable for high-resolution radars operating in multi-target environments. Assuming a point target model, we show that the radar problem can be formulated as an under-determined linear system with a sparse solution. This suggests that radar can be treated as a sparse signal recovery problem. However, it is shown that the sensing matrix obtained from common radar signals does not usually satisfy the mutual coherence condition, which implies that recovery techniques from the compressed sensing literature may not yield the optimal solution. In this thesis, we focus on the greedy algorithm approach to the problem and show that it naturally yields a quantitative measure of radar resolution. In addition, we show that the limitations of the greedy algorithms can be attributed to the close relation between greedy matching pursuit algorithms and the matched filter. This suggests that the resolution capability of the greedy pursuit algorithms can be improved by using a mismatched signal dictionary. In some cases, unlike the mismatched filter, the proposed mismatched pursuit algorithm is shown to offer improved resolution and stability without any noticeable difference in detection performance. Further improvements in resolution are proposed by using greedy algorithms in a radar system with multiple transmit waveforms. It is shown that while using greedy algorithms together with linear channel combining can yield significant resolution improvement, a greedy approach using nonlinear channel combining also shows some promise. Finally, a forward-backward greedy algorithm is proposed for target environments comprising both point targets and extended targets.
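
    As a rough illustration of the greedy pursuit approach described in this abstract, the sketch below runs an orthogonal matching pursuit loop on a generic under-determined system y ≈ Ax + n. The dictionary A, the `omp` helper, and the toy problem sizes are illustrative assumptions, not code or parameters from the thesis; note how the correlation step inside the loop is essentially a matched filter applied to the current residual.

    ```python
    import numpy as np

    def omp(A, y, n_targets, tol=1e-6):
        """Greedy (orthogonal) matching pursuit for y ~= A x with sparse x.

        A : (m, n) sensing matrix, columns = delayed/shifted copies of the waveform
        y : (m,) received signal
        n_targets : maximum number of point targets (sparsity level)
        """
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1], dtype=A.dtype)
        for _ in range(n_targets):
            # Correlate the residual with every dictionary column
            # (this step acts like a matched filter on the residual).
            correlations = np.abs(A.conj().T @ residual)
            correlations[support] = 0              # do not reselect chosen atoms
            k = int(np.argmax(correlations))
            support.append(k)
            # Least-squares fit on the current support, then update the residual.
            coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coeffs
            if np.linalg.norm(residual) < tol:
                break
        x[support] = coeffs
        return x

    # Toy example: 3 point targets observed through an under-determined system.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 256))
    A /= np.linalg.norm(A, axis=0)                 # unit-norm columns
    x_true = np.zeros(256)
    x_true[[10, 50, 200]] = [1.0, -0.7, 0.5]
    y = A @ x_true + 0.01 * rng.standard_normal(64)
    x_hat = omp(A, y, n_targets=3)
    ```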

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ2-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
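
    The forward-backward proximal splitting scheme mentioned in point (iii) alternates an explicit gradient step on the smooth data-fidelity term with a proximal step on the non-smooth regularizer. The sketch below is a minimal version for the sparsity prior, i.e. min_x 0.5*||Ax - y||^2 + lam*||x||_1; the function names and the fixed step size 1/||A||^2 are assumptions made for illustration, not taken from the chapter, which treats general partly smooth regularizers.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (the backward step for the sparsity prior)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def forward_backward_l1(A, y, lam, n_iter=500):
        """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        step = 1.0 / L
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)                           # forward (gradient) step
            x = soft_threshold(x - step * grad, step * lam)    # backward (proximal) step
        return x
    ```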

    Coding of synthetic aperture radar data
