
    A simultaneous sparse approximation method for multidimensional harmonic retrieval

    No full text
    In this paper, a new method for estimating the parameters of multidimensional (R-D) harmonic and damped complex signals in noise is presented. The problem is formulated as R simultaneous sparse approximations of multiple 1-D signals. To obtain a method that can handle large signals while maintaining sufficient resolution, a multigrid dictionary refinement technique is combined with the simultaneous sparse approximation. The refinement procedure is proved to converge in the single R-D mode case. For the general multiple-mode case, the signal tensor model is decomposed so that each mode is handled separately in an iterative scheme. The proposed method does not require an association step, since the estimated modes are automatically "paired". We also derive the Cramér-Rao lower bounds for the parameters of modal R-D signals; the expressions are given in compact form for the single-tone case. Finally, numerical simulations demonstrate the effectiveness of the proposed method.
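    As a rough illustration of the grid-based approach described above, the sketch below estimates a single 1-D damped mode by orthogonal matching pursuit over a dictionary of sampled damped exponentials, then refines the grid around the selected atom (the multigrid idea). It is a minimal stand-in, not the paper's R-D algorithm; all function names, grid sizes, and signal parameters are illustrative assumptions.

```python
import numpy as np

def mode_dictionary(freqs, damps, n):
    """Columns are unit-norm sampled damped exponentials exp((d + 2j*pi*f) * t)."""
    t = np.arange(n)[:, None]
    grid = np.array([(f, d) for f in freqs for d in damps])
    atoms = np.exp(t * (grid[:, 1] + 2j * np.pi * grid[:, 0]))
    return atoms / np.linalg.norm(atoms, axis=0), grid

def omp(y, D, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.conj().T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    return support

# Illustrative single damped tone in noise (parameters assumed).
rng = np.random.default_rng(0)
n, f_true, d_true = 64, 0.2, -0.02
y = np.exp(np.arange(n) * (d_true + 2j * np.pi * f_true))
y += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Coarse grid first, then a refined grid around the selected atom.
D, grid = mode_dictionary(np.linspace(0, 0.5, 32), np.linspace(-0.05, 0, 8), n)
f0, d0 = grid[omp(y, D, 1)[0]]
D, grid = mode_dictionary(np.linspace(f0 - 0.01, f0 + 0.01, 32),
                          np.linspace(d0 - 0.005, d0 + 0.005, 8), n)
print("refined (freq, damping):", grid[omp(y, D, 1)[0]])
```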

    ESPRIT for multidimensional general grids

    Full text link
    We present a new method for complex frequency estimation in several variables, extending the classical (1-D) ESPRIT algorithm. We also consider how to work with data sampled on non-standard domains (i.e., going beyond multi-rectangles).
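    For reference, here is a minimal sketch of the classical 1-D ESPRIT algorithm that this work generalizes, assuming uniformly sampled data; the function name and test signal are illustrative, not from the paper.

```python
import numpy as np

def esprit_1d(y, n_modes, L=None):
    """Classical 1-D ESPRIT: estimate signal poles from uniform samples."""
    L = L or len(y) // 2
    # Hankel matrix: each column is a length-L sliding window of the data.
    H = np.lib.stride_tricks.sliding_window_view(y, L).T
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :n_modes]                       # signal subspace
    # Shift invariance: Us[:-1] @ Phi ~= Us[1:], eigenvalues of Phi are the poles.
    Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    return np.linalg.eigvals(Phi)

# Illustrative example: two undamped tones in light noise (values assumed).
rng = np.random.default_rng(0)
t = np.arange(128)
y = np.exp(2j * np.pi * 0.1 * t) + 0.5 * np.exp(2j * np.pi * 0.23 * t)
y += 0.01 * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
poles = esprit_1d(y, 2)
print("estimated frequencies:", np.sort(np.angle(poles) / (2 * np.pi)))
```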

    Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis

    Full text link
    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
    Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, tensor train
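    As a concrete example of the basic Canonical Polyadic model mentioned above, here is a minimal alternating-least-squares CPD for a 3-way tensor. It is a textbook-style sketch rather than code from the paper; the function name, iteration count, and synthetic test case are assumptions.

```python
import numpy as np

def cp_als(T, rank, n_iter=200):
    """Canonical polyadic decomposition of a 3-way tensor via alternating LS."""
    dims = T.shape
    rng = np.random.default_rng(0)
    A = [rng.standard_normal((d, rank)) for d in dims]
    for _ in range(n_iter):
        for m in range(3):
            # Khatri-Rao product of the two factor matrices being held fixed.
            o = [i for i in range(3) if i != m]
            KR = (A[o[0]][:, None, :] * A[o[1]][None, :, :]).reshape(-1, rank)
            # Mode-m unfolding of T, then a least-squares update of factor m.
            Tm = np.moveaxis(T, m, 0).reshape(dims[m], -1)
            A[m] = Tm @ np.linalg.pinv(KR.T)
    return A

# Illustrative test: recover an exact rank-2 synthetic tensor (sizes assumed).
rng = np.random.default_rng(1)
U, V, W = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
T = np.einsum('ir,jr,kr->ijk', U, V, W)
A = cp_als(T, 2)
T_hat = np.einsum('ir,jr,kr->ijk', *A)
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```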

    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Get PDF
    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning, and have been utilized by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex datasets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other.
    Comment: 24 pages, 8 figures, 1 table. Accepted for publication in "Advances in Astronomy", special issue "Robotic Astronomy".

    Recovery under Side Constraints

    Full text link
    This paper addresses sparse signal reconstruction under various types of structural side constraints, with applications in multi-antenna systems. Side constraints may result from prior information on the measurement system and the sparse signal structure. They may involve the structure of the sensing matrix, the structure of the non-zero support values, the temporal structure of the sparse representation vector, and the nonlinear measurement structure. First, we demonstrate how a priori information in the form of structural side constraints influences recovery guarantees (null space properties) for L1-minimization. Furthermore, for constant-modulus signals, signals with row-, block- and rank-sparsity, as well as non-circular signals, we illustrate how structural prior information can be used to devise efficient algorithms with improved recovery performance and reduced computational complexity. Finally, we address measurement system design for linear and nonlinear measurements of sparse signals. We discuss linear mixing matrix design based on coherence minimization, and then extend our focus to nonlinear measurement systems, where we design parallel optimization algorithms to efficiently compute stationary points in the sparse phase retrieval problem with and without dictionary learning.
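    To make the L1-minimization baseline concrete, the sketch below runs iterative soft-thresholding (ISTA) on a random Gaussian sensing matrix; a structural side constraint such as block sparsity would replace the elementwise shrinkage with a structured proximal step. This is a generic illustration, not the paper's algorithm; all dimensions and parameters are assumed.

```python
import numpy as np

def ista(y, A, lam, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)             # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # L1 prox
    return x

# Illustrative example: a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
idx = rng.choice(100, 5, replace=False)
x_true = np.zeros(100)
x_true[idx] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(y, A, lam=0.01)
print("true support:     ", np.sort(idx))
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.05))
```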

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Full text link
    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
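    As a small illustration of the tensor train format discussed here, the following sketch implements the standard TT-SVD procedure, which factors a d-way array into a chain of 3-way cores by successive truncated SVDs of unfoldings. It is a generic textbook algorithm rather than code from the monograph; the rank cap and test tensor are assumptions.

```python
import numpy as np

def tt_svd(T, max_rank):
    """TT-SVD: decompose a d-way tensor into a train of 3-way cores."""
    dims, cores, r = T.shape, [], 1
    M = np.asarray(T).reshape(1, -1)
    for k in range(len(dims) - 1):
        M = M.reshape(r * dims[k], -1)               # current unfolding
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = min(max_rank, len(s))                    # truncate the rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        M = s[:rk, None] * Vt[:rk]                    # carry the remainder
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))           # last core
    return cores

# Illustrative test: a TT-rank-2 4-way tensor is recovered exactly (sizes assumed).
rng = np.random.default_rng(0)
G = [rng.standard_normal((1, 4, 2)), rng.standard_normal((2, 4, 2)),
     rng.standard_normal((2, 4, 2)), rng.standard_normal((2, 4, 1))]
T = np.einsum('aib,bjc,ckd,dle->ijkl', *G)
cores = tt_svd(T, max_rank=2)
T_hat = np.einsum('aib,bjc,ckd,dle->ijkl', *cores)
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```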