
    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
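    To make the tensor train (TT) format mentioned above concrete, the sketch below factorizes a dense array into a chain of third-order cores by sequential truncated SVDs (a TT-SVD-style procedure). The function name tt_svd and the fixed max_rank truncation are illustrative choices, not code from the monograph.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Minimal TT-SVD-style sketch: sequentially reshape and truncate via SVD to obtain TT cores."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(S))
        cores.append(U[:, :r_new].reshape(rank, dims[k], r_new))            # k-th TT core
        mat = (np.diag(S[:r_new]) @ Vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, dims[-1], 1))                            # last core
    return cores

# Example: a 4-way array compressed into four small cores of TT rank at most 3.
cores = tt_svd(np.random.rand(4, 5, 6, 7), max_rank=3)
```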

    Estimation and detection techniques for doubly-selective channels in wireless communications

    A fundamental problem in communications is the estimation of the channel. The signal transmitted through a communications channel undergoes distortions, so that it is often received in an unrecognizable form at the receiver. The receiver must expend significant signal-processing effort to decode the transmitted signal from this received signal. This processing requires knowledge of how the channel distorts the transmitted signal, i.e. channel knowledge. To maintain a reliable link, the channel must be estimated and tracked by the receiver. The estimation of the channel at the receiver often proceeds by transmission of a signal called the 'pilot', which is known a priori to the receiver. The receiver forms its estimate of the channel from how this known signal is distorted, i.e. it estimates the channel from the received signal and the pilot. The design of the pilot is a function of the modulation, the type of training and the channel. [Continues.]
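    As a concrete illustration of the pilot-based estimation described above, the sketch below passes a known pilot sequence through an unknown FIR channel and recovers the taps by least squares. The QPSK pilots, tap count and noise level are invented for illustration; the doubly-selective (time- and frequency-varying) models treated in the thesis are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 64                                    # illustrative: 4 channel taps, 64 pilot symbols
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
pilot = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)   # known QPSK pilot sequence

# Received signal: pilot convolved with the unknown channel, plus noise.
noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = np.convolve(pilot, h)[:N] + noise

# Least-squares channel estimate from the known pilot and the received signal.
P = np.column_stack([np.concatenate([np.zeros(k), pilot[:N - k]]) for k in range(L)])
h_hat, *_ = np.linalg.lstsq(P, y, rcond=None)
```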

    Convolutive Blind Source Separation Methods

    In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy within which many of the existing algorithms can be organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks.
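    For readers unfamiliar with the term, the toy sketch below constructs a convolutive mixture: each microphone observes every source filtered by its own impulse response, which is the model the surveyed separation algorithms aim to invert. The sizes and filter values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, n_mic, n_samples, filt_len = 2, 2, 8000, 64        # illustrative dimensions
s = rng.standard_normal((n_src, n_samples))               # unknown source signals
h = 0.1 * rng.standard_normal((n_mic, n_src, filt_len))   # room impulse responses h[m, q]

# Convolutive mixing: microphone m hears the sum of all sources, each filtered by h[m, q].
x = np.stack([
    sum(np.convolve(s[q], h[m, q])[:n_samples] for q in range(n_src))
    for m in range(n_mic)
])
# Blind source separation seeks to recover s from x without knowing h.
```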

    Mathematical analysis of super-resolution methodology

    The attainment of super-resolution (SR) from a sequence of degraded undersampled images can be viewed as reconstruction of the high-resolution (HR) image from a finite set of its projections on a sampling lattice. This can then be formulated as an optimization problem whose solution is obtained by minimizing a cost function. The approaches adopted to solve the formulated optimization problem, and their analysis, are crucial. The image acquisition scheme is important in the modeling of the degradation process. Model accuracy is essential for the attainment of SR, as is the design of an algorithm whose robust implementation will produce the desired quality in the presence of model-parameter uncertainty. To keep the presentation focused and of reasonable size, data acquisition with multiple sensors, rather than, say, a video camera, is considered.
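    To illustrate the optimization framing in the abstract, here is a schematic gradient-descent sketch: each low-resolution frame y_k is modeled as a linear degradation A_k (warp, blur and downsampling) of the flattened HR image x, and x is recovered by minimizing a regularized least-squares cost. The operators, the Tikhonov regularizer and the step size are illustrative choices, not the paper's specific model.

```python
import numpy as np

def sr_gradient_descent(lr_frames, operators, x0, lam=1e-2, step=1e-3, iters=500):
    """Schematic SR sketch: minimize sum_k ||A_k x - y_k||^2 + lam * ||x||^2 by gradient descent.
    lr_frames: list of flattened low-resolution frames y_k.
    operators: list of matrices A_k mapping the flattened HR image to each frame."""
    x = x0.copy()
    for _ in range(iters):
        grad = 2.0 * lam * x                              # gradient of the regularizer
        for A_k, y_k in zip(operators, lr_frames):
            grad += 2.0 * A_k.T @ (A_k @ x - y_k)         # data-fidelity gradient per frame
        x -= step * grad
    return x
```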

    Cyclosparsity: A New Concept for Sparse Deconvolution

    Periodic random impulse signals are appropriate tools for several situations of interest and are a natural way of modeling highly localized events occurring randomly at given times. Nevertheless, the impulses are generally hidden and swallowed up in noise because of unwanted convolution. The resulting signal is thus not legible and may lead to erroneous analysis; hence the need for deconvolution to restore the random periodic impulses. The main purpose of this study is to introduce the concept of cyclic sparsity, or cyclosparsity, in a deconvolution framework for signals that are jointly sparse and cyclostationary, like periodic random impulses. Indeed, related works in this area exploit only one property, either sparsity or cyclostationarity, and never both properties together. The key feature of the cyclosparsity concept is that it gathers both properties to better characterize this kind of signal. We show that deconvolution based on the cyclic sparsity hypothesis improves performance and significantly reduces the computational cost as well. Finally, we use computer simulations to investigate the behavior, in a deconvolution framework, of the algorithms Matching Pursuit (MP) [13], Orthogonal Matching Pursuit (OMP) [14], Orthogonal Least Squares (OLS) [15] and Single Best Replacement (SBR) [19, 20, 21], and of the proposed extensions to the cyclic sparsity context: Cyclo-MP, Cyclo-OMP, Cyclo-OLS and Cyclo-SBR.
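    As a reference point for the greedy algorithms the study extends, here is a minimal NumPy sketch of standard Orthogonal Matching Pursuit (OMP): at each step it selects the dictionary atom most correlated with the current residual, then refits all selected atoms by least squares. How the Cyclo- variants exploit the cyclic structure of the impulse positions is specific to the paper and not reproduced here.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Standard Orthogonal Matching Pursuit (illustrative sketch).
    A: dictionary matrix, y: observed signal, n_nonzero: sparsity level."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))                     # most correlated atom
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)     # refit on the support
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x, support
```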

    Circulant singular spectrum analysis: a new automated procedure for signal extraction

    Sometimes it is of interest to single out the fluctuations associated with a given frequency. We propose a new variant of SSA, Circulant SSA (CiSSA), that allows the extraction of the signal associated with any frequency specified beforehand. This is a novelty when compared with other SSA procedures, which need to identify ex post the frequencies associated with the extracted signals. We prove that CiSSA is asymptotically equivalent to these alternative procedures, with the advantage of avoiding the need for subsequent frequency identification. We check its good performance and compare it with alternative SSA methods through several simulations on linear and nonlinear time series. We also prove its validity in the nonstationary case. We apply CiSSA in two different fields to show how it works with real data and find that it behaves successfully in both applications. Finally, we compare the performance of CiSSA with other state-of-the-art techniques used for nonlinear and nonstationary signals with amplitude and frequency varying in time.
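    The abstract does not spell out the algorithm, but the circulant idea can be sketched as follows: since the eigenvectors of a circulant matrix are Fourier vectors, the trajectory matrix can be projected onto the Fourier direction(s) associated with a chosen frequency and then diagonal-averaged back into a time series. The function below is a schematic reading of that idea, not the authors' implementation.

```python
import numpy as np

def cissa_like_extract(x, L, target_freq):
    """Extract the component of series x near target_freq (cycles/sample) with window length L.
    Schematic sketch of the circulant-SSA idea, not the paper's exact procedure."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[j:j + L] for j in range(K)])   # trajectory (Hankel) matrix
    k = int(round(target_freq * L)) % L                   # Fourier bin nearest the target frequency
    bins = sorted({k, (L - k) % L})                       # keep the conjugate pair -> real output
    U = np.exp(2j * np.pi * np.outer(np.arange(L), bins) / L) / np.sqrt(L)
    Xk = (U @ U.conj().T @ X).real                        # project onto the Fourier directions
    # Diagonal averaging (Hankelization) back to a length-N series.
    y, counts = np.zeros(N), np.zeros(N)
    for i in range(L):
        for j in range(K):
            y[i + j] += Xk[i, j]
            counts[i + j] += 1
    return y / counts
```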