80 research outputs found

    Robust MIMO Channel Estimation from Incomplete and Corrupted Measurements

    Get PDF
    Location-aware communication is one of the enabling techniques for future 5G networks. It requires accurate temporal and spatial channel estimation from multidimensional data. Most existing channel estimation techniques assume that the measurements are complete and that the noise is Gaussian, which makes them brittle to corrupted or outlying measurements, ubiquitous in real applications. To address these issues, we develop an $\ell_p$-norm minimization based iteratively reweighted higher-order singular value decomposition algorithm. It is robust to Gaussian as well as impulsive noise, even when the measurement data is incomplete. Compared with state-of-the-art techniques, the proposed approach achieves more accurate estimation results.
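
    The reweighting idea behind such $\ell_p$-norm minimization can be sketched in a few lines. The following is a minimal iteratively reweighted least squares (IRLS) loop on a plain linear model, not the authors' IR-HOSVD algorithm; the function name `irls_lp` and all problem sizes are illustrative assumptions. It shows how residual-dependent weights suppress impulsive outliers.

```python
# Minimal IRLS sketch for lp-norm fitting (0 < p <= 2); illustrative only,
# not the paper's IR-HOSVD algorithm. Large residuals receive small weights,
# so sparse outliers (impulsive noise) barely influence the fit.
import numpy as np

def irls_lp(A, y, p=1.0, n_iter=50, eps=1e-6):
    """Approximately solve min_x ||A x - y||_p^p by reweighted least squares."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]        # ordinary LS initialization
    for _ in range(n_iter):
        r = A @ x - y
        w = (r**2 + eps) ** (p / 2 - 1)             # lp weights: w_i ~ |r_i|^(p-2)
        Aw = A * w[:, None]                         # row-scaled design, i.e. W A
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)     # solve A^T W A x = A^T W y
    return x

# Hypothetical usage: a fit that survives sparse outliers in y.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
y = A @ rng.standard_normal(10)
y[rng.choice(200, size=10, replace=False)] += 5.0   # impulsive corruption
x_hat = irls_lp(A, y, p=1.0)
```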

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Full text link
    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
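
    As a concrete illustration of the tensor train format emphasized above, the sketch below implements the standard TT-SVD sweep (truncated SVDs of successive unfoldings) followed by the left-to-right contraction of the resulting cores. This is a generic textbook construction rather than code from the monograph; `tt_svd`, `max_rank`, and the test shape are illustrative assumptions.

```python
# Sketch of TT-SVD: factor a dense tensor into tensor-train cores by a
# sweep of truncated SVDs, then reconstruct by contracting the cores.
# Generic illustration; ranks are capped crudely at max_rank.
import numpy as np

def tt_svd(X, max_rank):
    dims, cores, r_prev = X.shape, [], 1
    mat = X.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                        # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))       # final core
    return cores

# Contract the train back into a full tensor (boundary ranks are 1).
X = np.random.default_rng(1).standard_normal((4, 5, 6, 7))
cores = tt_svd(X, max_rank=3)
Y = cores[0]
for G in cores[1:]:
    Y = np.tensordot(Y, G, axes=(Y.ndim - 1, 0))         # chain the cores
Y = Y.reshape(X.shape)                                   # drop boundary ranks
```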

    Low-rank Tensor Recovery

    Get PDF
    Low-rank tensor recovery is an interesting subject from both theoretical and application points of view. On one side, it is a natural extension of the sparse vector and low-rank matrix recovery problem. On the other side, estimating a low-rank tensor has applications in many different areas such as machine learning, video compression, and seismic data interpolation. In this thesis, two approaches are introduced. The first approach is a convex optimization approach and could be considered as a tractable extension of $\ell_1$-minimization for sparse vector recovery and nuclear norm minimization for matrix recovery to the tensor scenario. It is based on theta bodies – a recently introduced tool from real algebraic geometry. In particular, theta bodies of an appropriately defined polynomial ideal correspond to the unit theta-norm balls. These unit theta-norm balls are relaxations of the unit tensor nuclear norm ball. Thus, in this case, we consider a canonical tensor format. The method requires computing the reduced Groebner basis (with respect to the graded reverse lexicographic ordering) of the appropriately defined polynomial ideal. Numerical results for third-order tensor recovery via the $\theta_1$-norm are provided. The second approach is a generalization of the iterative hard thresholding algorithm for sparse vector and low-rank matrix recovery to the tensor scenario (the tensor IHT or TIHT algorithm). Here, we consider the Tucker format, the tensor train decomposition, and the hierarchical Tucker decomposition. The analysis of the algorithm is based on a version of the restricted isometry property (tensor RIP or TRIP) adapted to the tensor decomposition at hand. We show that subgaussian measurement ensembles satisfy TRIP with high probability under an almost optimal condition on the number of measurements. Additionally, we show that partial Fourier maps combined with random sign flips of the tensor entries satisfy TRIP with high probability. Under the assumption that the linear operator satisfies TRIP and under an additional assumption on the thresholding operator, we provide a linear convergence result for the TIHT algorithm. Finally, we present numerical results on the recovery of low-Tucker-rank third-order tensors via partial Fourier maps combined with random sign flips of the tensor entries, tensor completion, and Gaussian measurement ensembles.
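
    To fix ideas, here is a minimal sketch of the TIHT iteration for the Tucker format: a gradient step on the least-squares data-fit term followed by truncated-HOSVD thresholding. Truncated HOSVD is only a quasi-optimal projection onto low Tucker rank, which is exactly why the convergence result above needs an additional assumption on the thresholding operator. All names, dimensions, and the Gaussian ensemble setup are illustrative assumptions, not the thesis's code.

```python
# Sketch of tensor IHT (TIHT) for the Tucker format:
#   X <- H_r(X + A^T (y - A vec(X))),  H_r = truncated HOSVD.
# Illustrative assumptions throughout; step size and stopping are simplified.
import numpy as np

def mode_mult(X, M, mode):
    """Mode product X x_mode M, with M of shape (new_dim, X.shape[mode])."""
    return np.moveaxis(np.tensordot(X, M, axes=(mode, 1)), -1, mode)

def hosvd_truncate(X, ranks):
    """Thresholding operator: truncated HOSVD to Tucker ranks `ranks`."""
    Us = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        Us.append(U[:, :r])
    G = X
    for mode, U in enumerate(Us):
        G = mode_mult(G, U.T, mode)          # compress each mode to rank r
    for mode, U in enumerate(Us):
        G = mode_mult(G, U, mode)            # map back to full size
    return G

def tiht(A, y, shape, ranks, n_iter=200):
    X = np.zeros(shape)
    for _ in range(n_iter):
        grad = A.T @ (y - A @ X.ravel())     # gradient of 0.5 * ||y - A vec(X)||^2
        X = hosvd_truncate(X + grad.reshape(shape), ranks)
    return X

# Hypothetical Gaussian measurement ensemble (these satisfy TRIP w.h.p.).
rng = np.random.default_rng(0)
shape, ranks = (8, 8, 8), (2, 2, 2)
X_true = rng.standard_normal(ranks)          # random low-Tucker-rank target
for mode in range(3):
    X_true = mode_mult(X_true, rng.standard_normal((8, 2)), mode)
m = 250
A = rng.standard_normal((m, X_true.size)) / np.sqrt(m)
y = A @ X_true.ravel()
X_hat = tiht(A, y, shape, ranks)
```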

    Automated Gene Classification using Nonnegative Matrix Factorization on Biomedical Literature

    Get PDF
    Understanding functional gene relationships is a challenging problem for biological applications. High-throughput technologies such as DNA microarrays have inundated biologists with a wealth of information; however, processing that information remains problematic. To help with this problem, researchers have begun applying text mining techniques to the biological literature. This work extends previous work based on Latent Semantic Indexing (LSI) by examining Nonnegative Matrix Factorization (NMF). Whereas LSI incorporates the singular value decomposition (SVD) to approximate data in a dense, mixed-sign space, NMF produces a parts-based factorization that is directly interpretable. This space can, in theory, be used to augment existing ontologies and annotations by identifying themes within the literature. Of course, performing NMF does not come without a price, namely the large number of parameters. This work analyzes the effects of some of the NMF parameters on both convergence and labeling accuracy. Since there is a dearth of automated label evaluation techniques as well as “gold standard” hierarchies, a method to produce “correct” trees is proposed, along with a technique to label trees and to evaluate those labels.
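
    For readers new to the factorization itself, a minimal NMF with the classical Lee-Seung multiplicative updates (for the Frobenius-norm objective) looks as follows. The term-document matrix, rank k, and iteration budget are illustrative assumptions; this is not the specific implementation evaluated in the work.

```python
# Minimal NMF sketch with Lee-Seung multiplicative updates: V ~ W @ H,
# all factors nonnegative, so columns of W act as interpretable "themes".
# Illustrative only; real text-mining runs tune rank, init, and stopping.
import numpy as np

def nmf(V, k, n_iter=200, eps=1e-9, seed=0):
    """Factor a nonnegative matrix V (terms x documents) as W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update theme-document loadings
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update term-theme basis
    return W, H

# Hypothetical usage: the top-weighted terms per column of W label each theme.
V = np.abs(np.random.default_rng(1).standard_normal((100, 30)))
W, H = nmf(V, k=5)
top_terms = np.argsort(-W, axis=0)[:10]        # indices of top terms per theme
```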