10 research outputs found

    Optimal Feature Extraction and Classification of Tensors via Matrix Product State Decomposition

    © 2015 IEEE. Big data consists of large multidimensional datasets that are often difficult to analyze when working with the original tensor. There is rising interest in the use of tensor decompositions for feature extraction, owing to their ability to extract the necessary features from a high-dimensional feature space. In this paper, the matrix product state (MPS) decomposition is used for feature extraction of large tensors. The novelty of the paper is the introduction of a single core tensor obtained from the MPS that not only contains a significantly reduced feature space, but can also perform classification with high accuracy without the need for feature selection methods.

    Matrix Product State for Higher-Order Tensor Compression and Classification

    © 2017 IEEE. This paper introduces matrix product state (MPS) decomposition as a new and systematic method to compress multidimensional data represented by higher-order tensors. It addresses two major bottlenecks in tensor compression: computational cost and compression quality. Regardless of tensor order, MPS compresses tensors to matrices of moderate dimension, which can be used for classification. Based mainly on a successive sequence of singular value decompositions, MPS is simple to implement and arrives at the globally optimal matrix, bypassing local alternating optimization, which is not only computationally expensive but also cannot yield the global solution. Benchmark results show that MPS achieves better classification performance at favorable computational cost compared to other tensor compression methods.
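    The "successive sequence of singular value decompositions" is, in spirit, the standard TT-SVD sweep: unfold the tensor, truncate an SVD, and carry the remainder to the next mode. A minimal NumPy sketch under that reading (the function names, the fixed truncation rank, and the reconstruction helper are illustrative, not the authors' code):

```python
import numpy as np

def mps_compress(tensor, max_rank):
    """Left-to-right TT-SVD sweep: unfold, truncate the SVD to max_rank,
    and carry S @ Vt forward to the next mode."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, dims[k], r))
        mat = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def mps_reconstruct(cores):
    """Contract the MPS chain back into a dense tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

    For a tensor whose balanced unfoldings truly have rank at most max_rank, the sweep is exact; otherwise each truncation is optimal in the Frobenius norm for that unfolding.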

    Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train

    © 1992-2012 IEEE. This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information in tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed, as well as two new algorithms for their solution. The first, called simple low-rank tensor completion via TT (SiLRTC-TT), is intimately related to minimizing a nuclear norm based on the TT rank. The second, called tensor completion by parallel matrix factorization via TT (TMac-TT), is based on a multilinear matrix factorization model that approximates the TT rank of a tensor. A tensor augmentation scheme that transforms a low-order tensor into a higher-order one is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of the proposed methods over all other methods.
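    The "well-balanced matricization scheme" splits an N-way tensor between its first k and remaining N-k modes, and nuclear-norm minimization is typically carried out by soft-thresholding the singular values of each such unfolding. A hedged NumPy sketch of one such pass (the function names, the single fixed threshold tau, and the plain averaging are simplifications; the paper's SiLRTC-TT uses weighted terms and a different variable splitting):

```python
import numpy as np

def svt(mat, tau):
    """Singular value soft-thresholding: the proximal operator of
    tau * (nuclear norm)."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

def silrtc_tt_step(x, observed, mask, tau):
    """One simplified SiLRTC-TT-style pass: threshold every balanced
    TT unfolding [d1..dk] x [dk+1..dN], average the folded results,
    then re-impose the known entries."""
    dims, n = x.shape, x.ndim
    acc = np.zeros_like(x)
    for k in range(1, n):
        rows = int(np.prod(dims[:k]))
        acc += svt(x.reshape(rows, -1), tau).reshape(dims)
    x_new = acc / (n - 1)
    x_new[mask] = observed[mask]
    return x_new
```

    Iterating this step keeps the observed entries fixed while pushing every balanced unfolding toward low rank.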

    Concatenated image completion via tensor augmentation and completion

    © 2016 IEEE. This paper proposes a novel framework called concatenated image completion via tensor augmentation and completion (ICTAC), which recovers missing entries of color images with high accuracy. Typical images are second- or third-order tensors (2D/3D), depending on whether they are grayscale or color, so tensor completion algorithms are ideal for their recovery. The proposed framework performs image completion by concatenating copies of a single image that has missing entries into a third-order tensor, applying a dimensionality augmentation technique to the tensor, utilizing a tensor completion algorithm to recover its missing entries, and finally extracting the recovered image from the tensor. The solution relies on two key components that have recently been proposed to take advantage of the tensor train (TT) rank: a tensor augmentation tool called ket augmentation (KA), which represents a low-order tensor by a higher-order one, and the algorithm tensor completion by parallel matrix factorization via tensor train (TMac-TT), which has been demonstrated to outperform state-of-the-art tensor completion algorithms. Simulation results for color image recovery show the clear advantage of our framework over current state-of-the-art tensor completion algorithms.
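    Ket augmentation can be pictured as re-addressing a 2^n x 2^n image by interleaved row/column bits, so that each mode of the resulting n-th-order tensor indexes one level of 2 x 2 blocks. A hedged sketch for a grayscale image (the exact bit ordering in the paper's KA may differ; this version is just one bijective re-indexing):

```python
import numpy as np

def ket_augment(image):
    """Reshape a 2**n x 2**n image into an n-th-order tensor whose k-th
    mode (size 4) indexes the 2x2 block choice at scale k."""
    n = int(np.log2(image.shape[0]))
    assert image.shape == (2 ** n, 2 ** n)
    t = image.reshape((2,) * (2 * n))            # row bits, then column bits
    order = [ax for pair in zip(range(n), range(n, 2 * n)) for ax in pair]
    return t.transpose(order).reshape((4,) * n)  # (row bit, col bit) -> base-4 digit

def ket_unaugment(t):
    """Invert ket_augment: recover the original 2**n x 2**n image."""
    n = t.ndim
    order = [ax for pair in zip(range(n), range(n, 2 * n)) for ax in pair]
    back = t.reshape((2,) * (2 * n)).transpose(np.argsort(order))
    return back.reshape(2 ** n, 2 ** n)
```

    Because the mapping is a pure permutation of entries, no information is lost; the gain is that spatially nearby pixels end up sharing low-order modes, which tends to lower the TT rank.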

    Efficient tensor completion: Low-rank tensor train

    This paper proposes a novel formulation of the tensor completion problem to impute missing entries of data represented by tensors. The formulation is introduced in terms of the tensor train (TT) rank, which can effectively capture the global information of tensors thanks to its construction from a well-balanced matricization scheme. Two algorithms are proposed to solve the corresponding tensor completion problem. The first, called simple low-rank tensor completion via tensor train (SiLRTC-TT), is intimately related to minimizing the TT nuclear norm. The second, called tensor completion by parallel matrix factorization via tensor train (TMac-TT), is based on a multilinear matrix factorization model that approximates the TT rank of the tensor. These algorithms are applied to complete both synthetic and real-world data tensors. Simulation results on synthetic data show that the proposed algorithms are efficient in estimating missing entries for tensors with either low Tucker rank or low TT rank, while Tucker-based algorithms are only comparable in the case of low Tucker rank tensors. When applied to recover color images represented by ninth-order tensors augmented from third-order ones, the proposed algorithms outperform the Tucker-based algorithms.
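    The factorization route replaces the nuclear norm with an explicit low-rank model X_k ≈ U V of each balanced unfolding, fitted by alternating least squares. A hedged single-unfolding sketch (TMac-TT factors all unfoldings in parallel with weights; the function name, lstsq updates, and imputation loop here are illustrative simplifications):

```python
import numpy as np

def tmac_tt_single(data, mask, k, rank, n_iter=300, seed=0):
    """Complete a tensor using ONE balanced TT unfolding [d1..dk] x [dk+1..dN]:
    alternate least-squares updates of U and V in X ~= U @ V, re-imposing
    the observed entries after every sweep."""
    dims = data.shape
    rows = int(np.prod(dims[:k]))
    x = np.where(mask, data, 0.0).reshape(rows, -1)
    m = mask.reshape(rows, -1)
    obs = data.reshape(rows, -1)
    u = np.random.default_rng(seed).standard_normal((rows, rank))
    for _ in range(n_iter):
        v = np.linalg.lstsq(u, x, rcond=None)[0]        # fix U, solve for V
        u = np.linalg.lstsq(v.T, x.T, rcond=None)[0].T  # fix V, solve for U
        x = u @ v
        x[m] = obs[m]                                   # keep known entries
    return x.reshape(dims)
```

    Each sweep costs two least-squares solves rather than a full SVD, which is the computational appeal of the factorization model over nuclear-norm thresholding.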

    Matrix Product State for Feature Extraction of Higher-Order Tensors

    This paper introduces matrix product state (MPS) decomposition as a computational tool for extracting features of multidimensional data represented by higher-order tensors. Regardless of tensor order, MPS extracts the relevant features into a so-called core tensor of maximum order three, which can be used for classification. Based mainly on a successive sequence of singular value decompositions (SVD), MPS is simple to implement, with no recursive procedure needed to optimize local tensors. It therefore leads to substantial computational savings compared to other tensor feature extraction methods such as the higher-order orthogonal iteration (HOOI) underlying the Tucker decomposition (TD). Benchmark results show that MPS can significantly reduce the feature space of the data while achieving better classification performance than HOOI.

    Two-hop power-relaying for linear wireless sensor networks

    © 2016 IEEE. This paper presents two-hop relay gain-scheduling control in a wireless sensor network to estimate a static target whose prior is characterized by a Gaussian probability distribution. The target is observed by a network of linear sensors, whose observations are transmitted via an amplify-and-forward relay node to a fusion center for the final estimation. We are concerned with the joint transmission power allocation for the sensors and the relay to optimize the minimum mean square error (MMSE) estimator deployed at the fusion center. In particular, such highly nonlinear optimization problems are solved by an iterative procedure of very low computational complexity. Simulations are provided to support the efficiency of the proposed power allocation.
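    The fusion rule assumed at the fusion center is the standard linear MMSE (Bayesian Gauss-Markov) estimator; the paper's actual contribution, the joint sensor/relay power allocation, is not reproduced here. A hedged sketch of just the estimator for a scalar Gaussian target (the symbols h, noise_cov, and sigma_x2 are generic placeholders, not the paper's notation):

```python
import numpy as np

def lmmse_estimate(y, h, sigma_x2, noise_cov):
    """MMSE estimate of a scalar target x ~ N(0, sigma_x2) from linear
    observations y = h * x + n with n ~ N(0, noise_cov); returns the
    estimate and its mean square error."""
    c_yy = sigma_x2 * np.outer(h, h) + noise_cov   # covariance of y
    w = sigma_x2 * np.linalg.solve(c_yy, h)        # Wiener gain C_xy C_yy^{-1}
    x_hat = w @ y
    mse = sigma_x2 - sigma_x2 * (h @ np.linalg.solve(c_yy, h)) * sigma_x2
    return x_hat, mse
```

    By the matrix inversion lemma, the MSE equals 1 / (1/sigma_x2 + h^T R^{-1} h), which is the quantity a power allocation scheme would minimize subject to power budgets, since the sensor and relay gains enter through h and R.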

    Infinite projected entangled pair states algorithm improved: Fast full update and gauge fixing

    © 2015 American Physical Society. The infinite projected entangled pair states (iPEPS) algorithm [J. Jordan, Phys. Rev. Lett. 101, 250602 (2008), doi:10.1103/PhysRevLett.101.250602] has become a useful tool in the calculation of ground-state properties of two-dimensional quantum lattice systems in the thermodynamic limit. Despite its many successful implementations, the method has some limitations in its present formulation which hinder its application to some highly entangled systems. The purpose of this paper is to unravel some of these issues, in turn enhancing the stability and efficiency of iPEPS methods. For this, we first introduce the fast full update scheme, where the effective environment and the iPEPS tensors are both updated simultaneously throughout the time evolution. As we shall show, this implies two crucial advantages: (i) dramatic computational savings and (ii) improved overall stability. In addition, we extend the application of local gauge fixing, successfully implemented for finite-size PEPS [M. Lubasch, Phys. Rev. B 90, 064425 (2014), doi:10.1103/PhysRevB.90.064425], to the iPEPS algorithm. We see that the gauge fixing not only further improves the stability of the method but also accelerates the convergence of the alternating least-squares sweeping in the (either "full" or "fast full") tensor update scheme. The improvement in terms of computational cost and stability of the resulting "improved" iPEPS algorithm is benchmarked by studying the ground-state properties of the quantum Heisenberg and transverse-field Ising models on an infinite square lattice.