
    Positive contraction mappings for classical and quantum Schrödinger systems

    The classical Schrödinger bridge seeks the most likely probability law for a diffusion process, in path space, that matches marginals at two end points in time; the likelihood is quantified by the relative entropy between the sought law and a prior, and the law dictates a controlled path that abides by the specified marginals. Schrödinger proved that the optimal steering of the density between the two end points is effected by a multiplicative functional transformation of the prior; this transformation represents an automorphism on the space of probability measures and has since been studied by Fortet, Beurling and others. A similar question can be raised for processes evolving in a discrete time and space as well as for processes defined over non-commutative probability spaces. The present paper builds on earlier work by Pavon and Ticozzi and begins with the problem of steering a Markov chain between given marginals. Our approach is based on the Hilbert metric and leads to an alternative proof which, however, is constructive. More specifically, we show that the solution to the Schrödinger bridge is provided by the fixed point of a contractive map. We approach in a similar manner the steering of a quantum system across a quantum channel. We are able to establish existence of quantum transitions that are multiplicative functional transformations of a given Kraus map, but only for the case of uniform marginals. As in the Markov chain case, and for uniform density matrices, the solution of the quantum bridge can be constructed from the fixed point of a certain contractive map. For arbitrary marginal densities, extensive numerical simulations indicate that iteration of a similar map leads to fixed points from which we can construct a quantum bridge. For this general case, however, a proof of convergence remains elusive.
    Comment: 27 pages
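For the discrete one-step case the abstract describes, the fixed-point construction can be sketched as a Sinkhorn-style scaling iteration on the prior joint law; the function name and the particular update order below are illustrative assumptions, not the paper's code, but the marginal-matching fixed point is the bridge the abstract refers to:

```python
import numpy as np

def schrodinger_bridge(P, mu0, mu1, n_iter=500, tol=1e-12):
    """Fixed-point (Sinkhorn-style) iteration for a one-step discrete
    Schrodinger bridge.

    P   : prior row-stochastic transition matrix (n x n)
    mu0 : prescribed initial marginal
    mu1 : prescribed final marginal
    Returns a joint law Q whose marginals are mu0 and mu1 and which is a
    multiplicative (diagonal) transformation of the prior joint law.
    """
    K = mu0[:, None] * P                  # prior joint law over (x0, x1)
    u = np.ones_like(mu0)
    v = np.ones_like(mu1)
    for _ in range(n_iter):
        u = mu0 / (K @ v)                 # scale rows to match mu0
        v_new = mu1 / (K.T @ u)           # scale columns to match mu1
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return u[:, None] * K * v[None, :]    # bridge joint law
```

The iteration is a contraction in the Hilbert projective metric when the prior kernel is strictly positive, which is what guarantees the fixed point exists and is unique up to scaling.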

    Advances in independent component analysis and nonnegative matrix factorization

    A fundamental problem in machine learning research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis (PCA), factor analysis, and projection pursuit. In this thesis, we consider two popular and widely used techniques: independent component analysis (ICA) and nonnegative matrix factorization (NMF). ICA is a statistical method in which the goal is to find a linear representation of nongaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. Starting from ICA, several methods of estimating the latent structure in different problem settings are derived and presented in this thesis. FastICA, as one of the most efficient and popular ICA algorithms, is reviewed and discussed; its local and global convergence and statistical behavior are studied further. A nonnegative FastICA algorithm is also given in this thesis. Nonnegative matrix factorization is a recently developed technique for finding parts-based, linear representations of nonnegative data. It is a method for dimensionality reduction that respects the nonnegativity of the input data while constructing a low-dimensional approximation. The nonnegativity constraints make the representation purely additive (allowing no subtractions), in contrast to many other linear representations such as principal component analysis and independent component analysis. A literature survey of nonnegative matrix factorization is given in this thesis, and a novel method called Projective Nonnegative Matrix Factorization (P-NMF) and its applications are presented.
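The purely additive NMF representation contrasted with PCA/ICA above can be illustrated with the standard multiplicative-update scheme for the Frobenius loss; this is a minimal sketch of the generic algorithm (not the thesis's own P-NMF variant), with illustrative function names:

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates for the factorization V ~ W @ H with
    W, H >= 0, minimizing the Frobenius reconstruction error.

    Nonnegativity is preserved automatically: each update multiplies the
    current factor elementwise by a ratio of nonnegative quantities, so no
    subtractions ever occur, which is what makes the representation
    purely additive.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps          # nonnegative random init
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

On an exactly low-rank nonnegative matrix the reconstruction error drives toward zero; on real data the factors W tend to pick out localized, parts-based features, which is the behavior the abstract highlights.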

    RTL Implementation of image compression techniques in WSN

    Wireless sensor networks have limitations regarding data redundancy and power, and require high bandwidth when used for multimedia data. Image compression methods overcome these problems. The Non-negative Matrix Factorization (NMF) method is useful for approximating high-dimensional data where the data have nonnegative components. A variant of NMF called Projective Nonnegative Matrix Factorization (PNMF) is used for learning spatially localized visual patterns. Simulation results show a comparison between the SVD, NMF, and PNMF compression schemes. Compressed images are transmitted from the base station to the cluster head node and received from ordinary nodes. The station carries out the image restoration. Image quality, compression ratio, signal-to-noise ratio, and energy consumption are the essential metrics measured for compression performance. In this paper, the compression methods are designed using Matlab. Parameters such as PSNR and the total node energy consumption are calculated. RTL schematics of the NMF, SVD, and PNMF methods are generated using Verilog HDL.
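The paper's quality metrics are standard; as a hedged sketch (the paper itself works in Matlab and Verilog HDL, so these Python function names are illustrative only), rank-k SVD compression and the PSNR metric it is scored by look like this:

```python
import numpy as np

def svd_compress(img, k):
    """Keep only the k dominant singular triplets of an image matrix."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k]      # rank-k reconstruction

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bits, compressed_bits):
    """Ratio of raw image size to the size of the stored factors."""
    return original_bits / compressed_bits
```

Storing the rank-k factors costs k*(m + n + 1) values instead of m*n, which is where the compression ratio comes from; PSNR then quantifies how much image quality that truncation sacrificed.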

    Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data
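The sample-then-compress pipeline the abstract describes (random sampling to capture the range, followed by a deterministic factorization of the small compressed matrix) can be sketched in a few lines; this is the basic prototype scheme, with an illustrative function name and a small oversampling parameter p:

```python
import numpy as np

def randomized_svd(A, k, p=5, seed=0):
    """Prototype randomized algorithm for a rank-k truncated SVD.

    Stage 1: sample the range of A with a Gaussian test matrix and
    orthonormalize, giving a basis Q that captures most of A's action.
    Stage 2: compress A to that subspace and apply a deterministic SVD
    to the small (k+p) x n matrix.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal range basis
    B = Q.T @ A                               # small compressed matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                                # lift back to m dimensions
    return U[:, :k], s[:k], Vt[:k]
```

The expensive contact with A is two matrix multiplies (A @ Omega and Q.T @ A), which is why the approach parallelizes well and can be reorganized into a constant number of passes over data too large for fast memory.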

    The classical-quantum boundary for correlations: discord and related measures

    One of the best signatures of nonclassicality in a quantum system is the existence of correlations that have no classical counterpart. Different methods for quantifying the quantum and classical parts of correlations have been among the most actively studied topics of quantum information theory over the past decade. Entanglement is the most prominent of these correlations, but in many cases unentangled states exhibit nonclassical behavior too. Thus, distinguishing quantum correlations other than entanglement provides a better division between the quantum and classical worlds, especially when considering mixed states. Here we review different notions of classical and quantum correlations quantified by quantum discord and other related measures. In the first half, we review the mathematical properties of the measures of quantum correlations, relate them to each other, and discuss the classical-quantum division that is common among them. In the second half, we show that the measures identify and quantify the deviation from classicality in various quantum-information-processing tasks, quantum thermodynamics, open-system dynamics, and many-body physics. We show that in many cases quantum correlations indicate an advantage of quantum methods over classical ones.
    Comment: Close to the published version
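Discord itself requires an optimization over local measurements, but its starting ingredient, the quantum mutual information I(A:B) = S(A) + S(B) - S(AB) that quantifies total correlations, is easy to compute directly; the sketch below (illustrative helper names, two-qubit case only) evaluates it for a maximally entangled state:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits; zero eigenvalues contribute nothing."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def partial_trace(rho, keep):
    """Partial trace of a two-qubit density matrix.

    Reshape the 4x4 matrix into indices (a, b, a', b'); tracing out B
    sums over b = b', tracing out A sums over a = a'.
    """
    r = rho.reshape(2, 2, 2, 2)
    if keep == 0:                                  # keep subsystem A
        return np.trace(r, axis1=1, axis2=3)
    return np.trace(r, axis1=0, axis2=2)           # keep subsystem B

def mutual_information(rho):
    """Total correlations I(A:B) = S(A) + S(B) - S(AB), in bits."""
    return (von_neumann_entropy(partial_trace(rho, 0))
            + von_neumann_entropy(partial_trace(rho, 1))
            - von_neumann_entropy(rho))
```

For a Bell state the reduced states are maximally mixed while the global state is pure, so I(A:B) = 2 bits; the discord-style split of that total into classical and quantum parts is exactly what the review surveys.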
