
    Complex-Valued Random Vectors and Channels: Entropy, Divergence, and Capacity

    Recent research has demonstrated significant achievable performance gains by exploiting circularity/non-circularity or properness/improperness of complex-valued signals. In this paper, we investigate the influence of these properties on important information-theoretic quantities such as entropy, divergence, and capacity. We prove two maximum entropy theorems that strengthen previously known results. The proof of the former theorem is based on the so-called circular analog of a given complex-valued random vector. Its introduction is supported by a characterization theorem that employs a minimum Kullback-Leibler divergence criterion. In the proof of the latter theorem, on the other hand, results about the second-order structure of complex-valued random vectors are exploited. Furthermore, we address the capacity of multiple-input multiple-output (MIMO) channels. Regardless of the specific distribution of the channel parameters (noise vector and channel matrix, if modeled as random), we show that the capacity-achieving input vector is circular for a broad range of MIMO channels (including coherent and noncoherent scenarios). Finally, we investigate the situation of an improper and Gaussian distributed noise vector. We compute both the capacity and the capacity-achieving input vector and show that improperness increases capacity, provided that the complementary covariance matrix is exploited. Otherwise, a capacity loss occurs, for which we derive an explicit expression.
    Comment: 33 pages, 1 figure, slightly modified version of first paper revision submitted to IEEE Trans. Inf. Theory on October 31, 201
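    The entropy-maximizing role of properness is easy to check numerically in the scalar Gaussian case. The sketch below is not from the paper; the variance sigma2 and complementary variance tau are illustrative parameters. It evaluates the differential entropy h(z) = log(pi e) + (1/2) log(sigma^4 - |tau|^2) of a zero-mean complex Gaussian through the determinant of its augmented covariance matrix, and shows that entropy peaks at tau = 0, i.e. for the proper (circular) Gaussian.

        import numpy as np

        def complex_gaussian_entropy(sigma2, tau):
            """Differential entropy (nats) of a zero-mean scalar complex Gaussian
            with variance sigma2 = E|z|^2 and complementary variance tau = E[z^2]."""
            # Augmented covariance of [z, z*]; properness means tau = 0.
            gamma = np.array([[sigma2, tau], [np.conj(tau), sigma2]])
            # h(z) = (1/2) log det(pi e Gamma) in the augmented representation
            sign, logdet = np.linalg.slogdet(np.pi * np.e * gamma)
            return 0.5 * logdet

        sigma2 = 1.0
        for tau in [0.0, 0.3, 0.6, 0.9]:
            print(f"tau = {tau:.1f}  h = {complex_gaussian_entropy(sigma2, tau):.4f} nats")
        # The printed entropy decreases monotonically in |tau|: the proper case
        # tau = 0 attains the maximum, log(pi e sigma2).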

    Time and spectral domain relative entropy: A new approach to multivariate spectral estimation

    The concept of spectral relative entropy rate is introduced for jointly stationary Gaussian processes. Using classical information-theoretic results, we establish a remarkable connection between time and spectral domain relative entropy rates. This naturally leads to a new spectral estimation technique where a multivariate version of the Itakura-Saito distance is employed. It may be viewed as an extension of the approach called THREE, introduced by Byrnes, Georgiou, and Lindquist in 2000, which, in turn, followed in the footsteps of the Burg-Jaynes Maximum Entropy Method. Spectral estimation is here recast in the form of a constrained spectrum approximation problem where the distance is the relative entropy rate between the processes. The corresponding solution entails a complexity upper bound which improves on the one so far available in the multichannel framework. Indeed, it is equal to the one featured by THREE in the scalar case. The solution is computed via a globally convergent matricial Newton-type algorithm. Simulations suggest the effectiveness of the new technique in tackling multivariate spectral estimation tasks, especially in the case of short data records.
    Comment: 32 pages, submitted for publicatio
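    As a point of reference, the scalar Itakura-Saito distance between two power spectral densities S1 and S2 is D(S1, S2) = (1/2 pi) * integral of [S1/S2 - log(S1/S2) - 1] d omega; the paper's spectral relative entropy rate generalizes this to the matricial setting. A minimal numerical sketch of the scalar case follows; the AR(1) test spectra are illustrative, not taken from the paper.

        import numpy as np

        def itakura_saito(S1, S2, omega):
            """Scalar Itakura-Saito distance, approximated on a uniform
            frequency grid by a Riemann sum."""
            ratio = S1 / S2
            integrand = ratio - np.log(ratio) - 1.0
            domega = omega[1] - omega[0]          # uniform grid assumed
            return integrand.sum() * domega / (2 * np.pi)

        # Illustrative AR(1) spectra S(omega) = sigma^2 / |1 - a e^{-j omega}|^2
        omega = np.linspace(-np.pi, np.pi, 4001)
        def ar1_spectrum(a, sigma2=1.0):
            return sigma2 / np.abs(1.0 - a * np.exp(-1j * omega)) ** 2

        S1, S2 = ar1_spectrum(0.5), ar1_spectrum(0.7)
        print(itakura_saito(S1, S2, omega))       # > 0, and 0 iff S1 == S2 a.e.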

    The maximum entropy ansatz in the absence of a time arrow: fractional pole models

    The maximum entropy ansatz, as it is often invoked in the context of time-series analysis, suggests the selection of a power spectrum which is consistent with autocorrelation data and corresponds to a random process least predictable from past observations. We introduce and compare a class of spectra with the property that the underlying random process is least predictable at any given point from the complete set of past and future observations. In this context, randomness is quantified by the size of the corresponding smoothing error, and deterministic processes are characterized by integrability of the inverse of their power spectral densities, as opposed to the log-integrability condition of the classical setting. The power spectrum that is consistent with a partial autocorrelation sequence and corresponds to the most random process in this new sense is no longer rational but is generated by finitely many fractional poles.
    Comment: 18 pages, 3 figure
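    For contrast with the fractional-pole spectra proposed here, the classical maximum entropy solution consistent with autocorrelation lags r_0, ..., r_n is the all-pole (AR) spectrum obtained from the Yule-Walker equations. The sketch below shows that standard baseline, not the paper's new estimator; the autocorrelation values are illustrative.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def max_entropy_ar_spectrum(r, omega):
            """Classical maximum entropy (all-pole) spectrum matching lags r[0..n]:
            S(w) = sigma2 / |1 + sum_k a_k e^{-jkw}|^2."""
            n = len(r) - 1
            # Yule-Walker: Toeplitz(r[0..n-1]) a = -r[1..n]
            a = solve_toeplitz(r[:n], -r[1:])
            sigma2 = r[0] + np.dot(a, r[1:])      # prediction error variance
            A = 1.0 + sum(a[k] * np.exp(-1j * (k + 1) * omega) for k in range(n))
            return sigma2 / np.abs(A) ** 2

        omega = np.linspace(-np.pi, np.pi, 2001)
        r = np.array([1.0, 0.5, 0.2])             # illustrative autocorrelation lags
        S = max_entropy_ar_spectrum(r, omega)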

    Maximum Entropy Linear Manifold for Learning Discriminative Low-dimensional Representation

    Representation learning is currently a very hot topic in modern machine learning, mostly due to the great success of deep learning methods. In particular, a low-dimensional representation which discriminates classes can not only enhance the classification procedure but also make it faster, and, contrary to high-dimensional embeddings, it can be efficiently used for visualization-based exploratory data analysis. In this paper we propose Maximum Entropy Linear Manifold (MELM), a multidimensional generalization of the Multithreshold Entropy Linear Classifier model, which is able to find a low-dimensional linear data projection maximizing the discriminativeness of the projected classes. As a result we obtain a linear embedding which can be used for classification, class-aware dimensionality reduction, and data visualization. MELM provides highly discriminative 2D projections of the data which can be used as a method for constructing robust classifiers. We provide both an empirical evaluation and some interesting theoretical properties of our objective function, such as scale and affine transformation invariance, connections with PCA, and bounding of the expected balanced accuracy error.
    Comment: submitted to ECMLPKDD 201
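    MELM's objective scores a projection by a divergence between kernel density estimates of the projected classes. As a rough illustration of that idea only, the sketch below ranks random orthonormal 2D projections by a Cauchy-Schwarz divergence between Gaussian KDEs; the naive random search, the divergence choice, and the fixed bandwidth h are simplifying assumptions, not the authors' optimizer.

        import numpy as np

        def cs_divergence(X, Y, h=0.5):
            """Cauchy-Schwarz divergence between Gaussian KDEs of X and Y (n x 2):
            D = -log( <pX,pY> / sqrt(<pX,pX> <pY,pY>) ), via closed-form overlaps."""
            def overlap(A, B):
                # mean over pairs of N(a_i - b_j; 0, 2 h^2 I) in 2 dimensions
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (4 * h * h)).mean() / (4 * np.pi * h * h)
            return -np.log(overlap(X, Y) / np.sqrt(overlap(X, X) * overlap(Y, Y)))

        def random_orthonormal(d, k=2, rng=None):
            rng = rng if rng is not None else np.random.default_rng(0)
            q, _ = np.linalg.qr(rng.standard_normal((d, k)))
            return q                              # d x k, orthonormal columns

        # Naive search: keep the projection whose 2D image separates classes best.
        rng = np.random.default_rng(0)
        X0 = rng.standard_normal((100, 5))        # illustrative class 0
        X1 = rng.standard_normal((100, 5)) + 1.0  # illustrative class 1
        best = max((random_orthonormal(5, rng=rng) for _ in range(200)),
                   key=lambda W: cs_divergence(X0 @ W, X1 @ W))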