
    A Tight Bound on the Performance of a Minimal-Delay Joint Source-Channel Coding Scheme

    An analog source is to be transmitted across a Gaussian channel in more than one channel use per source symbol. This paper derives a lower bound on the asymptotic mean squared error for a strategy that consists of repeatedly quantizing the source, transmitting the quantizer outputs in the first channel uses, and sending the remaining quantization error uncoded in the last channel use. The bound coincides with the performance achieved by a suboptimal decoder studied by the authors in a previous paper, thereby establishing that the bound is tight. Comment: 5 pages, submitted to IEEE International Symposium on Information Theory (ISIT) 201
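
    The lines below give a minimal, self-contained sketch of this kind of hybrid quantize-then-send-the-error scheme; all concrete choices (unit-variance Gaussian source, quantizer step size, power constraint, noise level, and the simple linear decoder) are assumed for illustration rather than taken from the paper, and the quantizer indices are optimistically treated as received error-free.

        import numpy as np

        rng = np.random.default_rng(0)
        sigma_n = 0.1                 # std of the additive Gaussian channel noise (assumed)
        delta = 0.25                  # quantizer step size (assumed)
        P = 1.0                       # power constraint for the uncoded channel use (assumed)
        source = rng.normal(0.0, 1.0, size=100_000)

        # First channel uses: quantize the source; the indices are assumed
        # to be delivered to the receiver error-free.
        quantized = delta * np.round(source / delta)
        error = source - quantized                 # lies in [-delta/2, delta/2]

        # Last channel use: send the quantization error uncoded, scaled so its
        # peak amplitude is sqrt(P) (hence the power constraint is respected).
        scale = np.sqrt(P) / (delta / 2)
        received = scale * error + rng.normal(0.0, sigma_n, size=error.shape)

        # A simple (not necessarily optimal) decoder: undo the scaling and add
        # the result back to the quantized value.
        estimate = quantized + received / scale
        print("empirical MSE:", np.mean((source - estimate) ** 2))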

    How long does it take to generate a group?

    The diameter of a finite group $G$ with respect to a generating set $A$ is the smallest non-negative integer $n$ such that every element of $G$ can be written as a product of at most $n$ elements of $A \cup A^{-1}$. We denote this invariant by $\mathrm{diam}_A(G)$. It can be interpreted as the diameter of the Cayley graph induced by $A$ on $G$ and arises, for instance, in the context of efficient communication networks. In this paper we study the diameters of a finite abelian group $G$ with respect to its various generating sets $A$. We determine the maximum possible value of $\mathrm{diam}_A(G)$ and classify all generating sets for which this maximum value is attained. Also, we determine the maximum possible cardinality of $A$ subject to the condition that $\mathrm{diam}_A(G)$ is "not too small". Connections with caps, sum-free sets, and quasi-perfect codes are discussed.
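
    As a small illustration of the definition (our own sketch, not the paper's methods), the following computes $\mathrm{diam}_A(G)$ for a finite abelian group $G = \mathbb{Z}_{n_1} \times \dots \times \mathbb{Z}_{n_k}$ and a candidate generating set $A$ by breadth-first search on the Cayley graph induced by $A \cup A^{-1}$.

        from collections import deque

        def diameter(moduli, gens):
            """diam_A(G) for G = Z_{n1} x ... x Z_{nk} and generating set 'gens',
            via BFS on the Cayley graph; returns None if 'gens' does not generate G."""
            identity = tuple(0 for _ in moduli)
            steps = list(gens) + [tuple(-x % n for x, n in zip(g, moduli)) for g in gens]
            dist = {identity: 0}
            queue = deque([identity])
            while queue:
                v = queue.popleft()
                for s in steps:
                    w = tuple((a + b) % n for a, b, n in zip(v, s, moduli))
                    if w not in dist:
                        dist[w] = dist[v] + 1
                        queue.append(w)
            order = 1
            for n in moduli:
                order *= n
            return max(dist.values()) if len(dist) == order else None

        # Z_12 generated by {1} has diameter 6; adding the generator 5 lowers it to 3.
        print(diameter((12,), [(1,)]), diameter((12,), [(1,), (5,)]))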

    Local stability and robustness of sparse dictionary learning in the presence of noise

    A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in fields ranging from image to audio processing, only a few theoretical arguments support this empirical evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not yet been fully analyzed. In this paper, we consider a probabilistic model of sparse signals and show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries and noisy signals, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the noise level, can scale with respect to the dimension of the signals, the number of atoms, the sparsity, and the number of observations.
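
    For concreteness, here is a small sketch (our own toy setup, not the paper's analysis) of the generative model and cost function in question: noisy signals are drawn as sparse combinations of atoms from a reference dictionary, and the $\ell_1$-penalized sparse-coding cost is evaluated at that dictionary; the dimensions, noise level and ISTA solver are all chosen purely for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        d, K, n, s, lam = 20, 30, 500, 3, 0.1   # signal dim, atoms, samples, sparsity, l1 penalty (assumed)

        D_ref = rng.normal(size=(d, K))
        D_ref /= np.linalg.norm(D_ref, axis=0)  # reference dictionary with unit-norm atoms

        X = np.zeros((K, n))                    # sparse coefficients: s nonzeros per signal
        for j in range(n):
            X[rng.choice(K, size=s, replace=False), j] = rng.normal(size=s)
        Y = D_ref @ X + 0.01 * rng.normal(size=(d, n))   # noisy training signals

        def sparse_coding_cost(D, Y, lam, iters=300):
            """l1-penalized reconstruction cost at D, coefficients found by ISTA."""
            step = 1.0 / np.linalg.norm(D, 2) ** 2
            A = np.zeros((D.shape[1], Y.shape[1]))
            for _ in range(iters):
                A -= step * (D.T @ (D @ A - Y))                           # gradient step
                A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)  # soft-thresholding
            return (0.5 * np.sum((Y - D @ A) ** 2) + lam * np.sum(np.abs(A))) / Y.shape[1]

        print("cost at the reference dictionary:", sparse_coding_cost(D_ref, Y, lam))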

    Cognitive scale-free networks as a model for intermittency in human natural language

    We model certain features of human language complexity by means of advanced concepts borrowed from statistical mechanics. Using a time series approach, the diffusion entropy method (DE), we compute the complexity of an Italian corpus of newspapers and magazines. We find that the anomalous scaling index is compatible with a simple dynamical model, a random walk on a complex scale-free network, which is linguistically related to Saussure's paradigms. The model yields the famous Zipf's law in terms of the generalized central limit theorem. Comment: Conference FRACTAL 200
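
    A rough sketch of a diffusion entropy analysis of the kind mentioned above is given below; the synthetic white-noise input, binning and window lengths are our own illustrative choices, not the paper's corpus or settings. The idea is to form diffusion trajectories from the series, estimate the entropy $S(l)$ of the displacement distribution at each window length $l$, and read the scaling index $\delta$ off the fit $S(l) \approx \mathrm{const} + \delta \ln l$.

        import numpy as np

        rng = np.random.default_rng(0)
        xi = rng.normal(size=50_000)     # toy white-noise series standing in for real data

        def diffusion_entropy(series, lengths, bins=60):
            """Entropy S(l) of the distribution of window sums of length l."""
            csum = np.concatenate(([0.0], np.cumsum(series)))
            entropies = []
            for l in lengths:
                x = csum[l:] - csum[:-l]        # displacement over every window of length l
                p, edges = np.histogram(x, bins=bins, density=True)
                dx = edges[1] - edges[0]
                p = p[p > 0]
                entropies.append(-np.sum(p * np.log(p)) * dx)   # S(l) ~ -integral of p ln p
            return np.array(entropies)

        lengths = np.unique(np.logspace(1.0, 3.0, 15).astype(int))
        S = diffusion_entropy(xi, lengths)
        delta = np.polyfit(np.log(lengths), S, 1)[0]   # slope of S versus ln(l)
        # Ordinary (Gaussian) diffusion corresponds to delta = 0.5; anomalous
        # scaling shows up as a different slope.
        print(f"estimated scaling index: {delta:.2f}")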

    Sample Complexity of Dictionary Learning and other Matrix Factorizations

    Many modern tools in machine learning and signal processing, such as sparse dictionary learning, principal component analysis (PCA), non-negative matrix factorization (NMF), $K$-means clustering, etc., rely on the factorization of a matrix obtained by concatenating high-dimensional vectors from a training collection. While the idealized task would be to optimize the expected quality of the factors over the underlying distribution of training vectors, it is achieved in practice by minimizing an empirical average over the considered collection. The focus of this paper is to provide sample complexity estimates that uniformly control how much the empirical average deviates from the expected cost function. Standard arguments imply that the performance of the empirical predictor also exhibits such guarantees. The level of genericity of the approach encompasses several possible constraints on the factors (tensor product structure, shift-invariance, sparsity, ...), thus providing a unified perspective on the sample complexity of several widely used matrix factorization schemes. The derived generalization bounds scale as $\sqrt{\log(n)/n}$ with respect to the number of samples $n$ for the considered matrix factorization techniques. Comment: to appear
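
    As a toy illustration of the empirical-versus-expected gap such bounds control (our own example, not the paper's experiments), the snippet below fixes a set of $K$-means centroids, approximates the expected cost with a large hold-out sample, and compares the deviation of the empirical average cost over $n$ samples against the $\sqrt{\log(n)/n}$ rate.

        import numpy as np

        rng = np.random.default_rng(0)
        centroids = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, 2.0]])   # fixed (assumed) centroids

        def kmeans_cost(X, C):
            """Average squared distance from each sample to its nearest centroid."""
            d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
            return d2.min(axis=1).mean()

        # Large hold-out sample as a stand-in for the expected cost.
        expected = kmeans_cost(rng.normal(size=(1_000_000, 2)), centroids)

        for n in (100, 1_000, 10_000):
            dev = np.mean([abs(kmeans_cost(rng.normal(size=(n, 2)), centroids) - expected)
                           for _ in range(20)])
            print(f"n={n:>6}  mean |empirical - expected| = {dev:.4f}"
                  f"   sqrt(log(n)/n) = {np.sqrt(np.log(n) / n):.4f}")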