
    Metric mean dimension and analog compression

    Wu and Verd\'u developed a theory of almost lossless analog compression, where one imposes various regularity conditions on the compressor and the decompressor, with the input signal modelled by a (typically infinite-entropy) stationary stochastic process. In this work we consider all stationary stochastic processes with trajectories in a prescribed set of (bi-)infinite sequences and find uniform lower and upper bounds for certain compression rates in terms of metric mean dimension and mean box dimension. An essential tool is the recent Lindenstrauss-Tsukamoto variational principle expressing metric mean dimension in terms of rate-distortion functions. We also obtain lower bounds on compression rates for a fixed stationary process in terms of the rate-distortion dimension rates and study several examples. Comment: v3: Accepted for publication in IEEE Transactions on Information Theory. Additional examples were added. Material has been reorganized (with some parts removed). Minor mistakes were corrected.
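
    For orientation, the Lindenstrauss-Tsukamoto variational principle referred to above can be stated schematically as follows (notation ours, technical conditions on the distortion measure suppressed): for a dynamical system (\mathcal{X}, T) with metric d, the upper metric mean dimension satisfies

        \overline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X}, T, d) \;=\; \limsup_{\varepsilon \to 0} \frac{\sup_{\mu} R_{\mu}(\varepsilon)}{\log(1/\varepsilon)},

    where the supremum runs over T-invariant probability measures \mu and R_{\mu}(\varepsilon) denotes the rate-distortion function of \mu at distortion level \varepsilon; an analogous liminf formula is used for the lower metric mean dimension.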

    New Uniform Bounds for Almost Lossless Analog Compression

    Wu and Verd\'u developed a theory of almost lossless analog compression, where one imposes various regularity conditions on the compressor and the decompressor, with the input signal modelled by a (typically infinite-entropy) stationary stochastic process. In this work we consider all stationary stochastic processes with trajectories in a prescribed set \mathcal{S} \subset [0,1]^\mathbb{Z} of (bi-)infinite sequences and find uniform lower and upper bounds for certain compression rates in terms of metric mean dimension and mean box dimension. An essential tool is the recent Lindenstrauss-Tsukamoto variational principle expressing metric mean dimension in terms of rate-distortion functions. Comment: This paper is to be presented at the 2019 IEEE International Symposium on Information Theory. It is a short version of arXiv:1812.0045
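
    For context (standard information-theoretic background, not specific to this paper): for a stationary process (X_i) with values in [0,1] and a single-letter distortion \rho, the rate-distortion function entering such bounds is

        R(\varepsilon) \;=\; \lim_{n \to \infty} \frac{1}{n} \inf \Big\{ I(X_1^n; Y_1^n) \;:\; \mathbb{E}\Big[\tfrac{1}{n} \textstyle\sum_{i=1}^{n} \rho(X_i, Y_i)\Big] \le \varepsilon \Big\},

    the infimum being taken over conditional distributions of Y_1^n given X_1^n.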

    Compression functions of uniform embeddings of groups into Hilbert and Banach spaces

    We construct finitely generated groups with arbitrary prescribed Hilbert space compression \alpha in the interval [0,1]. For a large class of Banach spaces E (including all uniformly convex Banach spaces), the E-compression of these groups coincides with their Hilbert space compression. Moreover, the groups that we construct have asymptotic dimension at most 3, hence they are exact. In particular, we give the first examples of groups that are uniformly embeddable into a Hilbert space (respectively, exact, of finite asymptotic dimension) and have Hilbert space compression 0. These groups are also the first examples of groups with uniformly convex Banach space compression 0. Comment: 21 pages; version 3: The final version, accepted by Crelle; version 2: corrected misprints, added references, the group has asdim at most 2, not at most 3 as in the first version (thanks to A. Dranishnikov); version 3: took into account referee remarks, added references. The paper is accepted in Crelle.
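
    For reference, one standard formulation of the compression invariant discussed here (following Guentner-Kaminker; conventions vary slightly between papers): for a large-scale Lipschitz map f from a finitely generated group G with word metric d into a Banach space E, the compression exponent of f is

        R(f) \;=\; \sup\big\{ \alpha \ge 0 \;:\; \exists\, C > 0 \text{ such that } \|f(x) - f(y)\| \ge \tfrac{1}{C}\, d(x,y)^{\alpha} - C \text{ for all } x, y \in G \big\},

    and the E-compression of G is the supremum of R(f) over all such maps f; the Hilbert space compression is the case E = \ell^2.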

    Multiclass Learnability Does Not Imply Sample Compression

    A hypothesis class admits a sample compression scheme if, for every sample labeled by a hypothesis from the class, it is possible to retain only a small subsample from which the labels on the entire sample can be inferred. The size of the compression scheme is an upper bound on the size of the subsample produced. Every learnable binary hypothesis class (which must necessarily have finite VC dimension) admits a sample compression scheme whose size is a finite function of its VC dimension alone, independent of the sample size. For multiclass hypothesis classes, the analog of VC dimension is the DS dimension. We show that the analogous statement about sample compression fails for multiclass hypothesis classes: a learnable multiclass hypothesis class, which must necessarily have finite DS dimension, need not admit a sample compression scheme whose size is a finite function of its DS dimension.
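
    As a toy illustration of the definition (our own example, not from the paper): the class of threshold classifiers h_t(x) = 1[x >= t] on the real line has VC dimension 1 and admits a sample compression scheme of size 1, sketched below in Python.

        # Size-1 sample compression scheme for threshold classifiers on the real line.
        # compress() keeps at most one labeled point; reconstruct() recovers a
        # hypothesis consistent with the whole original sample.

        def compress(sample):
            """Keep the smallest positively labeled point, if any."""
            positives = [x for x, y in sample if y == 1]
            if not positives:
                return []                      # empty subsample encodes "all negative"
            return [(min(positives), 1)]

        def reconstruct(subsample):
            """Turn the retained subsample back into a classifier."""
            if not subsample:
                return lambda x: 0             # no positive example was kept
            threshold = subsample[0][0]
            return lambda x: 1 if x >= threshold else 0

        # Usage: labels generated by the threshold t = 2.5 (hypothetical data).
        sample = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]
        h = reconstruct(compress(sample))
        assert all(h(x) == y for x, y in sample)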

    On sample complexity for computational pattern recognition

    In the statistical setting of the pattern recognition problem, the number of examples required to approximate an unknown labelling function is linear in the VC dimension of the target learning class. In this work we consider the question of whether such bounds exist if we restrict our attention to computable pattern recognition methods, assuming that the unknown labelling function is also computable. We find that in this case the number of examples required for a computable method to approximate the labelling function is not only non-linear but grows faster (in the VC dimension of the class) than any computable function. No time or space constraints are put on the predictors or target functions; the only resource we consider is the training examples. The task of pattern recognition is considered in conjunction with another learning problem: data compression. An impossibility result for the task of data compression allows us to estimate the sample complexity for pattern recognition.
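
    For comparison, the classical (non-effective) statistical bound alluded to in the first sentence has, up to constants and in the realizable case, the form

        m(\varepsilon, \delta) \;=\; O\!\left( \frac{d \log(1/\varepsilon) + \log(1/\delta)}{\varepsilon} \right),

    where d is the VC dimension, \varepsilon the accuracy and \delta the confidence parameter; the exact dependence varies with the setting.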

    Bounding Embeddings of VC Classes into Maximum Classes

    One of the earliest conjectures in computational learning theory, the Sample Compression conjecture, asserts that concept classes (equivalently, set systems) admit compression schemes of size linear in their VC dimension. To date this statement is known to be true for maximum classes, i.e. those that possess maximum cardinality for their VC dimension. The most promising approach to positively resolving the conjecture is by embedding general VC classes into maximum classes without a super-linear increase in their VC dimension, as such embeddings would extend the known compression schemes to all VC classes. We show that maximum classes can be characterised by a local-connectivity property of the graph obtained by viewing the class as a cubical complex. This geometric characterisation of maximum VC classes is applied to prove a negative embedding result exhibiting VC-d classes that cannot be embedded in any maximum class of VC dimension lower than 2d. On the other hand, we show that every VC-d class C embeds in a VC-(d+D) maximum class, where D is the deficiency of C, i.e. the difference between the cardinalities of a maximum VC-d class and of C. For VC-2 classes in binary n-cubes for 4 <= n <= 6, we give best possible results on embedding into maximum classes. For some special classes of Boolean functions, relationships with maximum classes are investigated. Finally, we give a general recursive procedure for embedding VC-d classes into VC-(d+k) maximum classes for the smallest such k. Comment: 22 pages, 2 figures
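
    To make the notions of VC dimension and maximum class concrete, here is a small brute-force check in Python (illustrative only, not code from the paper): it computes the VC dimension of a finite concept class over a finite domain and tests whether the class attains the Sauer-Shelah bound, i.e. whether it is maximum.

        # Brute-force VC dimension and "maximum class" test for a finite set system.
        from itertools import combinations
        from math import comb

        def shatters(concepts, points):
            """True if the concepts realize all 2^k labelings of the given points."""
            patterns = {tuple(x in c for x in points) for c in concepts}
            return len(patterns) == 2 ** len(points)

        def vc_dimension(concepts, domain):
            d = 0
            for k in range(1, len(domain) + 1):
                if any(shatters(concepts, s) for s in combinations(domain, k)):
                    d = k
            return d

        def is_maximum(concepts, domain):
            """A class is maximum if its size equals sum_{i <= d} C(n, i)."""
            d = vc_dimension(concepts, domain)
            return len(concepts) == sum(comb(len(domain), i) for i in range(d + 1))

        # Example: the empty set plus all singletons of a 4-point domain form a
        # maximum class of VC dimension 1 (size 5 = C(4,0) + C(4,1)).
        domain = [0, 1, 2, 3]
        concepts = [frozenset()] + [frozenset({x}) for x in domain]
        print(vc_dimension(concepts, domain), is_maximum(concepts, domain))  # -> 1 True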