    On the Sample Complexity of Predictive Sparse Coding

    The goal of predictive sparse coding is to learn a representation of examples as sparse linear combinations of elements from a dictionary, such that a learned hypothesis linear in the new representation performs well on a predictive task. Predictive sparse coding algorithms have recently demonstrated impressive performance on a variety of supervised tasks, but their generalization properties have not been studied. We establish the first generalization error bounds for predictive sparse coding, covering two settings: 1) the overcomplete setting, where the number of features k exceeds the original dimensionality d; and 2) the high- or infinite-dimensional setting, where only dimension-free bounds are useful. Both learning bounds intimately depend on stability properties of the learned sparse encoder, as measured on the training sample. Consequently, we first present a fundamental stability result for the LASSO, a result characterizing the stability of the sparse codes with respect to perturbations of the dictionary. In the overcomplete setting, we present an estimation error bound that decays as \tilde{O}(\sqrt{dk/m}) with respect to d and k. In the high- or infinite-dimensional setting, we show a dimension-free bound that is \tilde{O}(\sqrt{k^2 s/m}) with respect to k and s, where s is an upper bound on the number of non-zeros in the sparse code for any training data point.
    Comment: The Sparse Coding Stability Theorem from version 1 has been relaxed considerably using a new notion of coding margin. The old Sparse Coding Stability Theorem remains in the new version as Theorem 2. The presentation of all proofs has been simplified and improved considerably, and the paper has been reorganized. An empirical analysis shows the new coding margin is non-trivial on a real dataset.
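
    To make the setup concrete, the following is a minimal sketch, not the paper's implementation: examples are encoded as sparse linear combinations of dictionary atoms via the LASSO, and a linear hypothesis is then fit on the resulting codes. The dictionary D, regularization level lam, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): LASSO sparse encoder
# over a fixed dictionary, followed by a linear hypothesis on the codes.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
d, k, m = 20, 50, 200           # input dim, dictionary size (overcomplete: k > d), sample size
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)  # unit-norm dictionary atoms
X = rng.standard_normal((m, d))
y = rng.standard_normal(m)      # placeholder targets for the predictive task

def sparse_code(x, D, lam=0.1):
    """LASSO encoder: z(x) = argmin_z 0.5*||x - D z||^2 + lam*||z||_1."""
    # sklearn's Lasso scales the squared loss by 1/(2*n_samples); dividing lam
    # by len(x) recovers the objective above.
    enc = Lasso(alpha=lam / len(x), fit_intercept=False, max_iter=10000)
    enc.fit(D, x)
    return enc.coef_

Z = np.array([sparse_code(x, D) for x in X])   # m x k sparse codes
s = int(np.max(np.count_nonzero(Z, axis=1)))   # max non-zeros per code (the s in the bound)
clf = Ridge(alpha=1.0).fit(Z, y)               # linear hypothesis in the new representation
print(f"max sparsity s = {s}, train R^2 = {clf.score(Z, y):.3f}")
```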

    Non-linear Quantization of Integrable Classical Systems

    It is demonstrated that the so-called "unavoidable quantum anomalies" can be avoided in the framework of a special non-linear quantization scheme. A simple example is discussed in detail. Comment: LaTeX, 14 pages.

    Multi-Frequency Magnonic Logic Circuits for Parallel Data Processing

    We describe and analyze magnonic logic circuits enabling parallel data processing on multiple frequencies. The circuits combine bi-stable (digital) input/output elements and an analog core. The data transmission and processing within the analog part is accomplished by spin waves, where logic 0 and 1 are encoded into the phase of the propagating wave. The latter makes it possible to utilize a number of bit-carrying frequencies as independent information channels. The operation of the magnonic logic circuits is illustrated by numerical modeling. We also present estimates of the potential functional throughput enhancement and compare it with scaled CMOS. The described multi-frequency approach offers a fundamental advantage over transistor-based circuitry and may provide an extra dimension for the continuation of Moore's law. The shortcomings and potential issues are also discussed.
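
    To make the phase-encoding idea concrete, here is a toy numerical sketch (not the authors' spin-wave model): logic 0 and 1 are encoded as phase 0 or pi on carriers at several frequencies superposed in one signal, and each channel is recovered by correlating against a reference wave at the matching frequency. The frequencies, duration, and sample rate are illustrative assumptions.

```python
# Toy sketch (assumed parameters, not the authors' micromagnetic model):
# phase-encode one bit per carrier frequency, superpose the carriers,
# then decode each channel by in-phase correlation with a reference wave.
import numpy as np

fs = 1e11                      # sample rate [Hz] (assumption)
t = np.arange(0, 1e-7, 1 / fs)
freqs = [1e9, 2e9, 3e9]        # bit-carrying frequencies acting as independent channels
bits = [1, 0, 1]               # data to transmit, one bit per frequency

# Phase pi encodes logic 1, phase 0 encodes logic 0; all carriers share one waveguide signal.
signal = sum(np.cos(2 * np.pi * f * t + np.pi * b) for f, b in zip(freqs, bits))

# Decode: the sign of the correlation with the in-phase reference gives the bit.
decoded = [int(np.dot(signal, np.cos(2 * np.pi * f * t)) < 0) for f in freqs]
print(decoded)  # expected: [1, 0, 1]
```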