    The Sparsity Gap: Uncertainty Principles Proportional to Dimension

    In an incoherent dictionary, most signals that admit a sparse representation admit a unique sparse representation. In other words, there is no way to express the signal without using strictly more atoms. This work demonstrates that sparse signals typically enjoy a higher privilege: each nonoptimal representation of the signal requires far more atoms than the sparsest representation, unless it contains many of the same atoms as the sparsest representation. One impact of this finding is to confer a certain degree of legitimacy on the particular atoms that appear in a sparse representation. This result can also be viewed as an uncertainty principle for random sparse signals over an incoherent dictionary.
    Comment: 6 pages. To appear in the Proceedings of the 44th Ann. IEEE Conf. on Information Sciences and Systems
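    For context, the classical uniqueness baseline that this result strengthens can be stated in one line. Writing $\mu(D)$ for the coherence of a dictionary $D$ (the largest inner-product magnitude between distinct normalized atoms), the Donoho-Elad spark bound gives, for any two distinct representations $x = D\alpha = D\beta$:

```latex
\|\alpha\|_0 + \|\beta\|_0 \;\ge\; \operatorname{spark}(D) \;\ge\; 1 + \frac{1}{\mu(D)}
```

    Hence any representation using fewer than $\tfrac{1}{2}\bigl(1 + 1/\mu(D)\bigr)$ atoms is the unique sparsest one. The "sparsity gap" of the abstract is the stronger, typical-case statement: a competing representation of a random sparse signal must be far larger still, unless it reuses most of the optimal atoms. The display above is only the classical baseline, not the paper's quantitative result.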

    Multiscale Adaptive Representation of Signals: I. The Basic Framework

    We introduce a framework for designing multi-scale, adaptive, shift-invariant frames and bi-frames for representing signals. The new framework, called AdaFrame, improves over dictionary-learning-based techniques in terms of computational efficiency at inference time, and improves on classical multi-scale bases such as wavelet frames in terms of coding efficiency. It provides an attractive alternative to dictionary-learning-based techniques for low-level signal processing tasks, such as compression and denoising, as well as high-level tasks, such as feature extraction for object recognition. Connections with deep convolutional networks are also discussed. In particular, the proposed framework reveals a drawback in the commonly used approach for visualizing the activations of the intermediate layers in convolutional networks, and suggests a natural alternative.
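    The abstract does not give AdaFrame's construction. As a minimal sketch of the shift-invariant frame machinery it builds on, the code below implements undecimated (translation-invariant) analysis and synthesis with a two-filter tight frame, using FFT-based circular convolution; the Haar-like filter pair is an ad-hoc placeholder, not a learned AdaFrame atom.

```python
import numpy as np

def analysis(x, filters):
    """Undecimated (shift-invariant) frame analysis via circular convolution."""
    X = np.fft.fft(x)
    return [np.fft.ifft(X * np.fft.fft(h, len(x))).real for h in filters]

def synthesis(coeffs, filters, n):
    """Synthesis with the conjugate (time-reversed) filters; exact for a tight frame."""
    out = np.zeros(n)
    for c, h in zip(coeffs, filters):
        H = np.fft.fft(h, n)
        out += np.fft.ifft(np.fft.fft(c) * np.conj(H)).real
    return out

# Toy tight-frame pair (Haar-like): |H0|^2 + |H1|^2 = 1 at every frequency.
h0 = np.array([0.5, 0.5])
h1 = np.array([0.5, -0.5])

x = np.random.randn(64)
coeffs = analysis(x, [h0, h1])
x_rec = synthesis(coeffs, [h0, h1], len(x))
assert np.allclose(x, x_rec)  # perfect reconstruction
```

    The tight-frame condition on the filters is what makes synthesis with the time-reversed filters an exact inverse; a learned frame would replace the fixed pair while preserving this condition (or a bi-frame would learn separate analysis and synthesis filters).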

    Graph Signal Representation with Wasserstein Barycenters

    In many applications, signals reside on the vertices of weighted graphs, so there is a need to learn low-dimensional representations of graph signals that allow for data analysis and interpretation. Existing unsupervised dimensionality reduction methods for graph signals have focused on dictionary learning. In these works the graph is taken into consideration by imposing a structure or a parametrization on the dictionary, and the signals are represented as linear combinations of the atoms in the dictionary. However, the assumption that graph signals can be represented using linear combinations of atoms is not always appropriate. In this paper we propose a novel representation framework based on non-linear and geometry-aware combinations of graph signals by leveraging the mathematical theory of Optimal Transport. We represent graph signals as Wasserstein barycenters and demonstrate through our experiments the potential of our proposed framework for low-dimensional graph signal representation.
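    As a minimal illustration of the Wasserstein-barycenter building block (not the paper's full framework), the sketch below computes an entropy-regularized barycenter of discrete distributions with the iterative Bregman projections of Benamou et al.; the cost matrix here uses squared distances on a 1-D grid as a stand-in for graph geodesic distances, and all sizes are made up for the demo.

```python
import numpy as np

def sinkhorn_barycenter(P, M, reg=1e-2, weights=None, n_iter=500):
    """Entropy-regularized Wasserstein barycenter of the columns of P.

    P : (n, k) array, each column a probability vector on n points
    M : (n, n) ground cost matrix (e.g. squared graph distances)
    """
    n, k = P.shape
    w = np.full(k, 1.0 / k) if weights is None else weights
    K = np.exp(-M / reg)                   # Gibbs kernel
    V = np.ones((n, k))
    for _ in range(n_iter):
        U = P / (K @ V)                    # scale toward the input marginals P
        b = np.exp(np.log(K.T @ U) @ w)    # weighted geometric mean = barycenter
        V = b[:, None] / (K.T @ U)         # scale toward the barycenter
    return b

# Toy example: barycenter of two bump-like distributions on a path graph.
n = 50
grid = np.arange(n)
M = (grid[:, None] - grid[None, :]) ** 2 / n**2
p1 = np.exp(-0.5 * (grid - 10) ** 2 / 4.0); p1 /= p1.sum()
p2 = np.exp(-0.5 * (grid - 40) ** 2 / 4.0); p2 /= p2.sum()
b = sinkhorn_barycenter(np.stack([p1, p2], axis=1), M)
print(b.argmax())  # mass concentrates near the midpoint, ~25
```

    Unlike the linear average 0.5*p1 + 0.5*p2, which keeps two separate bumps, the barycenter places mass between the inputs; this is the geometry-aware, non-linear combination the abstract refers to.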

    Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position

    A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye-and-retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions.
    National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
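    Read as an online learning rule, the Difference-Vector mechanism above (DVs that the VAM learning process drives to zero) resembles a delta rule. The sketch below is a deliberately minimal caricature under that reading: a linear map stands in for the network, random eye/retinal configurations stand in for environmental fluctuations, and a teaching vector plays the role of the foveated-target signal. None of the names, dimensions, or rates come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed "world" map from (eye position, retinal position)
# to head-centered 3-D target position, which the learner must match.
A_true = rng.standard_normal((3, 6)) * 0.5

W = np.zeros((3, 6))   # adaptive weights of the VAM-like learner
lr = 0.05              # learning rate

for step in range(5000):
    s = rng.standard_normal(6)     # sampled eye + retinal configuration
    target = A_true @ s            # teaching vector (target foveated)
    dv = target - W @ s            # Difference Vector: mismatch to be zeroed
    W += lr * np.outer(dv, s)      # delta-rule update drives the DV to zero

print(np.abs(A_true - W).max())    # ~0: DVs have been zeroed on average
```

    In the model's terms, a GO gating signal would then scale the (now near-zero during learning, nonzero during action) DV to set movement speed; that control stage is omitted here.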

    Application of Compressive Sensing Theory for the Reconstruction of Signals in Plastic Scintillators

    Compressive sensing theory states that a measured signal can be reconstructed provided a sufficiently sparse representation of the signal exists relative to the number of random measurements. This theory was applied to reconstruct signals from measurements of plastic scintillators. A sparse representation of the obtained signals was found using the SVD transform.
    Comment: 7 pages, 3 figures; Presented at Symposium on applied nuclear physics and innovative technologies, Cracow, 03-06 June 201
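    The abstract does not name the recovery algorithm used. As a generic illustration of the reconstruction step, the sketch below recovers a synthetic k-sparse signal from m << n random Gaussian measurements with orthogonal matching pursuit, one standard compressive-sensing decoder; all sizes here are made up for the demo.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = np.argmax(np.abs(Phi.T @ residual))
        support.append(j)
        # Least-squares fit on the selected columns, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                             # length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x_true                                 # m << n compressive measurements
x_hat = omp(Phi, y, k)
print(np.linalg.norm(x_hat - x_true))            # ~0: exact recovery
```

    In the paper's setting the sparsifying basis comes from an SVD of recorded scintillator pulses rather than the canonical basis used in this toy example.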

    Dictionary Learning of Convolved Signals

    Assuming that a set of source signals is sparsely representable in a given dictionary, we show how their sparse recovery fails whenever only a convolved observation of them can be measured. Starting from this motivation, we develop a block coordinate descent method which aims to learn a convolved dictionary and provide a sparse representation of the observed signals with small residual norm. We compare the proposed approach to the K-SVD dictionary learning algorithm and show through numerical experiments on synthetic signals that, provided certain conditions on the problem data hold, our technique converges in a fixed number of iterations to a sparse representation with smaller residual norm.
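    The abstract states the goal (learn a convolved dictionary by block coordinate descent) without giving update rules. The sketch below is one plausible reading for the toy model Y ≈ C D X, with C the circulant matrix of a known blur h: an ISTA block for the sparse codes X alternates with a least-squares block for the dictionary D. The known blur, circular boundary conditions, and penalty are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def convolved_dict_learning(Y, C, p, lam=0.1, outer=30, inner=50, seed=0):
    """Toy block coordinate descent for Y ~ C @ D @ X with sparse X."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    D = rng.standard_normal((n, p))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((p, m))
    for _ in range(outer):
        # Block 1: sparse codes by ISTA on 0.5*||Y - C D X||^2 + lam*||X||_1.
        A = C @ D
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        for _ in range(inner):
            X = soft(X - (A.T @ (A @ X - Y)) / L, lam / L)
        # Block 2: dictionary by least squares with X held fixed.
        D = np.linalg.pinv(C) @ Y @ np.linalg.pinv(X)
        norms = np.maximum(np.linalg.norm(D, axis=0), 1e-12)
        D /= norms
        X *= norms[:, None]                  # keep the product D X unchanged
    return D, X

# Synthetic test: ground-truth dictionary and sparse codes, observed after blur.
n, p, m = 64, 16, 200
rng = np.random.default_rng(1)
h = np.zeros(n); h[:3] = [0.5, 0.3, 0.2]                  # known short blur
C = np.column_stack([np.roll(h, i) for i in range(n)])    # circulant matrix of h
D_true = rng.standard_normal((n, p)); D_true /= np.linalg.norm(D_true, axis=0)
X_true = np.where(rng.random((p, m)) < 0.05, rng.standard_normal((p, m)), 0.0)
Y = C @ D_true @ X_true
D, X = convolved_dict_learning(Y, C, p)
print(np.linalg.norm(Y - C @ D @ X) / np.linalg.norm(Y))  # shrinks over iterations
```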

    Efficient time-domain modeling and simulation of passive bandpass systems

    In communication systems, the signals of interest are often amplitude- and/or phase-modulated. In this framework, the baseband-equivalent representation of signals and systems is usually adopted to simulate the digital parts of communication systems in an efficient manner. This contribution extends the applicability of such a representation to RF/analog devices, leading to a common and efficient modeling and simulation framework. In particular, the proposed method can build half-size models compared to existing approaches, and it allows one to choose the simulation time step according to the bandwidth of the modulating signals rather than the carrier frequency, thereby significantly speeding up the simulation procedure. The proposed method is validated via a suitable application example.
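    As a reminder of the representation the abstract builds on (not the paper's model-building method), a passband signal x(t) = Re{x_bb(t) exp(j 2 pi f_c t)} can be simulated through its complex baseband equivalent x_bb(t), which only needs a sample rate matched to the modulation bandwidth rather than the carrier. The sketch below recovers x_bb from a passband waveform via the analytic signal and a downshift by the carrier; all frequencies are made up for the demo.

```python
import numpy as np
from scipy.signal import hilbert

fs, fc = 1_000_000.0, 100_000.0        # sample rate and carrier frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)

# Narrowband message (1 kHz) amplitude-modulating the carrier.
m = 1.0 + 0.5 * np.cos(2 * np.pi * 1_000 * t)
x_pass = m * np.cos(2 * np.pi * fc * t)          # passband signal

# Baseband equivalent: analytic signal shifted down by the carrier.
x_bb = hilbert(x_pass) * np.exp(-2j * np.pi * fc * t)

print(np.max(np.abs(x_bb.real - m)))   # ~0: complex envelope recovered
# x_bb varies at ~1 kHz, so it can be simulated ~100x slower than x_pass.
```

    The paper's contribution is to build RF/analog device models that operate directly on x_bb, so the whole system (digital and analog parts) can be stepped at the modulation rate.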