
    Sparse Representation of Astronomical Images

    Sparse representation of astronomical images is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm: i) effectiveness at producing sparse representations, and ii) competitiveness with respect to the time required to process large images. The latter is a consequence of the suitability of the proposed dictionaries for approximating images in partitions of small blocks. This feature makes it possible to apply the effective greedy selection technique Orthogonal Matching Pursuit up to some block size. For blocks exceeding that size, a refinement of the original Matching Pursuit approach is considered. The resulting method is termed Self-Projected Matching Pursuit, because Matching Pursuit itself is shown to be effective for implementing the optional back-projection intermediate steps in that approach.
    Comment: Software implementing the approach is available at http://www.nonlinear-approx.info/examples/node1.htm
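    As a rough illustration of the block-wise scheme the abstract describes, the sketch below runs Orthogonal Matching Pursuit on a single flattened image block using a mixed Dirac + DCT dictionary. The dictionary composition, block size, and sparsity level are illustrative assumptions, not the paper's exact construction (Python with NumPy/SciPy):

        # Minimal sketch: OMP over one 8x8 block with a mixed Dirac + DCT dictionary.
        import numpy as np
        from scipy.fft import idct

        def mixed_dictionary(n):
            """Dirac (identity) atoms concatenated with orthonormal DCT atoms."""
            return np.hstack([np.eye(n), idct(np.eye(n), axis=0, norm='ortho')])

        def omp(D, y, k):
            """Greedily select k atoms; refit coefficients by least squares each step."""
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(D.T @ residual))))
                coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coeffs
            x = np.zeros(D.shape[1])
            x[support] = coeffs
            return x

        rng = np.random.default_rng(0)
        block = rng.random(64)                    # stand-in for a flattened 8x8 block
        D = mixed_dictionary(64)
        x = omp(D, block, k=10)
        print("relative error:", np.linalg.norm(block - D @ x) / np.linalg.norm(block))

    Processing each block independently is what keeps OMP tractable here: the cost of the least-squares refit scales with the block dimension, not with the full image.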

    Fast Dictionary Learning for Sparse Representations of Speech Signals

    Published version: IEEE Journal of Selected Topics in Signal Processing 5(5): 1025-1031, Sep 2011. DOI: 10.1109/JSTSP.2011.2157892

    Sparsity and `Something Else': An Approach to Encrypted Image Folding

    A property of sparse representations, namely their capacity for information storage, is discussed. It is shown that this feature can be used for an application we term Encrypted Image Folding. The proposed procedure is realizable through any suitable transformation; in this paper we illustrate the approach using the Discrete Cosine Transform and a combination of redundant Cosine and Dirac dictionaries. The main advantage of the proposed technique is that storage and encryption are achieved simultaneously using simple processing steps.
    Comment: Revised manuscript. Software implementing the Encrypted Image Folding proposed in this paper is available at http://www.nonlinear-approx.info
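    The abstract does not detail the folding or encryption steps themselves, but the underlying property, that a sparse transform representation leaves most coefficient slots unused, can be illustrated with a plain 2D DCT. The smooth test image and the 10% threshold below are assumptions made for the sketch:

        # Minimal sketch: a smooth image is sparse under the 2D DCT, so most
        # coefficient slots stay free and could, in principle, carry folded data.
        import numpy as np
        from scipy.fft import dctn, idctn

        t = np.linspace(0, 1, 64)
        image = np.outer(np.sin(2 * np.pi * t), np.cos(4 * np.pi * t))  # smooth test image

        C = dctn(image, norm='ortho')                 # 2D Discrete Cosine Transform
        k = int(0.1 * C.size)                         # keep the 10% largest coefficients
        threshold = np.sort(np.abs(C).ravel())[-k]
        C_sparse = np.where(np.abs(C) >= threshold, C, 0.0)

        approx = idctn(C_sparse, norm='ortho')
        print("free coefficient slots:", C.size - np.count_nonzero(C_sparse))
        print("relative error:", np.linalg.norm(image - approx) / np.linalg.norm(image))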

    Lattice dynamical wavelet neural networks implemented using particle swarm optimisation for spatio-temporal system identification

    Starting from the basic concept of coupled map lattices, a new family of adaptive wavelet neural networks, called lattice dynamical wavelet neural networks (LDWNN), is introduced for spatio-temporal system identification by combining an efficient wavelet representation with a coupled map lattice model. A new orthogonal projection pursuit (OPP) method, coupled with a particle swarm optimisation (PSO) algorithm, is proposed for augmenting the proposed network. A novel two-stage hybrid training scheme is developed for constructing a parsimonious network model. In the first stage, significant wavelet-neurons are adaptively and successively recruited into the network by applying the orthogonal projection pursuit algorithm, and the adjustable parameters of the associated wavelet-neurons are optimised using a particle swarm optimiser. The network model obtained in the first stage may, however, be redundant. In the second stage, an orthogonal least squares (OLS) algorithm is applied to refine and improve the initially trained network by removing redundant wavelet-neurons. The proposed two-stage hybrid training procedure generally produces a parsimonious network model, together with a list of wavelet-neurons ranked according to the capability of each neuron to represent the total variance in the system output signal. Two spatio-temporal system identification examples are presented to demonstrate the performance of the proposed new modelling framework.
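    The pruning stage the abstract describes follows the classical forward orthogonal least squares scheme, which ranks candidate terms by their error reduction ratio (ERR). In the sketch below, the random candidate columns stand in for trained wavelet-neuron outputs, and the PSO-trained first stage is not reproduced:

        # Minimal sketch: greedy OLS ranking of candidate neuron outputs by ERR.
        import numpy as np

        def ols_rank(Phi, y, n_keep):
            """At each step pick the candidate that, after Gram-Schmidt against the
            already-selected columns, explains the largest share of output energy."""
            selected, Q = [], []
            for _ in range(n_keep):
                best = (-1.0, None, None)
                for j in range(Phi.shape[1]):
                    if j in selected:
                        continue
                    q = Phi[:, j].copy()
                    for qi in Q:                              # orthogonalise
                        q -= (qi @ q) / (qi @ qi) * qi
                    err = (q @ y) ** 2 / ((q @ q) * (y @ y))  # error reduction ratio
                    if err > best[0]:
                        best = (err, j, q)
                selected.append(best[1])
                Q.append(best[2])
            return selected

        rng = np.random.default_rng(2)
        Phi = rng.standard_normal((200, 20))   # 20 candidate wavelet-neuron outputs
        y = Phi[:, 3] - 0.5 * Phi[:, 7] + 0.01 * rng.standard_normal(200)
        print("ranked significant terms:", ols_rank(Phi, y, n_keep=3))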

    Role of homeostasis in learning sparse representations

    Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to posit that neural activity has to represent sensory data efficiently with respect to the statistics of natural scenes. Furthermore, it is believed that such efficient coding is achieved through competition across neurons so as to generate a sparse representation, that is, one in which a relatively small number of neurons is simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair: by contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
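    One way to picture 'fair' competition: during matching-pursuit sparse coding, give each atom a homeostatic gain that is lowered when the atom is selected more often than average, so selection probabilities equalise across the dictionary. The exponential gain rule and the random input patches below are assumptions for the sketch, not the paper's exact mechanism:

        # Minimal sketch: homeostatic gains keep atom selection statistically fair.
        import numpy as np

        rng = np.random.default_rng(3)
        n, m, k, eta = 64, 128, 5, 1.0
        D = rng.standard_normal((n, m))
        D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
        gain, counts = np.ones(m), np.zeros(m)

        for _ in range(1000):                     # stream of input patches
            y = rng.standard_normal(n)            # stand-in for a natural image patch
            residual = y.copy()
            for _ in range(k):                    # matching pursuit, gain-modulated scores
                j = int(np.argmax(gain * np.abs(D.T @ residual)))
                counts[j] += 1
                residual -= (D[:, j] @ residual) * D[:, j]
            rate = counts / counts.sum()          # empirical selection probability
            gain = np.exp(-eta * m * (rate - 1.0 / m))

        print("selection spread (std/mean):", counts.std() / counts.mean())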