
    Role of homeostasis in learning sparse representations

    Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, one where a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
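
    A minimal sketch may help make the mechanism concrete. The snippet below implements a generic matching-pursuit sparse coder over patch-sized vectors in which a homeostatic gain boosts neurons that are selected too rarely, keeping the competition fair; the dictionary size, update rates, and gain rule are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_atoms, n_active = 64, 128, 5          # 8x8 patches, 2x overcomplete dictionary
D = rng.standard_normal((n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms (one per "neuron")
gain = np.ones(n_atoms)                           # homeostatic gain per neuron
usage = np.full(n_atoms, 1.0 / n_atoms)           # running selection frequency

def sparse_code(patch, D, gain, n_active):
    """Greedy matching pursuit; the gains bias the competition between atoms."""
    residual, coeffs = patch.copy(), np.zeros(D.shape[1])
    for _ in range(n_active):
        scores = gain * (D.T @ residual)          # homeostasis modulates the competition
        k = int(np.argmax(np.abs(scores)))
        coeffs[k] = D[:, k] @ residual
        residual = residual - coeffs[k] * D[:, k]
    return coeffs

def homeostasis_update(usage, coeffs, eta=0.01):
    """Track how often each atom is selected and boost the rarely used ones."""
    selected = (coeffs != 0).astype(float)
    usage = (1 - eta) * usage + eta * selected / max(selected.sum(), 1.0)
    gain = usage.mean() / (usage + 1e-12)         # under-used atoms get gain > 1
    return usage, gain

for _ in range(1000):                             # toy learning loop on random "patches"
    patch = rng.standard_normal(n_pixels)
    coeffs = sparse_code(patch, D, gain, n_active)
    usage, gain = homeostasis_update(usage, coeffs)
    D += 0.01 * np.outer(patch - D @ coeffs, coeffs)   # Hebbian-style dictionary update
    D /= np.linalg.norm(D, axis=0)

    Atoms whose running selection frequency drifts below the population average receive a gain above one, so they win the matched-filter competition more often on subsequent patches; this is the sense in which the competition is kept fair.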

    Comparison Of Sparse Coding And JPEG Coding Schemes For Blurred Retinal Images.

    Overcomplete representations are currently one of the most actively researched areas, especially in signal processing, due to their strong potential to generate sparse representations of signals. Sparse representation implies that a given signal can be represented with components that are only rarely significantly active. It has been strongly argued that the mammalian visual system relies on sparse and overcomplete representations. The primary visual cortex represents an input signal with overcomplete responses, which leads to sparse neuronal activity being used for further processing. This work investigates sparse coding with an overcomplete basis set, which is believed to be the strategy employed by the mammalian visual system for efficient coding of natural images. It analyzes the Sparse Code Learning algorithm, in which a given image is represented as a linear superposition of sparse, statistically independent events on a set of overcomplete basis functions. The algorithm trains and adapts the overcomplete basis functions so as to represent any given image in terms of sparse structures. The second part of the work analyzes an inhibition-based sparse coding model in which Gabor-based overcomplete representations are used to represent the image. It then applies an iterative inhibition algorithm, based on competition between neighboring transform coefficients, to select a subset of Gabor functions that represents the given image with a sparse set of coefficients. The developed models are applied to image compression and the achievable compression levels are tested. Research in this area so far shows that sparse coding algorithms are inefficient at representing sharp, high-frequency image features, so this work analyzes the performance of these algorithms only on natural images without sharp features and compares the compression results with current industry-standard coding schemes such as JPEG and JPEG 2000. It also models the characteristics of an image falling on the retina after the distortion effects of the eye, applies the developed algorithms to these images, and tests the compression results.
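
    As a rough illustration of the inhibition-based selection described above, the sketch below lets the coefficients of an overcomplete filter bank compete, with each winning coefficient suppressing its neighbours in proportion to the overlap (inner product) between filters. The random filter bank standing in for Gabor functions, the number of kept coefficients, and the subtraction rule are assumptions for illustration, not the thesis' exact model.

import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_filters, n_keep = 64, 256, 10
G = rng.standard_normal((n_pixels, n_filters))    # stand-in for a Gabor filter bank
G /= np.linalg.norm(G, axis=0)
overlap = G.T @ G                                 # pairwise filter overlaps (Gram matrix)

def select_by_inhibition(image, G, overlap, n_keep):
    """Keep n_keep coefficients; each winner inhibits overlapping coefficients."""
    responses = G.T @ image                       # dense transform coefficients
    kept = {}
    for _ in range(n_keep):
        k = int(np.argmax(np.abs(responses)))
        kept[k] = responses[k]
        responses = responses - responses[k] * overlap[:, k]   # inhibition step
    coeffs = np.zeros(G.shape[1])
    for k, value in kept.items():
        coeffs[k] = value
    return coeffs                                 # sparse coefficient vector

image = rng.standard_normal(n_pixels)             # toy stand-in for an image patch
sparse_coeffs = select_by_inhibition(image, G, overlap, n_keep)
reconstruction = G @ sparse_coeffs                # approximation from the sparse subset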

    Information recovery from rank-order encoded images

    The time to detection of a visual stimulus by the primate eye is recorded at 100–150 ms. This near-instantaneous recognition occurs in spite of the considerable processing required by the several stages of the visual pathway to recognise and react to a visual scene. How this is achieved is still a matter of speculation. Rank-order codes have been proposed as a means of encoding by the primate eye in the rapid transmission of the initial burst of information from the sensory neurons to the brain. We study the efficiency of rank-order codes in encoding perceptually-important information in an image. VanRullen and Thorpe built a model of the ganglion cell layers of the retina to simulate and study the viability of rank-order as a means of encoding by retinal neurons. We validate their model and quantify the information retrieved from rank-order encoded images in terms of the visually-important information recovered. Towards this goal, we apply the ‘perceptual information preservation algorithm’ proposed by Petrovic and Xydeas, after slight modification. We observe a low information recovery due to losses suffered during the rank-order encoding and decoding processes. We propose to minimise these losses to recover maximum information in minimum time from rank-order encoded images. We first maximise information recovery by using the pseudo-inverse of the filter-bank matrix to minimise losses during rank-order decoding. We then apply the biological principle of lateral inhibition to minimise losses during rank-order encoding. In doing so, we propose the Filter-overlap Correction algorithm. To test the performance of rank-order codes in a biologically realistic model, we design and simulate a model of the foveal-pit ganglion cells of the retina, keeping close to biological parameters. We use this as a rank-order encoder and analyse its performance relative to VanRullen and Thorpe’s retinal model.
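
    The following sketch illustrates the general principle of rank-order coding and pseudo-inverse decoding discussed above: an image is encoded only by the order in which a filter bank's units would fire, and the decoder assigns a fixed, geometrically decreasing weight to each rank before mapping back to pixel space with the pseudo-inverse of the filter-bank matrix. The random filter bank, the decay factor, and the sign handling are illustrative assumptions, not the parameters of the retinal model.

import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_filters = 64, 64
W = rng.standard_normal((n_filters, n_pixels))    # stand-in filter bank (rows = filters)
W /= np.linalg.norm(W, axis=1, keepdims=True)

def rank_order_encode(image, W):
    """Return the filter firing order (largest response first) and response signs."""
    responses = W @ image
    order = np.argsort(-np.abs(responses))
    return order, np.sign(responses)

def rank_order_decode(order, signs, W, decay=0.9):
    """Reconstruct using ranks only: the r-th firing filter gets weight decay**r."""
    weights = np.zeros(W.shape[0])
    weights[order] = decay ** np.arange(len(order)) * signs[order]
    # pseudo-inverse decoding reduces the loss caused by overlapping, non-orthogonal filters
    return np.linalg.pinv(W) @ weights

image = rng.standard_normal(n_pixels)
order, signs = rank_order_encode(image, W)
recovered = rank_order_decode(order, signs, W)    # approximate image from ranks alone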

    Projection-Based and Look Ahead Strategies for Atom Selection

    In this paper, we improve iterative greedy search algorithms in which atoms are selected serially, i.e., one by one over iterations. For serial atom selection, we devise two new schemes to select an atom from a set of potential atoms in each iteration, leading to two new algorithms. For both algorithms, in each iteration, the set of potential atoms is found using a standard matched filter. In the first scheme, we propose an orthogonal projection strategy that selects an atom from the set of potential atoms. In the second scheme, we propose a look-ahead strategy such that the selection of an atom in the current iteration takes its effect on future iterations into account. The look-ahead strategy requires more computational resources. To achieve a trade-off between performance and complexity, we use the two new schemes in cascade and develop a third new algorithm. Through experimental evaluations, we compare the proposed algorithms with existing greedy search and convex relaxation algorithms.
    Comment: sparsity, compressive sensing; IEEE Trans on Signal Processing 201
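
    A hedged sketch of the look-ahead idea, written as a generic OMP-style pursuit rather than the paper's exact algorithms: at each iteration a matched filter proposes a few candidate atoms, each candidate is scored by the residual that would remain after finishing the remaining iterations greedily, and the best-scoring candidate is kept. The candidate count, the least-squares solver, and the toy problem at the end are assumptions for illustration.

import numpy as np

def ls_residual(y, A, support):
    """Least-squares residual of y on the atoms indexed by support."""
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return y - A[:, support] @ coef

def greedy_complete(y, A, support, sparsity):
    """Finish the remaining iterations with plain greedy selection; return final residual norm."""
    support = list(support)
    while len(support) < sparsity:
        corr = np.abs(A.T @ ls_residual(y, A, support))
        corr[support] = -np.inf                   # do not reselect atoms
        support.append(int(np.argmax(corr)))
    return np.linalg.norm(ls_residual(y, A, support))

def lookahead_pursuit(y, A, sparsity, n_candidates=3):
    support = []
    for _ in range(sparsity):
        residual = ls_residual(y, A, support) if support else y
        corr = np.abs(A.T @ residual)             # matched filter proposes atoms
        corr[support] = -np.inf
        candidates = np.argsort(corr)[::-1][:n_candidates]
        # look ahead: score each candidate by the residual left after finishing greedily
        scores = [greedy_complete(y, A, support + [int(c)], sparsity) for c in candidates]
        support.append(int(candidates[int(np.argmin(scores))]))
    x_hat = np.zeros(A.shape[1])
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x_hat[support] = coef
    return x_hat

# toy usage: recover a 5-sparse vector from 64 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(256); x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
x_rec = lookahead_pursuit(A @ x_true, A, sparsity=5)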

    Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission

    The ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the inherent issues created by the presence of very heterogeneous data transmissions over the same shared network. This very diverse communication, in turn, produces network packets of various sizes, ranging from very small sensory readings to comparatively humongous video frames. Such massive amounts of data, as in the case of sensor networks, are also continuously captured at varying rates and increase the load on the network itself, which can hinder transmission efficiency. However, they also open up possibilities to exploit various correlations in the transmitted data due to their sheer number. Reductions based on these correlations enable networks to keep up with the new wave of big-data-driven communications by investing in select techniques that efficiently utilize the resources of the communication system. One class of solutions for tackling erroneous data transmission employs linear coding techniques, which are ill-equipped to handle packets of differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby suppressing the benefits of the coding itself. We propose a set of approaches that overcome such issues while also reducing decoding delays. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. Owing to the heterogeneity of packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions is deterministic shifting, which reduces the overall number of transmitted packets. Moreover, the RaSOR scheme codes by XORing shifted packets, without the need for coding coefficients, thus yielding linear encoding and decoding complexities. Another facet of IoT applications is found in sensory data, which is known to be highly correlated and for which compressed sensing is a potential approach to reduce the overall number of transmissions. In such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as the ones advocated by Industry 4.0. This design focuses on one-step decoding to reduce the computational complexity and delays of the reconstruction process at the receiver, and investigates the effectiveness of combining compressed sensing and network coding.
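
    The toy snippet below illustrates two of the points made above: why conventional RLNC over unequal-sized packets forces zero-padding to the longest packet, and how coding by XORing packets shifted by known offsets avoids carrying explicit coding coefficients. The packet contents, shift values, and the single coded combination shown are simplified assumptions, not the thesis' actual progressive shortening or RaSOR designs.

import numpy as np

packets = [np.frombuffer(p, dtype=np.uint8) for p in
           (b"tiny", b"a medium sensor packet", b"a comparatively much longer video frame payload")]

# Conventional RLNC over GF(2): every packet is zero-padded to the longest one,
# so short sensory readings end up carrying mostly padding.
max_len = max(len(p) for p in packets)
padded = np.array([np.pad(p, (0, max_len - len(p))) for p in packets])
padding_overhead = padded.size - sum(len(p) for p in packets)       # wasted bytes

# Coefficient-free alternative: XOR the packets after shifting each one by a known,
# deterministic offset; the shifts take over the role of the coding coefficients.
shifts = [0, 4, 8]                                                  # assumed offsets
coded_len = max(len(p) + s for p, s in zip(packets, shifts))
coded = np.zeros(coded_len, dtype=np.uint8)
for p, s in zip(packets, shifts):
    coded[s:s + len(p)] ^= p                                        # XOR of shifted packets

print("padding overhead with plain zero-padding (bytes):", padding_overhead)
print("length of one shifted-XOR combination (bytes):", coded_len)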

    Efficient compression of motion compensated residuals

    EThOS - Electronic Theses Online Service, United Kingdom

    Graph-based techniques for compression and reconstruction of sparse sources

    The main goal of this thesis is to develop lossless compression schemes for analog and binary sources. All the compression schemes considered share the common feature that the encoder can be represented by a graph, so they can be studied using tools from modern coding theory. In particular, this thesis focuses on two compression problems: group testing and noiseless compressed sensing. Although the two problems may seem unrelated, the thesis shows that they are closely related. Furthermore, group testing has the same mathematical formulation as non-linear binary source compression schemes that use the OR operator. In this thesis, the similarities between these problems are exploited. The group testing problem is aimed at identifying the defective subjects of a population (for example, in blood screening, contaminant detection, or DNA sequencing) with as few tests as possible. Group testing schemes can be divided into two groups: adaptive and non-adaptive schemes. The former generate tests sequentially and exploit the partial decoding results to attempt to reduce the overall number of tests required to label all members of the population, whereas non-adaptive schemes perform all the tests in parallel and attempt to label as many subjects as possible. Our contributions to the group testing problem are both theoretical and practical. We propose a novel adaptive scheme aimed at performing the testing process efficiently. Furthermore, we develop tools to predict the performance of both adaptive and non-adaptive schemes when the number of subjects to be tested is large. These tools allow the performance of adaptive and non-adaptive group testing schemes to be characterized without simulating them. The goal of the noiseless compressed sensing problem is to retrieve a signal from its linear projection onto a lower-dimensional space. This can be done only when the number of null components of the original signal is large enough. Compressed sensing deals with the design of sampling schemes and reconstruction algorithms that manage to reconstruct the original signal vector with as few samples as possible. In this thesis we pose the compressed sensing problem within a probabilistic framework, as opposed to the classical compressed sensing formulation. Recent results in the state of the art show that this approach is more efficient than the classical one. Our contributions to noiseless compressed sensing are both theoretical and practical. We deduce a necessary and sufficient matrix design condition to guarantee that the reconstruction is lossless. Regarding the design of practical schemes, we propose two novel reconstruction algorithms based on message passing over the sparse graph representation of the matrix, one of them with very low computational complexity.
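
    To make the OR-based formulation of non-adaptive group testing concrete, the sketch below simulates a random pooling design and a standard COMP-style decoder, in which any subject that appears in at least one negative test is declared non-defective. The population size, pooling density, and decoder are textbook assumptions, not the schemes proposed in the thesis.

import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_defective, n_tests = 200, 5, 60
x = np.zeros(n_subjects, dtype=bool)
x[rng.choice(n_subjects, n_defective, replace=False)] = True        # hidden defectives

A = rng.random((n_tests, n_subjects)) < 0.05                        # random pooling design
y = (A & x).any(axis=1)                                             # each test is an OR over its pool

# COMP decoding: any subject included in at least one negative test cannot be defective.
cleared = A[~y].any(axis=0)
candidates = ~cleared                                               # remaining candidate defectives

print("true defectives     :", np.flatnonzero(x))
print("declared candidates :", np.flatnonzero(candidates))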