
    Learning Active Basis Models by EM-Type Algorithms

    The EM algorithm is a convenient tool for maximum likelihood model fitting when the data are incomplete or when there are latent variables or hidden states. In this review article we explain that the EM algorithm is a natural computational scheme for learning image templates of object categories when the learning is not fully supervised. We represent an image template by an active basis model, which is a linear composition of a selected set of localized, elongated and oriented wavelet elements that are allowed to slightly perturb their locations and orientations to account for the deformations of object shapes. The model can easily be learned when the objects in the training images are of the same pose and appear at the same location and scale; this is often called supervised learning. In the situation where the objects may appear at different unknown locations, orientations and scales in the training images, we have to incorporate these unknown locations, orientations and scales as latent variables in the image generation process and learn the template by EM-type algorithms. The E-step imputes the unknown locations, orientations and scales based on the currently learned template. This step can be considered self-supervision: the current template is used to recognize the objects in the training images. The M-step then relearns the template based on the imputed locations, orientations and scales, which is essentially the same as supervised learning. The EM learning process thus iterates between recognition and supervised learning. We illustrate this scheme with several experiments.
    Comment: Published at http://dx.doi.org/10.1214/09-STS281 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
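
    A minimal sketch of the EM alternation described above, assuming grayscale training images stored as NumPy arrays and imputing only the unknown location (not orientation or scale); the correlation score, the averaging M-step and all names below are illustrative stand-ins for the actual active basis computations, not the authors' implementation:

```python
import numpy as np

def em_template_learning(images, tpl_shape=(40, 40), n_iters=5):
    """Toy EM-style alternation between recognition and template re-learning.

    E-step: slide the current template over each image and impute the most
    likely object location by maximum correlation ("recognition").
    M-step: re-learn the template from the imputed alignments ("supervised
    learning").  A plain average stands in for re-selecting the wavelet
    elements of the actual active basis model.
    """
    h, w = tpl_shape
    template = images[0][:h, :w].astype(float)   # crude initialisation

    for _ in range(n_iters):
        aligned = []
        for img in images:
            best_score, best_xy = -np.inf, (0, 0)
            for y in range(img.shape[0] - h + 1):          # E-step: scan locations
                for x in range(img.shape[1] - w + 1):
                    patch = img[y:y + h, x:x + w]
                    score = np.sum((patch - patch.mean()) * (template - template.mean()))
                    if score > best_score:
                        best_score, best_xy = score, (y, x)
            y, x = best_xy
            aligned.append(img[y:y + h, x:x + w].astype(float))
        template = np.mean(aligned, axis=0)                # M-step: re-learn template
    return template
```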

    New contributions in overcomplete image representations inspired from the functional architecture of the primary visual cortex

    The present thesis investigates parallelisms between the functional architecture of primary visual areas and image processing methods. A first objective is to refine existing models of biological vision on the basis of information-theoretic arguments, and a second is to develop original solutions for image processing inspired by natural vision. The available data on visual systems comprise physiological and psychophysical studies, Gestalt psychology and statistics of natural images. The thesis is mostly centered on overcomplete representations (i.e. representations increasing the dimensionality of the data) for multiple reasons: first, because they allow overcoming existing drawbacks of critically sampled transforms; second, because biological vision models appear overcomplete; and third, because building efficient overcomplete representations raises challenging and current mathematical problems, in particular the problem of sparse approximation. The thesis first proposes a self-invertible log-Gabor wavelet transform inspired by the receptive fields and multiresolution arrangement of the simple cells in the primary visual cortex (V1). This transform shows promising abilities for noise elimination. Second, interactions observed between V1 cells, consisting of lateral inhibition and of facilitation between aligned cells, are shown to be efficient for extracting edges of natural images. Third, the redundancy introduced by the overcompleteness is reduced by a dedicated sparse approximation algorithm which builds a sparse representation of the images based on their edge content. For an additional decorrelation of the image information and to improve image compression performance, edges arranged along continuous contours are coded in a predictive manner through chains of coefficients, which offers an efficient representation of contours. Fourth, a study on contour completion using the tensor voting framework based on Gestalt psychology is presented; there, the use of iterations and of curvature information improves the robustness and the perceptual quality of the existing method.
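
    As an illustration of the kind of filters this work builds on, the sketch below constructs a single log-Gabor filter in the frequency domain and applies it by FFT; the parameter names and default values are assumptions, and the thesis's self-invertible multiresolution tiling over many scales and orientations is not reproduced here:

```python
import numpy as np

def log_gabor_filter(shape, f0, theta0, sigma_f=0.55, sigma_theta=np.pi / 8):
    """Frequency-domain log-Gabor filter for one scale (f0) and orientation (theta0).

    sigma_f is the ratio controlling the radial (log-frequency) bandwidth,
    sigma_theta the angular bandwidth; both values here are common defaults,
    not the thesis's settings.
    """
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    theta = np.arctan2(fy, fx)
    f[0, 0] = 1.0                                  # avoid log(0) at the DC bin
    radial = np.exp(-(np.log(f / f0) ** 2) / (2 * np.log(sigma_f) ** 2))
    radial[0, 0] = 0.0                             # log-Gabor filters have no DC response
    dtheta = np.angle(np.exp(1j * (theta - theta0)))   # wrapped angular distance
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    return radial * angular

def filter_image(img, f0=0.1, theta0=0.0):
    """Apply one log-Gabor filter via the FFT; the result is complex, and its
    real part gives the even-symmetric response."""
    G = log_gabor_filter(img.shape, f0, theta0)
    return np.fft.ifft2(np.fft.fft2(img) * G)
```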

    Are V1 simple cells optimized for visual occlusions? A comparative study

    Abstract: Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, the occlusion of image components, is not considered by these models. Here we ask whether occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find that the image encodings and receptive fields predicted by the models differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study therefore suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
    Author Summary: The statistics of our visual world are dominated by occlusions. Almost every image processed by our brain consists of mutually occluding objects, animals and plants. Our visual cortex is optimized through evolution and throughout our lifespan for such stimuli. Yet the standard computational models of primary visual processing do not consider occlusions. In this study, we ask what effects visual occlusions may have on the predicted response properties of simple cells, which are the first cortical processing units for images. Our results suggest that recently observed differences between experiments and the predictions of the standard simple cell models can be attributed to occlusions. The most significant consequence of occlusions is the prediction of many cells sensitive to center-surround stimuli. Experimentally, large quantities of such cells have been observed since new techniques (reverse correlation) came into use. Without occlusions, such fields are only obtained for specific settings, and none of the seminal studies (sparse coding, ICA) predicted them. In contrast, the new type of response naturally emerges as soon as occlusions are considered. In comparison with recent in vivo experiments, we find that occlusive models are consistent with the high percentages of center-surround simple cells observed in macaque monkeys, ferrets and mice.
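
    The single modelling difference the abstract names, the component superposition rule, can be illustrated with a toy generative step; both functions below are hypothetical simplifications and do not reproduce the study's inference of receptive fields, sparsity and noise:

```python
import numpy as np

def linear_superposition(components, weights):
    """Standard sparse-coding assumption: weighted components add up linearly."""
    return np.tensordot(weights, components, axes=1)            # (H, W) image

def occlusive_superposition(components, weights):
    """Occlusion-style combination: at every pixel only the strongest weighted
    component is visible, as if opaque objects covered each other instead of
    blending transparently."""
    weighted = weights[:, None, None] * components              # (K, H, W)
    return weighted.max(axis=0)

# Example: two overlapping bars combined under both superposition rules.
components = np.zeros((2, 8, 8))
components[0, 3, :] = 1.0        # horizontal bar
components[1, :, 4] = 1.0        # vertical bar
weights = np.array([1.0, 0.5])
img_linear = linear_superposition(components, weights)   # bars add where they cross
img_occl = occlusive_superposition(components, weights)  # stronger bar hides the other
```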

    Sparse Codes for Speech Predict Spectrotemporal Receptive Fields in the Inferior Colliculus

    We have developed a sparse mathematical representation of speech that minimizes the number of active model neurons needed to represent typical speech sounds. The model learns several well-known acoustic features of speech such as harmonic stacks, formants, onsets and terminations, but we also find more exotic structures in the spectrogram representation of sound such as localized checkerboard patterns and frequency-modulated excitatory subregions flanked by suppressive sidebands. Moreover, several of these novel features resemble neuronal receptive fields reported in the Inferior Colliculus (IC), as well as auditory thalamus and cortex, and our model neurons exhibit the same tradeoff in spectrotemporal resolution as has been observed in IC. To our knowledge, this is the first demonstration that receptive fields of neurons in the ascending mammalian auditory pathway beyond the auditory nerve can be predicted based on coding principles and the statistical properties of recorded sounds.
    Comment: For Supporting Information, see PLoS website: http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.100259
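
    A rough sketch of the general recipe implied here, sparse dictionary learning on spectrotemporal patches of a speech spectrogram, using scikit-learn as a stand-in for the paper's own model; the patch size, preprocessing and sparsity penalty are assumptions made for illustration:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_spectrotemporal_features(audio, fs, patch_shape=(32, 16), n_features=64):
    """Learn STRF-like features by sparse coding of spectrogram patches.

    `audio` is a 1-D waveform at sampling rate `fs`, e.g. loaded with
    scipy.io.wavfile.read from some speech recording (file name hypothetical).
    """
    _, _, S = spectrogram(audio, fs=fs, nperseg=256, noverlap=192)
    S = np.log(S + 1e-10)                       # log-power spectrogram

    # cut the spectrogram into overlapping spectrotemporal patches
    fh, tw = patch_shape
    patches = []
    for f0 in range(0, S.shape[0] - fh, fh // 2):
        for t0 in range(0, S.shape[1] - tw, tw // 2):
            p = S[f0:f0 + fh, t0:t0 + tw]
            patches.append((p - p.mean()).ravel())
    X = np.array(patches)

    # sparse dictionary learning: each patch is approximated by a few
    # "active model neurons" (dictionary atoms)
    dico = MiniBatchDictionaryLearning(n_components=n_features, alpha=1.0)
    codes = dico.fit_transform(X)
    fields = dico.components_.reshape(n_features, fh, tw)   # learned features
    return fields, codes
```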

    Detection and classification of non-stationary signals using sparse representations in adaptive dictionaries

    Automatic classification of non-stationary radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such signals are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models, making feature extraction and classification difficult. This thesis proposes an adaptive classification approach for poorly characterized targets and backgrounds based on sparse representations in non-analytical dictionaries learned from data. Conventional analytical orthogonal dictionaries, e.g., the Short-Time Fourier and Wavelet Transforms, can be suboptimal for classification of non-stationary signals, as they provide a rigid tiling of the time-frequency space and are not specifically designed for a particular signal class. They generally do not lead to sparse decompositions (i.e., with very few non-zero coefficients), and their use in classification requires separate feature selection algorithms. Pursuit-type decompositions in analytical overcomplete (non-orthogonal) dictionaries yield sparse representations by design and work well for signals that are similar to the dictionary elements. The pursuit search, however, has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. One such overcomplete analytical dictionary method is also analyzed in this thesis for comparative purposes. The main thrust of the thesis is learning discriminative RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics. A pursuit search is used over the learned dictionaries to generate sparse classification features in order to identify time windows that contain a target pulse. Two state-of-the-art dictionary learning methods, the K-SVD algorithm and Hebbian learning, are compared in terms of their classification performance as a function of dictionary training parameters. Additionally, a novel hybrid dictionary algorithm is introduced, demonstrating better performance and higher robustness to noise. The issue of dictionary dimensionality is explored, and this thesis demonstrates that undercomplete learned dictionaries are suitable for non-stationary RF classification. Results on simulated data sets with varying background clutter and noise levels are presented. Lastly, unsupervised classification with undercomplete learned dictionaries is also demonstrated in satellite imagery analysis.
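
    A possible sketch of the pipeline described above (learned dictionaries plus a pursuit search over time windows); scikit-learn's DictionaryLearning stands in for the K-SVD and Hebbian learners the thesis actually compares, and the class-wise reconstruction rule is an illustrative assumption rather than the thesis's classifier:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def train_class_dictionaries(windows_by_class, n_atoms=32):
    """Learn one (possibly undercomplete) dictionary per signal class.

    windows_by_class maps a class label to an array of shape
    (n_windows, window_len) holding training time windows.
    """
    dicts = {}
    for label, X in windows_by_class.items():
        model = DictionaryLearning(n_components=n_atoms, transform_algorithm='omp')
        model.fit(X)
        dicts[label] = model.components_          # (n_atoms, window_len)
    return dicts

def classify_window(x, dicts, n_nonzero=5):
    """Assign a time window to the class whose dictionary reconstructs it best
    from a sparse (pursuit) code."""
    errors = {}
    for label, D in dicts.items():
        code = sparse_encode(x[None, :], D, algorithm='omp',
                             n_nonzero_coefs=n_nonzero)
        errors[label] = np.linalg.norm(x - code @ D)
    return min(errors, key=errors.get)
```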

    Comparison of Sparse Coding and JPEG Coding Schemes for Blurred Retinal Images

    Overcomplete representations are currently one of the most actively researched areas in signal processing because of their strong potential to generate sparse representations of signals. Sparse representation implies that a given signal can be represented with components that are only rarely significantly active. It has been strongly argued that the mammalian visual system relies on sparse and overcomplete representations: the primary visual cortex produces overcomplete responses to an input signal, which leads to the use of sparse neuronal activity for further processing. This work investigates sparse coding with an overcomplete basis set, believed to be the strategy employed by the mammalian visual system for efficient coding of natural images. It analyzes the Sparse Code Learning algorithm, in which a given image is represented as a linear superposition of sparse, statistically independent events on a set of overcomplete basis functions; the algorithm trains and adapts the overcomplete basis functions so as to represent any given image in terms of sparse structures. The second part of the work analyzes an inhibition-based sparse coding model in which Gabor-based overcomplete representations are used to represent the image; an iterative inhibition algorithm based on competition between neighboring transform coefficients then selects a subset of Gabor functions so as to represent the given image with a sparse set of coefficients. The developed models are applied to image compression, and the achievable levels of compression are tested. Research in these areas so far indicates that sparse coding algorithms are inefficient at representing sharp, high-frequency image features, so this work analyzes the performance of these algorithms only on natural images that do not have sharp features and compares the compression results with current industry-standard coding schemes such as JPEG and JPEG 2000. It also models the characteristics of an image falling on the retina after the distortion effects of the eye, applies the developed algorithms to these images, and tests the compression results.
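
    The iterative inhibition step described above can be sketched as a simple competition between neighboring coefficients; the neighborhood kernel, the inhibition strength and the use of a single 2-D coefficient map (rather than a full overcomplete Gabor representation) are assumptions made purely for illustration:

```python
import numpy as np

def iterative_inhibition(coeffs, n_iters=10, strength=0.5):
    """Toy competition between neighboring transform coefficients.

    Each coefficient is inhibited in proportion to the magnitude of its
    four nearest neighbors; after a few iterations only locally dominant
    coefficients survive, yielding a sparser subset of the original code.
    """
    c = np.abs(coeffs).astype(float)
    neighbor_sum = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                              np.roll(a, 1, 1) + np.roll(a, -1, 1))
    for _ in range(n_iters):
        inhibition = strength * neighbor_sum(c)
        c = np.clip(c - inhibition, 0.0, None)
    mask = c > 0
    return np.where(mask, coeffs, 0.0)     # sparse subset of the coefficients
```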

    Role of homeostasis in learning sparse representations

    Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, one where a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
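
    A rough sketch of how a homeostatic gain could be folded into a matching-pursuit coding loop so that atoms end up being selected with roughly equal ("fair") probability; the gain update rule and all parameters here are assumptions and differ from the paper's mechanism, which equalizes the full distribution of coefficients:

```python
import numpy as np

def sparse_code_with_homeostasis(X, D, n_active=10, eta=0.01, n_epochs=5):
    """Matching pursuit with a per-atom homeostatic gain.

    X: (n_samples, dim) image patches; D: (n_atoms, dim) dictionary with
    unit-norm rows.  Atoms selected more often than average get their gain
    lowered, so the competition between neurons stays fair over learning.
    """
    n_atoms = D.shape[0]
    gain = np.ones(n_atoms)
    usage = np.full(n_atoms, 1.0 / n_atoms)        # running selection frequency

    for _ in range(n_epochs):
        for x in X:
            residual = x.astype(float).copy()
            counts = np.zeros(n_atoms)
            for _ in range(n_active):
                corr = gain * (D @ residual)       # gains bias the competition
                k = np.argmax(np.abs(corr))
                coef = D[k] @ residual
                residual = residual - coef * D[k]
                counts[k] += 1
            # update running usage and nudge gains toward equal selection rates
            usage = (1 - eta) * usage + eta * counts / n_active
            gain = (1.0 / n_atoms) / (usage + 1e-12)
    return gain, usage
```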