
    Coordinated optimization of visual cortical maps: 1. Symmetry-based analysis

    In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of orientation columns precisely follows species-invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explaining cortical functional architecture raises the following questions: (i) What are the genuine ground states of candidate energy functionals, and how can they be calculated with precision and rigor? (ii) How do differences in candidate optimization principles affect the predicted map structure, and conversely, what can be learned about a hypothetical underlying optimization principle from observations of map structure? (iii) Is there a general way to analyze the coordinated organization of cortical maps predicted by optimization principles? To answer these questions we developed a general dynamical-systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. From basic symmetry assumptions we obtain a comprehensive phenomenological classification of possible inter-map coupling energies and examine representative examples. We show that each individual coupling energy leads to a different class of OP solutions with different correlations among the maps, such that inferences about the optimization principle from map layout appear viable. We systematically assess whether quantitative laws resembling experimental observations can result from the coordinated optimization of orientation columns with other feature maps.
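
As a rough illustration (assumed general form, not the paper's exact notation or derivation), a coupled optimization principle of this kind can be written as a sum of single-map energies plus an inter-map coupling term, whose admissible forms the symmetry analysis classifies; product-type and gradient-type couplings are representative examples.

```latex
% Sketch of a generic coupled energy functional: z(x) is the complex-valued
% OP map, o(x) a second scalar map such as OD, U an inter-map coupling term
% with strength \beta. The specific forms of U shown are illustrative.
E[z, o] \;=\; E_{\mathrm{OP}}[z] \;+\; E_{\mathrm{O}}[o]
        \;+\; \beta \int U\!\bigl(z(\mathbf{x}), o(\mathbf{x})\bigr)\, d^2x,
\qquad
U \;\in\; \bigl\{\, o^2 |z|^2,\ |\nabla o \cdot \nabla z|^2,\ \dots \bigr\}.
```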

    Learning scale-variant and scale-invariant features for deep image classification

    Convolutional Neural Networks (CNNs) require large image corpora to be trained on classification tasks. The variation in image resolutions, sizes of objects and patterns depicted, and image scales hampers CNN training and performance, because the task-relevant information varies over spatial scales. Previous work attempting to deal with such scale variations focused on encouraging scale-invariant CNN representations. However, scale-invariant representations are incomplete representations of images, because images contain scale-variant information as well. This paper addresses the combined development of scale-invariant and scale-variant representations. We propose a multi-scale CNN method to encourage the recognition of both types of features and evaluate it on a challenging image classification task involving task-relevant characteristics at multiple scales. The results show that our multi-scale CNN outperforms a single-scale CNN. This leads to the conclusion that encouraging the combined development of a scale-invariant and scale-variant representation in CNNs is beneficial to image recognition performance.
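
A minimal sketch of the idea (illustrative only, not the authors' architecture; the module and parameter choices below are assumptions): the same image is fed at several resolutions, so a shared convolutional trunk can develop scale-invariant features, while the concatenation of per-scale descriptors preserves scale-variant information for the classifier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleCNN(nn.Module):
    """Toy multi-scale CNN: one shared trunk applied to several resolutions."""
    def __init__(self, num_classes=10, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # one 64-d descriptor per scale
        )
        self.classifier = nn.Linear(64 * len(scales), num_classes)

    def forward(self, x):
        feats = []
        for s in self.scales:
            xs = x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode="bilinear", align_corners=False)
            feats.append(self.trunk(xs).flatten(1))
        # concatenating per-scale descriptors keeps scale-variant information
        return self.classifier(torch.cat(feats, dim=1))

logits = MultiScaleCNN()(torch.randn(2, 3, 64, 64))   # -> shape (2, 10)
```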

    The computational magic of the ventral stream

    I argue that the sample complexity of (biological, feedforward) object recognition is mostly due to geometric image transformations and conjecture that a main goal of the ventral stream – V1, V2, V4 and IT – is to learn-and-discount image transformations.

In the first part of the paper I describe a class of simple and biologically plausible memory-based modules that learn transformations from unsupervised visual experience. The main theorems show that these modules provide (for every object) a signature which is invariant to local affine transformations and approximately invariant to other transformations. I also prove that, in a broad class of hierarchical architectures, signatures remain invariant from layer to layer. The identification of these memory-based modules with complex (and simple) cells in visual areas leads to a theory of invariant recognition for the ventral stream.
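
A toy numerical sketch of the invariance argument (my construction, using circular shifts as a stand-in for the local transformation groups discussed in the paper): pooling the dot products of an input with stored, transformed templates over the whole group yields a signature that does not change when the input itself is transformed.

```python
import numpy as np

def signature(x, templates, shifts):
    """Pool dot products of x with each template over a group of circular shifts.

    Pooling over the whole shift group makes each component invariant to
    circular shifts of x.
    """
    sig = []
    for t in templates:
        responses = [np.dot(x, np.roll(t, s)) for s in shifts]
        sig.append(max(responses))       # max pooling; mean or moments also work
    return np.array(sig)

rng = np.random.default_rng(0)
n = 32
templates = [rng.standard_normal(n) for _ in range(4)]
shifts = range(n)                         # the full cyclic group of shifts

x = rng.standard_normal(n)
x_shifted = np.roll(x, 7)                 # transformed version of the same "image"

print(np.allclose(signature(x, templates, shifts),
                  signature(x_shifted, templates, shifts)))   # True
```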

In the second part, I outline a theory about hierarchical architectures that can learn invariance to transformations. I show that the memory complexity of learning affine transformations is drastically reduced in a hierarchical architecture that factorizes transformations in terms of the subgroup of translations and the subgroups of rotations and scalings. I then show how translations are automatically selected as the only learnable transformations during development by enforcing small apertures – e.g., small receptive fields – in the first layer.

In a third part I show that the transformations represented in each area can be optimized in terms of storage and robustness, as a consequence determining the tuning of the neurons in the area, largely independently (under normal conditions) of the statistics of natural images. I describe a model of learning that can be proved to have this property, linking in an elegant way the spectral properties of the signatures with the tuning of receptive fields in different areas. A surprising implication of these theoretical results is that the computational goals and some of the tuning properties of cells in the ventral stream may follow from symmetry properties (in the sense of physics) of the visual world through a process of unsupervised correlational learning, based on Hebbian synapses. In particular, simple and complex cells do not directly care about oriented bars: their tuning is a side effect of their role in translation invariance. Across the whole ventral stream the preferred features reported for neurons in different areas are only a symptom of the invariances computed and represented.
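
The link between Hebbian learning and spectral properties can be illustrated with a standard textbook example rather than the paper's own model: under Oja's rule, a linear neuron's weight vector converges to the principal eigenvector of the input correlation matrix, so the emerging "tuning" is dictated by the spectrum of the inputs rather than specified directly.

```python
import numpy as np

# Toy illustration (not the paper's model): Oja's Hebbian rule drives a single
# linear neuron toward the principal eigenvector of the input covariance.
rng = np.random.default_rng(1)
C = np.array([[3.0, 1.0], [1.0, 2.0]])          # input covariance (assumed)
L = np.linalg.cholesky(C)

w = rng.standard_normal(2)
eta = 0.01
for _ in range(20000):
    x = L @ rng.standard_normal(2)              # correlated input sample
    y = w @ x                                    # linear neuron output
    w += eta * y * (x - y * w)                   # Oja's rule (Hebb + normalisation)

top = np.linalg.eigh(C)[1][:, -1]                # principal eigenvector of C
print(np.abs(w @ top) / np.linalg.norm(w))       # ~1: weights align with it
```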

The results of each of the three parts stand on their own independently of each other. Together this theory-in-fieri makes several broad predictions, some of which are:

-invariance to small transformations in early areas (e.g., translations in V1) may underlie the stability of visual perception (suggested by Stu Geman);

-each cell’s tuning properties are shaped by visual experience of image transformations during developmental and adult plasticity;

-simple cells are likely to be the same population as complex cells, arising from different convergence of the Hebbian learning rule; the inputs to complex cells are dendritic branches with simple-cell properties;

-class-specific transformations are learned and represented at the top of the ventral stream hierarchy; thus class-specific modules such as faces, places and possibly body areas should exist in IT;

-the type of transformations that are learned from visual experience depends on the size of the receptive fields and thus on the area (layer in the models) – assuming that the size increases with layers;

-the mix of transformations learned in each area influences the tuning properties of the cells: oriented bars in V1+V2, radial and spiral patterns in V4, up to class-specific tuning in AIT (e.g., face-tuned cells);

-features must be discriminative and invariant: invariance to transformations is the primary determinant of the tuning of cortical neurons, rather than the statistics of natural images.

The theory is broadly consistent with the current version of HMAX. It explains it and extends it in terms of unsupervised learning, a broader class of transformation invariances, and higher-level modules. The goal of this paper is to sketch a comprehensive theory with little regard for mathematical niceties. If the theory turns out to be useful, there will be scope for deep mathematics, ranging from group representation tools to wavelet theory to the dynamics of learning.

    Using basic image features for texture classification

    Representing texture images statistically as histograms over a discrete vocabulary of local features has proven widely effective for texture classification tasks. Images are described locally by vectors of, for example, responses to some filter bank, and a visual vocabulary is defined as a partition of this descriptor-response space, typically based on clustering. In this paper, we investigate the performance of an approach which represents textures as histograms over a visual vocabulary which is defined geometrically, based on the Basic Image Features (BIFs) of Griffin and Lillholm (Proc. SPIE 6492(09):1-11, 2007), rather than by clustering. BIFs provide a natural mathematical quantisation of a filter-response space into qualitatively distinct types of local image structure. We also extend our approach to deal with intra-class variations in scale. Our algorithm is simple: there is no need for a pre-training step to learn a visual dictionary, as in methods based on clustering, and no tuning of parameters is required to deal with different datasets. We have tested our implementation on three popular and challenging texture datasets and find that it produces consistently good classification results on each, including what we believe to be the best reported for the KTH-TIPS database and equal best reported for the UIUCTex database.
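
A minimal sketch of this kind of pipeline (assumed details: the per-pixel BIF labelling, which is derived from derivative-of-Gaussian filter responses, is treated as given, and the distance and classifier choices are illustrative): build a normalized histogram of the discrete labels for each image and classify with a nearest-neighbour rule, with no dictionary-learning step.

```python
import numpy as np

def bif_histogram(label_map, n_types=7):
    """Normalised histogram of discrete local-structure labels over an image.

    `label_map` is assumed to hold one BIF-style integer label per pixel;
    computing those labels from filter responses is outside this sketch.
    """
    counts = np.bincount(label_map.ravel(), minlength=n_types).astype(float)
    return counts / counts.sum()

def classify(query_hist, train_hists, train_labels):
    """1-nearest-neighbour texture classification with the chi-squared distance."""
    eps = 1e-12
    d = [0.5 * np.sum((query_hist - h) ** 2 / (query_hist + h + eps))
         for h in train_hists]
    return train_labels[int(np.argmin(d))]
```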

    Automated verification of model transformations based on visual contracts

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10515-012-0102-y. Model-Driven Engineering promotes the use of models to conduct the different phases of software development. In this way, models are transformed between different languages and notations until code is generated for the final application. Hence, the construction of correct Model-to-Model (M2M) transformations becomes a crucial aspect of this approach. Even though many languages and tools have been proposed to build and execute M2M transformations, there is scarce support for specifying correctness requirements for such transformations in an implementation-independent way, i.e., irrespective of the actual transformation language used. In this paper we fill this gap by proposing a declarative language for the specification of visual contracts, enabling the verification of transformations defined with any transformation language. The verification is performed by compiling the contracts into QVT to detect disconformities of transformation results with respect to the contracts. As a proof of concept, we also report on a graphical modeling environment for the specification of contracts, and on its use for the verification of transformations in several case studies. This work has been funded by the Austrian Science Fund (FWF) under grant P21374-N13, the Spanish Ministry of Science under grants TIN2008-02081 and TIN2011-24139, and the R&D programme of the Madrid Region under project S2009/TIC-1650.

    Hyperbolic planforms in relation to visual edges and textures perception

    Get PDF
    We propose to use bifurcation theory and pattern formation as theoretical probes for various hypotheses about the neural organization of the brain. This allows us to make predictions about the kinds of patterns that should be observed in the activity of real brains through, e.g., optical imaging, and opens the door to the design of experiments to test these hypotheses. We study the specific problem of visual edge and texture perception and suggest that these features may be represented at the population level in the visual cortex as a specific second-order tensor, the structure tensor, perhaps within a hypercolumn. We then extend the classical ring model to this case and show that its natural framework is non-Euclidean hyperbolic geometry. This brings in the beautiful structure of its group of isometries and certain of its subgroups, which have a direct interpretation in terms of the organization of the neural populations that are assumed to encode the structure tensor. By studying the bifurcations of the solutions of the structure tensor equations, the analog of the classical Wilson and Cowan equations, under the assumption of invariance with respect to the action of these subgroups, we predict the appearance of characteristic patterns. These patterns can be described by what we call hyperbolic or H-planforms, which are reminiscent of Euclidean planar waves and of the planforms that were used in [1, 2] to account for some visual hallucinations. If these patterns could be observed through brain imaging techniques, they would reveal the built-in or acquired invariance of the neural organization to the action of the corresponding subgroups.
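
For reference, the structure tensor at a point of an image I is the (Gaussian-smoothed) outer product of the intensity gradient with itself; its eigenvalues and eigenvectors summarize local edge and texture structure, and this is the quantity the paper proposes is encoded at the population level, perhaps within a hypercolumn.

```latex
% Standard definition of the structure tensor of an image I, with I_x, I_y the
% partial derivatives and G_\sigma a Gaussian smoothing kernel.
\mathcal{T}(\mathbf{x}) \;=\; G_\sigma * \bigl(\nabla I \,\nabla I^{\top}\bigr)
\;=\; G_\sigma * \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}
```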