
    Edge Potential Functions (EPF) and Genetic Algorithms (GA) for Edge-Based Matching of Visual Objects

    Edges are known to be a semantically rich representation of the contents of a digital image. Nevertheless, their use in practical applications is sometimes limited by computational cost and complexity constraints. This paper presents a new approach to matching visual objects in digital images that combines the concept of Edge Potential Functions (EPF) with a powerful matching tool based on Genetic Algorithms (GA). EPFs can be computed easily from an edge map and provide a kind of attractive pattern for a matching contour, which is conveniently exploited by the GA. Several tests were performed in the framework of different image-matching applications, and the results clearly show the potential of the proposed method compared with state-of-the-art methodologies.
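To make the approach concrete, here is a minimal sketch of the matching loop, assuming a distance-transform-based potential exp(-d/tau) and a simple truncation-selection GA over translation and scale; both choices are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: EPF from an edge map + GA search over contour placement.
# The potential shape (exp(-d/tau)), the GA operators, and all settings
# below are illustrative assumptions, not the authors' exact method.
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_potential(edge_map, tau=5.0):
    """Attractive field: high near edge pixels, decaying with distance."""
    d = distance_transform_edt(edge_map == 0)  # distance to nearest edge pixel
    return np.exp(-d / tau)

def fitness(epf, contour, params):
    """Mean potential collected by a translated and scaled model contour."""
    tx, ty, s = params
    pts = np.rint(contour * s + [tx, ty]).astype(int)   # contour: (K, 2) x,y
    h, w = epf.shape
    ok = (pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)
    return epf[pts[ok, 1], pts[ok, 0]].mean() if ok.any() else 0.0

def ga_match(epf, contour, pop=80, gens=100, seed=0):
    """Evolve (tx, ty, scale) toward placements lying on strong edges."""
    rng = np.random.default_rng(seed)
    h, w = epf.shape
    lo, hi = np.array([0.0, 0.0, 0.5]), np.array([w, h, 2.0])
    P = rng.uniform(lo, hi, size=(pop, 3))
    for _ in range(gens):
        f = np.array([fitness(epf, contour, p) for p in P])
        elite = P[np.argsort(f)[::-1][: pop // 2]]      # truncation selection
        a = elite[rng.integers(len(elite), size=pop // 2)]
        b = elite[rng.integers(len(elite), size=pop // 2)]
        kids = (a + b) / 2 + rng.normal(0.0, 2.0, a.shape)  # blend + mutate
        P = np.vstack([elite, np.clip(kids, lo, hi)])
    f = np.array([fitness(epf, contour, p) for p in P])
    return P[f.argmax()]
```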

    Class-Based Feature Matching Across Unrestricted Transformations

    We develop a novel method for class-based feature matching across large changes in viewing conditions. The method is based on the property that when objects share a similar part, the similarity is preserved across viewing conditions. Given a feature and a training set of object images, we first identify the subset of objects that share this feature. The transformation of the feature's appearance across viewing conditions is determined mainly by properties of the feature itself, rather than of the object in which it is embedded. Therefore, the transformed feature will be shared by approximately the same set of objects. Based on this consistency requirement, corresponding features can be reliably identified from a set of candidate matches. Unlike previous approaches, the proposed scheme compares feature appearances only within similar viewing conditions, rather than across different viewing conditions. As a result, the scheme is not restricted to locally planar objects or affine transformations, and it does not require examples of correct matches. We show that the proposed method yields a dense set of accurate correspondences. Experimental comparisons demonstrate that matching accuracy is significantly improved over previous schemes. Finally, we show that the scheme can be successfully used for invariant object recognition.
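A hedged sketch of the consistency test described above; the hypothetical `detect(feature, image)` similarity function, the Jaccard overlap score, and the 0.8 threshold are illustrative stand-ins, not the authors' implementation.

```python
# Sketch: rank candidate matches by how well the set of training objects
# sharing the transformed feature agrees with the set sharing the original.
# `detect` is a hypothetical appearance-similarity function (assumption).

def objects_sharing(feature, training_images, detect, threshold=0.8):
    """Subset of object ids whose image contains the feature."""
    return {oid for oid, img in training_images.items()
            if detect(feature, img) >= threshold}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def best_match(feature, candidates, train_view1, train_view2, detect):
    """Pick the candidate with the most consistent sharing set.

    Note that `feature` is only compared against view-1 images and each
    candidate only against view-2 images, so appearances are compared
    within similar viewing conditions, never across them.
    """
    ref = objects_sharing(feature, train_view1, detect)
    scores = [(jaccard(ref, objects_sharing(c, train_view2, detect)), i)
              for i, c in enumerate(candidates)]
    best_score, best_i = max(scores)
    return candidates[best_i], best_score
```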

    The computational magic of the ventral stream

    I argue that the sample complexity of (biological, feedforward) object recognition is mostly due to geometric image transformations and conjecture that a main goal of the ventral stream – V1, V2, V4 and IT – is to learn-and-discount image transformations.

In the first part of the paper I describe a class of simple and biologically plausible memory-based modules that learn transformations from unsupervised visual experience. The main theorems show that these modules provide (for every object) a signature which is invariant to local affine transformations and approximately invariant to other transformations. I also prove that, in a broad class of hierarchical architectures, signatures remain invariant from layer to layer. The identification of these memory-based modules with complex (and simple) cells in visual areas leads to a theory of invariant recognition for the ventral stream.
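As a toy illustration of such a memory-based module (not the paper's formal construction), take 1-D signals under circular shifts: store the translation orbit of each template during "development", then pool the dot products of a new image with each stored orbit. Transforming the input only permutes the dot products, so the pooled signature is exactly shift-invariant.

```python
# Toy memory-based module: orbits of templates + pooled dot products.
# Circular shifts stand in for translations; max/mean pooling is an
# illustrative choice of invariant statistic.
import numpy as np

def store_orbit(template):
    """All circular shifts of a 1-D template (its translation orbit)."""
    return np.stack([np.roll(template, k) for k in range(len(template))])

def signature(image, orbits):
    """One (max, mean) pair per template, pooled over its stored orbit."""
    sig = []
    for orbit in orbits:
        dots = orbit @ image        # shifting `image` only permutes `dots`
        sig += [dots.max(), dots.mean()]
    return np.array(sig)

rng = np.random.default_rng(1)
orbits = [store_orbit(rng.normal(size=32)) for _ in range(3)]
x = rng.normal(size=32)
print(np.allclose(signature(x, orbits),
                  signature(np.roll(x, 7), orbits)))  # True: invariant
```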

In the second part, I outline a theory of hierarchical architectures that can learn invariance to transformations. I show that the memory complexity of learning affine transformations is drastically reduced in a hierarchical architecture that factorizes transformations in terms of the subgroup of translations and the subgroups of rotations and scalings. I then show how translations are automatically selected as the only learnable transformations during development by enforcing small apertures – e.g. small receptive fields – in the first layer.
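A back-of-the-envelope count conveys the flavor of the saving (the paper's formal analysis is more careful than this): sampling each transformation parameter with N values, a monolithic module must store on the order of N^4 frames to cover x-shift, y-shift, rotation and scale jointly, whereas a hierarchy that handles the two translations in its first layer and rotation/scale above pays roughly 2N^2.

```python
# Illustrative memory count only; not the paper's formal analysis.
N = 100                            # samples per transformation parameter
monolithic = N ** 4                # x-shift * y-shift * rotation * scale
hierarchical = N**2 + N**2         # 2-D translations first, rot/scale above
print(monolithic, hierarchical)    # 100000000 vs 20000
```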

In the third part I show that the transformations represented in each area can be optimized in terms of storage and robustness, as a consequence determining the tuning of the neurons in the area, largely independently (under normal conditions) of the statistics of natural images. I describe a model of learning that can be proved to have this property, linking in an elegant way the spectral properties of the signatures with the tuning of receptive fields in different areas. A surprising implication of these theoretical results is that the computational goals and some of the tuning properties of cells in the ventral stream may follow from symmetry properties (in the sense of physics) of the visual world through a process of unsupervised correlational learning, based on Hebbian synapses. In particular, simple and complex cells do not directly care about oriented bars: their tuning is a side effect of their role in translation invariance. Across the whole ventral stream, the preferred features reported for neurons in different areas are only a symptom of the invariances computed and represented.

The results of each of the three parts stand on their own independently of each other. Together this theory-in-fieri makes several broad predictions, some of which are:

-invariance to small transformations in early areas (e.g. translations in V1) may underlie the stability of visual perception (suggested by Stu Geman);

-each cell’s tuning properties are shaped by visual experience of image transformations during developmental and adult plasticity;

-simple cells are likely to be the same population as complex cells, arising from different convergence of the Hebbian learning rule; the inputs to complex cells are dendritic branches with simple-cell properties;

-class-specific transformations are learned and represented at the top of the ventral stream hierarchy; thus class-specific modules such as faces, places and possibly body areas should exist in IT;

-the types of transformations that are learned from visual experience depend on the size of the receptive fields and thus on the area (layer in the models) – assuming that the size increases with layers;

-the mix of transformations learned in each area influences the tuning properties of the cells – oriented bars in V1+V2, radial and spiral patterns in V4, up to class-specific tuning in AIT (e.g. face-tuned cells);

-features must be discriminative and invariant: invariance to transformations is the primary determinant of the tuning of cortical neurons rather than statistics of natural images.

The theory is broadly consistent with the current version of HMAX. It explains and extends it in terms of unsupervised learning, a broader class of transformation invariance, and higher-level modules. The goal of this paper is to sketch a comprehensive theory with little regard for mathematical niceties. If the theory turns out to be useful, there will be scope for deep mathematics, ranging from group representation tools to wavelet theory to the dynamics of learning.

    The Computational Magic of the Ventral Stream: Towards a Theory

    I conjecture that the sample complexity of object recognition is mostly due to geometric image transformations and that a main goal of the ventral stream – V1, V2, V4 and IT – is to learn-and-discount image transformations. The most surprising implication of the theory emerging from these assumptions is that the computational goals and detailed properties of cells in the ventral stream follow from symmetry properties of the visual world through a process of unsupervised correlational learning.

From the assumption of a hierarchy of areas with receptive fields of increasing size, the theory predicts that the size of the receptive fields determines which transformations are learned during development and then factored out during normal processing; that the transformation represented in each area determines the tuning of the neurons in the area, independently of the statistics of natural images; and that class-specific transformations are learned and represented at the top of the ventral stream hierarchy.

Some of the main predictions of this theory-in-fieri are:
1. the types of transformations that are learned from visual experience depend on the aperture size (measured in terms of wavelength) and thus on the area (layer in the models) – assuming that the aperture size increases with layers;
2. the mix of transformations learned determines the properties of the receptive fields – oriented bars in V1+V2, radial and spiral patterns in V4, up to class-specific tuning in AIT (e.g. face-tuned cells);
3. invariance to small translations in V1 may underlie stability of visual perception;
4. class-specific modules – such as faces, places and possibly body areas – should exist in IT to process images of these object classes.

    Möbius Invariants of Shapes and Images

    Identifying when different images are of the same object despite changes caused by imaging technologies, or processes such as growth, has many applications in fields such as computer vision and biological image analysis. One approach to this problem is to identify the group of possible transformations of the object and to find invariants to the action of that group, meaning that the object has the same values of the invariants despite the action of the group. In this paper we study the invariants of planar shapes and images under the Möbius group PSL(2, ℂ), which arises in the conformal camera model of vision and may also correspond to neurological aspects of vision, such as grouping of lines and circles. We survey properties of invariants that are important in applications, and the known Möbius invariants, and then develop an algorithm by which shapes can be recognised that is Möbius- and reparametrization-invariant, numerically stable, and robust to noise. We demonstrate the efficacy of this new invariant approach on sets of curves, and then develop a Möbius-invariant signature of grey-scale images.
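For readers new to Möbius invariants, the classical example is the cross-ratio of four points in the extended complex plane, which every map z -> (az + b)/(cz + d) with ad - bc != 0 leaves unchanged; the snippet below checks this numerically. It is only the simplest such invariant, not the paper's full shape signature.

```python
# The cross-ratio, the classical invariant of the Möbius group PSL(2, C).
import numpy as np

def cross_ratio(z1, z2, z3, z4):
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

def mobius(z, a, b, c, d):
    assert a * d - b * c != 0       # must be a genuine Möbius map
    return (a * z + b) / (c * z + d)

rng = np.random.default_rng(2)
z = rng.normal(size=4) + 1j * rng.normal(size=4)     # four random points
a, b, c, d = rng.normal(size=4) + 1j * rng.normal(size=4)
w = [mobius(p, a, b, c, d) for p in z]
print(np.isclose(cross_ratio(*z), cross_ratio(*w)))  # True: invariant
```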

    Affine Registration of Label Maps in Label Space

    Two key aspects of coupled multi-object shape analysis and atlas generation are the choice of representation and the subsequent registration methods used to align the sample set. For example, a typical brain image can be labeled into three structures: grey matter, white matter and cerebrospinal fluid. Many manipulations such as interpolation, transformation, smoothing, or registration need to be performed on these images before they can be used in further analysis. Current techniques for such analysis tend to trade off performance between the two tasks, performing well for one task but developing problems when used for the other.

This article proposes a representation that is both flexible and well suited for both tasks. We propose to map object labels to vertices of a regular simplex, e.g. the unit interval for two labels, a triangle for three labels, a tetrahedron for four labels, etc. This representation, which is routinely used in fuzzy classification, is ideally suited for representing and registering multiple shapes. On closer examination, this representation reveals several desirable properties: algebraic operations may be done directly, label uncertainty is expressed as a weighted mixture of labels (probabilistic interpretation), interpolation is unbiased toward any label or the background, and registration may be performed directly.

We demonstrate these properties by using label space in a gradient-descent-based registration scheme to obtain a probabilistic atlas. While straightforward, this iterative method is very slow, can get stuck in local minima, and depends heavily on the initial conditions. To address these issues, two fast methods are proposed which serve as coarse registration schemes, following which the iterative descent method can be used to refine the results. Further, we derive an analytical formulation for direct computation of the "group mean" from the parameters of pairwise registration of all the images in the sample set. We show results on richly labeled 2D and 3D data sets.
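A minimal sketch of the representation, assuming one-hot rows of the identity as the (mutually equidistant) simplex vertices and nearest-vertex decoding; the toy 3-label image is purely illustrative, not the article's implementation.

```python
# Label space sketch: labels -> simplex vertices, so smoothing and
# interpolation act directly and mixtures read as label probabilities.
import numpy as np

def to_label_space(label_image, num_labels):
    """Map an integer label map (H, W) to simplex coordinates (H, W, L)."""
    return np.eye(num_labels)[label_image]

def from_label_space(ls_image):
    """Decode: nearest simplex vertex, i.e. argmax coordinate."""
    return ls_image.argmax(axis=-1)

labels = np.array([[0, 0, 1],
                   [2, 1, 1]])   # e.g. CSF / grey matter / white matter
ls = to_label_space(labels, 3)

# Interpolation is unbiased: halfway between labels 0 and 2 is an even
# mixture of those two, with no pull toward label 1 or the background.
print(0.5 * ls[0, 0] + 0.5 * ls[1, 0])               # [0.5 0.  0.5]
print(np.array_equal(from_label_space(ls), labels))  # True (round trip)
```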