
    Combining Feature Selection and Integration—A Neural Model for MT Motion Selectivity

    Background: The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments, and many models have attempted to replicate the main mechanisms. Two different core conceptual approaches have been developed to explain the findings. In integrationist models the key mechanism for achieving pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on motion computation at positions with 2D features. Methodology/Principal Findings: Recent experiments revealed that neither concept alone is sufficient to explain all experimental data and that most existing models cannot account for the complex behaviour found. For stimuli such as type II plaids, MT pattern selectivity changes over time from the vector average to the direction computed by an intersection-of-constraints rule or by feature tracking. The spatial arrangement of the stimulus within the receptive field of an MT cell also plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined into one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations, which are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem. Conclusions/Significance: We propose a new neural model for MT pattern computation and motion disambiguation that i
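    The vector-average versus intersection-of-constraints (IOC) distinction mentioned in the abstract can be made concrete with a few lines of NumPy. The sketch below is an illustration of the two readout rules only, not the paper's model; the function names and the example grating parameters are our own. Each component grating with unit normal n_i and normal speed s_i constrains the pattern velocity v to satisfy v · n_i = s_i; the IOC solution is the velocity satisfying all constraints, while the vector average simply averages the component normal velocities.

    ```python
    import numpy as np

    def ioc_velocity(normals, speeds):
        # Intersection of constraints: solve v . n_i = s_i for the
        # pattern velocity (least squares handles >2 gratings too).
        A = np.asarray(normals, dtype=float)
        b = np.asarray(speeds, dtype=float)
        v, *_ = np.linalg.lstsq(A, b, rcond=None)
        return v

    def vector_average(normals, speeds):
        # Average of the component normal velocities s_i * n_i.
        return np.mean([s * np.asarray(n, float)
                        for n, s in zip(normals, speeds)], axis=0)

    # Hypothetical type II plaid: both component normals lie on the same
    # side, with unequal speeds, so IOC and vector average disagree.
    n1 = np.array([np.cos(np.deg2rad(10)), np.sin(np.deg2rad(10))])
    n2 = np.array([np.cos(np.deg2rad(40)), np.sin(np.deg2rad(40))])
    v_ioc = ioc_velocity([n1, n2], [1.0, 1.2])
    v_avg = vector_average([n1, n2], [1.0, 1.2])
    ```

    For this example the two rules yield clearly different pattern directions, which is the behaviour the abstract describes MT selectivity shifting between over time.
    
    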

    Sketching Shiny Surfaces: 3D Shape Extraction and Depiction of Specular Surfaces

    Many materials, including water, plastic, and metal, have specular surface characteristics. Specular reflections have commonly been considered a nuisance for the recovery of object shape. However, the way that reflections are distorted across the surface depends crucially on 3D curvature, suggesting that they could in fact be a useful source of information. Indeed, observers can have a vivid impression of 3D shape when an object is perfectly mirrored (i.e. the image contains nothing but specular reflections). This raises the question of what mechanisms our visual system uses to extract this 3D shape information from a perfectly mirrored object. In this paper we propose a biologically motivated recurrent model for the extraction of visual features relevant for the perception of 3D shape from images of mirrored objects. We analyze the results of computational model simulations qualitatively and quantitatively, and show that bidirectional recurrent information processing leads to better results than pure feedforward processing. Furthermore, we utilize the model output to create a rough non-photorealistic sketch representation of a mirrored object, which emphasizes image features that are essential for 3D shape perception (e.g. the occluding contour and regions of high curvature). Moreover, this sketch illustrates that the model generates a representation of object features independent of the surrounding scene reflected in the mirrored object.
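    The advantage of recurrent feedback over pure feedforward processing can be illustrated with a toy loop (a drastic simplification we supply for illustration; the function and parameter names are our own, not the paper's architecture): lateral context is collected around each unit and fed back to multiplicatively enhance bottom-up responses that match it, so spatially coherent structure is strengthened relative to isolated outliers.

    ```python
    import numpy as np

    def recurrent_refine(bottom_up, lateral_kernel, gain=2.0, iters=10):
        # Toy recurrent loop: context gathered via lateral connections is
        # fed back to multiplicatively modulate matching bottom-up input.
        act = bottom_up.copy()
        for _ in range(iters):
            context = np.convolve(act, lateral_kernel, mode="same")
            act = bottom_up * (1.0 + gain * context)   # modulating feedback
            act /= act.max() + 1e-12                   # divisive normalization
        return act

    # Coherent run of activity (indices 5-9) plus one isolated spike (15).
    bu = np.zeros(20)
    bu[5:10] = 1.0
    bu[15] = 1.0
    refined = recurrent_refine(bu, np.ones(3) / 3.0)
    ```

    After a few iterations the coherent region retains its activity while the unsupported spike is progressively suppressed, mirroring the qualitative effect of the reentrant enhancement the abstract describes.
    
    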

    Mechanisms of Recovering Shape Properties from Perfectly Mirrored Objects

    When we look at a perfectly mirrored object, such as a polished kettle, we generally have a remarkably strong impression of its 3D shape. This leads to the question of whether there is a mechanism to completely recover the shape of a mirrored object from a single static image (e.g. a photograph). Without explicit knowledge of the surrounding scene, this is theoretically impossible because many possible combinations of illumination from the surrounding scene and surface properties can generate the same image (i.e. it is an ill-posed problem). Therefore, the only way to extract information about object shape is to constrain the possible combinations of object shape and illumination. If we assume that the reflected scene contains isotropic contrast information, then there is a close relation between the surface curvature of an object (specifically the second derivatives of the surface function) and the distortions of the reflected scenery [1]. In this contribution we present two different computational methods for analysing images of mirrored objects to recover certain properties of 3D shape. Our first method is a statistical approach, based on principal components of the image gradient computed in a local neighborhood, known as the structure tensor. In this context, the eigenvectors of the tensor tell us the orientation of curvature and the eigenvalues of the tensor give us information about the anisotropy of curvature (ratio of maximal and minimal curvature). Our second method is a biologically motivated approach, based on Gabor filters and grouping. We apply an iterative refinement in a simple model of cortical feedforward/feedback processing [2]. Context information is collected by cells with long-range lateral connections. This information is fed back to enhance regions where local information matches the top-down reentry pattern provided by the larger context. 
Our approach shows that, under the assumption mentioned above, it is possible to recover two characteristic curvature properties of mirrored objects: (i) the direction of maximal and minimal curvature and (ii) the anisotropy of curvature. Our simulations demonstrate that both methods (the statistical and the biologically motivated approach) lead to comparable results, and that the models perform well even if the assumption of isotropic contrasts in the scenery is violated to a certain degree.
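The structure-tensor analysis described above can be sketched in a few lines of NumPy. This is a minimal illustration with names of our own choosing: it averages the tensor over a whole patch instead of smoothing with a local window, and uses the normalized eigenvalue difference as one common anisotropy measure. The eigenvector of the largest eigenvalue gives the dominant gradient orientation.

```python
import numpy as np

def structure_tensor(patch):
    """Average structure tensor of an image patch (pure NumPy sketch).

    Returns the dominant gradient orientation (eigenvector of the
    largest eigenvalue) and a normalized anisotropy measure.
    """
    gy, gx = np.gradient(patch.astype(float))
    J = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                  [np.mean(gx * gy), np.mean(gy * gy)]])
    evals, evecs = np.linalg.eigh(J)                      # ascending order
    orientation = np.arctan2(evecs[1, -1], evecs[0, -1])  # dominant direction
    anisotropy = (evals[-1] - evals[0]) / (evals[-1] + evals[0] + 1e-12)
    return orientation, anisotropy

# Synthetic test pattern: vertical stripes varying along x only, so the
# dominant gradient direction is horizontal and anisotropy is maximal.
x = np.linspace(0, 20, 64)
stripes = np.tile(np.sin(x), (64, 1))
ori, aniso = structure_tensor(stripes)
```

On this strongly oriented pattern the recovered orientation is horizontal (up to sign) and the anisotropy is close to 1, matching the interpretation of the eigenvectors and eigenvalues given in the abstract.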

    Extracting and depicting the 3D shape of specular surfaces

    Many materials, including water, plastic, and metal, have specular surface characteristics. Specular reflections have commonly been considered a nuisance for the recovery of object shape. However, the way that reflections are distorted across the surface depends crucially on 3D curvature, suggesting that they could in fact be a useful source of information. Indeed, observers can have a vivid impression of 3D shape when an object is perfectly mirrored (i.e. the image contains nothing but specular reflections). This raises the question of what mechanisms our visual system uses to extract this 3D shape information from a perfectly mirrored object. In this paper we propose a biologically motivated recurrent model for the extraction of visual features relevant for the perception of 3D shape from images of mirrored objects. We analyze the results of computational model simulations qualitatively and quantitatively, and show that bidirectional recurrent information processing leads to better results than pure feedforward processing. Furthermore, we utilize the model output to create a rough non-photorealistic sketch representation of a mirrored object, which emphasizes image features that are essential for 3D shape perception (e.g. the occluding contour and regions of high curvature). Moreover, this sketch illustrates that the model generates a representation of object features independent of the surrounding scene reflected in the mirrored object.