
    Towards a more secure border control with 3D face recognition

    Biometric data have been integrated into all ICAO-compliant passports since the ICAO members started to implement the ePassport standard. The additional use of three-dimensional models promises significant performance enhancements for border control points. By combining the geometry- and texture-channel information of the face, 3D face recognition systems show improved robustness to pose variations and to problematic lighting conditions at the time the photo is taken. This holds even in a hybrid scenario, in which a 3D face scan is compared to a 2D reference image. To assess the potential of three-dimensional face recognition, the 3D Face project was initiated. This paper outlines the approach and research results of this project: the objective was not only to increase the recognition rate but also to develop a new, fake-resistant capture device. In addition, methods for protecting the biometric template were researched, and the second generation of the international standard ISO/IEC 19794-5:2011 was inspired by the project results.
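    As a rough illustration of the hybrid matching idea described above, the sketch below fuses a texture-channel score and a geometry-channel score into a single match score. The feature vectors, the cosine-similarity matcher, the fusion weight, and the decision threshold are all assumptions for illustration; the abstract does not specify the 3D Face project's actual matcher.

    ```python
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two feature vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def fused_match_score(probe_texture: np.ndarray,
                          probe_geometry: np.ndarray,
                          ref_texture: np.ndarray,
                          ref_geometry: np.ndarray,
                          w_texture: float = 0.5) -> float:
        """Weighted sum of texture-channel and geometry-channel scores.

        The feature extractors and the fusion weight are placeholders;
        this is only a sketch of score-level fusion, not the project's matcher.
        """
        s_tex = cosine_similarity(probe_texture, ref_texture)
        s_geo = cosine_similarity(probe_geometry, ref_geometry)
        return w_texture * s_tex + (1.0 - w_texture) * s_geo

    # Usage example with synthetic feature vectors: accept if the fused
    # score exceeds a hypothetical operating threshold.
    rng = np.random.default_rng(0)
    probe_t, probe_g = rng.normal(size=128), rng.normal(size=128)
    ref_t = probe_t + 0.1 * rng.normal(size=128)
    ref_g = probe_g + 0.1 * rng.normal(size=128)
    print(fused_match_score(probe_t, probe_g, ref_t, ref_g) > 0.8)
    ```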

    View-Invariant Object Category Learning, Recognition, and Search: How Spatial and Object Attention Are Coordinated Using Surface-Based Attentional Shrouds

    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    Diverting Attention Suppresses Human Amygdala Responses to Faces

    Recent neuroimaging studies disagree as to whether the processing of emotion-laden visual stimuli is dependent upon the availability of attentional resources or entirely capacity-free. Two main factors have been proposed to be responsible for the discrepancies: the differences in the perceptual attentional demands of the tasks used to divert attentional resources from emotional stimuli, and the spatial location of the affective stimuli in the visual field. To date, no neuroimaging report has addressed these two issues in the same set of subjects. Therefore, the aim of the study was to investigate the effects of high and low attentional load, as well as different stimulus locations, on face processing in the amygdala using functional magnetic resonance imaging, in order to provide further evidence for one of the two opposing theories. We were able for the first time to directly test the interaction of attentional load and spatial location. The results revealed a strong attenuation of amygdala activity when the attentional load was high. The eccentricity of the emotional stimuli did not affect responses in the amygdala, and no interaction effect between attentional load and spatial location was found. We conclude that the processing of emotional stimuli in the amygdala is strongly dependent on the availability of attentional resources, without preferred processing of stimuli presented in the periphery, and provide firm evidence for the attentional load theory of emotional processing in the amygdala.

    Texture Segregation By Visual Cortex: Perceptual Grouping, Attention, and Learning

    A neural model is proposed of how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model brings together five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes such as discontinuities in orientation and texture flow curvature, as well as to relative orientations of texture elements. The model quantitatively fits a large set of human psychophysical data on orientation-based textures. Object boundary output of the model is compared to computer vision algorithms using a set of human-segmented photographic images. The model classifies textures and suppresses noise using a multiple-scale oriented filterbank and a distributed Adaptive Resonance Theory (dART) classifier. The matched signal between the bottom-up texture inputs and top-down learned texture categories is utilized by oriented competitive and cooperative grouping processes to generate texture boundaries that control surface filling-in and spatial attention. Top-down modulatory attentional feedback from boundary and surface representations to early filtering stages results in enhanced texture boundaries and more efficient learning of texture within attended surface regions. Surface-based attention also provides a self-supervising training signal for learning new textures. The importance of surface-based attentional feedback in texture learning and classification is tested using a set of textured images from the Brodatz micro-texture album. Benchmark classification rates range from 95.1% to 98.6% with attention, and from 90.6% to 93.2% without attention.
    Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
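    A minimal sketch of the front end described above: a multi-scale, multi-orientation (Gabor-style) filterbank whose rectified responses serve as texture features. The dART classifier, boundary grouping, and filling-in stages are not reproduced here; a nearest-centroid lookup stands in for the learned texture categories, and the kernel parameters are illustrative assumptions rather than values from the paper.

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(freq, theta, sigma=3.0, size=15):
        """Real part of a Gabor kernel at a given spatial frequency and orientation."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
        return envelope * np.cos(2.0 * np.pi * freq * xr)

    def texture_features(image, freqs=(0.1, 0.2, 0.4), n_orient=4):
        """Mean rectified response per (scale, orientation) channel."""
        feats = []
        for f in freqs:
            for k in range(n_orient):
                theta = k * np.pi / n_orient
                response = convolve2d(image, gabor_kernel(f, theta), mode="same")
                feats.append(np.abs(response).mean())
        return np.asarray(feats)

    def nearest_texture_category(feat, centroids):
        """Stand-in classifier: index of the closest learned texture prototype
        (the paper uses a distributed ART classifier instead)."""
        return int(np.argmin(np.linalg.norm(centroids - feat, axis=1)))
    ```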

    Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades

    Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labelling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
    Comment: 10 pages, 6 figures in Frontiers in Neuromorphic Engineering, special topic on Benchmarks and Challenges for Neuromorphic Engineering, 2015 (under review)
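    A software-only sketch of the conversion idea, assuming an idealized event-camera model: the static image is translated to imitate micro-saccades, and ON/OFF events are emitted wherever a pixel's log intensity changes by more than a threshold. The datasets in the paper were recorded with a real sensor on an actuated pan-tilt platform, so this stand-in only illustrates the principle; the threshold, timing, and reference-reset rule are assumptions.

    ```python
    import numpy as np

    def image_to_events(image, shifts, threshold=0.15, dt=1000):
        """Approximate DVS-style events from a static image via simulated saccades.

        image     : 2-D grayscale array scaled to [0, 1]
        shifts    : sequence of (dy, dx) integer translations imitating saccades
        threshold : log-intensity change needed to emit an event (assumed value)
        dt        : microseconds between successive saccade steps (arbitrary)

        Returns a list of (x, y, timestamp_us, polarity) tuples.
        """
        eps = 1e-3
        log_ref = np.log(image + eps)          # per-pixel reference level
        events = []
        for step, (dy, dx) in enumerate(shifts, start=1):
            moved = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            diff = np.log(moved + eps) - log_ref
            ys, xs = np.nonzero(np.abs(diff) >= threshold)
            for y, x in zip(ys, xs):
                events.append((x, y, step * dt, 1 if diff[y, x] > 0 else 0))
                # Reset the reference at pixels that fired, as an event sensor would.
                log_ref[y, x] = np.log(moved[y, x] + eps)
        return events

    # Usage example: three small shifts roughly mimicking a saccade triad.
    img = np.random.default_rng(1).random((28, 28))
    evts = image_to_events(img, shifts=[(1, 0), (0, 1), (-1, -1)])
    print(len(evts), "events")
    ```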

    Modulation of Brain Activity by the Integration of Color into Dorsal Stream Object Files

    Two superimposed surfaces of dots are perceived as separate objects when they rotate in two different directions. When one surface is cued, the attentional ERP components of the unattended surface are suppressed more strongly relative to the attended surface when two objects are perceived than when one object is perceived. We hypothesized that the strength of object-based attention depends on the differentiation of the two object representations. We tested this hypothesis by determining whether two oppositely rotating superimposed surfaces of differing colors would produce a greater cueing effect than two surfaces of the same color. This additional color feature would allow for object files with stronger neural representations, leading to greater suppression of the uncued surface in the task. We found a greater cueing effect in the bicolored condition compared to the unicolored condition, both behaviorally and in event-related potentials.

    How Does the Cerebral Cortex Work? Development, Learning, Attention, and 3D Vision by Laminar Circuits of Visual Cortex

    A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. Here it is proposed how these layered circuits help to realize the processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work.
    Air Force Office of Scientific Research (F49620-01-1-0397); Office of Naval Research (N00014-01-1-0624)