3,552 research outputs found

    From 3D Point Clouds to Pose-Normalised Depth Maps

    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering, (ii) nose tip identification and sub-vertex localisation, (iii) computation of the (relative) face orientation, (iv) generation of either a pose-aligned or a pose-normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface, and this model is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher-quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
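    The rotational-alignment idea in stage (iii) above can be sketched as a 1D circular cross-correlation: a curvature-like signal sampled around a closed contour is matched against a rotated copy, and the peak of the correlation gives the angular offset. The following toy example is not the authors' code; the signal and function names are hypothetical, and the FFT-based correlation is a standard way to realise the "simple process of 1D correlation" the abstract describes.

```python
import numpy as np

def best_rotation_offset(signal_a, signal_b):
    """Return the circular shift (in samples) that best aligns
    signal_a with signal_b, via FFT-based circular cross-correlation."""
    fa = np.fft.fft(signal_a)
    fb = np.fft.fft(signal_b)
    corr = np.fft.ifft(fa * np.conj(fb)).real
    return int(np.argmax(corr))

# Hypothetical isoradius-style signal: curvature sampled at 360 angular
# positions around a contour (stand-in for a real facial-surface signal).
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
sig = np.sin(3 * theta) + 0.5 * np.cos(7 * theta)
rotated = np.roll(sig, 45)  # simulate a 45-sample (45-degree) rotation
print(best_rotation_offset(rotated, sig))  # recovers the shift: 45
```

    Because the correlation is computed in the Fourier domain, alignment costs O(N log N) per contour rather than O(N^2) for an exhaustive shift search.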

    Using the 3D shape of the nose for biometric authentication


    Visual pursuit behavior in mice maintains the pursued prey on the retinal region with least optic flow

    Mice have a large visual field that is constantly stabilized by vestibular ocular reflex (VOR) driven eye rotations that counter head-rotations. While maintaining their extensive visual coverage is advantageous for predator detection, mice also track and capture prey using vision. However, in the freely moving animal, quantifying object location in the field of view is challenging. Here, we developed a method to digitally reconstruct and quantify the visual scene of freely moving mice performing a visually based prey capture task. By isolating the visual sense and combining a mouse eye optic model with the head and eye rotations, the detailed reconstruction of the digital environment and retinal features were projected onto the corneal surface for comparison, and updated throughout the behavior. By quantifying the spatial location of objects in the visual scene and their motion throughout the behavior, we show that the prey image consistently falls within a small area of the VOR-stabilized visual field. This functional focus coincides with the region of minimal optic flow within the visual field and consequently the area of minimal motion-induced image blur, as during pursuit mice ran directly toward the prey. The functional focus lies in the upper-temporal part of the retina and coincides with the reported high-density region of Alpha-ON sustained retinal ganglion cells.

    Mice have a lot to keep an eye on. To survive, they need to dodge predators looming on land and from the skies, while also hunting down the small insects that are part of their diet. To do this, they are helped by their large panoramic field of vision, which stretches from behind and over their heads to below their snouts. To stabilize their gaze when they are on the prowl, mice reflexively move their eyes to counter the movement of their head: in fact, they are unable to move their eyes independently.
    This raises the question: what part of their large visual field of view do these rodents use when tracking a prey, and to what advantage? This is difficult to investigate, since it requires simultaneously measuring the eye and head movements of mice as they chase and capture insects. In response, Holmgren, Stahr et al. developed a new technique to record the precise eye positions, head rotations and prey location of mice hunting crickets in surroundings that were fully digitized at high resolution. Combining this information allowed the team to mathematically recreate what mice would see as they chased the insects, and to assess what part of their large visual field they were using. This revealed that, once a cricket had entered any part of the mice's large field of view, the rodents shifted their head - but not their eyes - to bring the prey into both eye views, and then ran directly at it. If the insect escaped, the mice repeated that behavior. During the pursuit, the cricket's position was mainly held in a small area of the mouse's view that corresponds to a specialized region in the eye which is thought to help track objects. This region also allowed the least motion-induced image blur when the animals were running forward. The approach developed by Holmgren, Stahr et al. gives a direct insight into what animals see when they hunt, and how this constantly changing view ties to what happens in the eyes. This method could be applied to other species, ushering in a new wave of tools to explore what freely moving animals see, and the relationship between behaviour and neural circuitry.

    Data Parallel Line Relaxation (DPLR) Code User Manual: Acadia - Version 4.01.1

    The Data-Parallel Line Relaxation (DPLR) code is a computational fluid dynamics (CFD) solver that was developed at NASA Ames Research Center to help mission support teams generate high-value predictive solutions for hypersonic flow-field problems. The DPLR Code Package is an MPI-based, parallel, full three-dimensional Navier-Stokes CFD solver with generalized models for finite-rate reaction kinetics, thermal and chemical non-equilibrium, accurate high-temperature transport coefficients, and ionized flow physics incorporated into the code. DPLR also includes a large selection of generalized, realistic surface boundary conditions and links to enable loose coupling with external thermal protection system (TPS) material response and shock-layer radiation codes.

    Freely-moving mice visually pursue prey using a retinal area with least optic flow

    Mice have a large visual field that is constantly stabilized by vestibular ocular reflex driven eye rotations that counter head-rotations. While maintaining their extensive visual coverage is advantageous for predator detection, mice also track and capture prey using vision. However, in the freely moving animal, quantifying object location in the field of view is challenging. Here, we developed a method to digitally reconstruct and quantify the visual scene of freely moving mice performing a visually based prey capture task. By isolating the visual sense and combining a mouse eye optic model with the head and eye rotations, the detailed reconstruction of the digital environment and retinal features were projected onto the corneal surface for comparison, and updated throughout the behavior. By quantifying the spatial location of objects in the visual scene and their motion throughout the behavior, we show that the image of the prey is maintained within a small area, the functional focus, in the upper-temporal part of the retina. This functional focus coincides with a region of minimal optic flow in the visual field and consequently minimal motion-induced image blur during pursuit, as well as the reported high-density region of Alpha-ON sustained retinal ganglion cells.
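    The core of the reconstruction described above is composing the eye-in-head rotation with the head-in-world rotation to recover where the eye's optical axis points in the world. The sketch below is not the authors' pipeline; it is a minimal illustration of that rotation composition, restricted to yaw for clarity, with all angles and names hypothetical.

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the vertical (yaw) axis, angle in degrees."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical example: the eye's optical axis in eye coordinates is
# rotated first by the eye-in-head rotation, then by the head-in-world
# rotation, giving the gaze direction in world coordinates.
optical_axis_eye = np.array([1.0, 0.0, 0.0])  # "forward" in eye coordinates
eye_in_head = rot_z(-10.0)   # eye counter-rotated 10 deg, VOR-like
head_in_world = rot_z(30.0)  # head yawed 30 deg in the arena
gaze_world = head_in_world @ eye_in_head @ optical_axis_eye
yaw = np.degrees(np.arctan2(gaze_world[1], gaze_world[0]))  # net yaw, about 20 deg
```

    The same composition, extended to full 3D rotations and applied frame by frame, is what lets tracked head and eye angles be turned into a per-frame gaze direction against a digitized environment.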

    Surface representations for 3D face recognition


    3D and Thermo-Face Fusion


    Processing and Recognising Faces in 3D Images
