    The ear as a biometric

    It is more than 10 years since the first tentative experiments in ear biometrics were conducted, and the field has now reached the “adolescence” of its development towards a mature biometric. Here we present a timely retrospective of the ensuing research since those early days. Whilst its detailed structure may not be as complex as that of the iris, we show that the ear has unique security advantages over other biometrics. It is most unusual, even unique, in that it supports not only visual and forensic recognition but also acoustic recognition at the same time. This, together with its deep three-dimensional structure and its robust resistance to change with age, will make it very difficult to counterfeit, thus ensuring that the ear will occupy a special place in situations requiring a high degree of protection.

    The effect of time on ear biometrics

    We present an experimental study to demonstrate the effect of the time difference in image acquisition between gallery and probe on the performance of ear recognition. This is the first experimental study of the effect of time on ear biometrics. For the purpose of recognition, we convolve banana wavelets with an ear image and then apply the local binary pattern operator to the convolved image. The histograms of the resulting image are used as features to describe an ear, and a histogram intersection technique is applied to the histograms of two ears to measure their similarity for recognition. We also use analysis of variance (ANOVA) for feature selection, to identify the best banana wavelets for the recognition process. The experimental results show that the recognition rate is only slightly reduced over time: an average recognition rate of 98.5% is achieved with an eleven-month difference between gallery and probe on an un-occluded ear dataset of 1491 ear images selected from the Southampton University ear database.
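
    A concrete sketch of the matching step may help. Once each ear is described by a histogram (here, of local binary pattern codes computed on the wavelet-convolved image), similarity reduces to histogram intersection. The Python below is a minimal illustration assuming non-negative NumPy histograms; the names and toy data are placeholders, not the authors' implementation.

        import numpy as np

        def histogram_intersection(h1, h2):
            # Intersection of two non-negative histograms, normalised so the
            # score lies in [0, 1]; 1 means identical distributions.
            h1 = h1 / h1.sum()
            h2 = h2 / h2.sum()
            return float(np.minimum(h1, h2).sum())

        # Toy usage: compare two 256-bin LBP histograms (8-neighbour codes).
        gallery_hist = np.random.rand(256)  # placeholder for a real LBP histogram
        probe_hist = np.random.rand(256)
        score = histogram_intersection(gallery_hist, probe_hist)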

    Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials

    Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this paper, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
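
    The 2D algorithm being transferred is classical error diffusion. For reference, here is a minimal Floyd-Steinberg sketch in Python, assuming a grayscale image with values in [0, 1]; the paper's contribution is a traversal order that lets this style of algorithm run over voxel isosurfaces rather than raster scanlines.

        import numpy as np

        def floyd_steinberg(gray, levels=2):
            # Quantise each pixel to the nearest of `levels` output values and
            # push the quantisation error onto the unvisited neighbours.
            img = gray.astype(np.float64).copy()
            out = np.zeros_like(img)
            h, w = img.shape
            for y in range(h):
                for x in range(w):
                    old = img[y, x]
                    new = np.round(old * (levels - 1)) / (levels - 1)
                    out[y, x] = new
                    err = old - new
                    if x + 1 < w:
                        img[y, x + 1] += err * 7 / 16
                    if y + 1 < h:
                        if x > 0:
                            img[y + 1, x - 1] += err * 3 / 16
                        img[y + 1, x] += err * 5 / 16
                        if x + 1 < w:
                            img[y + 1, x + 1] += err * 1 / 16
            return out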

    From 3D Point Clouds to Pose-Normalised Depth Maps

    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering; (ii) nose tip identification and sub-vertex localisation; (iii) computation of the (relative) face orientation; (iv) generation of either a pose-aligned or a pose-normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface, and this is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher-quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
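
    Stage (iii) reduces 3D rotational alignment to 1D signal correlation. Here is a minimal sketch of that idea, assuming each face yields a curvature signal sampled at equal angular steps around an isoradius contour; the names and toy data are illustrative, not the authors' code.

        import numpy as np

        def circular_align(sig_a, sig_b):
            # Circular cross-correlation via the FFT; returns the shift m
            # (in samples, modulo the signal length) such that
            # sig_a is approximately np.roll(sig_b, m).
            a = sig_a - sig_a.mean()
            b = sig_b - sig_b.mean()
            xcorr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
            return int(np.argmax(xcorr))

        # Toy usage: a synthetic periodic curvature signal rotated by 40 samples.
        theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        base = np.cos(3 * theta) + 0.5 * np.sin(5 * theta)
        m = circular_align(base, np.roll(base, 40))  # 320, i.e. -40 mod 360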

    A compact structured light based otoscope for three dimensional imaging of the tympanic membrane

    Three-dimensional (3D) imaging of the tympanic membrane (TM) has been carried out using a traditional otoscope equipped with a high-definition webcam, a portable projector and a telecentric optical system. The device allows us to project fringe patterns onto the TM, and the magnified image is processed using phase-shifting algorithms to arrive at a 3D description of the TM. Obtaining a 3D image of the TM can aid in the diagnosis of ear infections such as otitis media with effusion, which is essentially fluid build-up in the middle ear. The high resolution of this device makes it possible to examine a computer-generated 3D profile for abnormalities in the shape of the eardrum. This adds an additional dimension to the image that can be obtained from a traditional otoscope, by allowing visualization of the TM from different perspectives. In this paper, we present the design and construction of this device and details of the image processing for recovering the 3D profile of the subject under test. The design of the otoscope is similar to that of the traditional device, making it ergonomically compatible and easy to adopt in clinical practice.
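
    The phase-shifting step can be stated compactly. A common choice (assumed here for illustration; the abstract does not pin down the exact variant used) is four fringe images with 90-degree phase shifts, from which the wrapped phase follows in closed form:

        import numpy as np

        def four_step_phase(i1, i2, i3, i4):
            # Four-step phase-shifting: with fringe images
            # I_n = A + B*cos(phi + (n-1)*pi/2), the wrapped phase is
            # phi = atan2(I4 - I2, I1 - I3), in (-pi, pi].
            return np.arctan2(i4 - i2, i1 - i3)

    The result is a wrapped phase map; it still needs spatial phase unwrapping and a phase-to-height calibration before it becomes the 3D profile of the TM.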

    Inference of the Cold Dark Matter substructure mass function at z=0.2 using strong gravitational lenses

    We present the results of a search for galaxy substructures in a sample of 11 gravitational lens galaxies from the Sloan Lens ACS Survey. We find no significant detection of mass clumps, except for a luminous satellite in the system SDSS J0956+5110. We use these non-detections, in combination with a previous detection in the system SDSS J0946+1006, to derive constraints on the substructure mass function in massive early-type host galaxies with an average redshift z ~ 0.2 and an average velocity dispersion of 270 km/s. We perform a Bayesian inference on the substructure mass function, within a median region of about 32 kpc$^2$ around the Einstein radius (~4.2 kpc). We infer a mean projected substructure mass fraction $f = 0.0076^{+0.0208}_{-0.0052}$ at the 68 percent confidence level and a substructure mass function slope $\alpha < 2.93$ at the 95 percent confidence level for a uniform prior probability density on $\alpha$. For a Gaussian prior based on Cold Dark Matter (CDM) simulations, we infer $f = 0.0064^{+0.0080}_{-0.0042}$ and a slope of $\alpha = 1.90^{+0.098}_{-0.098}$ at the 68 percent confidence level. Since only one substructure was detected in the full sample, we have little information on the mass function slope, which is therefore poorly constrained (i.e. the Bayes factor shows no positive preference for either of the two models). The inferred fraction is consistent with the expectations from CDM simulations and with inference from flux-ratio anomalies at the 68 percent confidence level.
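
    For orientation, the quantity being constrained is conventionally parameterised as a power law in substructure mass; a standard form (shown here for illustration, not necessarily the paper's exact likelihood model) is

        \frac{dN}{dm} \propto m^{-\alpha},
        \qquad
        f = \frac{\sum_i m_i}{M(<R)},

    where $\alpha$ is the slope reported above and $f$ is the projected substructure mass fraction within the considered region around the Einstein radius.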

    Deep learning for 3D ear detection: A complete pipeline from data generation to segmentation

    The human ear has distinguishing features that can be used for identification. Automated ear detection from 3D profile face images plays a vital role in ear-based human recognition. This work proposes a complete pipeline, including synthetic data generation and ground-truth data labeling, for ear detection in 3D point clouds. The ear detection problem is formulated as a semantic part segmentation problem that detects the ear directly in 3D point clouds of profile face data. We introduce EarNet, a modified version of the PointNet++ architecture, and apply rotation augmentation to handle different pose variations in the real data; we demonstrate that PointNet and PointNet++ cannot manage the rotation of a given object without such augmentation. The synthetic 3D profile face data is generated using statistical shape models. In addition, an automatic tool has been developed, and made publicly available, to create ground-truth labels for any public 3D data set that includes co-registered 2D images. The experimental results on the real data demonstrate higher localization accuracy compared with existing state-of-the-art approaches.
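
    The rotation augmentation referred to above is simple to sketch. Below is a minimal Python example, assuming an (N, 3) NumPy point cloud and rotation about a single axis; EarNet itself and its training setup are not reproduced here.

        import numpy as np

        def random_rotation_augment(points, max_deg=180.0):
            # Rotate an (N, 3) point cloud by a random angle about the y axis.
            # Augmentation of this kind is what lets PointNet-style networks
            # cope with pose variation they do not handle intrinsically.
            angle = np.deg2rad(np.random.uniform(-max_deg, max_deg))
            c, s = np.cos(angle), np.sin(angle)
            rot = np.array([[c, 0.0, s],
                            [0.0, 1.0, 0.0],
                            [-s, 0.0, c]])
            return points @ rot.T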

    Ear Contour Detection and Modeling Using Statistical Shape Models

    Ear detection is an actively growing area of research because of its applications in human head tracking and biometric recognition. In head tracking, it is used to augment face detectors and to perform pose estimation. In biometric systems, it is used both as an independent modality and in multi-modal biometric recognition. The ear shape is the preferred feature used to perform detection because of its unique structure in both 2D color images and 3D range images. Ear shape models have also been used in the literature to perform ear detection, but at the cost of losing information about the exact ear structure. In this thesis, we seek to address these issues in existing methods through a combination of techniques, including Viola-Jones Haar cascades, Active Shape Models (ASM) and Dijkstra's shortest path algorithm, to devise a shape model of the ear using geometric parameters and to mark an accurate contour around the ear using only 2D color images. The Viola-Jones Haar cascade classifier is used to mark a rectangular region around the ear in a left side profile image. Then a set of key landmark points around the ear, including the outer helix, the anti-helix and the ear center, is extracted using the ASM. This set of landmarks is then fed into Dijkstra's shortest path algorithm, which traces out the strongest edge between adjacent landmarks to extract the entire outer ear contour while maintaining high computational efficiency.
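
    The final step, tracing the strongest edge between adjacent ASM landmarks, is a shortest-path problem. Here is a minimal sketch of that idea in Python, using Dijkstra's algorithm on a pixel grid with step costs inversely proportional to edge strength; the function and variable names are illustrative, not taken from the thesis.

        import heapq
        import numpy as np

        def strongest_edge_path(edge_strength, start, goal):
            # Dijkstra on a 4-connected pixel grid. Entering a pixel costs
            # 1 / (edge strength), so the cheapest path from one landmark
            # (start) to the next (goal) hugs the strongest edge.
            h, w = edge_strength.shape
            cost = 1.0 / (edge_strength + 1e-6)
            dist = np.full((h, w), np.inf)
            prev = {}
            dist[start] = 0.0
            pq = [(0.0, start)]
            while pq:
                d, (r, c) = heapq.heappop(pq)
                if (r, c) == goal:
                    break
                if d > dist[r, c]:
                    continue
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                        dist[nr, nc] = d + cost[nr, nc]
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
            # Walk back from the goal landmark to recover the contour segment.
            path, node = [goal], goal
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]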

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Imaging (MRI) as source data. To facilitate the study of vascular development in the neonatal period, a set of image analysis algorithms has been developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate the study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not handled effectively by existing algorithms designed for the adult brain, because the contrast between grey and white matter is reversed; this causes voxels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates, and the performance of the method is investigated through a detailed landmark study. To facilitate the study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces has been developed. The method first inflates the extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
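
    The heart of the segmentation step is expectation-maximization on an intensity mixture model. A minimal two-class sketch is shown below, assuming a 1D Gaussian mixture over voxel intensities; the dissertation's method additionally corrects mislabelled partial volume voxels, which this toy version omits.

        import numpy as np

        def em_two_class(intensities, n_iter=50):
            # Toy EM for a two-class (e.g. grey/white matter) 1D Gaussian
            # mixture over voxel intensities.
            x = np.asarray(intensities, dtype=np.float64).ravel()
            mu = np.percentile(x, [25, 75])      # crude initial class means
            var = np.array([x.var(), x.var()])
            pi = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: posterior responsibility of each class per voxel.
                lik = np.stack([
                    pi[k] / np.sqrt(2 * np.pi * var[k])
                    * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)
                ])
                resp = lik / lik.sum(axis=0, keepdims=True)
                # M-step: re-estimate means, variances and mixing weights.
                for k in range(2):
                    w = resp[k]
                    mu[k] = (w * x).sum() / w.sum()
                    var[k] = (w * (x - mu[k]) ** 2).sum() / w.sum()
                    pi[k] = w.mean()
            return mu, var, pi, resp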