
    The effect of time on ear biometrics

    We present an experimental study of the effect of the time difference between gallery and probe image acquisition on the performance of ear recognition. This is the first experimental study of the effect of time on ear biometrics. For recognition, we convolve banana wavelets with an ear image and then apply the local binary pattern operator to the convolved image. The histograms of the resulting image are used as features to describe an ear, and a histogram intersection technique is applied to the histograms of two ears to measure their similarity for recognition. We also use analysis of variance (ANOVA) for feature selection, identifying the best banana wavelets for the recognition process. The experimental results show that the recognition rate is only slightly reduced by time. An average recognition rate of 98.5% is achieved for an eleven-month difference between gallery and probe on an unoccluded ear dataset of 1491 ear images selected from the Southampton University ear database.
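The matching step described above, comparing local binary pattern histograms by histogram intersection, can be sketched as follows. This is a minimal illustration assuming a basic 8-neighbour LBP and normalised histograms; the paper's exact LBP variant and parameters are not specified here.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Histogram of 8-neighbour local binary pattern codes (a simplified
    LBP for illustration, not necessarily the paper's exact variant)."""
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    # each of the 8 neighbours contributes one bit of the LBP code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()  # normalise so histograms are comparable

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: sum of bin-wise minima of two normalised histograms."""
    return float(np.minimum(h1, h2).sum())
```

Identical ears give an intersection of 1.0; unrelated texture patterns score lower, which is what makes the measure usable as a recognition score.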

    Biometric security: A novel ear recognition approach using a 3D morphable ear model

    Biometrics is a critical component of cybersecurity that identifies persons by verifying their behavioral and physical traits. In biometric-based authentication, each individual can be correctly recognized from intrinsic behavioral or physical features such as the face, fingerprint, iris, and ears. This work proposes a novel approach to human identification using 3D ear images. In conventional methods, the probe image is usually registered against each gallery image using computationally heavy registration algorithms, making recognition practically infeasible because the process is so time-consuming. This work therefore proposes a recognition pipeline that avoids one-to-one registration between probe and gallery. First, a deep learning-based algorithm detects ears in 3D side-face images. Second, a statistical ear model, the 3D morphable ear model (3DMEM), is constructed and used as a feature extractor for the detected ear images. Finally, a novel recognition algorithm named you morph once (YMO) is proposed for human recognition; it reduces computational time by eliminating one-to-one registration between probe and gallery, calculating only the distance between the parameters stored for the gallery and those of the probe. The experimental results show the significance of the proposed method for real-time application.
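The core idea, matching by distance in morphable-model parameter space rather than by registering raw 3D scans, can be sketched as below. The function names and the PCA-style projection are illustrative assumptions, not the paper's actual API or the exact YMO algorithm.

```python
import numpy as np

def fit_ear_params(scan, mean_shape, basis):
    """Project a flattened ear scan onto an orthonormal morphable-model
    basis, yielding a compact parameter vector (illustrative stand-in for
    3DMEM fitting)."""
    return basis @ (np.asarray(scan, float) - mean_shape)

def ymo_match(probe_params, gallery_params):
    """Return (index, distance) of the nearest gallery parameter vector.
    Matching is a distance in parameter space, so no per-pair registration
    of 3D scans is needed at recognition time."""
    dists = np.linalg.norm(gallery_params - probe_params, axis=1)
    i = int(np.argmin(dists))
    return i, float(dists[i])
```

Because the gallery stores only short parameter vectors, each comparison is a cheap vector distance, which is what makes the pipeline real-time friendly.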

    The image ray transform

    Image feature extraction is a fundamental area of image processing and computer vision. There are many ways to create feature extraction techniques, and particularly novel ones can be developed by taking inspiration from the physical world. This thesis presents the Image Ray Transform (IRT), a technique based on an analogy to light: it uses the mechanisms that define how light travels through different media, and an analogy to optical fibres, to extract structural features from an image. By treating the image as a transparent medium, we can use refraction and reflection to cast many rays inside the image and guide them towards features, transforming the image to emphasise tubular and circular structures. The power of the transform for structural feature detection is shown empirically in a number of applications, especially through its ability to highlight curvilinear structures. The IRT is used as a preprocessor to enhance the accuracy of circle detection, highlighting circles to a greater extent than conventional edge detection methods. The transform is also shown to be well suited to enrolment for ear biometrics, providing high detection and recognition rates with PCA, comparable to manual enrolment. Vascular features such as those found in medical images are also emphasised by the transform, and the IRT is used to detect the vasculature in retinal fundus images. Extensions to the basic image ray transform allow higher-level features to be detected. A method is shown for expressing rays in an invariant form to describe the structures of an object, and hence the object itself, with a bag-of-visual-words model. These ray features provide a description of objects complementary to other patch-based descriptors and have been tested on a number of object categorisation databases.
    Finally, a different analysis of rays is presented that can produce information on both bilateral (reflectional) and rotational symmetry within the image, allowing a deeper understanding of image structure. The IRT is a flexible technique, capable of detecting a range of high- and low-level image features, and open to further use and extension across a range of applications.
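The optical mechanism the IRT borrows, refraction by Snell's law with total internal reflection at media boundaries, can be sketched for a single ray step. This is a generic 2D optics sketch under the assumption that pixel intensity maps to refractive index; it is not the thesis's full transform.

```python
import numpy as np

def refract_or_reflect(direction, normal, n1, n2):
    """One ray step at a media boundary: refract by Snell's law, or totally
    internally reflect when no refracted ray exists. In an IRT-style setting,
    total internal reflection is what traps rays inside high-index
    (bright) curvilinear structures. Vectors are 2D; n1 is the current
    medium's index, n2 the next medium's."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -d @ n
    if cos_i < 0:                 # make the normal face the incoming ray
        n, cos_i = -n, -cos_i
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:                   # total internal reflection
        return d + 2.0 * cos_i * n
    return r * d + (r * cos_i - np.sqrt(k)) * n   # refracted direction
```

Iterating this step across pixel boundaries, and accumulating how often each pixel is crossed, yields the kind of structure-emphasising map the abstract describes.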

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from design and development to qualification and final application. The major systems discussed include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering biometric management policies, reliability measures, pressure-based typing and signature verification, biochemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems, together with state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Shaped Wavelets for Curvilinear Structures for Ear Biometrics

    One of the most recent trends in biometrics is recognition by ear appearance in head profile images. Determining the region of interest which contains the ear is an important step in an ear biometric system. To this end, we propose a robust, simple and effective method for ear detection from profile images by employing a bank of curved and stretched Gabor wavelets, known as banana wavelets. A 100% detection rate is achieved here on a group of 252 profile images from the XM2VTS database. The banana wavelets technique demonstrates better performance than the Gabor wavelets technique, indicating that the curved wavelets are advantageous here. The banana wavelet technique is also applied to a new and more challenging database which highlights the practical considerations of a more realistic deployment. This ear detection technique is fully automated, has encouraging performance and appears to be robust to degradation by noise.
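A banana wavelet is essentially a Gabor kernel whose carrier axis is bent by a curvature term, so it responds to curved rather than straight ridges. The sketch below shows one way to build such a kernel; the parameter names and the exact parameterisation are illustrative assumptions, not the paper's.

```python
import numpy as np

def banana_kernel(size=21, freq=0.4, theta=0.0, curvature=0.05,
                  sigma=4.0, aspect=0.5):
    """Curved (banana) Gabor kernel: a Gabor whose carrier axis is bent by a
    quadratic curvature term. With curvature=0 this reduces to an ordinary
    Gabor; increasing `curvature` tunes the filter to more strongly curved
    structures such as the ear's outer helix."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate into the wavelet's own frame
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    xc = xr + curvature * yr ** 2                  # bend the carrier axis
    envelope = np.exp(-(xc ** 2 + (aspect * yr) ** 2) / (2.0 * sigma ** 2))
    kernel = envelope * np.cos(freq * xc)
    return kernel - kernel.mean()  # zero DC: flat image regions give no response
```

A detection bank would instantiate this at several rotations, frequencies and curvatures and keep the locations where the filter responses peak.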

    AutoGraff: towards a computational understanding of graffiti writing and related art forms.

    The aim of this thesis is to develop a system that generates letters and pictures with a style that is immediately recognizable as graffiti art or calligraphy. The proposed system can be used similarly to, and in tight integration with, conventional computer-aided geometric design tools; it can generate synthetic graffiti content for urban environments in games and movies, and can guide robotic or fabrication systems that materialise the system's output with physical drawing media. The thesis is divided into two main parts. The first part describes a set of stroke primitives: building blocks that can be combined to generate different designs resembling graffiti or calligraphy. These primitives mimic the process typically used to design graffiti letters and exploit well-known principles of motor control to model the way an artist moves when incrementally tracing stylised letter forms. The second part demonstrates how these stroke primitives can be automatically recovered from input geometry defined in vector form, such as the digitised traces of writing made by a user or the glyph outlines in a font. This procedure converts the input geometry into a seed that can be transformed into a variety of calligraphic and graffiti stylisations, which depend on parametric variations of the strokes.
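One classic motor-control principle of the kind the abstract alludes to is the minimum-jerk movement, whose bell-shaped speed profile is characteristic of human drawing strokes. The sketch below is a strong simplification, a single straight minimum-jerk stroke, offered only to make the motor-control idea concrete; the thesis's primitives are richer than this.

```python
import numpy as np

def minimum_jerk_stroke(p0, p1, n=50):
    """A single stroke as a straight minimum-jerk movement from p0 to p1.
    Position follows the classic 10t^3 - 15t^4 + 6t^5 profile, which gives
    zero velocity/acceleration at both ends and a bell-shaped speed curve,
    the signature of smooth human drawing movements."""
    t = np.linspace(0.0, 1.0, n)
    s = 10 * t**3 - 15 * t**4 + 6 * t**5   # smooth 0 -> 1 interpolation
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    return p0[None, :] + s[:, None] * (p1 - p0)[None, :]
```

Chaining and overlapping such strokes, with per-stroke parametric variation, is the general mechanism by which stroke-based letterform generators produce stylistic variety.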

    Handbook of Vascular Biometrics

    This open access handbook provides the first comprehensive overview of biometrics that exploit the shape of human blood vessels for recognition, i.e. vascular biometrics, including finger vein recognition, hand/palm vein recognition, retina recognition, and sclera recognition. After an introductory chapter summarizing the state of the art and the availability of commercial systems, open datasets, and open source software, individual chapters focus on specific aspects of one of the biometric modalities, including questions of usability, security, and privacy. The book features contributions from both academia and major industrial manufacturers.

    Computational Models of Feature Representations in the Ventral Visual Stream

    Understanding vision requires unpacking the representations of the visual processing hierarchy. One major and unresolved challenge is to understand the representations of high-level category-selective areas, that is, areas that respond preferentially to certain semantic categories of stimuli (e.g., scene-selective areas respond more to scenes than to objects). Attempts at characterizing the representations of category-selective areas have been hampered by the difficulty of describing their complex perceptual representations in words: these representations exist in an "ineffable valley" between the describable patterns of perceptual features (e.g., edges, colors) and the commonsense concepts of visual cognition (e.g., object categories). Here I developed a novel approach to identify the emergent properties of mid-level representations in purely feedforward deep convolutional neural network (CNN) models of category-selective cortex. Using this approach, CNN models were fit to scene-evoked fMRI responses in both scene-selective and object-selective cortex. The method uses a semantically guided image-occlusion procedure together with behavioral ratings to systematically characterize the tuning profiles of the category-selective CNNs. I found that while the representations in category-selective CNNs appear complex and difficult to describe at a surface level, large-scale computational analyses can reveal 1) interpretable descriptions of mid-level feature representations and 2) the emergence of semantic selectivity through purely bottom-up perceptual feature tuning. Specifically, these models provide a proof-of-principle demonstration of how the semantic selectivity of category-selective regions could arise through perceptual-feature tuning in a small series of feedforward computations. These effects were robust to variations of model hyperparameters and were reproducible across different CNN architectures and training procedures.
    Taken together, I demonstrated how large datasets and in-silico computational models can be used to reveal the tuning profiles of category-selective regions and to identify how semantic preferences could emerge through bottom-up processes.
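The image-occlusion procedure mentioned above can be sketched in its simplest form: slide an occluding patch over the image and record how much a model unit's response drops at each location. This is a generic occlusion-mapping sketch, not the thesis's semantically guided variant, and `model` here stands in for any scalar-valued unit.

```python
import numpy as np

def occlusion_importance(model, image, patch=4):
    """Occlusion-mapping sketch: replace each patch of the image with its
    mean value (a neutral occluder) and record how much the scalar response
    of `model` drops. Large drops mark the regions a (category-selective)
    unit's tuning depends on. `model` is any callable 2D array -> float."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, (h // patch) * patch, patch):
        for j in range(0, (w // patch) * patch, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            heat[i // patch, j // patch] = base - model(occluded)
    return heat
```

Aggregating such maps over many images, and relating the high-importance regions to behavioral ratings of their content, is what turns this into a tuning-profile characterisation rather than a single-image visualisation.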