131 research outputs found

    Fusion features ensembling models using Siamese convolutional neural network for kinship verification

    Family is one of the most important entities in the community. Mining genetic information from facial images is increasingly used in a wide range of real-world applications, making the tracing of family members and kinship analysis remarkably easy, inexpensive, and fast compared with profiling deoxyribonucleic acid (DNA). However, building reliable models for kinship recognition still suffers from insufficient determination of familial features, unstable reference cues of kinship, and the genetic factors that influence family features. This research proposes enhanced methods for extracting and selecting effective familial features that provide evidence of kinship, thereby improving kinship verification accuracy from visual facial images. First, a Convolutional Neural Network based on Optimized Local Raw Pixels Similarity Representation (OLRPSR) is developed to improve accuracy by generating a new matrix representation that removes irrelevant information. Second, the Siamese Convolutional Neural Network with Fusion of the Best Overlapping Blocks (SCNN-FBOB) is proposed to track and identify the most informative kinship clues in order to achieve higher accuracy. Third, the Siamese Convolutional Neural Network with Ensembling Models Based on Selecting the Best Combination (SCNN-EMSBC) is introduced to overcome the weak performance of individual images and classifiers. To evaluate the proposed methods, a series of experiments is conducted on two popular benchmark kinship databases, KinFaceW-I and KinFaceW-II, and the results are compared against state-of-the-art algorithms from the literature. The SCNN-EMSBC method achieves promising results, with average accuracies of 92.42% and 94.80% on KinFaceW-I and KinFaceW-II, respectively. These results significantly improve kinship verification performance and outperform the state-of-the-art algorithms for visual image-based kinship verification.
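    The core of the Siamese designs above is a pair of branches with shared weights that map two face images into a common space where a similarity score supports a kin/non-kin decision. The following is a minimal sketch of that weight-sharing idea only; the linear-tanh "tower", input dimensions, and cosine score are illustrative stand-ins, not the convolutional architecture of the paper.

```python
import numpy as np

def shared_embed(x, W):
    """Toy shared branch: one linear map plus tanh. Both inputs pass
    through the SAME weights W, which is the Siamese principle."""
    return np.tanh(W @ x)

def kinship_score(x1, x2, W):
    """Cosine similarity of the two embeddings; thresholding this
    score would yield the kin / non-kin verification decision."""
    e1, e2 = shared_embed(x1, W), shared_embed(x2, W)
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))          # shared branch weights
parent = rng.standard_normal(16)          # hypothetical face vector
child = parent + 0.1 * rng.standard_normal(16)  # a related, similar face
stranger = rng.standard_normal(16)        # an unrelated face

print(kinship_score(parent, child, W), kinship_score(parent, stranger, W))
```

    Because both branches share W, related faces that are close in input space stay close in embedding space, so their score is high regardless of which branch each image enters.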

    A geometric and fractional entropy-based method for family photo classification

    Due to the power and impact of social media, unsolved practical issues such as human trafficking, kinship recognition, and clustering family photos from large collections have recently received special attention from researchers. In this paper, we present a new idea for family versus non-family photo classification. Unlike existing methods that rely on face recognition and biometric features, the proposed method exploits facial geometric features together with texture given by a new fractional-entropy approach. The geometric features include spatial and angle information of facial key points, which provide spatial and directional coherence, while the texture features capture regular patterns in images. The proposed method then combines these properties in a new way to classify family and non-family photos with the help of Convolutional Neural Networks (CNNs). Experimental results on our own dataset as well as on benchmark datasets show that the proposed approach outperforms state-of-the-art methods in classification rate.
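    The "spatial and angle information of facial key points" mentioned above can be sketched as pairwise distances (spatial coherence) and pairwise orientations (directional coherence) over a landmark set. The five key points and the exact feature layout below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Hypothetical 2-D facial key points; any landmark detector could supply these.
points = np.array([[30.0, 40.0],   # left eye
                   [70.0, 40.0],   # right eye
                   [50.0, 60.0],   # nose tip
                   [35.0, 80.0],   # left mouth corner
                   [65.0, 80.0]])  # right mouth corner

def geometric_features(pts):
    """All pairwise distances (spatial information) and the angle of
    each point pair relative to the x-axis (directional information)."""
    dists, angles = [], []
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            d = pts[j] - pts[i]
            dists.append(np.hypot(d[0], d[1]))
            angles.append(np.arctan2(d[1], d[0]))
    return np.array(dists), np.array(angles)

d, a = geometric_features(points)
print(len(d), len(a))   # 5 points give 10 point pairs
```

    Such distance/angle vectors are scale- and rotation-sensitive by design, which is what lets them encode the relative facial configuration that the classifier consumes.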

    Face modeling for face recognition in the wild.

    Face understanding is considered one of the most important topics in the computer vision field, since the face is a rich source of information in social interaction. Not only does the face provide information about a person's identity, but also about their membership in broad demographic categories (including sex, race, and age) and their current emotional state. Facial landmark extraction is the cornerstone of many facial analysis and understanding applications. In this dissertation, a novel facial model is designed for facial landmark detection in unconstrained real-life environments from different image modalities, including infrared and visible images. In the proposed facial landmark detector, a part-based model is combined with holistic face information. In the part-based model, the face is described by the appearance of its parts (e.g., right eye, left eye, left eyebrow, nose, mouth) and their geometric relations. The appearance is described by a novel feature referred to as the pixel difference feature, a representation three times faster to compute than the state of the art. To model the geometric relations between the face parts, the complex Bingham distribution is adapted from the statistics community into computer vision. The global information is incorporated with the local part model using a regression model, and the resulting detector outperforms the state of the art in detecting facial landmarks. The proposed facial landmark detector is tested in two computer vision problems: boosting the performance of face detectors by rejecting pseudo-faces, and camera steering in a multi-camera network.
    To highlight the applicability of the proposed model to different image modalities, it is studied in two face understanding applications: face recognition from visible images and physiological measurement for autistic individuals from thermal images. Recognizing identities from faces under different poses, expressions, and lighting conditions against complex backgrounds remains unsolved even with accurate landmark detection. Therefore, a learned similarity measure is proposed that responds only to differences in identity while filtering out illumination and pose variations; it makes use of statistical inference in the image plane. Additionally, the pose challenge is tackled by two new approaches: assigning different weights to face parts based on their visibility in the image plane at different pose angles, and synthesizing virtual facial images of each subject at different poses from a single frontal image. The proposed framework is shown to be competitive with top-performing state-of-the-art methods on standard benchmarks for face recognition in the wild. The second face understanding application is physiological measurement for autistic individuals from infrared images. In this framework, the Superficial Temporal Artery (STA) must be accurately detected and tracked while the subject is moving, playing, and interacting socially. This is very challenging because the appearance of the STA region changes over time and is not discriminative enough from other areas of the face. A novel detection concept, called supporter collaboration, is introduced, in which the STA is detected and tracked with the help of face landmarks and geometric constraints. This research advances the field of emotion recognition.
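    A pixel difference feature, as named above, is simply the signed intensity difference between chosen pixel pairs; its speed comes from needing no multiplications or filtering. This sketch assumes a generic pair-sampling scheme, since the dissertation's actual pair selection is not given here.

```python
import numpy as np

def pixel_difference_features(patch, pairs):
    """For each (p, q) pair of flattened pixel indices, emit I[p] - I[q].
    The pair list here is illustrative; in practice pairs would be
    learned or sampled over the face part's patch."""
    flat = patch.ravel().astype(np.int16)   # widen from uint8 to avoid wrap-around
    return np.array([flat[p] - flat[q] for p, q in pairs])

patch = np.array([[10, 20],
                  [30, 200]], dtype=np.uint8)   # tiny hypothetical grey patch
pairs = [(0, 1), (3, 2), (1, 0)]
print(pixel_difference_features(patch, pairs))
```

    Note the cast to a signed, wider dtype: subtracting raw `uint8` values would silently wrap negative differences, a common pitfall with intensity-difference features.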

    Modelling facial dynamics change as people age

    In recent years, increased research activity in the area of facial ageing modelling has been recorded. This interest is attributed to the potential of facial ageing modelling techniques for a number of applications, including age estimation, prediction of the current appearance of missing persons, age-specific human-computer interaction, computer graphics, forensics, and medicine. This thesis describes a general AAM model for modelling 4D (3D dynamic) ageing, together with specific models that map facial dynamics as people age. A fully automatic and robust pre-processing pipeline is used, along with an approach for tracking and inter-subject registration of 3D sequences (4D data). A 4D database of 3D videos of individuals has been assembled for this purpose; it is the first of its kind in the world. Various techniques were deployed to build the database and to overcome problems caused by noise and missing data. A two-factor (age group and gender) multivariate analysis of variance (MANOVA) was performed on the dataset, and the groups were compared to assess the separate effects of age for each gender. The results show that smiles alter with age and have different characteristics in males and females. We analysed the rich sources of information present in the 3D dynamic features of smiles to provide more insight into the patterns of smile dynamics. The temporal information investigated includes the varying dynamics of lip movements, which are analysed to extract descriptive features. We evaluated the dynamic features of closed-mouth smiles in 80 subjects of both genders. Multilevel Principal Components Analysis (mPCA) is used to analyse the effect of naturally occurring groups in a population on smile dynamics data. A concise overview of the formal aspects of mPCA is outlined, and we demonstrate that mPCA offers a way to model variation at different levels of structure in the data (between-group and within-group levels).
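    The between-group/within-group decomposition that mPCA performs can be sketched in a few lines: run one PCA on the group means and a second on the pooled within-group residuals. The two synthetic "male/female" groups and feature dimension below are purely illustrative, assuming only the two-level structure described above.

```python
import numpy as np

def mpca(data_by_group, n_between=1, n_within=1):
    """Two-level PCA sketch: separate components for variation BETWEEN
    group means and for variation WITHIN groups, as mPCA does for
    grouped smile-dynamics data."""
    groups = [np.asarray(g, float) for g in data_by_group]
    means = np.array([g.mean(axis=0) for g in groups])
    grand = means.mean(axis=0)
    # Level 1: PCA of the centred group means (between-group variation).
    _, _, Vb = np.linalg.svd(means - grand, full_matrices=False)
    # Level 2: PCA of the pooled within-group residuals.
    resid = np.vstack([g - g.mean(axis=0) for g in groups])
    _, _, Vw = np.linalg.svd(resid, full_matrices=False)
    return Vb[:n_between], Vw[:n_within]

rng = np.random.default_rng(1)
# Two hypothetical groups of 4-D smile-dynamics vectors, separated on axis 0.
g1 = rng.standard_normal((20, 4)) + np.array([5.0, 0.0, 0.0, 0.0])
g2 = rng.standard_normal((20, 4)) - np.array([5.0, 0.0, 0.0, 0.0])
Vb, Vw = mpca([g1, g2])
print(Vb.shape, Vw.shape)
```

    With this construction the leading between-group component recovers the axis separating the groups, while the within-group components describe individual variation around each group mean, mirroring the "between and within group levels" distinction in the abstract.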