9 research outputs found

    Feature Fusion and NRML Metric Learning for Facial Kinship Verification

    Features extracted from facial images are used in various fields, including kinship verification. A kinship verification system determines whether a pair of facial images depicts a kin or non-kin relation by analysing their facial features. In this research, different texture and color features are used together with a metric learning method to verify kinship for the four relations of father-son, father-daughter, mother-son, and mother-daughter. First, effective features are fused and NRML metric learning is used to generate a discriminative feature vector; an SVM classifier is then used to verify the kinship relation. The accuracy of the proposed method is measured on the KinFaceW-I and KinFaceW-II databases. The evaluation results show that feature fusion and NRML metric learning improve the performance of the kinship verification system. In addition to the proposed approach, the effect of extracting features from image blocks versus the whole image is investigated, and the results indicate that block-wise feature extraction can improve the final accuracy of kinship verification.
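The pipeline described in this abstract (feature fusion, a learned metric, then a classifier decision) might be sketched roughly as follows. This is a toy stand-in, not the paper's NRML formulation: the diagonal-weight update, toy descriptors, and distance threshold are all illustrative assumptions, and a full pipeline would train an SVM on the metric distances rather than compare them directly.

```python
def fuse_features(texture, color):
    """Feature fusion: concatenate per-image texture and color descriptors."""
    return texture + color

def learn_diagonal_metric(kin_pairs, nonkin_pairs, dim, lr=0.1, epochs=50):
    """Crude stand-in for NRML: learn non-negative per-dimension weights that
    shrink distances between kin pairs and grow them between non-kin pairs."""
    w = [1.0] * dim
    for _ in range(epochs):
        for (x, y), is_kin in ([(p, True) for p in kin_pairs] +
                               [(p, False) for p in nonkin_pairs]):
            for i in range(dim):
                d = (x[i] - y[i]) ** 2
                w[i] += -lr * d if is_kin else lr * d
                w[i] = max(w[i], 0.0)  # keep the metric weights non-negative
    return w

def distance(w, x, y):
    """Weighted squared distance under the learned diagonal metric."""
    return sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y))

# Toy fused descriptors: the kin pair agrees except in one noisy dimension.
kin = [(fuse_features([0.10, 0.20], [0.90]), fuse_features([0.15, 0.25], [0.10]))]
nonkin = [(fuse_features([0.10, 0.20], [0.40]), fuse_features([0.90, 0.80], [0.50]))]

w = learn_diagonal_metric(kin, nonkin, dim=3)
print(distance(w, *kin[0]) < distance(w, *nonkin[0]))  # True: kin pair ends up closer
```

Block-wise extraction, as investigated in the paper, would simply run `fuse_features` per image block and concatenate the results before metric learning.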

    Fusion features ensembling models using Siamese convolutional neural network for kinship verification

    Family is one of the most important entities in the community. Mining genetic information from facial images is increasingly used in a wide range of real-world applications, making family-member tracing and kinship analysis remarkably easy, inexpensive, and fast compared with deoxyribonucleic acid (DNA) profiling. However, building reliable models for kinship recognition still suffers from insufficient determination of familial features, unstable reference cues of kinship, and the genetic influence factors on family features. This research proposes enhanced methods for extracting and selecting effective familial features that can provide evidence of kinship, thereby improving kinship verification accuracy from visual facial images. First, the Convolutional Neural Network based on Optimized Local Raw Pixels Similarity Representation (OLRPSR) method is developed to improve accuracy by generating a new matrix representation that removes irrelevant information. Second, the Siamese Convolutional Neural Network and Fusion of the Best Overlapping Blocks (SCNN-FBOB) is proposed to track and identify the most informative kinship clues in order to achieve higher accuracy. Third, the Siamese Convolutional Neural Network and Ensembling Models Based on Selecting Best Combination (SCNN-EMSBC) is introduced to overcome the weak performance of an individual image and classifier. To evaluate the proposed methods, a series of experiments is conducted on two popular kinship benchmark databases, KinFaceW-I and KinFaceW-II, and the results are compared against state-of-the-art algorithms from the literature. The SCNN-EMSBC method achieves promising results, with average accuracies of 92.42% and 94.80% on KinFaceW-I and KinFaceW-II, respectively. These results significantly improve kinship verification performance and outperform state-of-the-art algorithms for visual image-based kinship verification.
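The core Siamese idea that the abstract's methods build on — one shared set of weights embedding both images, with similarity computed between the embeddings — might be sketched as below. The weight matrix, inputs, and raw-cosine scoring are toy assumptions; the paper's SCNN variants use trained convolutional networks and ensemble scores from multiple overlapping blocks and classifiers.

```python
import math

def embed(x, weights):
    """Shared embedding applied to both inputs -- the 'Siamese' part."""
    return [math.tanh(sum(w * v for w, v in zip(row, x))) for row in weights]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def siamese_score(img_a, img_b, weights):
    """Similarity of the two shared-weight embeddings; a trained model would
    feed this (or the embedding difference) to a learned classifier head."""
    return cosine(embed(img_a, weights), embed(img_b, weights))

shared_weights = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]  # illustrative, untrained
parent   = [0.9, 0.1, 0.4]
child    = [0.8, 0.2, 0.5]
stranger = [-0.7, 0.9, -0.3]
print(siamese_score(parent, child, shared_weights) >
      siamese_score(parent, stranger, shared_weights))  # True on these toy inputs
```

An ensembling step in the spirit of SCNN-EMSBC would compute such scores per image block or per model and combine them (e.g. by averaging the best-performing combination) before the final kin/non-kin decision.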

    Machine learning for audio-visual kinship verification

    No full text
    Abstract Human faces implicitly indicate family linkage, showing the perceived facial resemblance between people who are biologically related. Psychological studies have found that humans can discriminate parent-child pairs from unrelated pairs just by observing facial images. Inspired by this finding, automatic facial kinship verification has emerged in the field of computer vision and pattern recognition, and many advanced computational models have been developed to assess the facial similarity between kinship pairs. Compared to human perception, automatic kinship verification methods can effectively and objectively capture subtle kin similarities such as shape and color. While many efforts have been devoted to improving verification performance from human faces, multimodal exploration of kinship verification has not been properly addressed. This thesis proposes, for the first time, the combination of human faces and voices to verify kinship, referred to as audio-visual kinship verification, and establishes the first comprehensive audio-visual kinship datasets, consisting of multiple videos of kin-related people speaking to the camera. Extensive experiments on these newly collected datasets detail the comparative performance of the audio and visual modalities and their combination using novel deep-learning fusion methods. The experimental results indicate the effectiveness of the proposed methods and show that audio (voice) information is complementary and useful for the kinship verification problem.

    Audio-visual kinship verification in the wild

    No full text
    Abstract Kinship verification is a challenging problem in which recognition systems are trained to establish a kin relation between two individuals based on facial images or videos. However, due to variations in capture conditions (background, pose, expression, illumination and occlusion), state-of-the-art systems currently provide a low level of accuracy. As in many visual recognition and affective computing applications, kinship verification may benefit from a combination of discriminant information extracted from both video and audio signals. In this paper, we investigate for the first time the fusion of audio-visual information from both face and voice modalities to improve kinship verification accuracy. First, we propose a new multi-modal kinship dataset called TALking KINship (TALKIN), comprising several pairs of video sequences with subjects talking. State-of-the-art conventional and deep learning models are assessed and compared for kinship verification on this dataset. Finally, we propose a deep Siamese network for multi-modal fusion of kinship relations. Experiments with the TALKIN dataset indicate that the proposed Siamese network provides significantly higher accuracy than baseline uni-modal and multi-modal fusion techniques for kinship verification. Results also indicate that audio (vocal) information is complementary and useful for the kinship verification problem.
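The simplest baseline that the paper's deep Siamese fusion network is compared against — late fusion of per-modality scores — could be sketched as follows. The weights and threshold here are illustrative assumptions; in the paper the fusion is learned jointly by the network rather than fixed by hand.

```python
def late_fusion(face_score, voice_score, w_face=0.6, w_voice=0.4):
    """Weighted late fusion of per-modality similarity scores. The weights
    are illustrative stand-ins for a learned fusion."""
    return w_face * face_score + w_voice * voice_score

def verify_kin(face_score, voice_score, threshold=0.5):
    """Kin/non-kin decision on the fused score; threshold is illustrative."""
    return late_fusion(face_score, voice_score) >= threshold

print(verify_kin(0.8, 0.6))  # True: strong similarity in both modalities
print(verify_kin(0.3, 0.2))  # False: weak similarity in both modalities
```

The reported gain of the Siamese network over such baselines suggests that learning the fusion (rather than fixing modality weights) is what captures the complementary vocal information.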

    Audio-Visual Kinship Verification: A New Dataset and a Unified Adaptive Adversarial Multimodal Learning Approach

    No full text
    Abstract Facial kinship verification refers to automatically determining whether two people have a kin relation from their faces. It has become a popular research topic due to its potential practical applications. Over the past decade, many efforts have been devoted to improving verification performance from human faces only, without other biometric information such as the speaking voice. In this article, to interpret and benefit from multiple modalities, we propose for the first time to combine human faces and voices to verify kinship, which we refer to as the audio-visual kinship verification study. We first establish a comprehensive audio-visual kinship dataset, called TALKIN-Family, that consists of familial talking facial videos under various scenarios. Based on this dataset, we present an extensive evaluation of kinship verification from faces and voices. In particular, we propose a deep-learning-based fusion method, called unified adaptive adversarial multimodal learning (UAAML), which consists of an adversarial network and an attention module operating on unified multimodal features. Experiments show that audio (voice) information is complementary to facial features and useful for the kinship verification problem. Furthermore, the proposed fusion method outperforms baseline methods. In addition, we evaluate human verification ability on a subset of TALKIN-Family, which indicates that humans achieve higher accuracy when they have access to both faces and voices; the machine-learning methods nonetheless outperform human ability both effectively and efficiently. Finally, we outline future work and research opportunities with the TALKIN-Family dataset.
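An attention module of the kind the abstract mentions — weighting the face and voice embeddings before fusing them — might be sketched as below. The gate vector, embeddings, and scoring rule are toy assumptions, not the UAAML architecture (which also includes an adversarial network and operates on unified multimodal features).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fusion(face_emb, voice_emb, gate):
    """Stand-in for an attention module: score each modality embedding
    against a gate vector, softmax the scores into attention weights,
    and return the attention-weighted sum of the two embeddings."""
    scores = [sum(g * v for g, v in zip(gate, emb))
              for emb in (face_emb, voice_emb)]
    weights = softmax(scores)
    fused = [weights[0] * f + weights[1] * v
             for f, v in zip(face_emb, voice_emb)]
    return fused, weights

face_emb, voice_emb = [1.0, 0.0], [0.0, 1.0]
fused, weights = attention_fusion(face_emb, voice_emb, gate=[1.0, 0.0])
print(weights[0] > weights[1])  # True: the gate favors the face modality
```

In a trained model the gate itself would be learned, letting the network adaptively emphasize whichever modality is more reliable for a given pair.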