21 research outputs found

    Towards a reliable face recognition system.

    Face Recognition (FR) is an important area in computer vision with many applications such as security and automated border control. Recent advancements in this domain have pushed model performance to human-level accuracy. However, the varying conditions of the real world pose additional challenges for their adoption. In this paper, we analyze the performance of a cross-section of face detection and recognition models. Experiments were carried out without any preprocessing on three state-of-the-art face detection methods, namely HOG, YOLO and MTCNN, and three recognition models, namely VGGface2, FaceNet and Arcface. Our results indicate that these methods rely significantly on preprocessing for optimum performance.
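
The detect-then-recognize pipeline evaluated above can be sketched roughly as follows. This is a minimal illustration assuming the third-party facenet-pytorch package for MTCNN detection and a FaceNet-style embedder, not the authors' exact code; the verification threshold is purely illustrative.

```python
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

detector = MTCNN(image_size=160)                             # MTCNN face detector/aligner
embedder = InceptionResnetV1(pretrained='vggface2').eval()   # FaceNet-style embedding network

def embed(path):
    """Detect a face in an image and return its identity embedding, or None."""
    face = detector(Image.open(path).convert('RGB'))          # aligned 160x160 face tensor
    if face is None:
        return None                                           # detection failed
    with torch.no_grad():
        return embedder(face.unsqueeze(0))[0]                 # 512-D embedding

def same_identity(path_a, path_b, threshold=0.6):
    """Cosine-similarity verification; the threshold here is purely illustrative."""
    ea, eb = embed(path_a), embed(path_b)
    if ea is None or eb is None:
        return False
    return torch.nn.functional.cosine_similarity(ea, eb, dim=0).item() > threshold
```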

    Face presentation attack detection using texture analysis

    No full text
    Abstract Over the last decades, face recognition systems have improved considerably in terms of performance. As a result, this technology is now considered mature and is applied in many real-world applications, from border control to financial transactions and computer security. Yet many studies show that these systems suffer from vulnerabilities to spoofing attacks, a weakness that may limit their usage in many cases. A face spoofing attack, or presentation attack, occurs when someone tries to masquerade as someone else by presenting a fake face in front of the face recognition camera. To protect recognition systems against attacks of this kind, many face anti-spoofing methods have been proposed. These methods have shown good performance on the existing face anti-spoofing databases. However, their performance degrades drastically under real-world variations (e.g., illumination and camera device variations). In this thesis, we concentrate on improving the generalization capabilities of face anti-spoofing methods, with a particular focus on texture-based techniques. In contrast to most existing texture-based methods, which extract texture features from gray-scale images, we propose a joint color-texture analysis. First, the face images are converted into different color spaces. Then, the feature histograms computed over each image band are concatenated and used for discriminating between real and fake face images. Our experiments conducted on three color spaces, RGB, HSV and YCbCr, show that extracting the texture information from separated luminance and chrominance color spaces (HSV and YCbCr) yields better performance than gray-scale and RGB image representations. Moreover, to deal with the problem of illumination and image-resolution variations, we propose to extract this texture information from images at different scales. In addition to representing the face images at different scales, the multi-scale filtering methods also act as pre-processing against factors such as noise and illumination. Although our results improve on the state of the art, they still fall short of the requirements of real-world applications. Thus, to help in the development of robust face anti-spoofing methods, we collected a new challenging face anti-spoofing database using six camera devices in three different illumination and environmental conditions. Furthermore, we organized a competition on the collected database in which fourteen face anti-spoofing methods were assessed and compared.
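
A minimal sketch of the joint color-texture idea described above might look like the following. It assumes LBP as the texture descriptor and OpenCV/scikit-image for the color conversions and histograms, whereas the thesis itself covers several descriptors, parameter settings, and multi-scale filtering variants.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def color_lbp_descriptor(bgr_face, P=8, R=1):
    """Concatenate per-channel uniform-LBP histograms computed over HSV and YCbCr."""
    color_views = [cv2.cvtColor(bgr_face, cv2.COLOR_BGR2HSV),
                   cv2.cvtColor(bgr_face, cv2.COLOR_BGR2YCrCb)]
    histograms = []
    for view in color_views:
        for channel in cv2.split(view):
            lbp = local_binary_pattern(channel, P, R, method='uniform')
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            histograms.append(hist)
    # The concatenated descriptor is then fed to a binary real-vs-spoof classifier (e.g. an SVM).
    return np.concatenate(histograms)
```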

    On the generalization of color texture-based face anti-spoofing

    No full text
    Abstract Despite the significant attention given to the problem of face spoofing, we still lack generalized presentation attack detection (PAD) methods that perform robustly in practical face recognition systems. Existing face anti-spoofing techniques have indeed achieved impressive results when trained and evaluated on the same database (i.e., intra-test protocols). Cross-database experiments have, however, revealed that the performance of state-of-the-art methods drops drastically as they fail to cope with new attack scenarios and other operating conditions not seen during the training and development phases. So far, even the popular convolutional neural networks (CNN) have failed to derive well-generalizing features for face anti-spoofing. In this work, we explore the effect of different factors, such as acquisition conditions and presentation attack instrument (PAI) variation, on the generalization of color texture-based face anti-spoofing. Our extensive cross-database evaluation of seven color texture-based methods demonstrates that most of the methods are unable to generalize to unseen spoofing attack scenarios. More importantly, the experiments show that some facial color texture representations are more robust to particular PAIs than others. From this observation, we propose a face PAD solution of attack-specific countermeasures based solely on color texture analysis and investigate how well it generalizes under display and print attacks in different conditions. The evaluation of the method combining attack-specific detectors on three benchmark face anti-spoofing databases showed remarkable generalization ability against display attacks, while print attacks still require further attention.
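
The attack-specific structure described above could be organized along these lines; the linear SVM back-end and the max-score fusion rule are assumptions used only to illustrate combining per-PAI detectors over color-texture features, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import LinearSVC

class AttackSpecificPAD:
    """One binary detector per presentation attack instrument (PAI), fused by max score."""

    def __init__(self):
        self.detectors = {}

    def fit(self, features, labels, pai):
        """features: color-texture descriptors; labels: 1 = spoof of this PAI, 0 = bona fide."""
        self.detectors[pai] = LinearSVC(C=1.0).fit(features, labels)

    def score(self, features):
        """Fused spoof score per sample: the most confident attack-specific detector wins."""
        scores = np.stack([clf.decision_function(features)
                           for clf in self.detectors.values()], axis=0)
        return scores.max(axis=0)
```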

    Face antispoofing using speeded-up robust features and Fisher vector encoding

    No full text
    Abstract The vulnerabilities of face biometric authentication systems to spoofing attacks have received significant attention during recent years. Some of the proposed countermeasures have achieved impressive results when evaluated on intra-tests, i.e., when the system is trained and tested on the same database. Unfortunately, most of these techniques fail to generalize well to unseen attacks, e.g., when the system is trained on one database and then evaluated on another. This is a major concern in biometric antispoofing research that is mostly overlooked. In this letter, we propose a novel solution based on describing the facial appearance by applying Fisher vector encoding on speeded-up robust features (SURF) extracted from different color spaces. The evaluation of our countermeasure on three challenging benchmark face-spoofing databases, namely the CASIA face antispoofing database, the Replay-Attack database, and the MSU mobile face spoof database, showed excellent and stable performance across all three datasets. Most importantly, in inter-database tests, our proposed approach outperforms the state of the art and yields very promising generalization capabilities, even when only limited training data are used.
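
A simplified sketch of the pipeline described above, extracting SURF descriptors per color channel and encoding them with a Fisher vector built on a Gaussian mixture model, is given below. SURF requires an opencv-contrib build with the non-free modules enabled, and the first-order-only encoding and GMM size are simplifications rather than the authors' settings.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def surf_descriptors(channel, hessian_threshold=400):
    """SURF descriptors (64-D) for a single 8-bit image channel."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    _, desc = surf.detectAndCompute(channel, None)
    return desc if desc is not None else np.empty((0, 64))

def fit_gmm(training_descriptors, n_components=16):
    """Visual vocabulary: a diagonal-covariance GMM fitted on pooled training descriptors."""
    return GaussianMixture(n_components, covariance_type='diag').fit(training_descriptors)

def fisher_vector(descriptors, gmm):
    """First-order Fisher vector (gradients w.r.t. the GMM means), power- and L2-normalised."""
    post = gmm.predict_proba(descriptors)                        # (N, K) soft assignments
    diff = descriptors[:, None, :] - gmm.means_[None, :, :]      # (N, K, D)
    fv = (post[:, :, None] * diff / np.sqrt(gmm.covariances_)[None]).sum(axis=0)
    fv /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                       # power normalisation
    return fv.ravel() / (np.linalg.norm(fv) + 1e-12)             # L2 normalisation

def encode_face(bgr_face, gmm):
    """Pool SURF descriptors over the HSV channels of a face crop and encode them."""
    hsv = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2HSV)
    descriptors = np.vstack([surf_descriptors(c) for c in cv2.split(hsv)])
    return fisher_vector(descriptors, gmm)
```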

    Review of face presentation attack detection competitions

    No full text
    Abstract Face presentation attack detection has received increasing attention ever since the vulnerabilities to spoofing were widely recognized. The state of the art in software-based face anti-spoofing has been assessed in three international competitions organized in conjunction with major biometrics conferences in 2011, 2013, and 2017, each introducing new challenges to the research community. In this chapter, we present the design and results of the three competitions. The particular focus is on the latest competition, where the aim was to evaluate the generalization abilities of the proposed algorithms under real-world variations faced in mobile scenarios, including previously unseen acquisition conditions, presentation attack instruments, and sensors. We also discuss the lessons learnt from the competitions and the future challenges in the field in general.

    Face anti-spoofing with human material perception

    No full text
    Abstract Face anti-spoofing (FAS) plays a vital role in securing face recognition systems against presentation attacks. Most existing FAS methods capture various cues (e.g., texture, depth and reflection) to distinguish live faces from spoofing faces. All these cues are based on the discrepancy among physical materials (e.g., skin, glass, paper and silicone). In this paper we rephrase face anti-spoofing as a material recognition problem and combine it with classical human material perception, intending to extract discriminative and robust features for FAS. To this end, we propose the Bilateral Convolutional Networks (BCN), which capture intrinsic material-based patterns by aggregating multi-level bilateral macro- and micro-information. Furthermore, a Multi-level Feature Refinement Module (MFRM) and multi-head supervision are utilized to learn more robust features. Comprehensive experiments are performed on six benchmark datasets, and the proposed method achieves superior performance in both intra- and cross-dataset testing. One highlight is that we achieve an overall 11.3 ± 9.5% EER for cross-type testing on the SiW-M dataset, which significantly outperforms previous results. We hope this work will facilitate future cooperation between the FAS and material communities.
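
The bilateral macro/micro intuition behind BCN can be previewed with a classical bilateral filter, where the smoothed output retains macro (shape and shading) structure and the residual keeps micro texture detail. This is only a rough illustration of the cue, since BCN learns the decomposition inside a CNN together with multi-level feature refinement and multi-head supervision.

```python
import cv2
import numpy as np

def bilateral_macro_micro(bgr_face, d=9, sigma_color=75, sigma_space=75):
    """Return (macro, micro) views of a face crop: bilateral-smoothed base and signed residual."""
    macro = cv2.bilateralFilter(bgr_face, d, sigma_color, sigma_space)
    micro = bgr_face.astype(np.float32) - macro.astype(np.float32)   # fine texture / noise residual
    return macro, micro
```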

    OULU-NPU: a mobile face presentation attack database with real-world variations

    No full text
    Abstract The vulnerabilities of face-based biometric systems to presentation attacks have finally been recognized, yet we still lack generalized software-based face presentation attack detection (PAD) methods that perform robustly in practical mobile authentication scenarios. This is mainly because the existing public face PAD datasets are only beginning to cover a variety of attack scenarios and acquisition conditions, but their standard evaluation protocols do not encourage researchers to assess the generalization capabilities of their methods across these variations. In this work, we introduce a new public face PAD database, OULU-NPU, aimed at evaluating the generalization of PAD methods in more realistic mobile authentication scenarios across three covariates: unknown environmental conditions (namely illumination and background scene), acquisition devices, and presentation attack instruments (PAI). This publicly available database consists of 5940 videos corresponding to 55 subjects recorded in three different environments using the high-resolution frontal cameras of six different smartphones. The high-quality print and video-replay attacks were created using two different printers and two different display devices. Each of the four unambiguously defined evaluation protocols introduces at least one previously unseen condition to the test set, which enables a fair comparison of the generalization capabilities of new and existing approaches. The baseline results, obtained with a color texture analysis-based face PAD method, demonstrate the challenging nature of the database.
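
The leave-one-covariate-out design of the evaluation protocols described above can be illustrated with a hypothetical split helper; the record format and field names below are assumptions for illustration only and do not reproduce the official protocol files.

```python
def split_unseen(records, covariate, held_out):
    """records: list of dicts such as {'subject': 3, 'session': 1, 'phone': 2, 'pai': 'print'}.
    Training uses every value of the chosen covariate except the held-out one,
    which appears only in the test set (i.e. as a previously unseen condition)."""
    train = [r for r in records if r[covariate] != held_out]
    test = [r for r in records if r[covariate] == held_out]
    return train, test

# e.g. an unseen-sensor evaluation: train on five smartphones, test on the sixth
# train_videos, test_videos = split_unseen(all_videos, covariate='phone', held_out=6)
```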

    Robust face anti-spoofing using CNN with LBP and WLD

    No full text