80 research outputs found

    Detecção de ataques de apresentação por faces em dispositivos móveis (Detection of face presentation attacks on mobile devices)

    Advisors: Anderson de Rezende Rocha, Fernanda Alcântara Andaló. Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação.
Abstract: With the widespread use of biometric authentication systems, such as those based on face recognition, comes the exploitation of simple attacks at the sensor level that can undermine the effectiveness of these technologies in real-world setups. One example of such an attack takes place when an impostor, aiming at unlocking someone else's smartphone, deceives the device's built-in face recognition system by presenting a printed image of the genuine user's face. In this work, we study the problem of automatically detecting presentation attacks against face authentication methods in mobile devices, considering the use case of fast device unlocking and the hardware constraints of such devices.
We do not assume the existence of any extra sensors or user intervention, relying only on the image captured by the device's frontal camera. Our contributions lie in multiple aspects of the problem. Firstly, we collect RECOD-MPAD, a new presentation-attack dataset that is tailored to the mobile-device setup and is built to have real-world variations in lighting, including outdoor and low-light sessions, in contrast to existing public datasets. Secondly, to enrich the understanding of how far we can go with purely software-based methods when tackling this problem, we adopt a solely data-driven approach (differently from handcrafted methods in prior art that focus on specific aspects of the problem) and propose three different ways of training a deep convolutional neural network to detect presentation attacks: training with aligned faces, training with multi-resolution patches, and training with a multi-objective loss function crafted specifically for the problem. By using a lightweight architecture as the core of our network, we ensure that our solution can be efficiently embedded in smartphones on the market in 2017. Additionally, we provide a careful analysis that considers several user-disjoint and cross-factor protocols, highlighting some of the problems with current datasets and approaches. Experiments with the OULU-NPU benchmark, which was recently used in an international competition, suggest that our methods are among the top-performing ones. Finally, to further enhance the model's efficacy and discriminability in the target setup of user authentication for mobile devices, we propose a method that leverages the available gallery of user data in the device and adapts the decision-making process to the user's and device's own characteristics. Mestrado. Ciência da Computação. Mestre em Ciência da Computação.
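The multi-resolution patch training mentioned above can be sketched as follows. This is a minimal illustration of the idea (tiling a face image into fixed-size crops at several scales); the helper name, patch size, scales, and the nearest-neighbour resize are assumptions for the sketch, not the thesis's actual pipeline:

```python
import numpy as np

def extract_multires_patches(image, patch_size=96, scales=(1.0, 0.5)):
    """Tile non-overlapping patch_size crops from an image at several scales.

    Hypothetical helper: the image is resized by each scale factor
    (nearest-neighbour via index sampling, to avoid external deps) and
    split into a grid of crops that could feed a patch-based CNN.
    """
    patches = []
    for s in scales:
        h, w = int(image.shape[0] * s), int(image.shape[1] * s)
        rows = (np.arange(h) / s).astype(int).clip(0, image.shape[0] - 1)
        cols = (np.arange(w) / s).astype(int).clip(0, image.shape[1] - 1)
        resized = image[rows][:, cols]
        for y in range(0, h - patch_size + 1, patch_size):
            for x in range(0, w - patch_size + 1, patch_size):
                patches.append(resized[y:y + patch_size, x:x + patch_size])
    return patches

img = np.zeros((192, 192), dtype=np.uint8)   # stand-in for an aligned face crop
p = extract_multires_patches(img)
print(len(p), p[0].shape)  # 5 patches: four at full scale, one at half scale
```

Patch-level training has the side benefit of multiplying the effective number of training samples per image, which matters for small presentation-attack datasets.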

    Multi-Modal Ocular Recognition in presence of occlusion in Mobile Devices

    Title from PDF of title page viewed September 18, 2019. Dissertation advisor: Reza Derakhshani. Vita. Includes bibliographical references (pages 128-144). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2018.
The presence of eyeglasses on human faces causes real challenges for ocular, facial, and soft-biometric (such as eyebrow) recognition due to glasses reflection, shadow, and frame occlusion. In this regard, two operations (eyeglasses detection and eyeglasses segmentation) have been proposed to mitigate the effect of occlusion caused by eyeglasses. Eyeglasses detection is an important initial step towards eyeglasses segmentation. Three schemes of eyeglasses detection have been proposed: non-learning-based, learning-based, and deep-learning-based. The non-learning scheme, which consists of cascaded filters, achieved an overall accuracy of 99.0% on the VISOB dataset and 97.9% on the FERET dataset. The learning-based scheme consists of extracting Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG) features and fusing them together, then applying classifiers (such as Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Linear Discriminant Analysis (LDA)) and fusing the outputs of these classifiers. The latter obtained a best overall accuracy of about 99.3% on FERET and 100% on VISOB. In addition, the deep-learning-based scheme presents a comparative study of eyeglasses-frame detection using different Convolutional Neural Network (CNN) structures applied to the frame-bridge region and the extended ocular region. The best CNN model obtained an overall accuracy of 99.96% for the ROI consisting of the frame bridge. Moreover, two schemes of eyeglasses segmentation have been introduced. The first segmentation scheme was a cascaded Convolutional Neural Network (CNN).
This scheme consists of cascaded CNNs for eyeglasses detection, weight generation, and glasses segmentation, followed by mathematical and binarization operations. The scheme achieved 100% eyeglasses detection accuracy and 91% segmentation accuracy. The second segmentation scheme was a convolutional-deconvolutional network. This CNN model has been implemented with main convolutional layers, deconvolutional layers, and one custom (lambda) layer. This scheme achieved better results than the cascaded approach, with 97% segmentation accuracy. Furthermore, two soft-biometric re-identification schemes have been introduced with eyeglasses mitigation. The first scheme was eyebrow-based user authentication, consisting of local, global, and deep feature extraction with learning-based matching; the best result was 0.63% EER, obtained using score-level fusion of handcrafted descriptors (HOG and GIST) with the deep VGG16 descriptor. The second scheme was eyeglasses-based user authentication, which consists of eyeglasses segmentation, morphological cleanup, feature extraction, and learning-based matching; the best result was 3.44% EER, again using score-level fusion of handcrafted descriptors (HOG and GIST) with the deep VGG16 descriptor. Also, an EER improvement of 2.51% for indoor vs. outdoor (In:Out) light settings was achieved for eyebrow-based authentication after eyeglasses segmentation and removal using the convolutional-deconvolutional approach followed by in-painting. Introduction -- Background in machine learning and computer vision -- Eyeglasses detection and segmentation -- User authentication using soft biometrics -- Conclusion and future work -- Appendix
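The LBP-plus-HOG feature stage of the learning-based detector can be sketched as below. This is a minimal, dependency-free illustration with assumed bin counts and the basic (non-uniform) LBP variant, not the dissertation's exact implementation; in practice library versions such as scikit-image's `local_binary_pattern` and `hog` would be used:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern histogram (256 bins).

    Each interior pixel is compared with its 8 neighbours; the
    comparison bits form a code in 0..255, and codes are histogrammed.
    """
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def grad_orientation_hist(img, bins=9):
    """Coarse HOG-style histogram: gradient orientations weighted by magnitude."""
    gy, gx = np.gradient(img.astype(float))
    ang = np.arctan2(gy, gx) % np.pi          # unsigned orientation in [0, pi)
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

# Toy grayscale ROI (e.g., a frame-bridge crop); fuse the two descriptors
# by concatenation before feeding a classifier such as an SVM.
img = (np.arange(100).reshape(10, 10) % 7 * 30).astype(np.uint8)
feat = np.concatenate([lbp_histogram(img), grad_orientation_hist(img)])
print(feat.shape)  # (265,) = 256 LBP bins + 9 orientation bins
```

Score-level fusion of several classifiers trained on this feature vector (e.g., averaging SVM, MLP, and LDA scores) is the final step the abstract describes.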

    No soldiers left behind: An IoT-based low-power military mobile health system design

    © 2013 IEEE. There has been an increasing prevalence of ad-hoc networks for various purposes and applications. These include Low Power Wide Area Networks (LPWAN) and Wireless Body Area Networks (WBAN), which have emerging applications in health monitoring as well as user location tracking in emergency settings. Further applications include real-time actuation of IoT equipment and activation of emergency alarms through the inference of a user's situation using sensors and personal devices over an LPWAN. This has potential benefits for military networks and applications regarding the health of soldiers and field personnel during a mission. Due to the wireless nature of ad-hoc network devices, it is crucial to conserve battery power for sensors and equipment which transmit data to a central server. An inference system can be applied on devices to reduce the data size for transfer and consequently reduce battery consumption; however, this could compromise accuracy. This paper presents a framework for secure automated messaging and data fusion as a solution to the challenge of reducing data size whilst maintaining a satisfactory accuracy rate. A Multilayer Inference System (MIS) was used to conserve the battery power of devices such as wearables and sensor devices. The results for this system showed a data reduction of 97.9% whilst maintaining satisfactory accuracy compared with existing single-layer inference methods. Authentication accuracy can be further enhanced with additional biometrics and health data.

    Face Liveness Detection under Processed Image Attacks

    Face recognition is a mature and reliable technology for identifying people. Thanks to high-definition cameras and supporting devices, it is considered the fastest and the least intrusive biometric recognition modality. Nevertheless, effective spoofing attempts on face recognition systems have been shown to be possible. As a result, various anti-spoofing algorithms were developed to counteract these attacks; they are commonly referred to in the literature as liveness detection tests. In this research, we highlight the effectiveness of some simple, direct spoofing attacks and test one of the current robust liveness detection algorithms, the logistic-regression-based face liveness detection from a single image proposed by Tan et al. in 2010, against malicious attacks using processed imposter images. In particular, we study experimentally the effect of common image processing operations, such as sharpening and smoothing, as well as corruption with salt-and-pepper noise, on the face liveness detection algorithm, and we find that it is especially vulnerable to spoofing attempts using processed imposter images. We design and present a new facial database, the Durham Face Database, which is the first, to the best of our knowledge, to have client, imposter, as well as processed imposter images. Finally, we evaluate our claim on the effectiveness of the proposed imposter-image attacks using transfer learning on Convolutional Neural Networks. We verify that such attacks are more difficult to detect even when using high-end, expensive machine learning techniques.
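The three image processing operations the abstract names (smoothing, sharpening, and salt-and-pepper corruption) can be sketched as below. The kernel size, sharpening amount, and noise fraction are illustrative assumptions; a real study would use library implementations (e.g., OpenCV or scipy.ndimage):

```python
import numpy as np

rng = np.random.default_rng(0)

def box_blur(img, k=3):
    """Smoothing attack: simple k x k mean filter with edge padding."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp(img, amount=1.0):
    """Sharpening attack: add back a scaled high-frequency residual."""
    return np.clip(img + amount * (img - box_blur(img)), 0, 255)

def salt_pepper(img, p=0.05):
    """Corrupt roughly a fraction p of pixels with extreme values."""
    noisy = img.astype(float).copy()
    mask = rng.random(img.shape)
    noisy[mask < p / 2] = 0        # pepper
    noisy[mask > 1 - p / 2] = 255  # salt
    return noisy

# Toy grayscale "imposter image"; each attack yields a processed variant
# that the liveness detector is then tested against.
img = rng.integers(0, 256, (32, 32)).astype(float)
for attack in (box_blur, unsharp, salt_pepper):
    out = attack(img)
    print(attack.__name__, out.shape)
```

The attack premise is that such low-level processing perturbs the texture cues (e.g., print artifacts) that single-image liveness detectors rely on, without changing the face's identity content.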

    Robust iris recognition under unconstrained settings

    Integrated master's thesis (tese de mestrado integrado). Bioengineering. Faculdade de Engenharia, Universidade do Porto. 201

    Security and privacy services based on biosignals for implantable and wearable device

    International Mention in the doctoral degree (Mención Internacional en el título de doctor). The proliferation of wearable and implantable medical devices has given rise to an interest in developing security schemes suitable for these devices and the environment in which they operate. One area that has received much attention lately is the use of (human) biological signals as the basis for biometric authentication, identification, and the generation of cryptographic keys. More concretely, in this dissertation we use the Electrocardiogram (ECG) to extract fiducial points which are later used in cryptographic protocols. Fiducial points are the points of interest that can be extracted from biological signals; examples of ECG fiducial points are the P-wave, the QRS complex, the T-wave, the R-peaks, and the RR time interval. In particular, we focus on the time difference between two consecutive heartbeats (R-peaks). These time intervals are referred to as Inter-Pulse Intervals (IPIs) and have been proven to contain entropy after applying some signal processing algorithms, a process known as the quantization algorithm. The entropy of the heart signal makes ECG values an ideal candidate for generating tokens to be used in security protocols. Most of the proposed solutions in the literature rely on some questionable assumptions.
For instance, it is commonly assumed that it is possible to generate the same cryptographic token on at least two different devices that are sensing the same signal, using the IPIs of each cardiac signal, without applying any synchronization algorithm; authors typically only measure the entropy of the LSBs to determine whether the generated cryptographic values are random or not; authors usually pick the four LSBs, assuming they are the best ones for creating cryptographic tokens; the datasets used in these works are rather small and, therefore, possibly not significant enough; and, in general, it is impossible to reproduce the experiments carried out by other researchers because the source code of such experiments is usually not available. In this Thesis, we overcome these weaknesses by systematically addressing most of the open research questions. For this reason, in all the experiments carried out during this research we used PhysioNet, a public resource available on the Internet that hosts a huge heart-signal archive named PhysioBank. This repository is constantly being updated by medical researchers, who share sensitive information about patients, and it also offers open-source software named PhysioToolkit which can be used to read and display these signals. All the datasets we used contain ECG records obtained from a variety of real subjects with different heart-related pathologies as well as healthy people. The first chapter of this dissertation (Chapter 1) is entirely dedicated to presenting the research questions, introducing the main concepts used throughout this document, and settling some medical and cryptographic definitions. Finally, the objectives that this dissertation tackles are described together with the main motivations for this Thesis. In Chapter 2, we report the results of a large-scale statistical study to determine whether the heart signal is a good source of entropy.
For this, we analyze 19 public datasets of heart signals from the PhysioNet repository, spanning electrocardiograms from multiple subjects sampled at different frequencies and lengths. We then apply both the ENT and NIST STS standard batteries of randomness tests to the extracted IPIs. The results we obtain through this analysis clearly show that a short burst of bits derived from an ECG record may seem random, but large files derived from long ECG records should not be used for security purposes. In Chapter 3, we carry out an analysis to check whether the assumption that two different sensors can generate the same cryptographic token is reasonable. We systematically check whether two sensors can agree on the same token without sharing any type of information. Similarly to other proposals, we include Error-Correcting Code (ECC) algorithms such as BCH in the token generation. We conclude that a fuzzy extractor (or another error-correction technique) is not enough to correct the synchronization errors between the IPI values derived from two ECG signals captured via two sensors placed at different positions. We demonstrate that a pre-processing of the heart signal must be performed before the fuzzy extractor is applied. Going one step further, in order to generate the same token on different sensors, we propose a synchronization algorithm that includes a runtime monitor algorithm. After applying our proposed solution, we ran the experiments again with 19 public databases from the PhysioNet repository; the only constraint when picking those databases was that they needed at least two measurements of heart signals (ECG1 and ECG2). In conclusion, the same token can be derived on different sensors in most of the tested databases if and only if a pre-processing of the heart signal is performed before extracting the tokens.
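The role of error correction in making two sensors agree on a token can be sketched with a toy code. A simple repetition code stands in here for the BCH codes mentioned above (BCH corrects errors far more efficiently, but the majority-vote mechanics are easier to see); the token length, code rate, and error positions are illustrative assumptions:

```python
import numpy as np

def rep_encode(bits, r=3):
    """Repetition code: each token bit is repeated r times.

    Toy stand-in for the BCH codes used in fuzzy-extractor schemes.
    """
    return np.repeat(bits, r)

def rep_decode(bits, r=3):
    """Majority vote over each group of r received bits."""
    return (bits.reshape(-1, r).sum(axis=1) > r // 2).astype(int)

rng = np.random.default_rng(1)
token = rng.integers(0, 2, 16)       # token derived from sensor A's IPIs
code = rep_encode(token)

# Sensor B observes a noisy version: four isolated bit errors
# (at most one error per 3-bit group, so majority voting recovers all).
noisy = code.copy()
noisy[[0, 5, 10, 20]] ^= 1
recovered = rep_decode(noisy)
print(np.array_equal(recovered, token))  # True
```

The thesis's point is that this only works when errors are isolated bit flips; the misalignment between two independently sampled ECG streams produces burstier disagreement, which is why a pre-processing/synchronization step is needed before any such decoder.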
In Chapter 4, we analyze the entropy of the tokens extracted from a heart signal according to the NIST STS recommendation (i.e., SP 800-90B, Recommendation for the Entropy Sources Used for Random Bit Generation). We downloaded 19 databases from the PhysioNet public repository and analyzed, in terms of min-entropy, more than 160,000 files. Finally, we propose other combinations for extracting tokens, taking 2, 3, 4, and 5 bits different from the usual four LSBs. We also demonstrate that the four LSBs are not the best bits to use in cryptographic applications, and offer alternative combinations for two (e.g., 87), three (e.g., 638), four (e.g., 2638), and five (e.g., 23758) bits which are, in general, much better than taking the four LSBs from the entropy point of view. Finally, the last chapter of this dissertation (Chapter 5) summarizes the main conclusions arising from this PhD Thesis and introduces some open questions. Programa de Doctorado en Ciencia y Tecnología Informática, Universidad Carlos III de Madrid. Committee -- President: Arturo Ribagorda Garnacho; Secretary: Jorge Blasco Alis; Member: Jesús García López de la Call
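The four-LSB quantization scheme that the thesis re-examines can be sketched as below: take consecutive R-peak time differences (IPIs) and keep the least significant bits of each interval. The function name and the example timestamps are hypothetical; real schemes also apply signal conditioning before quantization:

```python
import numpy as np

def ipi_tokens(r_peak_times_ms, n_bits=4):
    """Derive a bit string from Inter-Pulse Intervals (IPIs).

    Computes consecutive R-peak time differences (in ms) and keeps the
    n_bits least significant bits of each interval, MSB first.
    """
    ipis = np.diff(np.asarray(r_peak_times_ms))
    bits = []
    for ipi in ipis:
        bits.extend((int(ipi) >> b) & 1 for b in reversed(range(n_bits)))
    return ''.join(map(str, bits))

# Hypothetical R-peak timestamps (ms) from an ECG record: four IPIs
# (812, 818, 811, 818 ms) yield a 16-bit token.
peaks = [0, 812, 1630, 2441, 3259]
token = ipi_tokens(peaks)
print(token)  # 1100001010110010
```

Because the high-order bits of an IPI are dominated by the (predictable) resting heart rate, only the low-order bits carry entropy; Chapter 4's contribution is showing that the conventional choice of exactly the four LSBs is not the best-performing bit combination.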

    A framework for continuous, transparent authentication on mobile devices

    Mobile devices have consistently advanced in terms of processing power, amount of memory and functionality. With these advances, the ability to store potentially private or sensitive information on them has increased. Traditional methods for securing mobile devices, passwords and PINs, are inadequate given their weaknesses and the bursty use patterns that characterize mobile devices. Passwords and PINs are often shared or weak secrets to ameliorate the memory load on device owners. Furthermore, they represent point-of-entry security, which provides access control but not authentication. Alternatives to these traditional methods have been suggested. Examples include graphical passwords, biometrics and sketched passwords, among others. These alternatives all have their place in an authentication toolbox, as do passwords and PINs, but do not respect the unique needs of the mobile device environment. This dissertation presents a continuous, transparent authentication method for mobile devices called the Transparent Authentication Framework. The Framework uses behavioral biometrics, which are patterns in how people perform actions, to verify the identity of the mobile device owner. It is transparent in that the biometrics are gathered in the background while the device is used normally, and is continuous in that verification takes place regularly. The Framework requires little effort from the device owner, goes beyond access control to provide authentication, and is acceptable and trustworthy to device owners, all while respecting the memory and processor limitations of the mobile device environment

    Handbook of Vascular Biometrics
