
    Evaluating the Sensitivity of Face Presentation Attack Detection Techniques to Images of Varying Resolutions

    Get PDF
    In the last decades, emerging techniques for face Presentation Attack Detection (PAD) have reported remarkable performance in detecting attack presentations whose attack type and capture conditions are known a priori. However, the generalisation capability of PAD approaches deteriorates considerably when detecting unknown attacks. To tackle these generalisation issues, several PAD techniques have focused on detecting features shared across known attacks in order to identify unknown Presentation Attack Instruments, without taking into account how intrinsic image properties such as image resolution or biometric quality could impact detection performance. In this work, we carry out a thorough analysis of the sensitivity of several texture descriptors, which shows how training on images of varying resolutions leads to a considerable decrease in attack detection performance.
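The abstract's specific descriptors are not reproduced here, but the effect it studies can be illustrated with a basic Local Binary Pattern (LBP) histogram, one of the classic texture features used in face PAD. The sketch below is illustrative only (the image, the downsampling scheme, and the drift measure are assumptions, not the paper's protocol): it computes the LBP descriptor of the same image at two resolutions and measures how far the descriptor drifts.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern histogram of a 2-D grayscale image."""
    c = img[1:-1, 1:-1]  # centre pixels
    # 8 neighbours, clockwise from top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neigh >= c).astype(np.uint8) << bit)  # set bit if neighbour >= centre
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised 256-bin descriptor

rng = np.random.default_rng(0)
full = rng.integers(0, 256, size=(64, 64)).astype(float)
low = full[::2, ::2]  # naive 2x downsampling stands in for a lower-resolution capture

h_full, h_low = lbp_histogram(full), lbp_histogram(low)
drift = 0.5 * np.abs(h_full - h_low).sum()  # total-variation distance between descriptors
print(f"descriptor drift across resolutions: {drift:.3f}")
```

A classifier trained on descriptors from one resolution sees shifted histograms at another, which is the kind of mismatch the paper quantifies.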

    Learning Domain Invariant Information to Enhance Presentation Attack Detection in Visible Face Recognition Systems

    Get PDF
    Face signatures, including size, shape, texture, skin tone, eye color, appearance, and scars/marks, are widely used as discriminative, biometric information for access control. Despite recent advancements in facial recognition systems, presentation attacks on facial recognition systems have become increasingly sophisticated. The ability to detect presentation attacks or spoofing attempts is a pressing concern for the integrity, security, and trust of facial recognition systems. Multi-spectral imaging has been previously introduced as a way to improve presentation attack detection by utilizing sensors that are sensitive to different regions of the electromagnetic spectrum (e.g., visible, near infrared, long-wave infrared). Although multi-spectral presentation attack detection systems may be discriminative, the need for additional sensors and computational resources substantially increases complexity and costs. Instead, we propose a method that exploits information from infrared imagery during training to increase the discriminability of visible-based presentation attack detection systems. We introduce (1) a new cross-domain presentation attack detection framework that increases the separability of bonafide and presentation attacks using only visible spectrum imagery, (2) an inverse domain regularization technique for added training stability when optimizing our cross-domain presentation attack detection framework, and (3) a dense domain adaptation subnetwork to transform representations between visible and non-visible domains. Adviser: Benjamin Rigga
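The thesis's actual loss functions are not given in the abstract. The toy objective below (every name, the penalty form, and the weighting are illustrative assumptions, not the thesis's inverse domain regulariser) only shows the general shape of such a cross-domain objective: a bona fide vs. attack classification loss plus a term that pulls visible and infrared feature statistics together, so infrared information shapes the visible-spectrum representation during training.

```python
import numpy as np

def pad_objective(scores, labels, feat_vis, feat_ir, lam=0.1):
    """Binary cross-entropy on visible-spectrum PAD scores plus a simple
    mean-feature alignment penalty between visible and infrared batches.
    A generic stand-in for a 'task loss + cross-domain regulariser' design."""
    p = 1.0 / (1.0 + np.exp(-scores))  # sigmoid
    bce = -np.mean(labels * np.log(p + 1e-9) + (1 - labels) * np.log(1 - p + 1e-9))
    gap = feat_vis.mean(axis=0) - feat_ir.mean(axis=0)  # per-dimension statistic gap
    return bce + lam * float(gap @ gap)

rng = np.random.default_rng(0)
scores = rng.normal(size=16)                   # raw classifier outputs
labels = rng.integers(0, 2, size=16)           # 1 = bona fide, 0 = attack
feat_vis = rng.normal(0.0, 1.0, size=(16, 8))  # visible-domain embeddings
feat_ir = rng.normal(0.5, 1.0, size=(16, 8))   # infrared-domain embeddings
loss = pad_objective(scores, labels, feat_vis, feat_ir)
print(f"combined loss: {loss:.3f}")
```

When the two domains' feature statistics match, the penalty vanishes and only the classification loss remains, which is the behaviour an alignment regulariser is meant to drive toward.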

    Evading Classifiers by Morphing in the Dark

    Full text link
    Learning-based systems have been shown to be vulnerable to evasion through adversarial data manipulation. These attacks have been studied under assumptions that the adversary has some knowledge of the target model's internals, its training dataset, or at least the classification scores it assigns to input samples. In this paper, we investigate a much more constrained and realistic attack scenario in which the target classifier is minimally exposed to the adversary, revealing only its final classification decision (e.g., reject or accept an input sample). Moreover, the adversary can only manipulate malicious samples using a blackbox morpher. That is, the adversary has to evade the target classifier by morphing malicious samples "in the dark". We present a scoring mechanism that assigns each sample a real-valued score reflecting evasion progress, based on the limited information available. Leveraging this scoring mechanism, we propose an evasion method -- EvadeHC -- and evaluate it against two PDF malware detectors, namely PDFRate and Hidost. The experimental evaluation demonstrates that the proposed evasion attacks are effective, attaining a 100% evasion rate on the evaluation dataset. Interestingly, EvadeHC outperforms the known classifier evasion technique that operates on classification scores output by the classifiers. Although our evaluations are conducted on PDF malware classifiers, the proposed approaches are domain-agnostic and of wider applicability to other learning-based systems.
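EvadeHC's actual scoring mechanism is derived purely from detection flips under morphing and is richer than this summary states. The miniature below only illustrates the overall hill-climbing-in-the-dark loop; the detector, the morpher, and the progress proxy are all invented for illustration, and the proxy cheats by peeking at an internal feature sum, which a real black-box attacker could not do.

```python
import numpy as np

rng = np.random.default_rng(1)

def detector(x):
    """Black-box stand-in: flags a sample when its 'suspicious' feature mass is
    high. Only this binary decision is visible to the attacker."""
    return x.sum() > 5.0  # True = rejected (detected as malicious)

def morph(x):
    """Black-box morpher stand-in: a small random edit the attacker cannot steer."""
    return np.clip(x + rng.normal(0, 0.3, size=x.shape), 0, None)

# Hill climbing in the dark: repeatedly morph, keep a candidate only when it
# makes progress under the (here transparent) proxy score.
x = np.full(8, 1.0)  # starts rejected: feature mass 8 > 5
steps = 0
while detector(x) and steps < 10_000:
    cand = morph(x)
    if cand.sum() < x.sum():  # proxy for EvadeHC's flip-distance-based score
        x = cand
    steps += 1
print("evaded" if not detector(x) else "still detected", "after", steps, "steps")
```

The essential point the paper makes is that even this binary-feedback setting leaks enough signal to define a usable score and climb toward the decision boundary.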

    Boosting Face Presentation Attack Detection in Multi-Spectral Videos Through Score Fusion of Wavelet Partition Images

    Get PDF
    Presentation attack detection (PAD) algorithms have become an integral requirement for the secure usage of face recognition systems. As face recognition algorithms and applications move from constrained to unconstrained environments and into multispectral scenarios, presentation attack detection algorithms must also extend their scope and effectiveness. It is important that PAD algorithms not only be effective for one environment or condition but generalize to the multitude of variabilities presented to a face recognition algorithm. With this motivation, as the first contribution, the article presents a unified PAD algorithm for different kinds of attacks such as printed photos, video replays, 3D masks, silicone masks, and wax faces. The proposed algorithm utilizes a combination of wavelet-decomposed raw input images from the sensor and face region data to detect whether the input image is bona fide or an attack. The second contribution of the article is the collection of a large presentation attack database in the NIR spectrum, containing images from individuals of two ethnicities. The database contains 500 print attack videos which comprise approximately 100,000 frames collectively in the NIR spectrum. Extensive evaluation of the algorithm on NIR images as well as visible spectrum images obtained from existing benchmark databases shows that the proposed algorithm yields state-of-the-art results and surpasses several complex state-of-the-art algorithms. For instance, on the benchmark datasets CASIA-FASD, Replay-Attack, and MSU-MFSD, the proposed algorithm achieves a maximum error of 0.92%, which is significantly lower than state-of-the-art attack detection algorithms.
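The article's exact wavelet family and per-band classifiers are not specified in this summary. The sketch below assumes a one-level 2-D Haar decomposition and a made-up per-band scorer, just to show the structure the title describes: decompose the image into sub-band partition images, score each band, and fuse the scores.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition of an even-sized grayscale image into
    approximation (LL) and horizontal/vertical/diagonal detail bands."""
    a = img[0::2, :] + img[1::2, :]  # row-pair sums
    d = img[0::2, :] - img[1::2, :]  # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return ll, lh, hl, hh

def fused_score(img, band_scorer):
    """Sum-rule (mean) fusion of per-band scores, in the late-fusion spirit of
    the article; `band_scorer` is a stand-in for a trained per-band classifier."""
    return float(np.mean([band_scorer(band) for band in haar2d(img)]))

# Illustrative scorer, not from the article: mean absolute response of a band.
energy = lambda band: float(np.mean(np.abs(band)))

rng = np.random.default_rng(3)
img = rng.random((8, 8))
score = fused_score(img, energy)
print(f"fused score (illustrative): {score:.3f}")
```

In the article's setting each sub-band image would feed a trained detector, and the fused score would be thresholded into a bona fide/attack decision.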

    FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments

    Full text link
    As advances in Deep Neural Networks (DNNs) demonstrate unprecedented levels of performance in many critical applications, their vulnerability to attacks remains an open question. We consider test-time evasion attacks against Deep Learning models in constrained environments, in which dependencies between features need to be satisfied. These situations may arise naturally in tabular data or may be the result of feature engineering in specific application domains, such as threat detection. We propose a general iterative gradient-based framework called FENCE for crafting evasion attacks that take into consideration the specifics of constrained domains. We apply it against Feed-Forward Neural Networks in two threat detection applications, network traffic botnet classification and malicious domain classification, to generate feasible adversarial examples. We extensively evaluate the success rate and performance of our attacks, demonstrate their significant improvement over several baselines, and analyze several factors that impact the attack success rate, including the optimization objective and the data imbalance. We show that with minimal effort (e.g., generating 12 additional network connections), an attacker can change the model's prediction to the target one. We found that models trained on datasets with higher imbalance are more vulnerable to our FENCE attacks. Finally, we show the potential of adversarial training in constrained domains to increase DNN resilience against these attacks.
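FENCE itself is a general framework; the miniature below shows only the core loop the abstract describes, with a toy logistic detector, an invented feature dependency, and an arbitrary step size (none of these come from the paper): take a gradient step that lowers the malicious score, then project the example back onto the constraints so it stays feasible.

```python
import numpy as np

# Toy logistic "detector" over four flow features; flags traffic when p > 0.5.
w = np.array([0.8, -0.2, 0.5, 0.3])
b = -0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def project(x):
    """Re-impose a toy dependency after each step: features are non-negative
    and x[2] equals x[0] + x[1] (e.g. total = inbound + outbound packets).
    FENCE's real projection handles far richer constraint families."""
    x = np.clip(x, 0.0, None)
    x[2] = x[0] + x[1]
    return x

x = project(np.array([2.0, 1.0, 0.0, 1.0]))  # feasible starting point
p_start = sigmoid(w @ x + b)
for _ in range(50):
    p = sigmoid(w @ x + b)
    grad = p * (1.0 - p) * w      # gradient of the detection probability w.r.t. x
    x = project(x - 0.5 * grad)   # descend the score, then restore feasibility
p_end = sigmoid(w @ x + b)
print(f"detection probability: {p_start:.3f} -> {p_end:.3f}")
```

The projection step is what distinguishes the constrained setting: without it, gradient steps would produce flows whose feature combinations cannot occur in real traffic.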

    Modeling Deception for Cyber Security

    Get PDF
    In the era of software-intensive, smart and connected systems, the growing power and sophistication of cyber attacks poses increasing challenges to software security. The reactive posture of traditional security mechanisms, such as anti-virus and intrusion detection systems, has not been sufficient to combat the wide range of advanced persistent threats that currently jeopardize systems operation. To mitigate these threats, more active defensive approaches are necessary. Such approaches rely on the concept of actively hindering and deceiving attackers. Deceptive techniques provide an additional line of defense by thwarting attackers' advances through the manipulation of their perceptions. Manipulation is achieved through the use of deceitful responses, feints, misdirection, and other falsehoods in a system. Of course, such deception mechanisms may result in side effects that must be handled. Current methods for planning deception chiefly attempt to bridge military deception to cyber deception, providing only high-level instructions that largely ignore deception as part of the software security development life cycle. Consequently, little practical guidance is provided on how to engineer deception-based techniques for defense. This PhD thesis contributes a systematic approach to specifying and designing cyber deception requirements, tactics, and strategies. This deception approach consists of (i) multi-paradigm modeling for representing deception requirements, tactics, and strategies, (ii) a reference architecture to support the integration of deception strategies into system operation, and (iii) a method to guide engineers in deception modeling. A tool prototype, a case study, and an experimental evaluation show encouraging results for the application of the approach in practice. Finally, a conceptual coverage mapping was developed to assess the expressivity of the deception modeling language created.

    Handbook of Digital Face Manipulation and Detection

    Get PDF
    This open access book provides the first comprehensive collection of studies dealing with the hot topic of digital face manipulation, such as DeepFakes, face morphing, and reenactment. It combines the research fields of biometrics and media forensics, including contributions from academia and industry. Appealing to a broad readership, introductory chapters provide a comprehensive overview of the topic, addressing readers who wish to gain a brief overview of the state of the art. Subsequent chapters, which delve deeper into various research challenges, are oriented towards advanced readers. Moreover, the book provides a good starting point for young researchers as well as a reference guide pointing to further literature. Hence, its primary readership is academic institutions and industry currently involved in digital face manipulation and detection. The book could easily be used as a recommended text for courses in image processing, machine learning, media forensics, biometrics, and the general security area.

    Network anomaly detection using adversarial Deep Learning

    Get PDF
    Integrated master's dissertation in Informatics Engineering. Computer network security is becoming an important and challenging topic. In particular, we currently witness increasingly complex attacks, which are bound to become even more sophisticated with the advent of artificial intelligence technologies. Intrusion detection systems are a crucial component of network security. However, the limited number of publicly available network datasets and their poor traffic variety and attack diversity are a major stumbling block in the proper development of these systems. To overcome such difficulties and thereby maximise the detection of anomalies in the network, we propose the use of Adversarial Deep Learning techniques to increase the amount and variety of existing data and, simultaneously, to improve the learning ability of the classification models used for anomaly detection. The main goal of this master's dissertation is the development of a system capable of improving network anomaly detection through Adversarial Deep Learning techniques, in particular Generative Adversarial Networks. With this in mind, a state-of-the-art analysis and a review of existing solutions were first conducted. Subsequently, a modular solution was built to learn from imbalanced datasets, with applications not only in the field of network anomaly detection but also in any area affected by imbalanced data problems. Finally, the feasibility of the developed system was demonstrated through its application to a network flow dataset.
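The dissertation's GAN architecture is not described in this summary. The sketch below only wires up the surrounding idea, augmenting the minority (attack) class until the dataset is balanced; a Gaussian sampler fitted to the minority class stands in for a trained GAN generator, and all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def balance_with_synthetic(X_major, X_minor, sample_synthetic):
    """Augment the minority (attack) class with synthetic flows until both
    classes are the same size. `sample_synthetic(n)` stands in for a trained
    GAN generator; only the balancing step is wired up here."""
    n_extra = len(X_major) - len(X_minor)
    X_synth = sample_synthetic(n_extra)
    X = np.vstack([X_major, X_minor, X_synth])
    y = np.concatenate([np.zeros(len(X_major)), np.ones(len(X_minor) + n_extra)])
    return X, y

# Toy network-flow features: benign traffic dominates the capture.
X_major = rng.normal(0.0, 1.0, size=(900, 4))  # benign flows
X_minor = rng.normal(3.0, 0.5, size=(100, 4))  # attack flows

# Stand-in generator: a Gaussian fitted to the minority class. A generator
# trained adversarially on X_minor would be plugged in here instead.
mu, sd = X_minor.mean(axis=0), X_minor.std(axis=0)
sample_synthetic = lambda n: rng.normal(mu, sd, size=(n, 4))

X, y = balance_with_synthetic(X_major, X_minor, sample_synthetic)
print("class counts:", int((y == 0).sum()), int((y == 1).sum()))  # → class counts: 900 900
```

The anomaly classifier is then trained on the balanced `(X, y)` instead of the skewed original, which is the imbalance-mitigation role the dissertation assigns to the GAN.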
