Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Learning-based pattern classifiers, including deep networks, have shown
impressive performance in several application domains, ranging from computer
vision to cybersecurity. However, it has also been shown that adversarial input
perturbations carefully crafted either at training or at test time can easily
subvert their predictions. The vulnerability of machine learning to such wild
patterns (also referred to as adversarial examples), along with the design of
suitable countermeasures, has been investigated in the research field of
adversarial machine learning. In this work, we provide a thorough overview of
the evolution of this research area over the last ten years and beyond,
starting from pioneering, earlier work on the security of non-deep-learning
algorithms up to more recent work aimed at understanding the security properties
of deep-learning algorithms, in the context of computer vision and
cybersecurity tasks. We report interesting connections between these
apparently different lines of work, highlighting common misconceptions related
to the security evaluation of machine-learning algorithms. We review the main
threat models and attacks defined to this end, and discuss the main limitations
of current work, along with the corresponding future challenges towards the
design of more secure learning algorithms.
Comment: Accepted for publication in Pattern Recognition, 201
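To make the test-time attacks surveyed above concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard way to craft adversarial examples against a differentiable classifier. The toy logistic-regression model, parameter names, and epsilon value are illustrative assumptions, not details from the paper:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps=0.1):
    """Craft an adversarial example for a toy logistic-regression
    classifier via the fast gradient sign method (FGSM).
    x: input vector; w, b: model weights; y: true label in {0, 1}.
    The perturbation is bounded by eps in the L-infinity norm."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid prediction
    grad_x = (p - y) * w              # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)  # one signed gradient step
```

Each input feature moves by exactly eps in the direction that increases the loss, which is why even visually imperceptible perturbations can flip the prediction of an otherwise accurate classifier.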
High Performance Technology in Algorithmic Cryptography
Alan Turing's article "Computing Machinery and Intelligence" opens with the question of how to guess whether one is interacting with a machine or another human being. Currently, ubiquitous technologies such as firmware allow direct access to analog data; however, we must find a way to secure this information. This work analyzes cryptographic algorithms for the transfer of multimedia information and proposes the use of cryptarithmetic. Finite automata will be developed to govern the logic of the cryptographic algorithms to be integrated into firmware, and performance tests and controls will be carried out to determine the best strategies for their performance and algorithmic complexity. Technologies that enable the creation of learning environments, such as neural networks, are also discussed insofar as they support other processes such as the recognition of patterns in images.
AVATAR: Robust Voice Search Engine Leveraging Autoregressive Document Retrieval and Contrastive Learning
Voice input has progressively become popular on mobile devices and seems poised
to almost entirely supplant text input. Through voice, a voice search (VS)
system can provide a more natural way to meet users' information needs.
However, errors from the automatic speech recognition (ASR) system can be
catastrophic to the VS system. Recent lightweight autoregressive retrieval
models have the potential to be deployed on mobile devices, enabling a more
secure and personal VS assistant. Building on such models, this paper
presents a novel study of VS leveraging autoregressive retrieval and tackles
a crucial problem facing VS, viz. the performance drop caused by ASR noise,
via data augmentation and contrastive learning, showing how explicitly and
implicitly modeling the noise patterns can alleviate the problem. A series of
experiments conducted on the Open-Domain Spoken Question Answering (ODSQA)
dataset confirms our approach's effectiveness and robustness relative to
several strong baseline systems.
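To make the contrastive objective concrete, here is a minimal sketch of an InfoNCE-style loss that pulls each clean-query embedding toward its ASR-noisy counterpart and pushes it away from the other queries in the batch. This is a generic formulation under our own assumptions (batch layout, temperature value), not the paper's exact implementation:

```python
import numpy as np

def info_nce(clean, noisy, tau=0.1):
    """InfoNCE-style contrastive loss.  clean, noisy: (batch, dim)
    arrays of L2-normalized embeddings; row i of `noisy` is the
    positive (ASR-noisy) counterpart of row i of `clean`."""
    sim = clean @ noisy.T / tau                # scaled cosine similarities
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))             # positives lie on the diagonal
```

Minimizing this loss makes embeddings of a query and its noisy transcript nearly identical, so retrieval degrades gracefully under ASR errors.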
Few-Shot User-Definable Radar-Based Hand Gesture Recognition at the Edge
This work was supported in part by ITEA3 Unleash Potentials in Simulation (UPSIM) by the German Federal Ministry of Education and Research (BMBF) under Project 19006, in part by the Austrian Research Promotion Agency (FFG), in part by the Rijksdienst voor Ondernemend Nederland (Rvo), and in part by the Innovation Fund Denmark (IFD).

Technological advances and scalability are leading Human-Computer Interaction (HCI) to evolve towards intuitive forms, such as gesture recognition. Among the various interaction strategies, radar-based recognition is emerging as a touchless, privacy-secure, and versatile solution across different environmental conditions. Classical radar-based gesture HCI solutions involve deep learning but require training on large and varied datasets to achieve robust prediction. Innovative self-learning algorithms can help tackle this problem by recognizing patterns and adapting from similar contexts. Yet, such approaches are often computationally expensive and hard to integrate into hardware-constrained solutions. In this paper, we present a gesture recognition algorithm that is easily adaptable to new users and contexts. We exploit an optimization-based meta-learning approach to enable gesture recognition in learning sequences. This method aims to learn the best possible initialization of the model parameters, simplifying training on new contexts when only small amounts of data are available. The reduction in computational cost is achieved by processing the radar-sensed gesture data in the form of time maps, minimizing the input data size. This approach enables the adaptation of a simple convolutional neural network (CNN) to new hand poses, thus easing the integration of the model into a hardware-constrained platform. Moreover, the use of a variational autoencoder (VAE) to reduce the gestures' dimensionality decreases the model size by an order of magnitude and halves the required adaptation time.
The proposed framework, deployed on the Intel(R) Neural Compute Stick 2 (NCS 2), leads to an average accuracy of around 84% for unseen gestures when only one example per class is utilized at training time. The accuracy increases up to 92.6% and 94.2% when three and five samples per class are used, respectively.
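The core idea of learning a good initialization can be sketched with Reptile, a simple first-order relative of the optimization-based meta-learning family the abstract describes. The toy 1-D regression tasks, learning rates, and slope distribution below are our own illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_steps(w, xs, ys, lr=0.05, steps=10):
    """Inner loop: a few gradient steps on one task's data
    (least-squares loss for a scalar linear model y = w * x)."""
    for _ in range(steps):
        grad = np.mean(2.0 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

# Reptile-style outer loop: nudge the shared initialization toward
# each task-adapted solution, so that a handful of inner steps
# suffices when a new task (user/context) arrives with little data.
w_init = 0.0
for _ in range(200):
    slope = rng.uniform(1.0, 3.0)      # each task: a random true slope
    xs = rng.uniform(-1.0, 1.0, 20)
    ys = slope * xs
    w_task = sgd_steps(w_init, xs, ys)
    w_init += 0.1 * (w_task - w_init)  # meta-update of the initialization
```

After meta-training, `w_init` sits near the center of the task distribution, which is exactly what makes few-shot adaptation to a new user cheap.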
Privacy-Preserving Facial Recognition Using Biometric-Capsules
Indiana University-Purdue University Indianapolis (IUPUI)

In recent years, developers have used the proliferation of biometric sensors in smart devices, along with recent advances in deep learning, to implement an array of biometrics-based recognition systems. Though these systems demonstrate remarkable performance and have seen wide acceptance, they present unique and pressing security and privacy concerns. One proposed method which addresses these concerns is the elegant, fusion-based Biometric-Capsule (BC) scheme. The BC scheme is provably secure, privacy-preserving, cancellable, and interoperable in its secure feature fusion design.
In this work, we demonstrate that the BC scheme is uniquely fit to secure state-of-the-art facial verification, authentication, and identification systems. We compare the performance of unsecured, underlying biometrics systems to the performance of the BC-embedded systems in order to directly demonstrate the minimal effects of the privacy-preserving BC scheme on underlying system performance. Notably, we demonstrate that, when seamlessly embedded into state-of-the-art FaceNet and ArcFace verification systems which achieve accuracies of 97.18% and 99.75% on the benchmark LFW dataset, the BC-embedded systems are able to achieve accuracies of 95.13% and 99.13%, respectively. Furthermore, we also demonstrate that the BC scheme outperforms or performs as well as several other proposed secure biometric methods.
Implicit Smartphone User Authentication with Sensors and Contextual Machine Learning
Authentication of smartphone users is important because a lot of sensitive
data is stored in the smartphone and the smartphone is also used to access
various cloud data and services. However, smartphones are easily stolen or
co-opted by an attacker. Beyond the initial login, it is highly desirable to
re-authenticate end-users who are continuing to access security-critical
services and data. Hence, this paper proposes a novel authentication system for
implicit, continuous authentication of the smartphone user based on behavioral
characteristics, by leveraging the sensors already ubiquitously built into
smartphones. We propose novel context-based authentication models to
differentiate the legitimate smartphone owner versus other users. We
systematically show how to achieve high authentication accuracy with different
design alternatives in sensor and feature selection, machine learning
techniques, context detection and multiple devices. Our system can achieve
excellent authentication performance with 98.1% accuracy with negligible system
overhead and less than 2.4% battery consumption.
Comment: Published at the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) 2017. arXiv admin note: substantial text overlap with arXiv:1703.0352
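As a toy illustration of the kind of sensor-feature pipeline such behavioral authentication systems build on, the sketch below extracts simple statistics from windows of a hypothetical accelerometer-magnitude stream, enrolls an owner profile, and flags windows that drift too far from it. The feature set, nearest-centroid thresholding rule, and all names are our own simplifications, not the paper's context-based models:

```python
import numpy as np

def window_features(accel, win=50):
    """Split a 1-D accelerometer-magnitude stream into fixed-size
    windows and extract simple behavioral features per window:
    mean, standard deviation, and peak value."""
    n = len(accel) // win
    w = accel[: n * win].reshape(n, win)
    return np.stack([w.mean(axis=1), w.std(axis=1), w.max(axis=1)], axis=1)

class CentroidAuthenticator:
    """Toy continuous authenticator: enroll the owner's feature
    centroid, then accept only windows whose distance to it stays
    below a threshold calibrated on the enrollment data."""

    def enroll(self, feats, k=3.0):
        self.mu = feats.mean(axis=0)
        d = np.linalg.norm(feats - self.mu, axis=1)
        self.thresh = d.mean() + k * d.std()  # mean + k sigma rule

    def is_owner(self, feat):
        return np.linalg.norm(feat - self.mu) <= self.thresh
```

A real implicit-authentication system would combine many sensors, richer features, and learned context-dependent models, but the enroll-then-monitor loop above is the shared skeleton.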