Visual identification by signature tracking
We propose a new camera-based biometric: visual signature identification. We discuss the importance of the parameterization of the signatures for achieving good classification results independently of variations in the position of the camera with respect to the writing surface, and show that affine arc-length parameterization performs better than the conventional time and Euclidean arc-length parameterizations. We find that the system's verification performance is better than 4 percent error on skilled forgeries and 1 percent error on random forgeries, and that its recognition error rate is better than 1 percent, comparable to the best camera-based biometrics.
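The affine arc-length idea can be sketched in a few lines. This is an illustrative re-implementation, not the paper's code, and it assumes a pen trajectory sampled at uniform time steps:

```python
# Sketch: re-parameterize a pen trajectory by Euclidean vs. affine
# arc length. The affine arc-length element ds = |x'y'' - y'x''|**(1/3) dt
# is invariant under affine maps of the plane, which is why it tolerates
# changes of camera viewpoint better than Euclidean arc length.

def euclidean_arclength(pts):
    """Cumulative Euclidean arc length of a polyline."""
    s = [0.0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        s.append(s[-1] + ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
    return s

def affine_arclength(pts):
    """Cumulative affine arc length via central finite differences
    (unit time step assumed between samples)."""
    s = [0.0]
    for i in range(1, len(pts) - 1):
        dx = (pts[i + 1][0] - pts[i - 1][0]) / 2.0
        dy = (pts[i + 1][1] - pts[i - 1][1]) / 2.0
        ddx = pts[i + 1][0] - 2 * pts[i][0] + pts[i - 1][0]
        ddy = pts[i + 1][1] - 2 * pts[i][1] + pts[i - 1][1]
        s.append(s[-1] + abs(dx * ddy - dy * ddx) ** (1.0 / 3.0))
    return s
```

Note that a straight stroke accumulates zero affine arc length (its curvature term vanishes), so the parameterization concentrates samples where the signature actually bends.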
Continuous Authentication for Voice Assistants
Voice has become an increasingly popular User Interaction (UI) channel,
mainly contributing to the ongoing trend of wearables, smart vehicles, and home
automation systems. Voice assistants such as Siri, Google Now and Cortana, have
become our everyday fixtures, especially in scenarios where touch interfaces
are inconvenient or even dangerous to use, such as driving or exercising.
Nevertheless, the open nature of the voice channel makes voice assistants
difficult to secure and exposed to various attacks as demonstrated by security
researchers. In this paper, we present VAuth, the first system that provides
continuous and usable authentication for voice assistants. We design VAuth to
fit in various widely-adopted wearable devices, such as eyeglasses,
earphones/buds and necklaces, where it collects the user's body-surface vibrations and matches them with the speech signal received by the voice assistant's microphone. VAuth guarantees that the voice assistant executes only
the commands that originate from the voice of the owner. We have evaluated
VAuth with 18 users and 30 voice commands and find it to achieve an almost
perfect matching accuracy with less than 0.1% false positive rate, regardless
of VAuth's position on the body and the user's language, accent or mobility.
VAuth successfully thwarts different practical attacks, such as replay attacks, mangled voice attacks, or impersonation attacks. It also has low
energy and latency overheads and is compatible with most existing voice
assistants.
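The core matching step can be illustrated with a toy sketch. VAuth's actual pipeline is more elaborate; the threshold and signal shapes below are invented for illustration:

```python
# Toy sketch of vibration-to-speech matching: accept a voice command
# only if the peak normalized cross-correlation between the wearable's
# body-surface vibration signal and the microphone signal clears a
# threshold. Both signals are assumed equal-length and time-aligned
# to within the searched lag range.

def normalized_xcorr_peak(a, b):
    """Peak normalized cross-correlation of two equal-length signals."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    a = [x - ma for x in a]
    b = [x - mb for x in b]
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    if na == 0.0 or nb == 0.0:
        return 0.0
    best = 0.0
    for lag in range(-n + 1, n):
        acc = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                acc += a[i] * b[j]
        best = max(best, acc / (na * nb))
    return best

def accept_command(vibration, mic, threshold=0.8):
    """Accept only commands whose mic audio correlates with body vibration."""
    return normalized_xcorr_peak(vibration, mic) >= threshold
```

A replayed recording reaches the microphone but produces no matching body-surface vibration, so the correlation peak stays low and the command is rejected.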
Fast computation of the performance evaluation of biometric systems: application to multibiometrics
The performance evaluation of biometric systems is a crucial step when
designing and evaluating such systems. The evaluation process uses the Equal
Error Rate (EER) metric proposed by the International Organization for
Standardization (ISO/IEC). The EER metric is a powerful metric which allows
easily comparing and evaluating biometric systems. However, the computation
time of the EER is, most of the time, very intensive. In this paper, we propose
a fast method that computes an approximated value of the EER. We illustrate the benefit of the proposed method in two applications: the computation of non-parametric confidence intervals and the use of genetic algorithms to compute the parameters of fusion functions. Experimental results show the superiority of the proposed EER approximation method in terms of computing time, and the benefit of using it to speed up parameter learning with genetic algorithms. The proposed method opens new perspectives for the development of secure multibiometric systems by speeding up their computation time. (Comment: Future Generation Computer Systems, 2012)
Performance Evaluation of Voice Classifier Algorithms for Voice Recognition Using Hidden Markov Model
This paper provides a performance evaluation of the K-means and Gaussian mixture voice classifier algorithms for voice recognition, using the differences in their recognition, training and testing times as the evaluation parameters. The results show the classification efficiency of the K-means and Gaussian mixture algorithms. For average training time, the K-means algorithm took 435.6854 s on the standard database and 411.4578 s on the local database, while the Gaussian mixture algorithm took 454.5678 s and 424.5673 s respectively. For average testing time, the K-means algorithm took 23.7178 s on both databases, while the Gaussian mixture algorithm took 25.1271 s on the standard database and 20.1271 s on the local database. For average recognition time, the K-means algorithm took 0.3388 s on both databases, while the Gaussian mixture algorithm took 0.4345 s on both. It can therefore be concluded that K-means is the better voice classifier for a voice recognition system, since it has the lower training and recognition times of the two algorithms.
Key Words: Evaluation, Classification, Efficiency, Algorithm, K-means algorithm, Gaussian mixture algorithm, Training, Speaker, Recognition
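The K-means side of such a comparison can be sketched with a minimal Lloyd's algorithm trained on toy 2-D "MFCC-like" feature vectors, timed the same way the paper times its classifiers. This is illustrative only; the paper's databases and feature pipeline are not reproduced here:

```python
# Minimal Lloyd's K-means on toy 2-D feature vectors, with training
# time measured via a wall-clock timer. Initialization is naive
# (first k points); a real system would use k-means++ or similar.
import time

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm; returns the final centroids."""
    centroids = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # recompute centroids; keep the old one if a cluster empties
        centroids = [
            tuple(sum(d) / len(c) for d in zip(*c)) if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids

features = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # "speaker A" frames
            (5.0, 5.1), (5.2, 4.9), (5.1, 5.0)]     # "speaker B" frames
t0 = time.perf_counter()
centroids = kmeans(features, k=2)
train_time = time.perf_counter() - t0
```

Timing the `kmeans` call this way mirrors the paper's "average training time" measurement; a Gaussian mixture trainer would be timed identically for the comparison.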
Enhancing Usability, Security, and Performance in Mobile Computing
We have witnessed the prevalence of smart devices in every aspect of human life. However, the ever-growing number of smart devices presents significant challenges in terms of usability, security, and performance. First, we need to design new interfaces to improve device usability, which has been neglected during the rapid shift from hand-held mobile devices to wearables. Second, we need to protect smart devices with abundant private data against unauthorized users. Last, new applications with compute-intensive tasks demand the integration of emerging mobile backend infrastructure. This dissertation focuses on addressing these challenges. First, we present GlassGesture, a system that improves the usability of Google Glass through a head gesture user interface with gesture recognition and authentication. We accelerate the recognition by employing a novel similarity search scheme, and improve the authentication performance by applying new features of head movements in an ensemble learning method. As a result, GlassGesture achieves 96% gesture recognition accuracy. Furthermore, GlassGesture accepts authorized users in nearly 92% of trials, and rejects attackers in nearly 99% of trials. Next, we investigate the authentication between a smartphone and a paired smartwatch. We design and implement WearLock, a system that utilizes one's smartwatch to unlock one's smartphone via acoustic tones. We build an acoustic modem with sub-channel selection and adaptive modulation, which generates modulated acoustic signals to maximize the unlocking success rate against ambient noise. We leverage the motion similarities of the devices to eliminate unnecessary unlocking. We also offload heavy computation tasks from the smartwatch to the smartphone to shorten response time and save energy. The acoustic modem achieves a low bit error rate (BER) of 8%.
Compared to traditional manual personal identification number (PIN) entry, WearLock not only automates the unlocking but also speeds it up by at least 18%. Last, we consider low-latency video analytics on mobile devices, leveraging emerging mobile backend infrastructure. We design and implement LAVEA, a system which offloads computation from mobile clients to edge nodes, to accomplish tasks with intensive computation at places closer to users in a timely manner. We formulate an optimization problem for offloading task selection and prioritize offloading requests received at the edge node to minimize the response time. We design and compare various task placement schemes for inter-edge collaboration to further improve the overall response time. Our results show that the client-edge configuration has a speedup ranging from 1.3x to 4x against running solely on the client, and 1.2x to 1.7x against the client-cloud configuration.
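The client-vs-edge trade-off behind LAVEA-style speedups can be illustrated with a simple completion-time model. The function and its parameters are a hypothetical sketch, not the dissertation's actual optimization formulation:

```python
# Sketch of the basic offloading decision: run a task locally, or
# ship its input to an edge node? Offloading wins when the upload
# time plus the (faster) edge compute time beats local compute time.
# Queueing at the edge and result-download time are ignored here.

def offload_decision(input_bytes, local_ops_per_s, edge_ops_per_s,
                     task_ops, uplink_bytes_per_s):
    """Return 'edge' when transfer + edge compute beats local compute."""
    local_time = task_ops / local_ops_per_s
    edge_time = (input_bytes / uplink_bytes_per_s
                 + task_ops / edge_ops_per_s)
    return "edge" if edge_time < local_time else "local"
```

Compute-heavy tasks with small inputs (e.g. video analytics on a single frame) favor the edge; input-heavy tasks on a slow uplink stay local.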
Evaluation and analysis of hybrid intelligent pattern recognition techniques for speaker identification
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The rapid momentum of technological progress in recent years has led to a tremendous rise in the use of biometric authentication systems. The objective of this research is to investigate the problem of identifying a speaker from their voice regardless of the content (i.e. text-independent), and to design efficient methods of combining face and voice to produce a robust authentication system.
A novel approach to speaker identification is developed using wavelet analysis and multiple neural networks, including the Probabilistic Neural Network (PNN), General Regression Neural Network (GRNN) and Radial Basis Function Neural Network (RBF NN), with an AND voting scheme. This approach is tested on the GRID and VidTIMIT corpora, and the comprehensive test results have been validated against state-of-the-art approaches. The system was found to be competitive: it improved the recognition rate by 15% compared to the classical Mel-frequency Cepstral Coefficients (MFCC), and reduced the recognition time by 40% compared to the Back Propagation Neural Network (BPNN), Gaussian Mixture Models (GMM) and Principal Component Analysis (PCA).
Another novel approach using vowel formant analysis is implemented using Linear Discriminant Analysis (LDA). Vowel formant based speaker identification is well suited to real-time implementation and requires only a few bytes of information to be stored for each speaker, making it both storage and time efficient. Tested on GRID and VidTIMIT, the proposed scheme was found to be 85.05% accurate when Linear Predictive Coding (LPC) is used to extract the vowel formants, much higher than the accuracy of BPNN and GMM. Since the proposed scheme does not require any training time other than creating a small database of vowel formants, it is faster as well. Furthermore, an increasing number of speakers makes it difficult for BPNN and GMM to sustain their accuracy, but the proposed score-based methodology stays almost linear.
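The storage-light identification step described above can be sketched as a nearest-profile search over a tiny formant database. All enrolled values and names here are invented for illustration; the thesis's LDA-based scoring is not reproduced:

```python
# Sketch: each enrolled speaker is reduced to a pair of average vowel
# formant frequencies (F1, F2 in Hz) for a shared vowel, and a test
# utterance is identified by the closest enrolled profile. This is
# why the scheme needs only a few bytes per speaker and no training.

SPEAKER_FORMANTS = {
    "alice": (310.0, 2020.0),   # illustrative average /i/ formants
    "bob":   (360.0, 1850.0),
    "carol": (290.0, 2240.0),
}

def identify(f1, f2):
    """Return the enrolled speaker whose (F1, F2) profile is nearest."""
    def dist(item):
        _, (e1, e2) = item
        return (f1 - e1) ** 2 + (f2 - e2) ** 2
    return min(SPEAKER_FORMANTS.items(), key=dist)[0]
```

Because lookup cost grows only with the number of enrolled profiles, this is consistent with the claim that the score-based method scales almost linearly with the speaker count.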
Finally, a novel audio-visual fusion based identification system is implemented using GMM and MFCC for speaker identification and PCA for face recognition. The results of speaker identification and face recognition are fused at different levels, namely the feature, score and decision levels. Both the score-level and decision-level (with OR voting) fusions were shown to outperform feature-level fusion in terms of accuracy and error resilience. This result is in line with the distinct nature of the two modalities, which is lost when they are combined at the feature level. The GRID and VidTIMIT test results validate that the proposed scheme is one of the best candidates for the fusion of face and voice due to its low computational time and high recognition accuracy.
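The two fusion rules compared above can be sketched as follows. The weight and threshold are illustrative placeholders, not the thesis's tuned values:

```python
# Score-level fusion combines the raw per-modality match scores
# before thresholding; decision-level fusion with OR voting combines
# the per-modality accept/reject decisions afterwards.

def score_fusion(voice_score, face_score, w_voice=0.6, threshold=0.5):
    """Score-level fusion: weighted sum of per-modality match scores."""
    fused = w_voice * voice_score + (1.0 - w_voice) * face_score
    return fused >= threshold

def decision_fusion_or(voice_accept, face_accept):
    """Decision-level fusion with OR voting: accept if either accepts."""
    return voice_accept or face_accept
```

Score-level fusion can rescue a genuine user whose face score is poor but whose voice score is strong; OR voting trades some false-accept risk for resilience when one modality fails outright.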
Identification of Age Voiceprint Using Machine Learning Algorithms
The voice is considered a biometric trait since we can extract information from the speech signal that allows us to identify the person speaking in a specific recording. Fingerprints, iris, DNA, or speech can be used in biometric systems, with speech being the most intuitive, basic, and easy-to-capture characteristic. Speech-based services are widely used in the banking and mobile sectors, although these services do not employ voice recognition to identify consumers. As a result, the possibility of using these services under a fake name is always there. To reduce the possibility of fraudulent identification, voice-based recognition systems must be designed. In this research, Mel Frequency Cepstral Coefficient (MFCC) features were extracted from the collected voice samples to train five different machine learning algorithms, namely, the decision tree, random forest (RF), support vector machine (SVM), k-nearest neighbor (k-NN), and multi-layer perceptron (MLP). Accuracy, precision, recall, specificity, and F1 score were used as classification performance metrics to compare these algorithms. According to the findings of the study, the MLP approach had a high classification accuracy of 91%. In addition, RF appears to perform better on the other metrics. This finding demonstrates how these classification algorithms may assist voice-based biometric systems.
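The five reported metrics follow directly from one-vs-rest confusion counts. A minimal reference sketch (not the study's code):

```python
# Accuracy, precision, recall, specificity, and F1 score computed
# from binary (one-vs-rest) confusion-matrix counts: true positives,
# false positives, true negatives, and false negatives.

def metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}
```

For a multi-class speaker task, these counts are taken per class and then averaged, which is why precision/recall rankings (where RF does well) can differ from the plain accuracy ranking (where MLP leads).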