
    A Behavioral Model System for Implicit Mobile Authentication

    Smartphones are increasingly essential to users’ everyday lives. Security concerns over data compromise are growing, and explicit authentication methods are proving inconvenient and insufficient, while users demand quicker and more secure authentication. To address this, a user can be authenticated continuously and implicitly by modeling the consistency of their behavior. This research project develops a Behavioral Model System (BMS) that records users’ behavioral metrics on an Android device and sends them to a server, where a behavioral model of the user is built. Once a strong model is generated with TensorFlow, the user’s most recent behavior is queried against the model to authenticate them. The model is tested across its metrics to evaluate the reliability of BMS.
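
    As a rough illustration of the query step described above, the sketch below scores a window of recent behavioral metrics against a trained owner-vs-imposter model. The feature count, network shape, and decision threshold are assumptions for illustration, not the thesis' actual BMS design.

```python
# Minimal sketch of the BMS authentication query, assuming the server has
# already trained a binary "owner vs. imposter" model on behavioral metrics.
import numpy as np
import tensorflow as tf

NUM_METRICS = 8  # hypothetical count: touch pressure, swipe speed, etc.

# Stand-in for the TensorFlow model trained server-side on the user's history.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_METRICS,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(behavior is the owner's)
])

def authenticate(recent_metrics: np.ndarray, threshold: float = 0.8) -> bool:
    """Query the most recent behavior window against the behavioral model."""
    score = float(model.predict(recent_metrics[None, :], verbose=0)[0, 0])
    return score >= threshold

# Example: one window of recent behavioral metrics collected on the device.
window = np.random.rand(NUM_METRICS).astype("float32")
print("authenticated:", authenticate(window))
```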

    Mobile Device Background Sensors: Authentication vs Privacy

    The increasing number of mobile devices in recent years has led to the collection of a large amount of personal information that needs to be protected. To this aim, behavioural biometrics has become very popular. But what is the discriminative power of mobile behavioural biometrics in real scenarios? With the success of Deep Learning (DL), architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM), have shown improvements over traditional machine learning methods. However, these DL architectures still have limitations that need to be addressed. In response, new DL architectures like Transformers have emerged. The question is, can these new Transformers outperform previous biometric approaches? To answer these questions, this thesis focuses on behavioural biometric authentication with data acquired from mobile background sensors (i.e., accelerometers and gyroscopes). In addition, to the best of our knowledge, this is the first thesis that explores and proposes novel behavioural biometric systems based on Transformers, achieving state-of-the-art results in gait, swipe, and keystroke biometrics. The adoption of biometrics requires a balance between security and privacy. Biometric modalities provide a unique and inherently personal approach to authentication. Nevertheless, biometrics also give rise to concerns regarding the invasion of personal privacy. According to the General Data Protection Regulation (GDPR) introduced by the European Union, personal data such as biometric data are sensitive and must be used and protected properly. This thesis analyses the impact of sensitive data on the performance of biometric systems and proposes a novel unsupervised privacy-preserving approach. The research conducted in this thesis makes significant contributions, including: i) a comprehensive review of the privacy vulnerabilities of mobile device sensors, covering metrics for quantifying privacy in relation to sensitive data, along with protection methods for safeguarding sensitive information; ii) an analysis of authentication systems for behavioural biometrics on mobile devices (i.e., gait, swipe, and keystroke), this being the first thesis to explore the potential of Transformers for behavioural biometrics and to introduce novel architectures that outperform the state of the art; and iii) a novel privacy-preserving approach for mobile biometric gait verification using unsupervised learning techniques, ensuring the protection of sensitive data during the verification process.
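
    For a concrete sense of what a Transformer over background-sensor streams can look like, here is a minimal Keras sketch: six input channels (3-axis accelerometer plus 3-axis gyroscope), one self-attention block, and a pooled embedding usable for verification. All sizes and layer choices are assumptions for illustration, not the architectures proposed in the thesis.

```python
# Illustrative Transformer encoder over mobile background-sensor sequences.
import numpy as np
import tensorflow as tf

SEQ_LEN, CHANNELS, D_MODEL = 100, 6, 64  # assumed window and model sizes

def sinusoidal_positions(seq_len, d_model):
    """Fixed sinusoidal positional encoding (Vaswani et al., 2017)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    enc = np.zeros((seq_len, d_model), dtype="float32")
    enc[:, 0::2] = np.sin(angle[:, 0::2])
    enc[:, 1::2] = np.cos(angle[:, 1::2])
    return enc

inputs = tf.keras.Input(shape=(SEQ_LEN, CHANNELS))
x = tf.keras.layers.Dense(D_MODEL)(inputs)         # project each sensor frame
x = x + sinusoidal_positions(SEQ_LEN, D_MODEL)     # inject temporal order
# One self-attention block with residual connections; real systems stack several.
attn = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
x = tf.keras.layers.LayerNormalization()(x + attn)
ff = tf.keras.layers.Dense(D_MODEL, activation="relu")(x)
x = tf.keras.layers.LayerNormalization()(x + ff)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
embedding = tf.keras.layers.Dense(32)(x)           # user embedding for verification
model = tf.keras.Model(inputs, embedding)
model.summary()
```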

    Novel Bidirectional Body-Machine Interface to Control Upper Limb Prosthesis

    Objective. The journey of a bionic prosthetic user is characterized by the opportunities and limitations involved in adopting a device (the prosthesis) that should enable activities of daily living (ADL). Within this context, experiencing a bionic hand as a functional (and, possibly, embodied) limb is the premise for mitigating the risk of its abandonment through continuous use of the device. To achieve such a result, different aspects must be considered to make the artificial limb an effective support for carrying out ADLs. Among them, intuitive and robust control is fundamental to improving the quality of life of amputees using upper limb prostheses. Still, as artificial proprioception is essential to perceive the prosthesis movement without constant visual attention, a good control framework may not be enough to restore practical functionality to the limb. To overcome this, bidirectional communication between the user and the prosthesis has recently been introduced and is a requirement of utmost importance in developing prosthetic hands. Indeed, closing the control loop between the user and a prosthesis by providing artificial sensory feedback is a fundamental step towards the complete restoration of the lost sensory-motor functions. Within my PhD work, I proposed the development of a more controllable and sensitive human-like hand prosthesis, i.e., the Hannes prosthetic hand, to improve its usability and effectiveness. Approach. To achieve the objectives of this thesis work, I developed a modular and scalable software and firmware architecture to control the Hannes prosthetic multi-Degree-of-Freedom (DoF) system and to fit all users’ needs (hand aperture, wrist rotation, and wrist flexion in different combinations). On top of this, I developed several Pattern Recognition (PR) algorithms to translate electromyographic (EMG) activity into complex movements. However, stability and repeatability were still unmet requirements in multi-DoF upper limb systems; hence, I started by investigating different strategies to produce a more robust control. To do this, EMG signals were collected from trans-radial amputees using an array of up to six sensors placed over the skin. Secondly, I developed a vibrotactile system to implement haptic feedback, restoring proprioception and creating a bidirectional connection between the user and the prosthesis. Similarly, I implemented object stiffness detection to restore a tactile sensation that connects the user with the external world. This closed-loop control between EMG and vibration feedback is essential to implementing a Bidirectional Body-Machine Interface that strongly impacts amputees’ daily lives. For each of these three activities, (i) implementation of robust pattern recognition control algorithms, (ii) restoration of proprioception, and (iii) restoration of the feeling of the grasped object's stiffness, I performed a study in which data from healthy subjects and amputees were collected in order to demonstrate the efficacy and usability of my implementations. In each study, I evaluated both the algorithms and the subjects’ ability to use the prosthesis by means of the F1Score parameter (offline) and the Target Achievement Control (TAC) test (online). With this test, I analyzed the error rate, path efficiency, and time efficiency in completing different tasks. Main results. 
Among the several tested methods for Pattern Recognition, Non-Linear Logistic Regression (NLR) proved to be the best algorithm in terms of F1Score (99%, robustness), and the minimum number of electrodes needed for its functioning was determined to be four in the offline analyses. Further, I demonstrated that its low computational burden allowed its implementation and integration on a microcontroller running at a sampling frequency of 300 Hz (efficiency). Finally, the online implementation allowed the subject to simultaneously control the DoFs of the Hannes prosthesis in a bioinspired, human-like way. In addition, I performed further tests with the same NLR-based control by endowing it with closed-loop proprioceptive feedback. In this scenario, the TAC test yielded an error rate of 15% and a path efficiency of 60% in experiments where no other sources of information were available (no visual and no audio feedback). Such results demonstrated an improvement in the controllability of the system with an impact on user experience. Significance. The obtained results confirmed the hypothesis that the robustness and efficiency of prosthetic control improve thanks to the implemented closed-loop approach. The bidirectional communication between the user and the prosthesis is capable of restoring lost sensory functionality, with promising implications for direct translation into clinical practice.
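
    The thesis' exact NLR implementation is not reproduced here, but the general pattern it names, nonlinear feature expansion followed by logistic regression evaluated with the F1 score, can be sketched as follows. The data are synthetic, and the four assumed EMG feature channels and three gesture classes are placeholders.

```python
# Illustrative nonlinear logistic regression for EMG pattern recognition:
# polynomial feature expansion + multinomial logistic regression.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Assumed setup: RMS features from 4 EMG electrodes, 3 gesture classes.
X = rng.random((300, 4))
y = rng.integers(0, 3, size=300)

clf = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2),  # nonlinear expansion of the EMG features
    LogisticRegression(max_iter=1000),
)
clf.fit(X[:200], y[:200])
print("macro F1:", f1_score(y[200:], clf.predict(X[200:]), average="macro"))
```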

    Adaptive User Authentication on Mobile Devices

    Modern mobile devices allow users to access various applications and services anywhere. However, high mobility also exposes mobile devices to device loss, unauthorized access, and many other risks. Existing studies have proposed a variety of explicit authentication (EA) and implicit authentication (IA) mechanisms to secure sensitive personal and corporate data on mobile devices. Considering the limitations of these mechanisms under different circumstances, we expect that future authentication systems will be able to dynamically determine when and how to authenticate users based on the current context, which is called adaptive authentication. This thesis investigates adaptive authentication from the perspectives of context sensing techniques, authentication and access control adaptations, and adaptation modeling. First, we investigate the smartphone loss scenario. Context sensing is critical for triggering immediate device locking with re-authentication and an alert to the owner before they leave without the phone. We propose Chaperone, an active acoustic sensing-based solution to detect a user's departure from the device. It is designed to robustly provide a user's proximity and motion contexts in real-world scenarios characterized by bursting high-frequency noise, bustling crowds, and diverse environmental layouts. Extensive evaluations at a variety of real-world locations have shown that Chaperone has high accuracy and low detection latency under various conditions. Second, we investigate temporary device sharing as a special scenario of adaptive authentication. We propose device sharing awareness (DSA), a new sharing-protection approach for temporarily shared mobile devices. DSA exploits natural handover gestures and behavioral biometrics as contextual factors to transparently enable and disable a device's sharing mode without requiring explicit input from the device owner. It also supports various access control strategies to fulfill sharing requirements imposed by an app. Our user study has shown the effectiveness of handover detection and demonstrated how DSA automatically processes sharing events to provide a secure sharing environment. Third, we investigate the adaptation of an IA system to shared mobile devices to reject imposters and distinguish between legitimate users in real time. We propose a multi-user IA solution that incorporates multiple modalities and supports adding new users and automatically labeling new incoming data for model updating. Our solution adopts a score fusion strategy based on Dempster-Shafer (D-S) theory to improve accuracy while accounting for uncertainties among different IA mechanisms. We also provide an evaluation framework to support IA researchers in the evaluation of multi-user, multi-modal IA systems. We present two sample use cases to showcase how our framework helps address practical design questions of multi-user IA systems. Fourth, we investigate a high-level organization of different adaptation policies in an adaptive authentication system. We design and build a multi-stage risk-aware adaptive authentication and access control framework (MRAAC). MRAAC organizes adaptation policies in multiple stages to handle various scenarios and progressively adapts authentication mechanisms based on context, resource sensitivity, and user authenticity. 
We present three use cases to show how MRAAC enables various stakeholders (device manufacturers, enterprise, and secure app developers) to provide adaptive authentication workflows on COTS Android with low processing and battery overhead. In conclusion, this thesis fills the gaps in adaptive authentication systems for shared mobile devices and in adaptation models for authentication and access control. Our frameworks and implementations also help researchers and developers build and evaluate their adaptive authentication systems efficiently.
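
    The D-S score fusion described above lends itself to a small generic sketch: each IA modality emits a mass function over the frame {legitimate, imposter}, with residual uncertainty assigned to the whole frame, and Dempster's rule combines them. The mass values below are illustrative, not those of the proposed system.

```python
# Dempster's rule of combination for two implicit-authentication modalities.
from itertools import product

def ds_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions keyed by frozensets over the frame.

    Mass on the full frame frozenset({'legit', 'imposter'}) represents
    a modality's uncertainty; conflicting mass is normalized away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

FRAME = frozenset({"legit", "imposter"})
# Hypothetical per-modality mass assignments (belief in legit, in imposter,
# and residual uncertainty on the whole frame).
gait      = {frozenset({"legit"}): 0.6, frozenset({"imposter"}): 0.1, FRAME: 0.3}
keystroke = {frozenset({"legit"}): 0.5, frozenset({"imposter"}): 0.2, FRAME: 0.3}
print(ds_combine(gait, keystroke))
```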

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether and what interaction to perform.
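
    As a loose illustration of the fusion idea (assumptions throughout, not the dissertation's actual pipeline), acoustic and tactile features captured during a touch on a passive surface can be concatenated and fed to a single classifier. Feature dimensions, event classes, and data below are synthetic.

```python
# Early fusion of acoustic and tactile features for input-event classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
acoustic = rng.random((200, 13))         # e.g., MFCC-like features (assumed)
tactile = rng.random((200, 6))           # e.g., impact accelerometer features (assumed)
events = rng.integers(0, 4, size=200)    # tap / knock / swipe / drag (assumed)

X = np.hstack([acoustic, tactile])       # early fusion by concatenation
clf = RandomForestClassifier(n_estimators=100).fit(X[:150], events[:150])
print("event accuracy:", clf.score(X[150:], events[150:]))
```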

    Towards streaming gesture recognition

    The emergence of low-cost sensors allows more devices to be equipped with various types of sensors. Mobile devices such as smartphones or smartwatches now commonly contain accelerometers, gyroscopes, etc. This offers new possibilities for interacting with the environment, and there are clear benefits to exploiting these sensors. As a consequence, the literature on gesture recognition systems that employ such sensors has grown considerably. The literature on online gesture recognition includes many methods based on Dynamic Time Warping (DTW). However, this method has been shown to be ineffective for time series from inertial measurement units, as they contain a lot of noise. New methods based on the LCSS (Longest Common SubSequence) were therefore introduced. Nevertheless, none of them focuses on a per-class optimization process. In this master's thesis, we present and evaluate a new algorithm for online gesture recognition in noisy streams. The technique relies upon the LM-WLCSS (Limited Memory and Warping LCSS) algorithm, which has demonstrated its efficiency for gesture recognition. The new method involves a quantization step (via the K-Means clustering algorithm) that transforms incoming data into a finite symbol set, so that each new sample can be compared to several templates (one per class). Gestures whose matching score falls below a previously trained rejection threshold are rejected. Thereafter, an algorithm called SearchMax finds a local maximum within a sliding window and outputs whether or not the gesture has been recognized. To resolve conflicts that may occur, another classifier (i.e., C4.5) is chained. As the K-Means clustering algorithm needs to be initialized with the number of clusters to create, we also introduce a straightforward optimization process; this step also optimizes the window size for the SearchMax algorithm. To demonstrate the robustness of our algorithm, an experiment was performed over two different data sets. However, results on the tested data sets are only accurate when training data are used as test data, which suggests the method is overfitting.
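
    The pipeline above lends itself to a compact sketch: K-Means quantization of inertial samples into symbols, a per-class template match (a plain LCSS below, standing in for LM-WLCSS), and a SearchMax-style search for a local score maximum compared against a rejection threshold. Data, window size, and threshold are assumed, and the conflict-resolving C4.5 stage is omitted.

```python
# Simplified streaming gesture recognition: quantize, match, SearchMax.
import numpy as np
from sklearn.cluster import KMeans

def lcss_score(seq, template):
    """Length of the longest common subsequence of two symbol sequences."""
    dp = np.zeros((len(seq) + 1, len(template) + 1), dtype=int)
    for i, s in enumerate(seq, 1):
        for j, t in enumerate(template, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if s == t else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

rng = np.random.default_rng(0)
train = rng.random((500, 3))                    # synthetic 3-axis inertial samples
kmeans = KMeans(n_clusters=8, n_init=10).fit(train)

template = kmeans.predict(rng.random((20, 3)))  # quantized template for one class
stream = kmeans.predict(rng.random((200, 3)))   # quantized incoming stream

WINDOW, THRESHOLD = 20, 12                      # assumed, tuned per class in practice
scores = [lcss_score(stream[i:i + WINDOW], template)
          for i in range(len(stream) - WINDOW)]
# SearchMax-style decision: local score maximum above the rejection threshold.
best = int(np.argmax(scores))
if scores[best] >= THRESHOLD:
    print(f"gesture recognized at sample {best} (score {scores[best]})")
else:
    print("no gesture recognized")
```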

    Towards gestural understanding for intelligent robots

    Fritsch JN. Towards gestural understanding for intelligent robots. Bielefeld: UniversitĂ€t Bielefeld; 2012. A strong driving force of scientific progress in the technical sciences is the quest for systems that assist humans in their daily life and make their life easier and more enjoyable. Nowadays smartphones are probably the most typical instances of such systems. Another class of systems that is getting increasing attention is intelligent robots. Instead of offering a smartphone touch screen to select actions, these systems are intended to offer a more natural human-machine interface to their users. Out of the large range of actions performed by humans, gestures performed with the hands play a very important role, especially when humans interact with their direct surroundings, e.g., pointing to an object or manipulating it. Consequently, a robot has to understand such gestures to offer an intuitive interface. Gestural understanding is, therefore, a key capability on the way to intelligent robots. This book deals with vision-based approaches for gestural understanding. Over the past two decades, this has been an intensive field of research which has resulted in a variety of algorithms to analyze human hand motions. Following a categorization of different gesture types and a review of other sensing techniques, the design of vision systems that achieve hand gesture understanding for intelligent robots is analyzed. For each of the individual algorithmic steps – hand detection, hand tracking, and trajectory-based gesture recognition – a separate Chapter introduces common techniques and algorithms and provides example methods; a minimal sketch of the last step follows below. The resulting recognition algorithms consider gestures in isolation and are often not sufficient for interacting with a robot, which can only understand such gestures by incorporating context, e.g., what object was pointed at or manipulated. Going beyond purely trajectory-based gesture recognition by incorporating context is an important prerequisite for gesture understanding and is addressed explicitly in a separate Chapter of this book. Two types of context, user-provided context and situational context, are distinguished, and existing approaches to incorporating context for gestural understanding are reviewed. Example approaches for both context types provide a deeper algorithmic insight into this field of research. An overview of recent robots capable of gesture recognition and understanding summarizes the currently realized human-robot interaction quality. The approaches for gesture understanding covered in this book are manually designed, while humans learn to recognize gestures automatically while growing up. Promising research targeted at analyzing developmental learning in children in order to mimic this capability in technical systems is highlighted in the last Chapter of this book, as this research direction may be highly influential for creating future gesture understanding systems.
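
    As a minimal illustration of trajectory-based gesture recognition (one generic approach of the kind the book surveys, not its specific method): resample a tracked hand trajectory to a fixed length, normalize it, and match it against stored templates by nearest neighbor. The templates and observed trajectory below are hypothetical.

```python
# Nearest-neighbor matching of normalized 2-D hand trajectories.
import numpy as np

def normalize(traj, n=32):
    """Resample a (T, 2) hand trajectory to n points, zero-center, unit-scale."""
    t = np.linspace(0, 1, len(traj))
    ti = np.linspace(0, 1, n)
    res = np.column_stack([np.interp(ti, t, traj[:, d]) for d in range(2)])
    res -= res.mean(axis=0)
    return res / (np.abs(res).max() + 1e-9)

def classify(traj, templates):
    """Return the label of the closest template trajectory."""
    q = normalize(traj)
    return min(templates, key=lambda lbl: np.linalg.norm(q - normalize(templates[lbl])))

# Hypothetical templates from hand tracking: a point-right and a wave gesture.
templates = {
    "point": np.column_stack([np.linspace(0, 1, 50), np.zeros(50)]),
    "wave": np.column_stack([np.linspace(0, 1, 50), np.sin(np.linspace(0, 6, 50))]),
}
observed = np.column_stack([np.linspace(0, 1, 40), 0.05 * np.ones(40)])
print(classify(observed, templates))  # -> "point"
```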
