
    Mobile Device Background Sensors: Authentication vs Privacy

    The increasing number of mobile devices in recent years has led to the collection of a large amount of personal information that needs to be protected. To this aim, behavioural biometrics has become very popular. But what is the discriminative power of mobile behavioural biometrics in real scenarios? With the success of Deep Learning (DL), architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM), have shown improvements compared to traditional machine learning methods. However, these DL architectures still have limitations that need to be addressed. In response, new DL architectures like Transformers have emerged. The question is, can these new Transformers outperform previous biometric approaches? To answer these questions, this thesis focuses on behavioural biometric authentication with data acquired from mobile background sensors (i.e., accelerometers and gyroscopes). In addition, to the best of our knowledge, this is the first thesis that explores and proposes novel behavioural biometric systems based on Transformers, achieving state-of-the-art results in gait, swipe, and keystroke biometrics. The adoption of biometrics requires a balance between security and privacy. Biometric modalities provide a unique and inherently personal approach for authentication. Nevertheless, biometrics also give rise to concerns regarding the invasion of personal privacy. According to the General Data Protection Regulation (GDPR) introduced by the European Union, personal data such as biometric data are sensitive and must be used and protected properly. This thesis analyses the impact of sensitive data on the performance of biometric systems and proposes a novel unsupervised privacy-preserving approach. The research conducted in this thesis makes significant contributions, including: i) a comprehensive review of the privacy vulnerabilities of mobile device sensors, covering metrics for quantifying privacy in relation to sensitive data, along with protection methods for safeguarding sensitive information; ii) an analysis of authentication systems for behavioural biometrics on mobile devices (i.e., gait, swipe, and keystroke), being the first thesis that explores the potential of Transformers for behavioural biometrics, introducing novel architectures that outperform the state of the art; and iii) a novel privacy-preserving approach for mobile biometric gait verification using unsupervised learning techniques, ensuring the protection of sensitive data during the verification process.
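
    As a concrete illustration of the thesis's central ingredient, the sketch below shows how a Transformer encoder can turn an accelerometer/gyroscope window into a fixed-size embedding that is compared between an enrolment sample and a probe sample. This is a minimal, hypothetical PyTorch model, not the thesis's actual architecture; the window length, layer sizes, omission of positional encoding, and cosine-similarity verification step are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IMUTransformerEmbedder(nn.Module):
    def __init__(self, n_channels=6, d_model=64, n_heads=4, n_layers=2, emb_dim=32):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)      # per-timestep projection
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, emb_dim)         # fixed-size embedding

    def forward(self, x):          # x: (batch, time, 6) = 3-axis accel + 3-axis gyro
        h = self.encoder(self.proj(x))                  # positional encoding omitted for brevity
        return self.head(h.mean(dim=1))                 # mean-pool over time

# Verification: compare embeddings of an enrolment window and a probe window.
model = IMUTransformerEmbedder()
enrol, probe = torch.randn(1, 100, 6), torch.randn(1, 100, 6)
score = torch.cosine_similarity(model(enrol), model(probe))  # higher = same subject
```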

    Learning-Based Ubiquitous Sensing For Solving Real-World Problems

    Recently, as the Internet of Things (IoT) technology has become smaller and cheaper, ubiquitous sensing ability within these devices has become increasingly accessible. Learning methods have also become more complex in the field of computer science accordingly. However, there remains a gap between these learning approaches and many problems in other disciplinary fields. In this dissertation, I investigate four different learning-based studies via ubiquitous sensing for solving real-world problems, such as in IoT security, athletics, and healthcare. First, I designed an online intrusion detection system for IoT devices via power auditing. To realize the real-time system, I created a lightweight power auditing device. With this device, I developed a distributed Convolutional Neural Network (CNN) for online inference. I demonstrated that the distributed system design is secure, lightweight, accurate, real-time, and scalable. Furthermore, I characterized potential Information-stealer attacks via power auditing. To defend against this potential exfiltration attack, a prototype system was built on top of the botnet detection system. In a testbed environment, I defined and deployed an IoT Information-stealer attack. Then, I designed a detection classifier. Altogether, the proposed system is able to identify malicious behavior on endpoint IoT devices via power auditing. Next, I enhanced athletic performance via ubiquitous sensing and machine learning techniques. I first designed a metric called LAX-Score to quantify a collegiate lacrosse team’s athletic performance. To derive this metric, I utilized feature selection and weighted regression. Then, the proposed metric was statistically validated on over 700 games from the last three seasons of NCAA Division I women’s lacrosse. I also examined the biometric sensing dataset obtained from a collegiate team’s athletes over the course of a season. I then identified the practice features that are most correlated with high-performance games. Experimental results indicate that LAX-Score provides insight into athletic performance quality beyond wins and losses. Finally, I studied the data of patients with Parkinson’s Disease. I secured the Inertial Measurement Unit (IMU) sensing data of 30 patients while they conducted pre-defined activities. Using this dataset, I measured tremor events during drawing activities for more convenient tremor screening. Our preliminary analysis demonstrates that IMU sensing data can identify potential tremor events in daily drawing or writing activities. For future work, deep learning-based techniques will be used to extract features of the tremor in real-time. Overall, I designed and applied learning-based methods across different fields to solve real-world problems. The results show that combining learning methods with domain knowledge enables the formation of solutions.
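
    A minimal sketch of the tremor-screening idea in the final study: Parkinsonian rest tremor typically concentrates around 4-6 Hz, so band-passing the IMU signal and thresholding its short-window energy flags candidate tremor events. The sampling rate, band edges, and threshold below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0                                   # assumed IMU sampling rate (Hz)
b, a = butter(4, [4.0, 6.0], btype="bandpass", fs=FS)

def tremor_windows(gyro, win=1.0, thresh=0.05):
    """Return indices of 1 s windows whose 4-6 Hz RMS exceeds the threshold."""
    banded = filtfilt(b, a, gyro)            # isolate the tremor band
    n = int(win * FS)
    rms = np.array([np.sqrt(np.mean(banded[i:i + n] ** 2))
                    for i in range(0, len(banded) - n + 1, n)])
    return np.nonzero(rms > thresh)[0]

# Synthetic check: a 5 Hz oscillation buried in noise is flagged.
t = np.arange(0, 10, 1 / FS)
sig = 0.02 * np.random.randn(t.size)
sig[300:700] += 0.2 * np.sin(2 * np.pi * 5 * t[300:700])
print(tremor_windows(sig))                   # windows overlapping samples 300-700
```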

    WearPut : Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, the commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas, including video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction due to the inevitably limited space for input and output imposed by the specialized form factors that fit the body parts. To complement the constrained interaction experience, many wearable devices still rely on other large form factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices for interaction can constrain the viability of wearable devices in many usage scenarios by tethering users' hands to the physical devices. This thesis argues that developing novel Human-Computer Interaction techniques for the specialized wearable form factors is vital for wearables to be reliable standalone products. This thesis seeks to address the issue of constrained interaction experience with novel interaction techniques by exploring finger motions during input for the specialized form factors of wearable devices. Several characteristics of finger input motions are promising for increasing the expressiveness of input on the physically limited input space of wearable devices. First, finger-based input techniques are prevalent on many large form factor devices (e.g., touchscreens or physical keyboards) due to their fast and accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand tracking system) to detect finger motions, which enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices can create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, this thesis explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed specific comfort ranges across variations in fingers, finger regions, and poses, due to the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Then, finger region-aware systems that recognize the flat and the side of the finger were constructed based on the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.
In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which enables increases in the expressiveness of touch input techniques. In addition, this thesis supports the general claim with a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, this thesis investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The results of the observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems with finger stroking. Lastly, this thesis examined the viable locations of in-air thumb touch input to virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touch can be achieved at a total of 8 key locations with the built-in hand tracking system of a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results and novel interaction techniques in various wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, this thesis concludes with thesis contributions, design considerations, and the scope of future research, for future researchers and developers to implement robust finger-based interaction systems on various types of wearable devices.
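
    The touch-based finger identification described above can be illustrated with a toy sketch: if contact features differ systematically between fingers, a lightweight classifier can separate them. The (contact_area, aspect_ratio) feature pair and the synthetic values below are illustrative assumptions, not the thesis's actual feature set or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Fake (contact_area, aspect_ratio) samples: thumb contacts assumed larger and
# rounder, index-finger contacts smaller and more elongated.
thumb = rng.normal([60.0, 1.1], [8.0, 0.10], size=(200, 2))
index = rng.normal([35.0, 1.5], [6.0, 0.15], size=(200, 2))
X = np.vstack([thumb, index])
y = np.array([0] * 200 + [1] * 200)          # 0 = thumb, 1 = index

clf = LogisticRegression().fit(X, y)
print(clf.predict([[55.0, 1.15]]))           # -> [0]: classified as thumb
```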

    SwipeFormer: Transformers for mobile touchscreen biometrics

    The growing number of mobile devices over the past few years has brought with it a large amount of personal information that needs to be properly protected. As a result, several mobile authentication methods have been developed. In particular, behavioural biometrics has become one of the most relevant methods due to its ability to extract the uniqueness of each subject in a secure, non-intrusive, and continuous way. This article presents SwipeFormer, a novel Transformer-based system for mobile subject authentication by means of swipe gestures in an unconstrained scenario (i.e., subjects could use their personal devices freely, without restrictions on the direction of swipe gestures or the position of the device). Our proposed system contains two modules: (i) a Transformer-based feature extractor, and (ii) a similarity computation module. Mobile data from the touchscreen and different background sensors (accelerometer and gyroscope) have been studied, covering both the Android and iOS operating systems. A complete analysis of SwipeFormer is carried out using an in-house large-scale database acquired in unconstrained scenarios. Under these operational conditions, SwipeFormer achieves Equal Error Rate (EER) values of 6.6% and 3.6% on Android and iOS, respectively, outperforming the state of the art. In addition, we evaluate SwipeFormer on the popular publicly available databases Frank DB and HuMIdb, achieving EER values of 11.0% and 5.0%, respectively, outperforming previous approaches under the same experimental setup.
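
    The reported Equal Error Rate is a standard verification metric: sweep a decision threshold over the comparison scores and take the operating point where the false acceptance rate equals the false rejection rate. A minimal sketch, with placeholder score distributions standing in for a system's genuine/impostor comparisons:

```python
import numpy as np

def eer(genuine, impostor):
    """EER from genuine (same-user) and impostor (different-user) scores."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))                              # crossing point
    return (far[i] + frr[i]) / 2

genuine = np.random.normal(0.8, 0.10, 1000)   # higher score = more similar
impostor = np.random.normal(0.4, 0.15, 1000)
print(f"EER = {eer(genuine, impostor):.1%}")
```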

    WiFi Sensing at the Edge: Towards Scalable On-Device Wireless Sensing Systems

    WiFi sensing offers a powerful method for tracking physical activities using the radio-frequency signals already found throughout our homes and offices. This novel sensing modality offers continuous and non-intrusive activity tracking, since sensing can be performed (i) without requiring wearable sensors, (ii) outside the line-of-sight, and even (iii) through walls. Furthermore, WiFi has become a ubiquitous technology in our computers, our smartphones, and even in low-cost Internet of Things devices. In this work, we consider how the ubiquity of these low-cost WiFi devices offers an unparalleled opportunity for improving the scalability of wireless sensing systems. Thus far, WiFi sensing research has assumed costly offline computing resources and hardware for training machine learning models and for performing model inference. To improve the scalability of WiFi sensing systems, this dissertation introduces techniques for improving machine learning at the edge, thoroughly surveying and evaluating signal preprocessing and edge machine learning methods. Additionally, we introduce the use of federated learning for collaboratively training machine learning models with WiFi data only available on edge devices. We then consider the privacy and security concerns of WiFi sensing by demonstrating possible adversarial surveillance attacks. To combat these attacks, we propose a method for leveraging spatially distributed antennas to prevent eavesdroppers from performing adversarial surveillance while still enabling, and even improving, the sensing capabilities of allowed WiFi sensing devices within our environments. The overall goal throughout this work is to demonstrate that WiFi sensing can become a ubiquitous and secure sensing option through the use of on-device computation on low-cost edge devices.
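
    The collaborative training mentioned above typically builds on federated averaging (FedAvg): each edge device trains on its local WiFi data, and only model weights leave the device, aggregated with weights proportional to local sample counts. A minimal sketch, with a dict-of-arrays model format as an illustrative assumption:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Average per-client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {k: sum(w[k] * (n / total)
                   for w, n in zip(client_weights, client_sizes))
            for k in keys}

# Two edge devices with the same two-parameter model but different data sizes.
w1 = {"layer": np.array([1.0, 2.0]), "bias": np.array([0.5])}
w2 = {"layer": np.array([3.0, 4.0]), "bias": np.array([1.5])}
print(fedavg([w1, w2], client_sizes=[100, 300]))
# -> layer [2.5 3.5], bias [1.25]: the larger client contributes more
```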

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without demanding that the monitored subject carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework for 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models. Also, the experiments show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
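
    The named ABiLSTM architecture pairs a bidirectional LSTM over the CSI time series with a learned attention layer that weights timesteps before classification. The sketch below is one plausible reading of that design in PyTorch, not the authors' code; the 90-subcarrier input and layer sizes are illustrative assumptions (the 12 output classes match the paper's activity count).

```python
import torch
import torch.nn as nn

class ABiLSTM(nn.Module):
    def __init__(self, n_subcarriers=90, hidden=128, n_classes=12):
        super().__init__()
        self.lstm = nn.LSTM(n_subcarriers, hidden, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # one score per timestep
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                # x: (batch, time, subcarriers)
        h, _ = self.lstm(x)              # h: (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)      # attention over time
        context = (w * h).sum(dim=1)     # attention-weighted sum of timesteps
        return self.fc(context)          # logits for the 12 activities

logits = ABiLSTM()(torch.randn(8, 200, 90))  # 8 windows, 200 steps, 90 subcarriers
print(logits.shape)                          # torch.Size([8, 12])
```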

    Digital writing technologies in higher education : theory, research, and practice

    This open access book serves as a comprehensive guide to digital writing technology, featuring contributions from over 20 renowned researchers from various disciplines around the world. The book is designed to provide a state-of-the-art synthesis of developments in digital writing in higher education, making it an essential resource for anyone interested in this rapidly evolving field. In the first part of the book, the authors offer an overview of the impact that digitalization has had on writing, covering more than 25 key technological innovations and their implications for writing practices and pedagogical uses. Drawing on these chapters, the second part of the book explores the theoretical underpinnings of digital writing technology, such as writing and learning, writing quality, formulation support, writing and thinking, and writing processes. The authors provide insightful analysis of the impact of these developments and offer valuable insights into the future of writing. Overall, this book provides a cohesive and consistent theoretical view of the new realities of digital writing, complementing existing literature on the digitalization of writing. It is a valuable resource for scholars, educators, and practitioners interested in the intersection of technology and writing.

    Investigating Real-time Touchless Hand Interaction and Machine Learning Agents in Immersive Learning Environments

    The recent surge in the adoption of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world. eXtended Reality (XR), with its potential to bridge the virtual and real environments, creates new possibilities to develop more engaging and productive learning experiences. Evidence is emerging that this sophisticated technology offers new ways to improve the learning process for better student interaction and engagement. Recently, immersive technology has garnered much attention as an interactive technology that facilitates direct interaction with virtual objects in the real world. Furthermore, these virtual objects can be surrogates for real-world teaching resources, allowing for virtual labs. Thus, XR could enable learning experiences that would not be possible in impoverished educational systems worldwide. Interestingly, concepts such as virtual hand interaction and techniques such as machine learning are still not widely investigated in immersive learning. Hand interaction technologies in virtual environments can support the kinesthetic learning pedagogical approach, and the need for their touchless nature has increased exceptionally in the post-COVID world. By implementing and evaluating real-time hand interaction technology for kinesthetic learning and machine learning agents for self-guided learning, this research has addressed these underutilized technologies to demonstrate the efficiency of immersive learning. This thesis has explored different hand-tracking APIs and devices to integrate real-time hand interaction techniques. These hand interaction techniques and integrated machine learning agents using reinforcement learning are evaluated with different display devices to test compatibility. The proposed approach aims to provide self-guided, more productive, and interactive learning experiences. Further, this research has investigated ethics, privacy, and security issues in XR and covered the future of immersive learning in the Metaverse.
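
    As a flavour of the touchless hand-interaction primitives such systems build on, the toy sketch below detects a pinch by thresholding the distance between the tracked thumb-tip and index-tip joints. The joint format and the 2 cm trigger distance are illustrative assumptions; commercial HMD hand-tracking APIs supply 3D joint positions of this kind.

```python
import numpy as np

PINCH_THRESHOLD = 0.02      # metres; assumed trigger distance

def is_pinching(thumb_tip, index_tip):
    """True when the thumb and index fingertips are close enough to 'pinch'."""
    gap = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
    return gap < PINCH_THRESHOLD

# One tracked frame: fingertips 1.5 cm apart -> registers as a pinch.
print(is_pinching([0.10, 0.20, 0.30], [0.10, 0.215, 0.30]))   # True
```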

    CLARIN

    The book provides a comprehensive overview of the Common Language Resources and Technology Infrastructure (CLARIN) for the humanities. It covers a broad range of CLARIN language resources and services, its underlying technological infrastructure, the achievements of national consortia, and the challenges that CLARIN will tackle in the future. The book is published 10 years after CLARIN was established as a European Research Infrastructure Consortium.
