Seamless and Secure VR: Adapting and Evaluating Established Authentication Systems for Virtual Reality
Virtual reality (VR) headsets are enabling a wide range of new
opportunities for users. For example, in the near future they
may be able to visit virtual shopping malls and virtually join
international conferences. These and many other scenarios pose
new questions with regard to privacy and security, in particular the
authentication of users within the virtual environment. As a first
step towards seamless VR authentication, this paper investigates
the direct transfer of well-established concepts (PIN, Android
unlock patterns) into VR. In a pilot study (N = 5) and a lab
study (N = 25), we adapted existing mechanisms and evaluated
their usability and security for VR. The results indicate that
both PINs and patterns are well suited for authentication in
VR. We found that the usability of both methods matched the
performance known from the physical world. In addition, the
private visual channel makes authentication harder to observe,
indicating that authentication in VR using traditional concepts
already achieves a good balance in the trade-off between usability
and security. The paper contributes to a better understanding of
authentication within VR environments by providing the first
investigation of established authentication methods in VR, and it
lays the groundwork for the design of future authentication schemes
intended solely for VR environments.
Comparison of Data Representations and Machine Learning Architectures for User Identification on Arbitrary Motion Sequences
Reliable and robust user identification and authentication are important and
often necessary requirements for many digital services. It becomes paramount in
social virtual reality (VR) to ensure trust, specifically in digital encounters
with lifelike realistic-looking avatars as faithful replications of real
persons. Recent research has shown that the movements of users in extended
reality (XR) systems carry user-specific information and can thus be used to
verify their identities. This article compares three different potential
encodings of the motion data from head and hands (scene-relative,
body-relative, and body-relative velocities), and the performances of five
different machine learning architectures (random forest, multi-layer
perceptron, fully recurrent neural network, long short-term memory, gated
recurrent unit). We use the publicly available dataset "Talking with Hands" and
publish all code to allow reproducibility and to provide baselines for future
work. After hyperparameter optimization, the combination of a long
short-term memory architecture and body-relative data outperformed competing combinations:
the model correctly identifies any of the 34 subjects with an accuracy of 100%
within 150 seconds. Altogether, our approach provides an effective foundation
for behaviometric-based identification and authentication to guide researchers
and practitioners. Data and code are published under
https://go.uniwue.de/58w1r. Comment: in press at IEEE VRAI 202
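The three motion encodings compared in the article can be illustrated with a minimal sketch. The function names, the position-only representation (ignoring head/hand orientations), and the fixed frame interval are assumptions for illustration; the article's actual preprocessing may differ.

```python
import numpy as np

def scene_relative(head_pos, hand_pos):
    """Raw positions in world (scene) coordinates, concatenated per frame."""
    return np.concatenate([head_pos, hand_pos], axis=-1)

def body_relative(head_pos, hand_pos):
    """Hand positions expressed relative to the head, which removes
    the user's absolute location in the scene."""
    return hand_pos - head_pos

def body_relative_velocity(head_pos, hand_pos, dt):
    """Frame-to-frame differences of body-relative positions, which
    additionally removes static posture and keeps only movement."""
    rel = hand_pos - head_pos
    return np.diff(rel, axis=0) / dt
```

A session of T frames thus yields arrays of shape (T, 6), (T, 3), and (T - 1, 3), respectively, which can then be fed to the compared classifiers.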
Extensible Motion-based Identification of XR Users with Non-Specific Motion
Recently emerged solutions demonstrate that the movements of users
interacting with extended reality (XR) applications carry identifying
information and can be leveraged for identification. While such solutions can
identify XR users within a few seconds, current systems involve one of
two trade-offs: either they apply simple distance-based approaches that
can only be used for specific, predetermined motions, or they use
classification-based approaches that employ more powerful machine learning models
and thus also work for arbitrary motions, but require full retraining to enroll
new users, which can be prohibitively expensive. In this paper, we propose to
combine the strengths of both approaches by using an embedding-based approach
that leverages deep metric learning. We train the model on a dataset of users
playing the VR game "Half-Life: Alyx" and conduct multiple experiments and
analyses. The results show that the embedding-based method 1) is able to
identify new users from non-specific movements using only a few minutes of
reference data, 2) can enroll new users within seconds, while retraining a
comparable classification-based approach takes almost a day, 3) is more
reliable than a baseline classification-based approach when only little
reference data is available, 4) can be used to identify new users from another
dataset recorded with different VR devices. Altogether, our solution is a
foundation for easily extensible XR user identification systems, applicable
even to non-specific movements. It also paves the way for production-ready
models that could be used by XR practitioners without requiring the
expertise, hardware, or data needed to train deep learning models.
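The key property of the embedding-based approach, enrollment without retraining, can be sketched as a nearest-centroid lookup in embedding space. The `embed` placeholder below stands in for the trained deep-metric-learning model; the class name and the centroid/Euclidean-distance matching rule are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def embed(motion_window: np.ndarray) -> np.ndarray:
    # Placeholder for the trained metric-learning model: in practice
    # this would map a (frames, features) window to a learned embedding.
    return motion_window.mean(axis=0)

class EmbeddingIdentifier:
    """Enroll users by storing a centroid of their reference embeddings;
    identify a new sequence by its nearest centroid. Adding a user adds
    one centroid -- no model retraining is required."""

    def __init__(self):
        self.centroids = {}

    def enroll(self, user_id, windows):
        embs = np.stack([embed(w) for w in windows])
        self.centroids[user_id] = embs.mean(axis=0)

    def identify(self, window):
        e = embed(window)
        return min(self.centroids,
                   key=lambda u: np.linalg.norm(self.centroids[u] - e))
```

Because identification is a lookup rather than a classification head, the same frozen model can be applied to users (and, as the paper reports, even datasets) unseen during training.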
Who is Alyx? A new behavioral biometric dataset for user identification in XR
Introduction: This paper addresses the need for reliable user identification in Extended Reality (XR), focusing on the scarcity of public datasets in this area.
Methods: We present a new dataset collected from 71 users who played the game “Half-Life: Alyx” on an HTC Vive Pro for 45 min across two separate sessions. The dataset includes motion and eye-tracking data, along with physiological data from a subset of 31 users. Benchmark performance is established using two state-of-the-art deep learning architectures, Convolutional Neural Networks (CNN) and Gated Recurrent Units (GRU).
Results: The best model achieved a mean accuracy of 95% for user identification within 2 min when trained on the first session and tested on the second.
Discussion: The dataset is freely available and serves as a resource for future research in XR user identification, thereby addressing a significant gap in the field. Its release aims to facilitate advancements in user identification methods and promote reproducibility in XR research.
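Benchmarking sequence models like the CNN and GRU above typically starts by slicing each session's continuous recording into fixed-length windows. The helper below is a generic sketch of that preprocessing step; the window length, hop size, and exact pipeline used for this dataset are assumptions.

```python
import numpy as np

def make_windows(seq, win_len, hop):
    """Slice a (T, D) motion sequence into overlapping fixed-length
    windows of shape (win_len, D), suitable as classifier inputs."""
    windows = [seq[s:s + win_len]
               for s in range(0, len(seq) - win_len + 1, hop)]
    return np.stack(windows)
```

Training on windows from the first session and evaluating on windows from the second, as the benchmark does, guards against the model memorizing session-specific artifacts instead of user-specific behavior.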
One-Step, Three-Factor Passthought Authentication With Custom-Fit, In-Ear EEG
In-ear EEG offers a promising path toward usable, discreet brain-computer interfaces (BCIs) for both healthy individuals and persons with disabilities. To test the promise of this modality, we produced a brain-based authentication system using custom-fit EEG earpieces. In a sample of N = 7 participants, we demonstrated that our system has high accuracy, higher than prior work using non-custom earpieces. We demonstrated that both inherence and knowledge factors contribute to authentication accuracy, and performed a simulated attack to show our system's robustness against impersonation. From an authentication standpoint, our system provides three factors of authentication in a single step. From a usability standpoint, our system does not require a cumbersome, head-worn device.
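The "three factors in one step" idea can be sketched as a single decision rule: one EEG recording is checked both against the claimed user's biometric template (inherence) and against that user's secret mental task (knowledge), while the custom-fit earpiece itself functions as a possession-like factor. The function name, score inputs, and thresholds below are illustrative assumptions, not the paper's actual classifier.

```python
def authenticate(biometric_score, task_score,
                 bio_thresh=0.9, task_thresh=0.9):
    """Hypothetical one-step decision: accept only if the EEG signal
    matches the claimed user's template (inherence) AND is classified
    as that user's secret mental task (knowledge). Thresholds are
    illustrative placeholders."""
    return biometric_score >= bio_thresh and task_score >= task_thresh
```

An impersonator would need to defeat both checks simultaneously with a single recording, which is what makes the combined factors stronger than either alone.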