Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors
Visual attention is highly fragmented during mobile interactions, but the
erratic nature of attention shifts currently limits attentive user interfaces
to adapting after the fact, i.e. after shifts have already happened. We instead
study attention forecasting -- the challenging task of predicting users' gaze
behaviour (overt visual attention) in the near future. We present a novel
long-term dataset of everyday mobile phone interactions, continuously recorded
from 20 participants engaged in common activities on a university campus over
4.5 hours each (more than 90 hours in total). We propose a proof-of-concept
method that uses device-integrated sensors and body-worn cameras to encode rich
information on device usage and users' visual scene. We demonstrate that our
method can forecast bidirectional attention shifts and predict whether the
primary attentional focus is on the handheld mobile device. We study the impact
of different feature sets on performance and discuss the significant potential, as well as the remaining challenges, of forecasting user attention during mobile interactions.
Comment: 13 pages, 9 figures
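The abstract does not specify the forecasting model; as a minimal sketch, a binary classifier over fused device and scene features could look as follows (the feature dimensions, the random-forest choice, and the synthetic data are all illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows = 1000
X_device = rng.normal(size=(n_windows, 16))  # placeholder device-usage features
X_scene = rng.normal(size=(n_windows, 32))   # placeholder scene-camera features
y = rng.integers(0, 2, size=n_windows)       # 1 = focus will be on the device

# Fuse both modalities and train a forecaster for the next time window
X = np.hstack([X_device, X_scene])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```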
PrivacEye: Privacy-Preserving Head-Mounted Eye Tracking Using Egocentric Scene Image and Eye Movement Features
Eyewear devices, such as augmented reality displays, increasingly integrate
eye tracking, but the first-person camera required to map a user's gaze to the
visual scene can pose a significant threat to user and bystander privacy. We
present PrivacEye, a method to detect privacy-sensitive everyday situations and
automatically enable and disable the eye tracker's first-person camera using a
mechanical shutter. To close the shutter in privacy-sensitive situations, the
method uses a deep representation of the first-person video combined with rich
features that encode users' eye movements. To open the shutter without visual
input, PrivacEye detects changes in users' eye movements alone to gauge changes
in the "privacy level" of the current situation. We evaluate our method on a
first-person video dataset recorded in daily life situations of 17
participants, annotated by themselves for privacy sensitivity, and show that
our method is effective in preserving privacy in this challenging setting.
Comment: 10 pages, 6 figures, supplementary material
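A minimal sketch of the resulting two-state shutter logic, assuming hypothetical per-frame sensitivity scores from a scene-feature classifier and an eye-movement-feature classifier (the fusion weights and thresholds are illustrative placeholders, not the paper's values):

```python
def update_shutter(shutter_open, scene_score, gaze_score,
                   tau_close=0.5, tau_open=0.5):
    """Return the new shutter state (True = open).

    While open, scene and gaze features are both available and jointly
    decide whether to close; while closed, there is no visual input, so
    only the eye-movement score can vote to reopen.
    """
    if shutter_open:
        sensitive = 0.5 * scene_score + 0.5 * gaze_score > tau_close
        return not sensitive
    return gaze_score < tau_open
```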
Adversarial Attacks on Classifiers for Eye-based User Modelling
An ever-growing body of work has demonstrated the rich information content
available in eye movements for user modelling, e.g. for predicting users'
activities, cognitive processes, or even personality traits. We show that
state-of-the-art classifiers for eye-based user modelling are highly vulnerable
to adversarial examples: small artificial perturbations in gaze input that can
dramatically change a classifier's predictions. We generate these adversarial
examples using the Fast Gradient Sign Method (FGSM), which linearises the loss function around the input to find suitable perturbations. On the sample task of eye-based
document type recognition we study the success of different adversarial attack
scenarios: with and without knowledge about classifier gradients (white-box vs.
black-box) as well as with and without targeting the attack to a specific
class. In addition, we demonstrate the feasibility of defending against adversarial attacks by adding adversarial examples to a classifier's training data.
Comment: 9 pages, 7 figures
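The abstract names FGSM explicitly; below is a minimal untargeted white-box sketch in PyTorch (the perturbation budget `epsilon` and the assumption that gaze input arrives as dense feature vectors are ours):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, gaze_features, labels, epsilon=0.05):
    """One signed-gradient step: x_adv = x + epsilon * sign(grad_x loss)."""
    x = gaze_features.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    # Untargeted attack: step *up* the loss gradient; a targeted variant
    # would instead step down the gradient of the target-class loss.
    return (x + epsilon * x.grad.sign()).detach()
```

The defense the abstract describes would then amount to appending `fgsm_perturb(...)` outputs, paired with their original labels, to the classifier's training set.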
Privacy-aware eye tracking using differential privacy
With eye tracking being increasingly integrated into virtual and augmented reality (VR/AR) head-mounted displays, preserving users’ privacy is an ever more important, yet under-explored, topic in the eye tracking community. We report a large-scale online survey (N=124) on privacy aspects of eye tracking that provides the first comprehensive account of with whom, for which services,
and to what extent users are willing to share their gaze data. Using these insights, we design a privacy-aware VR interface that uses differential privacy, which we evaluate on a new 20-participant dataset for two privacy-sensitive tasks: we show that our method can prevent user re-identification and protect gender information while maintaining high performance for gaze-based document type classification. Our results highlight the privacy challenges particular to gaze data and demonstrate that differential privacy is a potential means to address them. Thus, this paper lays important foundations for future research on privacy-aware gaze interfaces.
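The abstract does not state which differential-privacy mechanism is applied; one standard instantiation is the Laplace mechanism, sketched below (the sensitivity value and the per-feature release model are assumptions for illustration):

```python
import numpy as np

def laplace_mechanism(gaze_features, sensitivity, epsilon):
    """Release features with Laplace noise of scale sensitivity/epsilon,
    satisfying epsilon-differential privacy for the given sensitivity."""
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=np.shape(gaze_features))
    return np.asarray(gaze_features) + noise
```

Lower `epsilon` means stronger privacy guarantees but more noise, which is exactly the privacy-utility trade-off such an interface has to balance.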
Darmstadt stair ambulation dataset including level walking, stair ascent, stair descent and gait transitions at three stair heights
The Darmstadt stair ambulation dataset was collected to improve the control and assistance of wearable lower limb robotics. It contains kinematic, kinetic, and electromyographic (EMG) data for transitions between level walking and stair ascent and between stair ascent and level walking, as well as for transitions between level walking and stair descent and between stair descent and level walking.
Twelve physically unimpaired male subjects with a mean age of 25.4 years and a mean weight of 74.6 kg participated in the experiments.
A motion capture system was used to capture the body kinematics. Seven force plates were used in two setups to collect the ground reaction forces of eleven strides for the stair ascent transitions and of eleven strides for the stair descent transitions. The center of pressure, the center of mass, joint angles, and angular velocities were determined. Further, inverse dynamics were used to determine the lower limb joint moments and the lower limb joint power.
Sixteen EMG sensors were used to collect the muscle activity of twelve muscles. As each EMG sensor also contains an inertial measurement unit (IMU) with a 3D gyroscope, 3D accelerometer, and 3D magnetometer, the sensors were also used to capture the lower limb kinematics in parallel to the motion capture system.
Stair ambulation was performed at three stair slopes. The data is provided at different processing levels, from raw to fully processed data. The attached documentation provides details about the data acquisition, the data processing, and the provided data structure and format.
Version 1.
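As one concrete step of the processing pipeline described above, joint power follows directly from the inverse-dynamics joint moments; a minimal sketch (sagittal-plane, per-sample, with made-up example values):

```python
import numpy as np

def joint_power(moment_nm, angular_velocity_rad_s):
    """Per-sample joint power P = M * omega from joint moment [Nm]
    and joint angular velocity [rad/s]."""
    return np.asarray(moment_nm) * np.asarray(angular_velocity_rad_s)

# Example: three knee samples -> [48., 28., 5.] W
print(joint_power([40.0, 35.0, -10.0], [1.2, 0.8, -0.5]))
```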
Lower limb joint biomechanics-based identification of gait transitions in between level walking and stair ambulation
Lower limb exoskeletons and lower limb prostheses have the potential to reduce gait limitations during stair ambulation. To develop robotic assistance devices, the biomechanics of stair ambulation and the required transitions to level walking have to be understood. This study aimed to identify the timing of these transitions, to determine whether transition phases exist and how long they last, and to investigate whether there is a joint-related order and timing for the start and end of the transitions. To this end, this study analyzed the kinematics and kinetics of both transitions between level walking and stair ascent, and between level walking and stair descent (12 subjects, 25.4 yrs, 74.6 kg). We found that transitions primarily start within the stance phase and end within the swing phase. Transition phases exist for each limb, all joints (hip, knee, ankle), and all types of transitions. They have a mean duration of half a stride and do not last longer than one stride. The duration of the transition phase for all joints of a single limb in aggregate is less than 35% of one stride in all but one case. The distal joints initialize the stair ascent transitions, while the proximal joints primarily initialize the stair descent transitions. In general, the distal joints complete the transitions first. We believe that energy- and balance-related processes are responsible for the joint-specific transition timing. Given the existence of a transition phase for all joints and transitions, we believe that lower limb exoskeleton or prosthetic control concepts should account for these transitions in order to improve transition smoothness and thus increase user comfort, safety, and user experience. Our gait data and the identified transition timings can provide a reference for the design and the performance of stair ambulation-related control concepts.
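The study's actual identification procedure is detailed in the paper itself; as a heavily simplified sketch of the underlying idea, one could flag a transition phase wherever a joint-angle trajectory departs from the steady-state gait profile by more than a chosen margin (the deviation threshold `k` and the time-normalised inputs are our assumptions):

```python
import numpy as np

def transition_bounds(trajectory, steady_mean, steady_std, k=2.0):
    """Return (start, end) sample indices where a joint-angle trajectory
    deviates from the steady-state profile by more than k standard
    deviations, or None if no transition is detected."""
    deviation = np.abs(np.asarray(trajectory) - np.asarray(steady_mean))
    flagged = np.flatnonzero(deviation > k * np.asarray(steady_std))
    if flagged.size == 0:
        return None
    return int(flagged[0]), int(flagged[-1])
```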