18 research outputs found

    Eye Tracking: A Perceptual Interface for Content Based Image Retrieval

    In this thesis, visual search experiments are devised to explore the feasibility of an eye-gaze-driven search mechanism. The thesis first explores gaze behaviour on images possessing different levels of saliency. Gaze was predominantly attracted to salient locations, but also made frequent reference to non-salient background regions, indicating that information from scan paths might prove useful for image search. The thesis then specifically investigates the benefits of eye tracking as an image retrieval interface, in terms of speed relative to selection by mouse and of the efficiency of eye-tracking mechanisms in the task of retrieving target images. Results are analysed using ANOVA and significant findings are discussed. Results show that eye selection was faster than selection by computer mouse, and that experience gained during visual tasks carried out using a mouse would benefit users subsequently transferred to an eye-tracking system. Results of the image retrieval experiments show that users are able to navigate to a target image within a database, confirming the feasibility of an eye-gaze-driven search mechanism. Additional histogram analysis of the fixations, saccades and pupil diameters in the eye-movement data revealed a new method of extracting intentions from gaze behaviour for image search, of which the user is not aware and which promises even quicker search performance. The research has two implications for Content Based Image Retrieval: (i) improvements in query formulation for visual search and (ii) new methods for visual search using attentional weighting. Furthermore, it was demonstrated that users are able to find target images at speeds sufficient to indicate that pre-attentive activity plays a role in visual search. A review of eye-tracking technology, current applications, visual perception research, and models of visual attention is presented, together with a review of the technology's potential for commercial exploitation.
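
    The histogram analysis mentioned above suggests a simple feature pipeline. A minimal sketch follows, assuming pre-segmented gaze events; the function name, bin ranges and synthetic test data are illustrative assumptions, not the thesis's actual method.

```python
# Minimal sketch, assuming pre-segmented gaze events; names and bin
# ranges are illustrative, not the thesis's actual feature design.
import numpy as np

def gaze_histogram_features(fix_durations_ms, sacc_amplitudes_deg,
                            pupil_diameters_mm, bins=10):
    """Concatenate normalized histograms of fixation durations, saccade
    amplitudes and pupil diameters into one intent-feature vector."""
    feats = []
    for values, lo, hi in (
        (fix_durations_ms, 0, 1000),   # fixations: up to ~1 s
        (sacc_amplitudes_deg, 0, 20),  # saccade amplitude in visual degrees
        (pupil_diameters_mm, 2, 8),    # physiological pupil range
    ):
        hist, _ = np.histogram(values, bins=bins, range=(lo, hi))
        feats.append(hist / max(hist.sum(), 1))  # normalize to a distribution
    return np.concatenate(feats)

# Synthetic example: one trial's gaze record.
rng = np.random.default_rng(0)
features = gaze_histogram_features(rng.gamma(2.0, 120.0, 50),
                                   rng.rayleigh(4.0, 49),
                                   rng.normal(4.5, 0.5, 500))
print(features.shape)  # (30,)
```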

    Towards Energy Efficient Mobile Eye Tracking for AR Glasses through Optical Sensor Technology

    After the introduction of smartphones and smartwatches, Augmented Reality (AR) glasses are considered the next breakthrough in the field of wearables. While the transition from smartphones to smartwatches was based mainly on established display technologies, the display technology of AR glasses presents a technological challenge. Many display technologies, such as retina projectors, are based on continuous adaptive control of the display based on the user’s pupil position. Furthermore, head-mounted systems require an adaptation and extension of established interaction concepts to provide the user with an immersive experience. Eye tracking is a crucial technology to help AR glasses achieve a breakthrough through optimized display technology and gaze-based interaction concepts. Available eye-tracking technologies, such as Video Oculography (VOG), do not meet the requirements of AR glasses, especially regarding power consumption, robustness, and integrability. To overcome these limitations and push mobile eye tracking for AR glasses forward, novel laser-based eye-tracking sensor technologies are researched in this thesis. The thesis contributes a significant scientific advancement towards energy-efficient mobile eye tracking for AR glasses. In the first part of the thesis, novel scanned-laser eye-tracking sensor technologies for AR glasses with retina projectors as display technology are researched. The goal is to overcome the disadvantages of VOG systems and to enable robust eye tracking with efficient handling of ambient light and slippage through optimized sensing methods and algorithms. The second part of the thesis researches the use of static Laser Feedback Interferometry (LFI) sensors as a low-power, always-on sensor modality for detecting user interaction via gaze gestures and for context recognition through Human Activity Recognition (HAR) for AR glasses. The static LFI sensors can measure the distance to the eye and the eye’s surface velocity at an outstanding sampling rate. Furthermore, they offer high integrability regardless of the display technology. In the third part of the thesis, a model-based eye-tracking approach is researched based on the static LFI sensor technology. The approach achieves eye tracking with an extremely high sampling rate by fusing multiple LFI sensors, which enables display-resolution-enhancement methods such as foveated rendering for AR glasses and Virtual Reality (VR) systems. The scientific contributions of this work lead to a significant advance in the field of mobile eye tracking for AR glasses through the introduction of novel sensor technologies that enable robust eye tracking, particularly in uncontrolled environments. These contributions have been published in internationally renowned journals and conferences.
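
    The always-on gaze-gesture detection described above relies on the LFI sensors' high-rate velocity measurements. Below is a hedged sketch of the general idea only: threshold-based event detection on one velocity channel, where the thresholds, units and function names are illustrative assumptions, not the published sensor-fusion algorithm.

```python
# Minimal sketch, assuming a single LFI velocity channel; thresholds and
# names are illustrative, not the thesis's sensor-fusion algorithm.
import numpy as np

def detect_velocity_events(velocity, fs_hz, v_thresh=50.0, min_dur_ms=5.0):
    """Return (start, end) sample indices where |velocity| stays above
    v_thresh for at least min_dur_ms -- a crude gaze-event detector."""
    above = np.abs(velocity) > v_thresh
    padded = np.concatenate(([False], above, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    min_samples = int(fs_hz * min_dur_ms / 1000.0)
    return [(int(s), int(e)) for s, e in zip(edges[0::2], edges[1::2])
            if e - s >= min_samples]

# Synthetic example at 10 kHz (LFI sensors sample far faster than video).
fs = 10_000
v = np.zeros(fs)
v[2000:2100] = 200.0  # a 10 ms high-velocity burst, e.g. a saccade
print(detect_velocity_events(v, fs))  # [(2000, 2100)]
```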

    How to improve learning from video, using an eye tracker

    The initial trigger of this research about learning from video was the availability of log files from users of video material. Video is seen as an attractive modality because it is associated with the relaxed mood of watching TV. The experiments in this research aim to gain more insight into the viewing patterns of students when viewing video. Students received an awareness instruction about possible alternative viewing behaviors, to see whether this would enhance their learning effects. We found that:
    - the learning effects of students with a narrow viewing repertoire were smaller than those of students with a broad viewing repertoire or of strategic viewers (see the sketch after this abstract);
    - students with some basic knowledge of the topics covered in the videos benefited most from the use of possible alternative viewing behaviors, and students with low prior knowledge benefited the least;
    - the knowledge gain of students with low prior knowledge disappeared after a few weeks; knowledge construction seems worse when doing two things at the same time;
    - media players could offer more options to help students search for the content they want to view again;
    - there was no correlation between pervasive personality traits and viewing behavior of students.
    The right use of video in higher education will lead to students and teachers who are more aware of their learning and teaching behavior, to better videos, to enhanced media players, and, finally, to higher learning effects that let users improve their learning from video.
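
    As referenced in the list above, viewing repertoires can in principle be read from media-player logs. This is a hedged sketch only: the event names and the narrow/broad threshold are illustrative assumptions, not the study's actual coding scheme.

```python
# Minimal sketch, assuming simple player-event logs; event names and the
# narrow/broad threshold are illustrative, not the study's coding scheme.
from collections import Counter

def classify_repertoire(events, min_distinct=3):
    """Label a viewer 'narrow' if they use few distinct player actions
    (e.g. only play/pause), 'broad' otherwise."""
    counts = Counter(events)
    label = "broad" if len(counts) >= min_distinct else "narrow"
    return label, counts

print(classify_repertoire(["play", "pause", "play", "pause"]))
print(classify_repertoire(["play", "pause", "seek_back", "speed_change"]))
```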

    Towards Usable End-user Authentication

    Authentication is the process of validating the identity of an entity, e.g., a person, a machine, etc.; the entity usually provides a proof of identity in order to be authenticated. When the entity to be authenticated is a human, the process is called end-user authentication. Making end-user authentication usable entails making it easy for a human to obtain, manage, and input the proof of identity in a secure manner. In machine-to-machine authentication, both ends have comparable memory and computational power to securely carry out the authentication process using cryptographic primitives and protocols. On the contrary, as a human has limited memory and computational power, cryptography is of little use in end-user authentication. Although password-based end-user authentication has many well-known security and usability problems, it is the de facto standard. Almost half a century of research effort has produced a multitude of end-user authentication methods more sophisticated than passwords; yet none has come close to replacing them. In this dissertation, taking advantage of the built-in sensing capability of smartphones, we propose an end-user authentication framework for smartphones, called ePet, which does not require any active participation from the user most of the time; the proposed framework is thus highly usable. Using data collected from subjects, we validate a part of the authentication framework for the Android platform. For web authentication, this dissertation proposes a novel password creation interface that helps a user remember a newly created password with more confidence by allowing her to perform various memory tasks built upon the new password. Declarative and motor memory help the user remember and efficiently input a password. A within-subjects study shows that declarative memory is sufficient for passwords; motor memory mostly facilitates the input process, and the memory tasks have therefore been designed to help cement the declarative memory for a newly created password. The dissertation concludes with an evaluation of the increased usability of the proposed interface through a between-subjects study.
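
    As an illustration of the kind of implicit, sensor-based matching a framework like ePet builds on, here is a minimal hedged sketch; the z-score profile model, the feature vectors and the acceptance threshold are assumptions for illustration, not the dissertation's design.

```python
# Minimal sketch, assuming z-score matching against an enrolled profile;
# the features and threshold are assumptions, not ePet's actual design.
import numpy as np

def authentication_score(profile_mean, profile_std, current_features):
    """Lower score = current behaviour closer to the enrolled user's
    profile (normalized distance over sensor-derived features)."""
    z = (current_features - profile_mean) / np.maximum(profile_std, 1e-6)
    return float(np.linalg.norm(z) / np.sqrt(len(z)))

# Enrolment: summarize past sensor feature vectors (gait, touch, ...).
rng = np.random.default_rng(1)
enrolled = rng.normal(0.0, 1.0, (200, 8))
mean, std = enrolled.mean(axis=0), enrolled.std(axis=0)
score = authentication_score(mean, std, enrolled[0])
print("authenticated" if score < 2.0 else "challenge the user")
```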

    Validation of a prototype hybrid eye-tracker against the DPI and the Tobii Spectrum

    We benchmark a new hybrid eye-tracker system against the DPI (Dual Purkinje Imaging) tracker and the Tobii Spectrum in a series of three experiments. In a first, within-subjects battery of tests, we show that the precision of the new eye-tracker is much better than that of both the DPI and the Spectrum, but that its accuracy is not better. We also show that the new eye-tracker is insensitive to effects of pupil contraction on gaze direction (in contrast to both the DPI and the Spectrum), that it detects microsaccades on par with the DPI and better than the Spectrum, and that it can possibly record tremor. In the second experiment, sensors of the novel eye-tracker were integrated into the optical path of the DPI bench. Simultaneous recordings show that saccade dynamics, post-saccadic oscillations and measurements of translational movements are comparable to those of the DPI. In the third experiment, we show that the DPI and the new eye-tracker are capable of detecting 2 arcmin artificial-eye rotations while the Spectrum cannot. Results suggest that the new eye-tracker, in contrast to video-based P-CR systems [Holmqvist and Blignaut 2020], is suitable for studies that record small eye movements under varying ambient light levels.
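
    For reference, precision in such benchmarks is commonly quantified as the root mean square of sample-to-sample deviations during steady fixation. The sketch below shows that standard metric; the paper's exact procedure may differ, and the synthetic data are an assumption.

```python
# Minimal sketch of RMS sample-to-sample precision, a standard metric in
# eye-tracking benchmarks; the paper's exact procedure may differ.
import numpy as np

def rms_s2s_precision(x_deg, y_deg):
    """RMS of successive angular displacements (degrees) of a gaze trace
    recorded while the eye fixates a stationary target."""
    dx, dy = np.diff(x_deg), np.diff(y_deg)
    return float(np.sqrt(np.mean(dx**2 + dy**2)))

# Synthetic quiet fixation with ~0.01 deg sample noise.
rng = np.random.default_rng(2)
x, y = rng.normal(0, 0.01, 1000), rng.normal(0, 0.01, 1000)
print(f"precision: {rms_s2s_precision(x, y):.4f} deg RMS-S2S")
```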

    KEER2022

    Half title: KEER2022. Diversities. Resource description: 25 July 202

    Evaluation of Detecting Cybersickness via VR HMD Positional Measurements Under Realistic Usage Conditions.

    With the resurgence of virtual reality head-mounted display (VR HMD) technologies since 2015, VR technology is becoming ever more present in people's day-to-day lives. However, one significant barrier to this progress is a condition called cybersickness, a form of motion sickness induced by the use of VR HMDs. It is often debilitating to sufferers, with symptoms ranging from mild discomfort to full-on vomiting. Much research effort focuses on identifying the cause of and solution to this problem, with many studies reporting various factors that influence cybersickness, such as vection and field of view. However, these studies often disagree, and comparing their results is complicated because the stimuli used in the experiments vary wildly. This study theorised that this mismatch might be partly due to the different mental loads of the tasks, which may influence both cybersickness and stability-based measurement methods such as postural stability captured by centre-of-pressure (COP) measurements. One recurring idea in this research area is to use the HMD device itself to capture the stability of the user's head. However, measuring the head's position via the VR HMD is known to be inaccurate, so a perfect representation of the head's position cannot be obtained. This research took the HTC Vive headset and used it to capture the head position of multiple subjects experiencing two different VR environments under differing levels of cognitive load. The design of these test environments reflected normal VR usage. This research found that the VR HMD measurements in this scenario may be a suitable proxy for recording instability. However, the underlying method was greatly influenced by other factors, with cognitive load (5.4% instability increase between the low- and high-load conditions) and test order (2.4% instability decrease between the first-run and second-run conditions) having a more significant impact on the instability recorded than the onset of cybersickness (2% instability increase between sick and well participants). Moreover, separating participants suffering from cybersickness from unaffected participants was not possible based on the recorded motion alone. Additionally, attempts to capture stability data during actual VR gameplay in specific areas of possible head stability produced mixed results and failed to reliably identify participants exhibiting symptoms of cybersickness. In conclusion, this study finds that while a proxy measurement for head stability is obtainable from an HTC Vive headset, the results recorded in no way indicate cybersickness onset. Additionally, the study shows that cognitive load and test order significantly impact stability measurements recorded in this way. As such, this approach would need calibration on a case-by-case basis if used to detect cybersickness.
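
    Below is a hedged sketch of deriving a head-instability proxy from HMD positional samples, in the spirit of the study; the specific metric (mean head speed over the positional trace) and the sampling parameters are assumptions, not the thesis's exact method.

```python
# Minimal sketch, assuming raw HMD positions; the metric (mean head
# speed over the trace) is an assumption, not the thesis's exact method.
import numpy as np

def head_instability(positions_m, fs_hz):
    """positions_m: (N, 3) HMD positions in metres. Returns mean head
    speed (m/s), a simple sway proxy comparable across conditions."""
    step = np.linalg.norm(np.diff(positions_m, axis=0), axis=1)
    return float(step.sum() * fs_hz / (len(positions_m) - 1))

# Synthetic 60 s recording at the Vive's 90 Hz with small random sway.
rng = np.random.default_rng(3)
positions = np.cumsum(rng.normal(0, 1e-4, (90 * 60, 3)), axis=0)
print(f"instability: {head_instability(positions, 90):.4f} m/s")
```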

    Preface


    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.