78 research outputs found

    Using Gaze for Behavioural Biometrics

    A principled approach to the analysis of eye movements for behavioural biometrics is laid down. The approach is grounded in foraging theory, which provides a sound basis for capturing the uniqueness of individual eye movement behaviour. We propose a composite Ornstein-Uhlenbeck process for quantifying the exploration/exploitation signature characterising foraging eye behaviour. The relevant parameters of the composite model, inferred from eye-tracking data via Bayesian analysis, are shown to yield a suitable feature set for biometric identification; the latter is eventually accomplished via a classical classification technique. A proof of concept of the method is provided by measuring its identification performance on a publicly available dataset. Data and code for reproducing the analyses are made available. Overall, we argue that the approach offers a fresh view on both the analysis of eye-tracking data and prospective applications in this field.
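
    The composite Ornstein-Uhlenbeck idea can be illustrated with a short simulation. The Python sketch below is a minimal illustration, not the paper's actual model: it alternates a two-dimensional OU process between an "exploitation" regime (strong mean reversion toward a local target, low noise) and an "exploration" regime (weak reversion, high noise). All parameter values and the regime-switching rule are assumptions chosen purely for illustration.

    import numpy as np

    def simulate_gaze(n_steps=2000, dt=0.01, switch_prob=0.005, seed=0):
        rng = np.random.default_rng(seed)
        # (theta, sigma) per regime: mean-reversion rate and noise scale
        # (illustrative values, not inferred from data)
        regimes = {"exploit": (8.0, 0.5), "explore": (1.0, 3.0)}
        state = "exploit"
        x = np.zeros(2)                      # current gaze position
        mu = np.zeros(2)                     # current attractor (fixation target)
        path = np.empty((n_steps, 2))
        for t in range(n_steps):
            if rng.random() < switch_prob:   # occasionally switch regime...
                state = "explore" if state == "exploit" else "exploit"
                mu = rng.uniform(-5.0, 5.0, size=2)  # ...and pick a new target
            theta, sigma = regimes[state]
            # Euler-Maruyama step of dx = theta * (mu - x) * dt + sigma * dW
            x = x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
            path[t] = x
        return path

    trajectory = simulate_gaze()             # (2000, 2) array of gaze samples

    Regime-specific quantities like theta and sigma are the kind of parameters that, once inferred from real eye-tracking data rather than fixed by hand, could serve as the biometric feature set the abstract describes.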

    Tracking the temporal dynamics of cultural perceptual diversity in visual information processing

    Human perception and cognition are not universal. Culture and experience markedly modulate visual information sampling in humans. Cross-cultural studies comparing Western Caucasians (WCs) and East Asians (EAs) have shown cultural differences in behaviour and neural activity related to perception and cognition. In particular, a number of studies suggest a local perceptual bias for Westerners (WCs) and a global bias for Easterners (EAs): WCs most efficiently perceive the salient information in the focal object, whereas EAs are biased toward information in the background. Such visual processing biases have been observed across a wide range of tasks and stimuli. However, the underlying neural mechanisms of these perceptual tunings, especially the temporal dynamics of the coding of different information, have yet to be clarified. Here, in the first two experiments, I focus on the perceptual function of the diverse eye movement strategies of WCs and EAs. Human observers engage different eye movement strategies to gather facial information: WCs preferentially fixate on the eyes and mouth, whereas EAs allocate their gaze relatively more to the centre of the face. Employing a fixational eye movement paradigm in Study 1 and electroencephalographic (EEG) recording in Study 2, the results confirm the cultural differences in spatial-frequency information tuning and suggest different perceptual functions of the preferred eye movement pattern as a function of culture. The third study uses EEG adaptation and hierarchical visual stimuli to assess cultural tuning in global/local processing. Cultural diversity driven by selective attention is revealed at the early sensory stage. Together, the results show the temporal dynamics of cultural perceptual diversity: cultural distinctions in the early time course are driven by selective attention to global information in EAs, whereas late effects are modulated by detailed processing of local information in WC observers.

    How to improve learning from video, using an eye tracker

    The initial trigger for this research on learning from video was the availability of log files from users of video material. The video modality is seen as attractive because it is associated with the relaxed mood of watching TV. The experiments in this research aim to gain more insight into the viewing patterns of students watching video. Students received an awareness instruction about the use of possible alternative viewing behaviors to see whether this would enhance their learning effects. We found that:
    - the learning effects of students with a narrow viewing repertoire were smaller than those of students with a broad viewing repertoire or of strategic viewers;
    - students with some basic knowledge of the topics covered in the videos benefited most from the use of possible alternative viewing behaviors, and students with low prior knowledge benefited the least;
    - the knowledge gain of students with low prior knowledge disappeared after a few weeks; knowledge construction seems to suffer when doing two things at the same time;
    - media players could offer more options to help students search for the content they want to view again;
    - there was no correlation between pervasive personality traits and the viewing behavior of students.
    The right use of video in higher education will lead to students and teachers who are more aware of their learning and teaching behavior, to better videos, to enhanced media players, and, finally, to higher learning effects that let users improve their learning from video.

    The role of emotion in the learning of trustworthiness from eye-gaze: Evidence from facial electromyography.

    When perceived gaze direction is congruent with the location of a target, attention is facilitated and responses are faster than when it is incongruent. Faces that consistently gaze congruently are also judged more trustworthy than faces that consistently gaze incongruently. However, it is unclear how gaze cues elicit changes in trust. We measured facial electromyography (EMG) during an identity-contingent gaze-cueing task to examine whether embodied emotional reactions to gaze cues mediate trust learning. Gaze-cueing effects were found to be equivalent regardless of whether or not participants showed learning of trust in the expected direction. In contrast, we found distinctly different patterns of EMG activity in these two populations. In a further experiment we showed that the learning effects were specific to viewing faces, as no changes in liking were detected when viewing arrows that evoked similar attentional orienting responses. These findings implicate embodied emotion in learning trust from identity-contingent gaze-cueing, possibly due to the social value of shared attention or deception rather than domain-general attentional orienting.

    Scanpath modeling and classification with Hidden Markov Models

    How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies have shown that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., the task at hand) and stimulus-related (e.g., image semantic category) information. However, eye movements are complex signals, and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. The method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. First, we use fixations recorded while viewing 800 static natural scene images and infer an observer-related characteristic: the task at hand. We achieve an average correct classification rate of 55.9% (chance = 33%). We show that correct classification rates correlate positively with the number of salient regions present in the stimuli. Second, we use eye positions recorded while viewing 15 conversational videos and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average correct classification rate of 81.2% (chance = 50%). HMMs allow bottom-up, top-down, and oculomotor influences to be integrated into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gazing behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
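
    As a rough illustration of an HMM-plus-DA pipeline of this kind, the Python sketch below fits one Gaussian HMM per class with hmmlearn (standard EM rather than the variational inference used here, and not the authors' Matlab toolbox SMAC with HMM), represents each scanpath by its log-likelihood under each class model, and feeds those features to scikit-learn's linear discriminant analysis. The synthetic data, state count, and feature choice are all illustrative assumptions.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def fit_class_hmm(scanpaths, n_states=3, seed=0):
        """Fit one Gaussian HMM to all scanpaths of a class.
        scanpaths: list of (n_fixations, 2) arrays of fixation coordinates."""
        X = np.concatenate(scanpaths)
        lengths = [len(s) for s in scanpaths]
        hmm = GaussianHMM(n_components=n_states, covariance_type="full",
                          random_state=seed)
        return hmm.fit(X, lengths)

    def loglik_features(scanpaths, class_hmms):
        """Represent each scanpath by its log-likelihood under each class HMM."""
        return np.array([[h.score(s) for h in class_hmms] for s in scanpaths])

    # Usage with synthetic data standing in for real fixation sequences:
    rng = np.random.default_rng(0)
    class_a = [rng.normal(0.0, 1.0, size=(20, 2)) for _ in range(30)]
    class_b = [rng.normal(2.0, 1.0, size=(20, 2)) for _ in range(30)]
    hmms = [fit_class_hmm(class_a), fit_class_hmm(class_b)]
    X = loglik_features(class_a + class_b, hmms)
    y = np.array([0] * 30 + [1] * 30)
    # Training-set accuracy only; a real evaluation would hold out data.
    print(LinearDiscriminantAnalysis().fit(X, y).score(X, y))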

    Varieties of Attractiveness and their Brain Responses


    Science of Facial Attractiveness
