
    Perceptual Manipulations for Hiding Image Transformations in Virtual Reality

    Users of virtual reality make frequent gaze shifts and head movements to explore their surrounding environment. Saccades are rapid, ballistic, conjugate eye movements that reposition our gaze, and in doing so create large-field motion on the retina. Because of this high-speed retinal motion, the brain suppresses the visual signals from the eye, a perceptual phenomenon known as saccadic suppression. These moments of visual blindness can help hide graphical updates to the display in virtual reality. In this dissertation, I investigated how the visibility of various image transformations differed across combinations of saccade and head rotation conditions. Additionally, I studied how hand and gaze interaction affected image change discrimination in an inattentional blindness task. I conducted four psychophysical experiments in desktop or head-mounted VR. In the eye tracking studies, users viewed 3D scenes and were triggered to make a vertical or horizontal saccade. During the saccade, an instantaneous translation or rotation was applied to the virtual camera used to render the scene. Participants were required to indicate the direction of these transformations after each trial. The results showed that the type and size of the image transformation affected change detectability. During horizontal or vertical saccades, rotations about the roll axis were the most detectable, while horizontal and vertical translations were the least noticeable. In a second, similar study, I added a constant camera motion to simulate a head rotation, and in a third study, I compared active head rotation with a simulated rotation or a static head. I found less sensitivity to transsaccadic horizontal than to vertical camera shifts during simulated or real head pan. Conversely, during simulated or real head tilt, observers were less sensitive to transsaccadic vertical than horizontal camera shifts. In addition, in my multi-interactive inattentional blindness experiment, I compared sensitivity to sudden image transformations when a participant used their hand and gaze to move and watch an object with when they only watched it move. The results confirmed that when a primary task requires focus and attention across two interaction modalities (gaze and hand), a visual stimulus can be hidden more effectively than when only one sense (vision) is involved. Understanding the effect of continuous head movement and attention on the visibility of a sudden transsaccadic change can help optimize the visual performance of gaze-contingent displays and improve user experience. Perceptually suppressed rotations or translations can be used to introduce imperceptible changes in virtual camera pose in applications such as networked gaming, collaborative virtual reality, and redirected walking. This dissertation suggests that such transformations can be more effective and more substantial during active or passive head motion. Moreover, inattentional blindness during an attention-demanding task provides additional opportunities for imperceptible updates to a visual display.
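    These studies hinge on gaze-contingent rendering: detect the saccade, then apply the camera change while vision is suppressed. Below is a minimal sketch of that loop, assuming a simple velocity-threshold saccade detector and synthetic gaze samples; the threshold value and all names are illustrative, not the dissertation's implementation.

```python
import math

SACCADE_ONSET_DEG_S = 130.0  # illustrative velocity threshold; studies vary

def in_saccade(prev_gaze_deg, curr_gaze_deg, dt_s):
    """Flag a saccade when angular gaze velocity exceeds the threshold."""
    return math.dist(prev_gaze_deg, curr_gaze_deg) / dt_s > SACCADE_ONSET_DEG_S

def update_camera(yaw_deg, pending_offset_deg, saccading):
    """Apply the queued transsaccadic rotation only while vision is suppressed."""
    if saccading and pending_offset_deg != 0.0:
        yaw_deg += pending_offset_deg
        pending_offset_deg = 0.0  # consume the queued change
    return yaw_deg, pending_offset_deg

# Toy frame loop over synthetic horizontal/vertical gaze angles (degrees) at 1 kHz.
samples = [(0.0, 0.0), (0.1, 0.0), (5.0, 0.0), (10.0, 0.0), (10.1, 0.0)]
yaw, pending = 0.0, 2.0  # a 2-degree yaw change queued for the next saccade
for prev, curr in zip(samples, samples[1:]):
    yaw, pending = update_camera(yaw, pending, in_saccade(prev, curr, dt_s=0.001))
print(yaw)  # 2.0 -- the offset was applied during the high-velocity samples
```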

    Advancing proxy-based haptic feedback in virtual reality

    This thesis advances haptic feedback for Virtual Reality (VR). Our work is guided by Sutherland's 1965 vision of the ultimate display, which calls for VR systems to control the existence of matter. To push towards this vision, we build upon proxy-based haptic feedback, a technique characterized by the use of passive tangible props. The goal of this thesis is to tackle the central drawback of this approach, namely its inflexibility, which still prevents it from fulfilling the vision of the ultimate display. Guided by four research questions, we first showcase the applicability of proxy-based VR haptics by employing the technique for data exploration. We then extend the VR system's control over users' haptic impressions in three steps. First, we contribute the class of Dynamic Passive Haptic Feedback (DPHF) alongside two novel concepts for conveying kinesthetic properties, like virtual weight and shape, through weight-shifting and drag-changing proxies. Conceptually orthogonal to this, we study how visual-haptic illusions can be leveraged to unnoticeably redirect the user's hand when reaching towards props. Here, we contribute a novel perception-inspired algorithm for Body Warping-based Hand Redirection (HR), an open-source framework for HR, and psychophysical insights. The thesis concludes by showing that the combination of DPHF and HR can outperform the individual techniques in terms of the achievable flexibility of proxy-based haptic feedback.
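    As a rough illustration of the hand redirection idea, the sketch below offsets the rendered hand toward the virtual object in proportion to the real hand's progress toward the physical prop. The linear warp ratio and all names are assumptions made for this example; the thesis contributes a more refined, perception-inspired algorithm.

```python
import numpy as np

def warped_hand(real_hand, start, real_target, virtual_target):
    """Render the hand shifted toward the virtual target in proportion to
    how far the real hand has travelled toward the physical prop."""
    total = np.linalg.norm(real_target - start)
    progress = np.clip(np.linalg.norm(real_hand - start) / total, 0.0, 1.0)
    return real_hand + progress * (virtual_target - real_target)

start = np.array([0.0, 0.0, 0.0])            # where the reach begins
real_target = np.array([0.0, 0.0, 0.4])      # physical prop position (meters)
virtual_target = np.array([0.1, 0.0, 0.4])   # where the object appears in VR
halfway = np.array([0.0, 0.0, 0.2])
print(warped_hand(halfway, start, real_target, virtual_target))
# [0.05 0.   0.2 ] -- at 50% progress the hand is rendered 5 cm off its true position
```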

    Natural Walking in Virtual Reality: A Review


    Multimodality in VR: A survey

    Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR, its role and benefits in user experience, and the different applications that leverage multimodality across many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.

    Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing

    Emotions play a critical role in our daily lives, so the understanding and recognition of emotional responses is crucial for human research. Affective computing research has mostly used non-immersive two-dimensional (2D) images or videos to elicit emotional states. However, immersive virtual reality, which allows researchers to simulate environments in controlled laboratory conditions with high levels of sense of presence and interactivity, is becoming more popular in emotion research. Moreover, its synergy with implicit measurements and machine-learning techniques has the potential to impact many research areas transversally, opening new opportunities for the scientific community. This paper presents a systematic review of the emotion recognition research undertaken with physiological and behavioural measures using head-mounted displays as elicitation devices. The results highlight the evolution of the field, give a clear perspective using aggregated analysis, reveal the current open issues and provide guidelines for future research. This research was funded by the European Commission, grant number H2020-825585 HELIOS. Marín-Morales, J.; Llinares Millán, M. D. C.; Guixeres Provinciale, J.; Alcañiz Raya, M. L. (2020). Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing. Sensors, 20(18), 1-26. https://doi.org/10.3390/s20185163

    Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses

    In recent years the scientific community has significantly increased its use of virtual reality (VR) technologies in human behaviour research. In particular, the use of immersive VR has grown due to the introduction of affordable, high-performance head-mounted displays (HMDs). Among the fields that have strongly emerged in the last decade is affective computing, which combines psychophysiology, computer science, biomedical engineering and artificial intelligence in the development of systems that can automatically recognize emotions. The progress of affective computing is especially important in human behaviour research due to the central role that emotions play in many background processes, such as perception, decision-making, creativity, memory and social interaction. Several studies have tried to develop a reliable methodology to evoke and automatically identify emotional states using objective physiological measures and machine learning methods. However, the majority of previous studies used images, audio or video to elicit emotional states; to the best of our knowledge, no previous research has developed an emotion recognition system using immersive VR. Although some previous studies analysed physiological responses in immersive VR, they did not use machine learning techniques for biosignal processing and classification. Moreover, a crucial concept when using VR for human behaviour research is validity: the capacity to evoke a response from the user in a simulated environment similar to the response that might be evoked in a physical environment. Although some previous studies have used psychological and cognitive dimensions to compare responses in real and virtual environments, few have extended this research to analyse physiological or behavioural responses. Moreover, to our knowledge, this is the first study to compare VR scenarios with their real-world equivalents using physiological measures coupled with machine learning algorithms, and to analyse the ability of VR to transfer and extrapolate insights obtained from VR environments to real environments. The main objective of this thesis is, using psychophysiological and behavioural responses in combination with machine learning methods, and by performing a direct comparison between a real and a virtual environment, to validate immersive VR as an emotion elicitation tool. To do so we develop an experimental protocol involving emotional 360º environments, an art exhibition in a real museum, and a highly realistic 3D virtualization of the same art exhibition. This thesis provides novel contributions to the use of immersive VR in human behaviour research, particularly in relation to emotions. VR can help in the application of methodologies designed to present more realistic stimuli in the assessment of daily-life environments and situations, thus overcoming the current limitations of affective elicitation, which classically uses images, audio and video. Moreover, it analyses the validity of VR by performing a direct comparison using a highly realistic simulation. We believe that immersive VR will revolutionize laboratory-based emotion elicitation methods. Moreover, its synergy with physiological measurement and machine learning techniques will have a transversal impact on many other research areas, such as architecture, health, assessment, training, education, driving and marketing, and thus open new opportunities for the scientific community. The present dissertation aims to contribute to this progress.
    Marín Morales, J. (2020). Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/148717
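    As a hedged sketch of the pipeline such work describes (per-trial physiological features fed to a supervised classifier), the following uses synthetic stand-ins for the measurements; the feature set, labels, and model choice are illustrative assumptions, not the thesis's protocol.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 200
# Columns stand in for features such as mean heart rate, heart-rate
# variability, and number of electrodermal activity peaks per trial.
X = rng.normal(size=(n_trials, 3))
# Synthetic high/low-arousal labels, loosely coupled to the features.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_trials) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```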

    Eye movements in dynamic environments

    The capabilities of the visual system and the biological mechanisms controlling its active nature are still unequaled by modern technology. Despite the spatial and temporal complexity of our environment, we succeed in tasks that demand extracting relevant information from complex, ambiguous, and noisy sensory data. Dynamically distributing visual attention across multiple targets is an important task. In many situations, for example when driving a vehicle, switching focus between several targets (e.g., the road ahead, mirrors, control panels) is needed to succeed. This is further complicated by the fact that most information gathered during active gaze is highly dynamic (e.g., other vehicles on the street, changes of street direction). Hence, while looking at one of the targets, the uncertainty regarding the others increases. Crucially, we manage to do so despite omnipresent stochastic changes in our surroundings. The mechanisms responsible for how the brain schedules our visual system to access the information we need exactly when we need it are far from understood. In a dynamic world, humans not only have to decide where to look but also when to direct their gaze to potentially informative locations in the visual scene. Our foveated visual apparatus is only capable of gathering information with high resolution within a limited area of the visual field. As a consequence, in a changing environment, we constantly and inevitably lose information about the locations not currently brought into focus. Little is known about how the timing of eye movements is related to environmental regularities and how gaze strategies are learned. This is due to three main reasons. First, to relate the scheduling of eye movements to stochastic environmental dynamics, we need access to those statistics; however, these are usually unknown. Second, to apply the powerful framework of statistical learning theory, we require knowledge of the current goals of the subject; during everyday tasks, the goal structure can be complex, multi-dimensional, and only partially accessible. Third, the computational problem is, in general, intractable: it usually involves learning sequences of eye movements rather than a single action from delayed rewards, under temporal and spatial uncertainty that is further amplified by dynamic changes in the environment.
    In the present thesis, we propose an experimental paradigm specifically designed to target these problems. First, we use simple stimuli with reduced spatial complexity and controlled stochastic behavior. Second, we give subjects explicit task instructions. Finally, the temporal and spatial statistics are designed in a way that significantly simplifies computation and makes it possible to infer several human properties from the action sequences while still using normative models for behavior. We present results from four different studies that show how this approach can be used to gain insights into the temporal structure of human gaze selection. In a controlled setting in which crucial quantities are known, we show how environmental dynamics are learned and used to control several components of the visual apparatus by properly scheduling the time course of actions.
    First, we investigated how endogenous eye blinks are controlled in the presence of nonstationary environmental demands. Eye blinks are linked to dopamine and have therefore been used as a behavioral marker for many internal cognitive processes; they also introduce gaps in the stream of visual information. Empirical results had suggested that 1) blinking behavior is affected by the current activity and 2) is highly variable between participants. We present a computational approach that quantifies the relationship between blinking behavior and environmental demands. In our psychophysical experiment, we show that blinking is the result of a trade-off between task demands and the internal urge to blink. Crucially, we can predict the temporal dynamics of blinking (i.e., the distribution of interblink intervals) for individual blinking patterns.
    Second, we present behavioral data establishing that humans learn to adjust the timing of their eye movements efficiently: more time is spent at locations where meaningful events are short and therefore easily missed. Our computational model further shows how several properties of the visual system determine the timing of gaze. We present a Bayesian learner that fully explains how eye movement patterns change as the event statistics are learned. Thus, humans use temporal regularities learned from observations to adjust the scheduling of eye movements in a nearly optimal way. This is a first computational account towards understanding how eye movements are scheduled in natural behavior.
    After establishing the connection between temporal eye movement dynamics, reward in the form of task performance, and physiological costs for saccades and endogenous eye blinks, we applied our paradigm to study the variability of temporal eye movement sequences within and across subjects. The experimental design facilitates analyzing the temporal structure of eye movements with full knowledge about the statistics of the environment. Hence, we can quantify the internal beliefs about task-relevant properties and can further study how they contribute, in combination with physiological costs, to the variability in gaze sequences. Crucially, we developed a visual monitoring task in which a subject is confronted with the same stimulus dynamics multiple times while learning effects are kept to a minimum. Hence, we are able to compute not only the variability between subjects but also the variability over trials of the same subject. We present behavioral data and results from our computational model showing how the variability of eye movement sequences is related to task properties. Having access to the subjects' reward structure, we are able to show how expected rewards influence the variance in visual behavior.
    Finally, we studied the computational properties underlying the control of eye movement sequences in a visual search task. In particular, we investigated whether eye movements are planned. Research from psychology has revealed that sequences of multiple eye movements are jointly prepared as a scanpath. Here we examine whether humans are capable of finding the optimal scanpath even if doing so requires incorporating more than just the next eye movement into the decision. For a visual search task, we derive an ideal observer as well as an ideal planner based on the framework of partially observable Markov decision processes (POMDPs). The former always takes the action associated with the maximum immediate reward, while the latter maximizes the total sum of rewards over the whole action sequence. We show that, depending on the shape of the search region, the ideal planner and the ideal observer lead to different scanpaths. Following this paradigm, we found evidence that humans are indeed capable of planning scanpaths: the ideal planner explained our subjects' behavior better than the ideal observer did. In particular, the location of the first fixation differed depending on the shape and the time available for the search, a characteristic well predicted by the ideal planner but not by the ideal observer. Overall, our results are the first evidence that our visual system is capable of taking into account future consequences beyond the immediate reward when choosing the next fixation target.
    In summary, this thesis proposes an experimental paradigm that enables us to study the temporal structure of eye movements in dynamic environments. While approaching this computationally is generally intractable, we reduce the complexity of the stimuli along dimensions that do not contribute to the temporal effects. As a consequence, we can collect eye movement data in tasks with a rich temporal structure while being able to compute the internal beliefs of our subjects in a way that is not possible for natural stimuli. We present four different studies that show how this paradigm can lead to new insights into several properties of the visual system. Our findings have several implications for future work. First, we established several factors that play a crucial role in the generation of gaze behavior and have to be accounted for when describing the temporal dynamics of eye movements. Second, future models of eye movements should take into account that delayed rewards can affect behavior. Third, the relationship between behavioral variability and properties of the reward structure is not limited to eye movements; rather, it is a general prediction of the computational framework, so future work can use this approach to study the variability of various other actions. Our computational models have applications in state-of-the-art technology. For example, blink rates are already utilized in vigilance systems for drivers; our model describes the temporal statistics of blinking behavior beyond simple blink rates and also accounts for interindividual differences in eye physiology. Using algorithms that can deal with natural images, e.g., deep neural networks, the environmental statistics can be extracted, and our models can then be used to predict eye movements in daily situations like driving a vehicle.
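    The observer/planner distinction above can be made concrete with a toy example: the ideal observer greedily takes the fixation with the best immediate net reward, while the ideal planner searches over whole fixation sequences. The locations, rewards, and linear saccade cost below are invented for illustration and are not the thesis's POMDP model.

```python
from itertools import permutations

locs = {"A": 0.0, "B": 1.0, "C": 1.2}    # fixation targets along a line
reward = {"A": 3.0, "B": 2.9, "C": 2.9}  # payoff for the first visit to each
COST = 1.0                                # saccade cost per unit distance
START = 0.5                               # initial gaze position

def net(seq):
    """Total reward of a fixation sequence minus saccade costs."""
    pos, total = START, 0.0
    for loc in seq:
        total += reward[loc] - COST * abs(locs[loc] - pos)
        pos = locs[loc]
    return total

# Ideal observer: pick the best immediate step, twice (greedy).
pos, visited, greedy = START, set(), []
for _ in range(2):
    best = max((l for l in locs if l not in visited),
               key=lambda l: reward[l] - COST * abs(locs[l] - pos))
    greedy.append(best); visited.add(best); pos = locs[best]

# Ideal planner: exhaustively score whole two-fixation sequences.
planned = max(permutations(locs, 2), key=net)

print(greedy, round(net(greedy), 2))          # ['A', 'B'] 4.4 -- greedy starts at A
print(list(planned), round(net(planned), 2))  # ['B', 'C'] 5.1 -- planner starts at B
```

    Even in this two-step example, the planner's first fixation differs from the observer's, mirroring the qualitative signature the thesis reports for human scanpaths.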