
    Classifying EEG Signals during Stereoscopic Visualization to Estimate Visual Comfort

    With stereoscopic displays, a sensation of depth that is too strong can impede visual comfort and may result in fatigue or pain. We used electroencephalography (EEG) to develop a novel brain-computer interface that monitors users' states in order to reduce visual strain. We present the first system that discriminates comfortable from uncomfortable conditions during stereoscopic vision using EEG. In particular, we show that either changes in event-related potentials' (ERPs) amplitudes or changes in EEG oscillation power following the presentation of stereoscopic objects can be used to estimate visual comfort. Our system reacts within 1 s to depth variations, achieving 63% accuracy on average (up to 76%) and 74% on average (up to 93%) when 7 consecutive variations are measured. Performance remains stable (≈62.5%) when simplified signal processing is used to simulate online analysis or when the number of EEG channels is reduced. This study could lead to adaptive systems that automatically adjust stereoscopic displays to users and viewing conditions. For example, the stereoscopic effect could be matched to the user's state by modifying the overlap of the left and right images according to the classifier output.
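    The abstract does not detail the classification pipeline, but the reported numbers (single-trial accuracy improving when 7 consecutive variations are aggregated) suggest a standard pattern: a per-trial classifier followed by a sliding majority vote. Below is a minimal, hypothetical Python sketch of that idea; the LDA classifier, feature dimensions and labels are assumptions, not the authors' exact method.

```python
# Minimal sketch (not the authors' exact pipeline): classify per-trial EEG
# features with LDA, then majority-vote over 7 consecutive depth variations.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))    # placeholder: one feature vector per depth variation
y = rng.integers(0, 2, size=200)  # 0 = comfortable, 1 = uncomfortable (labels assumed)

clf = LinearDiscriminantAnalysis()
print("single-trial CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

def smooth_votes(predictions, window=7):
    """Majority vote over a sliding window of consecutive predictions."""
    p = np.asarray(predictions)
    return np.array([round(p[max(0, i - window + 1):i + 1].mean())
                     for i in range(len(p))])
```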

    Extraction and Investigation of Biosignal Features for Evaluating Visual Discomfort (Biosignalų požymių regos diskomfortui vertinti išskyrimas ir tyrimas)

    Comfortable stereoscopic perception remains an essential area of research. Growing interest in virtual reality content and the expanding market for head-mounted displays (HMDs) still raise the issue of balancing depth perception against comfortable viewing. Stereoscopic views stimulate binocular cues, one of several human visual depth cues, which become conflicting cues when stereoscopic displays are used. Depth perception from binocular cues is based on matching image features from one retina with corresponding features from the other. Our eyes can tolerate small amounts of retinal defocus, a tolerance known as Depth of Focus; when the magnitudes are larger, visual discomfort arises. The research object of this doctoral dissertation is the level of visual discomfort. The work aimed at the objective evaluation of visual discomfort based on physiological signals. Different levels of disparity and numbers of details in stereoscopic views can make it difficult to quickly find the focus point for comfortable depth perception. During this investigation, a tendency for differences in single-sensor electroencephalography (EEG) signal activity at specific frequencies was found, along with changes in gaze signals collected by an eye tracker. A dataset of EEG and gaze signal records from 28 control subjects was collected and used for further evaluation. The dissertation consists of an introduction, three chapters and general conclusions. The first chapter presents the fundamentals of measuring visual discomfort with objective and subjective methods. The second chapter presents theoretical research results, investigating methods that use physiological signals to detect changes in the level of sense of presence. The third chapter presents the experimental research, which aimed to find differences in the collected physiological signals when the level of visual discomfort changes; an experiment with 28 control subjects was conducted to collect these signals. The results of the thesis were published in six scientific publications: three in peer-reviewed scientific journals and three in conference proceedings. The results were additionally presented at 8 conferences.
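    As a hedged illustration of the kind of analysis described (per-frequency activity from a single EEG sensor), the following Python sketch estimates band power with Welch's method; the sampling rate, band edges and placeholder signal are assumptions.

```python
# Sketch, assuming a single-sensor EEG trace sampled at 256 Hz: estimate
# power in canonical frequency bands with Welch's method, the kind of
# per-frequency activity compared across discomfort levels.
import numpy as np
from scipy.signal import welch

fs = 256                         # sampling rate (assumed)
eeg = np.random.randn(fs * 60)   # placeholder: 60 s from one sensor

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
df = freqs[1] - freqs[0]

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
band_power = {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
              for name, (lo, hi) in bands.items()}
print(band_power)  # compare these values across discomfort levels
```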

    EEG-based neuroergonomics for 3D user interfaces: opportunities and challenges

    3D user interfaces (3DUI) are increasingly used in a number of applications, spanning from entertainment to industrial design. However, 3D interaction tasks are generally more complex for users, since interacting with a 3D environment is more cognitively demanding than perceiving and interacting with a 2D one. It is therefore essential to evaluate user experience finely, in order to propose seamless interfaces. A promising research direction is to measure users' inner state from brain signals acquired during interaction, following a neuroergonomics approach. Combined with existing methods, such a tool can strengthen the understanding of user experience. In this paper, we review the work under way in this area: what has been achieved and the new challenges that arise. We describe how a mobile brain imaging technique such as electroencephalography (EEG) provides continuous and non-disruptive measures. EEG-based evaluation of users can give insights about multiple dimensions of the user experience, with realistic interaction tasks or novel interfaces. We investigate four constructs: workload, attention, error recognition and visual comfort. Ultimately, these metrics could help ease the burden on users when they interact with computers.

    Recent advances in EEG-based neuroergonomics for Human-Computer Interaction

    Human-computer interfaces (HCI) are increasingly ubiquitous in multiple applications, including industrial design, education, art and entertainment. As such, HCIs may be used by very different users, with very different skills and needs. This requires user-centered design approaches and appropriate evaluation methods to maximize User eXperience (UX). Existing evaluation methods include behavioral studies, testbeds, questionnaires and inquiries, among others. While useful, such methods suffer from several limitations: they can be ambiguous, lack real-time recording, or disrupt the interaction. Neuroergonomics can be an adequate tool to complement traditional evaluation methods. Notably, electroencephalography (EEG)-based evaluation of UX has the potential to address the limitations above by providing objective, real-time and non-disruptive metrics of the ergonomic quality of a given HCI (Frey 2014). In this abstract, we present an overview of our recent work in that direction. In particular, we show how EEG signals can be processed to derive metrics characterizing 1) how the user perceives the HCI display (HCI output) and 2) how the user interacts with the HCI (HCI input).
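    As one concrete example of such an EEG-derived metric (not necessarily the one used by the authors), the sketch below computes the classic beta / (alpha + theta) engagement index over sliding windows, yielding the continuous, non-disruptive kind of measure the abstract describes.

```python
# Illustrative sketch: the beta / (alpha + theta) engagement index
# (Pope et al.), computed over sliding windows to mimic a continuous
# measure. Sampling rate, window sizes and the signal are placeholders.
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), fs * 2))
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

def engagement(x, fs):
    """beta / (alpha + theta) ratio for one window of EEG."""
    return band_power(x, fs, 13, 30) / (band_power(x, fs, 8, 13)
                                        + band_power(x, fs, 4, 8))

fs = 256
eeg = np.random.randn(fs * 30)   # placeholder: 30 s of one-channel EEG
window = fs * 4                  # 4 s analysis windows
index = [engagement(eeg[i:i + window], fs)
         for i in range(0, len(eeg) - window, fs)]  # 1 s step
print(index[:5])
```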

    A Neurophysiologic Study Of Visual Fatigue In Stereoscopic-Related Displays

    Two tasks were investigated in this study. The first investigated the effects of display alignment errors on visual fatigue. The experiment revealed the following results. First, EEG data suggested the possibility of cognitively induced time-compensation changes, reflected in real-time brain activity as the eyes tried to compensate for the misalignment. The magnification difference error showed the most significant effects across all EEG bands, an indication of likely visual fatigue consistent with simulator sickness questionnaire (SSQ) increases across all task levels. Vertical shift errors were prevalent in the theta and beta bands, which probably induced alertness (theta band) as a result of possible stress. Rotation errors were significant in the gamma band, implying likely cognitive decline influenced by the theta band. Second, the hemodynamic responses revealed significant differences between the left and right dorsolateral prefrontal cortices due to alignment errors. There was also a significant difference between the main effect for power-band hemisphere and the ATC task sessions; the analyses further revealed significant differences between the dorsal frontal lobes in task processing, as well as interaction effects between the processing lobes and task processing. The second task investigated the effects of cognitive response variables on visual fatigue. Third, the physiologic indicator of pupil dilation was 0.95 mm, occurring at a mean time of 38.1 min, after which pupil dilation began to decrease. After an average saccade rest time of 33.71 min, saccade speeds tended to decrease, a possible sign of fatigue onset. Fourth, a neural network classifier identified visual response data from eye movement as the best predictor of visual fatigue, with a classification accuracy of 90.42%. Experimental data confirmed that 11.43% of the participants actually experienced visual fatigue symptoms after the prolonged task.
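    The abstract does not specify the network or feature set, so the following is only a schematic Python sketch of the fourth finding's setup: a small neural network classifying fatigue from eye-movement features such as pupil dilation and saccade speed. All names, data and dimensions are placeholders.

```python
# Hedged sketch: a small MLP classifying visual fatigue from eye-movement
# features. Architecture and features are assumptions; the study reports
# 90.42% accuracy with its own (unspecified) setup.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# columns: pupil dilation (mm), saccade speed (deg/s), fixation time (ms)
X = rng.normal(size=(400, 3))
y = rng.integers(0, 2, size=400)  # 1 = fatigued (labels assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("held-out accuracy:", net.score(X_te, y_te))
```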

    Improving Human-Computer Interaction and Social Presence with Physiological Computing (Améliorer les interactions homme-machine et la présence sociale avec l’informatique physiologique)

    This thesis explores how physiological computing can contribute to human-computer interaction (HCI) and foster new communication channels among the general public. We investigated how physiological sensors, such as electroencephalography (EEG), could be employed to assess the mental state of users, and how they relate to other evaluation methods. We created the first brain-computer interface that could sense visual comfort during the viewing of stereoscopic images, and shaped a framework that can help assess the overall user experience by monitoring workload, attention and error recognition. To lower the barrier between end users and physiological sensors, we participated in the software integration of a low-cost, open-hardware EEG device; used off-the-shelf webcams to measure heart rate remotely; and crafted wearables that users can quickly put on so that electrocardiography, electrodermal activity or EEG may be measured during public exhibitions. We envisioned new usages for our sensors that would increase social presence. In a study of human-agent interaction, participants tended to prefer virtual avatars that mirrored their own internal state. A follow-up study focused on interactions between users, building on a board game to describe how physiological monitoring could alter our relationships. Advances in HCI enabled us to seamlessly integrate biofeedback into the physical world. We developed Teegi, a puppet that lets novices discover their own brain activity by themselves. Finally, with Tobe, a toolkit that encompasses more sensors and gives more freedom over their visualization, we explored how such a proxy shifts our representations, of ourselves as well as of others.
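    The webcam-based remote heart-rate measurement mentioned above is typically done by tracking subtle color changes of the skin. The sketch below shows the common green-channel approach under stated assumptions (face region already located, 30 fps video); it illustrates the general technique, not the thesis's exact implementation.

```python
# Sketch of remote photoplethysmography: average the green channel over a
# face region, band-pass around plausible heart rates, read the spectral
# peak. ROI detection is omitted; the frame source is a placeholder.
import numpy as np
from scipy.signal import butter, filtfilt

fps = 30.0
# placeholder: mean green-channel value of the face ROI for each frame
green = np.random.randn(int(fps * 30))  # 30 s of samples

b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")  # 42-240 bpm
pulse = filtfilt(b, a, green - green.mean())

spectrum = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
bpm = 60.0 * freqs[spectrum.argmax()]
print(f"estimated heart rate: {bpm:.1f} bpm")
```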

    VR.net: A Real-world Dataset for Virtual Reality Motion Sickness Research

    Researchers have used machine learning approaches to identify motion sickness in VR experiences. These approaches demand an accurately labeled, real-world, and diverse dataset for high accuracy and generalizability. As a starting point to address this need, we introduce VR.net, a dataset offering approximately 12 hours of gameplay video from ten real-world games in ten diverse genres. For each video frame, a rich set of motion sickness-related labels, such as camera/object movement, depth field, and motion flow, is accurately assigned. Building such a dataset is challenging, since manual labeling would require an infeasible amount of time. Instead, we utilize a tool to automatically and precisely extract ground-truth data from 3D engines' rendering pipelines without accessing VR games' source code. We illustrate the utility of VR.net through several applications, such as risk factor detection and sickness level prediction. We continuously expand VR.net and envision its next version offering 10X more data than the current form. We believe that the scale, accuracy, and diversity of VR.net can offer unparalleled opportunities for VR motion sickness research and beyond.
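    Purely as an illustration of how per-frame labels like these might be consumed, here is a hypothetical loader; VR.net's real file layout and field names are not described in the abstract, so every name below is invented.

```python
# Hypothetical consumer of per-frame annotations (camera/object motion,
# depth, motion flow). The JSON-lines layout and field names are invented,
# not VR.net's actual schema.
import json

def load_frame_labels(path):
    """Yield (frame_id, labels) pairs from a JSON-lines annotation file."""
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            yield record["frame_id"], {
                "camera_velocity": record.get("camera_velocity"),
                "mean_depth": record.get("mean_depth"),
                "motion_flow": record.get("motion_flow"),
            }

# usage (hypothetical file):
# for frame_id, labels in load_frame_labels("vrnet_annotations.jsonl"):
#     ...
```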

    TOWARDS STEADY-STATE VISUALLY EVOKED POTENTIALS BRAIN-COMPUTER INTERFACES FOR VIRTUAL REALITY ENVIRONMENTS: EXPLICIT AND IMPLICIT INTERACTION

    In the last two decades, brain-computer interfaces (BCIs) have been investigated mainly for the purpose of implementing assistive technologies able to provide new channels for communication and control to people with severe disabilities. More recently, thanks to technical and scientific advances in the research fields involved, BCIs have gained greater attention as new interaction devices for healthy users. This thesis is dedicated to the latter goal and in particular deals with BCIs based on the steady-state visual evoked potential (SSVEP), which previous works have demonstrated to be one of the most flexible and reliable approaches. SSVEP-based BCIs could find applications in different contexts, but one particularly interesting for healthy users is their adoption as new interaction devices for virtual reality (VR) environments and computer games. Although they have been investigated for several years, BCIs still pose several limitations in terms of speed, reliability and usability with respect to ordinary interaction devices. Despite this, they may provide additional, more direct and intuitive explicit interaction modalities, as well as implicit interaction modalities otherwise impossible with ordinary devices. After a comprehensive review of the research fields underlying a BCI exploiting the SSVEP modality, this thesis presents a state-of-the-art open-source implementation using a mix of pre-existing and custom software tools. The proposed implementation, aimed mainly at interaction with VR environments and computer games, has then been used to perform the several experiments described herein. The initial experiments test the validity of the implementation and show its usability with a commodity bio-signal acquisition device, orders of magnitude less expensive than commonly used ones, representing a step toward practical BCIs for end-user applications. Thanks to its flexibility, the proposed implementation is also used in novel experiments investigating the exploitation of stereoscopic displays to overcome a known limitation of ordinary displays in the context of SSVEP-based BCIs. Finally, novel experiments investigate the use of the SSVEP modality to provide implicit interaction as well: a first proof-of-concept passive BCI based on the SSVEP response is presented and shown to provide information exploitable for prospective applications.
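    The thesis does not specify its detector here, but a common way to exploit the SSVEP response is canonical correlation analysis (CCA) against sinusoidal references at each stimulation frequency. The following Python sketch illustrates that standard scheme; the sampling rate, window length, channel count and frequency set are assumptions.

```python
# Standard CCA-based SSVEP detection sketch: correlate a multichannel EEG
# window with sine/cosine references at each candidate frequency and pick
# the best-matching one. All parameters are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, n_samples = 250, 250 * 2         # 2 s window (assumed)
t = np.arange(n_samples) / fs
eeg = np.random.randn(n_samples, 8)  # placeholder: 8-channel EEG window

def reference(freq, harmonics=2):
    """Sine/cosine reference signals at freq and its harmonics."""
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, harmonics + 1)
                            for f in (np.sin, np.cos)])

def ssvep_score(eeg, freq):
    cca = CCA(n_components=1)
    x, y = cca.fit_transform(eeg, reference(freq))
    return abs(np.corrcoef(x[:, 0], y[:, 0])[0, 1])

stim_freqs = [8.0, 10.0, 12.0, 15.0]  # stimulation frequencies (assumed)
detected = max(stim_freqs, key=lambda f: ssvep_score(eeg, f))
print("detected target frequency:", detected)
```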

    Human factors in the perception of stereoscopic images

    Research into stereoscopic displays is largely divided into how stereo 3D content looks, a field concerned with distortion, and how such content feels to the viewer, that is, comfort. Seldom, however, are these measures presented simultaneously. Both comfortable displays with unacceptable 3D and uncomfortable displays with great 3D are undesirable, and these two scenarios can render conclusions based on research into either measure alone moot and impractical. Furthermore, there is a consensus that more disparity correlates directly with greater viewer discomfort. These experiments, and the dissertation thereof, challenge this notion and argue for a more nuanced account involving acquisition factors such as interaxial distance (IA) and post-processing in the form of horizontal image translation (HIT). This research seeks to measure tolerance limits for viewing comfort and perceptual distortion across different camera separations. In the experiments, HIT and IA were altered together. Following Banks et al. (2009), the stimuli were simple stereoscopic hinges, and the perceived angle was measured as a function of camera separation, comparing predictions from a ray-tracing model with the perceived 3D shape obtained psychophysically. Participants judged the angles of 250 hinges at different camera separations (IA and HIT remained linked across a 20 to 100 mm range, while the angles ranged between 50° and 130°). Comfort data was obtained using a five-point Likert scale for each trial. Stimuli were presented in orthoscopic conditions, with screen and observer field of view (FOV) matched at 45°. The 3D hinge and experimental parameters were run across three distinct series of experiments: the first replicated a typical laboratory scenario where screen position was unchanged (Experiment I), the next presented scenarios representative of real-world applications for a single viewer (Experiments II, III, and IV), and the last presented real-world applications for multiple viewers (Experiment V). While the laboratory scenario revealed that viewer comfort was greatest when a virtual hinge was placed on the screen plane, the single-viewer experiments revealed that into-the-screen stereo stimuli were judged flatter while out-of-screen content was perceived more veridically. The multi-viewer scenario revealed a marked decline in comfort for off-axis viewing but no commensurate effect on distortion; importantly, hinge angles were judged as being the same regardless of off-axis viewing for angles of up to 45°. More specifically, the main results are as follows. 1) Increased viewing distance enhances viewer comfort for stereoscopic perception. 2) The amount of disparity present was not correlated with comfort, nor was comfort correlated with angular distortion. 3) Distortion is affected by hinge placement on screen; there is a significant effect on comfort only when the camera separation is 60 mm. 4) There is a perceptual bias related to the depth orientation of the stimuli: into-the-screen stimuli were judged as flatter than out-of-screen stimuli. 5) Perceived distortion is not affected by oblique viewing, nor is perceived comfort. In conclusion, the laboratory experiment highlights the limitations of extrapolating a controlled empirical stimulus to a less controlled "real world" environment. The typical usage scenarios consistently revealed no correlation between the amount of screen disparity (parallax) in the stimulus and the comfort rating. The final usage scenario revealed a perceptual constancy under off-axis viewing conditions for angles of up to 45°, which, as reported, is not reflected by a typical ray-tracing model. Stereoscopic presentation with non-orthoscopic HIT may give comfortable 3D, but there is good reason to believe that this 3D is not perceived veridically. Comfortable 3D is often incorrectly converged due to differences between the distances specified by disparity and by monocular cues; this conflict between monocular and stereo cues in the presentation of S3D content leads to a loss of veridicality, i.e., a perception of flatness. Correct HIT is therefore recommended as the starting point for creating realistic and comfortable 3D, and the data show this factor to be far more important than limiting screen disparity (i.e., parallax). Based on these findings, this study proposes a predictive model of stereoscopic space for 3D content generators who require flexibility in acquisition parameters. This is important because no data exist for viewing conditions where the acquisition parameters are changed.
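    As a rough illustration of the kind of ray-tracing prediction the dissertation tests against, the sketch below scales perceived depth by the ratio of camera separation to interocular distance and derives the perceived hinge angle trigonometrically. This simplified model and its parameters (65 mm IPD, orthoscopic viewing) are assumptions, not the dissertation's exact formulation.

```python
# Simplified ray-tracing-style prediction: if perceived depth is scaled by
# roughly camera separation / interocular distance, a hinge's perceived
# angle follows from simple trigonometry. A stand-in, not the exact model.
import math

def predicted_hinge_angle(true_angle_deg, camera_sep_mm, ipd_mm=65.0):
    """Perceived hinge angle when depth is compressed/expanded by IA/IPD."""
    k = camera_sep_mm / ipd_mm           # depth scale factor (assumption)
    half = math.radians(true_angle_deg) / 2.0
    perceived_half = math.atan(math.tan(half) / k)
    return math.degrees(2.0 * perceived_half)

for ia in (20, 65, 100):                 # spans the study's 20-100 mm range
    # smaller IA -> compressed depth -> flatter (larger) perceived angle
    print(ia, round(predicted_hinge_angle(90.0, ia), 1))
```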