730 research outputs found

    Using natural versus artificial stimuli to perform calibration for 3D gaze tracking

    The presented study tests which type of stereoscopic image, natural or artificial, is better suited to performing efficient and reliable calibration for tracking observers' gaze in 3D space with a classical 2D eye tracker. We measured horizontal disparities, i.e. the difference between the x coordinates of the two eyes as recorded by a 2D eye tracker. This disparity was recorded for each observer at several target positions they had to fixate. Target positions were equally distributed in 3D space: some on the screen (null disparity), some behind the screen (uncrossed disparity), and others in front of the screen (crossed disparity). We tested different regression models (linear and non-linear) to predict either the true disparity or the depth from the measured disparity. Models were tested and compared on their prediction error for new targets at new positions. First, we obtained more reliable disparity measures with natural stereoscopic images than with artificial ones. Second, we found that a non-linear model was overall more efficient. Finally, we discuss the fact that our results were observer-dependent, with variability between observers' behaviour when looking at 3D stimuli. Because of this variability, we propose computing an observer-specific model to accurately predict each observer's gaze position when exploring 3D stimuli.
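The model-comparison step described in this abstract can be sketched as follows. All data and names here are hypothetical illustrations, not the study's actual measurements: we fit a linear and a non-linear (quadratic) regression of target depth on measured disparity, then compare their leave-one-out prediction error on held-out targets, mirroring the paper's test on new targets at new positions.

```python
import numpy as np

# Hypothetical calibration data: measured horizontal disparity (pixels)
# for fixated targets, and each target's depth relative to the screen
# (cm; negative = in front / crossed, positive = behind / uncrossed).
measured_disparity = np.array([-12.0, -8.0, -4.0, 0.0, 4.0, 8.0, 12.0])
true_depth = np.array([-9.5, -6.8, -3.1, 0.2, 3.4, 7.9, 13.6])

def fit_and_eval(deg, train, test):
    """Fit a polynomial of the given degree on the training targets,
    return the RMSE of its depth predictions on the test targets."""
    coeffs = np.polyfit(measured_disparity[train], true_depth[train], deg)
    pred = np.polyval(coeffs, measured_disparity[test])
    return float(np.sqrt(np.mean((pred - true_depth[test]) ** 2)))

# Leave-one-out comparison: each target in turn is the "new position".
idx = np.arange(len(measured_disparity))
for deg, label in [(1, "linear"), (2, "quadratic")]:
    errs = [fit_and_eval(deg, idx != i, idx == i) for i in idx]
    print(f"{label}: mean leave-one-out RMSE = {np.mean(errs):.2f} cm")
```

An observer-specific model, as the authors propose, would simply refit these coefficients on each observer's own calibration targets.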

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149-164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Improving the performance of GIS/spatial analysts through novel applications of the Emotiv EPOC EEG headset

    Geospatial information systems are used to analyze spatial data to provide decision makers with relevant, up-to-date information. The processing time required to produce this information is a critical component of response time. Despite advances in algorithms and processing power, many "human-in-the-loop" factors remain. Given the limited number of geospatial professionals, it is very important that analysts use their time effectively. Automating and speeding up the human-computer interactions of common tasks, without disrupting analysts' workflow or attention, is highly desirable. The following research describes a novel approach to increasing productivity with a wireless, wearable electroencephalograph (EEG) headset within the geospatial workflow.

    Direct interaction with large displays through monocular computer vision

    Large displays are everywhere, and have been shown to provide higher productivity gains and user satisfaction compared to traditional desktop monitors. The computer mouse remains the most common input tool for users to interact with these larger displays. Much effort has been made to make this interaction more natural and intuitive for the user. The use of computer vision for this purpose has been well researched, as it provides freedom and mobility to users and allows them to interact at a distance. Interaction that relies on monocular computer vision, however, has not been well researched, particularly when used to recover depth information. This thesis aims to investigate the feasibility of using monocular computer vision to allow bare-hand interaction with large display systems from a distance. By taking into account the location of the user and the available interaction area, a dynamic virtual touchscreen can be estimated between the display and the user. In the process, theories and techniques that make interaction with a computer display as easy as pointing to real-world objects are explored. Studies were conducted to investigate the way humans naturally point at objects with their hands and to examine the inadequacies of existing pointing systems. Models that underpin the pointing strategies used in many previous interactive systems were formalized. A proof-of-concept prototype was built and evaluated through various user studies. The results of this thesis suggest that it is possible to allow natural user interaction with large displays using low-cost monocular computer vision. Furthermore, the models developed and lessons learnt in this research can assist designers in developing more accurate and natural interactive systems that make use of humans' natural pointing behaviours.
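The pointing models this thesis formalizes can be illustrated with a basic geometric sketch: treat the pointing direction as a ray (here, from the eye through the fingertip, one common model among several) and intersect it with the display plane to obtain the cursor position. The coordinates, frame, and function name below are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def ray_plane_cursor(eye, hand, plane_point, plane_normal):
    """Intersect the eye->hand pointing ray with the display plane.

    Returns the 3D point where the user is pointing, or None if the
    ray is parallel to the plane or points away from the display.
    All coordinates share one (hypothetical) world frame, in metres.
    """
    direction = hand - eye
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the display plane
    t = np.dot(plane_normal, plane_point - eye) / denom
    if t <= 0:
        return None  # intersection is behind the user
    return eye + t * direction

# Hypothetical setup: the display is the plane z = 0, the user stands 2 m away.
eye = np.array([0.0, 1.6, 2.0])    # eye position
hand = np.array([0.2, 1.4, 1.5])   # fingertip position
cursor = ray_plane_cursor(eye, hand,
                          plane_point=np.array([0.0, 0.0, 0.0]),
                          plane_normal=np.array([0.0, 0.0, 1.0]))
# cursor lies on the display plane (z = 0)
```

A "virtual touchscreen" in this spirit would be an intermediate plane between user and display; monocular depth recovery then reduces to deciding when the fingertip crosses that plane.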

    A Neurophysiologic Study Of Visual Fatigue In Stereoscopic Related Displays

    Two studies were conducted. The first investigated the effects of display alignment errors on visual fatigue, and the experiment revealed the following results. First, EEG data suggested the possibility of cognitively induced time-compensation changes, reflected in real-time brain activity as the eyes tried to compensate for the misalignment. The magnification-difference error showed the most significant effects across all EEG band waves, an indication of likely visual fatigue, consistent with increases in simulator sickness questionnaire (SSQ) scores across all task levels. Vertical shift errors were prevalent in the theta and beta bands of the EEG, probably inducing alertness (in the theta band) as a result of possible stress. Rotation errors were significant in the gamma band, implying a likelihood of cognitive decline because of theta-band influence. Second, the hemodynamic responses revealed significant differences between the left and right dorsolateral prefrontal cortex due to alignment errors. There was also a significant difference between the main effect of power-band hemisphere and the ATC task sessions; the analyses further revealed significant differences between the dorsal frontal lobes in task processing and interaction effects between the processing lobes and task processing. The second study investigated the effects of cognitive response variables on visual fatigue. Third, the physiologic indicator of pupil dilation peaked at 0.95 mm at a mean time of 38.1 min, after which pupil dilation began to decrease. After an average saccade rest time of 33.71 min, saccade speeds tended to decrease, a possible sign of fatigue onset. Fourth, a neural network classifier identified visual response data from eye movements as the best predictor of visual fatigue, with a classification accuracy of 90.42%. 
    Experimental data confirmed that 11.43% of the participants actually experienced visual fatigue symptoms after the prolonged task.
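The final classification step described above can be sketched schematically. The features, synthetic data, network size, and training procedure below are illustrative assumptions only; the study's actual classifier, inputs, and reported 90.42% accuracy come from its own experimental data.

```python
import numpy as np

# Hypothetical eye-movement features per session.
rng = np.random.default_rng(0)
n = 400
saccade_speed = rng.normal(300.0, 40.0, n)   # deg/s (illustrative)
pupil_dilation = rng.normal(0.5, 0.2, n)     # mm    (illustrative)
X = np.column_stack([saccade_speed / 300.0, pupil_dilation])
# Synthetic ground truth: slower saccades and larger dilation -> fatigued.
y = ((saccade_speed < 290) & (pupil_dilation > 0.5)).astype(float)

# A minimal one-hidden-layer neural network trained by gradient descent
# on the mean logistic loss.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predicted fatigue probability
    g = (p - y) / n                   # gradient of loss w.r.t. logits
    W2 -= 0.5 * h.T @ g; b2 -= 0.5 * g.sum()
    gh = np.outer(g, W2) * (1 - h**2) # backprop through tanh
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(axis=0)

accuracy = float(((p > 0.5) == (y > 0.5)).mean())  # training accuracy
```

In practice one would of course evaluate on held-out sessions rather than on the training set.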

    Spatial cognition in virtual environments

    Since the last decades of the past century, Virtual Reality (VR) has been developed not only as a set of helpful applications in the medical field (training for surgeons, but also rehabilitation tools) but also as a research methodology. There is still no scientific agreement on whether using this technology in research on cognitive processes allows us to generalize results found in a Virtual Environment (VE) to human behaviour or cognition in the real world. This is because a series of differences found in basic perceptual processes (for example, depth perception) suggests a large difference in the visual environmental representation capabilities of virtual scenarios. On the other hand, quite a few studies in the literature attest to the reliability of VEs in more than one field (training and rehabilitation, but also some research paradigms). The main aim of this thesis is to investigate whether, and in which cases, these two different views can be integrated and shed new light on the use of VR in research. Through the many experiments conducted at the "Virtual Development and Training Center" of the Fraunhofer Institute in Magdeburg, we addressed both low-level spatial processes (within a distance-evaluation paradigm) and high-level spatial cognition (using a navigation and visuospatial planning task called "3D Maps"), while also addressing practical problems such as the use of stereoscopy in VEs and the problem of simulator sickness during navigation in immersive VEs. The results of our research fill some gaps in the literature on spatial cognition in VR and allow us to suggest that the use of VEs in research is quite reliable, mainly when the investigated processes are of a higher level of complexity. 
    In that case, the human brain "adapts" quite well even to a "new" reality like the one offered by VR, provided, of course, a familiarization period and the possibility of interacting with the environment; behaviour will then be "as if" the environment were real. What is strongly lacking at the moment is the possibility of delivering a completely multisensory experience, which is very important in order to get the best from this kind of "visualization" of an artificial world. From a low-level point of view, we can confirm what has already been found in the literature: there are some basic differences in how our visual system perceives important spatial cues such as depth and the relationships between objects, and therefore we cannot speak of VR and reality as "similar environments". The idea that VR is a "different" reality, offering potentially unlimited possibilities of use, even overcoming some physical limits of the real world, and that this "new" reality can be acquired by our cognitive system simply by interacting with it, is therefore discussed in the conclusions of this work.
