29 research outputs found

    Perceptual Manipulations for Hiding Image Transformations in Virtual Reality

    Users of virtual reality make frequent gaze shifts and head movements to explore their surrounding environment. Saccades are rapid, ballistic, conjugate eye movements that reposition our gaze, and in doing so create large-field motion on our retina. Due to this high-speed retinal motion, the brain suppresses the visual signals from the eye, a perceptual phenomenon known as saccadic suppression. These moments of visual blindness can help hide graphical display updates in virtual reality. In this dissertation, I investigated how the visibility of various image transformations differed across combinations of saccade and head rotation conditions. Additionally, I studied how hand and gaze interaction affected image change discrimination in an inattentional blindness task. I conducted four psychophysical experiments in desktop or head-mounted VR. In the eye tracking studies, users viewed 3D scenes and were triggered to make a vertical or horizontal saccade. During the saccade, an instantaneous translation or rotation was applied to the virtual camera used to render the scene. Participants were required to indicate the direction of these transformations after each trial. The results showed that the type and size of the image transformation affected change detectability. During horizontal or vertical saccades, rotations about the roll axis were the most detectable, while horizontal and vertical translations were the least noticed. In a second, similar study, I added a constant camera motion to simulate a head rotation, and in a third study, I compared active head rotation with a simulated rotation or a static head. I found less sensitivity to transsaccadic horizontal than to vertical camera shifts during simulated or real head pan. Conversely, during simulated or real head tilt, observers were less sensitive to transsaccadic vertical than horizontal camera shifts.
In addition, in my multi-interactive inattentional blindness experiment, I compared sensitivity to sudden image transformations when a participant used their hand and gaze to move and watch an object versus when they only watched it move. The results confirmed that when a participant is involved in a primary task that requires focus and attention with two interaction modalities (gaze and hand), a visual stimulus can be hidden better than when only one sense (vision) is involved. Understanding the effect of continuous head movement and attention on the visibility of a sudden transsaccadic change can help optimize the visual performance of gaze-contingent displays and improve user experience. Perceptually suppressed rotations or translations can be used to introduce imperceptible changes in virtual camera pose in applications such as networked gaming, collaborative virtual reality, and redirected walking. This dissertation suggests that such transformations can be more effective and more substantial during active or passive head motion. Moreover, inattentional blindness during an attention-demanding task provides additional opportunities for imperceptible updates to a visual display.
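The gaze-contingent paradigm described above can be sketched in a few lines: detect a saccade from gaze velocity, then apply the pending camera change while suppression hides it. This is a minimal illustration, not the dissertation's implementation; the velocity threshold and all function names are assumptions for the sketch.

```python
import math

# Assumed velocity threshold for saccade detection; gaze-contingent systems
# commonly use values in the tens to hundreds of deg/s.
VELOCITY_THRESHOLD_DEG_S = 100.0

def angular_velocity(gaze_a, gaze_b, dt):
    """Approximate angular gaze velocity (deg/s) from two gaze samples (deg)."""
    dx = gaze_b[0] - gaze_a[0]
    dy = gaze_b[1] - gaze_a[1]
    return math.hypot(dx, dy) / dt

def apply_transsaccadic_update(camera_yaw, prev_gaze, gaze, dt, pending_offset_deg):
    """If a saccade is detected, apply the pending camera rotation now,
    while saccadic suppression makes the change hard to notice."""
    if angular_velocity(prev_gaze, gaze, dt) > VELOCITY_THRESHOLD_DEG_S:
        camera_yaw += pending_offset_deg  # hidden during the saccade
        pending_offset_deg = 0.0          # offset consumed
    return camera_yaw, pending_offset_deg
```

For example, a gaze jump of 5 degrees between samples 10 ms apart (500 deg/s) would trigger the update, whereas slow pursuit motion would leave the camera and the pending offset untouched.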

    Memory-Based Active Visual Search for Humanoid Robots


    Cortical Mechanisms for Transsaccadic Perception of Visual Object Features

    The cortical correlates of transsaccadic perception (i.e., the ability to perceive, maintain, and update information across rapid eye movements, or saccades; Irwin, 1991) have been little investigated. Previously, Dunkley et al. (2016) found evidence of transsaccadic updating of object orientation in specific intraparietal (i.e., supramarginal gyrus, SMG) and extrastriate occipital (putative V4) regions. Based on these findings, I hypothesized that transsaccadic perception may rely on a single cortical mechanism. In this dissertation, I first investigated whether activation in these regions would generalize to another modality (i.e., motor/grasping) for the same feature (orientation) change, using a functional magnetic resonance imaging (fMRI) event-related paradigm in which participants grasped a three-dimensional rotatable object after either fixations or saccades. The findings from this experiment further support the role of SMG in transsaccadic updating of object orientation and provide a novel view of traditional reach/grasp-related regions in their ability to update grasp-related signals across saccades. In the second experiment, I investigated whether parietal cortex (e.g., SMG) plays a general role in the transsaccadic perception of other low-level object features, such as spatial frequency. The results point to the engagement of a different, posteromedial extrastriate (i.e., cuneus) region for transsaccadic perception of spatial frequency changes. This indirect assessment of transsaccadic interactions for different object features suggests that feature-sensitive mechanisms may exist. In the third experiment, I tested the cortical correlates directly for two object features: orientation and shape. In this experiment, only posteromedial extrastriate cortex was associated with transsaccadic feature updating in the feature discrimination task, as it showed both saccade and feature modulations.
Overall, the results of these three neuroimaging studies suggest that transsaccadic perception may be brought about not by a single, general mechanism but instead through multiple, feature-dependent cortical mechanisms. Specifically, the saccade system communicates with inferior parietal cortex for transsaccadic judgements of orientation in an identified object, whereas a medial occipital system is engaged for feature judgements related to object identity.

    Perceptual stability during saccadic eye movements

    Humans and other primates perform multiple fast eye movements per second in order to redirect gaze within the visual field. These so-called saccades challenge visual perception: during the movement phases, the projection of the outside world sweeps rapidly across the photoreceptors, altering the retinal positions of objects that are otherwise stable in the environment. Despite this ever-changing sensory input, the brain creates the percept of a continuous, stable visual world. Currently, it is assumed that this perceptual stability is achieved by the synergistic interplay of multiple mechanisms, for example, a reduction of the sensitivity of the visual system around the time of the eye movement ('saccadic suppression') as well as transient reorganizations in the neuronal representations of space ('remapping'). This thesis comprises six studies on trans-saccadic perceptual stability.

    Recovering the positions of an object in space across eye and head movements

    The visual system has evolved to deal with the consequences of our own movements on our perception. In particular, evolution has given us the ability to perceive our visual world as stable and continuous despite large shifts of the image on our retinas whenever we move our eyes, head or body. Animal studies have recently shown that in some cortical and sub-cortical areas involved in attention and saccade control, neurons are able to anticipate the consequences of voluntary eye movements on their visual input. These neurons predict how the world will look after a saccade by remapping the location of each attended object to the place it will occupy following the saccade. In a series of studies, we first showed that remapping can be evaluated non-invasively in humans with simple apparent-motion targets. Using eye movement recordings and psychophysical methods, we evaluated the distribution of remapping errors across the visual field and found that saccade compensation was fairly accurate. The pattern of errors observed supports a model of space constancy based on the remapping of attention pointers and excludes other known models. Then, using targets that moved continuously while a saccade was made across the motion path, we were able to visualize the remapping processes directly. With this novel method we again demonstrated systematic errors of correction for the saccade, best explained by inaccurate remapping of the expected locations of moving targets.
We then extended our model to other body movements and studied the contribution of sub-cortical receptors (otoliths and semicircular canals) to the maintenance of space constancy across head movements. Contrary to studies reporting almost perfect compensation for head movements, we observed breakdowns of space constancy for head tilt as well as for head translation. We also tested the remapping of target locations to correct for saccades at the very edge of the visual field, a remapping that would place the expected target location outside the visual field. Our results suggest that the visual areas involved in remapping construct a global representation of space extending beyond the traditional visual field. Finally, we conducted two experiments to determine the allocation of attention across saccades. We demonstrated that the attention captured by a brief transient is remapped to the correct spatial location after the eye movement, and that this shift can be observed even before the saccade. Taken together, these results demonstrate the role of attention pointers in recovering the positions of an object in space, as well as the ability of behavioral measurements to address a topic pioneered by electrophysiologists.
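The geometric core of the remapping idea above can be stated compactly: when the eye moves, a world-stable target's retinal coordinates shift by the opposite of the saccade vector, and predictive remapping anticipates that shift. The following sketch only illustrates this geometry under idealized, error-free compensation; the function name is an assumption, and the studies above show that real compensation carries systematic errors.

```python
def remap_location(retinal_pos, saccade_vector):
    """Predict a world-stable target's retinal position after a saccade.

    retinal_pos: (x, y) position on the retina before the saccade, in degrees.
    saccade_vector: (dx, dy) displacement of gaze, in degrees.
    A stable target's retinal image shifts opposite to the eye movement.
    """
    return (retinal_pos[0] - saccade_vector[0],
            retinal_pos[1] - saccade_vector[1])
```

For instance, a target 10 degrees to the right of fixation, after an 8-degree rightward saccade, is predicted to land 2 degrees to the right of the new fixation.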

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.