138 research outputs found

    Change blindness: eradication of gestalt strategies

    Get PDF
    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
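
    A minimal sketch, in Python, of the kind of radial "spoke" shift described above: each of the eight rectangles lies on an imaginary spoke from central fixation, and in the second presentation its eccentricity is jittered by ±1 degree of visual angle. The base eccentricity and evenly spaced spokes are illustrative assumptions; only the eight items and the ±1 degree shift come from the abstract.

```python
import math
import random

N_ITEMS = 8                   # eight rectangles, as in the abstract
BASE_ECCENTRICITY_DEG = 5.0   # assumed distance of each rectangle from fixation
SHIFT_DEG = 1.0               # +/-1 degree radial shift stated in the abstract

def spoke_positions(base_ecc, jitter=False):
    """Return (x, y) positions in degrees of visual angle, fixation at (0, 0)."""
    positions = []
    for i in range(N_ITEMS):
        angle = 2 * math.pi * i / N_ITEMS                    # spokes evenly spaced
        ecc = base_ecc
        if jitter:
            ecc += random.choice((-SHIFT_DEG, +SHIFT_DEG))   # shift along the spoke
        positions.append((ecc * math.cos(angle), ecc * math.sin(angle)))
    return positions

first_display = spoke_positions(BASE_ECCENTRICITY_DEG)                 # standard layout
second_display = spoke_positions(BASE_ECCENTRICITY_DEG, jitter=True)   # shifted layout
```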

    Factors that cause cybersickness

    Get PDF
    This section discusses the factors that cause cybersickness.

    The role of multisensory feedback in the objective and subjective evaluations of fidelity in virtual reality environments.

    Get PDF
    The use of virtual reality in academic and industrial research has been expanding rapidly in recent years; therefore, evaluations of the quality and effectiveness of virtual environments are required. The assessment process is usually carried out through user evaluations measured whilst the user engages with the system. The limitations of this method, in terms of its variability and pre- and post-experience user bias, have been recognised in the research literature. Therefore, there is a need to design more objective measures of system effectiveness that could complement subjective measures and provide a conceptual framework for fidelity assessment in VR. There are many technological and perceptual factors that can influence the overall experience in virtual environments. The focus of this thesis was to investigate how multisensory feedback, provided during VR exposure, can modulate a user’s qualitative and quantitative experience in the virtual environment. In a series of experimental studies, the role of visual, audio, haptic and motion cues in objective and subjective evaluations of fidelity in VR was investigated. In all studies, objective measures of performance were collected and compared to the subjective measures of user perception. The results showed that the explicit evaluation of environmental and perceptual factors available within VR environments modulated user experience. In particular, the results showed that a user’s postural responses can be used as a basis for an objective measure of fidelity. Additionally, the role of augmented sensory cues was investigated during a manual assembly task. By recording and analysing the objective and subjective measures, it was shown that augmented multisensory feedback modulated the user’s acceptability of the virtual environment in a positive manner and increased overall task performance. Furthermore, the presence of augmented cues mitigated the negative effects of inaccurate motion tracking and simulation sickness. In a follow-up study, the beneficial effects of virtual training with augmented sensory cues were observed in the transfer of learning when the same task was performed in a real environment. Similarly, when the effects of six-degrees-of-freedom motion cueing on user experience were investigated in a high-fidelity flight simulator, consistent findings between objective and subjective data were recorded. By measuring the pilot’s accuracy in following the desired path during a slalom manoeuvre while perceived task demand was increased, it was shown that motion cueing is related to effective task performance and modulates the levels of workload, sickness and presence. The overall findings revealed that multisensory feedback plays an important role in the overall perception and fidelity evaluation of VR systems and, as such, user experience needs to be included when investigating the effectiveness of sensory feedback signals. Throughout this thesis it was consistently shown that subjective measures of user perception in VR are directly comparable to the objective measures of performance, and therefore both should be used in order to obtain robust results when investigating the effectiveness of VR systems. This conceptual framework can provide an effective method to study human perception, which can in turn provide a deeper understanding of the environmental and cognitive factors that can influence the overall user experience, in terms of fidelity requirements, in virtual reality environments.
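
    As one illustration of comparing objective and subjective measures, the sketch below computes a simple postural-sway metric (total head-path length) per participant and correlates it with a subjective fidelity rating. The synthetic data, the specific sway metric and the use of Pearson's r are assumptions made for illustration, not the analysis reported in the thesis.

```python
import numpy as np
from scipy.stats import pearsonr

def sway_path_length(head_positions):
    """Total path length (metres) of an N x 3 head-position trace recorded during exposure."""
    steps = np.diff(head_positions, axis=0)
    return float(np.sum(np.linalg.norm(steps, axis=1)))

# Synthetic example: one head trace and one 1-7 fidelity rating per participant.
rng = np.random.default_rng(0)
traces = [np.cumsum(rng.normal(scale=1e-3, size=(1000, 3)), axis=0) for _ in range(12)]
ratings = rng.integers(1, 8, size=12)

sway = np.array([sway_path_length(t) for t in traces])
r, p = pearsonr(sway, ratings)
print(f"sway vs. rating: r = {r:.2f}, p = {p:.3f}")
```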

    Naturalistic depth perception and binocular vision

    Get PDF
    Humans continuously move both their eyes to redirect their foveae to objects at new depths. To correctly execute these complex combinations of saccades, vergence eye movements and accommodation changes, the visual system makes use of multiple sources of depth information, including binocular disparity and defocus. Furthermore, during development, both fine-tuning of oculomotor control and correct eye growth are likely driven by complex interactions between eye movements, accommodation, and the distributions of defocus and depth information across the retina. I have employed photographs of natural scenes taken with a commercial plenoptic camera to examine depth perception while varying perspective, blur and binocular disparity. Using a gaze-contingent display with these natural images, I have shown that disparity and peripheral blur interact to modify eye movements and facilitate binocular fusion. By decoupling visual feedback for each eye, I have found it possible to induce both conjugate and disconjugate changes in saccadic adaptation, which helps us understand to what degree the eyes can be individually controlled. To understand the aetiology of myopia, I have developed geometric models of emmetropic and myopic eye shape, from which I have derived psychophysically testable predictions about visual function. I have then tested the myopic against the emmetropic visual system and have found that some aspects of visual function decrease in the periphery at a faster rate in best-corrected myopic observers than in emmetropes. To study the effects of different depth cues on visual development, I have investigated accommodation responses and sensitivity to blur in normal and myopic subjects. This body of work furthers our understanding of oculomotor control and 3D perception, has applied implications regarding discomfort in the use of virtual reality, and provides clinically relevant insights regarding the development of refractive error and potential approaches to prevent incorrect emmetropization.
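
    The sketch below illustrates one piece of apparatus mentioned above, a gaze-contingent display: pixels outside an assumed foveal radius around the current gaze point are blurred, leaving the fovea sharp. The radius, blur width and the hard fovea/periphery split are illustrative assumptions rather than the parameters used in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_contingent_blur(image, gaze_xy, fovea_radius_px=80, blur_sigma=4.0):
    """Blur a greyscale or RGB image everywhere except a disc around gaze_xy."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    outside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 > fovea_radius_px ** 2
    sigma = (blur_sigma, blur_sigma, 0)[:image.ndim]   # never blur across colour channels
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    output = image.astype(float).copy()
    output[outside] = blurred[outside]                 # peripheral pixels get the blur
    return output

# Example: blur everything more than 80 px from a gaze point at the image centre.
frame = np.random.rand(480, 640, 3)
blurred_frame = gaze_contingent_blur(frame, gaze_xy=(320, 240))
```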

    Explaining Self-Motion Perception using Virtual Reality in Patients with Ocular Disease

    Full text link
    Safe mobility requires accurate object and self-motion perception. This involves processing retinal motion generated by optic flow (which changes with eye and head movements) and correctly integrating it with vestibular and proprioceptive cues. Poor sensory feedback of self-motion can lead to an increased risk of accidents, which impacts quality of life. This is further problematic for those with visual deficits, such as central or peripheral vision loss or impaired binocular vision. The expansion of healthcare into the use of virtual reality (VR) has allowed the assessment of sensory and motor performance in a safe environment. An advantage of VR is its ability to generate vection (perceived illusory self-motion) and presence (the sense of being ‘there’). However, a limitation is the potential to develop cybersickness. Initially, the project examined how binocular vision influences vection in a virtual environment. Observers with or without stereopsis (the ability to judge depth binocularly) were asked to compare their perceptual experiences based on psychophysical judgements of magnitude estimation. The findings suggest that the absence of stereopsis impairs accurate judgement of self-motion and reduces perceived presence; however, it was protective against cybersickness. The project then examined the impact of central and peripheral vision loss on self-motion perception by comparing those with age-related macular degeneration (AMD) and glaucoma, respectively. The effects of these visual deficits on sensory conflicts involving visual-vestibular interactions were then assessed. Sensory conflict was imposed by altering the gain of simulated linear head position and angular head orientation to be either compatible or incompatible with head movement in two separate experiments. Fixation was used to control gaze during changes in angular head orientation. Vection and presence were higher in those with AMD compared with those with glaucoma, indicating the importance of regional specificity of visual deficits in self-motion perception. Across studies, vection and presence were predominantly visually mediated despite changes in visual-vestibular sensory conflict. The vestibular system, however, appeared to play a larger role in developing cybersickness. The altered perception of self-motion may worsen mobility, particularly with disease progression. We therefore provide a framework and recommendations for a multidisciplinary, patient-centric model of care to maximise quality of life.
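
    A minimal sketch of the gain manipulation described above: the head movement that drives the virtual camera is the tracked head movement scaled by a gain, so a gain of 1 is visually-vestibularly compatible and any other gain imposes a sensory conflict. The specific gain values and pose representation are assumptions made for illustration.

```python
import numpy as np

def simulated_head_pose(real_translation_m, real_rotation_deg,
                        linear_gain=1.0, angular_gain=1.0):
    """Scale tracked head translation (x, y, z) and rotation (yaw, pitch, roll)
    before they drive the virtual camera."""
    virtual_translation = linear_gain * np.asarray(real_translation_m, dtype=float)
    virtual_rotation = angular_gain * np.asarray(real_rotation_deg, dtype=float)
    return virtual_translation, virtual_rotation

# Compatible condition: virtual motion matches the real head movement (gain = 1).
t_match, r_match = simulated_head_pose([0.05, 0.0, 0.0], [2.0, 0.0, 0.0])
# Conflict condition: visual motion is exaggerated relative to the vestibular signal.
t_conflict, r_conflict = simulated_head_pose([0.05, 0.0, 0.0], [2.0, 0.0, 0.0],
                                             linear_gain=2.0, angular_gain=2.0)
```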

    Science of Facial Attractiveness

    Get PDF

    Varieties of Attractiveness and their Brain Responses

    Get PDF

    Human Machine Interfaces for Teleoperators and Virtual Environments

    Get PDF
    In March 1990, a meeting was held around the general theme of teleoperation research into virtual environment display technology. This is a collection of conference-related fragments that give a glimpse of the potential of the following fields and how they interact: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

    Eye tracking in optometry: A systematic review

    Get PDF
    This systematic review examines the use of eye-tracking devices in optometry, describing their main characteristics, areas of application and the metrics used. Using the PRISMA method, a systematic search of three databases was performed. The search strategy identified 141 reports relevant to this topic, reflecting the exponential growth in the use of eye trackers in optometry over the past ten years. Eye-tracking technology was applied in at least 12 areas of optometry and rehabilitation, the main ones being optometric device technology and the assessment, treatment and analysis of ocular disorders. The main devices reported on were infrared light-based and had image capture frequencies of 60 Hz to 2000 Hz. The main metrics mentioned were fixations, saccadic movements, smooth pursuit, microsaccades and pupil variables. Study quality was sometimes limited in that incomplete information was provided regarding the devices used, the study design, the methods used, participants' visual function and the statistical treatment of data. While there is still a need for more research in this area, eye-tracking devices should be more actively incorporated as a useful tool with both clinical and research applications. This review highlights the robustness this technology offers in obtaining objective information about a person's vision and visual function, with implications for improving visual health services and our understanding of the visual process.
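
    As an example of the metrics listed above, the sketch below applies a simple velocity-threshold rule to classify eye-tracker samples as saccadic or fixation-like. The 30 deg/s threshold and 1000 Hz sampling rate are illustrative assumptions; the devices in the review sampled at 60 Hz to 2000 Hz.

```python
import numpy as np

def classify_saccades(gaze_deg, sample_rate_hz=1000.0, velocity_threshold_dps=30.0):
    """gaze_deg: N x 2 gaze positions in degrees. Returns a boolean saccade mask per sample."""
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * sample_rate_hz
    return np.concatenate(([False], velocity > velocity_threshold_dps))

# Synthetic trace: fixation, a 10-degree horizontal saccade, then another fixation.
gaze = np.vstack([np.zeros((200, 2)),
                  np.linspace([0.0, 0.0], [10.0, 0.0], 20),
                  np.full((200, 2), [10.0, 0.0])])
print(classify_saccades(gaze).sum(), "saccadic samples detected")
```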

    The effects of rotating the self out of the body in the full virtual body ownership illusion

    Get PDF
    It has been shown that it is possible to induce a strong illusion that a virtual body (VB) is one's own body. However, the relative influence of a first-person-perspective (1PP) view of the VB and of spatial coincidence between the real body and the VB remains unclear. We demonstrate a method that permits separation of these two factors. It provides a 1PP view of a VB, supporting visuomotor synchrony between real-body and VB movements, but where the entire scene, including the body, is rotated 15° upwards about the axis connecting the eyes, so that the VB and real body are coincident only along this axis. In a within-subjects study that compared this 15° rotation with a 0° rotation condition, participants reported only slightly diminished levels of perceived ownership of the VB in the rotated condition and did not detect the rotation of the scene. These results indicate that strong spatial coincidence of the virtual and real bodies is not necessary for a full-body ownership illusion. The rotation method used, similar in its effects to vertical prisms, did not produce significant negative side-effects, thus providing a useful methodology for further investigations of body ownership.
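
    A minimal sketch of the manipulation described above: the whole scene, virtual body included, is rotated 15° upward about the axis connecting the two eyes, so the real and virtual bodies remain coincident only along that axis. Treating the interocular axis as the x-axis through a cyclopean eye at the origin is an illustrative assumption.

```python
import numpy as np

def rotate_scene_about_interocular_axis(points, angle_deg=15.0):
    """Rotate N x 3 scene points (metres; cyclopean eye at the origin, x = interocular axis)."""
    a = np.radians(angle_deg)
    rotation_x = np.array([[1.0, 0.0, 0.0],
                           [0.0, np.cos(a), -np.sin(a)],
                           [0.0, np.sin(a),  np.cos(a)]])
    return points @ rotation_x.T

scene = np.array([[0.0, -1.2, 0.6],    # e.g. a point on the virtual body
                  [0.0,  0.0, 2.0]])   # a point straight ahead of the viewer
rotated_scene = rotate_scene_about_interocular_axis(scene)
```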