
    Crossmodal binding in searching for objects

    EThOS - Electronic Theses Online Service, United Kingdom

    Crossmodal binding in localizing objects outside the field of view

    Using virtual reality techniques, we created a virtual room within which participants could orient themselves by means of a head-mounted display. Participants were required to search for an object, not immediately visible, attached to different parts of the virtual room's walls. The search could be guided by a light and/or a sound emitted by the object. When the object was found, participants engaged it with a sighting circle. The time taken by participants to initiate the search and to engage the target object was measured. Results from three experiments suggest that (1) advantages in starting the search, finding the object, and engaging it were found when the object emitted both light and sound; (2) these advantages disappeared when the visual and auditory information emitted by the object were separated in time by more than 150 ms; (3) misleading visual information produced greater interference than misleading auditory information (e.g., sound from one part of the room, light from the object).

    Comparing effects of 2-D and 3-D visual cues during aurally aided target acquisition

    The aim of the present study was to investigate interactions between vision and audition during a visual target acquisition task performed in a virtual environment. In two experiments, participants were required to perform an acquisition task guided by auditory and/or visual cues. In both experiments the auditory cues were constructed using virtual 3-D sound techniques based on nonindividualized head-related transfer functions. In Experiment 1 the visual cue took the form of a continuously updated 2-D arrow. In Experiment 2 the visual cue was a nonstereoscopic, perspective-based 3-D arrow. The results suggested that virtual spatial auditory cues reduced acquisition time but were not as effective as the virtual visual cues. The 3-D perspective-based arrow produced faster acquisition times than the 2-D arrow, not only in the visually aided conditions but also when the auditory cues were presented in isolation. Suggested novel applications include providing 3-D nonstereoscopic, perspective-based visual information on radar displays, which may lead to better integration with spatial virtual auditory information.

    Motor Intentions versus Social Intentions: One System or Multiple Systems?

    In this fine book, philosopher Pierre Jacob and well-known cognitive neuroscientist Marc Jeannerod collaborate to bring together many key findings on the visual ventral (‘what’) and dorsal (‘where’ and ‘how’) systems. One of Jacob and Jeannerod’s major contributions is to highlight the mechanisms that allow for skilful social interactions. They propose a distinction between the ‘mirror neuron’ system, for perceiving and responding to object-oriented actions, and a ‘social perception network’, devoted to the visual analysis of human actions directed towards conspecifics. In this review we discuss some recent neurophysiological, neuropsychological and brain-imaging studies suggesting that this dichotomy might be too strict.

    Effects of Increasing Visual Load on Aurally and Visually Guided Target Acquisition in a Virtual Environment

    The aim of the present study was to investigate interactions between vision and audition during a target acquisition task performed in a virtual environment. We measured the time taken to locate a visual target (acquisition time) signalled by auditory and/or visual cues under conditions of variable visual load. Visual load was increased by introducing a secondary visual task. The auditory cue was constructed using virtual three-dimensional (3D) sound techniques. The visual cue took the form of a continuously updating 3D arrow. The results suggested that both auditory and visual cues reduced acquisition time compared to an uncued condition. Whereas the visual cue elicited faster acquisition times than the auditory cue, the combination of the two cues produced the fastest acquisition times. The introduction of a secondary visual task differentially affected acquisition time depending on cue modality. In conditions of high visual load, acquiring a target signalled by the auditory cue alone led to slower and more error-prone performance than acquiring a target signalled by either the visual cue alone or by both the visual and auditory cues.