
    Auditory Induced Vection: Exploring Angular Acceleration Of Sound Sources

    Vection is the term for self-motion illusions: the sensation of bodily movement when no movement is actually occurring. The classic example is sitting in a stationary train while another stationary train alongside begins to move, giving the impression that it is the observer's own train that is moving. Although this sensation is mostly associated with the visual system, studies have demonstrated that the brain has a movement-sensitive area in the auditory cortex and that vection can be induced through the auditory system. Most related studies use binaural reproduction, which has been shown to be effective for auditory-induced vection (AIV). In this project, however, we aim to test the effects of angular acceleration on the induction of auditory vection, reproduced through a multi-channel system of 8 speakers arranged in a circle.
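    One way to render a rotating source on a circular multi-speaker array is pairwise amplitude panning between adjacent speakers. The sketch below is illustrative only, not the study's method: it assumes an 8-speaker circle with equal-power panning, and all function names and parameter values are hypothetical.

```python
import math

N_SPEAKERS = 8
SPACING = 2 * math.pi / N_SPEAKERS  # 45 degrees between adjacent speakers

def source_angle(t, alpha, omega0=0.0, theta0=0.0):
    """Angle (rad) of a source under constant angular acceleration alpha."""
    return theta0 + omega0 * t + 0.5 * alpha * t * t

def speaker_gains(theta):
    """Equal-power gains for the pair of speakers bracketing angle theta."""
    theta %= 2 * math.pi
    i = int(theta // SPACING)                 # speaker just "behind" the source
    frac = (theta - i * SPACING) / SPACING    # position within the pair, 0..1
    gains = [0.0] * N_SPEAKERS
    gains[i] = math.cos(frac * math.pi / 2)
    gains[(i + 1) % N_SPEAKERS] = math.sin(frac * math.pi / 2)
    return gains

# After 2 s at 0.5 rad/s^2 from rest, the source sits at 1 rad (~57 degrees),
# between speaker 1 (45 degrees) and speaker 2 (90 degrees).
g = speaker_gains(source_angle(2.0, 0.5))
```

    Equal-power panning keeps the summed gain energy constant as the source sweeps the circle, so only the angular trajectory (and hence the acceleration profile) varies between conditions.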

    Effects of Auditory Vection Speed and Directional Congruence on Perceptions of Visual Vection

    Spatial disorientation is a major contributor to aircraft mishaps. One potential contributing factor is vection, an illusion of self-motion. Although vection is commonly thought of as a visual illusion, it can also be produced through audition. The purpose of the current experiment was to explore interactions between conflicting visual and auditory vection cues, specifically with regard to the speed and direction of rotation. The ultimate goal was to explore the extent to which aural vection could diminish or enhance the perception of visual vection. The study used a 3 × 2 within-groups factorial design. Participants were exposed to three levels of aural rotation velocity (slower, matched, and faster, relative to visual rotation speed) and two levels of aural rotational congruence (congruent or incongruent rotation), along with two control conditions (visual-only and aural-only). Dependent measures included vection onset time, vection direction judgements, subjective vection strength ratings, vection speed ratings, and horizontal nystagmus frequency. Subjective responses to motion were assessed pre- and post-treatment, and oculomotor responses were assessed before, during, and following exposure to circular vection. The results revealed a significant effect of stimulus condition on vection strength. Specifically, directionally-congruent aural-visual vection resulted in significantly stronger vection than visual or aural vection alone. Directionally-congruent aural-visual vection was perceived as slightly stronger than directionally-incongruent aural-visual vection, but not significantly so. No significant effects of aural rotation velocity on vection strength were observed. The results suggest that directionally-incongruent aural vection could be used as a countermeasure for visual vection, and that directionally-congruent aural vection could be used to improve vection in virtual environments, provided further research is done.
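    The condition set described above can be enumerated directly: the 3 × 2 factorial cells plus the two controls. This is a minimal sketch; the condition labels are assumptions taken from the abstract's wording, not the authors' actual coding scheme.

```python
from itertools import product

# 3 aural rotation velocities (relative to visual rotation speed)
velocities = ["slower", "matched", "faster"]
# 2 levels of aural rotational congruence
congruence = ["congruent", "incongruent"]

# 6 factorial cells from the 3 x 2 within-groups design
conditions = [f"{v}/{c}" for v, c in product(velocities, congruence)]
# plus the two control conditions
conditions += ["visual-only", "aural-only"]

# 6 cells + 2 controls = 8 stimulus conditions per participant
```

    Because the design is within-groups, every participant would experience all 8 conditions, typically in a counterbalanced order.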

    Moving Sounds Enhance the Visually-Induced Self-Motion Illusion (Circular Vection) in Virtual Reality

    While rotating visual and auditory stimuli have long been known to elicit self-motion illusions (“circular vection”), audiovisual interactions have hardly been investigated. Here, two experiments investigated whether visually induced circular vection can be enhanced by concurrently rotating auditory cues that match visual landmarks (e.g., a fountain sound). Participants sat behind a curved projection screen displaying rotating panoramic renderings of a market place. Apart from a no-sound condition, headphone-based auditory stimuli consisted of mono sound, ambient sound, or low-/high-spatial resolution auralizations using generic head-related transfer functions (HRTFs). While merely adding nonrotating (mono or ambient) sound showed no effects, moving sound stimuli facilitated both vection and presence in the virtual environment. This spatialization benefit was maximal for a medium (20 degrees × 15 degrees) field of view (FOV), reduced for a larger (54 degrees × 45 degrees) FOV, and unexpectedly absent for the smallest (10 degrees × 7.5 degrees) FOV. Increasing auralization spatial fidelity (from low, comparable to five-channel home theatre systems, to high, 5 degree resolution) provided no further benefit, suggesting a ceiling effect. In conclusion, both self-motion perception and presence can benefit from adding moving auditory stimuli. This has important implications both for multimodal cue integration theories and the applied challenge of building affordable yet effective motion simulators.
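    One simple way to picture "auralization spatial resolution" is as quantization of the source direction: the rendered azimuth is the true azimuth snapped to the nearest available HRTF direction. The sketch below is a hypothetical illustration, not the study's rendering pipeline; the coarse 72-degree grid is an assumption standing in for a five-channel-like layout.

```python
def rendered_azimuth(azimuth_deg, resolution_deg):
    """Snap a source azimuth to the nearest direction on a uniform grid."""
    return (round(azimuth_deg / resolution_deg) * resolution_deg) % 360

# High spatial fidelity: 5-degree grid, as in the abstract
high = rendered_azimuth(37.0, 5.0)   # snaps to 35 degrees
# Low spatial fidelity: coarse 72-degree grid (assumed 5-channel-like spacing)
low = rendered_azimuth(37.0, 72.0)   # snaps to 72 degrees
```

    The ceiling effect reported above suggests that, for vection, shrinking this quantization error below the coarse grid's level yields no further perceptual benefit.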

    Presence 2005: the eighth annual international workshop on presence, 21-23 September, 2005 University College London (Conference proceedings)

    OVERVIEW (taken from the CALL FOR PAPERS) Academics and practitioners with an interest in the concept of (tele)presence are invited to submit their work for presentation at PRESENCE 2005 at University College London in London, England, September 21-23, 2005. The eighth in a series of highly successful international workshops, PRESENCE 2005 will provide an open discussion forum to share ideas regarding concepts and theories, measurement techniques, technology, and applications related to presence, the psychological state or subjective perception in which a person fails to accurately and completely acknowledge the role of technology in an experience, including the sense of 'being there' experienced by users of advanced media such as virtual reality. The concept of presence in virtual environments has been around for at least 15 years, and the earlier idea of telepresence at least since Minsky's seminal paper in 1980. Recently there has been a burst of funded research activity in this area for the first time with the European FET Presence Research initiative. What do we really know about presence and its determinants? How can presence be successfully delivered with today's technology? This conference invites papers that are based on empirical results from studies of presence and related issues and/or which contribute to the technology for the delivery of presence. Papers that make substantial advances in theoretical understanding of presence are also welcome. The interest is not solely in virtual environments but in mixed reality environments. Submissions will be reviewed more rigorously than in previous conferences. High quality papers are therefore sought which make substantial contributions to the field. Approximately 20 papers will be selected for two successive special issues for the journal Presence: Teleoperators and Virtual Environments. PRESENCE 2005 takes place in London and is hosted by University College London. 
The conference is organized by ISPR, the International Society for Presence Research, and is supported by the European Commission's FET Presence Research Initiative through the Presencia and IST OMNIPRES projects and by University College London.

    Crossmodal perception in virtual reality

    With the proliferation of low-cost, consumer-level head-mounted displays (HMDs) we are witnessing a reappearance of virtual reality. However, there are still important stumbling blocks that hinder the achievable visual quality of the results. Knowledge of human perception in virtual environments can help overcome these limitations. In this work, within the much-studied area of perception in virtual environments, we look into the less explored area of crossmodal perception, that is, the interaction of different senses when perceiving the environment. In particular, we look at the influence of sound on visual perception in a virtual reality scenario. First, we establish the existence of a crossmodal visuo-auditory effect in a VR scenario through two experiments, and find that, similar to what has been reported in conventional displays, our visual perception is affected by auditory stimuli in a VR setup. The crossmodal effect in VR is, however, lower than that present in a conventional display counterpart. Having established the effect, a third experiment looks at visuo-auditory crossmodality in the context of material appearance perception. We test different rendering qualities, together with the presence of sound, for a series of materials. The goal of the third experiment is twofold: testing whether known interactions in traditional displays hold in VR, and finding insights that can have practical applications in VR content generation (e.g., by reducing rendering costs).

    Felt_space infrastructure: Hyper vigilant spatiality to valence the visceral dimension

    Felt_space infrastructure: Hypervigilant spatiality to valence the visceral dimension. This thesis evolves perception as a hypothesis to reframe architectural praxis negotiated through agent-situation interaction. The research questions the geometric principles of architectural ordination to originate the ‘felt_space infrastructure’, a relational system of measurement concerned with the role of perception in mediating sensory space and the cognised environment. The methodological model for this research fuses perception and environmental stimuli, into a consistent generative process that penetrates the inner essence of space, to reveal the visceral parameter. These concepts are applied to develop a ‘coefficient of affordance’ typology, ‘hypervigilant’ tool set, and ‘cognitive_tope’ design methodology. Thus, by extending the architectural platform to consider perception as a design parameter, the thesis interprets the ‘inference schema’ as an instructional model to coordinate the acquisition of spatial reality through tensional and counter-tensional feedback dynamics. Three site-responsive case studies are used to advance the thesis. The first case study is descriptive and develops a typology of situated cognition to extend the ‘granularity’ of perceptual sensitisation (i.e. a fine-grained means of perceiving space). The second project is relational and questions how mapping can coordinate perceptual, cognitive and associative attention, as a ‘multi-webbed vector field’ comprised of attractors and deformations within a viewer-centred gravitational space. The third case study is causal, and demonstrates how a transactional-biased schema can generate, amplify and attenuate perceptual misalignment, thus triggering a visceral niche. 
The significance of the research is that it progresses generative perception as an additional variable for spatial practice, and promotes transactional methodologies to gain enhanced modes of spatial acuity and to extend the repertoire of architectural practice.
