    VR/AR and hearing research: current examples and future challenges

    A well-known issue in clinical audiology and hearing research is the level of abstraction of traditional experimental assessments and methods, which lack ecological validity and differ significantly from real-life experiences, often resulting in unreliable outcomes. Attempts to deal with this matter by, for example, performing experiments in real-life contexts, can be problematic due to the difficulty of accurately identifying and controlling specific parameters and events. Virtual and augmented reality (VR/AR) have the potential to provide dynamic and immersive audiovisual experiences that are at the same time realistic and highly controllable. Several successful attempts have been made to create and validate VR-based implementations of standard audiological and linguistic tests, as well as to design procedures and technologies to assess meaningful and ecologically valid data. Similarly, new viewpoints on auditory perception have been provided by looking at hearing training and auditory sensory augmentation, aiming at improving perceptual skills in tasks such as speech understanding and sound-source localisation. In this contribution, we bring together researchers active in this domain. We briefly describe experiments they have designed, and jointly identify challenges that are still open, along with common approaches to tackle them.

    Forested areas of the Warsaw section of the Vistula River – public perception and the conditions for sustainable development of tourism and recreation function

    Forest from the large city perspective: perception and philosophy contexts

    The world of nature, understood as a space for human existence, is perceived from different perspectives and in different categories. From an economic point of view, it can be treated as a source of raw materials and monetary profit, while from a humanistic or ethical perspective it is a space for the realisation of human aspirations and expectations. In parallel, concepts of the natural environment have three perspectives: global, national and local. Each of them reflects different communities, needs and possibilities for action. In this context, the forest seen from the perspective of a big city, situated between the independence of nature and the influence of human civilisation, appears as a specific creation of the natural world. It is more a part of the human living environment than a fully autonomous creation of nature. The proximity of the city, and the forest's functioning within its borders or on its borderland, make it dependent on human activities. Even if people do not shape the forest for their recreational needs, it is exposed to the influence of factors that degrade nature. The integration of the forest into the everyday space of human life creates the need to adapt the forest to the needs of residents. Even interventions that are 'forest-friendly' from a human point of view (such as thematic paths, boards, trails and viewpoints) are a disturbance to the natural environment. The scope of the article includes a review of various concepts and visions explaining the nature of the relationship between humans and the natural environment. The main objective was to show the special role of the forest functioning in the space of a big city. The adopted assumptions also allowed us to outline the framework of a specific structure of forest perception, which is presented in the final part of the study.

    PHOnA: A public dataset of measured headphone transfer functions

    A dataset of measured headphone transfer functions (HpTFs), the Princeton Headphone Open Archive (PHOnA), is presented. Extensive studies of HpTFs have been conducted over the past twenty years, each requiring a separate set of measurements, but these data have not previously been publicly shared. PHOnA aggregates HpTFs from different laboratories, including measurements for multiple different headphones, subjects, and repositionings of the headphones for each subject. The dataset uses the spatially oriented format for acoustics (SOFA), and SOFA conventions are proposed for efficiently storing HpTFs. PHOnA is intended to provide a foundation for machine learning techniques applied to HpTF equalization. This shared data will allow optimization of equalization algorithms to provide more universal solutions for perceptually transparent headphone reproduction.
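    The equalization task the abstract alludes to can be illustrated with a minimal sketch: given several measured magnitude responses for one headphone (e.g. across repositionings), average them and compute a regularized inverse filter. This is a generic illustration in NumPy, not the PHOnA authors' method; the array shapes, the averaging step, and the regularization constant are all assumptions.

    ```python
    import numpy as np

    def equalization_magnitude(hptf_mags, eps=1e-3):
        """Sketch of HpTF equalization from repeated measurements.

        hptf_mags: array of shape (n_measurements, n_bins) holding linear
        magnitude responses (e.g. several repositionings of one headphone).
        Returns a per-bin equalization magnitude. The regularization term
        eps keeps the inverse bounded at deep spectral notches.
        """
        avg = np.mean(hptf_mags, axis=0)          # average across measurements
        return avg / (avg ** 2 + eps)             # regularized inversion

    # Toy data: two noisy measurements of a nominally flat response.
    rng = np.random.default_rng(0)
    mags = 1.0 + 0.05 * rng.standard_normal((2, 8))
    eq = equalization_magnitude(mags)
    ```

    Applying `eq` bin-by-bin to the average response yields approximately unity gain except at notches, where the regularization deliberately limits the boost.
    
    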

    Evaluation of spatial tasks in virtual acoustic environments by means of modeling individual localization performances

    Virtual acoustic environments (VAEs) are an excellent tool in hearing research, especially in the context of investigating spatial-hearing abilities. On the one hand, the development of VAEs requires a solid evaluation, which can be simplified by applying auditory models. On the other hand, VAE research provides data which can support the further improvement of auditory models. Here, we describe how Bayesian inference can predict listeners' behavior when estimating the spatial direction of a static sound source presented in a VAE experiment. We show which components of the behavioral process are reflected in the model structure. Importantly, we highlight which acoustic cues are important to obtain accurate model predictions of listeners' localization performance in VAEs. Moreover, we describe the influence of spatial priors and sensorimotor noise on response behavior. To account for inter-individual differences, we further demonstrate the necessity of individually calibrating sensory noise parameters in addition to the individual acoustic properties captured in head-related transfer functions.

    A Bayesian model for human directional localization of broadband static sound sources

    Humans estimate sound-source directions by combining prior beliefs with sensory evidence. Prior beliefs represent statistical knowledge about the environment, and the sensory evidence consists of auditory features such as interaural disparities and monaural spectral shapes. Models of directional sound localization often impose constraints on the contribution of these features to either the horizontal or the vertical dimension. Instead, we propose a Bayesian model that flexibly incorporates each feature according to its spatial precision and integrates prior beliefs into the inference process. The model estimates the direction of a single, broadband, stationary sound source presented to a static human listener in an anechoic environment. We simplified interaural features to be broadband and compared two model variants, each considering a different type of monaural spectral feature: magnitude profiles and gradient profiles. Both model variants were fitted to the baseline performance of five listeners and evaluated on the effects of localizing with non-individual head-related transfer functions (HRTFs) and sounds with rippled spectra. We found that the variant equipped with spectral gradient profiles outperformed other localization models. The proposed model appears particularly useful for the evaluation of HRTFs and may serve as a basis for future extensions towards modeling dynamic listening conditions.
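    The core idea of combining a spatial prior with noisy sensory evidence can be sketched as a one-dimensional toy example: a Gaussian likelihood centered on the noisy cue is multiplied by a Gaussian prior centered straight ahead, and the maximum of the posterior is the reported direction. This is a didactic simplification, not the paper's full model; the choice of azimuth grid, the prior centered at 0 degrees, and both widths are illustrative assumptions.

    ```python
    import numpy as np

    def map_direction(azimuths_deg, observed_deg, cue_sigma, prior_sigma):
        """Toy Bayesian direction estimate on a 1-D azimuth grid.

        Likelihood: Gaussian around the noisy observed cue (width cue_sigma).
        Prior: Gaussian centered at 0 deg, i.e. straight ahead (width prior_sigma).
        Returns the MAP direction and the normalized posterior.
        """
        lik = np.exp(-0.5 * ((azimuths_deg - observed_deg) / cue_sigma) ** 2)
        prior = np.exp(-0.5 * (azimuths_deg / prior_sigma) ** 2)
        post = lik * prior
        post /= post.sum()                      # normalize over the grid
        return azimuths_deg[np.argmax(post)], post

    az = np.arange(-90.0, 91.0, 1.0)
    est, post = map_direction(az, observed_deg=40.0, cue_sigma=10.0, prior_sigma=30.0)
    ```

    With these widths the estimate is pulled from the 40-degree cue toward the prior at 0 degrees, which is the qualitative signature of prior beliefs in the response behavior described above.
    
    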