
    Analysis of head rotation trajectories during a sound localization task

    Dynamic changes in Head-Related Transfer Function renderings as a function of head movement have been shown to be an important cue in sound localization. To investigate the cognitive process of dynamic sound localization, the characteristics of head movements must be quantified. In this study, trajectories of head rotation during a sound localization task were measured and analyzed. Listeners were asked to orient themselves towards the direction of the active sound source, one of five loudspeakers located at 30° intervals in the horizontal plane. A 1 s pink-noise burst stimulus was emitted from the speakers in random order. The range of expected head rotations (EHR) for a given stimulus was therefore 30° to 120°. Head orientation (yaw, pitch, and roll) was measured with a motion capture system. The analysis examined angular velocity, overshoot, and reaction time (RT). Results show that angular velocity increased as EHR increased. No relationship between overshoot and EHR was observed. RT was almost constant (≈260 ms) regardless of EHR. This suggests that dynamic sound localization studies could be difficult with stimuli shorter than 260 ms.
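    The three trajectory measures named above (reaction time, angular velocity, overshoot) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the 2° departure threshold, the sampling grid, and the synthetic trajectory are all assumptions.

```python
import numpy as np

def analyze_rotation(t, yaw, target_deg, rt_threshold_deg=2.0):
    """Extract reaction time (s), peak angular velocity (deg/s), and
    overshoot (deg) from a head-yaw trajectory sampled at times t."""
    yaw = np.asarray(yaw, dtype=float)
    # Reaction time: first sample where yaw departs from rest by the threshold.
    moved = np.abs(yaw - yaw[0]) > rt_threshold_deg
    rt = float(t[np.argmax(moved)]) if moved.any() else float("nan")
    # Peak angular velocity via finite differences.
    peak_vel = float(np.abs(np.gradient(yaw, t)).max())
    # Overshoot: how far the head swung past the target direction.
    overshoot = max(0.0, float(yaw.max()) - target_deg)
    return rt, peak_vel, overshoot

# Synthetic trajectory: rest for 0.26 s, then a smooth rotation that
# settles 3 degrees past a 60-degree target.
t = np.linspace(0.0, 1.5, 1501)
yaw = np.where(t < 0.26, 0.0, 63.0 * (1.0 - np.exp(-8.0 * (t - 0.26))))
rt, peak_vel, overshoot = analyze_rotation(t, yaw, target_deg=60.0)
```

    On this synthetic trajectory the extracted reaction time lands near the 0.26 s onset, mirroring the ≈260 ms RT reported above.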

    City Silhouette, World Climate

    Global emissions of carbon dioxide need to fall lest climate change accelerate. Any effective climate policy must raise the price of carbon consumption. From an urban perspective, one desirable effect of a carbon tax would be to induce households to move closer to where they work. This paper shows that if the initial distribution of commuting distances (the city silhouette) is skewed towards the periphery, then a carbon tax will leave resident landlords better off, even if these landlords must shoulder the extra commuting costs themselves. If resident landlords are decisive, this insight provides an urban-silhouette-based explanation of why some governments appear so much more willing than others to confront their citizens with the true cost of emitting carbon dioxide. More briefly, the paper suggests a connection between urban form and climate politics.

    Inter-frequency band correlations in auditory filtered median plane HRTFs

    Spectral cues in head-related transfer functions (HRTFs), such as peaks and notches occurring above 4 kHz, are important for sound localization in the median plane. However, it may be complicated for the auditory system to detect the absolute frequencies and levels of peaks and notches and map them to three-dimensional positions. It may be more plausible that the auditory system compares the relative level differences between frequency bands produced by the various peaks and notches. With this approach, it is not necessary to detect peaks and notches directly; only level comparisons across frequency bands are needed. In this paper, we analyze level changes of median plane HRTFs in narrow frequency bands using auditory filters and inter-band correlations. These changes are investigated to clarify the effects of peaks and notches on the overall level changes in the corresponding HRTFs. We investigated 105 HRTF sets from the RIEC (Research Institute of Electrical Communication, Tohoku University) database, available in the SOFA format standard. HRTFs were measured at RIEC using a spherical loudspeaker array for individual listeners. Head-related impulse responses (HRIRs) were acquired in the median plane from front (0°) to rear (180°) in 10° steps. Each HRIR was then filtered by a band-limited auditory filter. A Gammatone filterbank was employed, with 40 equivalent-rectangular-bandwidth (ERB) bands over the full audible frequency range (up to 20 kHz). The output power level of the filtered HRIRs for the 19 median plane angles was calculated, yielding 760 values (19 angles × 40 bands) for each listener. From these values, the level change of each frequency band was obtained as a function of median plane angle. We then calculated the correlation of these level-change functions across frequency bands.
    This produced 39 cross-correlation values and 1 auto-correlation per band, i.e. a 40 × 40 correlation matrix for each listener. Examination of the correlation matrices showed similarities that could be summarized by clustering the analyzed bands into five aggregated approximate frequency bands:
    Band-1: 0 to 0.7 kHz; almost no level change observed.
    Band-2: 0.7 to 1 kHz; negative correlation to the odd bands (Band-1, Band-3, Band-5); level changes of approximately 3 dB.
    Band-3: 1 to 6 kHz; as the median plane angle increases, the level decreases by approximately 5 dB.
    Band-4: 6 to 10 kHz; the level decreases as the median plane angle exceeds 120°; negative correlation to Bands 1, 3, and 5.
    Band-5: above 10 kHz; the level decreases by approximately 20 dB until the median plane angle reaches approximately 120°.
    The general observation is that while Band-2 shows a negative correlation, its actual level change is relatively small, so it may be integrated into Band-1 and Band-3. Furthermore, Band-5 has a positive correlation with Band-1 and Band-3. In contrast, Band-4 has a negative correlation and its level change is significant. In addition, Band-4 contains various spectral cues, such as notches and peaks, in the HRTFs; these negative correlations can therefore be caused by both notches and peaks. It should be noted, however, that this correlation analysis was done per HRTF (i.e. per individual), and the exact frequency delimitations of the five aggregated bands and their respective behaviors varied across HRTFs. Further discussion concerns the effects of peaks and notches in HRTFs based on previous experiments evaluating sound localization in the median plane using binaural representations. For these experiments, HRTFs were simplified by removing peaks and notches while the level of each aggregated frequency band was averaged.
    Results showed that median plane sound localization remains possible even without clearly present peaks and notches.
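    The correlation step of the analysis above can be sketched as follows. The Gammatone filtering itself is omitted: the synthetic angle-dependent band levels below are placeholders standing in for the real filtered-HRIR output powers, and the 19 × 40 layout mirrors the abstract's 19 angles × 40 ERB bands.

```python
import numpy as np

rng = np.random.default_rng(0)
n_angles, n_bands = 19, 40                 # 0°..180° in 10° steps; 40 ERB-spaced bands

# Placeholder band output levels (dB): in the study these come from
# Gammatone-filtered HRIRs. Here higher bands fall off more steeply with
# angle, plus measurement-like noise.
angles = np.linspace(0.0, 180.0, n_angles)
slopes = np.linspace(0.0, -20.0, n_bands)
levels = angles[:, None] / 180.0 * slopes[None, :]
levels += rng.normal(0.0, 0.5, levels.shape)

# Correlate each band's level-vs-angle curve with every other band's,
# giving the 40 x 40 matrix (39 cross-correlations + 1 auto per band).
corr = np.corrcoef(levels.T)
```

    Clustering the rows of `corr` (e.g. by thresholding correlation sign and magnitude) would then yield aggregated band groups analogous to Band-1 through Band-5 above.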

    Effect of Large System Latency of Virtual Auditory Display on Listener's Head Movement in Sound Localization Task

    Virtual Auditory Display (VAD) technology is expected to enable the development of new communication tools and many other related applications. However, in computer-network-based communications, large latencies can sometimes occur. Therefore, the influence of large system latency (SL), up to 2 s, on VAD-based sound localization tasks was investigated in terms of the precision and time course of sound localization performance by listeners engaged in head movements. A software VAD system developed by the authors on a Linux PC (with an SL of 12 ms) was used in the experiments. Listeners were asked to indicate the location of a virtual sound source by moving their heads to face the direction of the perceived sound image. Virtual sound sources were presented to the listeners with one of seven amounts of system latency (12, 50, 100, 200, 500, 1000 and 2000 ms). While the latency detection threshold has been estimated at an SL of about 75 ms, no significant influence on sound localization accuracy was observed for any of the tested SLs. On the other hand, the time to conclude the sound localization increased as the SL increased. Moreover, a remarkable overshoot was observed in the listener's head movement, particularly when SL was greater than 500 ms. This strongly suggests that the tolerable SL caused by network communications should be kept smaller than 500 ms for VAD applications.
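    A fixed system latency of the kind tested above can be emulated with a simple ring buffer between the head tracker and the renderer. This is an illustrative sketch only: the 100 Hz tracker rate and the zero-filled (neutral-pose) initial buffer are assumptions, not details of the authors' system.

```python
from collections import deque

class LatencySimulator:
    """Delay a stream of head-tracking samples by a fixed system latency."""
    def __init__(self, latency_s, tracker_hz=100):
        n = max(1, round(latency_s * tracker_hz))
        self.buf = deque([0.0] * n, maxlen=n)   # starts filled with a neutral pose

    def step(self, yaw_deg):
        delayed = self.buf[0]      # oldest sample leaves the buffer...
        self.buf.append(yaw_deg)   # ...as the newest one enters
        return delayed

# With SL = 500 ms at 100 Hz, the rendered orientation lags 50 samples behind.
sim = LatencySimulator(0.5)
outs = [sim.step(float(i)) for i in range(100)]
```

    Rendering from the delayed value rather than the live one reproduces the lag that, per the results above, causes overshoot once SL exceeds about 500 ms.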

    Alternation of Sound Location Induces Visual Motion Perception of a Static Object

    Background: Audition provides important cues with regard to stimulus motion, although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli, or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion. Methodology/Principal Findings: A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized. Conclusions/Significance: We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, which yields the best estimates of external events.
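    The stimulus described above, a sound hopping between left and right with flashes locked to the hop onsets, can be sketched as below. The sample rate, alternation rate, burst length, and carrier frequency are all illustrative assumptions, not the study's parameters.

```python
import numpy as np

FS = 8000        # audio sample rate (Hz); assumption
ALT_HZ = 2.0     # left/right alternation rate (Hz); assumption
DUR_S = 4.0

# Stereo burst train: a 50 ms tone burst hops between left and right
# channels, and each hop onset doubles as a flash trigger.
n = int(FS * DUR_S)
t = np.arange(n) / FS
burst = np.sin(2 * np.pi * 500 * t) * (t % (1 / ALT_HZ) < 0.05)
on_left = np.floor(t * ALT_HZ) % 2 == 0
stereo = np.stack([burst * on_left, burst * ~on_left], axis=1)
flash_times = np.arange(0.0, DUR_S, 1 / ALT_HZ)   # synchronized flash onsets
```

    Desynchronizing `flash_times` from the channel hops would give the control condition in which, per the findings above, the illusion occurs less frequently.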

    Individualization Feature of Head-Related Transfer Functions Based on Subjective Evaluation

    Presented at the 14th International Conference on Auditory Display (ICAD2008) on June 24-27, 2008 in Paris, France. To realize a three-dimensional virtual sound image with a Virtual Auditory Display (VAD), it is important to individualize Head-Related Transfer Functions (HRTFs) for listeners. The purpose of the present study was to establish a fitting method for HRTFs based on a listening test. To this end, a number of sets of virtual sound images were synthesized using HRTFs of different individuals, and listeners were asked to choose the virtual sound images best located on the intended orbits. For such a subjective fitting method to work, it is desirable that the same HRTFs be chosen with stability. In this study, the process of selecting a set of HRTFs subjectively was examined in detail, and the features of HRTF individualization by subjective evaluation were investigated. First, the process of choosing the best of 32 sets of HRTFs by a Swiss-style tournament was repeated ten times, and the regularity of wins in the tournament was examined. The results showed that the same set of HRTFs is not always chosen; the individualization method is probabilistic. The strength of the sets of HRTFs that won the tournament several times was then evaluated: a round-robin comparison with the 130 sets of HRTFs in our HRTF corpus was repeated twenty times. This showed that the subjective evaluation itself is also probabilistic. Moreover, from the round-robin results, the winning percentage of the set of HRTFs that won the tournament was estimated to be about 15%.
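    The Swiss-style selection above can be sketched as follows. The `prefer` judgment, its quality scores, and the noise level are hypothetical stand-ins for a listener's subjective comparison; the noise is what makes the winner vary across repetitions, mirroring the probabilistic outcome reported.

```python
import random

def swiss_tournament(items, prefer, rounds=5, seed=0):
    """Swiss-style selection: each round pairs entries with similar scores,
    and the (possibly noisy) judgment prefer(a, b) awards the winner a point."""
    rng = random.Random(seed)
    scores = {i: 0 for i in items}
    for _ in range(rounds):
        # Sort by score (ties broken randomly) and pair neighbours.
        order = sorted(items, key=lambda i: (-scores[i], rng.random()))
        for a, b in zip(order[::2], order[1::2]):
            scores[prefer(a, b)] += 1
    return max(items, key=lambda i: scores[i])

# Hypothetical "true quality" per HRTF set, plus judgment noise.
quality = {i: float(i) for i in range(32)}
judge_rng = random.Random(1)
def noisy_prefer(a, b):
    return a if quality[a] + judge_rng.gauss(0, 8) > quality[b] else b

winner = swiss_tournament(list(range(32)), noisy_prefer)
```

    Rerunning with a different judge seed can crown a different winner, which is the instability the ten repeated tournaments above were designed to measure.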

    A new rendering method of moving sound with the doppler effect

    Presented at the 11th International Conference on Auditory Display (ICAD2005). This paper presents a new rendering method for moving sound with the Doppler effect. In the conventional rendering method for moving sound, Head-Related Impulse Responses (HRIRs) are simply switched according to the sound position. However, the Doppler effect cannot be added to a sound with this method: when a sound object moves at high speed, its pitch should be controlled by some other rendering mechanism. In our method, each HRIR is divided into two components, an initial delay and a main waveform. The initial delays for the right and left ears are recalculated from the relative speed and the propagation path, and these resultant initial delays are used in rendering. Thereby, the Doppler effect is added to a sound automatically, merely by setting the sound position in this algorithm. Details of the algorithm are discussed in this paper.
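    The core idea, that a time-varying propagation delay produces the Doppler shift by itself, can be sketched for a single ear as below. This is not the authors' implementation: it omits the HRIR main-waveform convolution, uses linear interpolation for the fractional delay, and the sample rate, source speed, and geometry are arbitrary assumptions.

```python
import numpy as np

C = 343.0    # speed of sound (m/s)
FS = 8000    # sample rate (Hz); assumption for this sketch

def render_moving_source(sig, src_pos_fn, ear_pos, fs=FS):
    """Variable-delay rendering: each output sample reads the source signal
    at t - tau(t), where tau is the time-varying propagation delay.
    The changing delay produces the Doppler pitch shift automatically."""
    n = len(sig)
    t = np.arange(n) / fs
    dist = np.linalg.norm(src_pos_fn(t) - ear_pos[:, None], axis=0)
    read_t = t - dist / C
    return np.interp(read_t, t, sig, left=0.0)  # zeros before the wavefront arrives

# A 440 Hz tone on a source flying past the listener at 40 m/s,
# passing 2 m to the side at t = 0.5 s.
t = np.arange(FS) / FS
sig = np.sin(2 * np.pi * 440 * t)
pos = lambda tt: np.stack([40.0 * (tt - 0.5), np.full_like(tt, 2.0)])
out = render_moving_source(sig, pos, ear_pos=np.array([0.0, 0.0]))
```

    While the source approaches, `read_t` advances faster than real time, so the rendered pitch rises; after the flyby it falls, the familiar Doppler sweep.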

    Training effect of a virtual auditory game on sound localization ability of the visually impaired

    Presented at the 11th International Conference on Auditory Display (ICAD2005). It is essential for a visually impaired person to correctly identify the position of a sound source, because such identification enables him/her to recognize his/her surroundings, including obstacles. We developed training equipment to help the visually impaired improve their ability to identify the position of a sound source by applying an auditory display technique. Ten days of training with the system were conducted. As a result, the ability to identify a sound source position improved.

    Detection Thresholds of Sound Image Movement Deteriorate during Sound Localization

    Although a sound position can be localized without head movement, front-back confusion frequently occurs. Moreover, sound localization accuracy, especially with regard to front-back confusion, can be dramatically improved by listener head movement. This clearly shows that sound localization uses both static cues contained in the sound signals reaching the two ears and dynamic cues caused by listener motion. However, there have been few studies concerning spatial hearing in dynamic situations. In this study, therefore, listeners' detection thresholds for movement of a sound stimulus during a sound localization task with head rotation were measured. Participants were first trained to rotate their heads at an indicated speed. During a sound localization trial, they were then instructed to rotate their heads in the direction of a sound stimulus at that speed. A 2AFC paradigm was used: in one of two successive trials, the sound position (azimuthal angle) moved slightly during the rotation, and participants were asked to judge in which trial the sound stimulus moved. Results revealed that detection thresholds were raised while participants rotated their heads, and this effect did not depend on rotation velocity. These findings suggest that a process similar to saccadic suppression in vision exists in dynamic sound localization.
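    In a 2AFC task like the one above, the detection threshold is conventionally read off the psychometric function at 75% correct (midway between chance and perfect). The sketch below uses linear interpolation and illustrative, invented percent-correct data, not the study's measurements, to show how a raised threshold during rotation would appear.

```python
import numpy as np

def threshold_from_2afc(displacements, pct_correct, criterion=75.0):
    """Interpolate the displacement at which 2AFC percent-correct crosses
    the criterion (75% is the conventional 2AFC threshold).
    Assumes pct_correct is monotonically increasing."""
    d = np.asarray(displacements, dtype=float)
    p = np.asarray(pct_correct, dtype=float)
    return float(np.interp(criterion, p, d))

# Illustrative (not measured) data: detection is harder during head
# rotation, so the psychometric function shifts right.
disp     = [1, 2, 4, 8, 16]           # sound displacement (deg)
static   = [52, 60, 78, 95, 99]       # % correct, head still
rotating = [50, 53, 62, 80, 96]       # % correct, head rotating
t_static, t_rot = (threshold_from_2afc(disp, p) for p in (static, rotating))
```

    A higher `t_rot` than `t_static` is the signature of the suppression-like effect reported above.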