
    Indiscriminable sounds determine the direction of visual motion

    In cross-modal interactions, top-down controls such as attention and explicit identification of cross-modal inputs have been assumed to play crucial roles in optimization. Here we show that cross-modal associations can be established without such top-down controls. The onsets of two circles producing apparent motion perception were accompanied by indiscriminable sounds consisting of six identical frequencies and one unique frequency. After adaptation to the visual apparent motion with the sounds, the sounds acquired a driving effect on illusory visual apparent motion perception. Moreover, pure tones at each unique frequency of the sounds acquired the same effect after adaptation, indicating that the difference between the indiscriminable sounds was implicitly coded. We further confirmed that the aftereffect did not transfer between eyes. These results suggest that the brain establishes new neural representations between sound frequency and visual motion, without clear identification of the specific relationship between cross-modal stimuli, in early perceptual processing stages.

    Implementation of an Intelligent Force Feedback Multimedia Game

    This is the published version. Copyright De Gruyter. This paper presents the design and programming of an intelligent multimedia computer game enhanced with force feedback. The augmentation of game images and sounds with appropriate force feedback improves the quality of the game, making it more interesting and more interactive. We used Immersion Corporation's force feedback joystick, the I-FORCE Studio computation engine, and the Microsoft DirectX Software Development Kit (SDK) to design the multimedia game, running on the Windows NT operating system. In this game, the world contains circles of different sizes and masses. When the circles hit each other, collisions take place, which are shown to, and felt by, the user. Each collision increases the overall score; the larger the circles, the higher the score increase. The initial score is set to zero, and when the game ends, a lower score represents a better performance. The game was used to examine the behavior of users in different environments through their scores and comments. The analysis of the experimental results supports a comparative study of different multimedia combinations.
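    The collision and scoring rules described above can be sketched in a few lines. The paper states the scoring rule only qualitatively (larger circles yield a higher score increase), so the class, the proportionality constant `k`, and the exact formula below are assumptions for illustration:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Circle:
        x: float
        y: float
        radius: float
        mass: float

    def circles_collide(a: Circle, b: Circle) -> bool:
        # Two circles collide when the distance between their centres
        # is no larger than the sum of their radii.
        dx, dy = a.x - b.x, a.y - b.y
        return (dx * dx + dy * dy) ** 0.5 <= a.radius + b.radius

    def score_increment(a: Circle, b: Circle, k: float = 1.0) -> float:
        # Hypothetical rule: score increase proportional to combined size,
        # matching the paper's "larger circle, higher score increase".
        return k * (a.radius + b.radius)
    ```

    Under this sketch, a game loop would add `score_increment(a, b)` to the running total whenever `circles_collide(a, b)` becomes true for a pair.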

    Storage of Doppler-Shift Information in the Echolocation System of the "CF-FM"-Bat, Rhinolophus ferrumequinum

    The greater horseshoe bat (Rhinolophus ferrumequinum) emits echolocation sounds consisting of a long constant-frequency (CF) component preceded and followed by a short frequency-modulated (FM) component. When an echo returns with an upward Doppler shift, the bat compensates for the frequency shift by lowering the emitted frequency in the subsequent orientation sounds, thereby stabilizing the echo image. The bat can accurately store frequency-shift information during silent periods of at least several minutes. The stored frequency-shift information is not affected by tone bursts delivered during silent periods that do not overlap with an emitted orientation sound. The system for storage of Doppler-shift information has properties similar to a sample-and-hold circuit, with sampling at the time of vocalization and a rather flat slewing rate for the stored frequency information.
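    The sample-and-hold analogy can be made concrete with a minimal sketch. The class and method names are inventions, and the ~83 kHz rest frequency is a commonly cited value for this species rather than a figure from the abstract:

    ```python
    class DopplerShiftHold:
        """Sample-and-hold model of Doppler-shift compensation (illustrative)."""

        def __init__(self, rest_frequency_khz: float = 83.0):
            self.rest_frequency = rest_frequency_khz
            self.emitted = rest_frequency_khz  # stored (held) value

        def on_echo(self, echo_frequency_khz: float) -> float:
            # Sample: an upward Doppler shift lowers the next emitted
            # frequency by the measured shift, stabilizing the echo
            # near the rest (reference) frequency.
            shift = echo_frequency_khz - self.rest_frequency
            if shift > 0:
                self.emitted = self.rest_frequency - shift
            return self.emitted

        def during_silence(self) -> float:
            # Hold: the stored value persists through silent periods,
            # unaffected by tone bursts that do not overlap an emission.
            return self.emitted
    ```

    The "rather flat slewing rate" reported in the paper would correspond to the held value drifting only very slowly between samples, which this sketch idealizes as no drift at all.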

    Speaker-sex discrimination for voiced and whispered vowels at short durations

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar at very short durations but that, as stimulus duration increases, voiced vowel performance would improve relative to whispered vowel performance as pitch information becomes available. This pattern of results was shown for women’s but not for men’s voices. A whispered vowel needs a duration three times longer than a voiced vowel before listeners can reliably tell whether it was spoken by a man or a woman (∌30 ms vs. ∌10 ms). Listeners were half as sensitive to information about speaker sex when it was carried by whispered rather than voiced vowels.

    Monitoring sound levels and soundscape quality in the living rooms of nursing homes : a case study in Flanders (Belgium)

    Recently there has been increasing interest in the acoustic environment, and its perceptual counterpart (i.e., the soundscape), of care facilities, and in their potential to affect the experience of residents with dementia. There is evidence that excessively loud sounds, or poor soundscape quality more generally, can negatively affect the quality of life of people with dementia and increase agitation. The AcustiCare project aims to use the soundscape approach to enhance the Quality of Life (QoL) of residents, to reduce Behavioral and Psychological Symptoms of Dementia (BPSD), and to improve the everyday experience of nursing homes for both residents and staff members. To gain further insight into the sound environments of such facilities, sound level monitoring and soundscape data collection campaigns were conducted in the living rooms of five nursing homes in Flanders. Results showed that sound levels (dB) and loudness levels (sone) did not vary significantly between days of the week, but did vary between moments of the day and between living rooms. From the perceptual point of view, several soundscape attributes and the perceived prominence of different sound source types varied significantly between the living rooms investigated, and a positive correlation was found between sound levels and the number of persons present in the living rooms. These findings call for further attention to the potential role of the sound domain in nursing homes, which should promote (and not merely permit) better living and working conditions for residents and staff.
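    One practical detail behind comparisons like "between moments of the day" is that sound levels in dB must be averaged energetically, not arithmetically, to obtain an equivalent continuous level. A minimal sketch, with invented sample values (the paper's actual measurements are not reproduced here):

    ```python
    import numpy as np

    def leq_db(levels_db: np.ndarray) -> float:
        # Equivalent continuous sound level: convert dB to energy,
        # take the arithmetic mean, convert back to dB.
        return 10 * np.log10(np.mean(10 ** (levels_db / 10)))

    # Hypothetical short-term levels for two moments of the day
    # in one living room (values are illustrative only).
    morning = np.array([55.0, 60.0, 58.0])
    evening = np.array([48.0, 50.0, 47.0])
    print(round(leq_db(morning), 1), round(leq_db(evening), 1))
    ```

    Because louder moments dominate the energetic mean, the equivalent level is always at least as high as the arithmetic mean of the same samples.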

    Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    To date, a number of studies have shown that the receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons that explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. First, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts the spatial information present in the signal. A simple hierarchical ICA extension allowing decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way, by maximizing coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures that allow behaviorally vital inferences about the environment.
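    A toy version of the first demonstration can be sketched with scikit-learn's FastICA. The simulation below is a deliberately crude assumption, encoding azimuth purely as an interaural level difference on random log-spectra, not the paper's naturalistic binaural model; it only illustrates that an unsupervised efficient coding transform can recover spatial information from concatenated binaural features:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)

    # Hypothetical simulation: 500 "sounds", each a 16-bin log-spectrum
    # heard at two ears. Azimuth is encoded as an interaural level
    # difference (ILD) applied with opposite sign at the two ears.
    n, bins = 500, 16
    azimuth = rng.uniform(-1, 1, n)        # -1 = far left, +1 = far right
    spectra = rng.standard_normal((n, bins))
    left = spectra - azimuth[:, None]      # crude ILD model (assumption)
    right = spectra + azimuth[:, None]
    binaural = np.hstack([left, right])    # concatenated binaural feature

    ica = FastICA(n_components=8, random_state=0)
    codes = ica.fit_transform(binaural)

    # If spatial information is recoverable from the learned code, at
    # least one component's activation should track azimuth closely.
    corr = [abs(np.corrcoef(codes[:, k], azimuth)[0, 1]) for k in range(8)]
    print(round(max(corr), 2))
    ```

    The azimuth direction is the only strongly non-Gaussian direction in this toy data, so ICA tends to isolate it in a single component, which is the unsupervised emergence of a "spatially selective" unit in miniature.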

    Bio-inspired broad-class phonetic labelling

    Recent studies have shown that the correct labeling of phonetic classes may help current Automatic Speech Recognition (ASR) when combined with classical parsing automata based on Hidden Markov Models (HMMs). This paper describes a method for Phonetic Class Labeling (PCL) based on bio-inspired speech processing. The methodology is based on the automatic detection of formants and formant trajectories after a careful separation of the vocal and glottal components of speech, and on the operation of CF (Characteristic Frequency) neurons in the cochlear nucleus and cortical complex of the human auditory apparatus. Examples of phonetic class labeling are given, and the applicability of the method to speech processing is discussed.