16,843 research outputs found

    Sound Navigation System Based on Kansei Interaction

    We developed a sound navigation system that interacts with movement using Kansei behavioral information. The system is based on unconscious human behavior, such as putting the hands over the ears while listening to something carefully. We collected candid videos of people in order to observe this behavior. Observing unconscious behavior is important when designing the flow of a tangible interactive system, because the data collected from observation can be applied to an emotion-based interface that controls the system. The sound navigation system was developed into sound scope headphones that focus on the sound of a target instrument in an orchestra or jazz band. The target sound can be changed from one instrument to another by turning the head toward the perceived direction of the target instrument. The headphones were equipped with three sensors: a digital compass to detect head orientation when turning left and right, an acceleration sensor to detect looking up and down, and a bend sensor that emphasizes the target sound when the hands are placed over the ears. We found that users ranging from young children to elderly people successfully controlled the headphones and were satisfied with the easy and novel interaction between their movements and the sound.
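
    As a rough illustration of how head-orientation sensing could drive per-instrument emphasis of this kind, the following minimal Python sketch maps a compass heading and a 0-to-1 bend-sensor reading to gains for a few instrument stems. The instrument placements, angular width, and gain law are invented for illustration and are not taken from the paper.

        import math

        # Hypothetical azimuths (degrees) at which each instrument is "placed" on stage.
        INSTRUMENT_AZIMUTHS = {"violin": -40.0, "piano": 0.0, "trumpet": 35.0}

        def instrument_gains(head_azimuth_deg, bend_value, width_deg=30.0):
            """Map head orientation and a 0..1 bend-sensor reading to per-instrument gains.

            Instruments closer to the facing direction get higher gain; covering the
            ears (bend_value -> 1) narrows the focus on the nearest instrument.
            """
            # Narrow the "listening beam" as the bend sensor is pressed.
            effective_width = width_deg * (1.0 - 0.7 * bend_value)
            gains = {}
            for name, azimuth in INSTRUMENT_AZIMUTHS.items():
                offset = abs(head_azimuth_deg - azimuth)
                # Gaussian-like falloff with angular distance from the facing direction.
                gains[name] = math.exp(-(offset / effective_width) ** 2)
            return gains

        if __name__ == "__main__":
            # Facing slightly right with hands cupped over the ears.
            print(instrument_gains(head_azimuth_deg=30.0, bend_value=0.8))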

    Hearing in three dimensions: Sound localization

    The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of auditory experience. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding: control of stimulus parameters and quantification of the subjective experience are difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, offer the potential for rapid progress, and research using these new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
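
    For a concrete feel of the interaural time difference cue mentioned above, the sketch below evaluates the classical Woodworth spherical-head approximation, ITD ~= (r/c)(theta + sin theta). The head radius and azimuths are illustrative values only, not figures from this text.

        import math

        def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
            """Approximate interaural time difference (seconds) for a spherical head.

            Uses the Woodworth formula ITD = (r / c) * (theta + sin(theta)),
            valid for a distant source at azimuth theta from the median plane.
            """
            theta = math.radians(azimuth_deg)
            return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

        if __name__ == "__main__":
            for az in (0, 15, 45, 90):
                print(f"azimuth {az:3d} deg -> ITD {woodworth_itd(az) * 1e6:6.1f} microseconds")

    For a typical head radius this gives roughly 650 microseconds at 90 degrees, consistent with the magnitude usually quoted for the ITD cue.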

    Head-related Impulse Response Cues for Spatial Auditory Brain-computer Interface

    This study provides a comprehensive test of head-related impulse response (HRIR) cues for a spatial auditory brain-computer interface (saBCI) speller paradigm, presented as a comparison with the conventional virtual sound, headphone-based spatial auditory modality. We propose and optimize three types of sound spatialization settings using variable elevation in order to evaluate the efficacy of HRIR cues for the saBCI. Three experienced and seven naive BCI users participated in three experimental setups based on ten presented Japanese syllables. The recorded EEG auditory evoked potentials (AEPs) showed encouragingly good and stable P300 responses in online BCI experiments. Our case study indicated that users could perceive elevation in the saBCI experiments generated using an HRIR measured from a general head model. The saBCI accuracy and information transfer rate (ITR) scores improved compared to the classical horizontal-plane-based virtual spatial sound reproduction modality, as far as the healthy users in the current pilot study are concerned.
    Comment: 4 pages, 4 figures, accepted for EMBC 2015, IEEE copyright
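
    The information transfer rate mentioned in the abstract is commonly computed with Wolpaw's formula; the sketch below shows that standard calculation for a 10-class speller. The accuracy and selection-time values in the example are invented for illustration and are not results from the study.

        import math

        def wolpaw_itr(n_classes, accuracy, seconds_per_selection):
            """Information transfer rate in bits/min using Wolpaw's formula."""
            if accuracy <= 0.0 or accuracy >= 1.0:
                bits = math.log2(n_classes) if accuracy >= 1.0 else 0.0
            else:
                bits = (math.log2(n_classes)
                        + accuracy * math.log2(accuracy)
                        + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
            return bits * (60.0 / seconds_per_selection)

        if __name__ == "__main__":
            # Hypothetical numbers: 10 Japanese syllables, 85% accuracy, 12 s per selection.
            print(f"{wolpaw_itr(10, 0.85, 12.0):.2f} bits/min")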

    Auditory display for the blind

    A system for providing an auditory display of two-dimensional patterns as an aid to the blind is described. It includes a scanning device that produces two voltages indicating, respectively, the vertical and horizontal positions of the scan, and a further voltage indicating the intensity at each point of the scan and hence the presence or absence of the pattern at that point. The intensity-related voltage gates the transmission of sound to the subject, so that a tone is heard whenever the scan encounters a portion of the pattern. The subject determines the position of that portion of the pattern in space from the frequency and interaural-difference information contained in the tone.
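
    A minimal digital analogue of the scanning scheme described above (vertical position mapped to tone frequency, horizontal position to an interaural cue, pixel intensity gating the tone) might look like the following sketch. The column-by-column scan order, frequency range, and simple level-difference panning are assumptions made for illustration, not details from the original system.

        import numpy as np

        def sonify_pattern(image, sample_rate=16000, col_duration=0.05,
                           f_low=300.0, f_high=3000.0):
            """Turn a 2-D greyscale pattern into a stereo signal.

            Each column is scanned in turn: row index sets the tone frequency
            (higher rows -> higher pitch), column index sets the left/right level
            difference, and pixel intensity gates whether the tone is heard.
            """
            n_rows, n_cols = image.shape
            n = int(sample_rate * col_duration)
            t = np.arange(n) / sample_rate
            left, right = [], []
            for col in range(n_cols):
                pan = col / max(n_cols - 1, 1)          # 0 = far left, 1 = far right
                chunk_l = np.zeros(n)
                chunk_r = np.zeros(n)
                for row in range(n_rows):
                    intensity = float(image[row, col])
                    if intensity > 0:
                        # Top rows map to high frequencies, bottom rows to low ones.
                        freq = f_low + (f_high - f_low) * (1.0 - row / max(n_rows - 1, 1))
                        tone = intensity * np.sin(2 * np.pi * freq * t)
                        chunk_l += (1.0 - pan) * tone
                        chunk_r += pan * tone
                left.append(chunk_l)
                right.append(chunk_r)
            return np.stack([np.concatenate(left), np.concatenate(right)], axis=1)

        if __name__ == "__main__":
            pattern = np.zeros((8, 8))
            pattern[2, 1:7] = 1.0           # a short horizontal bar
            stereo = sonify_pattern(pattern)
            print(stereo.shape)             # (samples, 2)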

    An auralisation method for real time subjective testing of modal parameters.

    Subjective testing is necessary when attempting to determine the human response to audio quality. Small rooms, such as recording studio control rooms, themselves have an effect upon the quality of the perceived audio reproduction. Of particular interest is the low-frequency region where resonances, or ‘room modes’, occur. It is necessary to test a number of modal parameters individually and to be able to alter them instantly during testing in response to listener perception. An auralisation method has been developed that is used to compare musical samples within modelled rooms. Methods are discussed in the context of providing a practical system in which real-time testing is feasible. The formation of the room’s transfer function is discussed, as are a number of issues relating to the generation of audio samples. This work is then placed in context with a brief explanation of how the system is to be used in a real subjective test.
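
    As a rough illustration of how a low-frequency room transfer function can be assembled from modal parameters (frequency, damping expressed as Q, and level of each mode), the sketch below sums simple second-order resonances. The particular modal frequencies and Q values are placeholders, not the ones used in the paper, and the resonator form is a generic textbook choice rather than the paper's exact formulation.

        import numpy as np

        def modal_transfer_function(freqs_hz, mode_params):
            """Sum of second-order resonances evaluated at the given frequencies.

            mode_params is a list of (modal_frequency_hz, Q, amplitude) tuples.
            Returns the complex transfer function H(f).
            """
            w = 2 * np.pi * np.asarray(freqs_hz, dtype=float)
            h = np.zeros_like(w, dtype=complex)
            for f_m, q, amp in mode_params:
                w_m = 2 * np.pi * f_m
                # Classic resonator: H(jw) = A * w_m^2 / (w_m^2 - w^2 + j*w*w_m/Q)
                h += amp * w_m**2 / (w_m**2 - w**2 + 1j * w * w_m / q)
            return h

        if __name__ == "__main__":
            f = np.linspace(20, 200, 512)
            modes = [(34.0, 15.0, 1.0), (57.0, 12.0, 0.8), (68.0, 18.0, 0.6)]  # placeholder modes
            magnitude_db = 20 * np.log10(np.abs(modal_transfer_function(f, modes)))
            print(magnitude_db[:5])

    Altering a modal frequency or Q in such a model and re-rendering the audio is the kind of instant parameter change that real-time subjective testing requires.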

    Sonic autoethnographies: personal listening as compositional context

    This article discusses a range of self-reflexive tendencies in field recording, soundscape composition and studio production, and explores examples of sonic practices and works in which the personal listening experiences of the composer are a key contextual and compositional element. As broad areas for discussion, particular attention is given to soundscape composition as self-narrative (exploring the representation of the recordist in soundscape works) and to producing the hyperreal and the liminal (considering spatial characteristics of contemporary auditory experience and their consequences for sonic practice). The discussion then focuses on the specific application of autoethnographic research methods to the practice and the understanding of soundscape composition. Compositional strategies employed in two recent pieces by the author are considered in detail. The aim of this discussion is to link autoethnography to specific ideas about sound and listening, and to some tendencies in field recording, soundscape composition and studio production, while also providing context for the discussion of the author’s own practice and works. In drawing together this range of ideas, methods and work, sonic autoethnography is aligned with an emerging discourse around reflexive, embodied sound work.

    Relative Auditory Distance Discrimination With Virtual Nearby Sound Sources

    In this paper a psychophysical experiment targeted at exploring relative distance discrimination thresholds with binaurally rendered virtual sound sources in the near field is described. Pairs of virtual sources are spatialized around 6 different spatial locations (2 directions × 3 reference distances) through a set of generic far-field Head-Related Transfer Functions (HRTFs) coupled with a near-field correction model proposed in the literature, known as the DVF (Distance Variation Function). Individual discrimination thresholds for each spatial location and for each of the two orders of presentation of stimuli (approaching or receding) are calculated on 20 subjects through an adaptive procedure. Results show that thresholds are higher than those reported in the literature for real sound sources, and that approaching and receding stimuli behave differently. In particular, when the virtual source is close (< 25 cm), thresholds for the approaching condition are significantly lower than thresholds for the receding condition, while the opposite behaviour appears at greater distances (~ 1 m). We hypothesize that this asymmetric bias is due to variations in the absolute stimulus level.
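
    The adaptive procedure used to estimate discrimination thresholds in studies of this kind is typically a transformed up-down staircase. The sketch below implements a generic 2-down/1-up staircase run against a simulated listener, purely to illustrate the method; the step size, reversal count, simulated threshold and response noise are invented and are not the parameters used in this experiment.

        import random

        def two_down_one_up(initial_delta, step, n_reversals=8, true_threshold=0.1):
            """Generic 2-down/1-up staircase; converges near the 70.7%-correct point.

            A simulated listener answers correctly whenever the distance difference
            (delta, as a fraction of the reference distance) exceeds its true
            threshold, plus some response noise.
            """
            delta = initial_delta
            correct_streak = 0
            direction = 0
            reversals = []
            while len(reversals) < n_reversals:
                answered_correctly = delta + random.gauss(0, 0.02) > true_threshold
                if answered_correctly:
                    correct_streak += 1
                    if correct_streak == 2:          # two correct in a row -> make it harder
                        correct_streak = 0
                        if direction == +1:
                            reversals.append(delta)
                        direction = -1
                        delta = max(delta - step, 0.01)
                else:                                # one wrong -> make it easier
                    correct_streak = 0
                    if direction == -1:
                        reversals.append(delta)
                    direction = +1
                    delta += step
            return sum(reversals) / len(reversals)   # threshold estimate

        if __name__ == "__main__":
            random.seed(0)
            print(f"estimated threshold: {two_down_one_up(0.5, 0.05):.3f}")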

    Virtual Audio - Three-Dimensional Audio in Virtual Environments

    Three-dimensional interactive audio has a variety of potential uses in human-machine interfaces. After lagging seriously behind the visual components, the importance of sound is now becoming increasingly accepted. This paper mainly discusses the background and techniques needed to implement three-dimensional audio in computer interfaces. A case study of a system for three-dimensional audio, implemented by the author, is described in detail. The audio system was moreover integrated with a virtual reality system, and conclusions from user tests and use of the audio system are presented along with proposals for future work at the end of the paper. The thesis begins with a definition of three-dimensional audio and a survey of the human auditory system, to give the reader the knowledge needed to understand what three-dimensional audio is and how human auditory perception works.
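
    The core rendering step behind headphone-based three-dimensional audio of the kind discussed here is usually a convolution of a mono source with a left/right head-related impulse response (HRIR) pair. A minimal sketch follows, using randomly generated stand-in HRIRs, since no measured responses are provided in the text; a real system would substitute measured or modelled HRIRs per source direction.

        import numpy as np

        def render_binaural(mono, hrir_left, hrir_right):
            """Spatialize a mono signal by convolving it with an HRIR pair.

            Returns an (n_samples, 2) array: left channel, right channel.
            """
            left = np.convolve(mono, hrir_left)
            right = np.convolve(mono, hrir_right)
            return np.stack([left, right], axis=1)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            mono = np.sin(2 * np.pi * 440 * np.arange(8000) / 16000)   # 0.5 s tone
            # Stand-in HRIRs: short decaying noise bursts, NOT measured responses.
            hrir_l = rng.standard_normal(128) * np.exp(-np.arange(128) / 20.0)
            hrir_r = np.roll(hrir_l, 12) * 0.7    # crude delay + attenuation for the far ear
            print(render_binaural(mono, hrir_l, hrir_r).shape)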