    DigiWall - an audio mostly game

    DigiWall is a hybrid between a climbing wall and a computer game. The climbing grips are equipped with touch sensors and lights. The interface has no computer screen; instead, sound and music are the principal drivers of DigiWall's interaction models. The gaming experience combines sound and music with physical movement and the sparse visuals of the climbing grips. The DigiWall soundscape carries both verbal and non-verbal information: verbal information includes instructions on how to play a game, scores, level numbers, etc., while non-verbal information conveys speed, position, direction, events, etc. Many different types of interaction models are possible: competitions, collaboration exercises, and aesthetic experiences.
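
    The paper gives no implementation details at this level, but the interaction model it describes, touch events on instrumented grips driving non-verbal sound parameters, can be sketched in a few lines. Everything below (the grip grid, the pitch/pan mapping) is an invented illustration, not DigiWall's actual software:

    ```python
    # Hypothetical sketch: a touched grip's position on the wall is mapped to
    # non-verbal audio parameters (height -> pitch, horizontal position -> pan).
    from dataclasses import dataclass

    @dataclass
    class Grip:
        col: int  # horizontal position (0 = leftmost column)
        row: int  # vertical position (0 = bottom row)

    def cue_for_touch(grip: Grip, n_cols: int, n_rows: int) -> dict:
        """Map a touched grip to sound parameters a synthesizer could render."""
        return {
            "pitch_hz": 220.0 * 2 ** (grip.row / n_rows),  # one octave bottom-to-top
            "pan": grip.col / max(n_cols - 1, 1),          # 0.0 = left, 1.0 = right
        }

    # Example: a climber touches a grip near the top right of an 8x10 wall.
    print(cue_for_touch(Grip(col=7, row=8), n_cols=8, n_rows=10))
    ```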

    Visual to Sound: Generating Natural Sound for Videos in the Wild

    As two of the five traditional human senses (sight, hearing, taste, smell, and touch), vision and hearing are basic channels through which humans understand the world. Often correlated during natural events, these two modalities combine to jointly affect human perception. In this paper, we pose the task of generating sound given visual input. Such capabilities could help enable applications in virtual reality (generating sound for virtual scenes automatically) or provide additional accessibility to images or videos for people with visual impairments. As a first step in this direction, we apply learning-based methods to generate raw waveform samples given input video frames. We evaluate our models on a dataset of videos containing a variety of sounds (such as ambient sounds and sounds from people/animals). Our experiments show that the generated sounds are fairly realistic and have good temporal synchronization with the visual inputs.
    Project page: http://bvision11.cs.unc.edu/bigpen/yipin/visual2sound_webpage/visual2sound.htm
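
    As a rough illustration of what "raw waveform samples given input video frames" can mean, here is a deliberately tiny frame-conditioned waveform generator. It is not the paper's model (the authors build on a much stronger SampleRNN-style generator), and every layer size below is an invented placeholder:

    ```python
    # Toy sketch: encode each video frame, run a recurrent model over the frame
    # sequence, and emit a chunk of raw audio samples per frame.
    import torch
    import torch.nn as nn

    class Frames2Wave(nn.Module):
        def __init__(self, samples_per_frame: int = 1600):  # 16 kHz audio, 10 fps video
            super().__init__()
            # Per-frame visual encoder: a tiny CNN over RGB frames.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B*T, 32)
            )
            # Temporal model over the per-frame features.
            self.rnn = nn.GRU(32, 128, batch_first=True)
            # Project each frame's hidden state to a chunk of raw samples.
            self.to_wave = nn.Linear(128, samples_per_frame)

        def forward(self, frames):                        # frames: (B, T, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
            hidden, _ = self.rnn(feats)                   # (B, T, 128)
            return torch.tanh(self.to_wave(hidden)).reshape(b, -1)

    wave = Frames2Wave()(torch.randn(2, 10, 3, 64, 64))  # -> (2, 16000): 1 s at 16 kHz
    print(wave.shape)
    ```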

    Lost Oscillations: Exploring a City’s Space and Time With an Interactive Auditory Art Installation

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016).
    Lost Oscillations is a spatio-temporal sound art installation that allows users to explore the past and present of a city's soundscape. Participants are positioned in the center of an octophonic speaker array; situated in the middle of the array is a touch-sensitive user interface. The user interface is a stylized representation of a map of Christchurch, New Zealand, with electrodes placed throughout the map. Upon touching an electrode, one of many sound recordings made at the electrode's real-world location is chosen and played; users must stay in contact with the electrodes for the sounds to continue playing, requiring commitment from users in order to explore the soundscape. The sound recordings have been chosen to represent Christchurch's development throughout its history, allowing participants to explore the evolution of the city from the early 20th century through to its post-earthquake reconstruction. This paper discusses the motivations for Lost Oscillations before presenting the installation's design, development, and presentation.
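
    The touch-and-hold behavior described above amounts to a small state machine: touching an electrode selects one of that location's recordings, and breaking contact stops playback. The sketch below is an invented illustration; the electrode names, the archive, and the print stand-ins are placeholders, not the installation's software:

    ```python
    # Hypothetical touch-and-hold playback logic in the spirit of Lost Oscillations.
    import random

    ARCHIVE = {  # electrode id -> recordings made at that map location
        "cathedral_square": ["square_1920s.wav", "square_2005.wav", "square_2012.wav"],
        "port_of_lyttelton": ["port_1910s.wav", "port_1978.wav"],
    }
    playing: dict[str, str] = {}  # electrodes currently held down -> active clip

    def on_touch(electrode: str) -> None:
        clip = random.choice(ARCHIVE[electrode])  # one of many recordings is chosen
        playing[electrode] = clip
        print(f"start {clip}")                    # stand-in for the real audio engine

    def on_release(electrode: str) -> None:
        # Sound continues only while contact is maintained.
        clip = playing.pop(electrode, None)
        if clip:
            print(f"stop {clip}")

    on_touch("cathedral_square")
    on_release("cathedral_square")
    ```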

    Learning and adaptation in speech production without a vocal tract

    How is the complex audiomotor skill of speaking learned? To what extent does it depend on the specific characteristics of the vocal tract? Here, we developed a touchscreen-based speech synthesizer to examine learning of speech production independent of the vocal tract. Participants were trained to reproduce heard vowel targets by reaching to locations on the screen without visual feedback, receiving endpoint auditory feedback of a vowel sound that depended continuously on touch location. Participants demonstrated learning, as evidenced by rapid increases in accuracy and consistency in the production of trained targets. This learning generalized to productions of novel vowel targets. Subsequent to learning, sensorimotor adaptation was observed in response to changes in the location-sound mapping. These findings suggest that participants learned adaptable sensorimotor maps allowing them to produce desired vowel sounds. These results have broad implications for understanding the acquisition of speech motor control.
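
    The core mechanism, a continuous mapping from touch location to a vowel sound, can be sketched as follows. The formant ranges and the crude harmonic-weighting "filter" are invented stand-ins, not the authors' synthesizer:

    ```python
    # Rough sketch: normalized touch coordinates map continuously to the first
    # two vowel formants, which shape an audible vowel-like waveform.
    import numpy as np

    def touch_to_formants(x: float, y: float) -> tuple[float, float]:
        """Map touch coordinates in [0, 1] to formant frequencies (Hz)."""
        f1 = 250.0 + y * (850.0 - 250.0)   # vertical axis -> F1 (vowel height)
        f2 = 600.0 + x * (2500.0 - 600.0)  # horizontal axis -> F2 (vowel backness)
        return f1, f2

    def vowel_wave(f1: float, f2: float, dur: float = 0.5, sr: int = 16000) -> np.ndarray:
        """Very rough vowel: a 120 Hz harmonic source with the harmonics nearest
        F1 and F2 emphasized (a stand-in for real formant filtering)."""
        t = np.arange(int(dur * sr)) / sr
        wave = np.zeros_like(t)
        for k in range(1, 40):
            f = 120.0 * k
            gain = np.exp(-((f - f1) / 120.0) ** 2) + np.exp(-((f - f2) / 180.0) ** 2)
            wave += gain * np.sin(2 * np.pi * f * t)
        return wave / np.abs(wave).max()

    f1, f2 = touch_to_formants(0.2, 0.3)      # a touch toward the lower left
    print(f"F1={f1:.0f} Hz, F2={f2:.0f} Hz")  # feedback would play vowel_wave(f1, f2)
    ```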

    Heat conduction tuning using the wave nature of phonons

    The world communicates to our senses of vision, hearing, and touch in the language of waves: light, sound, and even heat essentially consist of microscopic vibrations of different media. The wave nature of light and sound has been extensively investigated over the past century and is now widely used in modern technology. But the wave nature of heat has been the subject of mostly theoretical studies, as its experimental demonstration, let alone practical use, remains challenging due to the extremely short wavelengths of these waves. Here we show that it is possible to use the wave nature of heat for thermal-conductivity tuning via spatial short-range order in phononic crystal nanostructures. Our experimental and theoretical results suggest that interference of thermal phonons occurs in strictly periodic nanostructures and slows the propagation of heat. This finding broadens the methodology of heat transfer engineering by extending its territory to the wave nature of heat.
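
    For a sense of scale, a Wien-like displacement estimate of the dominant thermal phonon wavelength (a standard order-of-magnitude relation, not a figure from the paper) shows why phonon interference calls for nanometer-scale periods at room temperature, or cryogenic temperatures at practical feature sizes; a silicon-like sound velocity of roughly 6 km/s is assumed:

    ```latex
    % Rough order-of-magnitude estimate of the dominant thermal phonon
    % wavelength, assuming a sound velocity v_s ~ 6 x 10^3 m/s (silicon-like).
    \[
      \lambda_{\mathrm{dom}} \sim \frac{h\, v_s}{k_B T}
      \approx \frac{(6.6\times10^{-34}\,\mathrm{J\,s})(6\times10^{3}\,\mathrm{m/s})}
                   {(1.38\times10^{-23}\,\mathrm{J/K})\, T}
    \]
    \[
      T = 300\,\mathrm{K} \;\Rightarrow\; \lambda_{\mathrm{dom}} \approx 1\,\mathrm{nm},
      \qquad
      T = 4\,\mathrm{K} \;\Rightarrow\; \lambda_{\mathrm{dom}} \approx 70\,\mathrm{nm}.
    \]
    ```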