
    Visualizing sound emission of elephant vocalizations: evidence for two rumble production types

    Recent comparative data reveal that formant frequencies are cues to body size in animals, owing to a close relationship between formant frequency spacing, vocal tract length and overall body size. Accordingly, intriguing morphological adaptations that elongate the vocal tract in order to lower formants occur in several species, with the size exaggeration hypothesis proposed to explain most of these observations. While the elephant trunk is strongly implicated in the low formants of elephant rumbles, it is unknown whether elephants emit these vocalizations exclusively through the trunk, or whether the mouth is also involved in rumble production. In this study we used a sound visualization method (an acoustic camera) to record rumbles of five captive African elephants during spatial separation and subsequent bonding situations. Our results showed that the female elephants in our analysis produced two distinct types of rumble vocalizations, distinguished by their vocal path: a nasally emitted and an orally emitted rumble. Interestingly, nasal rumbles predominated during contact calling, whereas oral rumbles were mainly produced in bonding situations. In addition, nasal and oral rumbles varied considerably in their acoustic structure. In particular, the values of the first two formants reflected the estimated lengths of the vocal paths, corresponding to a vocal tract length of around 2 meters for nasal and around 0.7 meters for oral rumbles. These results suggest that African elephants may switch vocal paths to actively vary vocal tract length (with considerable variation in formants) according to context, and they call for further research investigating the function of formant modulation in elephant vocalizations. Furthermore, by confirming the use of the elephant trunk in long-distance rumble production, our findings provide an explanation for the extremely low formants in these calls, and may also indicate that formant lowering functions to increase call propagation distance in this species.
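    The vocal tract lengths quoted above follow from the standard uniform-tube relationship between formant spacing and tube length. The sketch below is a minimal illustration only, assuming a sound speed of about 350 m/s in warm, humid air and purely illustrative formant spacings (not the study's measured values), and shows how figures of roughly 2 m and 0.7 m arise.

```python
# Minimal sketch: estimating vocal tract length (VTL) from formant spacing
# with the uniform-tube approximation VTL ~= c / (2 * dF).
# The formant spacings below are illustrative only, not the study's data.

SPEED_OF_SOUND = 350.0  # m/s, approximate value in warm, humid air

def vtl_from_formant_spacing(delta_f_hz: float) -> float:
    """Estimate vocal tract length (m) from average formant spacing (Hz)."""
    return SPEED_OF_SOUND / (2.0 * delta_f_hz)

# A spacing of ~87.5 Hz corresponds to a ~2 m tract (nasal/trunk path),
# while ~250 Hz corresponds to a ~0.7 m tract (oral path).
for label, delta_f in [("nasal rumble (illustrative)", 87.5),
                       ("oral rumble (illustrative)", 250.0)]:
    vtl = vtl_from_formant_spacing(delta_f)
    print(f"{label}: dF = {delta_f} Hz -> VTL ~= {vtl:.2f} m")
```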

    Physics experiments using simultaneously more than one smartphone sensors

    In recent years, numerous physics experiments using smartphone sensors have been reported in the literature. In this presentation we focus on a less-explored feature of smartphones: the possibility of measuring and recording data simultaneously with more than one sensor. To illustrate, the simultaneous use of the accelerometer and gyroscope (angular velocity sensor) has been proposed in mechanics, and the synchronous use of the ambient light and orientation sensors in optics experiments. Indeed, this is a characteristic that simplifies experimental setups, helps bring the underlying physics concepts into focus and, last but not least, reduces costs. Comment: 6 pages, 3 tables, 4 figures. Extended abstract, GIREP-MPTL 2018 - Research and Innovation in Physics Education: Two Sides of the Same Coin, 9th-13th July 2018, Donostia-San Sebastian, Spain. https://www.girep2018.com
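    As a rough illustration of the kind of multi-sensor analysis the abstract describes, the sketch below assumes the accelerometer and gyroscope streams have already been exported to CSV files by a logging app (file and column names are hypothetical) and aligns the two independently timestamped streams into a single table.

```python
# Minimal sketch: aligning two independently timestamped smartphone sensor
# streams (e.g. accelerometer and gyroscope) after export to CSV.
# File names and column names are hypothetical; adapt them to your logging app.
import pandas as pd

# Each CSV is assumed to have a 't' column (seconds) plus the sensor channels.
acc = pd.read_csv("accelerometer.csv")   # columns: t, ax, ay, az
gyro = pd.read_csv("gyroscope.csv")      # columns: t, wx, wy, wz

# Sort by time and match each accelerometer sample with the nearest
# gyroscope sample (within 20 ms), giving one synchronized table.
acc = acc.sort_values("t")
gyro = gyro.sort_values("t")
merged = pd.merge_asof(acc, gyro, on="t", direction="nearest", tolerance=0.02)

print(merged.head())
```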

    A study of methods to predict and measure the transmission of sound through the walls of light aircraft. A survey of techniques for visualization of noise fields

    A survey of the most widely used methods for visualizing acoustic phenomena is presented. Emphasis is placed on acoustic processes in the audible frequency range. Many of the visualization problems are analyzed on computer graphics systems, and a brief description of the current technology in computer graphics is included. The visualization technique survey will serve as a basis for recommending an optimum scheme for displaying acoustic fields on computer graphics systems.

    Sensing synesthesia

    Sensing Synesthesia is an exhibition of experiments, carried out through the medium of graphic design, that attempts to generate a synesthetic experience by visualizing sound. Since many elements within the realms of sound and sight are relative, creating a genuine synesthetic experience for a viewing audience proved challenging. To address this problem, I created visual elements that corresponded with personal convictions, emotions and proclamations, and presented them in a way congruent with the sounds being heard. Through these experiments, I discovered my own personal growth: sharpened skills as a graphic designer and a newfound interest in hand-rendered type as well as graffiti art as a style. Furthermore, I aimed to show that the interrelated, impactful relationship between sight and sound we all encounter on a daily basis generates a deeper experience regardless of our level of awareness.

    Visually Guided Sound Source Separation using Cascaded Opponent Filter Network

    The objective of this paper is to recover the original component signals from an audio mixture with the aid of visual cues of the sound sources. This task is usually referred to as visually guided sound source separation. The proposed Cascaded Opponent Filter (COF) framework consists of multiple stages, which recursively refine the source separation. A key element in COF is a novel opponent filter module that identifies and relocates residual components between sources. The system is guided by the appearance and motion of the source, and, for this purpose, we study different representations based on video frames, optical flows, dynamic images, and their combinations. Finally, we propose a Sound Source Location Masking (SSLM) technique, which, together with COF, produces a pixel-level mask of the source location. The entire system is trained end-to-end using a large set of unlabelled videos. We compare COF with recent baselines and obtain state-of-the-art performance on three challenging datasets (MUSIC, A-MUSIC, and A-NATURAL). Project page: https://ly-zhu.github.io/cof-net. Comment: main paper 14 pages, references 3 pages, supplementary 7 pages. Revised argument in Section 3 and
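    As a loose illustration of the cascaded, recursively refining structure described above (this is not the authors' COF implementation; all module names, shapes and the simplified "opponent" step are hypothetical, and visual guidance is reduced to a per-source feature vector), a minimal PyTorch sketch might look as follows.

```python
# Minimal sketch of a cascaded, two-source mask-refinement loop in the spirit
# of the abstract: stages that recursively refine separation, plus a crude
# "opponent" step that reassigns residual mixture energy between the sources.
# Hypothetical names and shapes; NOT the paper's COF architecture.
import torch
import torch.nn as nn

class MaskRefiner(nn.Module):
    """One stage: predict an updated mask for a source from the mixture
    spectrogram, the current mask, and a visual feature vector."""
    def __init__(self, freq_bins: int, visual_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * freq_bins + visual_dim, 256),
            nn.ReLU(),
            nn.Linear(256, freq_bins),
            nn.Sigmoid(),  # masks in [0, 1]
        )

    def forward(self, mixture, mask, visual):
        # mixture, mask: (batch, time, freq); visual: (batch, visual_dim)
        t = mixture.shape[1]
        v = visual.unsqueeze(1).expand(-1, t, -1)
        return self.net(torch.cat([mixture, mask, v], dim=-1))

def opponent_exchange(mask_a, mask_b):
    """Assign mixture energy not yet claimed by either source to the source
    whose current mask claims it more strongly (a stand-in for the paper's
    opponent filter, which operates on learned features)."""
    residual = torch.clamp(1.0 - (mask_a + mask_b), min=0.0)
    share_a = mask_a / (mask_a + mask_b + 1e-8)
    return mask_a + residual * share_a, mask_b + residual * (1.0 - share_a)

# Toy dimensions and random data, just to show the cascade running end to end.
batch, time, freq, vdim, stages = 2, 16, 64, 32, 3
mixture = torch.rand(batch, time, freq)
visual_a, visual_b = torch.rand(batch, vdim), torch.rand(batch, vdim)
mask_a = torch.full((batch, time, freq), 0.5)
mask_b = torch.full((batch, time, freq), 0.5)

refiner = MaskRefiner(freq, vdim)
for _ in range(stages):
    mask_a = refiner(mixture, mask_a, visual_a)
    mask_b = refiner(mixture, mask_b, visual_b)
    mask_a, mask_b = opponent_exchange(mask_a, mask_b)

source_a, source_b = mixture * mask_a, mixture * mask_b
print(source_a.shape, source_b.shape)
```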

    Hear You

    Hear You is a thesis that explores how my relationship with glass can be used to discuss the connection and interaction between sound, shape, material, and surrounding things. I trace the origins of these interests to the sensitive and introverted character I have had since childhood, and how it has influenced my interest in communicating in unique visual and audible ways. This thesis links personal experience to scholarly research into sensory processing, highly sensitive personality traits, the concept of quiet, and the power of sound. Hear You concludes with an assessment of a body of work that merges interests in glass, sound, and interactivity to explore various ways in which the intersection of seeing and hearing can be used as a tool for thinking about feeling.

    SoVeAt: a tool for visualizing sound velocity data for Naval applications

    This report discusses the various functionalities of the Sound Velocity Atlas (SoVeAT) tool developed for use by the Naval Operations Data Processing and Analysis Centre (NODPAC), a wing of the Indian Navy. The subsurface temperature and salinity (T/S) profile data used in developing this tool are Argo data obtained from the INCOIS Argo mirror archives, which in turn draw on the two Global Data Assembly Centres (GDACs), namely Coriolis in France and USGODAE in the USA. These data sets are processed, quality controlled and merged to form a unique data set for enhancing the sound velocity climatology of the Indian Ocean (30°E to 120°E and 69°S to 30°N). With the sound velocity derived from the Argo T/S data, a graphical user interface (GUI) based tool is built for visualizing parameters, viz. sound velocity, temperature, salinity and bathymetry. The tool has the capability to generate climatology dynamically between any chosen periods, apart from producing various plots that are useful to the Navy while at sea. Provision is also made for adding newly observed T/S data, making this a robust sound velocity tool for use by the Indian Navy.
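    The report does not state which sound speed equation SoVeAT applies to the Argo T/S profiles; as an illustrative sketch only, the snippet below uses Mackenzie's (1981) nine-term formula, a common choice for computing sound velocity from temperature, salinity and depth.

```python
# Minimal sketch: sound velocity from temperature, salinity and depth using
# Mackenzie's (1981) nine-term equation. The report does not say which formula
# SoVeAT uses, so this is only an illustrative choice, not the tool's method.

def sound_speed_mackenzie(T: float, S: float, D: float) -> float:
    """Sound speed in m/s. T: temperature (degC), S: salinity (PSU), D: depth (m).
    Valid roughly for 2-30 degC, 25-40 PSU, 0-8000 m."""
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35.0) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35.0) - 7.139e-13 * T * D**3)

# Example: a typical tropical Indian Ocean near-surface profile point.
print(sound_speed_mackenzie(T=28.0, S=35.0, D=10.0))  # roughly 1541 m/s
```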