
    The acoustics of concentric sources and receivers – human voice and hearing applications

    One of the most common ways in which we experience environments acoustically is by listening to the reflections of our own voice in a space. By listening to our own voice, we adjust its characteristics to suit the task and audience. This is particularly important in critical voice tasks, such as those of actors or singers performing on a stage without electroacoustic or other amplification (e.g. in-ear monitors or loudspeakers). Despite how common this situation is, there are very few acoustic measurements aimed at quantifying it, and even fewer that address the problem of a source and receiver that are very closely located. The aim of this thesis is to introduce new measurement transducers and methods that correctly quantify this situation. This is achieved by analysing the characteristics of the human as a source, as a receiver, and their interaction in close proximity when placed in acoustical environments. The characteristics of the human voice and the human ear are analysed in this thesis in the same manner as a loudspeaker or microphone would be; this provides the basis for further analysis by treating them as analogous to measurement transducers. These results are then used to explore the consequences of a very closely located source and receiver using acoustic room simulation. Different techniques for processing data from directional transducers in real rooms are introduced. The majority of the data used in this thesis was obtained in rooms used for performance. The final chapters of this thesis detail the design and construction of a concentric directional transducer, in which an array of microphones and loudspeakers occupy the same structure. Finally, sample measurements with this transducer are presented.
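
    The concentric source/receiver geometry described in the abstract can be sketched with a first-order image-source model: each wall of a shoebox room mirrors the source, and when source and receiver coincide, every first-order reflection path is simply twice the distance to the mirroring wall. The room dimensions, position, and 1/r spreading law below are illustrative assumptions, not measurements from the thesis.

```python
import math

# Minimal first-order image-source sketch (illustrative parameters): for a
# co-located source/receiver in a shoebox room, each wall produces a mirror
# image of the source, and the reflection path length is twice the distance
# from the source to that wall.
def early_reflections(room, pos, c=343.0):
    """Return sorted (delay_s, attenuation) pairs for first-order reflections.

    room: (Lx, Ly, Lz) dimensions in metres
    pos:  (x, y, z) position of the concentric source/receiver
    """
    reflections = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            # With source == receiver, the round-trip path length is twice
            # the perpendicular distance to the wall.
            d = 2.0 * abs(pos[axis] - wall)
            delay = d / c                      # arrival time of the reflection
            attenuation = 1.0 / max(d, 1e-9)   # 1/r spherical spreading
            reflections.append((delay, attenuation))
    return sorted(reflections)

# Example: a performer standing 1.5 m from a side wall of a 10 x 8 x 5 m stage
refl = early_reflections((10.0, 8.0, 5.0), (1.5, 4.0, 1.7))
```

    Sorting by delay gives the pattern of early reflections a performer hears of their own voice; the earliest entry here is the strong reflection from the nearest wall.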

    Ultra-high-speed imaging of bubbles interacting with cells and tissue

    Ultrasound contrast microbubbles are exploited in molecular imaging, where bubbles are directed to target cells and their high scattering cross-section for ultrasound allows pathologies to be detected at the molecular level. In therapeutic applications, bubbles vibrating close to cells may alter the permeability of cell membranes, making these systems highly interesting for drug and gene delivery applications using ultrasound. In a more extreme regime, bubbles are driven by shock waves to sonoporate or kill cells through the intense stresses or jets that follow inertial bubble collapse. Here, we elucidate some of the underlying mechanisms using the 25-Mfps camera Brandaris128, resolving the bubble dynamics and their interactions with cells. We quantify acoustic microstreaming around oscillating bubbles close to rigid walls and evaluate the shear stresses on non-adherent cells. In a study of the fluid-dynamical interaction of cavitation bubbles with adherent cells, we find that the non-spherical collapse of bubbles is responsible for cell detachment. We also visualized the dynamics of vibrating microbubbles in contact with endothelial cells, followed by fluorescence imaging of the transport of propidium iodide, used as a membrane-integrity probe, into these cells, showing a direct correlation between cell deformation and cell membrane permeability.
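
    The radial bubble dynamics resolved by such high-speed imaging are commonly modelled with the Rayleigh-Plesset equation. The sketch below integrates it with a fourth-order Runge-Kutta scheme for a free, uncoated gas bubble; all parameter values are illustrative assumptions, and the shell coating of real contrast microbubbles is deliberately ignored here.

```python
import math

# Rayleigh-Plesset dynamics of an acoustically driven gas microbubble
# (simplified sketch: free, uncoated bubble; illustrative parameter values).
rho, mu, sigma = 998.0, 1.0e-3, 0.072   # water: density, viscosity, surface tension
p0, kappa = 101325.0, 1.4               # ambient pressure, polytropic exponent
R0 = 2.0e-6                             # equilibrium radius (m)
pa, fd = 20e3, 1.0e6                    # drive amplitude (Pa) and frequency (Hz)

def accel(t, R, Rdot):
    """Radial acceleration R'' from the Rayleigh-Plesset equation."""
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)
    p_inf = p0 + pa * math.sin(2 * math.pi * fd * t)
    return ((p_gas - 2 * sigma / R - 4 * mu * Rdot / R - p_inf) / (rho * R)
            - 1.5 * Rdot ** 2 / R)

def simulate(cycles=2, steps_per_cycle=10000):
    """RK4 integration of the state (R, R'); returns the radius trace."""
    dt = 1.0 / (fd * steps_per_cycle)
    R, Rdot, t = R0, 0.0, 0.0
    trace = [R]
    for _ in range(cycles * steps_per_cycle):
        k1r, k1v = Rdot, accel(t, R, Rdot)
        k2r, k2v = Rdot + 0.5 * dt * k1v, accel(t + 0.5 * dt, R + 0.5 * dt * k1r, Rdot + 0.5 * dt * k1v)
        k3r, k3v = Rdot + 0.5 * dt * k2v, accel(t + 0.5 * dt, R + 0.5 * dt * k2r, Rdot + 0.5 * dt * k2v)
        k4r, k4v = Rdot + dt * k3v, accel(t + dt, R + dt * k3r, Rdot + dt * k3v)
        R += dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6
        Rdot += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
        trace.append(R)
    return trace

radii = simulate()
```

    At this modest drive amplitude the bubble oscillates gently about its equilibrium radius; the violent shock-driven collapses studied in the paper lie far outside this linear regime.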

    Three-dimensional point-cloud room model in room acoustics simulations


    A Novel Non-Acoustic Voiced Speech Sensor: Experimental Results and Characterization

    Recovering clean speech from an audio signal with additive noise is a problem that has plagued the signal processing community for decades. One promising technique currently being used in speech-coding applications is a multi-sensor approach, in which a microphone is used in conjunction with optical, mechanical, and electrical non-acoustic speech sensors to provide greater versatility in signal processing algorithms. One such non-acoustic glottal waveform sensor is the Tuned Electromagnetic Resonator Collar (TERC) sensor, first developed in [BLP+02]. The sensor is based on Magnetic Resonance Imaging (MRI) concepts and is designed to detect the small changes in capacitance caused by changes to the state of the vocal cords - the glottal waveform. Although preliminary simulations in [BLP+02] have validated the basic theory governing the TERC sensor's operation, results from human subject testing are necessary to accurately characterize the sensor's performance in practice. To this end, a system was designed and developed to provide real-time audio recordings from the sensor while attached to a human test subject. From these recordings, made in a variety of acoustic noise environments, the practical functionality of the TERC sensor was demonstrated. The sensor in its current evolution is able to detect a periodic waveform during voiced speech, with two clear harmonics and a fundamental frequency equal to that of the speech being detected. This waveform is representative of the glottal waveform, with little or no articulation, as initially hypothesized. Though statistically significant conclusions about the sensor's immunity to environmental noise are difficult to draw, the results suggest that the TERC sensor is considerably more resistant to the effects of noise than typical acoustic sensors, making it a valuable addition to the multi-sensor speech processing approach.
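
    The reported fundamental-frequency behaviour can be checked with a standard autocorrelation pitch estimator. The sketch below is a generic method, not the system built for the TERC sensor, and the two-harmonic waveform it analyses is a synthetic stand-in for the sensor output.

```python
import math

# Autocorrelation-based fundamental-frequency estimate for a voiced frame:
# the best-aligned lag within a plausible voice range gives the pitch period.
def estimate_f0(samples, fs, fmin=50.0, fmax=400.0):
    """Return the estimated fundamental frequency (Hz) of a periodic frame."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]          # remove any DC offset
    lo = int(fs / fmax)                      # shortest candidate period (samples)
    hi = min(int(fs / fmin), n - 1)          # longest candidate period
    best_lag, best_r = lo, float("-inf")
    for lag in range(lo, hi + 1):
        r = sum(x[i] * x[i + lag] for i in range(n - lag))
        if r > best_r:
            best_lag, best_r = lag, r
    return fs / best_lag

# Synthetic 120 Hz "glottal" waveform with two harmonics, sampled at 8 kHz
fs = 8000
sig = [math.sin(2 * math.pi * 120 * t / fs)
       + 0.5 * math.sin(2 * math.pi * 240 * t / fs) for t in range(800)]
f0 = estimate_f0(sig, fs)
```

    Because the estimator picks an integer lag, the result lands within a couple of hertz of the true 120 Hz fundamental; a real system would interpolate around the peak for finer resolution.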

    SoundCompass: a distributed MEMS microphone array-based sensor for sound source localization

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources because they rely on interpolation between a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical-systems (MEMS) microphones, an inertial measurement unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design, together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25 m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000 m² open field.
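
    The sound-field directionality such a microphone array measures can be sketched with narrowband steered-response-power beamforming: phase-align the microphones for each candidate bearing and pick the bearing with maximum output power. The 8-microphone circle, frequency, and simulated plane wave below are illustrative assumptions, not the published 52-microphone SoundCompass design.

```python
import cmath
import math

# Narrowband steered-response-power (SRP) bearing estimate on a circular
# microphone array (illustrative geometry and signal, not the real device).
def srp_bearing(mic_xy, signals, f, fs, c=343.0, n_angles=72):
    """Return the azimuth (radians) of the strongest arrival at frequency f."""
    n = len(signals[0])
    w = 2 * math.pi * f
    # Single-bin DFT: complex amplitude of each microphone channel at f
    amps = [sum(sig[t] * cmath.exp(-1j * w * t / fs) for t in range(n))
            for sig in signals]
    best_angle, best_power = 0.0, -1.0
    for k in range(n_angles):
        theta = 2 * math.pi * k / n_angles
        ux, uy = math.cos(theta), math.sin(theta)
        # Phase-align the channels for a plane wave arriving from bearing theta
        steered = sum(a * cmath.exp(-1j * w * (x * ux + y * uy) / c)
                      for a, (x, y) in zip(amps, mic_xy))
        if abs(steered) ** 2 > best_power:
            best_angle, best_power = theta, abs(steered) ** 2
    return best_angle

# Simulate a 500 Hz plane wave from 90 degrees on an 8-microphone circle
fs, f = 16000, 500.0
mics = [(0.1 * math.cos(2 * math.pi * m / 8), 0.1 * math.sin(2 * math.pi * m / 8))
        for m in range(8)]
sigs = [[math.sin(2 * math.pi * f * (t / fs + y / 343.0)) for t in range(512)]
        for x, y in mics]  # bearing 90 deg: arrival delay depends on y only
bearing = srp_bearing(mics, sigs, f, fs)
```

    The real sensor performs this kind of computation across many frequency bands on its FPGA and fuses the resulting directionality spectra across a network of nodes to triangulate sources.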