Sound level measurements in music practice rooms.
Average sound levels and percentage of daily dose of noise exposure were measured in the practice rooms of a university school of music, with the primary objective of determining whether sound levels in student practice rooms were high enough to warrant concern for hearing conservation. A secondary objective was to determine whether any instrument group was at higher risk for music-induced hearing loss due to exposure levels. Students representing 4 instrument groups were tested: brass, wind, string, and voice. Measurements were taken using a dosimeter or DoseBadge clipped to the shoulder during 40 students’ individual practice sessions. These readings provided average exposure levels as well as the percentage of total allowed exposure (dose) obtained during the practice session. The mean measurement time for this study was 47 minutes (SD = 22). Mean sound levels across groups ranged from 87 to 95 dB(A) (SD = 3.5-5.9). Mean levels for the brass players were significantly higher than those of the other instrument groups. Using the mean duration of daily practice reported by the participants to estimate dose, 48% of students would exceed the allowable sound exposure. Implications for professional musicians are discussed, including the need for 12-hour breaks and the use of musicians’ earplugs. The implementation of a Hearing Protection Policy in the School of Music is also discussed.
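The dose estimate used above can be sketched in a few lines. The abstract does not state which damage-risk criterion was applied; the sketch below assumes the NIOSH criterion (85 dB(A) for 8 hours, 3 dB exchange rate), so the numbers are illustrative only.

```python
# Hedged sketch: percent noise dose from an average level and exposure time,
# assuming the NIOSH criterion (85 dB(A), 3 dB exchange rate). The study may
# have used a different criterion; treat these values as illustrative.

def allowed_minutes(level_dba, criterion=85.0, exchange=3.0):
    """Permissible daily exposure time (minutes) at a constant level."""
    return 8 * 60 / 2 ** ((level_dba - criterion) / exchange)

def percent_dose(level_dba, minutes):
    """Percentage of the allowable daily dose accrued at level_dba."""
    return 100.0 * minutes / allowed_minutes(level_dba)

# Example: a 47-minute session (the study's mean duration) at 91 dB(A),
# the midpoint of the 87-95 dB(A) range reported above:
# percent_dose(91.0, 47) -> roughly 39% of the daily allowance
```

Under this criterion, a single mean-length practice session near the middle of the reported level range would already accrue roughly 39% of the daily allowance, which makes the reported 48% exceedance figure plausible once total daily practice time is counted.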
Loudness of the singing voice: A room acoustics perspective
This thesis examines ectophonic (sound created outside the human body) and autophonic (the sound of one’s own voice) loudness perception for the operatic voice, within the context of room acoustics. Ectophonic loudness perception was modelled within the context of room acoustics for the operatic voice in chapter two. These models were then used to explore the loudness envelope of the messa di voce (MDV), where psychoacoustically based measures were shown to perform better than the physical acoustic measures used in previous studies. The third chapter addressed autophonic loudness perception, while presenting the limitations of modelling it in a manner similar to ectophonic loudness models. Some of these limitations were addressed in chapter four with two experiments in which the autophonic loudness of opera singers was explored using direct psychoacoustical scaling methods, within simulated room acoustic environments. In the first experiment, a power law relationship between autophonic loudness and the sound pressures produced was observed for the magnitude production task, with different power law exponents for different phonemes. The contribution of room acoustics to autophonic loudness scaling was not statistically significant. The Lombard slope, as it applies to autophonic perception and room acoustics, was also studied, with some supporting evidence found. The second experiment in chapter four explored autophonic loudness for more continuous vocalisations (crescendi, decrescendi, and MDV) using adapted direct scaling methods. The results showed that sensorimotor mechanisms seem to be more important than hearing and room acoustics in autophonic loudness perception, which is consistent with previous research. Overall, this thesis showed that the effect of room acoustics on the loudness of the singing voice needs to be assessed based on the communication scenario. This has relevance for voice analysis, loudness perception in general, room acoustics simulation, and vocal pedagogy.
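The power law relationship reported for the magnitude production task has the classic Stevens form, loudness = k · p^n. A minimal sketch of how such an exponent can be recovered by log-log regression follows; the data and exponent here are synthetic illustrations, not values from the thesis.

```python
import numpy as np

# Stevens-style power law: L = k * p**n, so log L = log k + n * log p.
# Fitting a line in log-log coordinates recovers the exponent n.
# The pressures, k, and n below are synthetic, for illustration only.
pressure = np.linspace(0.02, 2.0, 50)   # Pa, illustrative range
true_n = 0.6                            # illustrative exponent
loudness = 3.0 * pressure ** true_n

slope, intercept = np.polyfit(np.log(pressure), np.log(loudness), 1)
# slope recovers the exponent n; exp(intercept) recovers the constant k
```

With per-phoneme data, repeating the fit per phoneme would yield the phoneme-dependent exponents the abstract describes.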
Detection of health-related behaviours using head-mounted devices
The detection of health-related behaviors is the basis of many mobile-sensing applications for healthcare and can trigger other inquiries or interventions. Wearable sensors have been widely used for mobile sensing due to their ever-decreasing cost, ease of deployment, and ability to provide continuous monitoring. In this dissertation, we develop a generalizable approach to sensing eating-related behavior.
First, we developed Auracle, a wearable earpiece that can automatically detect eating episodes. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the head. This audio data is then processed by a custom circuit board. We collected data with 14 participants for 32 hours in free-living conditions and achieved accuracy exceeding 92.8% and F1 score exceeding 77.5% for eating detection with 1-minute resolution.
Second, we adapted Auracle for measuring children’s eating behavior, and improved the accuracy and robustness of the eating-activity detection algorithms. We used this improved prototype in a laboratory study with a sample of 10 children for 60 total sessions and collected 22.3 hours of data in both meal and snack scenarios. Overall, we achieved 95.5% accuracy and 95.7% F1 score for eating detection with 1-minute resolution.
Third, we developed a computer-vision approach for eating detection in free-living scenarios. Using a miniature head-mounted camera, we collected data with 10 participants for about 55 hours. The camera was fixed under the brim of a cap, pointing at the mouth of the wearer and continuously recording video (but not audio) throughout their normal daily activity. We evaluated performance for eating detection using four different Convolutional Neural Network (CNN) models. The best model achieved 90.9% accuracy and 78.7% F1 score for eating detection with 1-minute resolution. Finally, we validated the feasibility of deploying the 3D CNN model in wearable or mobile platforms when considering computation, memory, and power constraints.
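The "1-minute resolution" evaluations above imply aggregating frame-level classifier outputs into per-minute decisions and then scoring them. A minimal sketch follows, with an assumed majority-vote rule over 60-second windows and a hand-rolled F1; the dissertation's exact aggregation pipeline and thresholds are not given here.

```python
# Hedged sketch: per-second binary predictions (1 = eating) aggregated to
# one decision per minute, plus an F1 metric. The majority-vote threshold
# is an assumption, not the dissertation's stated rule.

def minute_labels(per_second_preds, threshold=0.5):
    """Majority vote over non-overlapping 60 s windows -> per-minute labels."""
    minutes = []
    for i in range(0, len(per_second_preds) - 59, 60):
        window = per_second_preds[i:i + 60]
        minutes.append(1 if sum(window) / 60 >= threshold else 0)
    return minutes

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Reporting both accuracy and F1, as the abstracts do, matters here because eating minutes are rare relative to non-eating minutes, so accuracy alone would overstate performance.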
A system for room acoustic simulation for one's own voice
The real-time simulation of room acoustical environments for one’s own voice, using generic software, has been difficult until very recently due to the computational load involved: it requires real-time convolution of a person’s voice with a potentially large number of long room impulse responses. This thesis presents a room acoustical simulation system with a software-based solution for performing real-time convolutions with headtracking, to simulate the effect of room acoustical environments on the sound of one’s own voice, using binaural technology. In order to gather data to implement headtracking in the system, human head movements are characterized while reading a text aloud. The rooms that are simulated with the system are actual rooms that are characterized by measuring the room impulse response from the mouth to the ears of the same head (oral binaural room impulse response, OBRIR). By repeating this process at 2° increments in the yaw angle on the horizontal plane, the rooms are binaurally scanned around a given position to obtain a collection of OBRIRs, which is then used by the software-based convolution system. In the rooms that are simulated with the system, a person equipped with a near-mouth microphone and near-ear loudspeakers can speak or sing, and hear their voice as it would sound in the measured rooms, while physically being in an anechoic room. By continually updating the person’s head orientation using headtracking, the corresponding OBRIR is chosen for convolution with their voice. The system described in this thesis achieves the low latency that is required to simulate nearby reflections, and it can perform convolution with long room impulse responses. The perceptual validity of the system is studied with two experiments, involving human participants reading aloud a set text.
The system presented in this thesis can be used to design experiments that study the various aspects of the auditory perception of the sound of one’s own voice in room environments. The system can also be adapted to incorporate a module that enables listening to the sound of one’s own voice in commercial applications such as architectural acoustic room simulation software, teleconferencing systems, virtual reality, and gaming applications.
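The headtracked convolution loop described above can be sketched as: map the current yaw angle to the nearest of the OBRIRs measured at 2° increments, then convolve the voice signal with it. The function names below are hypothetical, and `np.convolve` stands in for the low-latency partitioned convolution a real-time system would actually need.

```python
import numpy as np

# Hedged sketch of the OBRIR-selection step. With measurements every 2 deg,
# there are 180 OBRIRs around the head; the current yaw picks the nearest one.
# np.convolve is offline and illustrates only the signal path, not the
# low-latency partitioned convolution used in a real-time implementation.

def nearest_obrir_index(yaw_deg, step_deg=2.0):
    """Map a head yaw angle (degrees) to the index of the closest OBRIR."""
    n = int(round(360 / step_deg))          # 180 measurements for 2 deg steps
    return int(round((yaw_deg % 360) / step_deg)) % n

def simulate_block(voice_block, obrirs, yaw_deg):
    """Convolve one audio block with the OBRIR chosen for the current yaw."""
    ir = obrirs[nearest_obrir_index(yaw_deg)]
    return np.convolve(voice_block, ir)
```

In the real system this selection is updated continually from the headtracker, so turning the head swaps in the OBRIR measured at the matching orientation.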
Stethoscope acoustics
Audible-frequency sounds from the human body are an invaluable diagnostic tool. For over 200 years, stethoscopes have been used to listen to these sounds. Despite this, the physics of how stethoscopes work remains poorly understood.
While the stethoscope itself is a simple device, its performance depends on how it is coupled to both the patient and the clinician. Existing models do not adequately address these interactions, forcing design choices to be made based on simple heuristics. The aims of this thesis are to provide a theoretical framework for understanding the acoustics of stethoscopes, propose a low-order model to simulate the stethoscope’s response, and develop an experimental methodology to validate the model.
When a stethoscope is pressed against the chest, body sounds induce small perturbations around the equilibrium position of the nonlinear chest-stethoscope system. In this thesis, a lumped-element approach is used to model these perturbations. The resulting models are validated using experiments conducted on a phantom (a laboratory model representing the human chest). Impedance measurements on the phantom and on the human chest allow differences between these systems to be accounted for.
The models presented in this thesis capture the trends associated with each of the key design parameters. Minimising the cavity volume maximises the response, while tubing significantly attenuates low frequencies and introduces distorting standing-wave resonances. Using a diaphragm attenuates the response and shifts the resonances to higher frequencies, but also allows smaller air cavities to be used. Holding a stethoscope against the chest sets the equilibrium position of the coupled system and provides a damping-dominated impedance load on the chestpiece. The strong dependence of a stethoscope’s performance on external factors, such as the properties of the chest and the way it is held, makes it difficult to compare sensors objectively.
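The cavity-volume trend above can be illustrated with the simplest possible lumped element: treating the chestpiece air cavity as an acoustic compliance C = V/(ρc²), the pressure generated by a given volume velocity scales as 1/V, so halving the cavity volume doubles the response. This is a sketch of the modelling style only, with illustrative values, not the thesis's fitted multi-element model.

```python
import math

# Hedged sketch: single-element lumped model of the chestpiece air cavity.
# Cavity as acoustic compliance C = V / (rho * c**2); driven by volume
# velocity Q at angular frequency w, the pressure magnitude is |p| = Q/(w*C),
# so response is inversely proportional to cavity volume. Values illustrative.

RHO = 1.21      # density of air, kg/m^3
C_AIR = 343.0   # speed of sound in air, m/s

def cavity_compliance(volume_m3):
    """Acoustic compliance of an air cavity of the given volume."""
    return volume_m3 / (RHO * C_AIR ** 2)

def cavity_pressure(volume_m3, q_vol_velocity, freq_hz):
    """Pressure magnitude in the cavity for a given volume velocity."""
    w = 2 * math.pi * freq_hz
    return q_vol_velocity / (w * cavity_compliance(volume_m3))
```

Capturing the other trends reported above (tubing resonances, diaphragm stiffness, hand loading) requires adding further lumped elements for each component, which is the approach the thesis validates against phantom measurements.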
The work presented in this thesis dispels various misconceptions about how stethoscopes work and can be used to inform design choices, ultimately improving the diagnostic capabilities of future stethoscopes.
Magdalene College, Cambridge Philosophical Society