
    Sonic Booms in Atmospheric Turbulence (SonicBAT): The Influence of Turbulence on Shaped Sonic Booms

    The objectives of the Sonic Booms in Atmospheric Turbulence (SonicBAT) Program were to develop and validate, via research flight experiments under a range of realistic atmospheric conditions, one numeric turbulence model research code and one classic turbulence model research code using traditional N-wave booms in the presence of atmospheric turbulence, and to apply these models to assess the effects of turbulence on the levels of shaped sonic booms predicted from low boom aircraft designs. The SonicBAT program successfully investigated sonic boom turbulence effects through the execution of flight experiments at two NASA centers, Armstrong Flight Research Center (AFRC) and Kennedy Space Center (KSC), collecting a comprehensive set of acoustic and atmospheric turbulence data that were used to validate the numeric and classic turbulence models developed. The validated codes were incorporated into the PCBoom sonic boom prediction software and used to estimate the effect of turbulence on the levels of shaped sonic booms associated with several low boom aircraft designs. The SonicBAT program was a four-year effort that consisted of turbulence model development and refinement throughout the entire period, as well as extensive flight test planning that culminated in two research flight tests conducted in the second and third years of the program. The SonicBAT team, led by Wyle, included partners from the Pennsylvania State University, Lockheed Martin, Gulfstream Aerospace, Boeing, Eagle Aeronautics, Technical & Business Systems, and the Laboratory of Fluid Mechanics and Acoustics (France). A number of collaborators, including the Japan Aerospace Exploration Agency, also participated by supporting the experiments with human and equipment resources at their own expense. Three NASA centers, AFRC, Langley Research Center (LaRC), and KSC, were essential to the planning and conduct of the experiments.
The experiments involved precision flight of either an F-18A or F-18B executing steady, level passes at supersonic airspeeds in a turbulent atmosphere to create sonic boom signatures that had been distorted by turbulence. The flights spanned a range of atmospheric turbulence conditions at NASA Armstrong and Kennedy in order to provide a variety of conditions for code validation. The SonicBAT experiments at both sites were designed to capture simultaneous F-18A or F-18B onboard flight instrumentation data, high-fidelity ground-based and airborne acoustic data, surface and upper air meteorological data, and additional meteorological data from ultrasonic anemometers and SODARs to determine the local atmospheric turbulence and boundary layer height.
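The N-wave signatures and turbulence distortion at the heart of the program can be illustrated with a minimal sketch: an idealized N-wave pressure trace passed through a short random scattering filter as a crude stand-in for turbulence. The sample rate, peak overpressure, duration, and filter kernel are illustrative assumptions, not SonicBAT measurements or PCBoom output.

```python
import random

def n_wave(t, duration=0.3, peak=40.0):
    """Idealized N-wave: overpressure ramps linearly from +peak to -peak (Pa)."""
    if 0.0 <= t <= duration:
        return peak * (1.0 - 2.0 * t / duration)
    return 0.0

FS = 2000  # sample rate in Hz (assumed)
sig = [n_wave(n / FS) for n in range(FS)]  # one second of signal

# Crude stand-in for turbulence: a short random scattering filter (illustrative only)
random.seed(0)
kernel = [1.0] + [random.gauss(0.0, 0.15) for _ in range(8)]
distorted = [sum(kernel[k] * sig[n - k] for k in range(len(kernel)) if n - k >= 0)
             for n in range(len(sig))]
print(max(sig), min(sig))  # → 40.0 -40.0
```

Real turbulence produces both peaked and rounded distortions of the shock structure; a fixed FIR kernel only hints at that variability.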

    Measurement of Phased Array Point Spread Functions for Use with Beamforming

    Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region: the true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented investigates direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers, and model presence. Results show that the experimental measurements trend with theory with regard to source offset. The source shows the expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.
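For contrast with the measured PSFs investigated here, the theoretical PSF that deconvolution methods usually assume can be sketched as the delay-and-sum beam map of an ideal unit point source. The frequency, element count, and spacing below are assumptions chosen for illustration, not the array of the study.

```python
import cmath, math

C = 343.0   # speed of sound, m/s
F = 2000.0  # analysis frequency, Hz (assumed)
M = 8       # number of microphones (assumed)
D = 0.04    # element spacing, m (assumed)

def steering_vector(theta):
    """Far-field phase delays for a uniform linear array at angle theta."""
    k = 2 * math.pi * F / C
    return [cmath.exp(-1j * k * m * D * math.sin(theta)) for m in range(M)]

def psf(source_theta, scan_thetas):
    """Beam map of an ideal unit point source: the array's theoretical PSF."""
    v_src = steering_vector(source_theta)
    out = []
    for th in scan_thetas:
        w = steering_vector(th)
        # delay-and-sum: correlate steering weights with the source response
        y = sum(wi.conjugate() * vi for wi, vi in zip(w, v_src)) / M
        out.append(abs(y) ** 2)
    return out

scan = [math.radians(a) for a in range(-90, 91)]
beam_map = psf(0.0, scan)
peak = max(range(len(scan)), key=lambda i: beam_map[i])
print(math.degrees(scan[peak]))  # → 0.0 (main lobe at the true source angle)
```

A measured PSF replaces the ideal `steering_vector` response with what the hardware, reflections, and shear layer actually produce.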

    Mathematical modelling and optimization strategies for acoustic source localization in reverberant environments

    This thesis focuses on the use of modern optimization and audio-processing techniques for the precise and robust localization of people within a reverberant environment equipped with microphone arrays. Several aspects of sound localization have been studied, including modelling, algorithms, and the prior calibration that makes it possible to use localization algorithms even when the sensor (microphone) geometry is unknown a priori. Techniques available until now required a large number of microphones to achieve high localization accuracy. During this thesis, however, a new method was developed that improves localization accuracy by more than 30% with a reduced number of microphones. Reducing the number of microphones matters because it translates directly into a drastic reduction in cost and an increase in the versatility of the final system. Additionally, an exhaustive study was carried out of the phenomena that affect the signal acquisition and processing chain, with the aim of improving the previously proposed model. That study deepens the understanding and modelling of the PHAT filter (widely used in acoustic localization) and of the properties that make it especially suitable for localization. As a result of this study, and in collaboration with researchers at the IDIAP institute (Switzerland), a self-calibration system was developed that estimates the microphone positions from the diffuse noise present in a quiet room. This contribution is related to previous coherence-based methods, but it is able to reduce noise using previously known physical parameters (the maximum distance between microphones), achieving better accuracy with less computation time.
Understanding the effects of the PHAT filter made it possible to create a new model that allows a sparse representation of the typical localization scenario. This kind of representation has proven very convenient for localization, enabling a simple treatment of the case in which multiple simultaneous sources exist. The final contribution of this thesis is the characterization of TDOA (time difference of arrival) matrices. Such matrices are especially useful in audio but are not limited to it. Moreover, this study goes beyond sound-based localization: it proposes methods for denoising TDOA measurements based on a low-rank matrix representation, useful not only in localization but also in techniques such as beamforming and self-calibration.
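The PHAT filtering studied in the thesis can be sketched with a toy generalized cross-correlation (GCC-PHAT) delay estimator. A pure-Python DFT keeps the example self-contained, and the circularly delayed random signal is an idealization of a real two-microphone recording.

```python
import cmath, math, random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def gcc_phat(x, y):
    """Delay of y relative to x from the phase of the cross-spectrum (PHAT)."""
    X, Y = dft(x), dft(y)
    G = [yi * xi.conjugate() for xi, yi in zip(X, Y)]
    G = [g / abs(g) if abs(g) > 1e-12 else 0j for g in G]  # keep phase only
    cc = [c.real for c in idft(G)]
    N = len(cc)
    lag = max(range(N), key=lambda i: cc[i])
    return lag if lag <= N // 2 else lag - N  # signed lag in samples

random.seed(0)
N, d = 64, 5
x = [random.gauss(0.0, 1.0) for _ in range(N)]
y = x[-d:] + x[:-d]        # y is x circularly delayed by d samples
print(gcc_phat(x, y))      # → 5
```

Discarding the cross-spectrum magnitude is exactly what makes PHAT attractive in reverberation: the correlation peak depends only on phase, not on the source spectrum.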

    Acoustic sensor network geometry calibration and applications

    In the modern world, we are increasingly surrounded by computing devices with communication links and one or more microphones. Such devices are, for example, smartphones, tablets, laptops or hearing aids. These devices can work together as nodes in an acoustic sensor network (ASN). Such networks are a growing platform that opens the possibility for many practical applications. ASN-based speech enhancement, source localization, and event detection can be applied for teleconferencing, camera control, automation, or assisted living. For these kinds of applications, awareness of auditory objects and their spatial positioning are key properties. In order to provide these two kinds of information, novel methods have been developed in this thesis. Information on the type of auditory objects is provided by a novel real-time sound classification method. Information on the position of human speakers is provided by a novel localization and tracking method. In order to localize with respect to the ASN, the relative arrangement of the sensor nodes has to be known. Therefore, different novel geometry calibration methods were developed.

Sound classification: The first method addresses the task of identification of auditory objects. A novel application of the bag-of-features (BoF) paradigm to acoustic event classification and detection was introduced. It can be used for event and speech detection as well as for speaker identification. The use of both mel frequency cepstral coefficient (MFCC) and Gammatone frequency cepstral coefficient (GFCC) features improves the classification accuracy. By using soft quantization and introducing supervised training for the BoF model, superior accuracy is achieved. The method generalizes well from limited training data, works online, and can be computed in a fraction of real time. By a dedicated training strategy based on a hierarchy of stationarity, the detection of speech in mixtures with noise was realized. This makes the method robust against severe noise levels corrupting the speech signal, so it is possible to provide control information to a beamformer in order to realize blind speech enhancement. A reliable improvement is achieved in the presence of one or more stationary noise sources.

Speaker localization: The localization method enables each node to determine the direction of arrival (DoA) of concurrent sound sources. The author's neuro-biologically inspired speaker localization method for microphone arrays was refined for use in ASNs. By implementing a dedicated cochlear and midbrain model, it is robust against the reverberation found in indoor rooms. In order to better model the unknown number of concurrent speakers, an application of the EM algorithm that realizes probabilistic clustering according to auditory scene analysis (ASA) principles was introduced. Based on this approach, a system for Euclidean tracking in ASNs was designed. Each node applies the node-wise localization method and shares probabilistic DoA estimates, together with an estimate of the spectral distribution, with the network. As this information is relatively sparse, it can be transmitted with low bandwidth, and the system is robust against jitter and transmission errors. The information from all nodes is integrated according to spectral similarity to correctly associate concurrent speakers. By incorporating the intersection angle in the triangulation, the precision of the Euclidean localization is improved. Tracks of concurrent speakers are computed over time, as is shown with recordings in a reverberant room.

Geometry calibration: The central task of geometry calibration has been solved with special focus on sensor nodes equipped with multiple microphones. Novel methods were developed for different scenarios. An audio-visual method was introduced for the calibration of ASNs in video conferencing scenarios: the DoA estimates are fused with visual speaker tracking in order to provide sensor positions in a common coordinate system. A novel acoustic calibration method determines the relative positioning of the nodes from ambient sounds alone. Unlike previous methods that only infer the positioning of distributed microphones, the DoA is incorporated, so it becomes possible to calibrate the orientation of the nodes with high accuracy. This is very important for all applications using the spatial information, as the triangulation error increases dramatically with bad orientation estimates. As speech events can be used, calibration becomes possible without the requirement of playing dedicated calibration sounds. Based on this, an online method employing a genetic algorithm with incremental measurements was introduced. By using the robust speech localization method, the calibration is computed in parallel to the tracking. The online method is able to calibrate ASNs in real time, as is shown with recordings of natural speakers in a reverberant room.

The informed acoustic sensor network: All the new methods are important building blocks for the use of ASNs. The online methods for localization and calibration both make use of the neuro-biologically inspired processing in the nodes, which leads to state-of-the-art results, even in reverberant enclosures. The high robustness and reliability can be improved even further by including the event detection method in order to exclude non-speech events. When all methods are combined, both semantic information on what is happening in the acoustic scene and spatial information on the positioning of the speakers and sensor nodes are automatically acquired in real time. This realizes truly informed audio processing in ASNs. Practical applicability is shown by application to recordings in reverberant rooms. The contribution of this thesis is thus not only to advance the state of the art in automatically acquiring information on the acoustic scene, but also to push the practical applicability of such methods.
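The Euclidean localization step described above can be sketched as intersecting two DoA bearings from nodes at known positions. Coordinates and angles here are assumed for illustration, and the sketch omits the probabilistic weighting by intersection angle and spectral similarity that the thesis adds.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two DoA bearings (world-frame angles) from known node positions."""
    # Each bearing defines a ray p_i + t_i * (cos theta_i, sin theta_i).
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None  # near-parallel bearings: poor intersection angle
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    # Cramer's rule for t1 in [d1, -d2] [t1; t2] = p2 - p1
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two nodes at known positions both hear a speaker at (2, 1) (assumed layout).
src = (2.0, 1.0)
n1, n2 = (0.0, 0.0), (4.0, 0.0)
th1 = math.atan2(src[1] - n1[1], src[0] - n1[0])
th2 = math.atan2(src[1] - n2[1], src[0] - n2[0])
print(triangulate(n1, th1, n2, th2))  # ≈ (2.0, 1.0)
```

The `det` check makes the thesis's point concrete: as the bearings become parallel, the intersection angle collapses and small DoA or orientation errors blow up the position estimate.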

    Informed Sound Source Localization for Hearing Aid Applications


    Online Audio-Visual Multi-Source Tracking and Separation: A Labeled Random Finite Set Approach

    The dissertation proposes an online solution for separating an unknown and time-varying number of moving sources using audio and visual data. The random finite set framework is used for the modeling and fusion of audio and visual data. This enables an online tracking algorithm to estimate the source positions and identities at each time point. With this information, a set of beamformers can be designed to separate each desired source and suppress the interfering sources.

    Robust acoustic beamforming in the presence of channel propagation uncertainties

    Beamforming is a popular multichannel signal processing technique used in conjunction with microphone arrays to spatially filter a sound field. Conventional optimal beamformers assume that the propagation channel between each source and microphone pair is a deterministic function of the source and microphone geometry. However, in real acoustic environments several mechanisms give rise to unpredictable variations in the phase and amplitude of the propagation channels, and in the presence of these uncertainties the performance of beamformers degrades. Robust beamformers are designed to reduce this performance degradation, but they rely on tuning parameters that are not closely related to the array geometry. By modeling the uncertainty in the acoustic channels explicitly, we can derive more accurate expressions for the source-microphone channel variability, and hence beamformers that are well suited to acoustic applications in realistic environments. Through experiments we validate the acoustic channel models, and through simulations we show the performance gains of the associated robust beamformer. Furthermore, by modeling the speech short-time Fourier transform coefficients we are able to design a beamformer framework in the power domain; by utilising spectral subtraction we obtain performance benefits over ideal conventional beamformers. Including the channel uncertainty models in the weight design improves robustness.
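A standard example of the tuning-parameter style of robustness criticized above is diagonal loading of the MVDR beamformer, where a single loading level trades interference rejection for white-noise robustness. The sketch below is a generic textbook construction, not the channel-uncertainty model derived in this work; the covariance and steering vectors are synthetic.

```python
import cmath, math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for complex systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def mvdr_weights(R, v, loading=0.0):
    """MVDR weights with diagonal loading: w = (R + eI)^-1 v / (v^H (R + eI)^-1 v)."""
    n = len(v)
    Rl = [[R[i][j] + (loading if i == j else 0.0) for j in range(n)] for i in range(n)]
    u = solve(Rl, v)
    denom = sum(vi.conjugate() * ui for vi, ui in zip(v, u))
    return [ui / denom for ui in u]

M_MICS = 4
v = [1 + 0j] * M_MICS  # look-direction steering vector (broadside, assumed)
a = [cmath.exp(-1j * 0.3 * math.pi * m) for m in range(M_MICS)]  # interferer
R = [[(1.0 if i == j else 0.0) + a[i] * a[j].conjugate()         # noise + interferer
      for j in range(M_MICS)] for i in range(M_MICS)]

w0 = mvdr_weights(R, v, loading=0.0)
w1 = mvdr_weights(R, v, loading=1.0)
wng = lambda w: 1.0 / sum(abs(x) ** 2 for x in w)  # white noise gain
print(wng(w0) < wng(w1))  # → True: loading improves robustness to channel errors
```

The point of the thesis is precisely that `loading` here has no physical interpretation; an explicit channel-uncertainty model replaces it with quantities tied to the geometry.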

    Sound Based Positioning

    With growing interest in non-GPS positioning, navigation, and timing (PNT), sound-based positioning provides a precise way to locate both sound sources and microphones through audible signals of opportunity (SoOPs). Exploiting SoOPs allows for passive location estimation, but attributing each signal to a specific source location is problematic when several sources emit simultaneously. Using an array of microphones, unique SoOPs are identified and located through steered response beamforming. Sound source signals are then isolated through time-frequency masking to provide clear reference stations from which to estimate the location of a separate microphone through time difference of arrival measurements. Results are shown for real data.
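The final positioning step, locating a receiver from time difference of arrival measurements against known reference stations, can be sketched as a coarse grid search over candidate positions. The station layout, noise-free TDOAs, and grid resolution are assumptions for illustration only.

```python
import math

C = 343.0  # speed of sound, m/s

def tdoas(p, refs):
    """TDOAs at point p for each reference relative to refs[0] (noise-free)."""
    d = [math.dist(p, r) for r in refs]
    return [(di - d[0]) / C for di in d[1:]]

def locate(meas, refs, span=5.0, step=0.05):
    """Coarse grid search for the point whose predicted TDOAs best match meas."""
    best, best_err = None, float("inf")
    steps = int(span / step) + 1
    for i in range(steps):
        for j in range(steps):
            p = (i * step, j * step)
            err = sum((a - b) ** 2 for a, b in zip(tdoas(p, refs), meas))
            if err < best_err:
                best, best_err = p, err
    return best

refs = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]  # assumed station layout
true_pos = (1.5, 2.5)
est = locate(tdoas(true_pos, refs), refs)
print(est)  # ≈ (1.5, 2.5)
```

In practice the measured TDOAs are noisy and the grid search would be replaced or refined by an iterative least-squares solver; the grid makes the hyperbolic geometry easy to see.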

    Array signal processing for source localization and enhancement

    “A common approach to the wide-band microphone array problem is to assume a certain array geometry and then design optimal weights (often in subbands) to meet a set of desired criteria. In addition to the weights, we consider the geometry of the microphone arrangement to be part of the optimization problem. Our approach is to use particle swarm optimization (PSO) to search for the optimal geometry while using an optimal weight design for each particle’s geometry. The resulting directivity indices (DIs) and white noise SNR gains (WNGs) form the basis of the PSO’s fitness function. Another important consideration in the optimal weight design is the set of regularization parameters; by including those parameters in the particles, we optimize their values as well during the operation of the PSO. The proposed method allows the user great flexibility in specifying desired DIs and WNGs over frequency by virtue of the PSO fitness function. Although the above method addresses beam and null steering for fixed locations, real-time scenarios require us to estimate the source positions so that the beam can be steered adaptively. We also investigate source localization of sound and RF sources using machine learning techniques. For the RF source localization, we consider radio frequency identification (RFID) antenna tags. Using a planar RFID antenna array with beam-steering capability and the received signal strength indicator (RSSI) value captured for each beam position, the position of each RFID antenna tag is estimated. The proposed approach is also shown to perform well under various challenging scenarios”--Abstract, page iv
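The PSO loop the abstract relies on can be sketched generically. Here the fitness is a stand-in quadratic distance to an assumed target spacing vector, not the DI/WNG fitness used in the work, and the inertia and acceleration constants are common textbook values.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, lo=-1.0, hi=1.0):
    """Minimal particle swarm: each particle carries a candidate geometry vector."""
    random.seed(1)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    pbest_f = [fitness(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (g[d] - pos[i][d]))        # social
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < fitness(g):
                    g = pos[i][:]
    return g

# Stand-in fitness: squared distance of mic spacings from an assumed target vector.
ideal = [0.3, 0.25, 0.2]
best = pso(lambda p: sum((x - t) ** 2 for x, t in zip(p, ideal)), dim=3)
print(best)  # converges near `ideal`
```

In the actual method, evaluating `fitness` means running the full optimal weight design for the particle's geometry and scoring the resulting DIs and WNGs over frequency.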