374 research outputs found

    MICROPHONE ARRAY OPTIMIZATION IN IMMERSIVE ENVIRONMENTS

    The complex relationship between array gain patterns and microphone distributions limits the application of traditional optimization algorithms to irregular arrays, which show enhanced beamforming performance for human speech capture in immersive environments. This work analyzes the relationship between irregular microphone geometries and spatial filtering performance using statistical methods. Novel geometry descriptors are developed that capture the properties of irregular microphone distributions and show their impact on array performance. General guidelines and optimization methods are proposed for regular and irregular array design in immersive (near-field) environments, with the aim of obtaining superior beamforming ability for speech applications. Optimization times are greatly reduced by objective-function rules that use performance-based geometric descriptions of microphone distributions, circumventing direct array-gain computations over the space of interest. In addition, probabilistic descriptions of acoustic scenes are introduced to incorporate various levels of prior knowledge of the source distribution. To verify the effectiveness of the proposed optimization methods, simulated gain patterns and real SNR results of the optimized arrays are compared with those of traditional regular arrays and of arrays obtained by direct exhaustive search. Results show large SNR enhancements for the optimized arrays over both randomly generated and regular arrays, especially at low microphone densities. The rapid convergence and acceptable processing times observed during the experiments establish the feasibility of the proposed optimization methods for array geometry design in immersive environments where rapid deployment is required with limited knowledge of the acoustic scene, such as mobile platforms and audio surveillance applications.
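The descriptor-driven idea above, scoring candidate geometries without computing array gains over the whole region, can be illustrated with a minimal Python sketch. The descriptors (aperture, minimum spacing) and the surrogate objective are simple stand-ins chosen for illustration; the thesis derives its own performance-based descriptors and objective rules, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def geometry_descriptors(mics):
    # Hypothetical geometric descriptors of a planar microphone layout:
    # aperture (max distance from the centroid) and minimum pairwise spacing.
    centroid = mics.mean(axis=0)
    aperture = np.linalg.norm(mics - centroid, axis=1).max()
    dists = np.linalg.norm(mics[:, None] - mics[None, :], axis=-1)
    min_spacing = dists[dists > 0].min()
    return aperture, min_spacing

def surrogate_objective(mics):
    # Reward a large aperture and discourage clustered microphones, avoiding
    # any direct array-gain computation over the region of interest.
    aperture, min_spacing = geometry_descriptors(mics)
    return aperture + 5.0 * min_spacing

def random_search(n_mics=8, n_iters=200, side=1.0):
    # Draw random irregular geometries in a square region and keep the best.
    best, best_score = None, -np.inf
    for _ in range(n_iters):
        cand = rng.uniform(0.0, side, size=(n_mics, 2))
        score = surrogate_objective(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

Because the objective touches only the geometry, each candidate is scored in microseconds, which is what makes rapid-deployment scenarios plausible.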

    Microphone Array Speech Enhancement Via Beamforming Based Deep Learning Network

    In-car speech enhancement is an application of microphone array speech enhancement to a particular acoustic environment. Enhancing speech inside moving cars remains an active topic, and researchers continue to develop modules that improve speech quality and intelligibility in the car. Passenger dialogue, the sound of other equipment, and a wide range of interference effects are the major challenges for speech separation in the in-car environment. To address these challenges, a novel Beamforming based Deep Learning Network (Bf-DLN) is proposed for speech enhancement. First, the captured microphone array signals are pre-processed with an adaptive beamforming technique, the Linearly Constrained Minimum Variance (LCMV) beamformer. The proposed method then uses a time-frequency representation to transform the pre-processed data into an image: the smoothed pseudo-Wigner-Ville distribution (SPWVD) converts the time-domain speech inputs into images. A convolutional deep belief network (CDBN) extracts the most pertinent features from these transformed images, and the Enhanced Elephant Herding Algorithm (EEHA) selects the desired source while eliminating interfering sources. Experimental results demonstrate the effectiveness of the proposed strategy in removing background noise from the original speech signal; it outperforms existing methods in terms of PESQ, STOI, SSNRI, and SNR. The proposed Bf-DLN achieves a maximum PESQ of 1.98, whereas existing models such as the two-stage Bi-LSTM, DNN-C, and GCN reach 1.82, 1.75, and 1.68, respectively; the proposed method's PESQ is 1.75%, 3.15%, and 4.22% better than the existing GCN, DNN-C, and Bi-LSTM techniques. The efficacy of the proposed method is validated by these experiments.
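The LCMV pre-processing stage can be sketched in a few lines. The closed-form weights below implement the standard linearly constrained minimum variance solution for a single frequency bin; the array size, noise covariance, and constraint values are illustrative, not taken from the paper:

```python
import numpy as np

def lcmv_weights(R, C, f):
    # LCMV weights for one frequency bin: minimize w^H R w subject to C^H w = f,
    # which gives the closed form w = R^-1 C (C^H R^-1 C)^-1 f.
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

# Toy usage: one distortionless constraint toward a far-field source on a 4-mic ULA.
M = 4
steer = np.exp(-1j * np.pi * np.arange(M) * np.sin(0.3))[:, None]  # steering vector
R = np.eye(M)                          # noise covariance (identity for the sketch)
w = lcmv_weights(R, steer, np.array([[1.0 + 0j]]))
```

With an identity covariance the solution reduces to delay-and-sum toward the constrained direction; with a measured interference-plus-noise covariance the same formula places nulls on the interferers.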

    Informed algorithms for sound source separation in enclosed reverberant environments

    While humans can separate a sound of interest amidst a cacophony of contending sounds in an echoic environment, machine-based methods lag behind in solving this task. This thesis thus aims at improving the performance of audio separation algorithms when they are informed, i.e., have access to source location information. These locations are assumed to be known a priori in this work, for example from video processing. Initially, a multi-microphone array based method combined with binary time-frequency masking is proposed. A robust least squares frequency invariant data independent beamformer designed with the location information is utilized to estimate the sources. To further enhance the estimated sources, binary time-frequency masking based post-processing is used, but cepstral domain smoothing is required to mitigate musical noise. To tackle the under-determined case and further improve separation performance at higher reverberation times, a two-microphone based method is described which is inspired by human auditory processing and generates soft time-frequency masks. In this approach, interaural level difference, interaural phase difference, and mixing vectors are probabilistically modeled in the time-frequency domain, and the model parameters are learned through the expectation-maximization (EM) algorithm. A direction vector is estimated for each source, using the location information, and is used as the mean parameter of the mixing vector model. Soft time-frequency masks are used to reconstruct the sources. A spatial covariance model is then integrated into the probabilistic framework; it encodes the spatial characteristics of the enclosure and further improves separation performance in challenging scenarios, i.e., when sources are in close proximity and when the level of reverberation is high.
Finally, new dereverberation-based pre-processing is proposed, based on a cascade of three dereverberation stages, each of which enhances the two-microphone reverberant mixture. The dereverberation stages are based on amplitude spectral subtraction, in which the late reverberation is estimated and suppressed. The combination of such dereverberation-based pre-processing and soft mask separation yields the best separation performance. All methods are evaluated on real and synthetic mixtures formed, for example, from speech signals from the TIMIT database and measured room impulse responses.
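The interaural-cue soft-masking step can be sketched as follows. The sketch assumes the per-source ILD/IPD means are already known (hypothetical values below); in the thesis they are learned by EM together with mixing-vector and spatial covariance models, which are omitted here:

```python
import numpy as np

def ild_ipd(XL, XR, eps=1e-12):
    # Interaural level difference (dB) and phase difference per time-frequency bin.
    ild = 20.0 * np.log10((np.abs(XL) + eps) / (np.abs(XR) + eps))
    ipd = np.angle(XL * np.conj(XR))
    return ild, ipd

def soft_masks(ild, ipd, ild_means, ipd_means, s_ild=3.0, s_ipd=0.5):
    # Gaussian-likelihood soft masks: posterior probability of each source per
    # bin, with phase differences wrapped onto (-pi, pi] before scoring.
    scores = np.stack([
        np.exp(-0.5 * ((ild - mi) / s_ild) ** 2
               - 0.5 * (np.angle(np.exp(1j * (ipd - mp))) / s_ipd) ** 2)
        for mi, mp in zip(ild_means, ipd_means)
    ])
    return scores / scores.sum(axis=0, keepdims=True)
```

Multiplying each source's mask with the mixture spectrogram and inverting the STFT then yields the soft-mask reconstruction described above.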

    Practical Speech Enhancement for Diverse Environments with Varying Noise Characteristics (雑音特性の変動を伴う多様な環境で実用可能な音声強調)

    University of Tsukuba (筑波大学), 201

    Development of an automated speech recognition interface for personal emergency response systems

    Background: Demands on long-term-care facilities are predicted to increase at an unprecedented rate as the baby boomer generation reaches retirement age. Aging-in-place (i.e., aging at home) is the desire of most seniors and is also a good option to reduce the burden on an over-stretched long-term-care system. Personal Emergency Response Systems (PERSs) help enable older adults to age-in-place by providing them with immediate access to emergency assistance. Traditionally they operate with push-button activators that connect the occupant via speaker-phone to a live emergency call-centre operator. If occupants do not wear the push button or cannot access the button, then the system is useless in the event of a fall or emergency. Additionally, a false alarm or failure to check in at a regular interval will trigger a connection to a live operator, which can be unwanted and intrusive to the occupant. This paper describes the development and testing of an automated, hands-free, dialogue-based PERS prototype.
    Methods: The prototype system was built using a ceiling-mounted microphone array, an open-source automatic speech recognition engine, and a 'yes'/'no' response dialogue modelled after an existing call-centre protocol. Testing compared a single microphone versus a microphone array with nine adults in both noisy and quiet conditions. Dialogue testing was completed with four adults.
    Results and discussion: The microphone array demonstrated improvement over the single microphone. In all cases, dialogue testing resulted in the system reaching the correct decision about the kind of assistance the user was requesting. Further testing is required with elderly voices and under different noise conditions to ensure the appropriateness of the technology. Future developments include integration of the system with an emergency detection method as well as communication enhancement using features such as barge-in capability.
    Conclusion: The use of an automated dialogue-based PERS has the potential to provide users with more autonomy in decisions regarding their own health and more privacy in their own home.
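The yes/no dialogue logic can be sketched as a tiny decision procedure. The question wording, branch names, and outcomes below are hypothetical; the actual call-centre protocol the paper models is not reproduced in the abstract:

```python
def pers_dialogue(responses):
    # Minimal yes/no PERS dialogue over recognized words ('yes'/'no').
    # Unrecognized or missing answers default to 'no', the safe branch here.
    answers = iter(responses)
    if next(answers, 'no') != 'yes':     # "Do you need assistance?"
        return 'no_action'
    if next(answers, 'no') == 'yes':     # "Is this an emergency?"
        return 'connect_operator'
    return 'notify_contact'
```

Restricting the vocabulary to two words is what lets an off-the-shelf open-source recognizer reach reliable decisions even with the far-field ceiling array.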

    Exploiting CNNs for Improving Acoustic Source Localization in Noisy and Reverberant Conditions

    This paper discusses the application of convolutional neural networks (CNNs) to minimum variance distortionless response localization schemes. We investigate direction of arrival estimation problems in noisy and reverberant conditions using a uniform linear array (ULA). CNNs are used to process the multichannel data from the ULA and to improve the data fusion scheme performed in the steered response power computation. The CNNs improve the incoherent frequency fusion of the narrowband response power by weighting its components, reducing the deleterious effects of the components affected by artifacts due to noise and reverberation. The use of CNNs avoids the need to first encode the multichannel data into selected acoustic cues, with the advantage of exploiting their ability to recognize geometric pattern similarity. Experiments with both simulated and real acoustic data demonstrate the superior localization performance of the proposed SRP beamformer with respect to other state-of-the-art techniques.
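The incoherent frequency fusion being weighted can be sketched as follows: align each channel to every candidate point, sum coherently per frequency, then combine the narrowband powers with per-frequency weights. The weights are uniform in this toy version, standing in for the CNN-predicted weights of the paper:

```python
import numpy as np

def srp_map(X, mic_pos, grid, freqs, c=343.0, weights=None):
    # Steered response power by incoherent frequency fusion.
    # X: (M, F) STFT snapshot; mic_pos: (M, 2); grid: candidate points (G, 2).
    if weights is None:
        weights = np.ones(len(freqs))
    powers = []
    for g in grid:
        delays = np.linalg.norm(mic_pos - g, axis=1) / c        # (M,)
        comp = np.exp(2j * np.pi * np.outer(delays, freqs))     # delay compensation
        narrowband = np.abs((comp * X).sum(axis=0)) ** 2        # (F,) SRP per bin
        powers.append(float(weights @ narrowband))              # weighted fusion
    return np.array(powers)
```

Down-weighting bins dominated by reverberant artifacts before the sum is exactly where a learned weighting can outperform the uniform fusion shown here.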

    Clustering Inverse Beamforming and multi-domain acoustic imaging approaches for vehicles NVH

    The interior sound perceived in vehicle cabins is a very important attribute for the user. Experimental acoustic imaging methods such as beamforming and near-field acoustic holography are used in vehicle noise and vibration (NVH) studies because they are capable of identifying the noise sources contributing to the overall noise perceived inside the cabin. However, these techniques are often relegated to the troubleshooting phase, thus requiring additional experiments for more detailed NVH analyses. It is therefore desirable that such methods evolve towards more refined solutions capable of providing richer and more detailed information. This thesis proposes a modular and multi-domain approach involving direct and inverse acoustic imaging techniques to provide quantitative and accurate results in the frequency, time, and angle domains, targeting three relevant types of problems in vehicle NVH: identification of exterior sources affecting interior noise, interior noise source identification, and analysis of noise sources produced by rotating machines. The core finding of this thesis is a novel inverse acoustic imaging method named Clustering Inverse Beamforming (CIB). The method is grounded in a statistical processing of an Equivalent Source Method (ESM) formulation, combining solutions of the same inverse problem obtained from different subsamples of the available experimental data. In this way, accurate localization, a reliable ranking of the identified sources in the frequency domain, and their separation into uncorrelated phenomena are obtained. CIB is also exploited in this work to allow reconstruction of the time evolution of the identified sources. Finally, a methodology is proposed for decomposing the acoustic image of the sound field generated by a rotating machine as a function of the angular evolution of the machine shaft.
    This set of findings aims at contributing to a new paradigm of acoustic imaging applications in vehicle NVH, supporting all stages of vehicle design with time-saving and cost-efficient experimental techniques. The proposed approaches are validated on several simulated and real experiments.
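A minimal sketch of the subsample-and-combine idea behind CIB, assuming a Tikhonov-regularized ESM inversion and a plain average of the magnitude maps in place of the thesis's more elaborate statistical clustering of solutions:

```python
import numpy as np

rng = np.random.default_rng(1)

def esm_solve(A, p, reg=1e-3):
    # Tikhonov-regularized Equivalent Source Method inversion:
    # q = argmin ||A q - p||^2 + reg ||q||^2, via the normal equations.
    AH = A.conj().T
    return np.linalg.solve(AH @ A + reg * np.eye(A.shape[1]), AH @ p)

def clustering_inverse(A, p, n_runs=20, keep=0.75):
    # Solve the same inverse problem on random microphone subsamples,
    # randomizing its mathematical formulation, and combine the solutions.
    M = A.shape[0]
    k = max(A.shape[1], int(keep * M))
    maps = []
    for _ in range(n_runs):
        idx = rng.choice(M, size=k, replace=False)
        maps.append(np.abs(esm_solve(A[idx], p[idx])))
    return np.mean(maps, axis=0)
```

Spurious sources that appear in only some subsample solutions are attenuated by the combination, which is the mechanism behind the improved dynamic range claimed for the method.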