42 research outputs found

    Localization of sound sources: a systematic review

    Sound localization is a broad and active field of research with many useful applications, including communication, radar, medical aid, and speech enhancement, to name but a few. Many methods have been presented in this field in recent years, and various types of microphone arrays serve to sense the incoming sound. This paper presents an overview of the importance of sound localization in different applications, along with the uses and limitations of ad-hoc microphone arrays compared with other microphone configurations; approaches for overcoming these limitations are also presented. Detailed explanations are given of several existing methods in the recent literature that perform sound localization using microphone arrays. The methods are studied comparatively, together with the factors that influence the choice of one method over another. The review thus forms a basis for choosing the method best suited to our use.
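One family of methods that such reviews typically cover is time-difference-of-arrival (TDOA) estimation between microphone pairs. As a minimal illustrative sketch (not taken from the paper), the widely used GCC-PHAT technique can be implemented as follows:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12           # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # reorder so index 0 corresponds to lag -max_shift
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                # delay in seconds
```

Given the delay and the microphone spacing, the direction of arrival follows from simple far-field geometry.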

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session). The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which it was not). The nasality level of the participants' speech was measured by nasometer and reflected in nasalance scores (in %). Errorless learners practiced producing hypernasal speech against a threshold nasalance score of 10% at the beginning, which gradually increased to 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined as the proportion of speech with a nasalance score below the threshold. The results showed that, during the acquisition phase, errorless learners displayed fewer errors (17.7% vs. 50.7% for errorful learners) and a higher mean nasalance score (46.7% vs. 31.3%). Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America.
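The error measure described above (the proportion of speech falling below the current nasalance threshold) can be sketched in a few lines; the scores and the five-step threshold schedule here are hypothetical, chosen only to illustrate the rising (errorless) versus falling (errorful) target design:

```python
def error_proportion(nasalance_scores, threshold):
    """Fraction of utterances scoring below the target nasalance threshold."""
    below = sum(1 for s in nasalance_scores if s < threshold)
    return below / len(nasalance_scores)

# Illustrative schedules: errorless targets rise, errorful targets
# are the same values presented in reversed order.
errorless_thresholds = [10, 20, 30, 40, 50]
errorful_thresholds = errorless_thresholds[::-1]
```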

    Development of methodologies for centimeter-accurate indoor localization of smartphones

    Doctoral thesis in Electrical Engineering. This thesis describes the design and implementation of a reliable centimeter-level indoor positioning system fully compatible with a conventional smartphone. The proposed system takes advantage of the smartphone's audio I/O and processing capabilities to perform acoustic ranging in the audio band using non-invasive audio signals, and it has been developed with applications in mind that require high accuracy, such as augmented reality, virtual reality, gaming, and audio guides. The system works in a distributed operation mode, i.e., each smartphone is able to obtain its own position using only acoustic signals. To support the positioning system, a Wireless Sensor Network (WSN) of synchronized acoustic beacons is used. To keep the infrastructure in sync, we developed an Automatic Time Synchronization and Syntonization (ATSS) protocol with a standard deviation of the sync offset error below 1.25 μs. Using an improved Time Difference of Arrival (TDoA) estimation approach (which takes advantage of the beacon signals' periodicity) and by performing Non-Line-of-Sight (NLoS) mitigation, we were able to obtain very stable and accurate position estimates, with an absolute mean error of less than 10 cm in 95% of the cases and a mean standard deviation of 2.2 cm for a position refresh period of 350 ms.
This thesis describes the design and implementation of an indoor localization system fully compatible with a conventional smartphone. The proposed system exploits the smartphone's audio acquisition and processing capabilities to measure distances using acoustic signals in the audible band; non-invasive audio signals were used, i.e., signals with reduced perceptual impact on humans. The system was developed with applications in mind that demand centimeter-level accuracy, such as augmented reality, virtual reality, gaming, or virtual guides. A low-cost beacon infrastructure supported by a wireless sensor network was used. To keep the infrastructure synchronized, an Automatic Time Synchronization and Syntonization (ATSS) protocol was developed, which guarantees a standard deviation of the offset error below 1.25 μs. Each smartphone performs MT-TDoA measurements that are then used by the hyperbolic localization algorithm. The resulting position estimates are stable and accurate, with an absolute mean error of less than 10 cm in 95% of the cases and a mean standard deviation of 2.2 cm, for a position refresh period of 350 ms.
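The hyperbolic localization step mentioned above, in which TDoA measurements relative to a reference beacon are solved for position, can be sketched with a small Gauss-Newton solver. This is a generic illustration under far simpler assumptions than the thesis (known 2-D beacon positions, noise-free TDoAs), not the authors' implementation:

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def tdoa_residuals(p, beacons, tdoas):
    """Residuals of the measured TDoAs (relative to beacon 0) at position p."""
    d = np.linalg.norm(beacons - p, axis=1)
    return (d[1:] - d[0]) / C - tdoas

def locate(beacons, tdoas, p0, iters=50):
    """Gauss-Newton solve of the hyperbolic TDoA equations."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(beacons - p, axis=1)
        u = (p - beacons) / d[:, None]         # unit vectors beacon -> p
        J = (u[1:] - u[0]) / C                 # Jacobian of the residuals
        r = tdoa_residuals(p, beacons, tdoas)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-9:
            break
    return p
```

With four beacons at the corners of a 10 m square, the solver recovers a position in the interior from three TDoAs and a rough initial guess.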

    Embedded artificial audition system optimized for a mobile robot equipped with a microphone array

    In an uncontrolled environment, a robot must be able to interact with people autonomously. This autonomy must also include interaction through the human voice. When the interaction takes place at a distance of a few meters, phenomena such as reverberation and ambient noise must be taken into account to perform tasks such as speech or speaker recognition effectively. To this end, the robot must be able to localize, track, and separate the sound sources present in its environment. The recent increase in processor computing power and the decrease in their energy consumption now make it possible to run these artificial audition systems on embedded hardware in real time. Robot audition is a relatively young field with two main artificial audition libraries: ManyEars and HARK. Until now, the number of microphones has generally been limited to eight, because the computational load grows rapidly as microphones are added. Moreover, it is sometimes difficult to use these libraries with robots of varied geometries, since they must be calibrated manually. This thesis presents the ODAS library, which addresses these difficulties. To make localization and separation more robust for closed microphone arrays, ODAS introduces a directivity model for each microphone. A hierarchical search over space also reduces the amount of computation required. In addition, a measure of the uncertainty of the sound's time of arrival is introduced to adjust several parameters automatically and thus avoid manual calibration of the system. ODAS also proposes a new sound source tracking module that employs Kalman filters rather than particle filters.
The results show that the proposed methods reduce the number of false detections during localization, improve tracking robustness for multiple sound sources, and increase separation quality by 2.7 dB in the case of a minimum-variance beamformer. The required computation decreases by a factor of up to 4 for localization and up to 30 for tracking compared with the ManyEars library. The sound source separation module exploits the geometry of the microphone array more effectively, without the need to measure and calibrate the system manually. With the observed performance, the ODAS library also opens the door to applications in acoustic drone detection, localization of external sounds for more efficient navigation of autonomous vehicles, hands-free home assistants, and integration into hearing aids.
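The tracking design choice described above (Kalman filters instead of particle filters) can be illustrated with a minimal constant-velocity Kalman filter over a 2-D direction estimate. This is a generic sketch, not ODAS's actual tracking module, and the state layout and noise values are assumptions:

```python
import numpy as np

class SourceTracker:
    """Constant-velocity Kalman filter for a 2-D sound-source direction.
    State: [x, y, vx, vy]; measurement: noisy direction [x, y]."""

    def __init__(self, dt=0.1, q=1e-3, r=1e-2):
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt       # position integrates velocity
        self.H = np.eye(2, 4)                  # we observe position only
        self.Q = q * np.eye(4)                 # process noise
        self.R = r * np.eye(2)                 # measurement noise

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured direction z
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Because the filter is linear-Gaussian, each step costs a handful of small matrix products, which is one reason a Kalman tracker can be far cheaper than a particle filter.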

    Adaptive time-frequency analysis for cognitive source separation

    This thesis introduces a framework for separating two speech sources in non-ideal, reverberant environments. The source separation architecture tries to mimic the extraordinary abilities of the human auditory system when performing source separation. A movable human dummy head residing in a normal office room is used to model the conditions humans experience when listening to complex auditory scenes. The thesis first investigates how the orthogonality of speech sources in the time-frequency domain decreases with increasing reverberation time, and shows that separation schemes based on ideal binary time-frequency masks are suitable for source separation even under such human-like reverberant conditions. Prior to separating the sources, the movable dummy head analyzes the auditory scene and estimates the positions of the sources and the fundamental frequency tracks. Source localization is implemented using an iterative approach based on the interaural time differences between the two ears and achieves a localization blur of less than three degrees in the azimuth plane. The source separation architecture implemented in this thesis extracts the orthogonal time-frequency points of the speech mixtures, combining the positive features of the STFT with those of the cochleagram representation. The overall goal of the source separation is to find the ideal STFT mask. The core separation process, however, is based on the analysis of the corresponding region in an additionally computed cochleagram, which yields more reliable Interaural Time Difference (ITD) estimates for separation. Several algorithms based on the ITD and the fundamental frequency of the target source are evaluated for their source separation capabilities. To enhance the capabilities of the individual algorithms, their results are combined to compute a final estimate.
In this way, SIR gains of approximately 30 dB are achieved for two-source scenarios; for three-source scenarios, SIR gains of up to 16 dB are attained. Compared with standard binaural signal processing approaches such as DUET and fixed beamforming, the presented approach achieves up to 29 dB of SIR gain. This dissertation describes a framework for separating two sources in non-ideal, reverberant environments. The source separation architecture is modeled on the extraordinary separation abilities of human hearing. To imitate the conditions a human experiences in a complex auditory scene, a movable human dummy head located in a typical office room is used. As a first step, this dissertation analyzes how the orthogonality of speech signals in the time-frequency domain decreases with different reverberation times. Despite this drop in orthogonality, separation approaches based on ideal binary masks are suitable for separating speech signals even under human-like, reverberant conditions. Before the sources are separated, the movable dummy head analyzes the auditory scene and estimates the positions of the individual sources and the fundamental frequency tracks of the speakers. Source localization is realized by an iterative approach based on the time differences between the two ears and achieves a localization accuracy of less than three degrees in the azimuth plane.
The source separation architecture implemented in this work extracts the orthogonal time-frequency points of the speech mixtures. For this purpose, the positive properties of the STFT are combined with those of the cochleagram. The goal is to find the ideal STFT mask. The core source separation, however, is based on the analysis of the corresponding region of an additionally computed cochleagram, which enables a far more reliable evaluation of the time differences between the two ears. Several algorithms based on the interaural time differences and the fundamental frequency of the target source are evaluated with respect to their separation capabilities. To improve on the individual algorithms, their results are combined into a final estimate. In this way, SIR gains of approximately 30 dB can be achieved for scenarios with two sources; for scenarios with three sources, gains of up to 16 dB are achieved. Compared with standard binaural source separation methods such as DUET or fixed beamforming, the presented approach gains up to 29 dB of SIR.
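The ideal binary time-frequency mask at the heart of the separation scheme can be sketched in a few lines; the 0 dB local criterion and the toy spectrogram values below are illustrative assumptions, not the thesis's parameters:

```python
import numpy as np

def ideal_binary_mask(S_target, S_interf, lc_db=0.0):
    """Ideal binary mask: keep a time-frequency cell when the target's
    magnitude exceeds the interferer's by at least `lc_db` dB."""
    ratio_db = 20 * np.log10(np.abs(S_target) / (np.abs(S_interf) + 1e-12) + 1e-12)
    return (ratio_db > lc_db).astype(float)
```

Multiplying the mixture spectrogram by this mask and inverting the STFT yields the separated target, which is why the mask is called "ideal": it requires oracle knowledge of both sources.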

    Predicting room acoustical behavior with the ODEON computer model


    On the applicability of models for outdoor sound (A)


    Acoustic Source Localisation in constrained environments

    Acoustic Source Localisation (ASL) is a problem with real-world applications across multiple domains, from smart assistants to acoustic detection and tracking. Yet, despite the attention it has received in recent years, a technique for rapid and robust ASL remains elusive, not least in the constrained environments in which such techniques are most likely to be deployed. In this work, we address some of these limitations by presenting improvements to ASL methods under three commonly encountered constraints: the number and configuration of sensors; the limited signal sampling potentially available; and the nature and volume of training data required to accurately estimate Direction of Arrival (DOA) when deploying a particular supervised machine learning technique. Regarding the number and configuration of sensors, we find that accuracy can be maintained at the level of the state-of-the-art Steered Response Power (SRP) method while reducing computation sixfold, based on direct optimisation of well-known ASL formulations. Moreover, we find that the circular microphone configuration is the least desirable, as it yields the highest localisation error. Regarding signal sampling, we demonstrate that the computer-vision-inspired algorithm presented in this work, which extracts selected keypoints from the signal spectrogram and uses them to select signal samples, outperforms an audio fingerprinting baseline while maintaining a compression ratio of 40:1.
Regarding the training data employed in machine learning ASL techniques, we show that training on music data yields a 19% improvement over a noise-data baseline while maintaining accuracy using only 25% of the training data, and that training on speech rather than noise improves DOA estimation by an average of 17%, outperforming the Generalised Cross-Correlation technique by 125% in scenarios in which the test and training acoustic environments are matched. Acknowledgment: Heriot-Watt University James Watt Scholarship (JWS) in the School of Engineering & Physical Sciences.
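The Steered Response Power (SRP) baseline referred to above can be sketched for the far-field, linear-array case; the PHAT weighting, array geometry, and angular grid below are illustrative assumptions rather than the thesis's configuration:

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def srp_doa(signals, mic_x, fs, angles_deg):
    """Far-field SRP with PHAT weighting for a linear array:
    steer over candidate azimuths and return the most powerful one.
    `signals` is (n_mics, n_samples); `mic_x` holds mic x-positions in m."""
    n = signals.shape[1] * 2
    spectra = np.fft.rfft(signals, n=n, axis=1)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    powers = []
    for ang in np.deg2rad(angles_deg):
        delays = mic_x * np.cos(ang) / C               # per-mic steering delays
        steered = spectra * np.exp(2j * np.pi * freqs * delays[:, None])
        steered = steered / (np.abs(steered) + 1e-12)  # PHAT whitening
        powers.append(np.sum(np.abs(steered.sum(axis=0)) ** 2))
    return angles_deg[int(np.argmax(powers))]
```

The cost of this grid search over candidate directions is exactly what motivates the sixfold computation reduction reported above.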