14 research outputs found

    An exploratory assessment of multistream deep neural network fusion: design and applications

    Doctoral thesis, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2022. Machine-learning methods depend heavily on how well the selected feature extractor can represent the raw input data. Nowadays, we have more data and more computational capacity to deal with them. Convolutional Neural Networks give us a network that is easier to train and generalizes much better than usual. However, a good amount of essential features are discarded in this process, even when using a powerful CNN. Multistream Convolutional Neural Networks (M-CNNs) can process more than one input using separate streams and can be designed using any classical CNN architecture as a base. The use of M-CNNs generates more feature information and thus improves the overall outcome. This work explored M-CNN architectures and how the stream signals behave during processing, arriving at a novel M-CNN cross-fusion strategy. The new module is first validated on a standard dataset, CIFAR-10, and compared with the corresponding networks (single-stream CNN and late-fusion M-CNN). Early results in this scenario showed that our adapted model outperformed all the abovementioned models by at least 28%. Expanding the test, we used the backbones of former state-of-the-art image-classification networks and additional datasets to investigate whether the technique can put these designs back in the game. On the NORB dataset, we showed that we could increase accuracy by up to 63.21% compared to basic M-CNN structures. Varying our applications, the mAP@75 on the BDD100K multi-object detection and recognition dataset improved by 50.16% compared to its unadapted version, even when trained from scratch. The proposed fusion demonstrated robustness and stability, even when distractors were used as inputs. While our goal is to reuse previous state-of-the-art architectures with few modifications, we also expose the disadvantages of our explored strategy.
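
    The cross-fusion idea can be pictured as two streams that process their inputs independently, exchange information at an intermediate stage, and are then merged for classification. Below is a minimal sketch of such a two-stream network, assuming PyTorch; the module names (CrossFusion, TwoStreamNet), the 1x1-convolution exchange rule, and the layer sizes are illustrative assumptions, not the fusion module actually proposed in the thesis.

```python
# A minimal sketch of a two-stream CNN with an intermediate cross-fusion
# step, assuming PyTorch. The exchange rule here is illustrative only.
import torch
import torch.nn as nn

class CrossFusion(nn.Module):
    """Exchange information between two streams by mixing their feature maps."""
    def __init__(self, channels):
        super().__init__()
        # 1x1 convolutions project each stream before the cross-exchange.
        self.proj_a = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj_b = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, a, b):
        # Each stream receives a projected copy of the other stream's features.
        return a + self.proj_b(b), b + self.proj_a(a)

class TwoStreamNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.stem_a, self.stem_b = block(3, 32), block(3, 32)
        self.fuse = CrossFusion(32)
        self.head_a, self.head_b = block(32, 64), block(32, 64)
        self.classifier = nn.Linear(64 * 2 * 8 * 8, num_classes)

    def forward(self, xa, xb):            # two inputs, e.g. RGB + a filtered view
        a, b = self.stem_a(xa), self.stem_b(xb)
        a, b = self.fuse(a, b)            # cross-fusion instead of late fusion
        a, b = self.head_a(a), self.head_b(b)
        out = torch.cat([a, b], dim=1).flatten(1)
        return self.classifier(out)

net = TwoStreamNet()
x = torch.randn(4, 3, 32, 32)             # CIFAR-10-sized inputs
logits = net(x, x)                         # shape: (4, 10)
```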

    Sound Object Recognition

    Humans are constantly exposed to a variety of acoustic stimuli ranging from music and speech to more complex acoustic scenes like a noisy marketplace. The human auditory perception mechanism is able to analyze these different kinds of sounds and extract meaningful information, suggesting that the same processing mechanism is capable of representing different sound classes. In this thesis, we test this hypothesis by proposing a high-dimensional sound object representation framework that captures the various modulations of sound by performing a multi-resolution mapping. We then show that this model is able to capture a wide variety of sound classes (speech, music, soundscapes) by applying it to the tasks of speech recognition, speaker verification, musical instrument recognition, and acoustic soundscape recognition. We propose a multi-resolution analysis approach that captures the detailed variations in the spectral characteristics as a basis for recognizing sound objects. We then show how such a system can be fine-tuned to capture both the message information (speech content) and the messenger information (speaker identity). This system is shown to outperform state-of-the-art systems in noise robustness on both automatic speech recognition and speaker verification tasks. The proposed analysis scheme, with its ability to analyze temporal modulations, was used to capture musical sound objects. We showed that, using a model of cortical processing, we were able to accurately replicate human perceptual similarity judgments and to obtain good classification performance on a large set of musical instruments. We also show that neither the spectral features alone nor the marginals of the proposed model are sufficient to capture human perception. Moreover, we were able to extend this model to continuous musical recordings by proposing a new method to extract notes from the recordings. Complex acoustic scenes like a sports stadium have multiple sources producing sounds at the same time. We show that the proposed representation scheme can not only capture these complex acoustic scenes but also provides a flexible mechanism to adapt to target sources of interest. The human auditory perception system is known to be a complex system with both bottom-up analysis pathways and top-down feedback mechanisms. The top-down feedback enhances the output of the bottom-up system to better realize the target sounds. In this thesis we propose an implementation of a top-down attention module that is complementary to the high-dimensional acoustic feature extraction mechanism. This attention module is a distributed system operating at multiple stages of representation, effectively acting as a retuning mechanism that adapts the same system to different tasks. We showed that such an adaptation mechanism is able to tremendously improve the performance of the system at detecting the target source in the presence of various distracting background sources.
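
    The core of the framework is a multi-resolution mapping of the sound's modulations. The sketch below illustrates the general idea with log-magnitude STFTs computed at several window lengths, assuming NumPy/SciPy; the window lengths and the function name multiresolution_features are illustrative choices, not the thesis's cortical model.

```python
# A minimal sketch of multi-resolution spectro-temporal analysis, assuming
# NumPy/SciPy. Short windows favor temporal detail, long windows spectral
# detail; the specific window lengths are illustrative.
import numpy as np
from scipy.signal import stft

def multiresolution_features(x, fs, win_ms=(8, 32, 128)):
    """Stack log-magnitude STFTs computed at several window lengths."""
    feats = []
    for ms in win_ms:
        nper = int(fs * ms / 1000)
        _, _, Z = stft(x, fs=fs, nperseg=nper, noverlap=nper // 2)
        feats.append(np.log1p(np.abs(Z)))
    return feats  # one time-frequency map per resolution

fs = 16000
x = np.random.randn(fs)          # 1 s of audio (placeholder signal)
maps = multiresolution_features(x, fs)
print([m.shape for m in maps])   # three maps of increasing frequency resolution
```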

    Computational Models of Representation and Plasticity in the Central Auditory System

    The performance of automated speech processing tasks such as speech recognition and speech activity detection degrades rapidly in challenging acoustic conditions. It is therefore necessary to engineer systems that extract meaningful information from sound while exhibiting invariance to background noise, different speakers, and other disruptive channel conditions. In this thesis, we take a biomimetic approach to these problems and explore computational strategies used by the central auditory system that underlie neural information extraction from sound. In the first part of this thesis, we explore coding strategies employed by the central auditory system that yield neural responses with desirable noise robustness. We specifically demonstrate that a coding strategy based on sustained neural firings yields richly structured spectro-temporal receptive fields (STRFs) that reflect the structure and diversity of natural sounds. The emergent receptive fields are comparable to known physiological neuronal properties and can be employed as a signal processing strategy to improve noise invariance in a speech recognition task. Next, we extend the model of sound encoding based on spectro-temporal receptive fields to incorporate the cognitive effects of selective attention. We propose a framework for modeling attention-driven plasticity that induces changes to receptive fields driven by task demands. We define a discriminative cost function whose optimization and solution reflect a biologically plausible strategy for STRF adaptation that helps listeners better attend to target sounds. Importantly, the adaptation patterns predicted by the framework correspond closely to known neurophysiological data. We next generalize the framework to act on the spectro-temporal dynamics of task-relevant stimuli and make predictions for tasks that have yet to be experimentally measured. We argue that our generalization represents a form of object-based attention, which helps shed light on the current debate about auditory attentional mechanisms. Finally, we show how attention-modulated STRFs form a high-fidelity representation of the attended target, and we apply our results to obtain improvements in a speech activity detection task. Overall, the results of this thesis improve our general understanding of central auditory processing, and our computational frameworks can be used to guide further studies in animal models. Furthermore, our models inspire signal processing strategies that are useful for automated speech and sound processing tasks.
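
    To make the STRF-based encoding concrete, the sketch below applies a Gabor-like STRF to a spectrogram in a standard linear-nonlinear fashion, assuming NumPy/SciPy. The Gabor parameterization (rate, scale) is a common idealization in the literature; the thesis instead derives its STRFs from a sustained-firing coding strategy, so this is illustrative only.

```python
# A minimal sketch of filtering a spectrogram with a spectro-temporal
# receptive field (STRF), assuming NumPy/SciPy. Gabor-like STRF shapes
# are a common idealization, not the receptive fields learned in the thesis.
import numpy as np
from scipy.signal import convolve2d

def gabor_strf(n_freq=16, n_time=16, rate=4.0, scale=0.5):
    """Build a separable Gabor-like STRF: a spectral ripple (scale) times a
    temporal modulation (rate), each tapered by a Hann window."""
    t = np.linspace(0, 1, n_time)
    f = np.linspace(0, 1, n_freq)
    temporal = np.cos(2 * np.pi * rate * t) * np.hanning(n_time)
    spectral = np.cos(2 * np.pi * scale * f * n_freq) * np.hanning(n_freq)
    return np.outer(spectral, temporal)

def strf_response(spectrogram, strf):
    """Model neuron output: correlate the STRF with the spectrogram, then
    half-wave rectify (a standard linear-nonlinear approximation)."""
    drive = convolve2d(spectrogram, strf[::-1, ::-1], mode="valid")
    return np.maximum(drive, 0.0)

spec = np.abs(np.random.randn(64, 200))   # placeholder (freq x time) spectrogram
r = strf_response(spec, gabor_strf())
print(r.shape)                            # (49, 185)
```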

    Identification of the acoustic cues used in the comprehension of degraded speech

    There is today a broad consensus in the scientific community regarding the involvement of acoustic cues in speech perception. Up to now, however, the precise mechanisms underlying the transformation of the continuous acoustic stream into discrete linguistic units remain largely undetermined. This is partly due to the lack of an effective method for identifying and characterizing the auditory primitives of speech. Since the earliest studies on the acoustic-phonetic interface by the Haskins Laboratories in the 1950s, a number of approaches have been proposed; they are nevertheless inherently limited by the non-naturalness of the stimuli used, the constraints of the experimental apparatus, and the a priori knowledge needed. The present thesis introduces a new method that capitalizes on the speech-in-noise situation to reveal the acoustic cues used by listeners. As a first step, we adapted the Classification Image technique, developed in the visual domain, to a phoneme categorization task in noise. The technique relies on a Generalized Linear Model to link each participant's response to the specific configuration of noise on a trial-by-trial basis, thereby estimating the perceptual weighting of the different time-frequency regions in the decision. We illustrated the effectiveness of our Auditory Classification Image method through two examples: an /aba/-/ada/ categorization and a /da/-/ga/ categorization in context /al/ or /aʁ/. Our analysis confirmed that the F2 and F3 onsets were crucial for the tasks, as suggested in previous studies, but also revealed unexpected cues. As a second step, we relied on this new method to compare the results of expert musicians (N=19) and dyslexic participants (N=18) with those of controls, enabling us to explore the specificities of each group's listening strategies. Taken together, the results show that the Auditory Classification Image method may be a more precise and more straightforward approach for investigating the mechanisms at work at the acoustic-phonetic interface.
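
    The GLM step at the heart of the method can be sketched as a logistic regression of trial-by-trial responses on the noise field, so that each time-frequency bin receives a perceptual weight. The simulation below, assuming NumPy and scikit-learn, uses a hypothetical listener template and an L2 penalty purely for illustration; the thesis's actual model and regularization may differ.

```python
# A minimal sketch of the Classification Image idea: regress trial-by-trial
# binary responses on the per-trial noise field with a GLM, so each
# time-frequency bin gets a weight. Data here are simulated, not real trials.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_freq, n_time = 2000, 16, 20
noise = rng.normal(size=(n_trials, n_freq, n_time))   # noise field per trial

# Hypothetical "true" listener template: only a small T-F region matters.
template = np.zeros((n_freq, n_time))
template[5:8, 8:12] = 1.0

# Simulated categorization responses driven by the template plus decision noise.
drive = (noise * template).sum(axis=(1, 2))
responses = (drive + rng.normal(scale=2.0, size=n_trials)) > 0

# Fit the GLM: its coefficients form the estimated classification image.
glm = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
glm.fit(noise.reshape(n_trials, -1), responses)
classification_image = glm.coef_.reshape(n_freq, n_time)
print(classification_image[5:8, 8:12].mean())  # large weights in the cued region
```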

    Models and Analysis of Vocal Emissions for Biomedical Applications

    These proceedings collect the scientific papers presented, as both oral and poster contributions, at the biennial MAVEBA Workshop. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images in support of the clinical diagnosis and classification of vocal pathologies.

    Modeling speech intelligibility based on the signal-to-noise envelope power ratio


    Machine learning approaches to video activity recognition: from computer vision to signal processing

    244 p. The research presented focuses on classification techniques for two different, though related, tasks, such that the second can be considered part of the first: human action recognition in videos and sign language recognition. In the first part, the starting hypothesis is that transforming the signals of a video with the Common Spatial Patterns (CSP) algorithm, commonly used in electroencephalography systems, can produce new features that are useful for the subsequent classification of the videos with supervised classifiers. Different experiments were carried out on several datasets, including one created during this research from the point of view of a humanoid robot, with the intention of deploying the developed recognition system to improve human-robot interaction. In the second part, the previously developed techniques were applied to sign language recognition; in addition, a method based on the decomposition of signs is proposed to recognize them, adding the possibility of better explainability. The final goal is to develop a sign language tutor capable of guiding users through the learning process, making them aware of the errors they make and the reasons for those errors.
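
    The CSP transformation that the first part borrows from electroencephalography can be sketched as a generalized eigendecomposition of the two class covariance matrices, followed by log-variance features. The toy example below, assuming NumPy/SciPy, treats each trial as a multichannel (channels x time) signal; the data shapes and the two-class setup are illustrative, not the video pipeline of the thesis.

```python
# A minimal sketch of Common Spatial Patterns (CSP), assuming NumPy/SciPy.
# CSP finds spatial filters that maximize variance for one class while
# minimizing it for the other; the toy data below are random placeholders.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """Solve the generalized eigenproblem Ca w = l (Ca + Cb) w and keep the
    eigenvectors at both extremes of the spectrum."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)  # (channels, channels)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)                  # generalized eigendecomposition
    order = np.argsort(vals)
    pick = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
    return vecs[:, pick].T                          # (n_filters, channels)

def csp_features(trial, W):
    """Normalized log-variance of the spatially filtered signal, the usual
    CSP feature vector fed to a supervised classifier."""
    z = W @ trial
    v = z.var(axis=1)
    return np.log(v / v.sum())

rng = np.random.default_rng(1)
trials_a = rng.normal(size=(30, 8, 100))            # 30 trials, 8 channels, 100 samples
trials_b = rng.normal(scale=1.5, size=(30, 8, 100))
W = csp_filters(trials_a, trials_b)
print(csp_features(trials_a[0], W))                 # 4 CSP features per trial
```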