405 research outputs found

    Ensemble of convolutional neural networks to improve animal audio classification

    Abstract: In this work, we present an ensemble for automated audio classification that fuses different types of features extracted from audio files. These features are evaluated, compared, and fused with the goal of producing better classification accuracy than other state-of-the-art approaches without ad hoc parameter optimization. We present an ensemble of classifiers that performs competitively on different types of animal audio datasets using the same set of classifiers and parameter settings. To produce this general-purpose ensemble, we ran a large number of experiments that fine-tuned pretrained convolutional neural networks (CNNs) for different audio classification tasks (bird, bat, and whale audio datasets). Six different CNNs were tested, compared, and combined. Moreover, a further CNN, trained from scratch, was tested and combined with the fine-tuned CNNs. To the best of our knowledge, this is the largest study on CNNs in animal audio classification. Our results show that several CNNs can be fine-tuned and fused for robust and generalizable audio classification. Finally, the ensemble of CNNs is combined with handcrafted texture descriptors obtained from spectrograms to further improve performance. The MATLAB code used in our experiments will be provided to other researchers for future comparisons at https://github.com/LorisNanni.
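    The abstract does not spell out the fusion rule, but a common choice for combining fine-tuned CNNs is averaging their softmax scores (the sum rule). Below is a minimal sketch in Python/PyTorch (the authors' released code is MATLAB); `cnns` and `batch` are illustrative names, assuming a list of fine-tuned networks in eval mode and a preprocessed batch of spectrogram tensors:

```python
import torch

@torch.no_grad()
def sum_rule_ensemble(cnns, batch):
    # Average the softmax scores of each fine-tuned network (the classic
    # sum rule), then pick the class with the highest fused score.
    probs = torch.stack([torch.softmax(net(batch), dim=1) for net in cnns])
    fused = probs.mean(dim=0)          # (batch_size, num_classes)
    return fused.argmax(dim=1)         # predicted class per clip
```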

    Medical image retrieval for augmenting diagnostic radiology

    Even though the use of medical imaging to diagnose patients is ubiquitous in clinical settings, interpreting these images remains challenging for radiologists. Many factors make this interpretation task difficult, one of which is that medical images sometimes present subtle yet crucial clues for diagnosis. Even worse, similar clues can indicate multiple diseases, making it challenging to arrive at a definitive diagnosis. To help radiologists quickly and accurately interpret medical images, there is a need for a tool that can augment their diagnostic procedures and increase efficiency in their daily workflow. A general-purpose medical image retrieval system can be such a tool, as it allows them to search for and retrieve similar, already-diagnosed cases and make comparative analyses that complement their diagnostic decisions. In this thesis, we contribute to developing such a system by proposing approaches to be integrated as modules of a single system, enabling it to handle the various information needs of radiologists and thus augment their diagnostic processes during the interpretation of medical images. We mainly studied the following retrieval approaches to handle radiologists’ different information needs: i) Retrieval Based on Contents; ii) Retrieval Based on Contents, Patients’ Demographics, and Disease Predictions; and iii) Retrieval Based on Contents and Radiologists’ Text Descriptions. In the first study, we aimed to find an effective feature representation method to distinguish medical images by their semantics and modalities. To that end, we experimented with different representation techniques based on handcrafted methods (mainly texture features) and deep learning (deep features). Based on the experimental results, we propose an effective feature representation approach and deep learning architectures for learning and extracting medical image contents. In the second study, we present a multi-faceted method that complements image contents with patients’ demographics and deep learning-based disease predictions, enabling it to identify similar cases accurately within the clinical context the radiologists seek. In the last study, we propose a guided search method that integrates an image with a radiologist’s text description to guide the retrieval process. This method ensures that the retrieved images are suitable for the comparative analysis needed to confirm or rule out initial diagnoses (the differential diagnosis procedure). Furthermore, our method is based on a deep metric learning technique and outperforms traditional content-based approaches that rely only on image features and thus sometimes retrieve irrelevant images.
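    As a concrete illustration of the content-based retrieval core, similar cases can be ranked by cosine similarity between deep feature vectors. This is a minimal sketch under simple assumptions (images already encoded into fixed-length feature vectors); the function and variable names are hypothetical, not the thesis implementation:

```python
import numpy as np

def retrieve_similar(query_vec, gallery_vecs, k=5):
    # L2-normalize so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery_vecs / np.linalg.norm(gallery_vecs, axis=1, keepdims=True)
    scores = g @ q
    return np.argsort(-scores)[:k]   # indices of the k most similar cases
```

    A deep metric learning variant would replace the raw feature extractor with an encoder trained so that clinically similar cases land close together in this embedding space.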

    An affective computing and image retrieval approach to support diversified and emotion-aware reminiscence therapy sessions

    Dementia is one of the major causes of dependency and disability among elderly people worldwide. Reminiscence therapy is an inexpensive non-pharmacological therapy commonly used within dementia care due to its therapeutic value for people with dementia. This therapy is useful for creating engaging communication between people with dementia and the rest of the world by using the preserved abilities of long-term memory, rather than emphasizing existing impairments, to alleviate the experience of failure and social isolation. Current assistive technological solutions improve reminiscence therapy by providing a more lively and engaging experience for all participants (people with dementia, family members, and clinicians), but they are not free of drawbacks: a) the multimedia data used remains unchanged throughout sessions, and there is a lack of customization for each person with dementia; b) they take into account neither the emotions conveyed by the multimedia data used nor the person with dementia’s emotional reactions to the multimedia presented; c) the caregivers’ perspective has not yet been fully taken into account. To overcome these challenges, we followed a user-centered design approach through worldwide surveys, follow-up interviews, and focus groups with formal and informal caregivers to inform the design of technological solutions within dementia care. To fulfil the identified requirements, we propose novel methods that keep emotions in the loop during reminiscence therapy to personalize and diversify the content of the sessions over time. Contributions of this thesis include: a) a set of validated functional requirements gathered from formal and informal caregivers, the expected outcomes of fulfilling each requirement, and an architecture template for the development of assistive technology solutions for dementia care; b) an end-to-end approach to automatically identify the multiple emotional cues conveyed by images; c) an approach to reduce the number of images that need to be annotated by humans without compromising the recognition models’ performance; d) an interpretable late-fusion technique that dynamically combines multiple content-based image retrieval systems to effectively search for similar images, diversifying and personalizing the pool of images available for use in sessions.
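    As an illustration of contribution d), score-level late fusion can combine the outputs of several content-based retrieval systems. The sketch below uses fixed per-system weights for simplicity, whereas the thesis describes a dynamic, interpretable weighting; all names are illustrative:

```python
import numpy as np

def late_fusion(score_lists, weights):
    # Each element of score_lists holds one retrieval system's scores over
    # the same gallery; min-max normalization makes the scores comparable
    # before the weighted sum.
    fused = np.zeros_like(score_lists[0], dtype=float)
    for scores, w in zip(score_lists, weights):
        lo, hi = scores.min(), scores.max()
        fused += w * (scores - lo) / (hi - lo + 1e-9)
    return np.argsort(-fused)   # gallery indices ranked by fused score
```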

    Multispecies bird sound recognition using a fully convolutional neural network

    This study proposes a method based on fully convolutional neural networks (FCNs) to identify migratory birds from their songs, with the objective of recognizing which birds pass through certain areas and at what time. To determine the best FCN architecture, extensive experimentation was conducted through a grid search exploring the optimal depth, width, and activation function of the network. The results showed that the optimal configuration uses 400 filters in the widest layer, four convolutional blocks with max pooling, and an adaptive activation function. The proposed FCN offers a significant advantage over other techniques, as it can recognize a bird's sound in audio of any length with an accuracy greater than 85%. Furthermore, due to its architecture, the network can detect more than one species in a recording and can carry out near-real-time sound recognition. Additionally, the proposed method is lightweight, making it ideal for deployment and use on IoT devices. The study also presents a comparative analysis of the proposed method against other techniques, demonstrating an improvement of over 67% in the best-case scenario. These findings contribute to advancing the field of bird sound recognition and provide valuable insights into the practical application of FCNs in real-world scenarios.
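    A rough PyTorch sketch of an FCN matching the reported configuration (four convolutional blocks with max pooling and 400 filters in the widest layer). The channel progression, kernel sizes, and the choice of PReLU as the adaptive activation are assumptions, since the exact layout is not given in the abstract:

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.PReLU(),          # learnable ("adaptive") activation, an assumption
        nn.MaxPool2d(2),
    )

class BirdFCN(nn.Module):
    def __init__(self, n_species):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 50), conv_block(50, 100),
            conv_block(100, 200), conv_block(200, 400),  # widest layer: 400 filters
        )
        # A 1x1 conv head keeps the network fully convolutional.
        self.head = nn.Conv2d(400, n_species, kernel_size=1)

    def forward(self, x):               # x: (batch, 1, freq, time)
        z = self.head(self.features(x))
        return z.mean(dim=(2, 3))       # global average pool -> per-species logits
```

    Because there are no fully connected layers and the time axis is collapsed only by global average pooling, the same network accepts spectrograms of any duration, which is what enables variable-length audio recognition.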

    Multi-Modal Domain Adaptation for Fine-Grained Action Recognition

    Fine-grained action recognition datasets exhibit environmental bias, where multiple video sequences are captured from a limited number of environments. Training a model in one environment and deploying it in another results in a drop in performance due to an unavoidable domain shift. Unsupervised Domain Adaptation (UDA) approaches have frequently utilised adversarial training between the source and target domains. However, these approaches have not explored the multi-modal nature of video within each domain. In this work, we exploit the correspondence of modalities as a self-supervised alignment approach for UDA in addition to adversarial alignment. We test our approach on three kitchens from our large-scale dataset, EPIC-Kitchens, using two modalities commonly employed for action recognition: RGB and Optical Flow. We show that multi-modal self-supervision alone improves performance over source-only training by 2.4% on average. We then combine adversarial training with multi-modal self-supervision, showing that our approach outperforms other UDA methods by 3%. (Accepted to CVPR 2020 as an oral presentation.)
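    A minimal sketch of the modality-correspondence pretext task: a small classifier is trained to predict whether an RGB feature and an Optical Flow feature come from the same clip, with negatives formed by shuffling the Flow features within a batch. Feature shapes and the classifier head are assumptions, not the paper's exact design (a shuffled pair can occasionally still match; a full implementation would mask such fixed points):

```python
import torch
import torch.nn as nn

class CorrespondenceHead(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.clf = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, f_rgb, f_flow):
        # Predict a "same clip?" logit from the concatenated pair.
        return self.clf(torch.cat([f_rgb, f_flow], dim=1)).squeeze(1)

def correspondence_loss(head, f_rgb, f_flow):
    b = f_rgb.size(0)
    perm = torch.randperm(b)
    # Positives: aligned pairs; negatives: Flow features shuffled in-batch.
    logits = torch.cat([head(f_rgb, f_flow), head(f_rgb, f_flow[perm])])
    labels = torch.cat([torch.ones(b), torch.zeros(b)]).to(logits.device)
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)
```

    Because the pretext labels come for free from the data itself, this loss can be applied to unlabeled target-domain clips, aligning the two modalities across domains alongside the adversarial objective.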

    Multimodal Affective Communication Analysis: Fusing Speech Emotion and Text Sentiment Using Machine Learning

    Affective communication, encompassing verbal and non-verbal cues, is crucial for understanding human interactions. This study introduces a novel framework for enhancing emotional understanding by fusing speech emotion recognition (SER) and sentiment analysis (SA). We leverage diverse features and both classical and deep learning models, including Gaussian naive Bayes (GNB), support vector machines (SVMs), random forests (RFs), multilayer perceptron (MLP), and a 1D convolutional neural network (1D-CNN), to accurately discern and categorize emotions in speech. We further extract text sentiment from speech-to-text conversion, analyzing it using pre-trained models like bidirectional encoder representations from transformers (BERT), generative pre-trained transformer 2 (GPT-2), and logistic regression (LR). To improve individual model performance for both SER and SA, we employ an extended dynamic Bayesian mixture model (DBMM) ensemble classifier. Our most significant contribution is the development of a novel two-layered DBMM (2L-DBMM) for multimodal fusion. This model effectively integrates speech emotion and text sentiment, enabling the classification of more nuanced, second-level emotional states. Evaluating our framework on the EmoUERJ (Portuguese) and ESD (English) datasets, the extended DBMM achieves accuracy rates of 96% and 98% for SER, 85% and 95% for SA, and 96% and 98% for combined emotion classification using the 2L-DBMM, respectively. Our findings demonstrate the superior performance of the extended DBMM for individual modalities compared to individual classifiers and the 2L-DBMM for merging different modalities, highlighting the value of ensemble methods and multimodal fusion in affective communication analysis. The results underscore the potential of our approach in enhancing emotional understanding with broad applications in fields like mental health assessment, human–robot interaction, and cross-cultural communication.
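    An illustrative sketch of the two-layer fusion idea: layer one mixes the posteriors of several base classifiers within each modality, and layer two mixes the resulting per-modality posteriors into the final decision. The entropy-based weighting below (more confident models get more say) is one plausible choice; the abstract does not specify the DBMM's actual update rule, so treat this as a sketch, not the authors' method:

```python
import numpy as np

def mix(posteriors, eps=1e-12):
    # Weight each model inversely to the entropy of its posterior,
    # then renormalize the weighted mixture into a distribution.
    P = np.asarray(posteriors)                      # (n_models, n_classes)
    ent = -(P * np.log(P + eps)).sum(axis=1)
    w = np.exp(-ent)
    w /= w.sum()
    fused = (w[:, None] * P).sum(axis=0)
    return fused / fused.sum()

# Placeholder posteriors standing in for trained model outputs (7 classes).
p_svm, p_rf, p_cnn = (np.random.dirichlet(np.ones(7)) for _ in range(3))
p_bert, p_gpt2, p_lr = (np.random.dirichlet(np.ones(7)) for _ in range(3))

speech = mix([p_svm, p_rf, p_cnn])    # layer 1: fuse SER classifiers
text = mix([p_bert, p_gpt2, p_lr])    # layer 1: fuse SA classifiers
final = mix([speech, text])           # layer 2: fuse the two modalities
```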