
    A comparison of acoustic and linguistics methodologies for Alzheimer’s dementia recognition

    In the light of the current COVID-19 pandemic, the need for remote digital health assessment tools is greater than ever. This statement is especially pertinent for elderly and vulnerable populations. In this regard, the INTERSPEECH 2020 Alzheimer’s Dementia Recognition through Spontaneous Speech (ADReSS) Challenge offers competitors the opportunity to develop speech- and language-based systems for the task of Alzheimer’s Dementia (AD) recognition. The challenge data consist of speech recordings and their transcripts; the work presented herein assesses different contemporary approaches on these two modalities. Specifically, we compared a hierarchical neural network with an attention mechanism trained on linguistic features against three acoustic-based systems: (i) Bag-of-Audio-Words (BoAW) quantising different low-level descriptors, (ii) a Siamese Network trained on log-Mel spectrograms, and (iii) a Convolutional Neural Network (CNN) end-to-end system trained on raw waveforms. Key results indicate the strength of the linguistic approach over the acoustic systems. Our strongest test-set result was achieved using a late fusion combination of BoAW, the end-to-end CNN, and the hierarchical-attention network, which outperformed the challenge baseline in both the classification and regression tasks.
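    As a sketch of the late-fusion step mentioned in the abstract: predictions from the three subsystems can be combined by weighted averaging of their per-class probabilities. The function name and weights below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def late_fusion(prob_boaw, prob_cnn, prob_han, weights=(1.0, 1.0, 1.0)):
    """Combine per-class probabilities from three independently trained
    systems by weighted averaging (illustrative weights, not the paper's)."""
    stacked = np.stack([prob_boaw, prob_cnn, prob_han])  # (3, n_samples, n_classes)
    w = np.asarray(weights, dtype=float)[:, None, None]
    fused = (w * stacked).sum(axis=0) / w.sum()
    return fused.argmax(axis=1)                          # predicted class per sample

# Example: two samples, binary AD / non-AD decision
p_boaw = np.array([[0.6, 0.4], [0.3, 0.7]])  # Bag-of-Audio-Words
p_cnn  = np.array([[0.5, 0.5], [0.2, 0.8]])  # end-to-end CNN
p_han  = np.array([[0.7, 0.3], [0.4, 0.6]])  # hierarchical-attention network
print(late_fusion(p_boaw, p_cnn, p_han))      # -> [0 1]
```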

    Towards automatic airborne pollen monitoring: From commercial devices to operational by mitigating class-imbalance in a deep learning approach

    Allergic diseases have been the epidemic of the century among chronic diseases. Particularly for pollen allergies, and in the context of climate change, as airborne pollen seasons have been shifting earlier and abundances have been growing higher, pollen monitoring plays an important role in generating high-risk allergy alerts. However, this task requires labour-intensive and time-consuming manual classification via optical microscopy. Even new-generation automatic monitoring devices require manual pollen labelling to increase accuracy and to advance to genuinely operational devices. Deep Learning-based models have the potential to increase the accuracy of automated pollen monitoring systems. In the current research, transfer-learning-based convolutional neural networks were employed to classify pollen grains from microscopic images. Given the high imbalance in the dataset, we incorporated class-weighted loss, focal loss, and weight-vector normalisation for class balancing, as well as data augmentation and weight penalties for regularisation. Airborne pollen has been routinely recorded by a Bio-Aerosol Analyzer (BAA500, Hund GmbH) located in Augsburg, Germany. Here we utilised a database of manually classified airborne pollen images covering the whole pollen diversity throughout an annual pollen season. Using the cropped pollen images collected by this device, we achieved an unweighted average F1 score of 93.8% across 15 classes and an unweighted average F1 score of 75.9% across 31 classes. The majority of taxa (9 of 15), which are also the most abundant and allergenic, showed a recall of at least 95%, reaching up to a remarkable 100% for pollen from Taxus and Urticaceae. The recent introduction of novel pollen monitoring devices worldwide has pointed to the necessity for real-time, automatic measurements of airborne pollen and fungal spores. Thus, we may improve everyday clinical practice and achieve the most efficient prophylaxis for allergic patients.
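    The class-balancing losses named above can be condensed into a single function. Below is a minimal PyTorch sketch of a class-weighted focal loss for multi-class logits; the gamma value and class weights are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, class_weights=None, gamma=2.0):
    """Multi-class focal loss: down-weights well-classified examples so rare
    pollen taxa contribute more to the gradient. `class_weights` plays the
    role of the class-weighted loss mentioned in the abstract."""
    log_p = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_p, targets, weight=class_weights, reduction="none")
    p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1).exp()  # prob. of true class
    return ((1.0 - p_t) ** gamma * ce).mean()

# Toy batch: 4 images, 15 pollen classes, one rare taxon up-weighted (illustrative)
logits = torch.randn(4, 15)
targets = torch.tensor([0, 3, 3, 14])
weights = torch.ones(15)
weights[14] = 5.0
print(focal_loss(logits, targets, class_weights=weights))
```

    Setting gamma to 0 and keeping the weights recovers a plain class-weighted cross-entropy, so the two balancing strategies can be compared within the same training loop.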

    MEDAS: an open-source platform as a service to help break the walls between medicine and informatics

    In the past decade, deep learning (DL) has achieved unprecedented success in numerous fields, such as computer vision and healthcare. In particular, DL is seeing increasing use in advanced medical image analysis applications in terms of segmentation, classification, detection, and other tasks. On the one hand, there is a tremendous need, shared across the medical, clinical, and informatics research communities, to leverage DL’s power for medical image analysis and to pool knowledge, skills, and experience. On the other hand, barriers between the disciplines stand in the way, often hampering full and efficient collaboration. To this end, we propose our novel open-source platform, MEDAS (the MEDical open-source platform As Service). To the best of our knowledge, MEDAS is the first open-source platform providing collaborative and interactive services that let researchers from a medical background use DL-related toolkits easily and let scientists and engineers from informatics model faster. Built on tools and utilities following the idea of RINV (Rapid Implementation aNd Verification), our proposed platform implements tools for the pre-processing, post-processing, augmentation, visualization, and other phases needed in medical image analysis. Five tasks, concerning the lung, liver, brain, chest, and pathology, are validated and demonstrated to be efficiently realizable using MEDAS.
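    As an illustration of the pipeline phases listed above, here is a minimal sketch of a chained pre-processing/augmentation pipeline over a volumetric image. All names are hypothetical; this is not the MEDAS API, only the kind of tooling such a platform wraps.

```python
import numpy as np

class Pipeline:
    """Chain of phase functions (pre-processing, augmentation, ...),
    mirroring the phases named in the abstract. Hypothetical helper."""
    def __init__(self, *steps):
        self.steps = steps

    def __call__(self, volume):
        for step in self.steps:
            volume = step(volume)
        return volume

def normalise(volume):
    """Pre-processing: zero-mean / unit-variance intensities."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def random_flip(volume, rng=np.random.default_rng(0)):
    """Augmentation: mirror the volume along a random spatial axis."""
    return np.flip(volume, axis=int(rng.integers(volume.ndim)))

ct = np.random.rand(64, 128, 128)  # stand-in for a lung CT volume
out = Pipeline(normalise, random_flip)(ct)
print(out.shape, round(float(out.mean()), 3))
```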

    Editorial: Neural networks and learning systems come together

    This issue marks the beginning of the IEEE Transactions on Neural Networks and Learning Systems (TNNLS). By adding “Learning Systems” to the title, we now state explicitly the scope of the TRANSACTIONS to include neural networks as well as related learning systems. This issue marks a new era in the history of our TRANSACTIONS. The TRANSACTIONS is now ready to face the challenges of the next 10-20 years. With the evolution of the fields of neural networks in particular and computational intelligence in general, the IEEE Transactions on Neural Networks and Learning Systems will continue to grow and to succeed in this ever-changing world. Also included are a few comments about the review process of TNN manuscripts and the introduction of 14 new TNNLS Associate Editors. © 2012 IEEE

    Audio-based AI classifiers show no evidence of improved COVID-19 screening over simple symptoms checkers

    Recent work has reported that respiratory audio-trained AI classifiers can accurately predict SARS-CoV-2 infection status. However, it has not yet been determined whether such model performance is driven by latent audio biomarkers with true causal links to SARS-CoV-2 infection or by confounding effects, such as recruitment bias, present in observational studies. Here we undertake a large-scale study of audio-based AI classifiers as part of the UK government’s pandemic response. We collect a dataset of audio recordings from 67,842 individuals, with linked metadata, of whom 23,514 had positive polymerase chain reaction tests for SARS-CoV-2. In an unadjusted analysis, similar to those in previous works, AI classifiers predict SARS-CoV-2 infection status with high accuracy (ROC–AUC = 0.846 [0.838–0.854]). However, after matching on measured confounders, such as self-reported symptoms, performance is much weaker (ROC–AUC = 0.619 [0.594–0.644]). Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by predictions based on user-reported symptoms. We make best-practice recommendations for handling recruitment bias and for assessing audio-based classifiers by their utility in relevant practical settings. Our work provides insights into the value of AI audio analysis and the importance of study design and treatment of confounders in AI-enabled diagnostics.
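    A toy, fully synthetic illustration of the confounding effect described above: when symptoms drive both the audio score and infection status, the unadjusted ROC-AUC looks strong, but evaluating within symptom-matched strata removes the spurious signal. All data and variable names here are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000

# Symptoms confound: they raise both infection probability and the audio score.
symptomatic = rng.random(n) < 0.4
infected = rng.random(n) < np.where(symptomatic, 0.6, 0.1)
audio_score = 0.8 * symptomatic + rng.normal(0.0, 1.0, n)  # tracks symptoms, not infection

print("unadjusted AUC:", round(roc_auc_score(infected, audio_score), 3))

# Matched analysis: score the classifier within each symptom stratum,
# so it can no longer ride on the symptom signal.
for s in (False, True):
    m = symptomatic == s
    print(f"symptomatic={s} AUC:", round(roc_auc_score(infected[m], audio_score[m]), 3))
```

    The unadjusted AUC here is well above chance while the within-stratum AUCs sit near 0.5, mirroring the drop the study reports after matching on measured confounders.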

    Electroweak measurements in electron–positron collisions at W-boson-pair energies at LEP
