
    Speech Reading Training and Audio-Visual Integration in a Child with Autism Spectrum Disorder

    Get PDF
    Children with Autism Spectrum Disorder (ASD) typically have deficits in communication abilities. These deficits include social, linguistic, and pragmatic difficulties, as well as difficulties perceiving and integrating audiovisual (AV) stimuli. It is also common for those with ASD to have weaker speech reading skills than typically developing, age-matched peers. Speech reading skills are known to enhance speech perception in naturalistic, noisy environments. In children with ASD, the combination of poor AV integration and poor speech reading is thought to significantly affect vocabulary acquisition. Studies have demonstrated that speech reading training can significantly enhance syllable discrimination in noise. However, it has not been investigated whether such training generalizes to more naturalistic stimuli, such as words presented in noisy environments. The purpose of the current study was to implement speech reading training at the word level in a child with ASD, using a multiple baseline, changing criterion design. The child identified words in increasingly higher levels of background noise. During the baseline measures, AV speech was presented at a signal-to-noise ratio (SNR) of 0 dB, at which the speech and noise signals are equal in level. Speech reading training was implemented at an SNR of +4 dB, at which the speech signal is louder than the noise, making the task less challenging. The child was asked to watch and listen to the AV speech and choose the word he heard from a four-choice list. The participant showed increases in receptive language processing over the course of the four training sessions compared to the multiple baseline measures. Speech reading training enhanced receptive language processing for words at 0 dB SNR from the initial pre-training baseline to the post-training measure. The results are consistent with previous findings of increased syllable identification after training with AV speech, and suggest that such gains may also be trained for words in noisy environments.
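    As a rough illustration of the SNR conditions described above (not the study's actual stimulus preparation; the function name and the random stand-in signals are hypothetical), the following Python sketch mixes a speech signal with noise at a target SNR, showing why 0 dB means speech and noise are equally loud while +4 dB gives an easier listening task.

    ```python
    # Hypothetical sketch: mix a speech signal with noise at a target SNR.
    import numpy as np

    def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
        """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix."""
        noise = noise[: len(speech)]                      # match lengths
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2)
        # Required noise power: P_speech / 10^(SNR/10)
        target_noise_power = p_speech / (10 ** (snr_db / 10))
        scaled_noise = noise * np.sqrt(target_noise_power / p_noise)
        return speech + scaled_noise

    rng = np.random.default_rng(0)
    speech = rng.standard_normal(16000)                   # stand-in for a 1 s word recording
    noise = rng.standard_normal(16000)                    # stand-in for background noise
    baseline_mix = mix_at_snr(speech, noise, 0.0)         # speech and noise equally loud
    training_mix = mix_at_snr(speech, noise, 4.0)         # speech 4 dB above the noise
    ```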

    A multi-modal dance corpus for research into real-time interaction between humans in online virtual environments

    Get PDF
    We present a new, freely available, multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The specific corpus scenario focuses on an online dance class application in which students, with avatars driven by whatever 3D capture technology is locally available to them, can learn choreographies under teacher guidance in an online virtual ballet studio. As the corpus is focused on this scenario, it consists of student/teacher dance choreographies concurrently captured at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices, and depth sensors. In the corpus, each of the several dancers performs a number of fixed choreographies, which are graded according to a number of specific evaluation criteria. In addition, ground-truth dance choreography annotations are provided. Furthermore, for unsynchronised sensor modalities, the corpus includes distinctive events for data stream synchronisation. Although the corpus is tailored specifically to an online dance class scenario, the data is free to download and use for any research and development purpose.
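    As a rough illustration of how such distinctive events can be used to align unsynchronised streams (this is not the corpus' official tooling; the function, signals, and offsets below are invented for the example), the lag between two recordings of a shared event can be estimated by cross-correlation.

    ```python
    # Illustrative sketch: align two unsynchronised sensor streams using a shared
    # distinctive event (e.g. a clap captured by both sensors).
    import numpy as np

    def estimate_offset(ref: np.ndarray, other: np.ndarray) -> int:
        """Return the sample lag of `other` relative to `ref` via cross-correlation."""
        ref = (ref - ref.mean()) / (ref.std() + 1e-12)
        other = (other - other.mean()) / (other.std() + 1e-12)
        corr = np.correlate(other, ref, mode="full")
        return int(np.argmax(corr) - (len(ref) - 1))

    # Example: the second stream starts 120 samples later than the reference.
    rng = np.random.default_rng(1)
    event = np.concatenate([np.zeros(500), rng.standard_normal(50) * 5, np.zeros(500)])
    ref_stream = event + rng.standard_normal(len(event)) * 0.1
    other_stream = np.roll(event, 120) + rng.standard_normal(len(event)) * 0.1
    print(estimate_offset(ref_stream, other_stream))      # ~ 120
    ```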

    Natural Language Processing Methods for Acoustic and Landmark Event-Based Features in Speech-Based Depression Detection

    Full text link
    The processing of speech as an explicit sequence of events is common in automatic speech recognition (linguistic events), but it has received relatively little attention in paralinguistic speech classification despite its potential for characterizing broad acoustic event sequences. This paper proposes a framework for analyzing speech as a sequence of acoustic events and investigates its application to depression detection. In this framework, acoustic space regions are tokenized into 'words' representing speech events at fixed or irregular intervals. This tokenization allows acoustic word features to be exploited using proven natural language processing methods. A key advantage of the framework is its ability to accommodate heterogeneous event types: herein we combine acoustic words with speech landmarks, which are articulation-related speech events. Another advantage is the option to fuse such heterogeneous events at various levels, including the embedding level. Evaluation of the proposed framework on both controlled, laboratory-grade supervised audio recordings and unsupervised, self-administered smartphone recordings highlights its merits across both datasets, with the proposed landmark-dependent acoustic words achieving improvements in F1(depressed) of up to 15% and 13% for SH2-FS and DAIC-WOZ respectively, relative to acoustic speech baseline approaches.
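    A minimal sketch of the general idea of acoustic-word tokenization, under illustrative assumptions rather than the paper's actual pipeline: frame-level acoustic features are quantised with k-means into a token vocabulary, and standard NLP machinery (here TF-IDF over token n-grams) is then applied to the resulting token sequences. The feature dimensions, vocabulary size, and synthetic data are all assumptions.

    ```python
    # Illustrative sketch: quantise frame-level acoustic features into "acoustic words",
    # then reuse standard NLP-style bag-of-words features.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    rng = np.random.default_rng(0)
    # Stand-in for per-frame acoustic features (e.g. MFCCs), one array per recording.
    recordings = [rng.standard_normal((200, 13)) for _ in range(10)]

    # 1. Learn an acoustic "vocabulary" by clustering all frames.
    all_frames = np.vstack(recordings)
    kmeans = KMeans(n_clusters=64, n_init=10, random_state=0).fit(all_frames)

    # 2. Tokenise each recording into a sequence of acoustic words ("w17 w3 w3 ...").
    docs = [" ".join(f"w{c}" for c in kmeans.predict(r)) for r in recordings]

    # 3. Apply proven NLP methods (TF-IDF over unigrams and bigrams of acoustic words).
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    X = vectorizer.fit_transform(docs)    # features for a downstream depression classifier
    ```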

    Egocentric Auditory Attention Localization in Conversations

    Full text link
    In a noisy conversation environment such as a dinner party, people often exhibit selective auditory attention, or the ability to focus on a particular speaker while tuning out others. Recognizing whom someone is listening to in a conversation is essential for developing technologies that can understand social behavior and devices that can augment human hearing by amplifying particular sound sources. The computer vision and audio research communities have made great strides towards recognizing sound sources and speakers in scenes. In this work, we take a step further by focusing on the problem of localizing auditory attention targets in egocentric video, i.e. detecting who in a camera wearer's field of view they are listening to. To tackle the new and challenging Selective Auditory Attention Localization problem, we propose an end-to-end deep learning approach that uses egocentric video and multichannel audio to predict the heatmap of the camera wearer's auditory attention. Our approach leverages spatiotemporal audiovisual features and holistic reasoning about the scene to make predictions, and outperforms a set of baselines on a challenging multi-speaker conversation dataset. Project page: https://fkryan.github.io/saa
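    As a highly simplified, assumption-laden illustration of the kind of audiovisual fusion such a system might use (the layer sizes, fusion scheme, and input shapes below are invented and are not the authors' architecture), one could encode video frames and a multichannel audio spectrogram separately and decode the fused features into a spatial heatmap.

    ```python
    # Hedged sketch: fuse an egocentric video frame and multichannel audio into a
    # spatial heatmap of auditory attention. All design choices here are illustrative.
    import torch
    import torch.nn as nn

    class AVAttentionHeatmap(nn.Module):
        def __init__(self, audio_channels: int = 4):
            super().__init__()
            # Visual encoder: 3x64x64 frame -> 32x16x16 feature map.
            self.visual = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Audio encoder on a spectrogram-like input (channels x freq x time).
            self.audio = nn.Sequential(
                nn.Conv2d(audio_channels, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),                  # -> B x 32 x 1 x 1
            )
            # Decoder from fused features back to a full-resolution heatmap.
            self.decoder = nn.Sequential(
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1),
                nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            )

        def forward(self, frame: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
            v = self.visual(frame)                        # B x 32 x 16 x 16
            a = self.audio(audio).expand(-1, -1, v.shape[2], v.shape[3])
            fused = torch.cat([v, a], dim=1)              # broadcast audio over space
            return torch.sigmoid(self.decoder(fused))     # B x 1 x 64 x 64 heatmap

    model = AVAttentionHeatmap()
    heatmap = model(torch.randn(2, 3, 64, 64), torch.randn(2, 4, 128, 40))
    ```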

    Understanding and Improving Recurrent Networks for Human Activity Recognition by Continuous Attention

    Full text link
    Deep neural networks, including recurrent networks, have been successfully applied to human activity recognition. Unfortunately, the final representation learned by recurrent networks may encode noise (irrelevant signal components, unimportant sensor modalities, etc.). Moreover, it is difficult to interpret recurrent networks to gain insight into the models' behavior. To address these issues, we propose two attention models for human activity recognition: temporal attention and sensor attention. These two mechanisms adaptively focus on important signals and sensor modalities. To further improve understandability and the mean F1 score, we add continuity constraints, considering that continuous sensor signals are more robust than discrete ones. We evaluate the approaches on three datasets and obtain state-of-the-art results. Furthermore, qualitative analysis shows that the attention learned by the models agrees well with human intuition. Comment: 8 pages; published in The International Symposium on Wearable Computers (ISWC) 2018.
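    The following sketch illustrates, under stated assumptions rather than as the paper's exact model, how temporal attention over recurrent outputs can be combined with a continuity penalty that discourages abrupt changes in attention between neighbouring time steps; the model name, dimensions, and penalty weight are all illustrative.

    ```python
    # Illustrative sketch: temporal attention over a GRU with a continuity penalty.
    import torch
    import torch.nn as nn

    class AttentiveGRU(nn.Module):
        def __init__(self, in_dim: int, hidden: int, n_classes: int):
            super().__init__()
            self.gru = nn.GRU(in_dim, hidden, batch_first=True)
            self.score = nn.Linear(hidden, 1)       # per-time-step attention scores
            self.cls = nn.Linear(hidden, n_classes)

        def forward(self, x: torch.Tensor):
            h, _ = self.gru(x)                                        # B x T x H
            alpha = torch.softmax(self.score(h).squeeze(-1), dim=1)   # B x T
            context = (alpha.unsqueeze(-1) * h).sum(dim=1)            # B x H
            return self.cls(context), alpha

    def continuity_penalty(alpha: torch.Tensor) -> torch.Tensor:
        # Penalise large differences between attention weights of adjacent time steps.
        return ((alpha[:, 1:] - alpha[:, :-1]) ** 2).mean()

    model = AttentiveGRU(in_dim=9, hidden=64, n_classes=6)   # e.g. 9 inertial channels
    x = torch.randn(8, 100, 9)
    y = torch.randint(0, 6, (8,))
    logits, alpha = model(x)
    loss = nn.functional.cross_entropy(logits, y) + 0.1 * continuity_penalty(alpha)
    loss.backward()
    ```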

    GSR Analysis for Stress: Development and Validation of an Open Source Tool for Noisy Naturalistic GSR Data

    Full text link
    The stress detection problem is receiving great attention in related research communities, owing to its essential role in behavioral studies of many serious health problems and physical illnesses. Different methods and algorithms exist for stress detection using different physiological signals. Previous studies have shown that Galvanic Skin Response (GSR), also known as Electrodermal Activity (EDA), is one of the leading indicators of stress. However, the GSR signal itself is not trivial to analyze, and features such as the number of peaks and the maximum peak amplitude are extracted from GSR signals to detect stress. In this paper, we propose an open-source tool for GSR analysis that uses deep learning algorithms alongside statistical algorithms to extract GSR features for stress detection. We then evaluate our results using different machine learning algorithms on the Wearable Stress and Affect Detection (WESAD) dataset. The results show that we can detect stress with 92% accuracy using 10-fold cross-validation and the features extracted by our tool. Comment: 6 pages and 5 figures. Link to the GitHub repository of the tool: https://github.com/HealthSciTech/pyED
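    As a hedged illustration of the statistical GSR features mentioned above (number of peaks, maximum peak amplitude), and not the tool's own implementation, a SciPy-based sketch might look like the following; the peak-detection thresholds and the helper name are assumptions.

    ```python
    # Illustrative sketch: simple peak-based GSR/EDA features with SciPy.
    import numpy as np
    from scipy.signal import find_peaks

    def gsr_peak_features(gsr: np.ndarray, fs: float) -> dict:
        """Return basic peak features from a GSR segment sampled at `fs` Hz."""
        # Require peaks at least 1 s apart with a small prominence; thresholds are
        # chosen purely for illustration.
        peaks, props = find_peaks(gsr, distance=int(fs), prominence=0.01)
        return {
            "n_peaks": len(peaks),
            "max_peak_amplitude": float(props["prominences"].max()) if len(peaks) else 0.0,
            "mean_level": float(np.mean(gsr)),
        }

    fs = 4.0                                   # WESAD wrist EDA is sampled at 4 Hz
    t = np.arange(0, 60, 1 / fs)
    gsr = 0.5 + 0.05 * np.sin(0.1 * t) + 0.02 * np.random.default_rng(0).standard_normal(len(t))
    print(gsr_peak_features(gsr, fs))
    ```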

    Leading and following: Noise differently affects semantic and acoustic processing during naturalistic speech comprehension

    Get PDF
    Despite the distortion of speech signals caused by unavoidable noise in daily life, our ability to comprehend speech in noisy environments is relatively stable. However, the neural mechanisms underlying reliable speech-in-noise comprehension remain to be elucidated. The present study investigated the neural tracking of acoustic and semantic speech information during noisy naturalistic speech comprehension. Participants listened to narrative audio recordings mixed with spectrally matched stationary noise at three signal-to-noise ratio (SNR) levels (no noise, 3 dB, -3 dB), while 60-channel electroencephalography (EEG) signals were recorded. A temporal response function (TRF) method was employed to derive event-related-like responses to the continuous speech stream at both the acoustic and the semantic level. The amplitude envelope of the naturalistic speech was taken as the acoustic feature, whereas word entropy and word surprisal, extracted with natural language processing methods, served as two semantic features. Theta-band frontocentral TRF responses to the acoustic feature were observed at around 400 ms following speech fluctuation onset at all three SNR levels, and the response latencies were more delayed with increasing noise. Delta-band frontal TRF responses to the semantic feature of word entropy were observed at around 200 to 600 ms preceding speech fluctuation onset at all three SNR levels, and these responses led the speech by increasingly long intervals as noise increased and speech comprehension and intelligibility decreased. While the following responses to speech acoustics were consistent with previous studies, our study revealed the robustness of the leading responses to speech semantics, which suggests a possible predictive mechanism at the semantic level for maintaining reliable speech comprehension in noisy environments.
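    A minimal sketch of the TRF idea, under illustrative assumptions and not the study's analysis code: ridge-regularised regression of one EEG channel onto time-lagged copies of a stimulus feature such as the speech envelope. The lag range, sampling rate, and simulated data below are invented for the example.

    ```python
    # Illustrative sketch: estimate a temporal response function (TRF) by ridge regression
    # of an EEG channel on time-lagged stimulus features.
    import numpy as np

    def lagged_design(stim: np.ndarray, min_lag: int, max_lag: int) -> np.ndarray:
        """Build a design matrix whose columns are the stimulus shifted by each lag."""
        n_lags = max_lag - min_lag + 1
        X = np.zeros((len(stim), n_lags))
        for j, lag in enumerate(range(min_lag, max_lag + 1)):
            if lag >= 0:
                X[lag:, j] = stim[: len(stim) - lag]
            else:
                X[: len(stim) + lag, j] = stim[-lag:]
        return X

    def fit_trf(stim, eeg, min_lag, max_lag, ridge=1.0):
        X = lagged_design(stim, min_lag, max_lag)
        # Ridge solution: w = (X'X + lambda*I)^-1 X'y
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ eeg)

    fs = 100                                   # assumed sampling rate in Hz
    rng = np.random.default_rng(0)
    envelope = rng.standard_normal(fs * 60)    # stand-in for a 60 s speech envelope
    eeg = np.convolve(envelope, np.hanning(40), mode="same") + rng.standard_normal(fs * 60)
    # Lags from -200 ms (response leading the stimulus) to +600 ms (following it).
    trf = fit_trf(envelope, eeg, min_lag=int(-0.2 * fs), max_lag=int(0.6 * fs))
    ```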