1,325 research outputs found

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 333)

    This bibliography lists 122 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System during January 1990. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance.

    Prediction of sleepiness ratings from voice by man and machine

    This paper looks in more detail at the Interspeech 2019 Computational Paralinguistics Challenge on the prediction of sleepiness ratings from speech. In this challenge, teams were asked to train a regression model to predict sleepiness from samples of the Düsseldorf Sleepy Language Corpus (DSLC). This challenge was notable because the performance of all entrants was uniformly poor, with even the winning system only achieving a correlation of r=0.37. We look at whether the task itself is achievable, and whether the corpus is suited to training a machine learning system for the task. We perform a listening experiment using samples from the corpus and show that a group of human listeners can achieve a correlation of r=0.7 on this task, although this is mainly by classifying the recordings into one of three sleepiness groups. We show that the corpus, because of its construction, confounds variation with sleepiness and variation with speaker identity, and that this is the reason machine learning systems failed to perform well. We conclude that sleepiness rating prediction from voice is not an impossible task, but that good performance requires more information about sleepy speech and its variability across listeners than is available in the DSLC corpus.
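
    The challenge metric quoted above is Pearson's correlation r between predicted and annotated sleepiness. A minimal sketch of that evaluation loop follows, with random vectors standing in for real DSLC acoustic features; the feature dimensionality, the 1–9 rating scale, and the SVR settings are illustrative assumptions, not the challenge baseline.

```python
# Sketch: regress sleepiness ratings from acoustic features, score with Pearson's r.
import numpy as np
from sklearn.svm import SVR
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_train, n_test, feature_dim = 400, 100, 88   # feature set size is an assumption

# Random stand-ins for per-utterance acoustic features and 1-9 sleepiness ratings
X_train = rng.normal(size=(n_train, feature_dim))
y_train = rng.uniform(1, 9, size=n_train)
X_test = rng.normal(size=(n_test, feature_dim))
y_test = rng.uniform(1, 9, size=n_test)

model = SVR(kernel="linear", C=1.0).fit(X_train, y_train)
r, _ = pearsonr(y_test, model.predict(X_test))
print(f"Pearson r on held-out samples: {r:.2f}")
```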

    Estimating Fatigue from Predetermined Speech Samples Transmitted by Operator Communication Systems

    We present an estimation of fatigue level within individual operators using voice analysis. One advantage of voice analysis is its use of already existing operator communications hardware (two-way radio). From the driver's viewpoint it is an unobtrusive, non-interfering secondary task. The expected fatigue-induced speech changes concern intensity, rhythm, pause patterns, intonation, speech rate, articulation, and speech quality. Because of inter-individual differences in speech patterns, we recorded speaker-dependent baselines under alert conditions. Sophisticated classification tools (e.g. Support Vector Machines, Multi-Layer Perceptrons) were then applied to distinguish different fatigue levels. To validate the voice analysis, predetermined speech samples from a driving-simulator-based sleep deprivation study (N=12; 1:00–8:00 a.m.) were used. Using standard acoustic feature computation procedures, we selected 1748 features and fed them into 8 machine learning methods. After combining the outputs of the single classifiers, we achieved a recognition rate of 83.8% in classifying slight versus strong fatigue.
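
    As a rough illustration of the pipeline above, the sketch below normalises features against a per-speaker baseline and then combines classifier outputs by soft voting. The synthetic data, the two base learners (the study combined eight), and all hyperparameters are placeholder assumptions.

```python
# Sketch: speaker-dependent normalisation + classifier ensemble for fatigue detection.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, d = 240, 50
speakers = rng.integers(0, 12, size=n)   # N = 12 drivers
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)           # 0 = slight, 1 = strong fatigue

# Approximate the speaker-dependent alert baselines by z-scoring each
# speaker's features against their own statistics.
for s in np.unique(speakers):
    idx = speakers == s
    X[idx] = (X[idx] - X[idx].mean(axis=0)) / (X[idx].std(axis=0) + 1e-8)

ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)), ("mlp", MLPClassifier(max_iter=500))],
    voting="soft",                       # average the class probabilities
)
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```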

    Introducing non-linear analysis into sustained speech characterization to improve sleep apnea detection

    Proceedings of the 5th International Conference on Nonlinear Speech Processing (NOLISP 2011), Las Palmas de Gran Canaria, Spain. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-25020-0_28
    We present a novel approach for detecting severe obstructive sleep apnea (OSA) cases by introducing non-linear analysis into sustained speech characterization. The proposed scheme was designed to provide additional information to our baseline system, built on top of state-of-the-art cepstral-domain modeling techniques, with the aim of improving accuracy rates. This new information is only weakly correlated with our previous MFCC modeling of sustained speech and uncorrelated with the information in our continuous speech modeling scheme. Tests were performed to evaluate the improvement on our detection task, based on sustained speech alone as well as combined with a continuous speech classifier, resulting in a 10% relative reduction in classification error for the former and a 33% relative reduction for the fused scheme. These results encourage us to consider the existence of non-linear effects in OSA patients' voices, and to think about tools which could be used to improve short-time analysis. The activities described in this paper were funded by the Spanish Ministry of Science and Innovation as part of the TEC2009-14719-C02-02 (PriorSpeech) project.
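
    The fused scheme above combines evidence from a sustained-speech classifier and a continuous-speech classifier. A minimal sketch of such score-level (late) fusion follows; the scores, fusion weight, and decision threshold are invented for illustration.

```python
# Sketch: late fusion of two classifiers' scores by a weighted sum.
import numpy as np

rng = np.random.default_rng(2)
n = 20
score_sustained = rng.normal(size=n)     # sustained-speech classifier score
score_continuous = rng.normal(size=n)    # continuous-speech classifier score

w = 0.4                                  # fusion weight, would be tuned on dev data
fused = w * score_sustained + (1 - w) * score_continuous
prediction = fused > 0.0                 # True = flagged as severe OSA
print(prediction.astype(int))
```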

    Severe apnoea detection using speaker recognition techniques

    Proceedings of the International Conference on Bio-inspired Systems and Signal Processing (BIOSIGNALS 2009).
    The aim of this paper is to study new possibilities of using automatic speaker recognition (ASR) techniques for the detection of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases can be very useful for prioritizing early treatment, optimizing the expensive and time-consuming tests of current diagnosis methods based on a full overnight sleep study in a hospital. This work is part of an on-going collaborative project between the medical and signal processing communities to promote new research efforts on automatic OSA diagnosis through speech processing technologies applied to a carefully designed speech database of healthy subjects and apnoea patients. In this contribution we present and discuss several approaches to applying generative Gaussian Mixture Models (GMMs), generally used in ASR systems, to model specific acoustic properties of continuous speech signals in different linguistic contexts that reflect discriminative physiological characteristics found in OSA patients. Finally, experimental results on the discriminative power of speaker recognition techniques adapted to severe apnoea detection are presented. These results achieve a correct classification rate of 81.25%, a promising result underlining the interest of this research framework and opening further perspectives for improvement using more specific speech recognition technologies. The activities described in this paper were funded by the Spanish Ministry of Science and Technology as part of the TEC2006-13170-C02-01 project.
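
    A hedged sketch of the generative GMM approach described above: fit one mixture model to frame-level features from apnoea speakers and one to controls, then classify a test utterance by its average log-likelihood ratio. The random vectors stand in for real MFCC frames, and the component counts are guesses rather than the paper's configuration.

```python
# Sketch: GMM log-likelihood-ratio classification of an utterance.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
frames_apnoea = rng.normal(0.3, 1.0, size=(2000, 13))    # MFCC-like frames
frames_control = rng.normal(0.0, 1.0, size=(2000, 13))

gmm_apnoea = GaussianMixture(n_components=16, random_state=0).fit(frames_apnoea)
gmm_control = GaussianMixture(n_components=16, random_state=0).fit(frames_control)

test_utterance = rng.normal(0.3, 1.0, size=(300, 13))
llr = (gmm_apnoea.score_samples(test_utterance).mean()
       - gmm_control.score_samples(test_utterance).mean())
print("classified as apnoea" if llr > 0 else "classified as control")
```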

    Perception of Alcoholic Intoxication in Speech

    The ALC sub-challenge of the Interspeech Speaker State Challenge (ISSC) aims at the automatic classification of speech signals into intoxicated and sober speech. In this context we conducted a perception experiment on data derived from the same corpus to analyze human performance on the same task. The results show that humans still outperform comparable baseline results of the ISSC. Female and male listeners perform on the same level, but there is strong evidence that intoxication is easier to recognize in female voices than in male voices. Prosodic features contribute to the decisions of human listeners but do not seem to be dominant. In analogy to Doddington's zoo of speaker verification, we find some evidence for the existence of lambs and goats, but no wolves. Index Terms: alcoholic intoxication, speech perception, forced choice, intonation, Alcohol Language Corpus
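
    The Doddington-zoo analysis mentioned above boils down to per-speaker statistics over listener judgements. A toy sketch, with a synthetic judgement matrix and an arbitrary threshold, of how "goats" (speakers whose state is consistently misjudged) might be flagged:

```python
# Sketch: per-speaker accuracy over listener judgements, Doddington-zoo style.
import numpy as np

rng = np.random.default_rng(4)
n_speakers, n_listeners = 20, 15
# correct[i, j] = True if listener j judged speaker i's state correctly
# (simulated here with a 70% chance of a correct forced-choice answer)
correct = rng.random((n_speakers, n_listeners)) < 0.7

per_speaker_acc = correct.mean(axis=1)
goats = np.where(per_speaker_acc < 0.5)[0]   # consistently misjudged speakers
print("per-speaker accuracy:", np.round(per_speaker_acc, 2))
print("candidate goats:", goats)
```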

    I hear you eat and speak: automatic recognition of eating condition and food type, use-cases, and impact on ASR performance

    We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i.e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database featuring 1.6k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech, which is made publicly available for research purposes. We start by demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification both by brute-forcing of low-level acoustic features as well as by higher-level features related to intelligibility, obtained from an automatic speech recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier employed in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i.e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, reaching up to 62.3% average recall for multi-way classification of the eating condition, i.e., discriminating the six types of food as well as not eating. The early fusion of features related to intelligibility with the brute-forced acoustic feature set improves the performance on read speech, reaching a 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with a determination coefficient of up to 56.2%.
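
    The evaluation protocol above, an SVM in a leave-one-speaker-out loop scored by unweighted average recall (UAR), can be sketched as follows. The synthetic features, the seven classes (six foods plus "not eating"), and the linear kernel are placeholder assumptions.

```python
# Sketch: leave-one-speaker-out SVM evaluation with unweighted average recall.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(5)
n, d, n_speakers, n_classes = 300, 40, 30, 7
X = rng.normal(size=(n, d))
y = rng.integers(0, n_classes, size=n)
speaker = rng.integers(0, n_speakers, size=n)

y_true, y_pred = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=speaker):
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    y_true.extend(y[test_idx])
    y_pred.extend(clf.predict(X[test_idx]))

uar = recall_score(y_true, y_pred, average="macro")   # unweighted average recall
print(f"UAR: {uar:.3f}")
```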

    Design of a multimodal database for research on automatic detection of severe apnoea cases

    The aim of this paper is to present the design of a multimodal database suitable for research on new possibilities for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases can be very useful for prioritizing early treatment, optimizing the expensive and time-consuming tests of current diagnosis methods based on a full overnight sleep study in a hospital. This work is part of an on-going collaborative project between medical and signal processing groups towards the design of a multimodal database as an innovative resource to promote new research efforts on automatic OSA diagnosis through speech and image processing technologies. In this contribution we present the multimodal design criteria derived from the analysis of specific voice properties related to OSA physiological effects, as well as from the morphological facial characteristics of apnoea patients. Details on the database structure and data collection methodology are also given, as the database is intended to be an open resource to promote further research in this field. Finally, preliminary experimental results on automatic OSA voice assessment are presented for the collected speech data. Standard GMM speaker recognition techniques obtain an overall correct classification rate of 82%, an initial promising result underlining the interest of this research framework and opening further perspectives for improvement using more specific speech and image recognition technologies.
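
    The standard GMM speaker recognition techniques mentioned above typically operate on frame-level MFCCs. A minimal front-end sketch, using a synthetic tone in place of a recording from the database, with common default parameters rather than the paper's exact configuration:

```python
# Sketch: MFCC front end for frame-level GMM modelling.
import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
signal = 0.1 * np.sin(2 * np.pi * 220 * t)         # placeholder "speech" signal

mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
print(mfcc.shape)                                   # (13, n_frames)
```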