42,508 research outputs found

    Aerospace Medicine and Biology. A continuing bibliography (Supplement 226)

    This bibliography lists 129 reports, articles, and other documents introduced into the NASA scientific and technical information system in November 1981.

    Future bathroom: A study of user-centred design principles affecting usability, safety and satisfaction in bathrooms for people living with disabilities

    Research and development work relating to assistive technology 2010-11 (Department of Health), presented to Parliament pursuant to Section 22 of the Chronically Sick and Disabled Persons Act 1970.

    Assistive technologies: short overview and trends

    This paper gives a brief overview of currently existing assistive technologies for different kinds of disabilities. An elaborate discussion of all types of assistive technologies is beyond the scope of this paper. Assistive technologies have evolved dramatically in recent years and will continue to be further developed thanks to major progress in artificial intelligence, machine learning, robotics, and other areas. Previously, assistive technologies were highly specialized and were often difficult or expensive to acquire. Today, however, many assistive technologies are included in mainstream products and services. An introduction and state of the art of assistive technologies are presented first. These are followed by an overview of technological trends in assistive technologies and a conclusion.

    Towards An Intelligent Fuzzy Based Multimodal Two Stage Speech Enhancement System

    This thesis presents a novel two-stage multimodal speech enhancement system that uses both visual and audio information to filter speech, and explores extending this system with fuzzy logic to demonstrate proof of concept for an envisaged autonomous, adaptive, and context-aware multimodal system. The proposed cognitively inspired framework is scalable in design, meaning that the techniques used in individual parts of the system can be upgraded and the initial framework presented here can be expanded. In the proposed system, the concept of single-modality two-stage filtering is extended to include the visual modality. Noisy speech received by a microphone array is first pre-processed by visually derived Wiener filtering, which makes novel use of the Gaussian Mixture Regression (GMR) technique on associated visual speech information extracted with a state-of-the-art Semi Adaptive Appearance Models (SAAM) lip-tracking approach. The pre-processed speech is then enhanced further by audio-only beamforming using a state-of-the-art Transfer Function Generalised Sidelobe Canceller (TFGSC) approach. The resulting system is designed to function in challenging noisy speech environments and was evaluated using speech sentences from different speakers in the GRID corpus mixed with a range of noise recordings. Both objective and subjective test results, employing the widely used Perceptual Evaluation of Speech Quality (PESQ) measure, a composite objective measure, and subjective listening tests, show that this initial system delivers very encouraging results when filtering speech mixtures in difficult reverberant environments. Some limitations of this initial framework are identified, and the extension of the multimodal system is explored through the development of a fuzzy logic based framework and a proof-of-concept demonstration. Results show that the proposed autonomous, adaptive, and context-aware multimodal framework delivers very positive results in difficult noisy speech environments, making cognitively inspired use of audio and visual information depending on environmental conditions. Finally, some concluding remarks are made along with proposals for future work.
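
    The first of the two stages described here is a classical frequency-domain Wiener filter; in the thesis, its clean-speech statistics are estimated from visual lip features via GMR. The following is a minimal sketch of that filtering step only, in which the visually derived speech power spectrum is simply taken as an input and the function names are illustrative, not the thesis's implementation:

```python
import numpy as np

def wiener_gain(noisy_psd, speech_psd_est, gain_floor=1e-3):
    # Wiener gain G(f) = S(f) / (S(f) + N(f)). In the thesis, the
    # clean-speech PSD estimate comes from visual lip features via
    # Gaussian Mixture Regression (GMR); here it is just an input array.
    noise_psd_est = np.maximum(noisy_psd - speech_psd_est, 0.0)
    gain = speech_psd_est / (speech_psd_est + noise_psd_est + 1e-12)
    return np.maximum(gain, gain_floor)  # floor limits musical-noise artefacts

def enhance_frame(noisy_frame, speech_psd_est):
    # Apply the Wiener gain to one windowed time-domain frame.
    spectrum = np.fft.rfft(noisy_frame)
    gain = wiener_gain(np.abs(spectrum) ** 2, speech_psd_est)
    return np.fft.irfft(gain * spectrum, n=len(noisy_frame))
```

    In the full system, the output of this visually guided first stage would then be passed to the audio-only TFGSC beamformer for the second stage of enhancement.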

    Focal Spot, Spring 2002


    A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications

    Auditory models are commonly used as feature extractors for automatic speech-recognition systems or as front-ends for robotics, machine-hearing and hearing-aid applications. Although auditory models can capture the biophysical and nonlinear properties of human hearing in great detail, these biophysical models are computationally expensive and cannot be used in real-time applications. We present a hybrid approach where convolutional neural networks are combined with computational neuroscience to yield a real-time end-to-end model for human cochlear mechanics, including level-dependent filter tuning (CoNNear). The CoNNear model was trained on acoustic speech material and its performance and applicability were evaluated using (unseen) sound stimuli commonly employed in cochlear mechanics research. The CoNNear model accurately simulates human cochlear frequency selectivity and its dependence on sound intensity, an essential quality for robust speech intelligibility at negative speech-to-background-noise ratios. The CoNNear architecture is based on parallel and differentiable computations and has the power to achieve real-time human performance. These unique CoNNear features will enable the next generation of human-like machine-hearing applications.
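
    The core idea of replacing an expensive biophysical cochlear model with a differentiable convolutional encoder-decoder can be sketched as follows. The layer counts, kernel sizes, activations, and channel count below are illustrative placeholders, not the published CoNNear hyper-parameters:

```python
import torch
import torch.nn as nn

class CochlearCNN(nn.Module):
    # Minimal encoder-decoder in the spirit of CoNNear: a waveform goes in,
    # and one output channel per simulated cochlear section comes out.
    def __init__(self, n_channels=201):
        super().__init__()
        # Strided 1-D convolutions compress the waveform in time.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=15, stride=2, padding=7), nn.Tanh(),
            nn.Conv1d(64, 128, kernel_size=15, stride=2, padding=7), nn.Tanh(),
        )
        # Transposed convolutions expand back to the input sample rate.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(128, 64, kernel_size=16, stride=2, padding=7), nn.Tanh(),
            nn.ConvTranspose1d(64, n_channels, kernel_size=16, stride=2, padding=7), nn.Tanh(),
        )

    def forward(self, audio):  # audio: (batch, 1, samples)
        return self.decoder(self.encoder(audio))

model = CochlearCNN()
vibration = model(torch.randn(1, 1, 2048))  # -> (1, 201, 2048) basilar-membrane outputs
```

    Because every operation is differentiable and runs in parallel on a GPU, such a model can be trained to mimic a slow biophysical simulator once and then evaluated in real time.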

    Word Recognition and Learning: Effects of Hearing Loss and Amplification Feature

    Two amplification features were examined using auditory tasks that varied in stimulus familiarity. It was expected that the benefits of certain amplification features would increase as familiarity with the stimuli decreased. A total of 20 children and 15 adults with normal hearing, as well as 21 children and 17 adults with mild to severe hearing loss, participated. Three models of ear-level devices were selected based on the quality of the high-frequency amplification or the digital noise reduction (DNR) they provided. The devices were fitted to each participant and used during testing only. Participants completed three tasks: (a) word recognition, (b) repetition and lexical decision of real and nonsense words, and (c) novel word learning. Performance improved significantly with amplification for both the children and the adults with hearing loss. Performance improved further with wideband amplification, more so for the children than for the adults. In steady-state noise and multitalker babble, performance decreased for both groups, with little to no benefit from amplification or from the use of DNR. When compared with the listeners with normal hearing, significantly poorer performance was observed for both the children and adults with hearing loss on all tasks, with few exceptions. Finally, analysis of across-task performance confirmed the hypothesis that benefit increased as the familiarity of the stimuli decreased for wideband amplification, but not for DNR. However, users who prefer DNR for listening comfort are not likely to jeopardize their ability to detect and learn new information when using this feature. The final version of this article, as published in Trends in Hearing, can be viewed online at: http://journals.sagepub.com/doi/10.1177/233121651770959

    Relations between nonverbal cognitive ability and spoken language development: implications for deaf toddlers who use cochlear implants

    The first aim of this dissertation was to determine whether early deafness is related to children's nonverbal cognitive abilities. Performance of a group of deaf infants was compared to that of same-aged hearing infants on visual sequence learning (VSL) and visual recognition memory (VRM) tasks. The hypothesis was that if deafness is negatively related to general cognitive ability, then the deaf infants would perform more poorly than same-aged hearing infants on the two tasks. There were no significant differences in VSL (n = 19) or VRM (n = 13) performance between the two groups (Chapter III). These results are inconclusive due to the small sample sizes, but importantly, there were individual infants in both groups who demonstrated learning on the two nonverbal tasks. The second aim was to determine whether VSL and VRM ability can provide predictive information about spoken language development. The results for the normal-hearing 8.5-month-olds provide evidence for a significant relation between VSL ability and spoken language outcomes (Chapter IV). Specifically, it was found that sequence learning (thought to rely on procedural memory ability) may contribute to vocabulary and gestural development in normal-hearing infants. Further research with larger samples of infants is needed to determine whether procedural learning may be important for grammar acquisition. These results suggest that VSL ability may not be related to spoken language outcomes for deaf infants who use cochlear implants (Chapter V), although VRM ability may be (Chapter VI). If this pattern of results held up for a larger sample of deaf infants, it would suggest that the nonverbal cognitive abilities tapped by the VSL and VRM tasks are not critical for at least some aspects of spoken language development in deaf children who use cochlear implants, and that potential deficits in nonverbal cognitive ability are not necessarily associated with poorer spoken language ability in this population. Future research should recruit a larger sample of deaf infants to clarify whether nonverbal cognitive skills are related to early deafness, and how those nonverbal skills might relate to spoken language development in this unique population.

    Predicting Speech Intelligibility

    Hearing impairment, and specifically sensorineural hearing loss, is an increasingly prevalent condition, especially amongst the ageing population. It occurs primarily as a result of damage to hair cells that act as sound receptors in the inner ear, and it causes a variety of hearing perception problems, most notably a reduction in speech intelligibility. Accurate diagnosis of hearing impairments is a time-consuming process and is complicated by the reliance on indirect measurements based on patient feedback, due to the inaccessible nature of the inner ear. The challenges of designing hearing aids to counteract sensorineural hearing losses are further compounded by the wide range of severities and symptoms experienced by hearing-impaired listeners. Computer models of the auditory periphery have been developed, based on phenomenological measurements from auditory-nerve fibres using a range of test sounds and varied conditions. It has been demonstrated that auditory-nerve representations of vowels in normal and noise-damaged ears can be ranked by a subjective visual inspection of how the impaired representations differ from the normal. This thesis seeks to expand on this procedure by using full word tests rather than single vowels, and by replacing manual inspection with an automated approach using a quantitative measure. It presents a measure that can predict speech intelligibility in a consistent and reproducible manner. This new approach has practical applications, as it could allow speech-processing algorithms for hearing aids to be objectively tested in early-stage development without having to resort to extensive human trials. Simulated hearing tests were carried out by substituting real listeners with the auditory model. A range of signal processing techniques were used to measure the model's auditory-nerve outputs by presenting them spectro-temporally as neurograms. A neurogram similarity index measure (NSIM) was developed that allowed the impaired outputs to be compared to a reference output from a normal-hearing listener simulation. A simulated listener test was developed using standard listener test material and was validated for predicting normal-hearing speech intelligibility in quiet and noisy conditions. Two types of neurograms were assessed: temporal fine structure (TFS), which retained spike timing information, and average discharge rate or temporal envelope (ENV). Tests were carried out to simulate a wide range of sensorineural hearing losses, and the results were compared to real listeners' unaided and aided performance. Simulations to predict the speech intelligibility performance of the NAL-RP and DSL 4.0 hearing aid fitting algorithms were undertaken. The NAL-RP hearing aid fitting algorithm was adapted using a chimaera sound algorithm, which aimed to improve the TFS speech cues available to aided hearing-impaired listeners. NSIM was shown to quantitatively rank neurograms with better performance than relative mean squared error and other similar metrics. Simulated performance intensity functions predicted speech intelligibility for normal and hearing-impaired listeners. The simulated listener tests demonstrated that NAL-RP and DSL 4.0 performed with similar speech intelligibility restoration levels. Using NSIM and a computational model of the auditory periphery, speech intelligibility can be predicted for both normal and hearing-impaired listeners, and novel hearing aids can be rapidly prototyped and evaluated prior to real listener tests.
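
    NSIM belongs to the structural-similarity (SSIM) family of image metrics, combining local luminance and structure comparisons over a time-frequency neurogram. The sketch below is a simplified illustration of that idea: it uses a plain mean filter rather than the small Gaussian window of the published measure, and the window size and stabilising constants are illustrative rather than the thesis's values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nsim(reference, degraded, win=3, c1=0.01, c2=0.03):
    # Local means, variances, and covariance over win-by-win windows.
    r = reference.astype(float)
    d = degraded.astype(float)
    mu_r = uniform_filter(r, win)
    mu_d = uniform_filter(d, win)
    var_r = uniform_filter(r * r, win) - mu_r ** 2
    var_d = uniform_filter(d * d, win) - mu_d ** 2
    cov = uniform_filter(r * d, win) - mu_r * mu_d
    # SSIM-style luminance and structure terms, averaged over the neurogram.
    luminance = (2 * mu_r * mu_d + c1) / (mu_r ** 2 + mu_d ** 2 + c1)
    structure = (cov + c2) / (np.sqrt(np.maximum(var_r, 0) * np.maximum(var_d, 0)) + c2)
    return float(np.mean(luminance * structure))  # 1.0 means identical neurograms
```

    A simulated performance intensity function can then be built by computing this similarity between impaired and normal-hearing neurograms across a range of presentation levels.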