Mobile assistive technologies for the visually impaired
There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes).
Use of Portable Electronic Assistive Technology to Improve Independent Job Performance of Young Adults with Intellectual Disabilities
Poor employment outcomes for persons with intellectual disabilities (ID) persist, despite the development of legal policies designed to enhance access to gainful employment and to promote increased community integration. Recent data suggest that only 37% of young adults with ID obtain paid employment outside of the home. Among persons with ID who do obtain employment, career options are limited and nearly half are paid below minimum wage. Various strategies have been used to improve employment outcomes for those with ID, such as use of a job coach and teaching self-management strategies on the job site. These strategies often involve the use of visual or auditory prompting to assist with task completion, both of which can be provided by assistive technology. The current study examined the use of readily available, inexpensive, and discreet portable electronic assistive technology in an office setting to provide prompting and instruction to three young adults with ID. Results revealed that the technology substantially increased participants' ability to independently and correctly complete office-related tasks. Implications and suggestions for future research are provided.
Hearing Evaluation and Auditory Rehabilitation after Stroke
Stroke can affect all levels of the auditory system (from the inner ear to the central tracts) and may result in various types of auditory dysfunction, such as peripheral hearing loss (cochlea to auditory nerve), disordered auditory processing (brainstem to cortex), and cortical deafness. Hearing-impaired stroke survivors have an increased risk of physical decline after discharge to the community. This may be attributed to restricted participation in post-acute rehabilitation programs due to the hearing impairment. Furthermore, hearing impairment may have a significant impact on listening, linguistic skills, and the overall communication of the affected stroke patient. To date, no studies have sought to systematically characterise the auditory function of stroke patients in detail in order to establish the different types of hearing impairment in this cohort of patients. Such information would be clinically useful for understanding and addressing the hearing needs of stroke survivors so that appropriate management could be given to these patients in order to improve their quality of life. One of the main aims of this research was to characterise and classify the hearing impairments of stroke patients using a detailed audiological assessment test battery in order to determine the level of clinical need and inform appropriate rehabilitation for this patient population. We found that the most common type of hearing impairment in stroke subjects was the combination type, 'peripheral hearing loss and central auditory processing disorders', in the older subgroup (55%), and auditory processing deficits in the younger subgroup (40%). Both types of impairment were significantly more prevalent in these groups than in controls.
Offering a comprehensive audiological assessment to all stroke patients would be a costly and time-consuming process. A preliminary screening programme for such patients therefore needs to be identified, e.g. by means of a questionnaire, so that the full audiological assessment can be reserved for those who fail the initial screening. We aimed to determine whether a handheld hearing screener together with two validated hearing questionnaires could be used as a hearing screening tool to facilitate early identification and appropriate referral of hearing-impaired stroke patients in the subacute stage. The highest test accuracy was achieved when results of the handheld hearing screener and hearing questionnaires were combined.
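The combined-screen result above can be sketched as a simple decision rule. The function names, the questionnaire cutoff, and the either-test-positive (OR) combination rule below are illustrative assumptions, not the thesis's actual protocol:

```python
# Hypothetical sketch: combining a handheld screener result with a
# questionnaire score to flag stroke patients for full audiological
# assessment. Cutoff and OR rule are illustrative assumptions.

def flag_for_referral(screener_fail: bool, questionnaire_score: float,
                      cutoff: float = 10.0) -> bool:
    """Refer if either the screener or the questionnaire raises concern."""
    return screener_fail or questionnaire_score >= cutoff

def sensitivity_specificity(predictions, truths):
    """Accuracy of the combined screen against gold-standard assessments."""
    tp = sum(p and t for p, t in zip(predictions, truths))
    tn = sum(not p and not t for p, t in zip(predictions, truths))
    fp = sum(p and not t for p, t in zip(predictions, truths))
    fn = sum(not p and t for p, t in zip(predictions, truths))
    return tp / (tp + fn), tn / (tn + fp)
```

Combining two imperfect screens with an OR rule trades specificity for sensitivity, which is usually the right trade-off when the goal is not to miss treatable hearing impairment.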
Auditory disability due to impaired auditory processing (AP), despite normal pure-tone thresholds, is common after stroke. However, there are currently no proven remedial interventions for AP deficits in stroke patients. Our study is the first to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Our results demonstrated that FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids.
We also evaluated the long-term benefits for speech reception in noise after ten weeks of daily use of personal FM systems in non-aphasic stroke patients with auditory processing deficits. We found that ten weeks of FM system use by adult stroke patients may lead to benefits in unaided speech-in-noise perception. Our findings may indicate plasticity-type changes in the auditory system.
Development and Evaluation of a Real-Time Framework for a Portable Assistive Hearing Device
Testing and verification of digital hearing aid devices and their embedded software and algorithms can prove to be a challenging task, especially taking into account time-to-market considerations. This thesis describes a PC-based, real-time, highly configurable framework for the evaluation of audio algorithms. Implementing audio processing algorithms on such a platform can give hearing aid designers and manufacturers the ability to test new and existing processing techniques and collect data about their performance in real-life situations, without the need to develop a prototype device. The platform is based on the Eurotech Catalyst development kit and the Fedora Linux OS, and it utilizes the JACK audio engine to facilitate reliable real-time performance. Additionally, we demonstrate the capabilities of this platform by implementing an audio processing chain targeted at improving speech intelligibility for people suffering from auditory neuropathy. Evaluation is performed in both noisy and noise-free environments. Subjective evaluation of the results, using normal-hearing listeners and an auditory neuropathy simulator, demonstrates improvement in some conditions.
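As a flavour of the kind of block-based processing stage such a framework would host (JACK delivers audio to clients in fixed-size buffers via a process callback), here is a generic sketch in Python with NumPy: a noise gate followed by make-up gain. This is not the thesis's auditory-neuropathy chain; the block size, threshold, and gain values are arbitrary assumptions:

```python
# Illustrative block-based processing stage of the kind a JACK-hosted
# evaluation framework would run once per audio callback.
import numpy as np

BLOCK_SIZE = 256  # frames per callback; an assumed engine configuration

def make_chain(gate_threshold: float = 0.01, gain_db: float = 6.0):
    gain = 10.0 ** (gain_db / 20.0)  # convert dB to linear amplitude
    def process(block: np.ndarray) -> np.ndarray:
        """Process one block; shape (BLOCK_SIZE,), float samples in [-1, 1]."""
        rms = np.sqrt(np.mean(block ** 2))
        if rms < gate_threshold:                 # gate out low-level noise
            return np.zeros_like(block)
        return np.clip(block * gain, -1.0, 1.0)  # make-up gain, no clipping
    return process
```

Because the callback is stateless apart from its configured parameters, swapping algorithms in and out for evaluation reduces to swapping the `process` function, which is the kind of configurability the framework aims for.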
The Noise Exposure Structured Interview (NESI): an instrument for the comprehensive estimation of lifetime noise exposure
Lifetime noise exposure is generally quantified by self-report. The accuracy of retrospective self-report is limited by respondent recall, but is also bound to be influenced by reporting procedures. Such procedures are of variable quality in current measures of lifetime noise exposure, and off-the-shelf instruments are not readily available. The Noise Exposure Structured Interview (NESI) represents an attempt to draw together some of the stronger elements of existing procedures and to provide solutions to their outstanding limitations. Reporting is not restricted to pre-specified exposure activities, and instead encompasses all activities that the respondent has experienced as noisy (defined based on sound level estimated from vocal effort). Changes in exposure habits over time are reported by dividing the lifespan into discrete periods in which exposure habits were approximately stable, with life milestones used to aid recall. Exposure duration, sound level, and use of hearing protection are reported for each life period separately. Simple-to-follow methods are provided for the estimation of free-field sound level, the sound level emitted by personal listening devices, and the attenuation provided by hearing protective equipment. An energy-based means of combining the resulting data is supplied, along with a primarily energy-based method for incorporating firearm-noise exposure. Finally, the NESI acknowledges the need of some users to tailor the procedures; this flexibility is afforded and reasonable modifications are described. Competency needs of new users are addressed through detailed interview instructions (including troubleshooting tips) and a demonstration video. Limited evaluation data are available, and future efforts at evaluation are proposed.
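The energy-based combination of per-period exposures can be sketched as follows. The "unit" convention assumed here (1 unit = 2080 hours at 90 dBA, a 3-dB exchange rate, and hearing-protection attenuation subtracted from the level) follows common UK practice but is an illustrative assumption, not a transcription of the NESI's own procedure:

```python
# Illustrative energy-based combination of noise-exposure episodes.
# Assumed convention: 1 unit = one working year (2080 h) at 90 dBA,
# doubling for every 3 dB increase in level (equal-energy principle).

def exposure_units(hours: float, level_dba: float,
                   protection_db: float = 0.0) -> float:
    """Energy-equivalent exposure units for one life period's activity."""
    effective_level = level_dba - protection_db   # attenuation from HPDs
    return (hours / 2080.0) * 10.0 ** ((effective_level - 90.0) / 10.0)

def total_exposure(episodes) -> float:
    """Energy-based sum over (hours, level_dba, protection_db) episodes."""
    return sum(exposure_units(*e) for e in episodes)
```

Because the combination is energy-based, a short loud episode can dominate the lifetime total: 2080 hours at 93 dBA contributes roughly twice as many units as 2080 hours at 90 dBA.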
History in Your Hand: Design Elements to Enhance Adoption of Mobile Multimedia Historical Tour.
Ph.D. Thesis, University of Hawaiʻi at Mānoa, 2017.
Sound localization accuracy in the blind population
The ability to accurately locate a sound source is crucial in the blind population to orient and mobilize independently in the environment. Sound localization is accomplished by the detection of binaural differences in intensity and time of incoming sound waves along with phase differences and spectral cues. It is dependent on auditory sensitivity and processing. However, localization ability cannot be predicted from the audiogram or an auditory processing evaluation.
Auditory information is received not only from objects making sound but also from objects reflecting sound. Auditory information used in this manner is called echolocation. Echolocation significantly enhances localization in the absence of vision. Research has shown that echolocation is an important form of localization used by the blind to facilitate independent mobility. However, the ability to localize sound is not evaluated in the blind population.
Due to the importance of localization and echolocation for independent mobility in the blind, it would seem appropriate to evaluate the accuracy of this skill set. Echolocation is dependent upon the same auditory processes as localization. More specifically, localization is a precursor to echolocation. Therefore, localization ability will be evaluated in two normal hearing groups: a young normal vision population and a young blind population. Both groups will have normal hearing and auditory processing verified by an audiological evaluation that includes a central auditory screening. The localization assessment will be performed using a 24-speaker array in a sound treated chamber with four testing conditions: (1) low-pass broadband stimuli in quiet, (2) low-pass broadband stimuli in noise, (3) high-pass broadband stimuli in quiet, and (4) high-pass broadband speech stimuli in noise.
It is hypothesized that blind individuals may exhibit keener localization skills than their normal vision counterparts, particularly if they are experienced, independent travelers. Results of this study may lead to future research in localization assessment, and possibly localization training for blind individuals.
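One of the binaural cues described above, the interaural time difference (ITD), can be estimated by cross-correlating the left- and right-ear signals and taking the best-matching lag. This is a generic textbook sketch, not part of the study's protocol; the maximum-lag window is an assumed value:

```python
# Illustrative ITD estimation by cross-correlation of binaural signals.
import numpy as np

def estimate_itd(left: np.ndarray, right: np.ndarray, fs: int,
                 max_itd_s: float = 8e-4) -> float:
    """Return the interaural time difference in seconds.
    Positive values mean the sound reached the left ear first."""
    max_lag = int(max_itd_s * fs)  # ~0.8 ms spans a typical human head width
    def corr(lag):
        # Cross-correlation term sum(left[n + lag] * right[n]) at one lag.
        if lag >= 0:
            a, b = left[lag:], right[:len(right) - lag]
        else:
            a, b = left[:lag], right[-lag:]
        n = min(len(a), len(b))
        return float(np.dot(a[:n], b[:n]))
    best = max(range(-max_lag, max_lag + 1), key=corr)
    return -best / fs
```

Spectral content matters here just as it does in the study's test conditions: low-pass stimuli favour timing (ITD) cues, while high-pass stimuli shift weight to interaural level differences.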
“I always wanted to see the night sky”: blind user preferences for Sensory Substitution Devices
Sensory Substitution Devices (SSDs) convert visual information into another sensory channel (e.g. sound) to improve the everyday functioning of blind and visually impaired persons (BVIP). However, the range of possible functions and options for translating vision into sound is largely open-ended. To provide constraints on the design of this technology, we interviewed ten BVIPs who were briefly trained in the use of three novel devices that, collectively, showcase a large range of design permutations. The SSDs include the 'Depth-vOICe', 'Synaestheatre' and 'Creole', which offer high spatial, temporal, and colour resolutions respectively via a variety of sound outputs (electronic tones, instruments, vocals). The participants identified a range of practical concerns in relation to the devices (e.g. curb detection, recognition, mental effort) but also highlighted experiential aspects. This included both curiosity about the visual world (e.g. understanding shades of colour, the shape of cars, seeing the night sky) and the desire for the substituting sound to be responsive to movement of the device and aesthetically engaging.