Comparison of word-, sentence-, and phoneme-based training strategies in improving the perception of spectrally-distorted speech
Purpose: To compare the effectiveness of three self-administered strategies for auditory training that might improve speech perception by adult users of cochlear implants. The strategies are based, respectively, on discriminating isolated words, words in sentences, and phonemes in nonsense syllables. Method: Participants were 18 normally-hearing adults who listened to speech processed by a noise-excited vocoder to simulate the information provided by a cochlear implant. They were assigned randomly to word-, sentence-, or phoneme-based training and underwent nine 20-minute training sessions on separate days over a 2- to 3-week period. The effectiveness of training was assessed as the improvement in accuracy of discriminating vowels and consonants, and identifying words in sentences, relative to participants’ best performance in repeated tests prior to training. Results: Word- and sentence-based training led to significant improvements in the ability to identify words in sentences that were significantly larger than the improvements produced by phoneme-based training. There were no significant differences between the effectiveness of word- and sentence-based training. No significant improvements in consonant or vowel discrimination were found for the sentence- or phoneme-based training groups, but some improvements were found for the word-based training group. Conclusions: The word- and sentence-based training strategies were more effective than the phoneme-based strategy at improving the perception of spectrally-distorted speech.
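For readers unfamiliar with noise-excited vocoding, the sketch below illustrates the general technique: divide speech into frequency bands, extract each band's envelope, and re-impose it on band-limited noise. The channel count, band edges, and envelope cutoff are illustrative assumptions, not the parameters used in this study.

```python
# A minimal noise-excited vocoder sketch. The channel count, band edges, and
# envelope cutoff below are illustrative assumptions, not this study's settings.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, env_cutoff=30.0):
    """Replace temporal fine structure with noise while keeping each band's envelope."""
    # Logarithmically spaced band edges across the analysis range (f_hi must be < fs/2).
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    noise = np.random.randn(len(signal))
    env_sos = butter(4, env_cutoff, btype="lowpass", fs=fs, output="sos")
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        env = np.clip(sosfiltfilt(env_sos, np.abs(hilbert(band))), 0.0, None)  # envelope
        out += env * sosfiltfilt(band_sos, noise)  # envelope-modulated, band-limited noise
    return out / (np.max(np.abs(out)) + 1e-12)

# Usage: read a sentence with scipy.io.wavfile, then
#   vocoded = noise_vocode(waveform.astype(float), fs)
```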
Auditory training strategies for adult users of cochlear implants
There has been growing interest recently in whether computer-based training can improve speech perception among users of cochlear implants (Fu et al., 2005; Oba et al., 2011; Ingvalson et al., 2013). This paper reports a series of experiments which first evaluated the effectiveness of different training strategies with normal-hearing participants who listened to noise-vocoded speech, before conducting a small-scale study with users of cochlear implants. Our vocoder studies revealed (1) that ‘High-Variability’ training led to greater generalisation to new talkers than training with a single talker, and (2) that word- and sentence-based training materials led to greater improvements than an approach based on phonemes in nonsense syllables. Informed by these findings, we evaluated the effectiveness of a computer-based training package that included word- and sentence-based tasks, with materials recorded by 20 talkers. We found good compliance with the training protocol, with 8 out of the 11 participants completing 15 hours of training as instructed. Following training, there was a significant improvement on a consonant test, but in general the improvements were small, highly variable, and not statistically significant. A large-scale randomised controlled trial is needed before we can be confident that computer-based auditory training is worthwhile for users of cochlear implants.
Matching novel face and voice identity using static and dynamic facial images
Research investigating whether faces and voices share common source identity information has offered contradictory results. Accurate face-voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might help to account for the previous inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced-choice matching task. They either heard a voice and then saw two faces or saw a face and then heard two voices. Face-voice matching was above chance when the facial stimuli were dynamic and articulating, but not when they were static. In Experiment 2, we tested whether matching was more accurate when faces and voices were presented simultaneously. The participants saw two face-voice combinations, presented one after the other. They had to decide which combination was the same identity. As in Experiment 1, only dynamic face-voice matching was above chance. In Experiment 3, participants heard a voice and then saw two static faces presented simultaneously. With this procedure, static face-voice matching was above chance. The overall results, analyzed using multilevel modeling, showed that voices and dynamic articulating faces, as well as voices and static faces, share concordant source identity information. It seems, therefore, that above-chance static face-voice matching is sensitive to the experimental procedure employed. In addition, the inconsistencies in previous research might depend on the specific stimulus sets used; our multilevel modeling analyses show that some people look and sound more similar than others.
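As a simplified illustration of how above-chance matching can be assessed, the sketch below tests per-participant two-alternative accuracy against the 50% chance level. The accuracy values are hypothetical, and the published analysis used multilevel models rather than this summary test.

```python
# Simplified stand-in for the multilevel analysis: test whether two-alternative
# face-voice matching accuracy exceeds the 50% chance level. The per-participant
# accuracies below are hypothetical.
import numpy as np
from scipy import stats

dynamic_acc = np.array([0.58, 0.62, 0.55, 0.60, 0.57, 0.63, 0.54, 0.59])  # proportion correct
static_acc = np.array([0.51, 0.49, 0.53, 0.50, 0.48, 0.52, 0.51, 0.50])

for label, acc in [("dynamic faces", dynamic_acc), ("static faces", static_acc)]:
    t, p = stats.ttest_1samp(acc, popmean=0.5, alternative="greater")
    print(f"{label}: mean = {acc.mean():.3f}, t = {t:.2f}, one-tailed p = {p:.3f}")
```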
Searching for a talking face: the effect of degrading the auditory signal
Previous research (e.g. McGurk and MacDonald, 1976) suggests that faces and voices are bound automatically, but recent evidence suggests that attention is involved in the task of searching for a talking face (Alsius and Soto-Faraco, 2011). We hypothesised that the processing demands of the stimuli may affect the amount of attentional resources required, and investigated what effect degrading the auditory stimulus had on the time taken to locate a talking face. Twenty participants were presented with between 2 and 4 faces articulating different sentences, and had to decide which of these faces matched the sentence that they heard. The results showed that in the least demanding auditory condition (clear speech in quiet), search times did not significantly increase when the number of faces increased. However, when speech was presented in background noise or was processed to simulate the information provided by a cochlear implant, search times increased as the number of faces increased. Thus, it seems that the amount of attentional resources required varies according to the processing demands of the auditory stimuli, and when processing load is increased, faces need to be attended to individually in order to complete the task. Based on these results we would expect cochlear-implant users to find the task of locating a talking face more attentionally demanding than normal-hearing listeners do.
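The set-size effect described here can be summarised as a search slope, i.e. the extra time needed per additional face. The sketch below fits such slopes to hypothetical mean search times; the numbers and condition labels are illustrative only.

```python
# Illustrative search-slope analysis: regress mean search time on the number of
# faces for each auditory condition. All numbers are made up; a near-flat slope
# corresponds to the clear-speech-in-quiet result.
import numpy as np

set_sizes = np.array([2, 3, 4])
mean_rt = {
    "clear speech in quiet": np.array([2.1, 2.2, 2.2]),   # seconds
    "speech in noise": np.array([2.3, 2.9, 3.4]),
    "vocoded speech": np.array([2.5, 3.2, 3.9]),
}

for condition, rt in mean_rt.items():
    slope, _intercept = np.polyfit(set_sizes, rt, deg=1)
    print(f"{condition}: {slope:.2f} s per additional face")
```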
The effect of inserting an inter-stimulus interval in face-voice matching tasks
Voices and static faces can be matched for identity above chance level. No previous face-voice matching experiments have included an inter-stimulus interval (ISI) exceeding 1 second. We tested whether accurate identity decisions rely on high-quality perceptual representations temporarily stored in sensory memory, and therefore whether the ability to make accurate matching decisions diminishes as the ISI increases. In each trial, participants had to decide whether an unfamiliar face and voice belonged to the same person. The face and voice stimuli were presented simultaneously in Experiment 1, there was a 5-second ISI in Experiment 2, and a 10-second ISI in Experiment 3. The results, analysed using multilevel modelling, revealed that static face-voice matching was significantly above chance level only when the stimuli were presented simultaneously (Experiment 1). The overall bias to respond "same identity" weakened as the interval increased, suggesting that this bias is explained by temporal contiguity. Taken together, the findings highlight that face-voice matching performance relies on comparing fast-decaying, high-quality perceptual representations. The results are discussed in terms of social functioning.
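One hedged way to quantify a bias towards responding "same identity" is with signal detection measures, treating same-identity trials as signal trials. The trial counts below are hypothetical, and the paper's own analysis used multilevel modelling rather than these summary statistics.

```python
# Signal detection view of the "same identity" bias: a hit is responding "same"
# on a same-identity trial, a false alarm is responding "same" on a
# different-identity trial. The trial counts are hypothetical.
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from trial counts (with a 0.5 correction)."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(h) - norm.ppf(f)
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))  # negative c = bias towards responding "same"
    return d_prime, c

print(dprime_and_criterion(hits=34, misses=14, false_alarms=28, correct_rejections=20))
```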
Visual speech benefit in clear and degraded speech depends on the auditory intelligibility of the talker and the number of background talkers
Perceiving speech in background noise presents a significant challenge to listeners. Intelligibility can be improved by seeing the face of a talker. This is of particular value to hearing-impaired people and users of cochlear implants. It is well known that auditory-only speech understanding depends on factors beyond audibility. How these factors impact on the audio-visual integration of speech is poorly understood. We investigated audio-visual integration when either the interfering background speech (Experiment 1) or the intelligibility of the target talkers (Experiment 2) was manipulated. Clear speech was also contrasted with sine-wave vocoded speech to mimic the loss of temporal fine structure with a cochlear implant. Experiment 1 showed that for clear speech, the visual speech benefit was unaffected by the number of background talkers. For vocoded speech, a larger benefit was found when there was only one background talker. Experiment 2 showed that visual speech benefit depended upon the audio intelligibility of the talker and increased as intelligibility decreased. Degrading the speech by vocoding resulted in even greater benefit from visual speech information. A single “independent noise” signal detection theory model predicted the overall visual speech benefit in some conditions but could not predict the different levels of benefit across variations in the background or target talkers. This suggests that, similar to audio-only speech intelligibility, the integration of audio-visual speech cues may be functionally dependent on factors other than audibility and task difficulty, and that clinicians and researchers should carefully consider the characteristics of their stimuli when assessing audio-visual integration.
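A common formalisation of an independent-noise signal detection model predicts audio-visual sensitivity as the quadratic sum of the unimodal sensitivities. Whether this exact formulation matches the model fitted in the paper is an assumption, and the values below are illustrative.

```python
# Independent-noise prediction: if the auditory and visual channels contribute
# statistically independent evidence, combined sensitivity is the quadratic sum
# of the unimodal sensitivities. The d' values here are purely illustrative.
import numpy as np

def predicted_av_dprime(d_audio, d_visual):
    """Predicted audio-visual d' for two independent Gaussian evidence channels."""
    return np.sqrt(np.asarray(d_audio) ** 2 + np.asarray(d_visual) ** 2)

d_audio = np.array([0.5, 1.0, 2.0])  # e.g. vocoded through to clear speech
d_visual = 1.2                       # lip-reading alone
print(predicted_av_dprime(d_audio, d_visual))
```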
Deciphering the past to inform the future: preparing for the next (“really big”) extreme event
Climate change will bring more extremes in temperature and precipitation that will impact productivity and ecosystem resilience throughout agroecosystems worldwide. Historical events can be used to identify drivers that impact future events. A catastrophic drought in the US in the 1930s resulted in an abrupt boundary between areas severely impacted by the Dust Bowl and areas that were less severely affected. Historical primary production data confirmed the location of this boundary at the border between two states (Nebraska and Iowa). Local drivers of weather and soils explained production responses across the boundary before and after the drought (1926–1948). During the drought, however, features at the landscape scale (soil properties and wind velocities) and regional scale (the Missouri River, its floodplain, and the nearby Loess Hills) explained most of the observed variance in primary production. The impact of future extreme events may be affected by land surface properties that either accentuate or ameliorate the effects of these events. Consideration of large-scale geomorphic processes may be necessary to interpret and manage for catastrophic events.
Does training with amplitude modulated tones affect tone-vocoded speech perception?
Temporal-envelope cues are essential for successful speech perception. We asked here whether training on stimuli containing temporal-envelope cues without speech content can improve the perception of spectrally-degraded (vocoded) speech in which the temporal envelope (but not the temporal fine structure) is mainly preserved. Two groups of listeners were trained on different amplitude-modulation (AM) based tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials in total; AM rates: 4, 8, and 16 Hz), while an additional control group did not undertake any training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or at an equivalent time interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not significantly differ from that observed for controls. Thus, we do not find convincing evidence that this amount of training with temporal-envelope cues without speech content provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
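The sketch below generates the kind of sinusoidally amplitude-modulated tone used in such AM tasks, at the three trained modulation rates. The carrier frequency, duration, and modulation depth are assumptions rather than the study's exact stimulus parameters.

```python
# Sketch of a sinusoidally amplitude-modulated tone at the trained AM rates.
# Carrier frequency, duration, and modulation depth are assumptions, not the
# study's exact stimulus parameters.
import numpy as np

def am_tone(fm, fc=1000.0, depth=1.0, dur=1.0, fs=44100):
    """(1 + depth*sin(2*pi*fm*t)) * sin(2*pi*fc*t), scaled to a peak of about 1."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t) / (1.0 + depth)

stimuli = {fm: am_tone(fm) for fm in (4.0, 8.0, 16.0)}  # one stimulus per trained rate
```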
Streptococcal peritonitis in Australian peritoneal dialysis patients: predictors, treatment and outcomes in 287 cases
Background: There has not been a comprehensive, multi-centre study of streptococcal peritonitis in patients on peritoneal dialysis (PD) to date. Methods: The predictors, treatment and clinical outcomes of streptococcal peritonitis were examined by binary logistic regression and multilevel, multivariate Poisson regression in all Australian PD patients across 66 centres between 2003 and 2006. Results: Two hundred and eighty-seven episodes of streptococcal peritonitis (4.6% of all peritonitis episodes) occurred in 256 individuals. Its occurrence was independently predicted by Aboriginal or Torres Strait Islander racial origin. Compared with other organisms, streptococcal peritonitis was associated with significantly lower risks of relapse (3% vs 15%), catheter removal (10% vs 23%) and permanent haemodialysis transfer (9% vs 18%), as well as a shorter duration of hospitalisation (5 vs 6 days). Overall, 249 (87%) patients were successfully treated with antibiotics without experiencing relapse, catheter removal or death. The majority of streptococcal peritonitis episodes were treated with either intraperitoneal vancomycin (most common) or first-generation cephalosporins for a median period of 13 days (interquartile range 8–18 days). Initial empiric antibiotic choice did not influence outcomes. Conclusion: Streptococcal peritonitis is a not infrequent complication of PD, and is more common in indigenous patients. When treated with either first-generation cephalosporins or vancomycin for a period of 2 weeks, streptococcal peritonitis is associated with lower risks of relapse, catheter removal and permanent haemodialysis transfer than other forms of PD-associated peritonitis.
Stacey O'Shea, Carmel M Hawley, Stephen P McDonald, Fiona G Brown, Johan B Rosman, Kathryn J Wiggins, Kym M Bannister and David W Johnson
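As a rough illustration of the modelling approach, the sketch below fits a single-level Poisson regression of episode counts with a time-at-risk offset. The variable names and simulated data are hypothetical, and the published analysis was multilevel (patients within centres) with additional covariates.

```python
# Single-level Poisson regression of episode counts with a time-at-risk offset.
# The variable names and simulated data are hypothetical stand-ins for the
# registry data; the published model was multilevel with more covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "episodes": rng.poisson(0.3, n),             # streptococcal peritonitis episodes
    "years_at_risk": rng.uniform(0.5, 4.0, n),   # time on peritoneal dialysis
    "indigenous": rng.integers(0, 2, n),         # Aboriginal / Torres Strait Islander origin
    "age": rng.normal(60, 12, n),
})

model = smf.glm(
    "episodes ~ indigenous + age",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["years_at_risk"]),
).fit()
print(np.exp(model.params))  # incidence rate ratios
```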
Genetic Variants at Chromosomes 2q35, 5p12, 6q25.1, 10q26.13, and 16q12.1 Influence the Risk of Breast Cancer in Men
Male breast cancer accounts for approximately 1% of all breast cancer. To date, risk factors for male breast cancer are poorly defined, but certain risk factors and genetic features appear common to both male and female breast cancer. Genome-wide association studies (GWAS) have recently identified common single nucleotide polymorphisms (SNPs) that influence female breast cancer risk; 12 of these have been independently replicated. To examine whether these variants contribute to male breast cancer risk, we genotyped 433 male breast cancer cases and 1,569 controls. Five SNPs showed a statistically significant association with male breast cancer: rs13387042 (2q35) (odds ratio (OR) = 1.30, p = 7.98 × 10⁻⁴), rs10941679 (5p12) (OR = 1.26, p = 0.007), rs9383938 (6q25.1) (OR = 1.39, p = 0.004), rs2981579 (FGFR2) (OR = 1.18, p = 0.03), and rs3803662 (TOX3) (OR = 1.48, p = 4.04 × 10⁻⁶). Comparing the ORs for male breast cancer with the published ORs for female breast cancer, three SNPs, rs13387042 (2q35), rs3803662 (TOX3), and rs6504950 (COX11), showed significant differences in ORs (p < 0.05) between sexes. Breast cancer is a heterogeneous disease; the relative risks associated with loci identified to date show subtype and, based on these data, gender specificity. Additional studies of well-defined patient subgroups could provide further insight into the biological basis of breast cancer development.
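When each odds ratio is reported with a 95% confidence interval, male and female estimates for the same SNP can be compared with a z-test on the difference of log odds ratios, as sketched below. The numbers are placeholders, not the published estimates.

```python
# Comparing two odds ratios (e.g. male vs female estimates for the same SNP)
# once each is reported with a 95% confidence interval: recover the standard
# errors of the log odds ratios and z-test their difference. Placeholder numbers.
import numpy as np
from scipy.stats import norm

def compare_log_odds(or1, ci1, or2, ci2):
    """Two-sided z-test for a difference between two independent log odds ratios."""
    se1 = (np.log(ci1[1]) - np.log(ci1[0])) / (2 * 1.96)
    se2 = (np.log(ci2[1]) - np.log(ci2[0])) / (2 * 1.96)
    z = (np.log(or1) - np.log(or2)) / np.sqrt(se1 ** 2 + se2 ** 2)
    return z, 2 * norm.sf(abs(z))

# Hypothetical example: male OR 1.48 (1.25-1.75) vs female OR 1.20 (1.15-1.25)
print(compare_log_odds(1.48, (1.25, 1.75), 1.20, (1.15, 1.25)))
```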