20 research outputs found

    A decisional component of holistic encoding

    No full text
    It has long been suggested that human faces are used in cognition as undifferentiated wholes. General recognition theory (GRT; F. G. Ashby & J. T. Townsend, 1986) is used to represent hypotheses regarding the possible sources of the behavioral evidence supporting holistic representation. Specifically, it is suggested that holism can be understood in terms of violations of informational independence, informational separability, or decisional separability, as these constructs are defined in GRT. Stimuli were presented upright and inverted, in an encoding task that emphasized the meaningful nature of the stimuli. Patterns of performance (recognition hit rates) were consistent with prior studies. However, there were only a handful of violations of informational separability. Instead, consistent violations of decisional separability suggested a decisional basis for holistic effects. One of the most compellingly unitary visual experiences is the sight of a human face. A single glimpse of a single face can leave an observer with a myriad of associations, inferences, and judgments. Small wonder, then, that the visual experience of the human face has become the focus of extensive empirical (e.g., Bruce, 1991) and theoretical (e.g., Wenger & Townsend, 2001a) work.
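    For readers unfamiliar with the GRT constructs named above, the following is a minimal sketch of the standard definitions (Ashby & Townsend, 1986); the notation is illustrative and not taken from this article.

```latex
% Sketch of the standard GRT constructs (Ashby & Townsend, 1986);
% notation is illustrative, not the article's own.
% Let f_{A_i B_j}(x, y) be the joint perceptual density for the stimulus
% with level i on dimension A and level j on dimension B.

% Informational (perceptual) independence within a stimulus:
% the joint density factors into its marginals.
\[ f_{A_i B_j}(x, y) = g_{A_i B_j}(x)\, g_{A_i B_j}(y) \]

% Informational (perceptual) separability of dimension A from B:
% the marginal along x does not depend on the level of B.
\[ g_{A_i B_1}(x) = g_{A_i B_2}(x) \quad \text{for all } i, x \]

% Decisional separability: the decision bound for dimension A depends on
% x alone (e.g., respond A_2 whenever x exceeds a criterion c_x,
% regardless of the value of y).
\[ \text{respond } A_2 \iff x > c_x \]
```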

    Perception-production relationships for /r-l/ by native Japanese speakers

    No full text
    We examined the possible relationship between perception and production of /r-l/ by native Japanese speakers. Little evidence of a relationship between the cues used for perception and those used for production was found for native Japanese or native English speakers. However, we found evidence of a shift from reliance on conjunctive cues to reliance on single cues as listeners progressed from relatively naïve to fully native English speakers.
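    One way to picture this kind of perception-production comparison is sketched below in Python. It is a hypothetical illustration, not the authors' analysis: the column names, the choice of F2/F3 onset as cues, and the correlation-based comparison are all assumptions.

```python
# Hypothetical sketch: relate a listener's perceptual cue weighting to the
# acoustic cue separation in that same person's productions.
# Column names and the F2/F3 onset cues are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from scipy.stats import pearsonr

def perceptual_f3_weight(trials: pd.DataFrame) -> float:
    """Standardized logistic-regression weight on F3 onset for one listener's
    /r/-vs-/l/ identification responses."""
    X = trials[["f2_onset_hz", "f3_onset_hz"]].to_numpy()
    y = (trials["response"] == "r").to_numpy().astype(int)
    model = LogisticRegression().fit((X - X.mean(0)) / X.std(0), y)
    return model.coef_[0][1]          # weight on F3 onset

def production_f3_separation(tokens: pd.DataFrame) -> float:
    """Mean F3-onset difference between a speaker's produced /l/ and /r/ tokens."""
    means = tokens.groupby("target")["f3_onset_hz"].mean()
    return means["l"] - means["r"]    # larger value = clearer F3 contrast

def cue_relationship(perception_df: pd.DataFrame, production_df: pd.DataFrame):
    """Correlate perceptual F3 weights with production F3 separation across people."""
    w = perception_df.groupby("listener").apply(perceptual_f3_weight)
    s = production_df.groupby("listener").apply(production_f3_separation)
    shared = w.index.intersection(s.index)
    return pearsonr(w[shared], s[shared])   # r near 0 -> little relationship
```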

    Effects of Familiarization on Intelligibility of Dysarthric Speech in Older Adults With and Without Hearing Loss

    No full text
    PURPOSE: Familiarization tasks offer a promising platform for listener-targeted remediation of intelligibility disorders associated with dysarthria. To date, the body of work demonstrating improved understanding of dysarthric speech following a familiarization experience has been carried out on younger adults. The primary purpose of the present study was to examine the intelligibility effects of familiarization in older adults. METHOD: Nineteen older adults, with and without hearing loss, completed a familiarization protocol consisting of three phases: pretest, familiarization, and posttest. The older adults' initial intelligibility and intelligibility improvement scores were compared with previously reported data collected from 50 younger adults (Borrie, Lansford, & Barrett, 2017a). RESULTS: Relative to younger adults, initial intelligibility scores were significantly lower for older adults, although additional analysis revealed that the difference was limited to older adults with hearing loss. Critically, however, irrespective of hearing status, the older and younger adults achieved comparable intelligibility improvement following familiarization (a gain of roughly 20 percentage points). CONCLUSION: This study extends previous findings of improved intelligibility of dysarthria following familiarization to a group of listeners who are critical to consider in listener-targeted remediation, namely, aging caregivers and/or spouses of individuals with dysarthria.
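    The pretest-to-posttest gain comparison described above can be sketched roughly as follows. This is hypothetical Python, not the authors' analysis; the column names, percent-correct scoring, and Welch t-test are assumptions.

```python
# Hypothetical sketch of the familiarization gain analysis.
# Assumes a table with one row per listener: group ('older'/'younger') and
# percent-words-correct at pretest and posttest. Column names are assumptions.
import pandas as pd
from scipy.stats import ttest_ind

def summarize_gains(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["gain_pp"] = df["posttest_pct"] - df["pretest_pct"]   # percentage points
    return df.groupby("group")[["pretest_pct", "gain_pp"]].agg(["mean", "std"])

def compare_group_gains(df: pd.DataFrame):
    gains = df["posttest_pct"] - df["pretest_pct"]
    older = gains[df["group"] == "older"]
    younger = gains[df["group"] == "younger"]
    # Comparable gains (~20 percentage points in both groups) would appear
    # here as a small, non-significant group difference.
    return ttest_ind(older, younger, equal_var=False)
```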

    Can native Japanese listeners learn to differentiate /r-l/ on the basis of F3 onset frequency?

    No full text
    Many attempts have been made to teach native Japanese listeners to perceptually differentiate English /r-l/ (e.g., rock-lock). Though improvement is evident, in no case is final performance native-English-like. We focused our training on the third formant (F3) onset frequency, shown to be the most reliable indicator of /r-l/ category membership. We first presented listeners with instances of synthetic /r-l/ stimuli varying only in F3 onset frequency, in a forced-choice identification training task with feedback. Evidence of learning was limited. The second experiment utilized an adaptive paradigm beginning with non-speech stimuli consisting only of /r/ and /l/ F3 frequency trajectories, progressing to synthetic speech instances of /ra-la/; half of the trainees received feedback. Improvement was shown by some listeners, suggesting that some enhancement of /r-l/ identification is possible following training with only F3 onset frequency. However, only a subset of these listeners showed signs of generalization of the training effect beyond the trained synthetic context.
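    The second experiment's progression from bare F3 trajectories toward full synthetic /ra-la/ syllables could be organized along these lines. The sketch below is a hypothetical illustration: the stimulus labels, the 8-of-10 advancement rule, and the back-off rule are assumptions, not the study's actual schedule.

```python
# Hypothetical sketch of an adaptive /r-l/ training schedule that moves from
# non-speech F3 trajectories toward full synthetic syllables.
import random

STIMULUS_LEVELS = [
    "f3_trajectory_only",      # non-speech: isolated F3 frequency sweep
    "partial_formant_speech",  # intermediate, partially speech-like stimuli
    "full_synthetic_ra_la",    # complete synthetic /ra-la/ syllables
]

def run_adaptive_training(respond, n_trials=200, give_feedback=True):
    """respond(stimulus_level, target) -> 'r' or 'l' (the trainee's answer)."""
    level, recent = 0, []
    for _ in range(n_trials):
        target = random.choice(["r", "l"])
        answer = respond(STIMULUS_LEVELS[level], target)
        correct = answer == target
        if give_feedback:
            print("correct" if correct else f"incorrect: that was /{target}/")
        recent = (recent + [correct])[-10:]
        if len(recent) == 10 and sum(recent) >= 8 and level < len(STIMULUS_LEVELS) - 1:
            level, recent = level + 1, []      # advance toward speech-like stimuli
        elif len(recent) >= 5 and sum(recent[-5:]) <= 1 and level > 0:
            level, recent = level - 1, []      # back off if performance collapses
    return level                                # highest level reached at the end
```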

    The effect of the menstrual cycle on dichotic listening.

    No full text
    The purpose of this study was to determine the effect of the menstrual cycle on responses to a dichotic listening task. It was hypothesized that participants would exhibit a stronger right ear advantage during the menstrual cycle days when estrogen levels are at their peak. It was also hypothesized that the women not taking oral contraceptives would exhibit greater variations in ear advantage over the course of their menstrual cycle than those taking oral contraceptives. Finally, it was hypothesized that the error response rates would remain similar across different listening conditions and over the menstrual cycle. The participants were 30 women who took oral contraceptives and 15 who did not. They completed nine listening sessions consisting of three dichotic listening tasks: forced-left, forced-right, and open. The data were analyzed using mixed-effects models. The participants exhibited a reduction in right ear responses on the days that corresponded to when the level of estrogen would begin to increase. This response was different from what had been hypothesized. The analysis also indicated no response differences between the two groups of women. In addition, the women exhibited fewer errors over the course of the sessions, implying that they adapted to the task. The results indicate that the women's hormone fluctuation across the menstrual cycle affected their responses to the forced-left (cognitive-control) task only.
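    The mixed-effects analysis mentioned above might look roughly like the sketch below. It is written under assumed variable names (cycle phase, contraceptive group, listening condition, per-participant random intercept); the study's actual model specification may differ.

```python
# Hypothetical sketch of a mixed-effects analysis of dichotic-listening responses.
# Assumes a long-format table with one row per participant x session x condition:
# right_ear_prop, cycle_phase, oc_group ('OC'/'no_OC'), condition
# ('forced_left'/'forced_right'/'open'), and a participant id. Names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def fit_ear_advantage_model(df: pd.DataFrame):
    # Random intercept per participant; fixed effects for cycle phase,
    # oral-contraceptive group, listening condition, and their interactions.
    model = smf.mixedlm(
        "right_ear_prop ~ C(cycle_phase) * C(oc_group) * C(condition)",
        data=df,
        groups=df["participant"],
    )
    result = model.fit()
    return result.summary()
```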

    Training to improve language outcomes in cochlear implant recipients

    No full text
    Cochlear implants (CIs) have brought hearing ability to many prelingually deafened children. Advances in CI technology have brought not only hearing ability but also speech perception to these same children. Concurrent with the development of speech perception has come spoken language development, bringing with it the hope that prelingually deafened CI recipient children will develop spoken language capabilities on par with those of normal hearing (NH) children. Unfortunately, this has not been the case, and many CI recipient children lag behind their NH peers, with large variability in outcomes. It is likely that CI recipient children struggle to develop spoken language at NH-like levels because they have deficits in both the auditory and the cognitive skills that underlie the development of language. Fortunately, both the auditory and the cognitive training literatures indicate improvements in auditory and cognitive functioning following training. It therefore stands to reason that if training improves the auditory and cognitive skills that support language learning, language development itself should also improve. In the present manuscript, we review auditory and cognitive training and their potential impact on speech outcomes, with an emphasis on the speech perception literature.