273 research outputs found

    Comprehension of familiar and unfamiliar native accents under adverse listening conditions

    This study aimed to determine the relative processing cost associated with comprehension of an unfamiliar native accent under adverse listening conditions. Two sentence verification experiments were conducted in which listeners heard sentences at various signal-to-noise ratios. In Experiment 1, these sentences were spoken in a familiar or an unfamiliar native accent, or in two familiar native accents. In Experiment 2, they were spoken in a familiar or unfamiliar native accent or in a nonnative accent. The results indicated that the differences between the native accents influenced the speed of language processing under adverse listening conditions and that this processing speed was modulated by the listener's relative familiarity with the native accent. Furthermore, the results showed that the processing cost associated with the nonnative accent was larger than that for the unfamiliar native accent.
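The signal-to-noise ratios mentioned above determine how strongly masking noise is scaled relative to the speech before the two are mixed. As a minimal illustration of that relationship (not the study's actual stimulus-preparation code, and with hypothetical function and variable names), the following Python sketch scales a noise waveform so that a target SNR in dB is reached:

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
    (in decibels), then return the sample-wise mixture. `speech` and
    `noise` are equal-length sequences of samples. Illustrative sketch."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Noise power implied by the target SNR, and the matching amplitude gain.
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    gain = math.sqrt(target_p_noise / p_noise)
    return [s + gain * n for s, n in zip(speech, noise)]
```

Lower (more negative) `snr_db` values yield proportionally louder noise, which is how such experiments make the listening conditions progressively more adverse.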

    Cost recovery for water supply, policy and practice in Bangladesh

    The National Policy for Safe Drinking Water Supply and Sanitation of Bangladesh states that "in the near future" larger parts of the construction costs of water supply systems should be recovered from the users. Furthermore, the policy prescribes that user communities should become responsible for operation and maintenance (O&M) of the water supply facilities in rural areas and should bear 100% of the costs for this. The policy also states that the transition towards cost recovery and financing practices for water supply should be gradual and that there should be a safety net for the hard-core poor. This paper deals with the cost recovery and financing of water supply according to the National Policy and with how it is interpreted and put into practice by different organizations.

    Effects of imitation on language attitudes associated with regional and standard accents of British English

    This study investigated whether and how imitation of sentences spoken in Liverpool English (LE) and Standard Southern British English (SSBE) affected attitudes related to these accents. LE has low prestige and low social attractiveness, while SSBE has high prestige and high social attractiveness. A previous study showed that imitation positively affects social attractiveness, but not prestige, for an accent with low attractiveness and low prestige. It was unclear how imitation affects attitudes towards accents with high attractiveness and high prestige. For both accents, participants repeated or imitated sentences. They gave prestige and attractiveness ratings for each accent at three points: before the experiment (a baseline taken before participants had heard the accent) and after each repeat/imitation session. A positive effect of imitation on attractiveness was found for LE, but not for SSBE. An effect of audio exposure is also reported: ratings were less stereotypical after listening to sentences spoken in the accent.

    On-line plasticity in spoken sentence comprehension: Adapting to time-compressed speech

    Listeners show remarkable flexibility in processing variation in the speech signal. One striking example is the ease with which they adapt to novel speech distortions, such as listening to someone with a foreign accent. Behavioural studies suggest that significant improvements in comprehension occur rapidly, often within 10-20 sentences. In the present experiment, we investigate the neural changes underlying on-line adaptation to distorted speech using time-compressed speech. Listeners performed a sentence verification task on normal-speed and time-compressed sentences while their neural responses were recorded using fMRI. The results showed that rapid learning of the time-compressed speech occurred during presentation of the first block of 16 sentences and was associated with increased activation in left and right auditory association cortices and in left ventral premotor cortex. These findings suggest that the ability to adapt to a distorted speech signal may, in part, rely on mapping novel acoustic patterns onto existing articulatory motor plans, consistent with the idea that speech perception involves integrating multi-modal information, including auditory and motoric cues. (C) 2009 Elsevier Inc. All rights reserved.
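Time compression shortens a sentence's duration while keeping all of its words, which is what makes the distortion learnable. The crude sketch below (hypothetical names; for illustration only) conveys how a compression factor maps onto discarded material by keeping an evenly spaced subset of fixed-length frames. Published experiments use pitch-preserving algorithms such as PSOLA rather than simple frame dropping:

```python
def time_compress(samples, factor, frame_len=160):
    """Crudely time-compress a waveform by keeping an evenly spaced
    subset of fixed-length frames. `factor` < 1.0 shortens the signal
    (e.g. 0.5 keeps roughly half the frames). Real experimental stimuli
    are made with pitch-preserving algorithms; this only illustrates
    how the ratio maps onto discarded signal."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples), frame_len)]
    n_keep = max(1, round(len(frames) * factor))
    step = len(frames) / n_keep
    kept = [frames[int(i * step)] for i in range(n_keep)]
    # Concatenate the retained frames back into one waveform.
    return [x for frame in kept for x in frame]
```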

    The causal role of left and right superior temporal gyri in speech perception in noise, a TMS study

    Successful perception of speech in everyday listening conditions requires effective listening strategies to overcome common acoustic distortions, such as background noise. Convergent evidence from neuroimaging and clinical studies identifies activation within the temporal lobes as key to successful speech perception. However, current neurobiological models disagree on whether the left temporal lobe is sufficient for successful speech perception or whether bilateral processing is required. We addressed this issue using TMS to selectively disrupt processing in either the left or right superior temporal gyrus (STG) of healthy participants, testing whether the left temporal lobe alone is sufficient or whether both left and right STG are essential. Participants repeated keywords from sentences presented in background noise in a speech reception threshold task while receiving online repetitive TMS separately to the left STG, right STG, or vertex, or while receiving no TMS. Results show an equal drop in performance following application of TMS to either left or right STG during the task. A separate group of participants performed a visual discrimination threshold task to control for the confounding side effects of TMS. Results show no effect of TMS on the control task, supporting the notion that the results of Experiment 1 can be attributed to modulation of cortical functioning in STG rather than to side effects associated with online TMS. These results indicate that successful speech perception in everyday listening conditions requires both left and right STG and thus have ramifications for our understanding of the neural organization of spoken language processing.
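A speech reception threshold (SRT) task typically adapts the SNR trial by trial until performance converges on a listener's threshold. The sketch below shows a generic 1-up/1-down staircase, not the exact procedure used in this study; `trial_correct` is a hypothetical callable standing in for one trial at a given SNR:

```python
def srt_staircase(trial_correct, n_trials=20, start_snr=0.0, step=2.0):
    """Generic 1-up/1-down adaptive track: lower the SNR after a correct
    trial (harder), raise it after an incorrect one (easier). Returns
    the list of SNRs visited; the SRT is commonly estimated from the
    final reversals of the track. Illustrative sketch only."""
    snrs = [start_snr]
    for _ in range(n_trials - 1):
        if trial_correct(snrs[-1]):
            snrs.append(snrs[-1] - step)  # correct: add more noise
        else:
            snrs.append(snrs[-1] + step)  # incorrect: reduce the noise
    return snrs
```

With an ideal listener whose threshold sits at some fixed SNR, the track converges and then oscillates around that threshold, which is what makes the mean of the final reversals a reasonable SRT estimate.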

    Perceptual Learning of Noise-Vocoded Speech Under Divided Attention

    Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to be reliant on attention, and theoretical accounts like the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task where they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in the easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved similar ending performance compared to the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed this task together with a dual task aiming to recruit domain-specific (lexical or phonological) or domain-general (visual) processes. All secondary task conditions produced patterns and amounts of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes and that speech perceptual learning persists under divided attention.
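Noise-vocoded speech discards spectral fine structure while preserving the slow amplitude envelope, which is why it is intelligible only after some perceptual learning. As a hedged, single-channel sketch of the idea (real stimuli use several bandpass channels, and all names here are hypothetical), the signal's envelope can be extracted and used to modulate white noise:

```python
import random

def envelope(signal, win=32):
    """Amplitude envelope via rectification followed by a moving-average
    lowpass over the previous `win` samples (shorter at the start)."""
    rect = [abs(x) for x in signal]
    out, acc = [], 0.0
    for i in range(len(rect)):
        acc += rect[i]
        if i >= win:
            acc -= rect[i - win]
        out.append(acc / min(i + 1, win))
    return out

def noise_vocode_1ch(signal, seed=0):
    """Single-channel noise vocoding sketch: the amplitude envelope of
    `signal` modulates white noise, discarding fine structure. One
    channel shown for brevity; real stimuli filter into several bands."""
    rng = random.Random(seed)
    env = envelope(signal)
    return [e * rng.uniform(-1.0, 1.0) for e in env]
```

The output carries the original's temporal energy pattern but none of its spectral detail, which is the defining property of noise-vocoded (degraded) speech.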

    A systematic review of acoustic change complex (ACC) measurements and applicability in children for the assessment of the neural capacity for sound and speech discrimination

    Objective: The acoustic change complex (ACC) is a cortical auditory evoked potential (CAEP) and can be elicited by a change in an otherwise continuous sound. The ACC has been highlighted as a promising tool in the assessment of sound and speech discrimination capacity, particularly for difficult-to-test populations such as infants with hearing loss, due to the objective nature of ACC measurements. Indeed, there is a pressing need to develop further means to accurately and thoroughly establish the hearing status of children with hearing loss, to help guide hearing interventions in a timely manner. Despite the potential of the ACC method, ACC measurements remain relatively rare in standard clinical settings. The objective of this study was to perform an up-to-date systematic review on ACC measurements in children, to provide greater clarity and consensus on the possible methodologies, applications, and performance of this technique, and to facilitate its uptake in relevant clinical settings. Design: Original peer-reviewed articles reporting ACC measurements in children (< 18 years) were included. Data were extracted and summarised for: (1) participant characteristics; (2) ACC methods and auditory stimuli; (3) information related to the performance of the ACC technique; (4) ACC measurement outcomes, advantages, and challenges. The systematic review was conducted using PRISMA guidelines for reporting, and the methodological quality of included articles was assessed. Results: A total of 28 studies were identified (9 infant studies). Review results show that ACC responses can be measured in infants (from < 3 months), and there is evidence of age-dependency, including increased robustness of the ACC response with increasing childhood age.
Clinical applications include the measurement of the neural capacity for speech and non-speech sound discrimination in children with hearing loss, auditory neuropathy spectrum disorder (ANSD), and central auditory processing disorder (CAPD). Additionally, ACCs can be recorded in children with hearing aids, auditory brainstem implants, and cochlear implants, and ACC results may guide hearing intervention/rehabilitation strategies. The review identified that the time taken to perform ACC measurements was often lengthy; the development of more efficient ACC test procedures for children would be beneficial. Comparisons between objective ACC measurements and behavioural measures of sound discrimination showed significant correlations for some, but not all, included studies. Conclusions: ACC measurements of the neural capacity to discriminate between speech and non-speech sounds are feasible in infants and children, and a wide range of possible clinical applications exist, although more time-efficient procedures would be advantageous for clinical uptake. A consideration of age and maturational effects is recommended, and further research is required to investigate the relationship between objective ACC measures and behavioural measures of sound and speech perception for effective clinical implementation.
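Cortical evoked potentials such as the ACC are extracted from continuous EEG by averaging many short segments time-locked to the acoustic change, so that the stimulus-locked response survives while unrelated activity averages towards zero. A minimal, hypothetical sketch of that core averaging step (not any specific clinical pipeline):

```python
def average_epochs(eeg, onsets, pre=10, post=50):
    """Average EEG segments time-locked to stimulus-change onsets,
    the basic operation behind extracting an evoked response such as
    the ACC. `eeg` is a list of samples, `onsets` are sample indices;
    each epoch spans `pre` samples before to `post` samples after an
    onset. Epochs that run off either edge of the recording are skipped."""
    epochs = [eeg[o - pre:o + post] for o in onsets
              if o - pre >= 0 and o + post <= len(eeg)]
    n = len(epochs)
    # Point-by-point mean across epochs: stimulus-locked activity adds
    # up coherently, everything else tends to cancel.
    return [sum(e[i] for e in epochs) / n for i in range(pre + post)]
```

This also hints at why ACC sessions can be lengthy: a clean average needs many repetitions of the stimulus change.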

    Localising semantic and syntactic processing in spoken and written language comprehension: an Activation Likelihood Estimation meta-analysis

    We conducted an Activation Likelihood Estimation (ALE) meta-analysis to identify brain regions that are recruited by linguistic stimuli requiring relatively demanding semantic or syntactic processing. We included 54 functional MRI studies that explicitly varied the semantic or syntactic processing load while holding constant the demands on earlier stages of processing. We included studies that introduced a syntactic/semantic ambiguity or anomaly, used a priming manipulation that specifically reduced the load on semantic/syntactic processing, or varied the level of syntactic complexity. The results confirmed the critical role of the posterior left inferior frontal gyrus (LIFG) in semantic and syntactic processing. These results challenge models of sentence comprehension highlighting the role of anterior LIFG for semantic processing. In addition, the results emphasise the involvement of the posterior (but not anterior) temporal lobe in both semantic and syntactic processing.
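The core of the ALE method is to blur each study's reported activation foci with a Gaussian into a per-study "modeled activation" map and then combine the study maps as union probabilities, so convergence across studies, not activation strength within one study, drives the result. A heavily simplified 1-D sketch of that combination step (real ALE operates on 3-D brain volumes with empirically derived kernel widths; all names here are hypothetical):

```python
import math

def ale_map(foci_per_study, grid_size=100, sigma=5.0):
    """1-D sketch of the ALE combination: each study's foci are blurred
    with a Gaussian into a modeled-activation (MA) map clipped to [0, 1],
    and study maps are combined as ALE = 1 - prod_i(1 - MA_i). Higher
    values mark locations where studies converge. Illustration only."""
    def ma_map(foci):
        # Gaussian blur of this study's foci, capped at probability 1.
        return [min(1.0, sum(math.exp(-((x - f) ** 2) / (2 * sigma ** 2))
                             for f in foci))
                for x in range(grid_size)]
    survival = [1.0] * grid_size   # running product of (1 - MA_i)
    for foci in foci_per_study:
        ma = ma_map(foci)
        survival = [s * (1.0 - m) for s, m in zip(survival, ma)]
    return [1.0 - s for s in survival]
```

Real analyses then threshold this map against a null distribution of random foci placement to find significant clusters; that inferential step is omitted here.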

    Categorization of regional and foreign accent in 5- to 7-year-old British children

    This study examines children's ability to detect accent-related information in connected speech. British English children aged 5 and 7 years old were asked to discriminate their home accent from an Irish accent or a French accent in a sentence categorization task. Using a preliminary accent rating task with adult listeners, it was first verified that the level of accentedness was similar across the two unfamiliar accents. Results showed that whereas the younger children performed just above chance level in this task, the 7-year-old group could reliably distinguish between these variations of their own language, but were significantly better at detecting the foreign accent than the regional accent. These results extend and replicate a previous study (Girard, Floccia, & Goslin, 2008) in which it was found that 5-year-old French children could detect a foreign accent better than a regional accent. The factors underlying the relative lack of awareness for a regional accent as opposed to a foreign accent in childhood are discussed, especially the amount of exposure, the learnability of both types of accents, and a possible difference in the amount of vowel versus consonant variability, for which acoustic measures of vowel formants and plosive voice onset times are provided. © 2009 The International Society for the Study of Behavioural Development.