6 research outputs found

    Women’s Rights and Shari’a Law: A Workable Reality? An Examination of Possible International Human Rights Approaches Through the Continuing Reform of the Pakistani Hudood Ordinance

    This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation points (IPs; the amount of time required for the correct identification of speech stimuli in a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the same stimuli but presenting them audiovisually rather than auditorily. The results showed that noise impeded the identification of consonants and words (i.e., delayed IPs and lowered accuracy), but not the identification of final words in sentences. In comparison with the previous study by Moradi et al., it can be concluded that the provision of visual cues expedited IPs and increased the accuracy of speech stimulus identification in both silence and noise. The implications of the results are discussed in terms of models for speech understanding.
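    As a rough illustration (not the authors' procedure), an isolation point can be scored from gated responses as the earliest gate from which the correct identification is given and then never abandoned; the gate duration and response format below are assumptions for the sketch.

        # Minimal sketch of isolation-point (IP) scoring in a gating paradigm.
        # Assumptions (not from the paper): each trial stores the listener's
        # response after every successive gate, and gates have a fixed step in ms.

        GATE_MS = 40  # hypothetical gate size, not the stimulus timing used here

        def isolation_point(responses, target):
            """Return the gated duration (ms) from which the target is identified
            and retained on all later gates, or None if never isolated."""
            ip = None
            for gate, response in enumerate(responses, start=1):
                if response == target:
                    if ip is None:
                        ip = gate * GATE_MS  # candidate IP at first correct gate
                else:
                    ip = None  # identification was abandoned, so the IP restarts
            return ip

        # Correct at gate 3, abandoned at gate 4, held from gate 5 onward:
        # IP = 5 * 40 = 200 ms.
        print(isolation_point(["pat", "bat", "back", "bat", "back", "back"], "back"))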

    Subjective ratings of masker disturbance during the perception of native and non-native speech

    The aim of the present study was to address how 43 normal-hearing (NH) and hearing-impaired (HI) listeners subjectively experienced the disturbance generated by four masker conditions (stationary noise, fluctuating noise, Swedish two-talker babble, and English two-talker babble) while listening to speech in two target languages: Swedish (native) or English (non-native). The participants were asked to evaluate their noise-disturbance experience on a continuous scale from 0 to 10 immediately after having performed each listening condition. The data demonstrated a three-way interaction between target language, masker condition, and group (HI versus NH). The HI listeners experienced the Swedish-babble masker as significantly more disturbing for the native target language (Swedish) than for the non-native language (English). Additionally, this masker was significantly more disturbing than each of the other masker types during the perception of Swedish target speech. The NH listeners, on the other hand, indicated that the Swedish babble masker was more disturbing than the stationary and fluctuating noise maskers for the perception of English target speech. The NH listeners perceived more disturbance from the speech maskers than from the noise maskers, whereas the HI listeners did not perceive the speech maskers as generally more disturbing than the noise maskers. The HI listeners did, however, have particular difficulty with the perception of native speech masked by native babble, a common situation in daily-life listening. These results suggest that the characteristics of the different maskers applied in the current study affect perceived disturbance differently in HI and NH listeners. There was no general difference in perceived disturbance across conditions between the HI and NH listeners.
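    For readers unfamiliar with the design, the Python sketch below (with invented ratings) shows how the 2 (group) x 2 (target language) x 4 (masker) disturbance cells could be tabulated before testing the reported three-way interaction; all values and column names are illustrative, not the study's data.

        # Sketch of summarizing 0-10 disturbance ratings per design cell.
        # The numbers are made up for illustration only.
        import pandas as pd

        ratings = pd.DataFrame({
            "group":  ["HI", "HI", "NH", "NH"],
            "target": ["Swedish", "English", "Swedish", "English"],
            "masker": ["Swedish babble", "Swedish babble",
                       "stationary noise", "Swedish babble"],
            "rating": [8.2, 5.1, 4.0, 6.3],  # illustrative values only
        })

        # Cell means: the pattern of interest is HI rating native babble on
        # native speech as most disturbing, versus NH rating speech maskers
        # high for the non-native (English) target.
        print(ratings.groupby(["group", "target", "masker"])["rating"].mean())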

    Early ERP signature of hearing impairment in visual rhyme judgment

    Postlingually acquired hearing impairment is associated with changes in the representation of sound in semantic long-term memory. An indication of this is lower performance on visual rhyme judgment tasks in conditions where phonological and orthographic cues mismatch, requiring heavy reliance on phonological representations. In this study, event-related potentials (ERPs) were used for the first time to investigate the neural correlates of phonological processing in visual rhyme judgments by participants with acquired hearing impairment (HI) and normal hearing (NH). Word pairs in the rhyme task either rhymed or did not, and had matching or mismatching orthography. In addition, the interstimulus interval (ISI) was manipulated to be either long (800 ms) or short (50 ms). Long ISIs allow for the engagement of explicit, top-down processes, while short ISIs limit the involvement of such mechanisms. We hypothesized lower behavioural performance and N400 and N2 deviations in HI in the mismatching rhyme judgment conditions, particularly at the short ISI. However, the results showed a different pattern. As expected, behavioural performance in the mismatch conditions was lower in HI than in NH at the short ISI, but ERPs did not differ across groups. In contrast, HI performed on a par with NH at the long ISI. Further, HI, but not NH, showed an amplified N2-like response in the non-rhyming, orthographically mismatching condition at the long ISI. This was also the rhyme condition in which participants in both groups benefited the most from the possibility to engage top-down processes afforded by the longer ISI. Taken together, these results indicate an early ERP signature of hearing impairment in this challenging phonological task, likely reflecting the use of a compensatory strategy. This strategy is suggested to involve increased reliance on explicit mechanisms such as articulatory recoding and grapheme-to-phoneme conversion.
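    The four rhyme x orthography cells can be made concrete with a toy classifier; the English word pairs and simplified phonological codes below are illustrative stand-ins for the experimental stimuli, and the last-three-letters rule is only a crude proxy for orthographic match.

        # Sketch of the four rhyme-task conditions (rhyme x orthography).

        def condition(pair):
            (w1, p1), (w2, p2) = pair     # (spelling, toy phonological ending)
            rhyme = p1 == p2              # same phonological ending
            ortho = w1[-3:] == w2[-3:]    # same final letters (crude proxy)
            return ("R+" if rhyme else "R-") + ("O+" if ortho else "O-")

        # Toy examples of the matching/mismatching cells:
        print(condition((("blue", "oo"), ("glue", "oo"))))      # R+O+: rhyme, same spelling
        print(condition((("blue", "oo"), ("through", "oo"))))   # R+O-: rhyme, spelling differs
        print(condition((("cough", "off"), ("bough", "ow"))))   # R-O+: no rhyme, same spelling
        print(condition((("cough", "off"), ("tree", "ee"))))    # R-O-: no rhyme, spelling differs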

    Seeing the talker’s face supports executive processing of speech in steady state noise

    Listening to speech in noise depletes cognitive resources, affecting speech processing. The present study investigated how remaining resources, or cognitive spare capacity (CSC), can be deployed by young adults with normal hearing. We administered a test of CSC (the CSCT; Mishra et al., 2013) along with a battery of established cognitive tests to 20 participants with normal hearing. In the CSCT, lists of two-digit numbers were presented with and without visual cues in quiet, as well as in steady-state and speech-like noise at a high intelligibility level. In low-load conditions, two numbers were recalled according to instructions inducing executive processing (updating, inhibition); in high-load conditions, the participants were additionally instructed to recall one extra number, which was always the first item in the list. In line with previous findings, results showed that CSC was sensitive to memory load and executive function but generally not related to working memory capacity. Furthermore, CSCT scores in quiet were lowered by visual cues, probably due to distraction. In steady-state noise, the presence of visual cues improved CSCT scores, probably by enabling better encoding. Contrary to our expectation, CSCT performance was disrupted more by steady-state than by speech-like noise, although only without visual cues, possibly because selective attention could be used to ignore the speech-like background and provide an enriched representation of target items in working memory similar to that obtained in quiet. This interpretation is supported by a consistent association between CSCT scores and updating skills.
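    As a toy sketch of the load manipulation only (not the actual CSCT scoring rules, which are specified in Mishra et al., 2013), the executive instruction below is an invented stand-in: report the largest odd and the largest even number, plus the first list item under high load.

        # Toy sketch of the CSCT low/high memory-load structure. The
        # "largest odd, largest even" rule is illustrative, not the real task.

        def csct_targets(numbers, high_load=False):
            odds  = [n for n in numbers if n % 2]
            evens = [n for n in numbers if n % 2 == 0]
            targets = [max(odds), max(evens)]  # two numbers under low load
            if high_load:
                targets.append(numbers[0])     # extra item: first in the list
            return targets

        trial = [47, 82, 15, 96, 33, 68, 51, 24]  # a list of two-digit numbers
        print(csct_targets(trial))                  # low load:  [51, 96]
        print(csct_targets(trial, high_load=True))  # high load: [51, 96, 47]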

    Similar digit-based working memory in deaf signers and hearing non-signers despite digit span differences

    Similar working memory (WM) for lexical items has been demonstrated in signers and non-signers, while short-term memory (STM) is regularly poorer in deaf than in hearing individuals. In the present study, we investigated digit-based WM and STM in Swedish and British deaf signers and hearing non-signers. To maintain good experimental control, we used printed stimuli throughout and held response mode constant across groups. We showed that, despite shorter digit spans, deaf signers have digit-based WM performance similar to that of well-matched hearing non-signers. We found no difference between signers and non-signers on STM span for letters chosen to minimize phonological similarity, or in the effects of recall direction. This set of findings indicates that similar WM in signers and non-signers generalizes from lexical items to digits, and suggests that the poorer STM found in deaf signers compared to hearing non-signers may be due to differences in phonological similarity across the language modalities of sign and speech.
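    A minimal sketch of serial-recall scoring with forward versus backward recall direction, of the kind compared across groups above; the strict position-by-position scoring rule is an assumption made for illustration.

        # Sketch of strict serial-position span scoring, with recall direction
        # either forward or backward.

        def span_score(presented, recalled, backward=False):
            target = list(reversed(presented)) if backward else list(presented)
            return sum(t == r for t, r in zip(target, recalled))

        digits = [3, 9, 4, 1, 7]
        print(span_score(digits, [3, 9, 4, 7, 1]))                 # forward: 3 correct
        print(span_score(digits, [7, 1, 4, 9, 3], backward=True))  # backward: 5 correct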

    The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances

    Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model; Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in unimodal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms, both of which depend on WMC, albeit in different ways. A revised ELU model is proposed based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and, finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.
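    As a loose caricature of the implicit/explicit distinction in the model (the ELU model itself is a conceptual framework, not an algorithm), the sketch below routes cleanly matching input through rapid lexical access and mismatching input through slower, WMC-limited repair; every name, rule, and threshold here is invented for illustration.

        # Caricature of the ELU match/mismatch idea: matching phonological
        # input is understood implicitly and rapidly; mismatching input
        # recruits slower, explicit processing whose success depends on WMC.

        LEXICON = ["boat", "coat", "goat"]  # toy lexicon

        def elu_route(phonological_input, wmc):
            if phonological_input in LEXICON:
                return "implicit: rapid, automatic lexical access"
            # Mismatch (e.g., degraded input): explicit repair limited by WMC.
            candidates = [w for w in LEXICON if w[1:] == phonological_input[1:]]
            if candidates and wmc >= len(candidates):
                return "explicit: inferred '%s' via working memory" % candidates[0]
            return "explicit: understanding fails under low WMC"

        print(elu_route("coat", wmc=2))  # clean input -> implicit route
        print(elu_route("zoat", wmc=3))  # degraded input -> explicit repair
        print(elu_route("zoat", wmc=1))  # degraded input, low WMC -> failure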