
    Relationship Among Brain Hemispheric Dominance, Attitude Towards L1 and L2, Gender, and Learning Suprasegmental Features

    Oral skills are important components of language competence. Good listening and speaking require good pronunciation, which encompasses both segmental and suprasegmental features. Despite extensive studies on the role of segmental features in listening and speaking, there is a paucity of research on the role of suprasegmental features in the same domain. Studies that shed light on how suprasegmental features are learned can help language teachers and learners in teaching and learning English as a foreign language. To this end, this study investigated the relationship among brain hemispheric dominance, gender, attitudes towards L1 and L2, and learning suprasegmental features in Iranian EFL learners. First, 200 intermediate EFL learners were selected as the sample from different English language teaching institutes in Hamedan and Isfahan, two provinces in Iran. Prior to the main stage of the study, the Oxford Placement Test (OPT) was used to homogenize the proficiency level of the participants. The participants then completed the Edinburgh Handedness Questionnaire to determine their dominant hemisphere, and answered two questionnaires regarding their attitudes towards L1 and L2. Finally, the participants took a suprasegmental features test. The results of the independent-samples t-tests indicated that left-brained language learners were superior in perceiving and learning suprasegmental features. It was also found that females were better than males at producing suprasegmental features. Furthermore, the results of Pearson product-moment correlations indicated a significant relationship between attitude towards L2 and learning suprasegmental features. However, no significant relationship was found between attitude towards L1 and learning English suprasegmental features. The findings provide English learners, teachers, and developers of instructional materials with theoretical and pedagogical implications, which are discussed in the paper.
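
    The abstract names two standard tests: an independent-samples t-test for the group comparison and a Pearson product-moment correlation for the attitude-score relationship. Below is a minimal sketch of both with SciPy; the arrays are synthetic placeholders, not the study's data.

```python
# Minimal sketch of the statistics named in the abstract, using SciPy.
# All data below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical suprasegmental test scores for the two hemisphericity groups.
left_brained_scores = rng.normal(75, 8, size=100)
right_brained_scores = rng.normal(70, 8, size=100)

# Independent-samples t-test: do the group means differ?
t_stat, t_p = stats.ttest_ind(left_brained_scores, right_brained_scores)
print(f"t = {t_stat:.2f}, p = {t_p:.4f}")

# Pearson product-moment correlation: attitude towards L2 vs. test score.
attitude_l2 = rng.normal(3.5, 0.6, size=200)  # e.g. Likert-scale means
supraseg_scores = attitude_l2 * 10 + rng.normal(0, 5, size=200)
r, r_p = stats.pearsonr(attitude_l2, supraseg_scores)
print(f"r = {r:.2f}, p = {r_p:.4f}")
```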

    The Neurocognition of Prosody

    Prosody is one of the most undervalued components of language, despite its fulfillment of manifold purposes. It can, for instance, help assign the correct meaning to compounds such as “white house” (linguistic function), or help a listener understand how a speaker feels (emotional function). However, brain-based models that take into account the role prosody plays in dynamic speech comprehension are still rare. This is probably because it has proven difficult to fully delineate the neurocognitive architecture underlying prosody. This review discusses clinical and neuroscientific evidence regarding both linguistic and emotional prosody. It will become obvious that prosody processing is a multistage operation and that its temporally and functionally distinct processing steps are anchored in a functionally differentiated brain network.

    The role of infant-directed speech in language development of infants with hearing loss

    It is estimated that approximately two out of every 1000 infants worldwide are born with unilateral or bilateral hearing loss (HL). Congenital HL, which refers to HL present at birth, has major negative effects on infants’ speech and language acquisition. Although such negative effects can be mediated by early access to hearing devices and intervention, the majority of children with HL have delayed language development in comparison with their normal-hearing (NH) peers. The aim of this thesis was to provide a deeper empirical understanding of the acoustic features of infant-directed speech (IDS) to infants with HL compared to infants with NH of the same chronological age and the same hearing age. Three specific objectives were set for this thesis. The first was to investigate the effects of HL and the degree of hearing experience on the acoustic features of IDS. The second was to assess adjustments in IDS features across development, as infants with HL acquire more hearing experience. The third was to evaluate the role of specific IDS components, such as vowel hyperarticulation and exaggerated prosody, in lexical processing in infants with NH from six to 18 months of age, at both the neural and behavioural levels. This was achieved by conducting four experiments. The first experiment used a cross-sectional design to assess the acoustic features of IDS to infants with HL, with a specific focus on whether and how infants’ chronological age and hearing age may affect these features. Experiment 2 was a longitudinal investigation of the acoustic features of IDS to infants with HL and infants with NH of the same hearing age; we sought to identify how infants’ changing linguistic needs may shape maternal IDS across development. Experiments 3 and 4 focused on lexical processing in six-, 10-, and 18-month-old infants, where we aimed to identify the role of specific IDS features in facilitating lexical processing in infants with NH at different stages of language acquisition. The results demonstrated that mothers adjust their IDS to infants with HL in a similar manner as in IDS to infants with NH. However, some differences are evident in the production of the corner vowels /i/ and /u/. These differences persist even when controlling for the amount of hearing experience infants with HL have accrued. Additionally, the findings demonstrated a relation between vowel production in IDS and infants’ receptive vocabulary, indicating that exaggerated vowel production in maternal IDS may foster infants’ language acquisition. This linguistic role was confirmed, as vowel hyperarticulation was also found to facilitate lexical processing at the neural level in 10-month-old infants. For older infants (18 months), however, the findings demonstrated that natural IDS with heightened pitch and vowel hyperarticulation represents the richest input for facilitating infants’ speech processing. In summary, the findings of this thesis suggest that congenital HL in infants affects maternal production of vowels in IDS, resulting in less clear vowel categories. This may result from mothers adjusting their vowel production to infants’ reduced vowel discrimination abilities, thus adapting their IDS to infants’ linguistic competence. Receptive vocabulary seems not to be affected by this, pointing to the role of other cues in building a lexicon in infants with HL that warrant further investigation. Furthermore, the findings suggest that pitch and vowel hyperarticulation in IDS play significant roles in facilitating lexical processing in the first two years of life.

    The central contribution of prosody to sentence processing: Evidence from behavioural and neuroimaging studies


    Right ventral stream damage underlies both poststroke aprosodia and amusia

    Background and purpose: This study was undertaken to determine and compare the lesion patterns and structural dysconnectivity underlying poststroke aprosodia and amusia, using a data-driven multimodal neuroimaging approach. Methods: Thirty-nine patients with right or left hemisphere stroke were enrolled in a cohort study and tested for linguistic and affective prosody perception and musical pitch and rhythm perception at the subacute and 3-month poststroke stages. Participants listened to words spoken with different prosodic stress that changed their meaning, and to words spoken with six different emotions, and chose which meaning or emotion was expressed. In the music tasks, participants judged pairs of short melodies as the same or different in terms of pitch or rhythm. Structural magnetic resonance imaging data were acquired at both stages, and machine learning-based lesion-symptom mapping and deterministic tractography were used to identify the lesion patterns and damaged white matter pathways giving rise to aprosodia and amusia. Results: Aprosodia and amusia were strongly correlated behaviorally and associated with similar lesion patterns in right frontoinsular and striatal areas. In multiple regression models, reduced fractional anisotropy and lower tract volume of the right inferior fronto-occipital fasciculus were the strongest predictors of both disorders over time. Conclusions: These results highlight a common origin of aprosodia and amusia, both arising from damage and disconnection of the right ventral auditory stream, which integrates rhythmic-melodic acoustic information in prosody and music. Comorbidity of these disabilities may worsen the prognosis and affect rehabilitation success.

    Inter-hemispheric EEG coherence analysis in Parkinson's disease : Assessing brain activity during emotion processing

    Parkinson’s disease (PD) is characterized not only by its prominent motor symptoms but also by disturbances in cognitive and emotional functioning. The objective of the present study was to investigate the influence of emotion processing on inter-hemispheric electroencephalography (EEG) coherence in PD. Multimodal emotional stimuli (happiness, sadness, fear, anger, surprise, and disgust) were presented to 20 PD patients and 30 age-, education level-, and gender-matched healthy controls (HC) while EEG was recorded. Inter-hemispheric coherence was computed from seven homologous EEG electrode pairs (AF3–AF4, F7–F8, F3–F4, FC5–FC6, T7–T8, P7–P8, and O1–O2) for the delta, theta, alpha, beta, and gamma frequency bands. In addition, subjective ratings were obtained for a representative set of the emotional stimuli. Inter-hemispherically, PD patients showed significantly lower coherence in the theta, alpha, beta, and gamma frequency bands than HC during emotion processing. No significant changes were found in delta-band coherence. We also found that PD patients were more impaired in recognizing negative emotions (sadness, fear, anger, and disgust) than relatively positive emotions (happiness and surprise). Behaviorally, PD patients did not show impairment in emotion recognition as measured by subjective ratings. These findings suggest that PD patients may have an impairment of inter-hemispheric functional connectivity (i.e., a decline in cortical connectivity) during emotion processing. This study may raise awareness of the value of EEG-based emotional response studies in clinical practice for uncovering potential neurophysiologic abnormalities.
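
    The coherence measure described above can be sketched in a few lines: magnitude-squared coherence between homologous electrode pairs, averaged within canonical frequency bands. The sampling rate, band edges, and synthetic signals below are illustrative assumptions, not the study's recording parameters.

```python
# Sketch of inter-hemispheric band coherence for one homologous electrode
# pair, using SciPy's magnitude-squared coherence estimator.
import numpy as np
from scipy.signal import coherence

fs = 256  # assumed sampling rate in Hz
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_coherence(x_left, x_right, fs, bands):
    """Mean magnitude-squared coherence per frequency band."""
    f, cxy = coherence(x_left, x_right, fs=fs, nperseg=fs * 2)
    return {name: cxy[(f >= lo) & (f < hi)].mean()
            for name, (lo, hi) in bands.items()}

# Hypothetical 10-second signals for one pair (e.g. F3-F4): a shared
# driver plus independent noise, so coherence is high but below 1.
rng = np.random.default_rng(1)
common = rng.standard_normal(fs * 10)
f3 = common + 0.5 * rng.standard_normal(fs * 10)
f4 = common + 0.5 * rng.standard_normal(fs * 10)
print(band_coherence(f3, f4, fs, bands))
```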

    Perception of Words and Pitch Patterns in Song and Speech

    This functional magnetic resonance imaging study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words and pitch patterns. Univariate and multivariate analyses were performed to isolate the neural correlates of word- and pitch-based discrimination between song and speech, corrected for rhythmic differences in both. To this end, six conditions, arranged in a subtractive hierarchy, were created: sung sentences including words, pitch, and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; and, as a control, the pure musical or speech rhythm. Systematic contrasts between these balanced conditions, following their hierarchical organization, showed a large overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. While the left IFG coded for spoken words and showed predominance over the right IFG in prosodic pitch processing, the opposite lateralization was found for pitch in song. The IPS showed sensitivity to the discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features, which is reflected in a fundamental similarity of the brain areas involved in their perception. However, fine-grained acoustic differences at the word and pitch level are reflected in the IPS and in the lateralized activity of the IFG.

    Structural Processing of Language Components: Detection and Comprehension

    Although music and language share many perceptually functional characteristics, research continues to probe their underlying neural circuitry. Past research has indicated distinct hemispheric lateralization for music and language processing. Recently, efforts have shifted towards the notion of an initial shared pathway in the brain, with auditory stimuli differentiated to specialized regions in later processing. Accordingly, both linguistic and musical components have been examined in numerous experiments to discern the possible influence of music and language components on auditory perception and comprehension, including their potential interaction. However, the effects of sentential prosody on early language structural processing and short-term working memory have yet to be examined from a linguistic perspective. Sixteen subjects participated in an experiment using behavioral and electroencephalography (EEG) data to assess the effects of sentential prosody variation on syntactic detection and language memory. Findings from this experiment could support current therapy techniques in speech-language pathology and provide an avenue for the development of new therapy techniques using multiple communication modalities.

    The role of the insula in speech and language processing

    Lesion and neuroimaging studies indicate that the insula mediates motor aspects of speech production, specifically articulatory control. Although it has direct connections to Broca's area, the canonical speech production region, the insula is also broadly connected with other speech and language centres, and may play a role in coordinating higher-order cognitive aspects of speech and language production. The extent of the insula's involvement in speech and language processing was assessed using the Activation Likelihood Estimation (ALE) method. Meta-analyses of 42 fMRI studies with healthy adults were performed, comparing insula activation during performance of language (expressive and receptive) and speech (production and perception) tasks. Both task types activated the bilateral anterior insulae. However, speech perception tasks preferentially activated the left dorsal mid-insula, whereas expressive language tasks activated the left ventral mid-insula. The results suggest that distinct regions of the mid-insula play different roles in speech and language processing.
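
    For orientation, the core of ALE can be illustrated in a toy form: each reported activation focus is modeled as a 3D Gaussian probability map, a per-study map is formed as the probabilistic union of its foci, and study maps are combined voxel-wise the same way. The grid size, smoothing width, and coordinates below are invented for illustration; real ALE implementations (e.g. GingerALE) add empirically derived kernels and permutation-based significance testing.

```python
# Toy sketch of the core ALE idea; not a substitute for a real ALE tool.
import numpy as np

def gaussian_map(shape, focus, sigma):
    """3D Gaussian 'modeled activation' map centred on one focus (voxels)."""
    zz, yy, xx = np.indices(shape)
    d2 = (xx - focus[0])**2 + (yy - focus[1])**2 + (zz - focus[2])**2
    return np.exp(-d2 / (2 * sigma**2))

shape, sigma = (40, 48, 40), 3.0  # assumed grid and smoothing, in voxels
studies = [  # hypothetical foci per study, in voxel coordinates
    [(20, 24, 18), (22, 26, 20)],
    [(21, 23, 19)],
]

ale = np.zeros(shape)
for foci in studies:
    # Per-study map: probabilistic union of that study's foci.
    ma = 1 - np.prod([1 - gaussian_map(shape, f, sigma) for f in foci], axis=0)
    ale = 1 - (1 - ale) * (1 - ma)  # combine across studies the same way
print("Peak ALE value:", ale.max().round(3))
```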

    The Production of Emotional Prosody in Varying Severities of Apraxia of Speech

    One speaker with mild AOS, one with moderate AOS, and one control speaker were asked to produce utterances with different emotional intent. In Experiment 1, the three subjects were asked to produce sentences with happy, sad, or neutral intent through a repetition task. In Experiment 2, the three subjects were asked to produce sentences with either happy or sad intent through a picture elicitation task. Paired t-tests comparing data from the acoustic analyses of each subject's utterances revealed significant differences in F0, duration, and intensity characteristics between the happy and sad sentences of the control speaker. There were no significant differences in the acoustic characteristics of the productions of the AOS speakers, suggesting that the AOS subjects were unable to volitionally produce the acoustic parameters that help convey emotion. Two more experiments were designed to determine whether naïve listeners could hear the acoustic cues that signal emotion in all three speakers. In Experiment 3, naïve listeners were asked to identify the sentences produced in Experiment 1 as happy, sad, or neutral. In Experiment 4, naïve listeners were asked to identify the sentences produced in Experiment 2 as either happy or sad. Chi-square findings revealed that the naïve listeners were able to identify the emotional differences of the control speaker, and that the correct identification was not due to chance. The naïve listeners could not distinguish between the emotional utterances of the mild or moderate AOS speakers. Higher percentages of correct identification for certain sentences over others were artifacts attributed either to chance (the naïve listeners were guessing) or to a response strategy (when in doubt, the naïve listeners chose neutral or sad). The findings from Experiments 3 and 4 corroborate the acoustic findings from Experiments 1 and 2. In addition to the four structured experiments, spontaneous samples of happy, sad, and neutral utterances were collected and compared to the sentences produced in Experiments 1 and 2. Comparisons between the elicited and spontaneous sentences indicated that the moderate AOS subject was able to produce variations of F0 and duration similar to those that normal speakers would produce when conveying emotion (Banse & Scherer, 1996; Lieberman & Michaels, 1962; Scherer, 1988). The mild AOS subject was unable to produce prosodic differences between happy and sad emotion. Thus, although these AOS subjects were unable to produce acoustic parameters that signal emotion during elicited speech, they did produce somewhat more variation in F0 and duration in spontaneous speech, especially the moderate AOS speaker. However, the meaningful variation patterns that convey emotion (such as those seen in the control subject) were not found. These findings suggest that the AOS subjects probably convey emotion non-verbally (e.g., through facial expression, muscle tension, and body language).
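
    The acoustic measures at issue (F0, duration, intensity) can be extracted with standard tools. Below is a minimal sketch using librosa's pYIN pitch tracker and RMS energy; the file name is a hypothetical placeholder, and pYIN merely stands in for whatever pitch-extraction software the study actually used.

```python
# Sketch: extract F0, duration, and an intensity proxy from one utterance.
import librosa
import numpy as np

y, sr = librosa.load("utterance_happy_01.wav", sr=None)  # hypothetical file

duration = librosa.get_duration(y=y, sr=sr)  # seconds

# F0 contour via pYIN, restricted to a plausible speech range.
f0, voiced_flag, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
mean_f0 = np.nanmean(f0)                  # Hz, ignoring unvoiced frames
f0_range = np.nanmax(f0) - np.nanmin(f0)  # Hz

# Intensity proxy: mean RMS energy expressed in dB.
rms = librosa.feature.rms(y=y)[0]
mean_db = np.mean(librosa.amplitude_to_db(rms, ref=np.max))

print(f"duration={duration:.2f}s  mean F0={mean_f0:.1f}Hz  "
      f"F0 range={f0_range:.1f}Hz  mean level={mean_db:.1f}dB")
```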