27 research outputs found

    The importance of material used in speech therapy : two case studies in minimally conscious state patients

    Speech therapy can be part of the care pathway for patients recovering from comas and presenting a disorder of consciousness (DOC). Although there are no official recommendations for speech therapy follow-up, neuroscientific studies suggest that relevant stimuli may have beneficial effects on the behavioral assessment of patients with a DOC. In two case studies, we longitudinally measured (over 4 to 6 weeks) the behavior (observed in a speech therapy session or using items from the Coma Recovery Scale—Revised) of two patients in a minimally conscious state (MCS) when presenting music and/or autobiographical materials. The results highlight the importance of using relevant material during a speech therapy session and suggest that a musical context with a fast tempo could improve behavioral evaluation compared to noise. This work supports the importance of adapted speech therapy for MCS patients and encourages larger studies to confirm these initial observations.

    Regular rhythmic primes improve sentence repetition in children with developmental language disorder

    Recently reported links between rhythm and grammar processing have opened new perspectives for using rhythm in clinical interventions for children with developmental language disorder (DLD). Previous research using the rhythmic priming paradigm has shown improved performance on language tasks after regular rhythmic primes compared to control conditions. However, this research has been limited to effects of rhythmic priming on grammaticality judgments. The current study investigated whether regular rhythmic primes could also benefit sentence repetition, a task requiring proficiency in complex syntax—an area of difficulty for children with DLD. Regular rhythmic primes improved sentence repetition performance compared to irregular rhythmic primes in children with DLD and with typical development—an effect that did not occur with a non-linguistic control task. These findings suggest processing overlap for musical rhythm and linguistic syntax, with implications for the use of rhythmic stimulation in the treatment of children with DLD in clinical research and practice.

    Effects of musical valence on the cognitive processing of lyrics

    The effects of music on the brain have been extensively researched, and numerous connections have been found between music and language, music and emotion, and music and cognitive processing. Despite this work, these three research areas have never before been drawn together in a single research paradigm. This is significant, as their combination could lead to valuable insights into the effects of musical valence on the cognitive processing of lyrics. Based on the feelings-as-information theory, which states that negative moods lead to analytic, systematic and fine-grained processing, while positive moods encourage holistic and heuristic-based processing, the current study (n = 64) used an error detection paradigm and found that significantly more error words were detected when paired with negatively valenced music compared to positively valenced music. Non-musicians were better at detecting error words than musicians, and native English speakers outperformed non-native English speakers. Such a result explains previous findings that sad and happy lyrics have differential effects on emotion induction, and suggests this is due to sad lyrics being processed at deeper semantic levels. This study provides a framework in which to understand the interaction of lyrics and music with emotion induction, a primary reason for listening to music.

    The nature of syntactic processing in music and language

    Thesis by publication. "Department of Psychology, ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia" (title page). Includes bibliographical references. Contents: Chapter 1. Introduction -- Chapter 2. Music and language: syntactic interference without syntactic violations -- Chapter 3. Syntactic processing in music and language: effects of interrupting auditory streams with alternating timbres -- Chapter 4. Complexity or syntax? Music, language, and syntactic interference -- Chapter 5. Effects of language syntax on music syntax processing -- Chapter 6. Discussion -- Appendices.
    It has been suggested that music and language are processed with shared cognitive resources. As these processing resources are limited in capacity, the concurrent presentation of music and language should produce interference, such that reduced processing is observed in one or both domains. The aim of this thesis was to investigate shared syntactic processing in music and language. To this end, I conducted a series of experiments to address limitations in previous research on this topic, which has (1) depended on surprising violations of syntactic structure, which may have engaged shared non-syntactic processes between music and language; (2) ignored considerations of auditory streaming research; and (3) focused mainly on the effects of music syntax processing on language syntax processing but not vice versa. Chapter 1 outlines the theoretical basis for the thesis. Chapter 2 presents three experiments showing that syntactic interference can be observed without surprising violations of structure, and that syntactic processing is dependent on successful auditory streaming. Chapter 3 reports on an event-related potential (ERP) study suggesting that syntactic processing of music is reduced when auditory streaming is disrupted. Experiments in Chapters 4 and 5 suggest that syntactic interference from music to language is modulated by whether tasks are primary or secondary. In both chapters, syntactic interference was not observed on the primary tasks, but interference was observed on the secondary tasks. In Chapter 6, all the experimental findings are drawn together and interpreted within a new Competitive Attention and Prioritisation Model. This thesis provides a new understanding of the nature of syntactic processing in music and language, and provides insight into the simultaneous processing of syntax in these two important modes of human communication.

    Effects of musical valence on the cognitive processing of lyrics

    The effects of music on the brain have been extensively researched, and numerous connections have been found between music and language, music and emotion, and music and cognitive processing. Despite this work, these three research areas have never before been drawn together into a single research paradigm. This is significant, as their combination could lead to valuable insights into the effects of musical valence on the cognitive processing of lyrics. This research draws on theories of cognitive processing suggesting that negative moods facilitate systematic and detail-oriented processing, while positive moods facilitate heuristic-based processing. The current study (n = 56) used an error detection paradigm and found that significantly more error words were detected when paired with negatively valenced sad music compared to positively valenced happy music. Such a result explains previous findings that sad and happy lyrics have differential effects on emotion induction, and suggests this is due to sad lyrics being processed at deeper semantic levels. This study provides a framework in which to understand the interaction of lyrics and music with emotion induction, a primary reason for listening to music.

    Emotion without words : a comparison study of music and speech prosody

    Music and language are two human behaviours that are linked through their innateness, universality, and complexity. Recent research has investigated the communicative similarities between music and language and has found syntactic, semantic, and emotional dimensions in both. Emotional communication is thought to be related to the prosody of language and the dynamics of music. The purpose of this study was to investigate whether language’s prosody can successfully communicate a phrase’s emotional intent with the lexical elements of speech removed, and whether the results are comparable with a musical phrase of the same perceived emotion. Eighty-five participants ranked a selection of emotional music and prosodic vocalizations on scales of happy and sad. Results showed consistency and correctness in the emotional rankings; however, there was higher variance and lower intensity in the speech examples across all participants, and more consistency in the music examples among musicians compared to nonmusicians. This study suggests that speech prosody can communicate a phrase’s emotional content without lexical elements, and that the results are comparable to, though less intense than, the same emotion conveyed by music. This study has implications for the field of music therapy through support for the accurate identification of emotional information in non-verbal stimuli.

    Music and language : do they draw on similar syntactic working memory resources?

    The cognitive processing similarities between music and language are an emerging field of study, with research finding evidence for shared processing pathways in the brain, especially in relation to syntax. This research combines theory from the shared syntactic integration resource hypothesis (SSIRH; Patel, 2008) and syntactic working memory (SWM) theory (Kljajevic, 2010), and suggests there will be shared processing costs when music and language concurrently access SWM. To examine this, word lists and complex sentences were paired with three music conditions: normal; syntactic manipulation (out-of-key chord); and a control condition with an instrument manipulation. As predicted, memory for sentences declined when paired with the syntactic manipulation compared to the other two music conditions, but the same pattern did not occur for word lists. This suggests that both sentences and music with a syntactic irregularity access SWM. Word lists, however, are thought to primarily access the phonological loop, and therefore did not show effects of shared processing. Musicians performed differently from non-musicians, suggesting that the processing of musical and linguistic syntax differs with musical ability. Such results suggest a separation in processing between the phonological loop and SWM, and provide evidence for shared processing mechanisms between music and language syntax.

    What you hear first, is what you get: Initial metrical cue presentation modulates syllable detection in sentence processing

    Auditory rhythms create powerful expectations for the listener. Rhythmic cues with the same temporal structure as subsequent sentences enhance processing compared with irregular or mismatched cues. In the present study, we focus on syllable detection following matched rhythmic cues. Cues were aligned with subsequent sentences at the syllable (low-level cue) or the accented-syllable (high-level cue) level. A different group of participants performed the task without cues to provide a baseline. We hypothesized that unaccented syllable detection would be faster after low-level cues, and accented syllable detection would be faster after high-level cues. There was no difference in syllable detection depending on whether the sentence was preceded by a high-level or low-level cue. However, the results revealed a priming effect of the cue that participants heard first. Participants who heard a high-level cue first were faster to detect accented than unaccented syllables, and faster to detect accented syllables than participants who heard a low-level cue first. The low-level-first participants showed no difference between detection of accented and unaccented syllables. The baseline experiment confirmed that hearing a low-level cue first removed the benefit of the high-level grouping structure for accented syllables. These results suggest that the initially perceived rhythmic structure influenced subsequent cue perception and its influence on syllable detection. Results are discussed in terms of dynamic attending, temporal context effects, and implications for context effects in neural entrainment.

    Syntactic and non-syntactic sources of interference by music on language processing

    Music and language are complex hierarchical systems in which individual elements are systematically combined to form larger, syntactic structures. Suggestions that music and language share syntactic processing resources have relied on evidence that syntactic violations in music interfere with syntactic processing in language. However, syntactic violations may affect auditory processing in non-syntactic ways, accounting for reported interference effects. To investigate the factors contributing to interference effects, we assessed recall of visually presented sentences and word lists when accompanied by background auditory stimuli differing in syntactic structure and auditory distraction: melodies without violations, scrambled melodies, melodies that alternate in timbre, and environmental sounds. In Experiment 1, one-timbre melodies interfered with sentence recall, and increasing both syntactic complexity and distraction by scrambling melodies increased this interference. In contrast, three-timbre melodies reduced interference on sentence recall, presumably because alternating instruments interrupted auditory streaming, reducing pressure on long-distance syntactic structure building. Experiment 2 confirmed that participants were better at discriminating syntactically coherent one-timbre melodies than three-timbre melodies. Together, these results illustrate that syntactic processing and auditory streaming interact to influence sentence recall, providing implications for theories of shared syntactic processing and auditory distraction.