Pitch processing in music and speech
The present paper provides an overview of research investigating pitch processing, considering cognitive processes (related to context, learning, memory and/or knowledge) for both music and language materials. Research investigating cross-domain influences of expertise (either in music or in tone languages) and of deficits (as in congenital amusia), referred to as positive and negative transfer effects, also contributes to our understanding of the domain-specificity or -generality of the mechanisms involved in pitch processing.
Music cognition: learning, perception, expectations
Research in music cognition has shown that non-musician listeners have implicit knowledge of the Western tonal musical system. This knowledge, acquired by mere exposure to music in everyday life, influences the perception of musical structures and allows listeners to develop expectations for future incoming events. Musical expectations play a role in musical expressivity and influence event processing: expected events are processed faster and more accurately than less-expected events, and this influence extends to the processing of simultaneously presented visual information. Studying implicit learning of auditory material in the laboratory allows us to further understand this cognitive capacity (i.e., the one at the origin of tonal acculturation) and its potential application to the learning of new musical systems and new musical expectations. In addition to behavioral studies on cognitive processes in and around music perception, computational models allow simulating the learning, representation, and perception of music by non-musician listeners.
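As a rough illustration of how mere exposure can give rise to such expectations, the toy Python sketch below (a hypothetical example, not one of the computational models referred to above; the chord labels and the miniature "corpus" are invented) tallies chord-to-chord transition frequencies in a small exposure set and uses them to grade how expected a continuation is.

```python
# Toy sketch of exposure-based learning of chord transitions (hypothetical
# example, not an implementation of the models mentioned above).
# Chord labels use Roman numerals; the "corpus" stands in for everyday exposure.

from collections import Counter, defaultdict

corpus = [
    ["I", "IV", "V", "I"],
    ["I", "vi", "IV", "V", "I"],
    ["I", "ii", "V", "I"],
    ["I", "IV", "I", "V", "I"],
]

# Count how often each chord follows each context chord in the exposure corpus.
transitions = defaultdict(Counter)
for sequence in corpus:
    for context, target in zip(sequence, sequence[1:]):
        transitions[context][target] += 1

def expectancy(context, target):
    """Relative frequency of `target` after `context` in the exposure corpus."""
    counts = transitions[context]
    total = sum(counts.values())
    return counts[target] / total if total else 0.0

# After a dominant (V) context, the tonic (I) is the most frequent continuation,
# mirroring the facilitation observed for tonic endings in priming studies.
print(expectancy("V", "I"))   # high
print(expectancy("V", "IV"))  # low (never encountered after V in this corpus)
```

In this miniature corpus, the tonic is by far the most frequent continuation of the dominant, so a purely frequency-based learner already grades tonic endings as most expected.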
Music and language perception: expectations, structural integration, and cognitive sequencing
Music can be described as sequences of events that are structured in pitch and time. Studying music processing provides insight into how complex event sequences are learned, perceived, and represented by the brain. Given the temporal nature of sound, expectations, structural integration, and cognitive sequencing are central to music perception (i.e., which sounds are most likely to come next, and at what moment should they occur?). This paper focuses on similarities between music and language cognition research, showing that music cognition research provides insight not only into music processing but also into language processing and the processing of other structured stimuli. The hypothesis of shared resources between music and language processing, and of domain-general dynamic attention, has motivated the development of research testing music as a means to stimulate sensory, cognitive, and motor processes.
Laterality effects for musical structure processing: a dichotic listening study
Objective: Our study investigated hemispheric lateralization for musical structure processing using a dichotic listening paradigm with music and speech. Method: Eight-chord sequences and eight spoken-syllable sequences were presented simultaneously, each to one ear. For the musical sequences, the final chord was either expected (i.e., the tonic) or less expected (i.e., the subdominant). In addition to tonal function, which was task-irrelevant, we manipulated the final syllable and the final timbre of the sequences for the experimental task: participants were asked to identify the final syllable (/di/ or /du/) or the timbre of the final chord (Timbre A or B). Results: Our experiment revealed a left-ear advantage for the effect of tonal function on spoken syllable identification: for syllables presented to the right ear (i.e., with the musical sequences presented to the left ear), identification was faster when the final chord of the musical sequence was a tonic rather than a subdominant chord. Conclusions: The present finding extends the effect of musical structure previously observed for sung and visual syllable processing to spoken syllable processing. It further suggests a right-hemispheric specialization for the processing of musical structures in healthy listeners, as previously reported for split-brain patients (Tramo & Bharucha, 1991).
Musical structure processing after repeated listening: schematic expectations resist veridical expectations
The present study investigates whether expectations based on listeners' schematic knowledge of music (related to tonal functions and reflected in harmonic priming) may be modulated by veridical expectations (knowing what is to come, tested here with repetition priming). It is well established that target-chord processing is facilitated when the target acts as the most referential chord of the Western musical system (i.e., the tonic chord) in the prime context. Our study attempted to modulate this harmonic priming effect by presenting half of the participants, before the experimental test, with numerous sequences ending on a moderately referential chord (i.e., the subdominant chord); the other half of the participants were presented with sequences ending on the most referential tonic chord. This repeated processing was expected to modulate the strength of the harmonic priming effect, with weaker priming in the former group than in the latter. The test sequences either differed from those of the exposure phase while sharing the same musical structure (Experiment 1) or were identical repetitions of them (Experiment 2). Harmonic priming effects were observed in both experiments and were only weakly affected by repetition priming in Experiment 2. This outcome underlines the strength of schematic expectations, which resist veridical expectations, and provides some empirical ground for the role of expectations in musical expressivity even when listeners know what will come next.
Exploiting multiple sources of information in learning an artificial language: human data and modeling
This study investigates the joint influences of three factors on the discovery of new word-like units in a continuous artificial speech stream: the statistical structure of the ongoing input, the initial wordlikeness of parts of the speech flow, and the contextual information provided by the earlier emergence of other word-like units. The results of an experiment conducted with adult participants show that these sources of information have strong and interactive influences on word discovery. The authors then examine the ability of different models of word segmentation to account for these results. PARSER (Perruchet & Vinter, 1998) is compared with the view that word segmentation relies on the exploitation of transitional probabilities between successive syllables, and with models based on the Minimum Description Length principle, such as INCDROP. The authors argue that PARSER has the advantage of accounting for the whole pattern of data without ad hoc modifications, while relying exclusively on general-purpose learning principles. This study strengthens the growing notion that nonspecific cognitive processes, based mainly on associative learning and memory principles, can account for a larger part of early language acquisition than previously assumed.
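To make the transitional-probability account against which PARSER is compared more concrete, here is a minimal Python sketch (an illustration only, not PARSER, INCDROP, or the authors' materials; the syllable inventory, the three "words", and the 0.5 boundary threshold are invented for the example). It estimates the probability of each syllable given the preceding one and posits a word boundary wherever that probability dips.

```python
# Minimal sketch of transitional-probability (TP) based word segmentation
# of a continuous artificial syllable stream (illustrative assumptions only).

import random
from collections import defaultdict

def transitional_probabilities(stream):
    """Estimate P(next syllable | current syllable) from the stream."""
    pair_counts = defaultdict(int)
    syll_counts = defaultdict(int)
    for current, nxt in zip(stream, stream[1:]):
        pair_counts[(current, nxt)] += 1
        syll_counts[current] += 1
    return {pair: count / syll_counts[pair[0]]
            for pair, count in pair_counts.items()}

def segment(stream, threshold=0.5):
    """Posit a word boundary wherever the TP to the next syllable dips."""
    tps = transitional_probabilities(stream)
    words, current_word = [], [stream[0]]
    for current, nxt in zip(stream, stream[1:]):
        if tps[(current, nxt)] < threshold:
            words.append("".join(current_word))
            current_word = []
        current_word.append(nxt)
    words.append("".join(current_word))
    return words

# Build a continuous stream from three invented tri-syllabic "words".
random.seed(0)
lexicon = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "me"]]
stream = [syll for _ in range(100) for syll in random.choice(lexicon)]
print(segment(stream)[:10])  # recovers units such as "tupiro", "golabu", ...
```

In such a stream, within-word transitions approach a probability of 1.0 while transitions across word boundaries are markedly lower, which is what makes boundary detection possible for a purely statistical learner.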
Working memory for pitch, timbre, and words
Aiming to further our understanding of the fundamental mechanisms of auditory working memory (WM), the present study compared performance for three auditory materials (words, tones, timbres). In a forward recognition task (Experiment 1), participants indicated whether the order of the items in the second sequence was the same as in the first sequence. In a backward recognition task (Experiment 2), participants indicated whether the items of the second sequence were played in the correct backward order. In Experiment 3, participants performed an articulatory suppression task during the retention delay of the backward task. To investigate potential length effects, the number of items per sequence was manipulated. The overall findings underline the benefit of a cross-material experimental approach and suggest that human auditory WM is not a unitary system. Whereas WM processes for timbres differed from those for tones and words, both similarities and differences were observed for words and tones: both types of stimuli appear to rely on rehearsal mechanisms, but they might differ in the sensorimotor codes involved.
Discontinuity in the enumeration of sequentially presented auditory and visual stimuli
The search for discontinuity in enumeration was recently renewed because Cowan [Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87-185; Cowan, N. (2005). Working memory capacity. Hove: Psychology Press] suggested that it allows evaluating the limit of the focus of attention, currently estimated at four items. A strong argument in favour of a general constraint of the cognitive system would be the observation of similar discontinuities in modalities other than the classic simultaneous presentation of visual objects. Recently, data were provided on tactile stimuli, but the authors diverged in their conclusions about the existence of such a discontinuity [Gallace, A., Tan, H. Z., & Spence, C. (2006). Numerosity judgments for tactile stimuli distributed over the body surface. Perception, 35(2), 247-266; Riggs, K. J., Ferrand, L., Lancelin, D., Fryziel, L., Dumur, G., & Simpson, A. (2006). Subitizing in tactile perception. Psychological Science, 17(4), 271-272]. Following a similar rationale, our study aimed at evaluating discontinuity in the enumeration of auditory and visual stimuli presented sequentially. The clear and similar discontinuity observed in error rates, response times, and given responses for both modalities favours the general capacity-limit view, but also questions the size of this capacity, because the discontinuity occurred here at size 2. However, masking of the stimuli in sensory memory could not be entirely ruled out.
Shared structural and temporal integration resources for music and arithmetic processing
While previous research has investigated the relationship either between language and music processing or between language and arithmetic processing, the present study investigated the relationship between music and arithmetic processing. Rule-governed number series, with the final number being a correct or incorrect series ending, were presented visually in synchrony with musical sequences whose final chord functioned as the expected tonic or the less-expected subdominant chord (i.e., a tonal function manipulation). Participants were asked to judge the correctness of the final number as quickly and accurately as possible. The results revealed an interaction between the processing of the series ending and the processing of the task-irrelevant chords' tonal function, suggesting that music and arithmetic processing share cognitive resources. These findings are discussed in terms of general temporal and structural integration resources for linguistic and non-linguistic rule-governed sequences.
(Investigating musical expectations of non-musician listeners: the musical priming paradigm)
Western listeners become sensitive to the regularities of the Western tonal system through mere exposure to musical pieces. This implicitly acquired tonal knowledge allows listeners to develop musical expectations for the future events of a musical sequence. These expectations play a role in musical expressivity and influence the processing of musical events. The musical priming paradigm is an indirect investigation method that allows studying listeners' tonal knowledge and the influence of musical expectations on the processing speed of musical events. Behavioral data have shown that the processing of a musical event is facilitated when it is tonally related to the preceding context (and thus supposed to be "expected"), in comparison to when it is unrelated or less related to that context. Neurophysiological data have shown that the processing of a less-expected event requires more neural resources than the processing of more prototypical musical structures. For example, studies using functional magnetic resonance imaging have reported increased activation in the inferior frontal cortex for unexpected musical events. Studying musical expectations, as an example of the processing of complex, non-verbal acoustic structures, contributes to a better understanding of the processes underlying the acquisition of implicit knowledge about our auditory environment, as well as of the influence of this knowledge on perception.