Language and Music: Sound, Structure, and Meaning
Language and music are the most impressive examples of humans’ capacity to process complex sound and structure. Though interest in the relationship between these two abilities has a long history, only recently has cognitive and neuroscientific research started to illuminate both what is shared and what is distinct between linguistic and musical processing. This review considers evidence for a link between language and music at three levels of analysis: sound, structure, and meaning. These links not only inform our understanding of language and music, but also add to a more basic understanding of our processing of complex auditory stimuli, abstract structure, meaning, and emotion.
Acoustic Correlates of Auditory Object and Event Perception: Speakers, Musical Timbres, and Environmental Sounds
Human listeners must identify and orient themselves to auditory objects and events in their environment. What acoustic features support a listener’s ability to differentiate the great variety of natural sounds they might encounter? Studies of auditory object perception typically examine identification (and confusion) responses or dissimilarity ratings between pairs of objects and events. However, the majority of this prior work has been conducted within single categories of sound. This separation has precluded a broader understanding of the general acoustic attributes that govern auditory object and event perception within and across different behaviorally relevant sound classes. The present experiments take a broader approach by examining multiple categories of sound relative to one another. This approach bridges critical gaps in the literature and allows us to identify (and assess the relative importance of) features that are useful for distinguishing sounds within, between, and across behaviorally relevant sound categories. To do this, we conducted behavioral sound identification (Experiment 1) and dissimilarity rating (Experiment 2) studies using a broad set of stimuli that leveraged the acoustic variability within and between different sound categories via a diverse set of 36 sound tokens (12 utterances from different speakers, 12 instrument timbres, and 12 everyday objects from a typical human environment). Multidimensional scaling solutions as well as analyses of item-pair-level responses as a function of different acoustic qualities were used to understand what acoustic features informed participants’ responses. In addition to the spectral and temporal envelope qualities noted in previous work, listeners’ dissimilarity ratings were associated with spectrotemporal variability and aperiodicity. Subsets of these features (along with fundamental frequency variability) were also useful for making specific within- or between-category judgments. Dissimilarity ratings largely paralleled sound identification performance; however, the results of these tasks did not completely mirror one another. In addition, musical training was related to improved sound identification performance.
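For readers unfamiliar with the multidimensional scaling (MDS) analyses mentioned in this abstract, the sketch below shows the general technique: recovering a low-dimensional perceptual space from a matrix of pairwise dissimilarity ratings. It uses synthetic data and scikit-learn, and is only an illustration of the method, not the authors' actual analysis pipeline.

```python
# Illustrative sketch only: multidimensional scaling on a pairwise
# dissimilarity matrix. Data here are synthetic, not the study's ratings.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Hypothetical 36 x 36 symmetric dissimilarity matrix (e.g., 12 speakers,
# 12 instrument timbres, 12 environmental sounds), averaged over raters.
n_tokens = 36
ratings = rng.uniform(1, 9, size=(n_tokens, n_tokens))
dissim = (ratings + ratings.T) / 2   # enforce symmetry
np.fill_diagonal(dissim, 0)          # a token is not dissimilar to itself

# MDS on the precomputed dissimilarities; the resulting coordinates can then
# be correlated with acoustic features (spectral envelope, aperiodicity, etc.).
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

print(coords.shape)            # (36, 2): each sound token in perceptual space
print(round(mds.stress_, 3))   # stress: how poorly distances are reproduced
```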
The relationship between priming and linguistic representations is mediated by processing constraints. Commentary on Branigan, H. & Pickering, M., “An experimental approach to linguistic representation.”
Understanding the nature of linguistic representations undoubtedly will benefit from multiple types of evidence, including structural priming. Here, we argue that successfully gaining linguistic insights from structural priming requires us to better understand (1) the precise mappings between linguistic input and comprehenders’ syntactic knowledge; and (2) the role of cognitive faculties such as memory and attention in structural priming.
Processing structure in language and music: A case for shared reliance on cognitive control
The relationship between structural processing in music and language has received increasing interest in the last several years, spurred by the influential Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, 2003). According to this resource-sharing framework, music and language rely on separable syntactic representations but recruit shared cognitive resources to integrate these representations into evolving structures. The SSIRH is supported by findings of interactions between structural manipulations in music and language. However, other recent evidence suggests that such interactions can also arise with non-structural manipulations, and some recent neuroimaging studies report largely non-overlapping neural regions involved in processing musical and linguistic structure. These conflicting results raise the question of exactly what shared (and distinct) resources underlie musical and linguistic structural processing. This paper suggests that one shared resource is prefrontal cortical mechanisms of cognitive control, which are recruited to detect and resolve conflict that occurs when expectations are violated and interpretations must be revised. By this account, musical processing involves not just the incremental processing and integration of musical elements as they occur, but also the incremental generation of musical predictions and expectations, which must sometimes be overridden and revised in light of evolving musical input.
What is "musical ability" and how do we measure it?
There is little consensus on what exactly constitutes musical ability and how best to measure it. Past research has used a variety of tasks, most commonly assessing perceptual skills (e.g., same/different judgments on sequentially presented melodies), but sometimes also production tasks (e.g., singing a series of pitches or tapping along with a musical sequence). Outcome measures have ranged from single indices (e.g., "pitch ability") to composite scores from multiple tasks (e.g., pitch, rhythm, loudness, timbre, etc.). To date, it remains unclear how these different measures/scores relate to one another, limiting the ability to generalize across tasks and results. To address these issues, we assessed 165 participants' performance on 15 representative musical ability tasks to model the unity and diversity of musical abilities. Latent variable model comparisons suggest that musical ability is best represented by related but separable pitch, timing, perception, and production factors.
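The latent variable model comparison described in this abstract asks, in essence, how many underlying factors best account for a battery of task scores. The toy sketch below illustrates that general idea with exploratory factor analysis and cross-validated log-likelihood on synthetic data; the study itself used confirmatory latent variable models, which this sketch only approximates.

```python
# Toy illustration of comparing latent factor structures for a task battery.
# Synthetic data; not the study's dataset or its confirmatory models.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical scores for 165 participants on 15 musical ability tasks,
# generated from 4 latent factors (e.g., pitch, timing, perception, production).
n_participants, n_tasks, n_latent = 165, 15, 4
latent = rng.normal(size=(n_participants, n_latent))
loadings = rng.normal(scale=0.8, size=(n_latent, n_tasks))
scores = latent @ loadings + rng.normal(size=(n_participants, n_tasks))

# Compare factor solutions by held-out log-likelihood: the best-fitting
# number of factors should recover the generating structure.
for k in (1, 2, 4, 6):
    fa = FactorAnalysis(n_components=k, random_state=0)
    ll = cross_val_score(fa, scores, cv=5).mean()
    print(f"{k}-factor model: mean held-out log-likelihood = {ll:.2f}")
```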
Memory and cognitive control in an integrated theory of language processing.
Pickering and Garrod’s integrated model of production and comprehension includes no explicit role for non-linguistic cognitive processes. Yet, how domain-general cognitive functions contribute to language processing has become clearer with well-specified theories and supporting data. We therefore believe that their account can benefit by incorporating functions like working memory and cognitive control into a unified model of language processing.
Tuning the mind: Exploring the connections between musical ability and executive functions
A growing body of research suggests that musical experience and ability are related to a variety of cognitive abilities, including executive functioning (EF). However, it is not yet clear whether these relationships are limited to specific components of EF, limited to auditory tasks, or reflect very general cognitive advantages. This study investigated the existence and generality of the relationship between musical ability and EFs by evaluating the musical experience and ability of a large group of participants and investigating whether this predicts individual differences on three different components of EF – inhibition, updating, and switching – in both auditory and visual modalities. Musical ability predicted better performance on both auditory and visual updating tasks, even when controlling for a variety of potential confounds (age, handedness, bilingualism, and socio-economic status). However, musical ability was not clearly related to inhibitory control and was unrelated to switching performance. These data thus show that cognitive advantages associated with musical ability are not limited to auditory processes, but are limited to specific aspects of EF. This supports a process-specific (but modality-general) relationship between musical ability and non-musical aspects of cognition.
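The "predicts performance even when controlling for confounds" result reported here corresponds to a multiple regression in which musical ability and the covariates enter as simultaneous predictors. The sketch below shows that general approach with statsmodels; all variable names and data are hypothetical stand-ins, not the study's measures.

```python
# Minimal sketch of a regression controlling for covariates, assuming
# hypothetical columns (musical_ability, age, bilingual, ses, updating).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 150
df = pd.DataFrame({
    "musical_ability": rng.normal(size=n),
    "age": rng.uniform(18, 45, size=n),
    "bilingual": rng.integers(0, 2, size=n),
    "ses": rng.normal(size=n),
})
# Simulate an updating score that partly depends on musical ability.
df["updating"] = 0.4 * df["musical_ability"] + 0.1 * df["ses"] + rng.normal(size=n)

# The coefficient on musical_ability estimates its association with updating
# while age, bilingualism, and SES are held constant.
model = smf.ols("updating ~ musical_ability + age + bilingual + ses", data=df).fit()
print(model.summary().tables[1])
```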