203 research outputs found

    Editor's Note

    No abstract available

    Associations between musical preferences and personality in female secondary school students

    It is widely believed that someone’s personality can be assessed through their musical taste. Many theoretical approaches explain why this could be true, and a long tradition of research has investigated the associations between personality and musical preferences, but the empirical evidence regarding these correlations is inconsistent. One explanation for these inconsistent findings could be that personality and musical preferences are largely stable and uncorrelated in adults, whereas during childhood and adolescence these traits may be connected more strongly, as younger individuals’ traits are still developing and music is a highly influential factor at this point in life. The aim of the current study is therefore to test whether pupils’ personality profiles are associated with their musical preferences. Data from a cross-sectional study at a British girls’ secondary school (N = 312) were used for this purpose. Musical preferences were assessed using a nonverbal inventory with sound examples. Using structural equation modeling, regression trees, and random forest models, we investigated how well ratings of musical sound excerpts predict the Big Five personality traits. Results from random forest regression models indicate that extraversion (R² = 6.4%), agreeableness (R² = 5.6%), and conscientiousness (R² = 4.1%) can be predicted by musical preferences to a small degree. In contrast, the explained variance for openness to experience and neuroticism was negligibly small (< 1%). The results of a data-driven structural equation model show that mellow musical styles are associated with agreeableness, whereas intense and sophisticated music is correlated with extraversion.
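
    The R² (variance explained) figures above can be read as 1 − SS_res/SS_tot. As a minimal, self-contained illustration of that statistic (the numbers below are invented, not the study’s data or its random forest model):

```python
# Minimal illustration of the R^2 (proportion of variance explained)
# statistic reported above; the numbers are invented for demonstration.

def r_squared(observed, predicted):
    """R^2 = 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1 - ss_res / ss_tot

# Hypothetical trait scores and model predictions
observed = [3.0, 4.0, 5.0, 2.0, 4.5]
predicted = [3.2, 3.8, 4.6, 2.5, 4.1]
print(round(r_squared(observed, predicted), 3))  # → 0.888
```

    On this scale, the study’s value of R² = 6.4% for extraversion corresponds to `r_squared` returning 0.064, i.e. a small but non-zero share of the variance.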

    Survival of musical activities. When do young people stop making music?

    Although making music is a popular leisure activity for children and adolescents, few stay musically engaged. Previous research has focused on finding reasons for quitting musical activities, pedagogical strategies to keep students engaged with music, and motivational factors of musical training. Nonetheless, we know very little about how the proportion of musically active children changes with age and which traits influence the survival of musical engagement. This study used longitudinal data from secondary school students in the UK and Germany aged between 10 and 17 years. A survival analysis was applied to investigate the trajectories of musical activities across this age span. Further factors, such as the type of instrument learned, gender, personality, and intelligence, were taken into account in subsequent analyses using generalized linear models. Results indicate that about 50% of all students drop out of music lessons and other musical activities by the time they turn 17, with most students quitting between the ages of 15 and 17. A musical home environment is an important factor associated with lower dropout rates, while conscientiousness and theory of musicality showed smaller but significant associations.
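
    The survival-analysis logic can be illustrated with a hand-rolled Kaplan–Meier product-limit estimator. This is a generic sketch with invented durations, not the study’s model or data:

```python
# Kaplan-Meier product-limit estimator, sketched in plain Python.
# durations: age at dropout or at last observation;
# events: 1 = stopped making music, 0 = still active (censored).

def kaplan_meier(durations, events):
    """Return {event time: survival probability} in time order."""
    survival, s = {}, 1.0
    for t in sorted(set(d for d, e in zip(durations, events) if e == 1)):
        at_risk = sum(1 for d in durations if d >= t)
        dropouts = sum(1 for d, e in zip(durations, events) if d == t and e == 1)
        s *= 1 - dropouts / at_risk
        survival[t] = s
    return survival

# Five hypothetical students, followed through secondary school
durations = [12, 13, 13, 15, 17]   # age when last observed
events    = [1,  1,  0,  1,  0]    # 1 = quit, 0 = censored
print({t: round(s, 3) for t, s in kaplan_meier(durations, events).items()})
# → {12: 0.8, 13: 0.6, 15: 0.3}
```

    The curve only drops at observed dropout times; censored students (still active when last seen) shrink the at-risk set without counting as dropouts, which is what distinguishes survival analysis from a plain dropout percentage.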

    Learning and Recalling Melodies: A Computational Investigation Using the Melodic Recall Paradigm

    Using melodic recall paradigm data, we describe an algorithmic approach to assessing melodic learning across multiple attempts. In a first simulation experiment, we argue for using similarity measures to assess melodic recall performance over the accuracy-based measures utilized previously. In Experiment 2, with up to six attempts per melody, 31 participants sang back 28 melodies (length 15–48 notes) presented either as a piano sound or as a vocal audio excerpt from real pop songs. Our analysis aimed to predict the similarity between the target melody and participants’ sung recalls across successive attempts. Similarity was measured with different algorithmic measures reflecting various structural (e.g., tonality, intervallic) aspects of melodies and overall similarity. However, previous melodic recall research mentioned, but did not model, that the length of sung recalls tends to increase across attempts, alongside overall performance. Consequently, we modeled how attempt length changes alongside similarity to address this omission in the literature. In a mediation analysis, we find that a target melody’s length, but not other melodic features, is the main predictor of similarity via the attempt length. We conclude that sheer length constraints appear to be the main factor when learning melodies long enough to require several attempts to recall. Analytical features of melodic structure may be more important for shorter melodies, or with stimulus sets that are structurally more diverse than the sample of pop songs used in this study.
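
    The mediation logic (target length → attempt length → similarity) can be sketched as two regressions whose path coefficients multiply into an indirect effect. A generic illustration on synthetic data using NumPy least squares; the variable names and coefficients are hypothetical, not the authors’ fitted model:

```python
import numpy as np

# Simple mediation sketch: X (target length) -> M (attempt length) -> Y (similarity).
# Synthetic data; Y is a noise-free function of M so path b recovers exactly.
rng = np.random.default_rng(0)
x = rng.uniform(15, 48, size=100)                  # target melody lengths (notes)
m = 0.5 * x + 3.0 + rng.normal(0.0, 1.0, 100)      # mediator: sung attempt length
y = 0.02 * m + 0.1                                 # outcome: similarity score

# Path a: regress M on X. Path b: regress Y on M, controlling for X.
a = np.linalg.lstsq(np.c_[x, np.ones_like(x)], m, rcond=None)[0][0]
b = np.linalg.lstsq(np.c_[m, x, np.ones_like(x)], y, rcond=None)[0][0]
indirect = a * b   # indirect (mediated) effect of X on Y
print(f"a = {a:.3f}, b = {b:.3f}, indirect effect = {indirect:.4f}")
```

    Here the true paths are a = 0.5 and b = 0.02, so the estimated indirect effect lands close to 0.01; in the paper’s terms, a nonzero a·b with a negligible direct path is what supports "length predicts similarity via attempt length".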

    Singing Ability Assessment: Development and validation of a singing test based on item response theory and a general open-source software environment for singing data

    We describe the development of the Singing Ability Assessment (SAA) open-source test environment. The SAA captures and scores different aspects of human singing ability and melodic memory in the context of item response theory. Taking perspectives from both the melodic recall and the singing accuracy literature, we present results from two online experiments (N = 247; N = 910). On-the-fly audio transcription is produced via a probabilistic algorithm and scored via latent variable approaches. Measures of the ability to sing long notes indicate a three-dimensional principal components analysis solution representing pitch accuracy, pitch volatility, and changes in pitch stability (proportion of variance explained: 35%; 33%; 32%). For melody singing, a mixed-effects model uses features of melodic structure (e.g., tonality, melody length) to predict overall sung melodic recall performance via a composite score [R²c = .42; R²m = .16]. Additionally, two separate mixed-effects models were constructed to explain performance in singing back melodies in a rhythmic [R²c = .42; R²m = .13] and an arhythmic [R²c = .38; R²m = .11] condition. Results showed that the resulting SAA melodic scores are significantly associated with previously described measures of singing accuracy, the long note singing accuracy measures, demographic variables, and features of participants’ hardware setup. Consequently, we release five R packages which facilitate deploying melodic stimuli online and in laboratory contexts, constructing audio production tests, transcribing audio in the R environment, and deploying the test elements and their supporting models. These are published open-source, easy to access, and flexible to adapt.
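
    The reported proportions of variance explained (35%; 33%; 32%) are a standard output of principal components analysis; the computation can be sketched as an eigendecomposition of the covariance matrix. A generic illustration on synthetic data, not the SAA scoring pipeline:

```python
import numpy as np

# Proportion of variance explained by each principal component,
# via eigendecomposition of the covariance matrix (synthetic data).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))            # 200 observations, 3 singing measures
X -= X.mean(axis=0)                      # centre each column

cov = np.cov(X, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, descending order
explained = eigvals / eigvals.sum()      # proportions summing to 1
print(explained.round(3))
```

    A near-even split such as 35%/33%/32% indicates three components of roughly equal importance, which is why the long-note measures are described as three-dimensional rather than dominated by a single factor.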

    Development and validation of the first adaptive test of emotion perception in music

    The Musical Emotion Discrimination Task (MEDT) is a short, non-adaptive test of the ability to discriminate emotions in music. Test-takers hear two performances of the same melody, both played by the same performer but each trying to communicate a different basic emotion, and are asked to determine which one is “happier”, for example. The goal of the current study was to construct a new version of the MEDT using a larger set of shorter, more diverse music clips and an adaptive framework to expand the ability range for which the test can deliver measurements. The first study analysed responses from a large sample of participants (N = 624) to determine how musical features contributed to item difficulty, which resulted in a quantitative model of musical emotion discrimination ability rooted in Item Response Theory (IRT). This model informed the construction of the adaptive MEDT. A second study contributed preliminary evidence for the validity and reliability of the adaptive MEDT, and demonstrated that the new version of the test is suitable for a wider range of abilities. This paper therefore presents the first adaptive musical emotion discrimination test, a new resource for investigating emotion processing which is freely available for research use.
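
    Under an IRT model such as the Rasch model, an adaptive test typically picks the next item to maximise Fisher information at the current ability estimate; for a Rasch item the information is p(1 − p), which peaks where item difficulty equals ability. A schematic sketch, assuming a Rasch parameterisation (the MEDT’s actual item bank and selection rule may differ):

```python
import math

def p_correct(theta, b):
    """Rasch model: probability of a correct response at ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p), maximal when b == theta."""
    p = p_correct(theta, b)
    return p * (1 - p)

def next_item(theta, difficulties):
    """Pick the item difficulty with maximum information at the current estimate."""
    return max(difficulties, key=lambda b: item_information(theta, b))

bank = [-2.0, -1.0, 0.0, 1.0, 2.0]  # hypothetical item difficulties
print(next_item(0.3, bank))          # → 0.0 (difficulty closest to theta)
```

    Because the most informative item is always matched to the current ability estimate, an adaptive test measures precisely across a wider ability range than a fixed set of items, which is the motivation given above for the adaptive MEDT.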

    Musical development during adolescence: Perceptual skills, cognitive resources, and musical training

    Longitudinal studies on musical development can provide valuable insights and, potentially, evidence for causal mechanisms driving the development of musical skills and cognitive resources, such as working memory and intelligence. Nonetheless, quantitative longitudinal studies on musical and cognitive development are very rare in the published literature. Hence, the aim of this paper is to document the available longitudinal evidence on musical development from three different sources. In Part I, data from a systematic literature review are presented in graphical format, making developmental trends from five previous longitudinal studies comparable. Part II presents a model of musical development derived from music-related variables in the British Millennium Cohort Study. In Part III, data from the ongoing LongGold project are analyzed, answering five questions about changes in musical skills and cognitive resources across adolescence and the role that musical training and activities might play in these developmental processes. Results provide evidence for substantial near-transfer effects (from musical training to musical skills) and weaker evidence for far transfer to cognitive variables. However, results also show that cognitive profiles of high intelligence and working memory capacity are conducive to strong subsequent growth rates of musical development.

    The Goldsmiths Dance Sophistication Index (Gold-DSI): A Psychometric Tool to Assess Individual Differences in Dance Experience

    Dance has become an important topic for research in empirical aesthetics, social and motor cognition, and as an intervention for neurodegenerative and neurodevelopmental disorders. Despite the growing scientific interest in dance, no standardised psychometric instrument exists to assess people’s dance experience. Here, we introduce the Goldsmiths Dance Sophistication Index (Gold-DSI), a 26-item questionnaire to measure individual differences in participatory and observational dance experience on a continuous scale. The Gold-DSI was developed in three stages. In the first stage, a set of 76 items was generated by adapting questions from the Goldsmiths Musical Sophistication Index (Müllensiefen et al., 2014) and as part of a stakeholder workshop using a grounded theory approach. The second stage focused on item reduction: using a large-scale online survey (N = 424), hierarchical factor analysis was used to fit a model comprising one general and six secondary factors (28 items in total). In stage three, six new items were added to specifically capture individual differences in dance observation. We then collected data from two samples for final model estimation (N = 127) and evaluation (N = 190). The final version of the Gold-DSI comprises 26 items; 20 items relate to one general factor that captures experience in dance participation, which includes four secondary factors: Body Awareness, Social Dancing, Urge to Dance, and Dance Training. A further six items separately measure experience in dance observation. In sum, the Gold-DSI provides a brief, standardised, and continuous assessment of doing, watching, and knowing about dance.