
    AI and Tempo Estimation: A Review

    The author's goal in this paper is to explore how artificial intelligence (AI) has been utilised to inform our understanding of, and ability to estimate at scale, a critical aspect of musical creativity - musical tempo. The central importance of tempo to musical creativity can be seen in how it is used to express specific emotions (Eerola and Vuoskoski 2013), suggest particular musical styles (Li and Chan 2011), influence perception of expression (Webster and Weir 2005) and mediate the urge to move one's body in time to the music (Burger et al. 2014). Traditional tempo estimation methods typically detect signal periodicities that reflect the underlying rhythmic structure of the music, often using some form of autocorrelation of the amplitude envelope (Lartillot and Toiviainen 2007). Recently, AI-based methods utilising convolutional or recurrent neural networks (CNNs, RNNs) on spectral representations of the audio signal have enjoyed significant improvements in accuracy (Aarabi and Peeters 2022). Common AI-based techniques include those based on probability (e.g., Bayesian approaches, hidden Markov models (HMM)), classification and statistical learning (e.g., support vector machines (SVM)), and artificial neural networks (ANNs) (e.g., self-organising maps (SOMs), CNNs, RNNs, deep learning (DL)). The aim here is to provide an overview of some of the more common AI-based tempo estimation algorithms and to shine a light on notable benefits and potential drawbacks of each. Limitations of AI in this field in general are also considered, as is the capacity for such methods to account for idiosyncrasies inherent in tempo perception, i.e., how well AI-based approaches are able to think and act like humans.
    Comment: 9 pages
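    To make the traditional approach mentioned above concrete, the sketch below estimates tempo by autocorrelating a frame-level amplitude envelope and picking the strongest periodicity within a plausible BPM range. It is a minimal illustration in Python/NumPy, not any specific published algorithm; the sample rate, hop size and BPM search range are assumed values.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_tempo(audio, sr=22050, hop=512, bpm_range=(40, 200)):
    """Toy tempo estimator: autocorrelation of the amplitude envelope.

    Illustrates the 'traditional' pipeline only; real systems add onset
    detection, multi-band processing and careful peak picking.
    """
    # Frame-level amplitude envelope from the analytic signal.
    envelope = np.abs(hilbert(audio))
    envelope = envelope[: len(envelope) // hop * hop].reshape(-1, hop).mean(axis=1)
    envelope -= envelope.mean()

    # Autocorrelation: peaks appear at lags matching the beat period.
    acf = np.correlate(envelope, envelope, mode="full")[len(envelope) - 1:]

    # Restrict the search to lags corresponding to a plausible BPM range.
    frame_rate = sr / hop
    min_lag = int(frame_rate * 60 / bpm_range[1])
    max_lag = int(frame_rate * 60 / bpm_range[0])
    best_lag = min_lag + int(np.argmax(acf[min_lag:max_lag]))
    return 60.0 * frame_rate / best_lag

# Sanity check on a synthetic click track with a click every 0.5 s (120 BPM).
sr = 22050
t = np.arange(10 * sr)
clicks = ((t % (sr // 2)) < 200).astype(float)
print(estimate_tempo(clicks, sr))  # roughly 120 BPM
```

    As the review notes, current systems increasingly replace this hand-crafted periodicity analysis with CNN or RNN models operating on spectral representations.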

    Redefining groove


    Virtual Reality-Assisted Physiotherapy for Visuospatial Neglect Rehabilitation: A Proof-of-Concept Study

    This study explores a VR-based intervention for visuospatial neglect (VSN), a post-stroke condition. It aims to develop a VR task utilizing interactive visual-audio cues to improve sensory-motor training and assess its impact on VSN patients' engagement and performance. Collaboratively designed with physiotherapists, the VR task uses directional and auditory stimuli to alert and direct patients, tested over 12 sessions with two individuals. Results show a consistent decrease in task completion variability and positive patient feedback, highlighting the VR task's potential for enhancing engagement and suggesting its feasibility in rehabilitation. The study underlines the significance of collaborative design in healthcare technology and advocates for further research with a larger sample size to confirm the benefits of VR in VSN treatment, as well as its applicability to other multimodal disorders.
    Comment: 29 pages, 8 figures, 5 tables
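    As a purely illustrative aside (not the authors' analysis), one common way to quantify a change in task completion variability is the coefficient of variation of completion times within each session; the sketch below uses invented numbers for a single hypothetical patient.

```python
import numpy as np

# Hypothetical completion times (seconds) for early, middle and late sessions;
# values are invented for illustration, not data from the study.
sessions = {
    "session 1": [48.2, 61.5, 39.8, 70.1],
    "session 6": [42.0, 47.3, 44.8, 50.2],
    "session 12": [40.1, 41.9, 39.5, 42.4],
}

for name, times in sessions.items():
    times = np.asarray(times)
    cv = times.std(ddof=1) / times.mean()  # relative spread of completion times
    print(f"{name}: coefficient of variation = {cv:.2f}")
```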

    Personalised interactive music systems for physical activity and exercise: a systematic review and meta-analysis

    The use of Personalised Interactive Music Systems (PIMS) may provide benefits in promoting physical activity levels. This systematic review and meta-analysis was conducted to assess the overall impact of PIMS in physical activity and exercise domains. Separate random effects meta-analyses were conducted for outcomes in physical activity levels, physical exertion, rate of perceived exertion (RPE), and affect. In total, 18 studies were identified. Of these, six studies (with 17 total intervention arms) reported data on at least one outcome of interest, from which an effect size could be calculated. PIMS were significantly associated with beneficial changes in physical activity levels (g = 0.49, CI [0.07, 0.91], p = 0.02, k = 4, n = 76) and affect (g = 1.68, CI [0.15, 3.20], p = 0.03, k = 4, n = 122). However, no significant benefit of PIMS use was found for RPE (g = 0.72, CI [-0.14, 1.59], p = 0.10, k = 3, n = 77) or physical exertion (g = 0.79, CI [-0.64, 2.10], p = 0.28, k = 5, n = 142). Overall, results support the preliminary use of PIMS across a variety of physical activities to promote physical activity levels and positive affect.
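    For readers unfamiliar with how such pooled estimates are produced, the sketch below shows one standard random-effects procedure (DerSimonian-Laird pooling of per-study effect sizes such as Hedges' g). The effect sizes and variances in the example are invented for illustration and are not the studies included in this review, which may also have used a different between-study variance estimator.

```python
import numpy as np
from scipy import stats

def random_effects_pool(g, v):
    """DerSimonian-Laird random-effects pooling of effect sizes.

    g : per-study effect sizes (e.g., Hedges' g)
    v : per-study sampling variances
    Returns the pooled effect, its 95% CI and a two-sided p-value.
    """
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)            # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * g) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    p = 2 * (1 - stats.norm.cdf(abs(pooled / se)))
    return pooled, ci, p

# Invented example with k = 4 studies (not the reviewed studies).
g = [0.30, 0.55, 0.80, 0.40]
v = [0.04, 0.06, 0.09, 0.05]
print(random_effects_pool(g, v))
```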

    The psychology of streaming: exploring music listeners’ motivations to favour access over ownership

    Digital streaming represents the most radical development in the way we experience music since the invention of automatic playback technologies two centuries ago. From zero ownership and on-demand access to a virtually limitless library of music via a disconnected financial transaction, streaming services challenge previous conceptions of how music is defined, experienced and consumed. This paper explores streaming from a psychological perspective, and highlights a range of factors that motivate users to favour access over ownership. From removal of the responsibilities of ownership to enhanced discovery, nostalgia-fulfilment to augmented emotional engagement, adoption of access-based consumption is shown to be both driven by, and to have multiple positive effects on, listeners' psychological functioning. The paper concludes by examining some implications of the issues discussed for each of the three pillars of the streaming industry (listeners, content-creators and service providers) for enhancing the musical experience, growing revenues, and maximising overall potential for engagement with and through music.

    The Role of Enculturation in Music-Induced Emotions: A Study on Psychophysiological Responses during Music Listening

    Previous cross-cultural studies in music and emotion have mostly focused on emotion recognition and whether basic perceived emotions are recognised across cultures. As a result, the impact of enculturation on music-induced emotions remains largely unexplored. In addition, such studies have relied mainly on subjective self-reports, ignoring other components of emotion such as physiology. Cross-cultural studies have suggested that cultural learning has a differential effect on certain emotional components (subjective feeling, physiology, and facial expression), yet this has not been tested in a music setting. To test this hypothesis, three groups of Finnish, Chinese, and Greek non-musicians listened to 20 excerpts of Western, Chinese, and Greek music that were selected from previous studies in which the emotional character of the music had been rated. Self-reports were used to collect continuous ratings of valence and arousal, along with measures of physiological activity (heart rate, skin conductance, and respiratory rate). Ratings of intensity, familiarity with the excerpt and familiarity with the music style were also collected after each stimulus. Results showed similar levels of familiarity with Western music across nationalities. However, the subjective measurements revealed group differences in the subjective feeling, even when familiarity was controlled for. Arousal was the only subjective rating that did not have a differentiating pattern, in line with previous research that has suggested arousal has a more universal quality. Physiological activity also showed less variation across nationalities, indicating that autonomic nervous system responses to music listening are less mediated by enculturation.

    Effects of musical valence on the cognitive processing of lyrics

    The effects of music on the brain have been extensively researched, and numerous connections have been found between music and language, music and emotion, and music and cognitive processing. Despite this work, these three research areas have never before been drawn together into a single research paradigm. This is significant as their combination could lead to valuable insights into the effects of musical valence on the cognitive processing of lyrics. This research draws on theories of cognitive processing suggesting that negative moods facilitate systematic and detail-oriented processing, while positive moods facilitate heuristic-based processing. The current study (n = 56) used an error detection paradigm and found that significantly more error words were detected when paired with negatively valenced sad music compared to positively valenced happy music. Such a result explains previous findings that sad and happy lyrics have differential effects on emotion induction, and suggests this is due to sad lyrics being processed at deeper semantic levels. This study provides a framework in which to understand the interaction of lyrics and music with emotion induction, a primary reason for listening to music.
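    As a hedged illustration of how an error detection paradigm of this kind might be analysed (the published analysis may well differ), the sketch below compares per-participant counts of detected error words across the two music conditions with a paired t-test, using invented data.

```python
import numpy as np
from scipy import stats

# Invented per-participant counts of detected error words in each condition;
# the direction (more detections under sad music) mirrors the reported result,
# but the numbers themselves are illustrative only.
rng = np.random.default_rng(seed=1)
n = 56
sad_music = rng.poisson(lam=7.5, size=n)
happy_music = rng.poisson(lam=6.0, size=n)

t_stat, p_value = stats.ttest_rel(sad_music, happy_music)  # within-subject comparison
print(f"paired t({n - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```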

    Adolescents’ expression and perception of emotion in music reflects their broader abilities of emotional communication

    Background: Musical behavior has been shown to reflect broader individual differences. However, despite the prevalence of music in the lives of young people, little is known about the mechanisms through which adolescents’ musical behavior connects to their general socio-emotional behavior and adjustment. The current study focused on abilities of emotional communication and investigated whether adolescents’ abilities in both perceiving and expressing emotions through music would be reflective of their general abilities of socio-emotional communication and interaction, measured through empathy and conduct problems. Due to the lack of previous research, the study was mainly exploratory, but we expected accurate and congruent perception and expression of musical emotions to correlate positively with empathy and negatively with conduct problems. Method: Sixty-one 14-year-olds (45 female, mean age 14.72) were given three music-related tasks that assessed emotion perception and emotion expression through music. Participants also filled in self-report scales for empathy (perspective taking and empathic concern) and conduct problems (externalized symptoms). Results: The results showed that perspective taking was particularly related to accurate recognition of tenderness in music and congruent use of staccato articulation for the expression of anger through music. Empathic concern was particularly related to congruent use of slow tempo for expressing sadness and loud volume for expressing anger, and also correlated with an overall tendency for intensified perception of fear in music. Externalized symptoms were particularly related to incongruent expression of sadness and anger through music: the use of staccato for expressing sadness and dull timbre for expressing anger. Conclusion: Overall, the results preliminarily support the idea of using musical behavior as an indicator of broader socio-emotional communication abilities, which in turn play a major role in adolescent adjustment and wellbeing.