
    Automatic comparison of global children’s and adult songs supports a sensorimotor hypothesis for the origin of musical scales

    Get PDF
    Music throughout the world varies greatly, yet some musical features like scale structure display striking cross-cultural similarities. Are there musical laws or biological constraints that underlie this diversity? The “vocal mistuning” hypothesis proposes that cross-cultural regularities in musical scales arise from imprecision in vocal tuning, while the integer-ratio hypothesis proposes that they arise from perceptual principles based on psychoacoustic consonance. To test these hypotheses, we conducted an automatic comparative analysis of 100 children’s and adult songs from throughout the world. We found that children’s songs tend to have a narrower melodic range, fewer scale degrees, and less precise intonation than adult songs, consistent with motor limitations due to their earlier developmental stage. On the other hand, adult and children’s songs share some common tuning intervals at small-integer ratios, particularly the perfect 5th (~3:2 ratio). These results suggest that some widespread aspects of musical scales may be caused by motor constraints, but also that perceptual preferences for simple integer ratios might contribute to cross-cultural regularities in scale structure. We propose a “sensorimotor hypothesis” to unify these competing theories.
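    The integer-ratio comparison described in this abstract can be illustrated with a short sketch: measured intervals (in cents) are matched against the small-integer ratios the paper discusses. The ratio table and function names here are illustrative, not the authors' code.

    ```python
    import math

    def cents(ratio: float) -> float:
        """Convert a frequency ratio to cents (1200 cents = one octave)."""
        return 1200 * math.log2(ratio)

    # Small-integer ratios commonly cited as psychoacoustically consonant.
    SIMPLE_RATIOS = {
        "octave (2:1)": 2 / 1,
        "perfect 5th (3:2)": 3 / 2,
        "perfect 4th (4:3)": 4 / 3,
        "major 3rd (5:4)": 5 / 4,
    }

    def nearest_simple_ratio(interval_cents: float):
        """Find the simple ratio closest to a measured interval,
        returning its name and the mistuning in cents."""
        name, ratio = min(
            SIMPLE_RATIOS.items(),
            key=lambda kv: abs(cents(kv[1]) - interval_cents),
        )
        return name, interval_cents - cents(ratio)

    # A sung interval of 690 cents is closest to the perfect 5th (~702 cents),
    # i.e. mistuned flat by about 12 cents.
    name, error = nearest_simple_ratio(690)
    ```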

    Musemo: Express Musical Emotion Based on Neural Network

    Get PDF
    Department of Urban and Environmental Engineering (Convergence of Science and Arts)
    Music elicits emotional responses, which enable people to empathize with the emotional states induced by music, experience changes in their current feelings, receive comfort, and relieve stress (Juslin & Laukka, 2004). Music emotion recognition (MER) is a field of research that extracts emotions from music through various systems and methods. Interest in this field is increasing as researchers try to use it for psychiatric purposes. In order to extract emotions from music, MER requires music and emotion labels for each piece of music. Many MER studies use emotion labels created by non-music-specific psychologists, such as Russell’s circumplex model of affect (Russell, 1980) and Ekman’s six basic emotions (Ekman, 1999). However, Zentner, Grandjean, and Scherer suggest that emotions commonly evoked by music are concentrated in specific areas, rather than spread across the entire spectrum of emotions (Zentner, Grandjean, & Scherer, 2008). Thus, existing MER studies have difficulties with emotion labels that are not widely agreed upon by musicians and listeners. This study proposes a musical emotion recognition model, “Musemo”, that follows the Geneva emotional music scale proposed by music psychologists, based on a convolutional neural network. We evaluate the accuracy of the model by varying the length of the music samples used as input to Musemo, achieving an RMSE (root mean squared error) of up to 14.91%. We also examine the correlation among emotion labels by reducing Musemo’s emotion output vector to two dimensions through principal component analysis. Consequently, we obtain results similar to Vuoskoski and Eerola’s analysis of the Geneva emotional music scale (Vuoskoski & Eerola, 2011). We hope that this study can be expanded to inform treatments that comfort those in need of psychological empathy in modern society.
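    The dimensionality-reduction step mentioned above, projecting the model's emotion-output vectors onto two principal components, can be sketched as follows. This is a minimal PCA on random stand-in data, not the thesis code; the matrix shapes (songs × emotion scores) are assumptions.

    ```python
    import numpy as np

    def pca_2d(X: np.ndarray) -> np.ndarray:
        """Project the rows of X onto their first two principal components."""
        Xc = X - X.mean(axis=0)                 # center each emotion dimension
        cov = np.cov(Xc, rowvar=False)          # covariance across dimensions
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
        top2 = eigvecs[:, np.argsort(eigvals)[::-1][:2]]
        return Xc @ top2

    # e.g. emotion outputs for 200 clips over 9 emotion scales -> 200 x 2 coordinates
    rng = np.random.default_rng(0)
    scores = rng.random((200, 9))
    coords = pca_2d(scores)
    ```

    Plotting `coords` would give the kind of two-dimensional emotion map used to compare label correlations.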

    Analyzing Visual Mappings of Traditional and Alternative Music Notation

    Full text link
    In this paper, we postulate that combining the domains of information visualization and music studies paves the ground for a more structured analysis of the design space of music notation, enabling the creation of alternative music notations that are tailored to different users and their tasks. Hence, we discuss the instantiation of a design and visualization pipeline for music notation that follows a structured approach, based on the fundamental concepts of information and data visualization. This enables practitioners and researchers of digital humanities and information visualization alike to conceptualize, create, and analyze novel music notation methods. Based on an analysis of relevant stakeholders and their usage of music notation as a means of communication, we identify a set of relevant features typically encoded in different annotations and encodings, as used by interpreters, performers, and readers of music. We analyze the visual mappings of musical dimensions for varying notation methods to highlight gaps and frequent usages of encodings, visual channels, and Gestalt laws. This detailed analysis leads us to the conclusion that this under-researched area of information visualization holds potential for fundamental research. The paper discusses possible research opportunities, open challenges, and arguments that can be pursued in the process of analyzing, improving, or rethinking existing music notation systems and techniques.
    Comment: 5 pages including references; 3rd Workshop on Visualization for the Digital Humanities, Vis4DH, IEEE Vis 201

    Explorative Visual Analysis of Rap Music

    Get PDF
    Detecting references and similarities in music lyrics can be a difficult task. Crowdsourced knowledge platforms such as Genius can help in this process through user-annotated information about the artist and the song, but they fail to include visualizations that help users find similarities and structures on a higher, more abstract level. We propose a prototype to compute similarities between rap artists based on word embeddings of their lyrics crawled from Genius. Furthermore, the artists and their lyrics can be analyzed using an explorative visualization system that applies multiple visualization methods to support domain-specific tasks.
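    The similarity computation described above can be sketched roughly as follows: average the embedding vectors of an artist's lyric words and compare artists by cosine similarity. The toy two-dimensional "embeddings" and artist word lists below are invented for illustration; the prototype's actual embedding model is not specified here.

    ```python
    import math

    def average_vector(words, embeddings):
        """Mean of the embedding vectors for the words found in the vocabulary."""
        dim = len(next(iter(embeddings.values())))
        vecs = [embeddings[w] for w in words if w in embeddings]
        if not vecs:
            return [0.0] * dim
        return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

    def cosine(u, v):
        """Cosine similarity between two vectors; 0.0 if either is zero."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # Toy vectors standing in for embeddings learned from crawled lyrics.
    emb = {"street": [1.0, 0.2], "hustle": [0.9, 0.3], "love": [0.1, 1.0]}
    artist_a = average_vector("street hustle street".split(), emb)
    artist_b = average_vector("love hustle".split(), emb)
    similarity = cosine(artist_a, artist_b)
    ```

    The resulting pairwise similarities can then feed a 2D layout or clustering for the explorative visualization.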

    Generating Preview Tables for Entity Graphs

    Full text link
    Users are tapping into massive, heterogeneous entity graphs for many applications. It is challenging to select entity graphs for a particular need, given abundant datasets from many sources and the oftentimes scarce information about them. We propose methods to produce preview tables for compact presentation of important entity types and relationships in entity graphs. The preview tables assist users in attaining a quick and rough preview of the data. They can be shown in a limited display space for a user to browse and explore before she decides to spend time and resources to fetch and investigate the complete dataset. We formulate several optimization problems that look for previews with the highest scores according to intuitive goodness measures, under various constraints on preview size and distance between preview tables. The optimization problem under the distance constraint is NP-hard. We design a dynamic-programming algorithm and an Apriori-style algorithm for finding optimal previews. Results from experiments, comparison with related work, and user studies demonstrated the scoring measures' accuracy and the discovery algorithms' efficiency.
    Comment: This is the camera-ready version of a SIGMOD16 paper. There might be tiny differences in layout, spacing, and line breaking compared with the version in the SIGMOD16 proceedings, since we must submit TeX files and use arXiv to compile the file.
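    The size-constrained variant of the preview-selection problem has the flavor of a knapsack optimization: pick candidate tables maximizing a goodness score under a display-size budget. A hedged sketch of that idea follows; the table names, sizes, and scoring are invented and this dynamic program is a simplification, not the paper's algorithms.

    ```python
    def best_preview(tables, budget):
        """Pick candidate preview tables maximizing total score under a size
        budget (0/1-knapsack-style dynamic program over total display size)."""
        # best maps used-size -> (total score, chosen table names)
        best = {0: (0.0, [])}
        for name, size, score in tables:
            # Snapshot before updating so each table is used at most once.
            for used, (sc, chosen) in list(best.items()):
                new_used = used + size
                if new_used <= budget:
                    cand = (sc + score, chosen + [name])
                    if cand[0] > best.get(new_used, (-1.0, []))[0]:
                        best[new_used] = cand
        return max(best.values())

    # Three candidate tables: (name, display size, goodness score)
    tables = [("Person", 2, 3.0), ("Film", 3, 4.0), ("Award", 4, 5.0)]
    score, chosen = best_preview(tables, budget=5)
    ```

    With a budget of 5, the two smaller tables together beat the single high-scoring one, which is exactly the trade-off the goodness-vs-size constraints capture.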

    Evaluation of Drum Rhythmspace in a Music Production Environment

    Get PDF
    In modern computer-based music production, vast musical data libraries are essential. However, their presentation via subpar interfaces can hinder creativity, complicating the selection of ideal sequences. While low-dimensional space solutions have been suggested, their evaluations in real-world music production remain limited. In this study, we focus on Rhythmspace, a two-dimensional platform tailored for the exploration and generation of drum patterns in symbolic MIDI format. Our primary objectives encompass two main aspects: first, the evolution of Rhythmspace into a VST tool specifically designed for music production settings, and second, a thorough evaluation of this tool to ascertain its performance and applicability within the music production scenario. The tool’s development necessitated transitioning the existing Rhythmspace, which operates in Puredata and Python, into a VST compatible with Digital Audio Workstations (DAWs) using the JUCE (C++) framework. Our evaluation encompassed a series of experiments, starting with a composition test in which participants crafted drum sequences, followed by a listening test in which participants ranked the sequences from the first experiment. The results show that Rhythmspace and similar tools are beneficial, facilitating the exploration and creation of drum patterns in a user-friendly and intuitive manner, and enhancing the creative process for music producers. These tools not only streamline drum sequence generation but also offer a fresh perspective, often serving as a source of inspiration in the dynamic realm of electronic music production.

    Explanations in Music Recommender Systems in a Mobile Setting

    Get PDF
    Every day, millions of users utilize their mobile phones to access music streaming services such as Spotify. However, these ‘black boxes’ seldom provide adequate explanations for their music recommendations. A systematic literature review revealed that there is a strong relationship between moods and music, and that explanations and interface design choices can affect how people perceive recommendations just as much as algorithm accuracy. However, little seems to be known about how to apply user-centric design approaches, which exploit affective information to present explanations, to mobile devices. To bridge these gaps, the work of Andjelkovic, Parra, & O’Donovan (2019) was extended and applied as non-interactive designs in a mobile setting. Three separate Amazon Mechanical Turk studies asked participants to compare the same three interface designs: baseline, textual, and visual (n=178). Each survey displayed a different playlist with either low, medium, or high music popularity. Results indicate that music familiarity may or may not influence the need for explanations, but explanations are important to users. Both explanatory designs fared equally well and outperformed the baseline, and the use of affective information may help systems become more efficient, transparent, trustworthy, and satisfactory. Overall, there does not seem to be a ‘one design fits all’ solution for explanations in a mobile setting.
    Master's Thesis in Information Science