
    The Pitch Histogram of Traditional Chinese Anhemitonic Pentatonic Folk Songs

    Funding Information: The APC was funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen. Publisher Copyright: © 2022 by the authors.
    As an essential subset of Chinese music, traditional Chinese folk songs frequently apply the anhemitonic pentatonic scale. In music education and demonstration, the Chinese anhemitonic pentatonic mode is usually introduced theoretically, supplemented by music appreciation, and a non-Chinese-speaking audience often lacks a perceptual understanding. We discovered that traditional Chinese anhemitonic pentatonic folk songs can be identified intuitively by their distinctive bell-shaped pitch distribution in different types of pitch histograms, reflecting the Chinese characteristic of Zhongyong (the doctrine of the mean). Applying pitch distribution to the demonstration of Chinese anhemitonic pentatonic folk songs, exemplified by a considerable number of instances, allows the audience to understand the culture behind the music from a new perspective by creating an auditory and visual association. We have also made preliminary attempts to characterize and model the observations and implemented pilot classifiers to provide references for machine learning in music information retrieval (MIR). To the best of our knowledge, this article is the first MIR study to use various pitch histograms on traditional Chinese anhemitonic pentatonic folk songs, demonstrating that, based on cultural understanding, lightweight statistical approaches can advance cultural diversity in music education, computational musicology, and MIR.
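    The pitch-histogram idea in this abstract can be illustrated with a minimal sketch. The melody below is a made-up line in the C-major anhemitonic pentatonic scale (C D E G A), not taken from the paper's folk-song corpus, and the function is an illustrative pitch-class histogram, not the authors' exact feature pipeline:

```python
from collections import Counter

# Illustrative only: a made-up melody in the C-major anhemitonic
# pentatonic scale (C D E G A), encoded as MIDI note numbers.
melody = [60, 62, 64, 67, 69, 67, 64, 62, 60, 64, 67, 64, 62, 60]

def pitch_class_histogram(notes):
    """Fold notes into the 12 pitch classes and normalize to sum to 1."""
    counts = Counter(note % 12 for note in notes)
    total = sum(counts.values())
    return [counts.get(pc, 0) / total for pc in range(12)]

hist = pitch_class_histogram(melody)
# Only the five pentatonic pitch classes {0, 2, 4, 7, 9} receive mass,
# with the interior degree (E) accumulating the most counts, hinting at
# the bell-shaped distribution the paper describes.
```

    A histogram like this makes the scale membership visible at a glance, which is the perceptual shortcut the paper proposes for non-Chinese-speaking audiences.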

    Feature Extraction for Music Information Retrieval

    Copyright c © 2009 Jesper Højvang Jensen, except where otherwise stated

    Modeling Musical Mood From Audio Features, Affect and Listening Context on an In-situ Dataset

    Musical mood is the emotion that a piece of music expresses. When musical mood is used in music recommenders (i.e., systems that recommend music a listener is likely to enjoy), salient suggestions that match a user’s expectations can be made. The musical mood of a track can be modeled solely from audio features of the music; however, these models have been derived from musical data sets of a single genre and labeled in a laboratory setting. Applying these models to data sets that reflect a user’s actual listening habits may not work well, and as a result, music recommenders based on these models may fail. Using a smartphone-based experience-sampling application that we developed for the Android platform, we collected a music listening data set gathered in situ during a user’s daily life. Analyses of our data set showed that real-life listening experiences differ from the data sets previously used in modeling musical mood. Our data set is a heterogeneous set of songs, artists, and genres. The reasons for listening and the context within which listening occurs vary both across individuals and for a single user. We then created the first model of musical mood using in-situ, real-life data. We showed that while audio features, song lyrics, and socially created tags can be used to successfully model musical mood with classification accuracies greater than chance, adding contextual information such as the listener’s affective state and/or listening context can improve classification accuracies. We successfully classified musical arousal in a 2-class model with a classification accuracy of 67% and musical valence with an accuracy of 75%. Finally, we discuss ways in which the classification accuracies can be improved, and the applications that result from our models.
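    The phrase "accuracies greater than chance" is worth making concrete: for an imbalanced two-class problem, chance is the majority-class baseline, not 50%. A minimal sketch with toy labels (the label names and counts are invented for illustration, not the thesis's data):

```python
from collections import Counter

def accuracy(predicted, actual):
    """Fraction of predictions that match the true labels."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

def majority_baseline(labels):
    """Accuracy of always guessing the most frequent label."""
    return Counter(labels).most_common(1)[0][1] / len(labels)

# Toy arousal labels: 6 "high", 4 "low", so the chance baseline is 0.6.
# A classifier must beat this, not 0.5, to be better than chance.
true_labels = ["high"] * 6 + ["low"] * 4
predictions = ["high"] * 6 + ["low"] * 2 + ["high"] * 2
model_acc = accuracy(predictions, true_labels)
chance_acc = majority_baseline(true_labels)
```

    Reporting both numbers side by side, as the thesis's comparison implies, shows how much of the accuracy comes from the model rather than the label distribution.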

    Developing a Noise-Robust Beat Learning Algorithm for Music-Information Retrieval

    The field of Music-Information Retrieval (Music-IR) involves the development of algorithms that can analyze musical audio and extract various high-level musical features. Many such algorithms have been developed, and systems now exist that can reliably identify features such as beat locations, tempo, and rhythm from musical sources. These features in turn are used to assist in a variety of music-related tasks ranging from automatically creating playlists that match specified criteria to synchronizing various elements, such as computer graphics, with a performance. These Music-IR systems thus help humans to enjoy and interact with music. While current systems for identifying beats in music have found widespread utility, most of them have been developed on music that is relatively free of acoustic noise. Much of the music that humans listen to, though, is performed in noisy environments. People often enjoy music in crowded clubs and noisy rooms, but this music is much more challenging for Music-IR systems to analyze, and current beat trackers generally perform poorly on musical audio heard in such conditions. If our algorithms could accurately process this music, though, it would enable this music, too, to be used in applications such as automatic song selection, which are currently limited to music taken directly from professionally produced digital files that have little acoustic noise. Noise-robust beat learning algorithms would also allow for additional types of performance augmentation which create noise and thus cannot be used with current algorithms. Such a system, for instance, could aid robots in performing synchronously with music, whereas current systems are generally unable to accurately process audio heard in conjunction with noisy robot motors. This work aims to present a new approach for learning beats and identifying both their temporal locations and their spectral characteristics for music recorded in the presence of noise.
First, datasets of musical audio recorded in environments with multiple types of noise were collected and annotated. Noise sources used for these datasets included HVAC sounds from a room, chatter from a crowded bar, and fans and motor noises from a moving robot. Second, an algorithm for learning and locating musical beats was developed which incorporates signal processing and machine learning techniques such as Harmonic-Percussive Source Separation and Probabilistic Latent Component Analysis. A representation of the musical signal called the stacked spectrogram was also utilized in order to better represent the time-varying nature of the beats. Unlike many current systems, which assume that the beat locations will be correlated with some hand-crafted features, this system learns the beats directly from the acoustic signal. Finally, the algorithm was tested against several state-of-the-art beat trackers on the audio datasets. The resultant system was found to significantly outperform the state-of-the-art when evaluated on audio played in realistically noisy conditions.
Ph.D., Electrical Engineering -- Drexel University, 201
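    The abstract does not spell out how the stacked spectrogram is built; a common reading of such representations is that each spectral frame is concatenated with its temporal neighbours, so one column captures local evolution rather than a single instant. A hedged sketch of that idea, under that assumption (frames are plain lists of magnitudes; the function name and padding choice are illustrative, not the thesis's exact construction):

```python
def stacked_spectrogram(frames, context=2):
    """Concatenate each frame with its +/- `context` neighbours.

    Edge frames are padded by repeating the first/last frame, so every
    stacked column has the same length and carries a short window of
    the signal's time-varying behaviour.
    """
    n = len(frames)
    stacked = []
    for i in range(n):
        window = []
        for j in range(i - context, i + context + 1):
            j = min(max(j, 0), n - 1)  # clamp indices at the edges
            window.extend(frames[j])
        stacked.append(window)
    return stacked

# Toy 3-frame, 1-bin "spectrogram": each output column now spans
# three consecutive frames of context.
toy = [[1], [2], [3]]
stacked = stacked_spectrogram(toy, context=1)
```

    Feeding such context windows to a decomposition like Probabilistic Latent Component Analysis lets the learned components describe how beat energy evolves over a few frames, which is the time-varying behaviour the abstract highlights.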

    ESCOM 2017 Book of Abstracts


    In Search of Signature Pedagogies for Teacher Education: The Critical Case of Kodály-Inspired Music Teacher Education

    The purposes of this study are to identify the features of Kodály-inspired music teacher education programs that either confirm or refute the notion that signature pedagogies (Shulman, 2005a, 2005b, 2005c) are present in this form of teacher education and to identify whether and how philosophical, pedagogical, and institutional influences support such pedagogies. Signature pedagogies are shared modes of teaching that are distinct to a specific profession. These pedagogies, based in the cognitive, practical, and normative apprenticeships of professional preparation, dominate the preparation programs of a profession, both within and across institutions. This study employs a collective case study design to examine Kodály-inspired teacher education programs, specifically those endorsed by the Organization of American Kodály Educators (OAKE). This study serves as a critical test of the applicability of the construct of signature pedagogies to teacher education. Because these programs purport to hold shared philosophical and pedagogical ideals and are governed by an endorsing body (OAKE), signature pedagogies ought to be present in these programs if they are present in any teacher education programs. Embedded in this collective case are: (1) a history of Kodály-inspired pedagogy and its adoption and adaptation in the U.S., (2) case studies of two prominent and influential OAKE-endorsed Kodály-inspired teacher education programs, and (3) case studies of four to five faculty in each of these programs. Data sources include primary and secondary texts and documents, observations of the various events and activities that occur as a part of Kodály-inspired teacher education programs, and focus group and individual interviews with program faculty and students. This study finds that the two case sites possess four signature pedagogies: (1) demonstration teaching, (2) master class teaching, (3) discovery learning, and (4) the music literature collection and retrieval system.
These pedagogies appear to be inextricably tethered to the contexts, professional body (OAKE), and work of Kodály-inspired music educators through multiple complex linkages. The study closes by assessing the applicability and usefulness of the construct for the discourses and study of teacher education and by offering revisions that may help to improve the construct's usefulness in future research.

    Rhythm in shoes: student perceptions of the integration of tap dance into choral music

    The purpose of this qualitative study was to collect descriptive data pertaining to students’ perceptions regarding the use of tap dance movement and its effect on the understanding of rhythms found in choral literature. This enquiry investigated the following questions: (a) What are the perceptions of high school students regarding the difficulty of tap dance movement? (b) What are the perceptions of high school students regarding the effectiveness of tap dance movement as a method toward promoting their rhythm accuracy when performing rhythms featured in choral music? (c) What are the perceptions of high school students regarding the effectiveness of integrating tap dance movement with the study of select rhythm patterns chosen from choral literature in their retention of the rhythms? Over a five-month period, high school choral ensemble members (N = 88) were taught twenty-five rhythm patterns excerpted from choral literature, integrating tap dance movement with the instruction. The results revealed that the difficulty level of the movement, the tempo at which it is executed, the changing of feet while performing the movement, and the amount of tap experience an individual possesses influence students’ perceptions regarding the degree of complexity of tap dance movement. Additionally, the data indicate the enjoyment of the movement, the demonstrations of the movement, the integration of music with the movement, the use of step names and counting, and the use of tap shoes are elements related to tap dance movement that students perceived to help promote their understanding of rhythms found in choral music. Moreover, the results pertaining to the students’ perception of how tap dance movement was an effective method of promoting their retention of rhythms found in choral music indicate a lack of agreement.
While there were singers who found the movement to benefit their ability to memorize the examined rhythms, a comparable number of students indicated that they were unable to remember the rhythms following the instruction. Lastly, the findings provide information regarding the specific types of movements that students found beneficial to their rhythmic comprehension, adding to the existing literature and useful for replication in future studies.