
    Evaluation of Music Performance: Computerized Assessment Versus Human Judges.

    Ph.D. Thesis. University of Hawaiʻi at Mānoa 2018

    Functional Scaffolding for Musical Composition: A New Approach in Computer-Assisted Music Composition

    While it is important for systems intended to enhance musical creativity to define and explore musical ideas conceived by individual users, many limit musical freedom by focusing on maintaining musical structure, thereby impeding the user's freedom to explore his or her individual style. This dissertation presents a comprehensive body of work that introduces a new musical representation allowing users to explore a space of musical rules created from their own melodies. This representation, called functional scaffolding for musical composition (FSMC), exploits a simple yet powerful property of multipart compositions: the patterns of notes and rhythms in different instrumental parts of the same song are functionally related. That is, in principle, one part can be expressed as a function of another. Music in FSMC is accordingly represented as a functional relationship between an existing human composition, or scaffold, and an additional generated voice. This relationship is encoded by a type of artificial neural network called a compositional pattern producing network (CPPN). A human user without any musical expertise can then explore how these additional generated voices should relate to the scaffold through an interactive evolutionary process akin to animal breeding. The utility of this insight is validated by two implementations of FSMC, NEAT Drummer and MaestroGenesis, which respectively help users tailor drum patterns and complete multipart arrangements from as little as a single original monophonic track. The five major contributions of this work address the overarching hypothesis of this dissertation that functional relationships alone, rather than specialized music theory, are sufficient for generating plausible additional voices. First, to validate FSMC and determine whether plausible generated voices result from the human-composed scaffold or from intrinsic properties of the CPPN, drum patterns are created with NEAT Drummer to accompany several different polyphonic pieces. Extending the FSMC approach to generate pitched voices, the second contribution reinforces the importance of functional transformations through quality assessments indicating that some partially FSMC-generated pieces are indistinguishable from fully human-composed ones. While the third contribution focuses on constructing and exploring a space of plausible voices with MaestroGenesis, the fourth presents results from a two-year study in which students discuss their creative experience with the program. Finally, the fifth contribution is a plugin for MaestroGenesis called MaestroGenesis Voice (MG-V) that provides users with a more natural way to incorporate MaestroGenesis into their creative endeavors by allowing scaffold creation through the human voice. Together, the chapters in this dissertation constitute a comprehensive approach to assisted music generation, enabling creativity without the need for musical expertise.
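
    The core of FSMC, generating one voice as a function of another, can be illustrated with a small sketch. The snippet below is not the NEAT Drummer or MaestroGenesis code; a tiny fixed-weight network with periodic activations stands in for an evolved CPPN, and all names and values are illustrative.

```python
# Illustrative sketch of the FSMC idea: an accompanying voice is
# computed as a function of a human-composed scaffold. A small
# fixed-weight network with periodic activations stands in for an
# evolved CPPN; everything here is hypothetical, not the actual code.
import numpy as np

rng = np.random.default_rng(0)

# Random weights play the role of an evolved CPPN genome.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def cppn(time_frac, scaffold_pitch):
    """Map a scaffold event (time, pitch) to an accompaniment value in [-1, 1]."""
    x = np.array([time_frac, scaffold_pitch / 127.0])
    h = np.sin(x @ W1)                 # periodic activation, CPPN-style
    return float(np.tanh(h @ W2)[0])   # single accompaniment output

# Scaffold: a human-composed monophonic melody as MIDI pitches, one per beat.
scaffold = [60, 62, 64, 65, 67, 65, 64, 62]

# Generated voice: a functional transformation of the scaffold,
# mapped into a two-octave range above C3.
voice = [48 + int(round((cppn(i / len(scaffold), p) + 1) * 12))
         for i, p in enumerate(scaffold)]
print(voice)
```

    In the actual systems the network weights would be evolved interactively (via NEAT), with the user breeding the generated voices they prefer; fixed random weights are used here only to make the functional relationship concrete.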

    Robotic Musicianship - Musical Interactions Between Humans and Machines


    Jennifer Higdon's Oboe Concerto: the composition, transformation, and a performer's analysis

    This monograph presents a formal examination of Jennifer Higdon’s Oboe Concerto in its various forms. Jennifer Higdon has garnered international success, yet few in-depth studies of her music exist. In this document, personal accounts from commissioners illustrate the unusual commission of Oboe Concerto. Likewise, the composer, a conductor, and soloists for premiere performances highlight unique aspects of the concerto, particularly its unusual form, multiple versions, and the potential challenges in preparing the work. Higdon’s concerto showcases the lyrical capabilities of the oboe, with an emphasis on melody and sustained tone. The transformation of the concerto illustrates Higdon’s skill in self-promotion, as she is willing to adapt her works to meet new demands from commissioners and audiences alike. Oboe Concerto is a strong example of the composer’s distinctive compositional style, which is detailed in this monograph.

    The effects of computer-assisted keyboard technology and MIDI accompaniments on group piano students' performance accuracy and attitudes.

    This study investigated the effects of musical instrument digital interface (MIDI) accompaniment and computer-assisted instruction (CAI) technology on group piano students' performance accuracy and attitudes. Subjects (N = 29) in this quasi-experimental design were non-keyboard music major college students in four intact third semester piano classes. Two of the classes were assigned to a group that practiced with the Guide Mode on Yamaha Clavinova keyboards and MIDI accompaniment, while the other two classes were assigned to a group that practiced without the Guide Mode but with MIDI accompaniment. Subjects' performances of two piano compositions were first recorded as pretests. Afterwards each class practiced the same two compositions with their respective treatment for two weeks in class. Subjects then recorded the two compositions as posttests. Three judges evaluated the pretest and posttest recordings for accuracy in pitch and rhythm. A Likert-type questionnaire investigated subjects' attitudes toward practicing with the Guide Mode and MIDI accompaniment. The researcher compared the posttest scores to the pretest scores within subjects for significant differences in performance accuracy due to the treatment. Differences in pretest and posttest scores were also compared between the Guide Mode group and the MIDI-only group. Four outliers were identified as possibly skewing the data. When the outliers were removed, the group that practiced with the Guide Mode (n = 19) demonstrated significantly better improvement in total pitch errors in comparison to the control group (n = 10), p < .05. No significant difference in rhythmic errors emerged between groups. Within groups, participants made significant improvement in overall accuracy from pretests to posttests. Perceptions of MIDI accompaniments and the Guide Mode's effectiveness in helping students improve performance accuracy were generally positive. In open-ended responses, a majority of the participants from the Guide Mode group expressed that practicing with the Guide Mode was the most helpful part of the practice sessions. Students also reported that they made greater improvement when they practiced hands separately. Some subjects also stated that the use of MIDI accompaniments helped keep their rhythm steady. Other subjects believed that the use of technology had no effect on their performance. Recommendations from the results include using CAI such as the Guide Mode to help group piano students improve in pitch accuracy during the early stages of learning new repertoire. After students feel comfortable with the pitches, practicing with MIDI accompaniments but without the Guide Mode may assist in the development of rhythmic continuity. However, teachers should not assume that the technology is an automatic way of improving piano performance. More time to practice with the technology outside of the classroom setting may be needed to observe any longer term effects on students' performance.

    Automatic musical key detection

    In this thesis we propose a model for tonality estimation that can handle music from various musical traditions without requiring a thorough analysis of those traditions. The model rests on the assumption that most musical traditions use duration to maintain pitch salience. Proceeding from this assumption, we propose an algorithm for automatic key detection based on a distributional approach. The proposed method was evaluated on both symbolic and acoustic datasets.
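
    The abstract leaves the algorithm at the level of a distributional approach with duration as the salience cue. One plausible reading, sketched below under that assumption (and not taken from the thesis itself), is a duration-weighted pitch-class profile correlated against rotated reference key profiles.

```python
# A minimal sketch of one plausible distributional key finder along the
# lines the abstract suggests: pitch-class salience weighted by note
# duration, correlated against rotated major/minor reference profiles.
# The profiles and toy melody are illustrative; this is not the thesis code.
import numpy as np

# Krumhansl-Kessler key profiles (C major / C minor).
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def key_estimate(notes):
    """notes: list of (midi_pitch, duration_in_beats) pairs."""
    hist = np.zeros(12)
    for pitch, dur in notes:
        hist[pitch % 12] += dur          # duration as salience
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            r = np.corrcoef(hist, np.roll(profile, tonic))[0, 1]
            if best is None or r > best[0]:
                best = (r, tonic, mode)
    return best[1], best[2]

# Toy melody: a C major scale with a long final tonic.
melody = [(60, 1), (62, 1), (64, 1), (65, 1),
          (67, 1), (69, 1), (71, 1), (72, 4)]
print(key_estimate(melody))   # -> (0, 'major'), i.e. C major
```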

    Evaluating an analysis-by-synthesis model for Jazz improvisation

    This paper pursues two goals. First, we present a generative model for (monophonic) jazz improvisation whose main purpose is testing hypotheses on creative processes during jazz improvisation. It uses a hierarchical Markov model based on mid-level units and the Weimar Bebop Alphabet, with statistics taken from the Weimar Jazz Database. A further ingredient is chord-scale theory, used to select pitches. Second, as there are several issues with Turing-like evaluation processes for generative models of jazz improvisation, we conducted an exploratory online study to gain further insight while testing our algorithm in the context of a variety of human-generated solos by eminent masters, jazz students, and non-professionals in various performance renditions. Results show that jazz experts (64.4% accuracy), but not non-experts (41.7% accuracy), are able to distinguish the computer-generated solos amongst a set of real solos, though with a large margin of error. The type of rendition is crucial when assessing artificial jazz solos, because expressive and performative aspects (timbre, articulation, micro-timing, and band-soloist interaction) seem to be at least as important as the syntactical (tone) content. Furthermore, the level of expertise of the solo performer does matter, as solos by non-professional humans were on average rated worse than the algorithmic ones. Accordingly, we found indications that assessments of a solo's origin are partly driven by aesthetic judgments. We propose three possible strategies for establishing a reliable evaluation process that mitigates some of the inherent problems.
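
    As a rough illustration of the two-level generation described above (and not the actual Weimar Bebop Alphabet model), the sketch below chains abstract mid-level unit labels with a first-order Markov model and then realizes each unit from a chord scale. The alphabet, transition table, and scale are invented for illustration.

```python
# Toy two-level generator: a Markov chain over invented "mid-level unit"
# labels, each realized as pitches from a chord scale (chord-scale theory
# in miniature). Far simpler than the model described in the paper.
import random

random.seed(1)

# Level 1: first-order Markov chain over mid-level unit types.
TRANSITIONS = {
    "line":   {"line": 0.5, "lick": 0.3, "rhythm": 0.2},
    "lick":   {"line": 0.6, "lick": 0.2, "rhythm": 0.2},
    "rhythm": {"line": 0.7, "lick": 0.3},
}

# Level 2: chord-scale pitches (C mixolydian over a C7 chord).
C7_SCALE = [60, 62, 64, 65, 67, 69, 70]
UNIT_LENGTH = {"line": 8, "lick": 4, "rhythm": 3}

def next_unit(current):
    choices, weights = zip(*TRANSITIONS[current].items())
    return random.choices(choices, weights)[0]

def realize(unit):
    # Realize a unit as a short run of scale tones.
    return [random.choice(C7_SCALE) for _ in range(UNIT_LENGTH[unit])]

unit, solo = "line", []
for _ in range(4):                 # generate four mid-level units
    solo.extend(realize(unit))
    unit = next_unit(unit)
print(solo)
```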

    Measuring Expressive Music Performances: a Performance Science Model using Symbolic Approximation

    Music Performance Science (MPS), sometimes termed systematic musicology in Northern Europe, is concerned with designing, testing and applying quantitative measurements to music performances. It has applications in art musics, jazz and other genres. It is least concerned with aesthetic judgements or with ontological considerations of artworks that stand alone from their instantiations in performances. Musicians deliver expressive performances by manipulating multiple, simultaneous variables including, but not limited to: tempo, acceleration and deceleration, dynamics, rates of change of dynamic levels, intonation and articulation. Handling multivariate music datasets at scale therefore involves significant complexity. A critical issue in analyzing any type of large dataset is the growing likelihood of detecting meaningless relationships as more dimensions are included. One possible choice is to create algorithms that address both volume and complexity. Another, and the approach chosen here, is to apply techniques that reduce both the dimensionality and numerosity of the music datasets while assuring the statistical significance of results. This dissertation describes a flexible computational model, based on symbolic approximation of time series, that can extract time-related characteristics of music performances to generate performance fingerprints (dissimilarities from an ‘average performance’) to be used for comparative purposes. The model is applied to recordings of Arnold Schoenberg’s Phantasy for Violin with Piano Accompaniment, Opus 47 (1949), having initially been validated on Chopin Mazurkas. The results are subsequently used to test hypotheses about the evolution of performance styles of the Phantasy since its composition. It is hoped that further research will examine other works and types of music in order to improve this model and make it useful to other music researchers. In addition to its benefits for performance analysis, it is suggested that the model has clear applications at least in music fraud detection, Music Information Retrieval (MIR), and in pedagogical applications for music education.
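
    One plausible form of the symbolic approximation step, sketched here as an assumption rather than as the dissertation's actual pipeline, is a SAX-style reduction: z-normalize a performance time series, average it piecewise, and bin the result into a small alphabet.

```python
# A minimal SAX-style reduction of a performance tempo curve, shown as
# one plausible form of the symbolic approximation the abstract refers
# to (the dissertation's own parameters and pipeline may differ).
import numpy as np

def sax(series, n_segments=8, alphabet="abcd"):
    """Z-normalize, piecewise-average (PAA), then bin into symbols."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                 # z-normalize
    segments = np.array_split(x, n_segments)     # PAA step
    paa = np.array([seg.mean() for seg in segments])
    # Breakpoints for an equiprobable 4-letter alphabet under N(0, 1).
    breakpoints = np.array([-0.67, 0.0, 0.67])
    symbols = np.digitize(paa, breakpoints)
    return "".join(alphabet[i] for i in symbols)

# Toy tempo curve (beats per minute) for one performance.
tempo = [88, 90, 92, 95, 97, 96, 92, 87, 84, 82, 85, 90, 94, 96, 93, 88]
print(sax(tempo))   # prints an 8-symbol fingerprint string for this curve
```

    A performance fingerprint could then be the symbol string itself, with dissimilarity measured as a distance between a performance's string and that of an ‘average performance’.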

    Worldwide Infrastructure for Neuroevolution: A Modular Library to Turn Any Evolutionary Domain into an Online Interactive Platform

    Across many scientific disciplines, there has emerged an open opportunity to utilize the scale and reach of the Internet to collect scientific contributions from scientists and non-scientists alike. This process, called citizen science, has already shown great promise in the fields of biology and astronomy. Within the fields of artificial life (ALife) and evolutionary computation (EC), experiments in collaborative interactive evolution (CIE) have demonstrated the ability to collect thousands of experimental contributions from hundreds of users across the globe. However, such collaborative evolutionary systems can take nearly a year to build with a small team of researchers. This dissertation introduces a new developer framework enabling researchers to easily build fully persistent online collaborative experiments around almost any evolutionary domain, thereby reducing the time to create such systems to weeks for a single researcher. To add collaborative functionality to any potential domain, this framework, called Worldwide Infrastructure for Neuroevolution (WIN), exploits an important unifying principle among all evolutionary algorithms: regardless of the overall methods and parameters of the evolutionary experiment, every individual created has an explicit parent-child relationship, wherein one individual is considered the direct descendant of another. This principle alone is enough to capture and preserve the relationships and results of a wide variety of evolutionary experiments, while allowing multiple human users to contribute meaningfully. The WIN framework is first validated through two experimental domains, image evolution and a new two-dimensional virtual creature domain, Indirectly Encoded SodaRace (IESoR), which is shown to produce a visually diverse variety of ambulatory creatures. Finally, an Android application built with WIN, called filters, allows users to interactively evolve custom image effects to apply to personalized photographs, thereby introducing the first CIE application available for mobile devices. Together, these collaborative experiments and the new mobile application establish a comprehensive new platform for evolutionary computation that can change how researchers design and conduct citizen science online.
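
    The unifying principle described above, that every evolved individual records an explicit parent-child relationship, can be made concrete with a minimal sketch. The field names and storage below are hypothetical and are not the actual WIN API.

```python
# Minimal sketch of a persistent lineage store built on parent-child
# relationships, the unifying principle the abstract describes.
# Field names and storage are hypothetical, not the actual WIN API.
from dataclasses import dataclass, field
from typing import Dict, Optional
import uuid

@dataclass
class Individual:
    genome: dict
    parent_id: Optional[str] = None          # None for seed individuals
    user: str = "anonymous"                   # which contributor bred it
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

class Lineage:
    """Append-only store of parent-child relationships."""
    def __init__(self):
        self.individuals: Dict[str, Individual] = {}

    def add(self, ind: Individual) -> str:
        self.individuals[ind.id] = ind
        return ind.id

    def ancestry(self, ind_id: str):
        """Walk back from an individual to its seed ancestor."""
        chain = []
        while ind_id is not None:
            ind = self.individuals[ind_id]
            chain.append(ind)
            ind_id = ind.parent_id
        return chain

# Usage: one user branches from another user's published individual.
tree = Lineage()
seed = tree.add(Individual(genome={"weights": [0.1, 0.4]}, user="alice"))
child = tree.add(Individual(genome={"weights": [0.1, 0.5]},
                            parent_id=seed, user="bob"))
print([ind.user for ind in tree.ancestry(child)])   # ['bob', 'alice']
```

    Because the store is append-only and every record points to its parent, the full lineage of any contribution can be reconstructed regardless of which evolutionary algorithm produced it, which is what lets a framework of this kind wrap almost any evolutionary domain.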