
    Interactive Musical Partner: A System for Human/Computer Duo Improvisations

    This research is centered on the creation of a computer program that makes music with a human improviser. The Interactive Musical Partner (IMP) is designed for duo improvisations, with one human improviser and one instance of IMP, focusing on a freely improvised duo aesthetic. IMP has Musical Personality Settings (MPS) that can be set prior to performance; these MPS guide the way IMP responds to musical input from the human and govern the probability of particular outcomes from IMP’s creative algorithms. IMP uses audio feature extraction to listen to the human partner and, based on the current MPS, to react to or ignore the human’s musical input. This course of research presents a number of problems: parameters for the MPS must be defined and then mapped to extractable audio features, a system for musical decision-making and reaction/interaction (the action/interaction module) must be in place, and a synthesis module that allows for MPS control must be deployed. Designing a program intended to play with an improviser, and then improvising with that program, has caused me to assess every aspect of my practice as an improviser. Not only has this research expanded my understanding of the technologies involved and made me a better technologist, but striving to make the technology musical has made me examine every side of the music I make, making me a better improvising artist.
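    The abstract does not include code, but the relationship it describes between the MPS and IMP’s probabilistic reactions can be pictured with a minimal sketch. All names, feature keys, and numeric ranges below are assumptions for illustration, not the actual IMP implementation.

```python
import random

# Minimal sketch of the MPS idea: personality settings weight the probability
# that the system reacts to, or ignores, a frame of extracted audio features.
# Class and field names (MusicalPersonalitySettings, react, the feature keys)
# are hypothetical illustrations, not IMP's actual code.

class MusicalPersonalitySettings:
    def __init__(self, responsiveness=0.7, density=0.5, volatility=0.3):
        self.responsiveness = responsiveness  # chance of reacting at all
        self.density = density                # how many events to generate
        self.volatility = volatility          # how far output strays from input

def react(mps, features):
    """Decide on a response to one frame of extracted audio features."""
    if random.random() > mps.responsiveness:
        return None  # ignore the human's input this frame
    pitch = features.get("pitch_hz", 440.0)
    n_events = max(1, int(mps.density * 8))
    # Respond in a register near the detected pitch, blurred by volatility.
    return [pitch * (1.0 + mps.volatility * random.uniform(-0.5, 0.5))
            for _ in range(n_events)]

frame = {"pitch_hz": 261.6, "rms": 0.4, "onset": True}
print(react(MusicalPersonalitySettings(), frame))
```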

    Utilizing Computer Programming to Analyze Post-Tonal Music: A Segmentation and Contour Analysis of Twentieth-Century Music for Solo Flute

    Two concepts will be synthesized in this dissertation: 1) the creation of accessible computer applications for melodic segmentation and contour reduction, and 2) the application of segmentation and contour reduction to the analysis of twentieth-century post-tonal works for unaccompanied flute. Two analytical methodologies have been chosen: James Tenney and Larry Polansky’s Gestalt segmentation theory and Robert Schultz’s refinement of Robert Morris’s contour reduction algorithm. The investigation also utilizes Robert Schultz’s concept of diachronic-transformational analysis in conjunction with contour reduction. While both segmentation and contour reduction are invaluable analytical tools, they are meticulous and time-consuming processes. Computer implementation of these algorithmic procedures produces quick and accurate results while reducing analyst fatigue and human error. Microsoft Excel is used to complete melodic segmentation, and the Java programming language is used to create a contour reduction application. Each implementation greatly reduces the time needed to segment and analyze a melody. Computer programming is combined with pitch-class set analysis to produce informed and expressive musical interpretations.
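    For readers unfamiliar with contour reduction, the sketch below illustrates the general idea of pruning a contour down to its structural pitches. It is a deliberately simplified stand-in, not Morris’s algorithm or Schultz’s refinement (which flag maxima and minima separately and prune in alternating stages); the function name and the example contour are illustrative only.

```python
# Simplified illustration of contour reduction: repeatedly prune notes that
# are neither local maxima nor local minima until the contour stabilizes.
# This is an assumption-laden sketch, not Morris's or Schultz's algorithm.

def reduce_contour(contour):
    current = list(contour)
    while True:
        if len(current) <= 2:
            return current
        kept = [current[0]]
        for i in range(1, len(current) - 1):
            prev, here, nxt = current[i - 1], current[i], current[i + 1]
            # Keep local maxima and minima (plateaus included); drop passing tones.
            if here >= max(prev, nxt) or here <= min(prev, nxt):
                kept.append(here)
        kept.append(current[-1])
        if kept == current:
            return kept
        current = kept

# Contour values stand for relative pitch height, not exact pitches.
print(reduce_contour([1, 3, 4, 2, 0, 2, 5, 3]))  # -> [1, 4, 0, 5, 3]
```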

    Application of Intermediate Multi-Agent Systems to Integrated Algorithmic Composition and Expressive Performance of Music

    We investigate the properties of a new multi-agent system (MAS) for computer-aided composition called IPCS (pronounced “ipp-siss”), the Intermediate Performance Composition System, which generates expressive performance as part of its compositional process and produces emergent melodic structures through a novel multi-agent process. IPCS consists of a small-to-medium-sized collection of agents (2 to 16) in which each agent can perform monophonic tunes and learn monophonic tunes from other agents. Each agent has an affective state (an “artificial emotional state”) that affects how it performs music to other agents; e.g. a “happy” agent will perform “happier” music. Agent performance not only involves compositional changes to the music but also adds smaller changes based on expressive music performance algorithms for humanization. Every agent is initialized with a tune containing the same single note, and over the interaction period longer tunes are built through agent interaction. Agents will only learn tunes performed to them by other agents if the affective content of the tune is similar to their current affective state; learned tunes are concatenated to the end of their current tune. Each agent in the society thus learns its own growing tune during the interaction process. Agents develop “opinions” of other agents that perform to them, depending on how much the performing agent helps their tunes grow; these opinions affect who they interact with in the future. IPCS is not a mapping from multi-agent interaction onto musical features but actually uses music as the medium through which agents communicate emotions. Despite the lack of explicit melodic intelligence in IPCS, the system is shown to generate non-trivial melody pitch sequences as a result of emotional communication between agents. The melodies also have a hierarchical structure that emerges from the social interaction structure of the multi-agent system. The interactive humanizations produce micro-timing and loudness deviations in the melody that express its hierarchical generative structure without the need for the structural analysis software frequently used in computer music humanization.
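    The interaction loop described here lends itself to a small simulation sketch. The code below is a hedged approximation under strong simplifying assumptions: the affective state is reduced to a single scalar, the “expressive performance” is a crude transposition, and all names (Agent, perform, listen, tolerance) are hypothetical rather than drawn from IPCS.

```python
import random

# Hedged approximation of an IPCS-style interaction loop under simplifying
# assumptions: affect is a single scalar, "performance" is a crude
# transposition, and every name here is hypothetical rather than from IPCS.

class Agent:
    def __init__(self, name, affect, seed_note=60):
        self.name = name
        self.affect = affect        # scalar stand-in for the affective state
        self.tune = [seed_note]     # every agent starts with the same single note
        self.opinions = {}          # performer name -> how often it helped us grow

    def perform(self):
        # "Happier" agents nudge the tune upward; a stand-in for humanization.
        shift = 1 if self.affect > 0.5 else -1
        return [n + shift for n in self.tune], self.affect

    def listen(self, performer, tune, tune_affect, tolerance=0.3):
        # Learn only tunes whose affective content is close to our own state.
        if abs(tune_affect - self.affect) <= tolerance:
            self.tune.extend(tune)  # learned tunes are concatenated
            self.opinions[performer] = self.opinions.get(performer, 0) + 1

agents = [Agent(f"a{i}", random.random()) for i in range(4)]
for _ in range(20):
    performer, listener = random.sample(agents, 2)
    tune, affect = performer.perform()
    listener.listen(performer.name, tune, affect)

for a in agents:
    print(a.name, len(a.tune), a.opinions)
```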

    A Functional Taxonomy of Music Generation Systems

    Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. Despite the many breakthroughs, issues such as the musical tasks targeted by different machines and the degree to which they succeed remain open questions. We present a functional taxonomy for music generation systems with reference to existing systems. The taxonomy organizes systems according to the purposes for which they were designed. It also reveals the inter-relatedness amongst the systems. This design-centered approach contrasts with predominant methods-based surveys and facilitates the identification of grand challenges to set the stage for new breakthroughs.

    Electronics, music and computers

    Electronic and computer technology has had, and will continue to have, a marked effect on the field of music. Through the years scientists, engineers, and musicians have applied available technology to new musical instruments, innovative musical sound production, sound analysis, and musicology. At the University of Utah we have designed and are implementing a communication network involving an electronic organ and a small computer to provide a tool to be used in music performance, the learning of music theory, the investigation of music notation, the composition of music, the perception of music, and the printing of music.