51 research outputs found

    Re-Sonification of Objects, Events, and Environments

    Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds. Dissertation/Thesis, Ph.D. Electrical Engineering, 201
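
    The signal-subspace-constrained input estimation mentioned in this abstract can be sketched as an ordinary least-squares problem: if the output is y = h * x and the excitation x is assumed to lie in the span of a basis U, only the basis coefficients need to be estimated. The snippet below is a minimal illustration of that idea under these assumptions; the function name, the toy resonator and the raised-cosine basis are hypothetical and are not taken from the dissertation.

```python
# Hedged sketch (not the dissertation's algorithm): estimating a linear system's
# input when the input is constrained to a known signal subspace.
# Assumed setup: y = h * x + noise, with the excitation x restricted to span(U).
import numpy as np
from scipy.linalg import toeplitz

def estimate_subspace_input(y, h, U):
    """Least-squares excitation estimate x = U @ c, where c minimises
    ||y - H (U c)|| and H is the convolution matrix of impulse response h."""
    n = U.shape[0]                                  # excitation length
    col = np.zeros(len(y))                          # first column of the convolution matrix
    col[:min(len(h), len(y))] = h[:len(y)]
    H = toeplitz(col, np.zeros(n))                  # H[i, j] = h[i - j]
    c, *_ = np.linalg.lstsq(H @ U, y, rcond=None)   # solve for subspace coefficients
    return U @ c

# Toy usage: a decaying 440 Hz mode excited by a short, smooth (raised-cosine) strike
fs, n = 8000, 64
t = np.arange(2000)
h = np.exp(-t / 300.0) * np.sin(2 * np.pi * 440 * t / fs)
U = np.stack([np.hanning(n) * np.cos(np.pi * k * np.arange(n) / n) for k in range(4)], axis=1)
x_true = U @ np.array([1.0, 0.3, -0.2, 0.05])
y = np.convolve(h, x_true)[:2000] + 0.01 * np.random.randn(2000)
x_hat = estimate_subspace_input(y, h, U)            # estimated percussive excitation
```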

    Convolutional Methods for Music Analysis

    Enmeshed 3

    Enmeshed 3, for cello and live electronics, is the third in a series of works in which a solo instrument becomes ‘enmeshed’ in multiple layers of transformations derived from the live performance. The works are shaped and structured by the varying relationships between these layers and by the ‘distances’, in pitch, time delay, timbre, texture and space, between the original acoustic performance and the various transformations. At certain points in the work these almost converge, whilst at other times large distances open up, with the different layers in a wild counterpoint. All the sounds in the work derive from live transformation of the soloist's performance. The composer’s own granular synthesis algorithms play a significant role in these transformations. Multichannel spatialisation also plays an important part in spatial positioning and movement, in the creation of different virtual spatial environments and in the definition of different layers. The work can be performed with anywhere between 8 and 24 channels. Enmeshed 3 is in five contrasting but inter-related sections centred around a long, slow, meditative central passage. It was written for Madeleine Shapiro, who premiered it at the New York City Electroacoustic Music Festival in April 2013.
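
    As a rough illustration of the kind of live granular transformation described above, the sketch below cuts Hann-windowed grains from a captured buffer and overlap-adds them at random positions. It is a generic granular-synthesis sketch under assumed parameter names, not the composer's own algorithms.

```python
# Minimal granular-synthesis sketch (not the composer's algorithms): grains are cut
# from a captured buffer, Hann-windowed, scattered in time and overlap-added.
import numpy as np

def granulate(buffer, sr=44100, grain_ms=60, density=200, out_seconds=5.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)                        # smooth grain envelope
    out = np.zeros(int(sr * out_seconds))
    for _ in range(int(density * out_seconds)):
        src = rng.integers(0, len(buffer) - grain_len)    # random read position
        dst = rng.integers(0, len(out) - grain_len)       # random write position
        out[dst:dst + grain_len] += window * buffer[src:src + grain_len]
    return out / (np.abs(out).max() + 1e-12)              # normalise the mixed layer

# Toy usage: granulate one second of a 220 Hz tone standing in for the live cello input
sr = 44100
cello = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
layer = granulate(cello, sr)
```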

    Measuring Expressive Music Performances: a Performance Science Model using Symbolic Approximation

    Music Performance Science (MPS), sometimes termed systematic musicology in Northern Europe, is concerned with designing, testing and applying quantitative measurements to music performances. It has applications in art musics, jazz and other genres. It is least concerned with aesthetic judgements or with ontological considerations of artworks that stand alone from their instantiations in performances. Musicians deliver expressive performances by manipulating multiple, simultaneous variables including, but not limited to: tempo, acceleration and deceleration, dynamics, rates of change of dynamic levels, intonation and articulation. There are significant complexities in handling multivariate music datasets at scale. A critical issue in analyzing any type of large dataset is the increasing likelihood of detecting meaningless relationships as more dimensions are included. One possible choice is to create algorithms that address both volume and complexity. Another, and the approach chosen here, is to apply techniques that reduce both the dimensionality and numerosity of the music datasets while assuring the statistical significance of results. This dissertation describes a flexible computational model, based on symbolic approximation of time series, that can extract time-related characteristics of music performances to generate performance fingerprints (dissimilarities from an ‘average performance’) to be used for comparative purposes. The model is applied to recordings of Arnold Schoenberg’s Phantasy for Violin with Piano Accompaniment, Opus 47 (1949), having initially been validated on Chopin Mazurkas. The results are subsequently used to test hypotheses about evolution in performance styles of the Phantasy since its composition. It is hoped that further research will examine other works and types of music in order to improve this model and make it useful to other music researchers. In addition to its benefits for performance analysis, it is suggested that the model has clear applications in music fraud detection, Music Information Retrieval (MIR) and pedagogical settings in music education.
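
    A minimal sketch of the symbolic-approximation step, assuming a SAX-style pipeline (z-normalisation, piecewise aggregate approximation, Gaussian breakpoints) applied to a tempo curve, with a per-segment dissimilarity from an ‘average performance’. The function names, segment counts and toy data are illustrative assumptions, not the dissertation's exact model.

```python
# Hedged SAX-style symbolic approximation sketch: a tempo curve is z-normalised,
# reduced by piecewise aggregate approximation (PAA), mapped to symbols, and
# compared segment-by-segment against an 'average performance'.
import numpy as np

BREAKPOINTS = {3: [-0.43, 0.43], 4: [-0.67, 0.0, 0.67], 5: [-0.84, -0.25, 0.25, 0.84]}

def sax(series, n_segments=16, alphabet_size=4):
    x = (series - series.mean()) / (series.std() + 1e-12)   # z-normalise
    x = x[: n_segments * (len(x) // n_segments)]             # trim to a whole number of segments
    paa = x.reshape(n_segments, -1).mean(axis=1)             # PAA reduction
    return np.digitize(paa, BREAKPOINTS[alphabet_size])      # symbols 0 .. alphabet_size-1

def fingerprint(performance, average, n_segments=16, alphabet_size=4):
    """Per-segment symbolic distance of one performance from the average."""
    a = sax(performance, n_segments, alphabet_size)
    b = sax(average, n_segments, alphabet_size)
    return np.abs(a - b)

# Toy usage: bar-level tempo curves (BPM) for one recording vs. the corpus mean
rng = np.random.default_rng(0)
average_tempo = 72 + 6 * np.sin(np.linspace(0, 4 * np.pi, 128))
recording = average_tempo + rng.normal(0, 3, 128) + np.linspace(0, 8, 128)  # drifts faster
print(fingerprint(recording, average_tempo))
```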

    The Structural and Aesthetic Capacity of Sonic Matter: Remarks on Sonic Dramaturgy

    This research study focuses on my compositional practice and its related creative strategies. It describes a series of ideas relevant to the structural and aesthetic capacity of sonic matter and the notion of sonic dramaturgy. Its thread of enquiry is based upon transformational logic and the inner nature of sound. The ontology of sound matter, its intrinsic nature and its perceptual and cognitive effects, is of primary relevance. This can be contrasted with a permutational approach (the ars combinatoria) that has prevailed in Western music after the Renaissance. My conceptual compass operates within four boundaries: 1. the intrinsic logic of the sound-material; 2. form as organisation immanent to sonic matter; 3. form as Sonic Dramaturgy; and 4. the relevance of listeners’ perceptual and cognitive capacities. An empirical and experiential attitude follows naturally from the above. My aim is to examine in practice the encounter and the creative friction that occur between sound-matter and the human mind; as a result, a priori schemas have been avoided.

    Interfacing Jazz: A Study in Computer-Mediated Jazz Music Creation And Performance

    This dissertation focuses on the study and development of computer-mediated interfaces and algorithms for music performance and creation. It is mainly centered on traditional Jazz music accompaniment and explores meta-control over musical events to potentiate the rich experience of playing jazz by musicians and non-musicians alike, both individually and collectively. It aims to complement existing research on automatic generation of jazz music and new interfaces for musical expression, by presenting a group of specially designed algorithms and control interfaces that implement intelligent, musically informed processes to automatically produce sophisticated and stylistically correct musical events.
These algorithms and control interfaces are designed to take simplified, intuitive input from the user, and to coherently manage group playing by establishing integrated control over global common parameters. Using these algorithms, two proposals for different applications are presented in order to illustrate the benefits and potential of this meta-control approach for extending existing paradigms for musical applications, as well as for creating new ones. These proposals focus on two main areas where computer-mediated music can benefit from this approach, namely musical performance and creation, both of which can also be viewed from an educational perspective. A core framework, implemented in the Max programming environment, integrates all the functionalities of the instrument algorithms and control strategies, as well as global control, synchronization and communication between all the components. This platform acts as a base from which different applications can be created. For this dissertation, two main application concepts were developed. The first, PocketBand, takes a single-user, one-man-band approach, in which one interface allows a single user to play up to three instruments. This prototype application, for a multi-touch tablet, served as the test bed for several experiments on user interface and playability issues, which helped define and improve the mediated-interface concept and the instrument algorithms. The second prototype aims at creating a collective experience. It is a multi-user installation for a multi-touch table, called MyJazzBand, which allows up to four users to play together as members of a virtual jazz band. Both applications allow users to experience and effectively participate as jazz band musicians, whether they are musically trained or not. The applications can be used for educational purposes, whether as a real-time accompaniment system for any jazz instrument practitioner or singer, as a source of information on harmonic procedures, or as a practical tool for creating quick arrangement drafts or music lesson content. I will also demonstrate that this approach reflects a growing trend in commercial music software, which has already begun to explore and implement mediated interfaces and intelligent music algorithms.
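
    To make the idea of sophisticated events generated from simplified input concrete, the sketch below produces a plausible walking-bass line from chord symbols alone. It is a generic, heavily simplified illustration of a musically informed accompaniment process; the chord table and voice-leading rule are assumptions, not the algorithms used in PocketBand or MyJazzBand.

```python
# Hedged illustration (not the thesis's actual algorithms): a walking-bass line
# generated from chord symbols, one MIDI note per beat.
import random

CHORD_TONES = {                      # semitone offsets from the chord root
    "maj7": [0, 4, 7, 11], "m7": [0, 3, 7, 10], "7": [0, 4, 7, 10],
}
NOTE_TO_MIDI = {"C": 48, "D": 50, "E": 52, "F": 53, "G": 55, "A": 57, "B": 59}

def walking_bass(progression, beats_per_chord=4):
    """Root on beat 1, chord tones in between, then a chromatic approach note
    leading into the next chord's root."""
    line = []
    for i, (root, quality) in enumerate(progression):
        base = NOTE_TO_MIDI[root]
        tones = [base + t for t in CHORD_TONES[quality]]
        bar = [tones[0]]                                          # beat 1: the root
        bar += random.sample(tones[1:], beats_per_chord - 2)      # middle beats: chord tones
        next_root = NOTE_TO_MIDI[progression[(i + 1) % len(progression)][0]]
        bar.append(next_root - 1 if next_root - 1 not in bar else next_root + 1)
        line.extend(bar)
    return line

# Toy usage: a ii-V-I in C major
print(walking_bass([("D", "m7"), ("G", "7"), ("C", "maj7")]))
```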

    Singing voice resynthesis using concatenative-based techniques

    Dissertation submitted to the Faculdade de Engenharia da Universidade do Porto in partial fulfilment of the requirements for the degree of Doctor in Engenharia Informática. Singing has an important role in our lives, and although synthesizers have been trying to replicate every musical instrument for decades, it was only during the last nine years that commercial singing synthesizers started to appear, making it possible to merge music and text, i.e., to sing. These solutions may produce realistic results in some situations, but they require time-consuming processes and experienced users. The goal of this research work is to develop, create or adapt techniques that allow the resynthesis of the singing voice, i.e., that allow the user to directly control a singing voice synthesizer using his or her own voice. The synthesizer should replicate, as closely as possible, the same melody, the same phonetic sequence and the same musical performance. Initially, some work was developed on resynthesizing piano recordings with evolutionary approaches, using Genetic Algorithms, in which a population of individuals (candidate solutions), each representing a sequence of music notes, evolves over time trying to match an original audio stream. Later, the focus returned to the singing voice, exploring techniques such as Hidden Markov Models and Self-Organizing Map neural networks, among others. Finally, a Concatenative Unit Selection approach was chosen as the core of a singing voice resynthesis system. By extracting energy, pitch and phonetic information (MFCC, LPC), and using it within a phonetic-similarity Viterbi-based Unit Selection system, a sequence of frames from an internal sound library is chosen to replicate the original audio performance. Although audio artifacts still exist, preventing its use in professional applications, the concept of a new audio tool was created that presents high potential for future work, not only for the singing voice but also in other musical or speech domains. This dissertation had the kind support of FCT (Portuguese Foundation for Science and Technology, an agency of the Portuguese Ministry for Science, Technology and Higher Education) under grant SFRH/BD/30300/2006, and has been articulated with research project PTDC/SAU-BEB/104995/2008 (Assistive Real-Time Technology in Singing), whose objectives include the development of interactive technologies to help the teaching and learning of singing.
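
    The Viterbi-based unit selection described above can be sketched as dynamic programming over a target cost (a distance between input and library feature vectors, e.g. MFCCs) plus a concatenation cost (a penalty for jumping between non-adjacent library frames). The code below is a minimal sketch under those assumptions; the function name, cost definitions and toy data are illustrative rather than the thesis's implementation.

```python
# Hedged sketch of Viterbi-based unit selection over a frame library: each input
# frame is matched to a library frame by target cost + concatenation cost.
import numpy as np

def select_units(input_feats, library_feats, jump_penalty=1.0):
    """Return, for each input frame, the index of the chosen library frame."""
    n_in, n_lib = len(input_feats), len(library_feats)
    # Target cost: Euclidean distance between feature vectors
    target = np.linalg.norm(input_feats[:, None, :] - library_feats[None, :, :], axis=2)
    # Concatenation cost: free to move to the next library frame, fixed penalty otherwise
    concat = np.full((n_lib, n_lib), jump_penalty)
    concat[np.arange(n_lib - 1), np.arange(1, n_lib)] = 0.0
    cost = target[0].copy()
    backptr = np.zeros((n_in, n_lib), dtype=int)
    for t in range(1, n_in):                        # dynamic-programming forward pass
        total = cost[:, None] + concat              # cheapest way to reach each library frame
        backptr[t] = total.argmin(axis=0)
        cost = total.min(axis=0) + target[t]
    path = [int(cost.argmin())]                     # backtrace the cheapest path
    for t in range(n_in - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return path[::-1]

# Toy usage with random 13-dimensional (MFCC-like) frames
rng = np.random.default_rng(1)
lib = rng.normal(size=(200, 13))
inp = lib[50:70] + rng.normal(0, 0.1, size=(20, 13))   # input resembling library frames 50..69
print(select_units(inp, lib))
```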