
    HIEMPA: Hybrid Instruments from Electroacoustic Manipulation and Models of Pūtorino and Aquascape

    The HIEMPA project brought together a team with technical, artistic, environmental and cultural expertise to pursue an artistic outcome that extends the New Zealand sonic art tradition. The work involved collecting audio samples from the aquascape of the Ruakuri Caves and Nature Reserve in Waitomo, South Waikato, New Zealand, and samples of a variety of pūtorino, a New Zealand Māori wind instrument. Following a machine learning analysis of this audio material and an analysis of the performance material, hybrid digital instruments were built and mapped to suitable hardware triggers. The new instruments are playable in realtime, alongside the electroacoustic manipulation of pūtorino performances. The project takes into account the environmental and cultural significance of the source material, and the results will be released as a set of compositions. This paper discusses the background research and process of the project.

    Reactive Statistical Mapping: Towards the Sketching of Performative Control with Data

    This paper presents the results of our participation in the ninth eNTERFACE workshop on multimodal user interfaces. Our goal for this workshop was to bring technologies currently used in speech recognition and synthesis to a new level, making them the core of a new HMM-based mapping system. We investigated the idea of statistical mapping, more precisely how to use Gaussian Mixture Models and Hidden Markov Models for realtime, reactive generation of new trajectories from input labels and for realtime regression in a continuous-to-continuous use case. As a result, we developed several proofs of concept, including an incremental speech synthesiser, software for exploring stylistic spaces for gait and facial motion in realtime, a reactive audiovisual laughter system, and a prototype demonstrating the realtime reconstruction of lower-body gait motion strictly from upper-body motion, with conservation of the stylistic properties. This project has been an opportunity to formalise HMM-based mapping, integrate several of these innovations into the Mage library, and explore the development of a realtime gesture recognition tool.

    A study on methods for improving the expression of 3DCG characters and their real-time manipulation

    Waseda University degree record number: Shin 8176 (Waseda University)

    A novel continuous pitch electronic wind instrument controller

    We present a design for an electronic continuous-pitch wind controller for musical performance. It uses a combination of linear position, magnetic reed, and air pressure sensors to generate three fully continuous control dimensions. Each control dimension is encoded and transmitted using the industry-standard MIDI protocol, allowing the instrument to interface with a wide variety of synthesizers and to control different parameters of the synthesis algorithm in real time, for a degree of expressiveness not possible with existing electronic wind instrument controllers. The first part of the thesis provides a justification for the design of a novel instrument and presents some of the theory behind pitch representation, encoding, and transmission in digital systems. The remainder presents the particular design and explains the workings of its various subsystems.
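    One standard way to carry fully continuous pitch over MIDI is to split the fractional note value into a note number plus a 14-bit pitch-bend offset. The sketch below shows that encoding; the ±2-semitone bend range is a common synthesizer default assumed here for illustration, not a value taken from the thesis.

```python
import math

def pitch_to_midi(freq_hz, bend_range_semitones=2.0):
    """Encode a continuous pitch (Hz) as a MIDI note number plus the two
    7-bit data bytes of a 14-bit pitch-bend message (centre = 8192)."""
    semitones = 69 + 12 * math.log2(freq_hz / 440.0)   # fractional MIDI note
    note = int(round(semitones))
    bend = (semitones - note) / bend_range_semitones   # fraction of bend range
    bend14 = max(0, min(16383, int(round(8192 + bend * 8192))))
    lsb, msb = bend14 & 0x7F, (bend14 >> 7) & 0x7F     # split into 7-bit bytes
    return note, msb, lsb

print(pitch_to_midi(440.0))  # A4 exactly on-pitch -> (69, 64, 0)
```

    The receiving synthesizer must be configured with the same bend range, otherwise the reconstructed pitch will be scaled incorrectly.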

    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, 28–31 May 2019, and was organized by the Application of Information and Communication Technologies (ATIC) research group of the University of Malaga (UMA). The associated SMC 2019 Summer School took place 25–28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest covered a wide selection of areas related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, and more.

    Chorus Digitalis: polyphonic gestural singing

    Chorus Digitalis is a choir of gesture-controlled digital singers. It is based on Cantor Digitalis, a gesture-controlled singing voice synthesizer, and the Méta-Mallette, an environment designed for collective electronic music and video performances. Cantor Digitalis is an improved formant synthesizer that uses the RT-CALM voice source model and source-filter interaction mechanisms. Chorus Digitalis is the result of integrating this voice synthesis into the Méta-Mallette environment. Each virtual voice is controlled by both a graphic tablet and a joystick. Polyphonic singing performances of Chorus Digitalis with four players will be given at the conference. The Méta-Mallette and Cantor Digitalis are implemented in Max/MSP.
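    As a rough illustration of the source-filter idea behind a formant synthesizer, the sketch below filters a crude impulse-train glottal source through a cascade of two-pole resonators. This is a generic textbook construction, not the RT-CALM source model or Cantor Digitalis itself; the formant frequencies and bandwidths are illustrative vowel-like values.

```python
import numpy as np
from scipy.signal import lfilter

def formant_synth(f0, formants, bandwidths, dur=0.5, sr=16000):
    """Minimal source-filter sketch: an impulse-train source at pitch f0 (Hz)
    filtered by a cascade of two-pole resonators, one per formant."""
    n = int(dur * sr)
    src = np.zeros(n)
    src[::int(sr // f0)] = 1.0                  # crude glottal pulse train
    out = src
    for f, bw in zip(formants, bandwidths):     # cascade of formant resonators
        r = np.exp(-np.pi * bw / sr)            # pole radius from bandwidth
        theta = 2 * np.pi * f / sr              # pole angle from frequency
        a = [1.0, -2.0 * r * np.cos(theta), r * r]
        out = lfilter([1.0 - r], a, out)
    return out / (np.abs(out).max() + 1e-12)    # normalize to unit peak

# Illustrative /a/-like formants; write `y` to a WAV file to listen.
y = formant_synth(120, formants=[700, 1220, 2600], bandwidths=[80, 90, 120])
```

    Gesture control in a system like this amounts to mapping tablet and joystick coordinates onto f0, formant frequencies, and source parameters in realtime.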

    Audio source separation for music in low-latency and high-latency scenarios

    This thesis proposes specific methods to address the limitations of current music source separation methods in low-latency and high-latency scenarios. First, we focus on methods with low computational cost and low latency. We propose the use of Tikhonov regularization as a method for spectrum decomposition in the low-latency context. We compare it to existing techniques in pitch estimation and tracking tasks, crucial steps in many separation methods. We then use the proposed spectrum decomposition method in low-latency separation tasks targeting singing voice, bass and drums. Second, we propose several high-latency methods that improve the separation of singing voice by modeling components that are often not accounted for, such as breathiness and consonants. Finally, we explore using temporal correlations and human annotations to enhance the separation of drums and complex polyphonic music signals.
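    The appeal of Tikhonov regularization for low-latency spectrum decomposition is its closed form: with basis spectra as the columns of B and an observed frame s, the activations are a = (BᵀB + λI)⁻¹Bᵀs, with no iterative optimization per frame. A minimal sketch follows; the toy basis and the value of λ are chosen for illustration and do not reproduce the thesis's actual dictionaries or constraints.

```python
import numpy as np

def tikhonov_decompose(s, B, lam=0.1):
    """Decompose a spectrum frame s onto basis spectra B (columns) via
    Tikhonov-regularized least squares: a = (B^T B + lam*I)^-1 B^T s.
    Closed form, hence suitable for frame-by-frame, low-latency use."""
    n = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(n), B.T @ s)

# Toy example: two basis spectra over three frequency bins.
B = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.0, 0.8]])
s = B @ np.array([2.0, 1.0])        # frame built from known activations
a = tikhonov_decompose(s, B, lam=1e-3)
print(a)                             # approximately [2, 1]
```

    Unlike NMF-style decompositions, this solution is not constrained to be non-negative, which is part of the trade-off between speed and interpretability in the low-latency setting.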