
    Multiple Media Interfaces for Music Therapy

    This article describes interfaces (and the supporting technological infrastructure) used to create audiovisual instruments for music therapy. Considering how the multidimensional nature of sound requires multidimensional input control, we propose a model to help designers manage the complex mapping between input devices and multiple media software. We also itemize a research agenda…
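    The mapping model itself is not given in the abstract, but a minimal sketch can illustrate the idea of routing a few input-device dimensions to several media parameters at once. All names, dimensions, and weights below are hypothetical, chosen only to show a many-to-many mapping layer:

```python
# Hypothetical sketch of a many-to-many mapping layer between input-device
# dimensions and media parameters; names and weights are illustrative only.
import numpy as np

# Rows: input dimensions (x, y, pressure); columns: media parameters
# (pitch, filter cutoff, hue). Each weight sets how strongly an input
# dimension drives a parameter.
MAPPING = np.array([
    [1.0, 0.0, 0.3],   # x        -> mostly pitch, a little hue
    [0.0, 1.0, 0.0],   # y        -> filter cutoff
    [0.2, 0.5, 1.0],   # pressure -> all three, mostly hue
])

def map_inputs(inputs):
    """Route normalized input values (0..1) to media parameters (0..1)."""
    out = np.clip(np.asarray(inputs) @ MAPPING, 0.0, 1.0)
    return dict(zip(["pitch", "cutoff", "hue"], out))

print(map_inputs([0.5, 0.8, 0.2]))
```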

    Computers in Support of Musical Expression


    Mandarin Singing Voice Synthesis Based on Harmonic Plus Noise Model and Singing Expression Analysis

    The purpose of this study is to investigate how humans interpret musical scores expressively, and then to design machines that sing like humans. We consider six factors that have a strong influence on the expression of human singing, related to the acoustic, phonetic, and musical features of a real singing signal. Given real singing voices recorded following the MIDI scores and lyrics, our analysis module can extract the expression parameters from the real singing signals semi-automatically. These expression parameters are used to control a singing voice synthesis (SVS) system for Mandarin Chinese, which is based on the harmonic plus noise model (HNM). The results of perceptual experiments show that integrating the expression factors into the SVS system yields a notable improvement in perceived naturalness, clarity, and expressiveness. By one-to-one mapping of the real singing signal and expression controls to the synthesizer, our SVS system can simulate the interpretation of a real singer with the timbre of a speaker. (Comment: 8 pages, technical report)
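    As background, the harmonic plus noise model represents a voiced sound as a sum of sinusoids at integer multiples of the fundamental frequency plus a stochastic (noise) residual. The following is a minimal illustrative sketch of that decomposition, not the paper's actual pipeline; all parameter values are invented:

```python
# Illustrative harmonic-plus-noise synthesis of a single voiced frame.
# Parameter values are made up for demonstration; the paper's analysis/
# synthesis pipeline is far more elaborate.
import numpy as np

def hnm_frame(f0, harmonic_amps, noise_level, sr=16000, dur=0.03):
    t = np.arange(int(sr * dur)) / sr
    # Harmonic part: sinusoids at integer multiples of the fundamental f0.
    harmonic = sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
                   for k, a in enumerate(harmonic_amps))
    # Noise part: white noise standing in for the stochastic residual.
    noise = noise_level * np.random.randn(len(t))
    return harmonic + noise

frame = hnm_frame(f0=220.0, harmonic_amps=[1.0, 0.5, 0.25], noise_level=0.05)
```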

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology for our analysis based on five dimensions:

    - Objective: What musical content is to be generated (e.g. melody, polyphony, accompaniment, or counterpoint)? For what destination and use: to be performed by humans (a musical score) or by a machine (an audio file)?
    - Representation: What concepts are to be manipulated (e.g. waveform, spectrogram, note, chord, meter, beat)? What format is to be used (e.g. MIDI, piano roll, or text)? How will the representation be encoded (e.g. scalar, one-hot, or many-hot)?
    - Architecture: What type(s) of deep neural network are to be used (e.g. feedforward network, recurrent network, autoencoder, or generative adversarial network)?
    - Challenge: What are the limitations and open challenges (e.g. variability, interactivity, and creativity)?
    - Strategy: How do we model and control the process of generation (e.g. single-step feedforward, iterative feedforward, sampling, or input manipulation)?

    For each dimension, we conduct a comparative analysis of various models and techniques and propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge, and strategy. The last section includes some discussion and prospects. (Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201…)
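    To make the representation dimension concrete, here is a small sketch (not taken from the survey itself) of two encodings it names: a one-hot vector for a single note and a binary piano roll for a short monophonic fragment. The pitch range and example notes are arbitrary:

```python
# Toy illustration of two symbolic-music encodings: one-hot note vectors
# and a piano-roll matrix. Pitch range and example notes are arbitrary.
import numpy as np

PITCHES = 128  # MIDI pitch range

def one_hot(pitch):
    """Encode a single MIDI pitch as a one-hot vector of length 128."""
    v = np.zeros(PITCHES, dtype=np.int8)
    v[pitch] = 1
    return v

def piano_roll(notes, steps):
    """notes: list of (pitch, start_step, end_step); returns a steps x 128
    binary matrix with a 1 wherever the pitch sounds at that time step."""
    roll = np.zeros((steps, PITCHES), dtype=np.int8)
    for pitch, start, end in notes:
        roll[start:end, pitch] = 1
    return roll

roll = piano_roll([(60, 0, 4), (64, 4, 8), (67, 8, 12)], steps=16)  # C, E, G
```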

    Sampling the past: a tactile approach to interactive musical instrument exhibits in the heritage sector

    In the last decade, the heritage sector has had to adapt to a shifting cultural landscape of public expectations and attitudes towards ownership and intellectual property. One way it has done this is to focus on each visitor's encounter and provide a sense of experiential authenticity. There is a clear desire by the public to engage with music collections in this way, and a sound museological rationale for providing such access, but the approach raises particular curatorial problems, specifically: how do we meaningfully balance access with the duty to preserve objects for future generations? This paper charts the development of one such project. Based at Fenton House in Hampstead, and running since 2008, the project seeks to digitally model the keyboard instruments in the Benton Fletcher Collection and provide a dedicated interactive exhibit that allows visitors to view all of the instruments in situ and then play them through a custom-built two-manual MIDI controller with a touch-screen interface. We discuss the modelling approach, which uses high-definition sampling, and highlight the strengths and weaknesses of the exhibit as it currently stands, with particular focus on its key shortcoming: at present, there is no way to effectively model the key feel of a historic keyboard instrument. This issue is of profound importance, since the feel of any instrument is fundamental to its character and shapes the way performers relate to it. The issue is further compounded if a single dedicated keyboard is to serve as the primary interface for several instrument models of different classes, each with its own characteristic feel. We conclude by proposing an outline solution to this problem, detailing early work on a real-time adaptive haptic keyboard interface that changes its action in response to sampled resistance curves, measured on a key-by-key basis from the original instruments.
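    The adaptive haptic mechanism is only outlined in the abstract, so the following is a speculative sketch of one plausible building block: storing a sampled force-versus-displacement (resistance) curve per key and interpolating it in real time to drive a haptic actuator. All data and names are hypothetical:

```python
# Hypothetical sketch of looking up a sampled resistance curve in real time:
# each key stores (displacement, force) pairs measured on the original
# instrument; the controller interpolates between them to command the
# haptic actuator at the current key depth.
import numpy as np

# Invented resistance curve for one key: key dip in mm vs. resisting force
# in newtons (the mid-travel bump mimics something like an escapement).
DEPTH_MM = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
FORCE_N  = np.array([0.0, 0.3, 0.5, 0.9, 0.6, 0.4])

def actuator_force(depth_mm):
    """Interpolate the measured curve at the current key depth."""
    return float(np.interp(depth_mm, DEPTH_MM, FORCE_N))

print(actuator_force(2.5))  # -> 0.7, halfway up the simulated bump
```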