
    A Database on Musicians’ Movements During Musical Performances

    The movements of 20 musicians playing 11 different musical instruments, including all standard orchestral instruments, were captured during solo performances by means of a motion-capture system under concert-like conditions. DFG, FOR 1557, Simulation and Evaluation of Acoustical Environments (SEACEN).

    Sensing and mapping for interactive performance

    This paper describes a trans-domain mapping (TDM) framework for translating meaningful activities from one creative domain onto another. The multi-disciplinary framework is designed to facilitate an intuitive and non-intrusive interactive multimedia performance interface that offers users or performers real-time control of multimedia events using their physical movements. It is intended to be a highly dynamic real-time performance tool, sensing and tracking activities and changes in order to provide interactive multimedia performances. Starting from a straightforward definition of the TDM framework, this paper reports several implementations and multi-disciplinary collaborative projects using the proposed framework, including a motion- and colour-sensitive system, a sensor-based system for triggering musical events, and a distributed multimedia server for audio mapping of a real-time face tracker, and discusses different aspects of mapping strategies in their context. Plausible future directions, developments and explorations with the proposed framework, including stage augmentation and virtual and augmented reality, which involve sensing and mapping of physical and non-physical changes onto multimedia control events, are also discussed.
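
    As a rough illustration of the mapping idea described above (a minimal sketch under assumed conventions, not the paper's TDM implementation; the module name, thresholds and the NoteEvent type are hypothetical), the following code maps a stream of motion-intensity readings onto note-trigger events:

# trans_domain_mapping_sketch.py -- illustrative only, not the TDM framework itself.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class NoteEvent:
    pitch: int      # MIDI-style pitch number
    velocity: int   # loudness derived from movement intensity

def map_motion_to_notes(motion_energy: Iterable[float],
                        threshold: float = 0.5,
                        base_pitch: int = 60) -> List[NoteEvent]:
    """Trigger a note whenever motion energy crosses the threshold;
    more intense movement yields a higher pitch and a louder note."""
    events = []
    for energy in motion_energy:
        if energy >= threshold:
            pitch = base_pitch + int((energy - threshold) * 24)
            velocity = min(127, int(energy * 127))
            events.append(NoteEvent(pitch=pitch, velocity=velocity))
    return events

if __name__ == "__main__":
    # Simulated sensor frames (0.0 = still, 1.0 = maximal movement).
    print(map_motion_to_notes([0.1, 0.6, 0.9, 0.3, 0.75]))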

    Real-Time Audio-to-Score Alignment of Music Performances Containing Errors and Arbitrary Repeats and Skips

    This paper discusses real-time alignment of audio signals of a music performance to the corresponding score (a.k.a. score following) that can handle tempo changes, errors and arbitrary repeats and/or skips (repeats/skips) in performances. This type of score following is particularly useful in automatic accompaniment for practices and rehearsals, where errors and repeats/skips are often made. Simple extensions of algorithms previously proposed in the literature are not applicable in these situations for scores of practical length because of their large computational complexity. To cope with this problem, we present two hidden Markov models of monophonic performance with errors and arbitrary repeats/skips, and derive efficient score-following algorithms under the assumption that the prior probability distributions of score positions before and after repeats/skips are independent of each other. We confirmed real-time operation of the algorithms on a modern laptop with music scores of practical length (around 10000 notes), and their ability to recover tracking of the input performance within 0.7 s on average after repeats/skips in clarinet performance data. Further improvements and an extension to polyphonic signals are also discussed. Comment: 12 pages, 8 figures; version accepted in IEEE/ACM Transactions on Audio, Speech, and Language Processing.
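
    As a generic illustration of hidden-Markov-model score following that tolerates repeats/skips (a minimal sketch only, not the paper's two models or its efficient algorithms; all probabilities are made-up defaults), the forward-filtering loop below tracks the most likely score position while allowing jumps to arbitrary positions:

# hmm_score_following_sketch.py -- generic illustration, not the paper's algorithm.
# One hidden state per score note; transitions allow staying, advancing by one,
# or jumping uniformly to any position (a crude model of repeats/skips).
import numpy as np

def follow_score(score_pitches, performed_pitches,
                 p_advance=0.8, p_stay=0.1, p_jump=0.1, p_error=0.05):
    score = np.asarray(score_pitches)
    n = len(score)
    belief = np.full(n, 1.0 / n)               # uniform prior over positions
    estimates = []
    for pitch in performed_pitches:
        advanced = np.roll(belief, 1)          # probability mass moved forward
        advanced[0] = 0.0
        predicted = p_stay * belief + p_advance * advanced + p_jump / n
        # Observation model: the performed pitch matches the score note unless
        # an error occurred, in which case other pitches are roughly equally likely.
        likelihood = np.where(score == pitch, 1.0 - p_error, p_error / 12.0)
        belief = predicted * likelihood
        belief /= belief.sum()
        estimates.append(int(np.argmax(belief)))   # most likely score position
    return estimates

if __name__ == "__main__":
    score = [60, 62, 64, 65, 67, 69, 71, 72]
    performance = [60, 62, 64, 65, 60, 62, 64]    # jumps back after the 4th note
    print(follow_score(score, performance))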

    RoboJam: A Musical Mixture Density Network for Collaborative Touchscreen Interaction

    RoboJam is a machine-learning system for generating music that assists users of a touchscreen music app by performing responses to their short improvisations. This system uses a recurrent artificial neural network to generate sequences of touchscreen interactions and absolute timings, rather than high-level musical notes. To accomplish this, RoboJam's network uses a mixture density layer to predict appropriate touch interaction locations in space and time. In this paper, we describe the design and implementation of RoboJam's network and how it has been integrated into a touchscreen music app. A preliminary evaluation analyses the system in terms of training, musical generation and user interaction.
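
    As an illustration of the mixture-density idea (a minimal sketch with untrained random weights, not RoboJam's actual recurrent network; all names and sizes are hypothetical), the function below maps a hidden feature vector to a Gaussian mixture over (x, y, dt) and samples the next touch:

# mdn_sketch.py -- illustrative only; the recurrent part of the model is omitted.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mdn_sample(hidden, n_mixtures=5, out_dim=3):
    """Map a hidden state to mixture parameters over (x, y, dt) and draw one sample.

    hidden     : 1-D feature vector from some upstream (e.g. recurrent) layer.
    n_mixtures : number of Gaussian components in the mixture.
    out_dim    : x position, y position and time offset of the next touch.
    """
    h = len(hidden)
    # Untrained random projections stand in for learned output-layer weights.
    w_pi = rng.normal(size=(h, n_mixtures))
    w_mu = rng.normal(size=(h, n_mixtures * out_dim))
    w_ls = rng.normal(size=(h, n_mixtures * out_dim))

    pi = softmax(hidden @ w_pi)                                  # mixture weights
    mu = (hidden @ w_mu).reshape(n_mixtures, out_dim)            # component means
    sigma = np.exp(hidden @ w_ls).reshape(n_mixtures, out_dim)   # positive std devs

    k = rng.choice(n_mixtures, p=pi)          # pick a component...
    return rng.normal(mu[k], sigma[k])        # ...and sample (x, y, dt) from it

if __name__ == "__main__":
    print(mdn_sample(rng.normal(size=32)))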

    Sound Generation by a Turbulent Flow in Musical Instruments - Multiphysics Simulation Approach -

    Total computational costs of scientific simulations are compared between direct numerical simulations (DNS) and multiphysics simulations (MPS) for sound generation in musical instruments. In order to produce acoustic sound by a turbulent flow in a simple recorder-like instrument, compressible fluid dynamic calculations at a low Mach number are required around the edges and the resonator of the instrument in DNS, whereas MPS uses incompressible fluid dynamic calculations coupled with the dynamics of sound propagation based on Lighthill's acoustic analogy. These strategies are evaluated not only from the viewpoint of computational performance but also from a theoretical point of view as tools for scientific simulations of complicated systems. Comment: 6 pages, 10 figure files, to appear in the proceedings of HPCAsia0
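
    For reference, Lighthill's acoustic analogy mentioned above recasts the compressible flow equations as an inhomogeneous wave equation for the density fluctuation, with the flow entering only through a source term (standard textbook form; the paper's specific MPS formulation may differ):

\[
  \frac{\partial^2 \rho'}{\partial t^2} - c_0^2\,\nabla^2 \rho'
    = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
  \qquad
  T_{ij} = \rho\,u_i u_j + \left(p' - c_0^2\,\rho'\right)\delta_{ij} - \tau_{ij},
\]

    where \rho' and p' are the density and pressure fluctuations, c_0 the ambient speed of sound, u_i the flow velocity and \tau_{ij} the viscous stress tensor. Evaluating the source term T_{ij} from an incompressible flow field is what lets the MPS approach couple the flow calculation to a separate sound-propagation step.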

    Gulliver project: performers and visitors

    This paper discusses two projects in our research environment: the Gulliver project, an ambitious project conceived by some artists connected to our research efforts, and the Aveiro project, also ambitious, but with goals that can be achieved because of technological developments rather than being dependent on artistic and 'political' (read: financial) sources. Both projects are on virtual and augmented reality. The main goal is to design inhabited environments, where 'inhabited' refers to autonomous agents and agents that represent humans, real-time or off-line, visiting the virtual environment and interacting with other agents. The Gulliver environment has been designed by two artists: Matjaz Stuk and Alena Hudcovicova. The Aveiro project is a research effort of a group of researchers trying to design models of intelligence and interaction underlying the behavior of (groups of) agents inhabiting virtual worlds. In this paper we survey the current state of both projects and discuss current and future attempts to have music performances by virtual and real performers in these environments.