
    Third International Conference on Technologies for Music Notation and Representation TENOR 2017

    The third International Conference on Technologies for Music Notation and Representation seeks to focus on a set of specific research issues associated with music notation that were elaborated at the first two editions of TENOR in Paris and Cambridge. The theme of the conference is vocal music, whereas the pre-conference workshops focus on innovative technological approaches to music notation.

    Trombone Synthesis by Model and Measurement

    A physics-based synthesis model of a trombone is developed using filter elements that are both theoretically based and estimated from measurement. The model consists of two trombone instrument transfer functions: one at the position of the mouthpiece, enabling coupling to a lip-valve model, and one at the outside of the bell for sound production. The focus of this work is on extending a previously presented measurement technique used to obtain acoustic characterizations of waveguide elements for cylindrical and conical elements, with further development allowing for the estimation of the flared trombone bell reflection and transmission functions, for which no one-parameter traveling-wave solution exists. A one-dimensional bell model is developed, providing an approximate theoretical expectation to which estimation results may be compared. Dynamic trombone model elements, such as those dependent on the bore length, are theoretically and parametrically modeled. As a result, the trombone model focuses on accuracy, interactivity, and efficiency, making it suitable for a number of real-time computer music applications.
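
    To make the waveguide picture above concrete, here is a deliberately simplified sketch of a cylindrical bore modelled as a pair of delay lines, with a one-pole lowpass standing in for the frequency-dependent bell reflection. It is not the paper's measured model: the function name, delay length, reflection gain and lowpass coefficient are all illustrative assumptions.

    # Minimal digital-waveguide bore sketch (illustrative coefficients only).
    import numpy as np

    def waveguide_bore(excitation, delay_samples=200, refl_gain=-0.95, lp_coef=0.6):
        """Drive a simple bore model with a mouthpiece-end excitation signal."""
        fwd = np.zeros(delay_samples)   # wave travelling toward the bell
        bwd = np.zeros(delay_samples)   # wave travelling back to the mouthpiece
        lp_state = 0.0
        out = np.zeros(len(excitation))
        for i in range(len(excitation)):
            to_bell = fwd[-1]                        # wave arriving at the bell
            # one-pole lowpass approximates a frequency-dependent bell reflection
            lp_state = lp_coef * lp_state + (1.0 - lp_coef) * to_bell
            reflected = refl_gain * lp_state
            out[i] = to_bell - reflected             # crude "transmitted" bell output
            to_mouth = bwd[-1]                       # wave arriving back at the mouthpiece
            fwd = np.roll(fwd, 1); fwd[0] = excitation[i] + to_mouth
            bwd = np.roll(bwd, 1); bwd[0] = reflected
        return out

    if __name__ == "__main__":
        fs = 44100
        burst = np.random.randn(fs // 2) * np.hanning(fs // 2)
        print("peak output:", float(np.max(np.abs(waveguide_bore(burst)))))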

    A Modeller-Simulator for Instrumental Playing of Virtual Musical Instruments

    This paper presents a musician-oriented modelling and simulation environment for designing physically modelled virtual instruments and interacting with them via a high-performance haptic device. In particular, our system allows the physical coupling between the user and the manipulated virtual instrument to be restored, a key factor for expressive playing of traditional acoustic instruments that is absent in the vast majority of computer-based musical systems. We first analyse the various uses of haptic devices in computer music and introduce the various technologies involved in our system. We then present the modeller and simulation environments, and examples of musical virtual instruments created with this new environment.
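
    As a rough illustration of the mass-interaction style of physical modelling that such modeller/simulator environments build on, the sketch below steps a small chain of point masses coupled by spring-damper links; the force applied to mass 0 stands in for a haptic device, and the returned reaction force is what a haptic loop would render. Every constant here is an assumption, not a parameter of the authors' system.

    import numpy as np

    def step_chain(pos, vel, user_force, k=5000.0, d=2.0, mass=0.01, dt=1.0 / 44100):
        """One semi-implicit Euler step of a mass/spring-damper chain; returns the
        reaction force at the user-coupled end (mass 0)."""
        n = len(pos)
        force = np.zeros(n)
        for i in range(n - 1):                       # spring-damper link between i and i+1
            f = k * (pos[i + 1] - pos[i]) + d * (vel[i + 1] - vel[i])
            force[i] += f
            force[i + 1] -= f
        force[-1] -= k * pos[-1]                     # anchor the far end to a fixed point
        reaction = -force[0]                         # force the model exerts back on the user
        force[0] += user_force                       # haptic-device force input
        vel += dt * force / mass                     # update velocities, then positions
        pos += dt * vel
        return reaction

    if __name__ == "__main__":
        pos, vel = np.zeros(8), np.zeros(8)
        for t in range(2000):
            fb = step_chain(pos, vel, user_force=1.0 if t < 50 else 0.0)
        print("feedback force:", float(fb), "end-mass position:", float(pos[-1]))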

    Non-Standard Sound Synthesis with Dynamic Models

    This Thesis pursues three main objectives: (i) to provide the concept of a new generalized non-standard synthesis model that would provide the framework for incorporating other non-standard synthesis approaches; (ii) to explore dynamic sound modeling through the application of new non-standard synthesis techniques and procedures; and (iii) to experiment with dynamic sound synthesis for the creation of novel sound objects. In order to achieve these objectives, this Thesis introduces a new paradigm for non-standard synthesis that is based on the algorithmic assemblage of minute wave segments to form sound waveforms. This paradigm is called Extended Waveform Segment Synthesis (EWSS) and incorporates a hierarchy of algorithmic models for the generation of microsound structures. The concepts of EWSS are illustrated with the development and presentation of a novel non-standard synthesis system, Dynamic Waveform Segment Synthesis (DWSS). DWSS features and combines a variety of algorithmic models for direct synthesis generation: list generation and permutation, tendency masks, trigonometric functions, stochastic functions, chaotic functions and grammars. The core mechanism of DWSS is based on an extended application of cellular automata. The potential of the synthetic capabilities of DWSS is explored in a series of case studies in which a number of sound objects were generated, revealing (i) the capability of the system to generate sound morphologies belonging to other non-standard synthesis approaches and (ii) the capability of the system to generate novel sound objects with dynamic morphologies. The introduction of EWSS and DWSS is preceded by an extensive and critical overview of the concepts of microsound synthesis, algorithmic composition, the two cultures of computer music, the heretical approach in composition, non-standard synthesis and sonic emergence, along with a thorough examination of algorithmic models and their application in sound synthesis and electroacoustic composition. This Thesis also proposes (i) a new definition for “algorithmic composition”, (ii) the term “totalistic algorithmic composition”, and (iii) four discrete aspects of non-standard synthesis.
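
    The following toy sketch illustrates the general waveform-segment idea (it is not the thesis's EWSS/DWSS): an elementary cellular automaton (rule 90, chosen arbitrarily) is iterated, each row is reduced to a breakpoint value, and one short linear segment per row is appended to the output waveform.

    import numpy as np

    def ca_step(cells, rule=90):
        """One step of an elementary cellular automaton via a rule lookup table."""
        left, right = np.roll(cells, 1), np.roll(cells, -1)
        idx = 4 * left + 2 * cells + right           # neighbourhood code 0..7
        table = np.array([(rule >> i) & 1 for i in range(8)])
        return table[idx]

    def segment_synthesis(n_segments=200, n_cells=64, seg_len=64, seed=1):
        """Concatenate short linear wave segments whose endpoints follow the CA."""
        rng = np.random.default_rng(seed)
        state = rng.integers(0, 2, n_cells)
        out, prev = [], 0.0
        for _ in range(n_segments):
            state = ca_step(state)
            target = 2.0 * state.mean() - 1.0        # map row density to [-1, 1]
            out.append(np.linspace(prev, target, seg_len, endpoint=False))
            prev = target
        return np.concatenate(out)

    if __name__ == "__main__":
        wave = segment_synthesis()
        print(len(wave), "samples, range", float(wave.min()), "to", float(wave.max()))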

    SOHO: Sonification of Hybrid Objects. A Disappearing-Computer Research Atelier Final Report


    Physically-based auralization : design, implementation, and evaluation

    The aim of this research is to implement an auralization system that renders audible a 3D model of an acoustic environment. The design of such a system is an iterative process where successive evaluation of auralization quality is utilized to further refine the model and develop the rendering methods. The work can be divided into two parts corresponding to the design and implementation of an auralization system and the evaluation of the system employing objective and subjective criteria. The presented auralization method enables both static and dynamic rendering. In dynamic rendering, the positions and orientations of sound sources, surfaces, or a listener can change. These changes are accommodated by modeling the direct sound and early reflections with the image-source method. In addition, the late reverberation is modeled with a time-invariant recursive digital filter structure. The core of the thesis deals with the processing of image sources for auralization. The sound signal emitted by each image source is processed with digital filters modeling such acoustic phenomena as sound source directivity, distance delay and attenuation, air and material absorption, and the characteristics of spatial hearing. The design and implementation of these digital filters are presented in detail. The traditional image-source method has also been extended to handle diffraction in addition to specular reflections. The quality of the implemented auralization system was evaluated by comparing recorded and auralized soundtracks subjectively. The compared soundtracks were prepared by recording sound signals in a real room and by auralizing these signals with a 3D model of the room. The auralization quality was assessed with objective and subjective methods. The objective analysis was based both on traditional room acoustic criteria and on a simplified auditory model developed for this purpose. This new analysis method mimics the behavior of the human cochlea. Therefore, with the developed method, impulse responses and sound signals can be visualized with a time and frequency resolution similar to that of human hearing. The evaluation was completed subjectively by conducting listening tests. The utilized listening test methodology is explained and the final results are presented. The results show that the implemented auralization system provides plausible and natural-sounding auralizations in rooms similar to the lecture room employed for evaluation.
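
    A minimal sketch of the image-source idea used here for the direct sound and early reflections is given below; the room dimensions, absorption coefficient and sample rate are assumptions for illustration, and only the first-order reflections of a shoebox room are included, not the thesis's full auralization pipeline.

    import numpy as np

    def first_order_ir(src, lis, room=(6.0, 4.0, 3.0), absorption=0.3,
                       fs=48000, c=343.0, length=4096):
        """Impulse response with the direct path plus the six first-order
        reflections of a shoebox room, as ideal delayed/attenuated impulses."""
        src, lis = np.asarray(src, float), np.asarray(lis, float)
        images = [(src, 1.0)]                        # direct sound
        for axis in range(3):
            for wall in (0.0, room[axis]):           # mirror the source in each wall
                img = src.copy()
                img[axis] = 2.0 * wall - src[axis]
                images.append((img, 1.0 - absorption))
        ir = np.zeros(length)
        for img, gain in images:
            dist = np.linalg.norm(img - lis)
            delay = int(round(dist / c * fs))
            if delay < length:
                ir[delay] += gain / max(dist, 1e-3)  # 1/r spreading attenuation
        return ir

    if __name__ == "__main__":
        h = first_order_ir(src=(1.0, 1.0, 1.5), lis=(4.0, 3.0, 1.2))
        print("non-zero taps:", int(np.count_nonzero(h)))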

    Automatic annotation of musical audio for interactive applications

    As machines become more and more portable, and part of our everyday life, it becomes apparent that developing interactive and ubiquitous systems is an important aspect of new music applications created by the research community. We are interested in developing a robust layer for the automatic annotation of audio signals, to be used in various applications, from music search engines to interactive installations, and in various contexts, from embedded devices to audio content servers. We propose adaptations of existing signal processing techniques to a real-time context. Amongst these annotation techniques, we concentrate on low- and mid-level tasks such as onset detection, pitch tracking, tempo extraction and note modelling. We present a framework to extract these annotations and evaluate the performance of different algorithms. The first task is to detect onsets and offsets in audio streams within short latencies. The segmentation of audio streams into temporal objects enables various manipulations and the analysis of metrical structure. Evaluation of different algorithms and their adaptation to real time are described. We then tackle the problem of fundamental frequency estimation, again trying to reduce both the delay and the computational cost. Different algorithms are implemented for real time and evaluated on monophonic recordings and complex signals. Spectral analysis can be used to label the temporal segments; the estimation of higher-level descriptions is approached. Techniques for modelling of note objects and localisation of beats are implemented and discussed. Applications of our framework include live and interactive music installations, and more generally tools for composers and sound engineers. Speed optimisations may bring a significant improvement to various automated tasks, such as automatic classification and recommendation systems. We describe the design of our software solution, for our research purposes and in view of its integration within other systems. This work was supported by the EU-FP6-IST-507142 project SIMAC (Semantic Interaction with Music Audio Contents) and EPSRC grants GR/R54620 and GR/S75802/01.
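
    As one concrete example of the kind of low-level annotation task described above, the sketch below computes a half-wave-rectified spectral-flux onset detection function with simple median-based peak picking; the frame size, hop size and threshold are illustrative defaults, not the thesis's exact algorithms or settings.

    import numpy as np

    def spectral_flux(x, frame=1024, hop=512):
        """Half-wave-rectified spectral flux per frame of a mono signal."""
        win = np.hanning(frame)
        n_frames = max(0, (len(x) - frame) // hop + 1)
        prev_mag = np.zeros(frame // 2 + 1)
        flux = np.zeros(n_frames)
        for i in range(n_frames):
            mag = np.abs(np.fft.rfft(x[i * hop:i * hop + frame] * win))
            diff = mag - prev_mag
            flux[i] = np.sum(diff[diff > 0])         # only energy increases count
            prev_mag = mag
        return flux

    def pick_onsets(flux, threshold=1.5, context=8):
        """Report frames whose flux is a local peak above a scaled running median."""
        onsets = []
        for i in range(1, len(flux) - 1):
            local = np.median(flux[max(0, i - context):i + context]) + 1e-9
            if flux[i] > threshold * local and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]:
                onsets.append(i)
        return onsets

    if __name__ == "__main__":
        fs = 44100
        t = np.arange(fs) / fs
        x = np.sin(2 * np.pi * 440 * t) * (t % 0.25 < 0.05)   # four short tone bursts
        print("onset frames:", pick_onsets(spectral_flux(x)))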