
    Instruments for Spatial Sound Control in Real Time Music Performances. A Review

    The systematic arrangement of sound in space is widely considered an important compositional design category of Western art music and acoustic media art in the 20th century. Considerable attention has been paid to artistic concepts of sound in space and to its reproduction through loudspeaker systems; much less attention has been paid to live-interactive practices and tools for spatialisation as a performance practice. As a contribution to this topic, the present study conducts an inventory of controllers for the real-time spatialisation of sound as part of musical performances, and classifies them both along different interface paradigms and according to their scope of spatial control. By means of a literature study, we identified 31 different spatialisation interfaces presented to the public in the context of artistic performances or at relevant conferences on the subject. Considering that only a small proportion of these interfaces combine spatialisation and sound production, it seems that in most cases the projection of sound in space is not delegated to a musical performer but regarded as a compositional problem or as a separate performative dimension. With the exception of the mixing desk and its fader-board paradigm, as used for the performance of acousmatic music with loudspeaker orchestras, all devices are individual design solutions developed for a specific artistic context. We conclude that, if controllers for sound spatialisation are to be perceived as musical instruments in a narrow sense, meeting certain aspects of instrumentality, immediacy, liveness, and learnability, new design strategies will be required.

    Extended Nonnegative Tensor Factorisation Models for Musical Sound Source Separation

    Recently, shift-invariant tensor factorisation algorithms have been proposed for the purpose of separating the sound sources of pitched musical instruments. In practice, however, existing algorithms require log-frequency spectrograms to allow shift invariance in frequency, which causes problems when attempting to resynthesise the separated sources, and it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive-synthesis-based approach that allows the use of linear-frequency spectrograms while imposing strict harmonic constraints, resulting in an improved model. These additional constraints further allow a source-filter model to be added to the factorisation framework, as well as an extended model capable of separating mixtures of pitched and percussive instruments simultaneously.
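    To make the idea of strict harmonic constraints on a linear-frequency spectrogram concrete, the following sketch shows one way such a constraint can be enforced. This is not the paper's algorithm, only a minimal NumPy illustration with placeholder parameters: every basis function is built as a weighted sum of Gaussian partial templates of a candidate fundamental, so harmonicity holds by construction, and only the partial weights and time activations are learned by multiplicative updates.

    # Illustrative sketch (not the authors' model): NMF on a linear-frequency
    # magnitude spectrogram V with hard harmonic constraints on the bases.
    import numpy as np

    def harmonic_templates(freqs, f0s, n_partials=10, width=2.0):
        """Fixed dictionary: one Gaussian bump per partial of each candidate f0."""
        D = np.zeros((len(freqs), len(f0s), n_partials))
        for j, f0 in enumerate(f0s):
            for p in range(n_partials):
                D[:, j, p] = np.exp(-0.5 * ((freqs - f0 * (p + 1)) / width) ** 2)
        return D  # shape: (n_bins, n_f0s, n_partials)

    def harmonic_nmf(V, D, n_iter=200, eps=1e-9):
        """Multiplicative updates; the bases W stay tied to the dictionary D."""
        _, n_frames = V.shape
        _, n_f0s, n_partials = D.shape
        A = np.random.rand(n_f0s, n_partials)   # partial amplitudes per pitch
        H = np.random.rand(n_f0s, n_frames)     # time activations per pitch
        for _ in range(n_iter):
            W = np.einsum('bjp,jp->bj', D, A)   # harmonically constrained bases
            R = W @ H + eps                     # current reconstruction
            H *= (W.T @ (V / R)) / (W.T @ np.ones_like(V) + eps)
            R = W @ H + eps
            # update partial amplitudes while keeping harmonicity strict
            num = np.einsum('bjp,bt,jt->jp', D, V / R, H)
            den = np.einsum('bjp,bt,jt->jp', D, np.ones_like(V), H) + eps
            A *= num / den
        return A, H

    Because the bases never leave the span of the harmonic dictionary, the recovered sources can be resynthesised directly from the linear-frequency representation, which is the practical benefit the abstract points to.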

    A History of Audio Effects

    Audio effects are essential tools that the field of music production relies upon. The ability to intentionally manipulate and modify a piece of sound has opened up considerable opportunities for music making. The evolution of technology has often driven new audio tools and effects, from early architectural acoustics through electromechanical and electronic devices to the digitisation of music production studios. Throughout its history, music has constantly borrowed ideas and technological advances from other fields and contributed innovations back to them; this process, known as transsectorial innovation, fundamentally underpins the technological development of audio effects. The development and evolution of audio effect technology is discussed, highlighting major technical breakthroughs and the impact of available audio effects.

    A Review of Probabilistic Graphical Models of Harmony for the Analysis of Musical Works

    This paper presents the current state of the art in automated musical harmony analysis. Research in this field is motivated by the real-world problem of building fully automated content-based music recommendation systems (similar to Pandora, but without the manual work of expert musicologists). The paper focuses mainly on probabilistic graphical models as one of the most promising approaches, although we also give background on alternative methods. We consider works that use Markov chain models, hidden Markov models, and multi-level graphical models. Along with models that capture only harmonic information (chord progressions and, in some cases, the key), we also list several models that combine harmonic structure with the rhythmic or voice structure of the analysed piece.
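    As a rough illustration of the HMM family surveyed above, and not of any specific model from the reviewed literature, the sketch below decodes a chord sequence from chroma features: chords are hidden states, emissions are scored by similarity to binary triad templates, and a hand-built transition matrix plus Viterbi decoding yields the most likely progression. All parameter values are placeholders.

    # Tiny illustrative HMM chord recogniser (assumed setup, not a surveyed model).
    import numpy as np

    NOTES = ['C','C#','D','D#','E','F','F#','G','G#','A','A#','B']
    CHORDS = [n + q for q in ('', 'm') for n in NOTES]   # 12 major + 12 minor

    def chord_templates():
        T = np.zeros((24, 12))
        for i in range(12):
            T[i, [i, (i + 4) % 12, (i + 7) % 12]] = 1.0       # major triad
            T[12 + i, [i, (i + 3) % 12, (i + 7) % 12]] = 1.0  # minor triad
        return T / np.linalg.norm(T, axis=1, keepdims=True)

    def viterbi_chords(chroma, self_prob=0.9):
        """chroma: (n_frames, 12) array of non-negative chroma features."""
        T = chord_templates()
        obs = np.log(chroma @ T.T + 1e-6)        # (n_frames, 24) log emission scores
        n_states = 24
        trans = np.full((n_states, n_states), (1 - self_prob) / (n_states - 1))
        np.fill_diagonal(trans, self_prob)        # chords tend to persist
        log_trans = np.log(trans)
        delta = obs[0] - np.log(n_states)         # uniform initial distribution
        back = np.zeros((len(chroma), n_states), dtype=int)
        for t in range(1, len(chroma)):
            scores = delta[:, None] + log_trans
            back[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + obs[t]
        path = [int(delta.argmax())]
        for t in range(len(chroma) - 1, 0, -1):
            path.append(back[t, path[-1]])
        return [CHORDS[s] for s in reversed(path)]

    The multi-level models mentioned in the review extend exactly this structure, for example by adding a key variable above the chord states or coupling the chord chain with rhythmic structure.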

    Computer Music and Digital Media Art Through a Web-Based Collaborative Interface

    This dissertation is the result of an investigation into collaborative interfaces using web technologies, carried out in the context of Braga Media Arts. As its practical outcome, a networked audiovisual environment, Akson, is presented. Akson was initially conceived as an exploration of what could be built by leveraging global internet infrastructure together with musical and visual playback across multiple devices. The system was designed with live performance in mind and is able to interact with the audience's devices.
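    The abstract does not describe Akson's architecture, but a common pattern for this kind of audience-device interaction is a relay server that fans control messages out to every connected device, which then renders audio and visuals locally. The sketch below is a hypothetical, minimal Python illustration of that pattern using only the standard library; it is not Akson's implementation, and the message format is invented for the example.

    # Hypothetical broadcast relay (not Akson): every JSON line received from
    # one client is fanned out to all connected devices.
    import asyncio, json

    clients = set()

    async def handle(reader, writer):
        clients.add(writer)
        try:
            while line := await reader.readline():
                msg = json.loads(line)                 # e.g. {"event": "note", "pitch": 60}
                data = (json.dumps(msg) + "\n").encode()
                targets = list(clients)
                for w in targets:                      # fan out to all connected devices
                    w.write(data)
                await asyncio.gather(*(w.drain() for w in targets))
        finally:
            clients.discard(writer)
            writer.close()

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 9000)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())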

    Automatic annotation of musical audio for interactive applications

    As machines become more portable and more a part of our everyday lives, it becomes apparent that developing interactive and ubiquitous systems is an important aspect of new music applications created by the research community. We are interested in developing a robust layer for the automatic annotation of audio signals, to be used in various applications, from music search engines to interactive installations, and in various contexts, from embedded devices to audio content servers. We propose adaptations of existing signal processing techniques to a real-time context. Amongst these annotation techniques, we concentrate on low- and mid-level tasks such as onset detection, pitch tracking, tempo extraction, and note modelling. We present a framework to extract these annotations and evaluate the performance of different algorithms. The first task is to detect onsets and offsets in audio streams within short latencies; the segmentation of audio streams into temporal objects enables various manipulations and analyses of metrical structure. Evaluations of different algorithms and their adaptation to real time are described. We then tackle the problem of fundamental frequency estimation, again trying to reduce both the delay and the computational cost. Different algorithms are implemented for real time and tested on monophonic recordings and complex signals. Spectral analysis can be used to label the temporal segments, and the estimation of higher-level descriptions is approached: techniques for the modelling of note objects and the localisation of beats are implemented and discussed. Applications of our framework include live and interactive music installations and, more generally, tools for composers and sound engineers. Speed optimisations may bring significant improvements to various automated tasks, such as automatic classification and recommendation systems. We describe the design of our software solution, for our research purposes and in view of its integration within other systems.
    PhD thesis. Funded by the EU-FP6-IST-507142 project SIMAC (Semantic Interaction with Music Audio Contents) and EPSRC grants GR/R54620 and GR/S75802/01.
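    To give a concrete feel for the kind of low-level annotation task mentioned first (onset detection with short latency), the following sketch computes a half-wave-rectified spectral-flux detection function frame by frame and picks peaks with a simple adaptive threshold. It is a generic textbook-style illustration, not the thesis's implementation, and the frame size, threshold, and window choices are placeholders.

    # Generic onset detector sketch: spectral flux + adaptive peak picking.
    import numpy as np

    def spectral_flux_onsets(x, sr, frame=1024, hop=512, delta=0.1, wait=3):
        window = np.hanning(frame)
        prev_mag = np.zeros(frame // 2 + 1)
        flux = []
        for start in range(0, len(x) - frame, hop):
            mag = np.abs(np.fft.rfft(window * x[start:start + frame]))
            flux.append(np.sum(np.maximum(mag - prev_mag, 0.0)))  # rectified difference
            prev_mag = mag
        flux = np.asarray(flux)
        flux /= flux.max() + 1e-12
        onsets, last = [], -wait
        for n in range(1, len(flux) - 1):
            local_mean = flux[max(0, n - 8):n + 1].mean()
            is_peak = flux[n] > flux[n - 1] and flux[n] >= flux[n + 1]
            if is_peak and flux[n] > local_mean + delta and n - last >= wait:
                onsets.append(n * hop / sr)        # onset time in seconds
                last = n
        return onsets

    Because the detection function only needs the current and previous frame, this style of detector can run with a latency of one hop plus the peak-picking look-ahead, which is the real-time constraint the abstract emphasises.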

    A General Framework for Visualization of Sound Collections in Musical Interfaces

    While audio data play an increasingly central role in computer-based music production, interaction with large sound collections in most available music creation and production environments is still very often limited to scrolling through long lists of file names. This paper describes a general framework for devising interactive applications based on the content-based visualization of sound collections. The proposed framework allows for a modular combination of different techniques for sound segmentation, analysis, and dimensionality reduction, using the reduced feature space for interactive applications. We analyze several prototypes presented in the literature, describe their limitations, and propose a more general framework that can be used flexibly to devise music creation interfaces. The proposed approach includes several novel contributions with respect to previously used pipelines, such as unsupervised feature learning, content-based sound icons, and control of the output-space layout. We present an implementation of the framework in the SuperCollider computer music language, together with three example prototypes demonstrating its use for data-driven music interfaces. Our results demonstrate the potential of unsupervised machine learning and visualization for creative applications in computer music.
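    The framework itself is implemented in SuperCollider; purely as a language-agnostic illustration of the analysis-then-reduction pipeline it describes, the sketch below builds a content-based 2-D map of a sound collection in Python. The segmentation stage is omitted, the features and PCA are simple stand-ins for the framework's modular choices (which include unsupervised feature learning), and the folder path and parameters are placeholders.

    # Rough stand-in for the pipeline described above: per-file features,
    # dimensionality reduction, and 2-D coordinates for an interface layout.
    import glob
    import numpy as np
    import librosa                                   # assumed available
    from sklearn.decomposition import PCA

    def sound_map(folder, n_mfcc=13):
        paths = sorted(glob.glob(folder + "/*.wav"))
        feats = []
        for p in paths:
            y, sr = librosa.load(p, sr=22050, mono=True)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            # summarise each file by its mean and standard deviation over time
            feats.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))
        X = np.vstack(feats)
        X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)    # standardise features
        coords = PCA(n_components=2).fit_transform(X)        # 2-D layout space
        return dict(zip(paths, coords))

    In an interface, each returned coordinate pair would position a sound icon on screen, so that perceptually similar sounds cluster together instead of being buried in a file list.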