80 research outputs found

    Real-time sound synthesis on a multi-processor platform

    Real-time sound synthesis means that the calculation and output of each sound sample for a channel of audio information must be completed within a sample period. At the broadcasting-standard sampling rate of 32,000 Hz, the maximum period available is 31.25 μs. Such requirements demand a large amount of data-processing power. An effective solution to this problem is a multi-processor platform: a parallel and distributed processing system. The suitability of the MIDI (Musical Instrument Digital Interface) standard, published in 1983, as a controller for real-time applications is examined. Many musicians have expressed doubts about the decade-old standard's fitness for real-time performance. These doubts are investigated by measuring the timing of various musical gestures and comparing the results with the subjective characteristics of human perception. The implementation and optimisation of real-time additive synthesis programs on a multi-transputer network are described. A prototype 81-polyphonic-note organ configuration was implemented. By devising and deploying monitoring processes, the network's performance was measured and enhanced, leading to more efficient usage: the 88-note configuration. Since 88 simultaneous notes are rarely needed in most performances, a scheduling program for dynamic note allocation was then introduced to achieve further efficiency gains. To reduce calculation redundancy still further, a multi-sampling-rate approach was applied as a final optimisation step. The theories underlying sound granulation, as a means of constructing complex sounds from grains, and the real-time implementation of this technique are outlined. The idea of sound granulation is closely related to the quantum-wave notion of "acoustic quanta". Despite its conceptual simplicity, the signal-processing requirements are demanding, providing a challenge for this audio synthesis engine. Three issues arising from the results of these implementations are discussed: the efficiency of the applications implemented, provisions for new processors, and an optimal network architecture for sound synthesis.
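    As a rough illustration of the arithmetic involved (not the thesis's transputer implementation), the sketch below computes the per-sample time budget at 32 kHz and renders one block of naive additive synthesis for a configurable number of notes and partials; the note counts, frequencies and partial counts are illustrative assumptions only.

```python
# Minimal sketch (not the original transputer code): per-sample time budget
# at 32 kHz and a naive additive-synthesis inner loop for N polyphonic notes.
import numpy as np

SAMPLE_RATE = 32_000                      # broadcasting-standard rate cited in the abstract
BUDGET_US = 1e6 / SAMPLE_RATE             # 31.25 microseconds per output sample
print(f"Per-sample budget: {BUDGET_US:.2f} us")

def additive_block(freqs_hz, amps, n_partials=16, n_samples=1024):
    """Render one block by summing sinusoidal partials for every active note.

    freqs_hz, amps: fundamental frequency and amplitude per note (illustrative values).
    """
    t = np.arange(n_samples) / SAMPLE_RATE
    out = np.zeros(n_samples)
    for f0, a in zip(freqs_hz, amps):
        for k in range(1, n_partials + 1):
            partial = f0 * k
            if partial >= SAMPLE_RATE / 2:    # skip partials above the Nyquist frequency
                break
            out += (a / k) * np.sin(2 * np.pi * partial * t)
    return out

# 88 simultaneous notes, as in the optimised organ configuration.
rng = np.random.default_rng(0)
freqs = rng.uniform(55.0, 880.0, size=88)
amps = np.full(88, 1.0 / 88)
block = additive_block(freqs, amps)
```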

    The composer as technologist : an investigation into compositional process

    This work presents an investigation into compositional process, in which a study of musical gesture, certain areas of cognitive musicology, computer vision technologies and object-oriented programming provides the basis for the composer (the author) to assume the role of a technologist and to acquire the knowledge and skills that role demands. In particular, it focuses on the development of a video gesture recognition heuristic and its application to the compositional problems posed. The result is an interactive musical work, with score, for violin and electronics that supports the research findings. In addition, the investigative approach of developing technology to solve musical problems, encompassing practical composition and aesthetic challenges, is detailed.

    A History of Audio Effects

    Audio effects are an essential tool that the field of music production relies upon. The ability to intentionally manipulate and modify a piece of sound has opened up considerable opportunities for music making. The evolution of technology has often driven new audio tools and effects, from early architectural acoustics through electromechanical and electronic devices to the digitisation of the music production studio. Throughout history, music has constantly borrowed ideas and technological advances from other fields and contributed back to innovation in those fields. This is defined as transsectorial innovation and fundamentally underpins the technological development of audio effects. The development and evolution of audio effect technology is discussed, highlighting major technical breakthroughs and the impact of the audio effects they made available.

    Mobile Music Development Tools for Creative Coders

    This project is a body of work that facilitates the creation of musical mobile artworks. The project includes a code toolkit that enhances and simplifies the development of mobile music iOS applications, a flexible notation system designed for mobile musical interactions, and example apps and scored compositions to demonstrate the toolkit and notation system. The code library is designed to simplify the technical aspect of user-centered design and development with a more direct connection between concept and deliverable. This simplification addresses learning problems (such as motivation, self-efficacy, and self-perceived understanding) by bridging the gap between idea and functional prototype and improving the ability to contextualize the development process for musicians and other creatives. The toolkit helps to circumvent the need to learn complex iOS development patterns and affords more readable code. CSPD (color, shape, pattern, density) notation is a pseudo-tablature that describes performance interactions. The system leverages visual density and patterns of both color and shape to describe types of gestures (physical or musical) and their relationships rather than focusing on strict rhythmic or pitch/frequency content. The primary design goal is to visualize macro musical concepts that create middleground structure.
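    The abstract does not specify how CSPD events are encoded; purely as a hypothetical sketch of what a machine-readable CSPD entry might look like, the following Python dataclass bundles the four notation dimensions (color, shape, pattern, density) with a time span. All field names and value ranges here are assumptions, not the project's actual format.

```python
# Hypothetical sketch only: the actual CSPD encoding is not described in the abstract.
from dataclasses import dataclass
from enum import Enum

class Shape(Enum):
    CIRCLE = "circle"
    SQUARE = "square"
    TRIANGLE = "triangle"

@dataclass
class CSPDEvent:
    """One notated gesture: color, shape, pattern and density over a time span."""
    start_s: float      # start time within the performance, in seconds
    duration_s: float   # length of the gesture, in seconds
    color: str          # e.g. a hex string such as "#ff6600"
    shape: Shape        # glyph family used to draw the gesture
    pattern: str        # fill pattern, e.g. "solid", "hatched", "dotted"
    density: float      # 0.0 (sparse) to 1.0 (dense) visual/gestural activity

# A two-event fragment of a hypothetical score.
score = [
    CSPDEvent(0.0, 8.0, "#ff6600", Shape.CIRCLE, "solid", 0.2),
    CSPDEvent(8.0, 4.0, "#0066ff", Shape.TRIANGLE, "hatched", 0.8),
]
```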

    Enmeshed 3

    Enmeshed 3, for cello and live electronics, is the third in a series of works in which a solo instrument becomes ‘enmeshed’ in multiple layers of transformations derived from the live performance. The works are shaped and structured in terms of the varying relationships between these layers and the ‘distances’ between the original acoustic performance and the various transformations, in terms of pitch, time delay, timbre, texture and space. At certain points in the work these almost converge, whilst at other times large distances open up, with the different layers in a wild counterpoint. All the sounds in the work derive from live transformation of the soloist's performance. The composer’s own granular synthesis algorithms play a significant role in these transformations. Multichannel spatialisation also plays an important part in terms of spatial positioning and movement, the creation of different virtual spatial environments and the definition of the different layers. The work can be performed variously with between 8 and 24 channels. Enmeshed 3 is in five contrasting but inter-related sections, centred on a long, slow, meditative central passage. It was written for Madeleine Shapiro, who premiered it at the New York City Electroacoustic Music Festival in April 2013.

    Designing instruments towards networked music practices

    It is commonly noted in New Interfaces for Musical Expression (NIME) research that few such interfaces make it to the mainstream or are adopted by the general public. Some research in Sound and Music Computing (SMC) suggests that a lack of humanistic research guiding technological development may be one of the causes. Many new technologies are invented with no real aim beyond technical innovation; successful products, by contrast, emphasise user-friendliness and user involvement in the design process, as in User-Centred Design (UCD), which seeks to ensure that innovation addresses real, existing needs among users. Such an approach includes not only traditionally quantifiable usability goals, but also qualitative, psychological, philosophical and musical ones. The latter approach has come to be called experience design, while the former is referred to as interaction design. Although the Human-Computer Interaction (HCI) community in general has recognised the significance of qualitative needs and experience design, NIME has been slower to adopt this new paradigm. This thesis therefore investigates its relevance to NIME, and specifically to Computer Supported Cooperative Work (CSCW) for music applications, by devising a prototype for group music action based on needs identified among pianists engaged in piano duets, one of the more common forms of group creation in the western musical tradition. These needs, some of which are socio-emotional in nature, are addressed through our prototype in the context of computers and global networks, by allowing composers from all over the world to submit music to a group concert on a Yamaha Disklavier located in Porto, Portugal. Although this prototype is not a new gestural controller per se, and therefore not a traditional NIME, but rather a platform that interfaces groups of composers with a remote audience, the aim of this research is to investigate how contextual parameters such as venue, audience, joint concert and technologies affect the overall user experience of such a system. The results of this research have been important not only for understanding the processes, services, events and environments in which NIMEs operate, but also for understanding reciprocity, creativity and experience design in networked music practices.

    A Mathematical, Graphical and Visual Approach to Granular Synthesis Composition

    We show a method for granular synthesis composition based on a mathematical modeling of musical gesture. Each gesture is drawn as a curve generated from a particular mathematical model (or function) and coded as a MATLAB script. The gestures can be deterministic, defined by mathematical time functions, drawn freehand, or even randomly generated. The parametric information of the gestures is interpreted, via OSC messages, by a granular synthesizer (Granular Streamer). The musical composition is then realized with the models (scripts) written in MATLAB and exported to a graphical score (Granular Score). The method lends itself to statistical analysis of the granular sound streams and of the final composition. We also offer a way to create granular streams based on correlated pairs of grain parameters.
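    A minimal sketch of the pipeline described above, written in Python rather than the authors' MATLAB: a gesture is evaluated as a deterministic time function and its values are streamed as OSC control messages. The OSC address, port and gesture function are assumptions for illustration, not the actual Granular Streamer interface.

```python
# Sketch of the gesture-to-OSC idea; the address ("/grain/density"), port and
# gesture curve are assumptions, not the authors' actual setup.
import math
import time
from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9000)          # hypothetical synthesizer port

def gesture(t: float) -> float:
    """Deterministic gesture curve: a rising-then-decaying grain density."""
    return 100.0 * t * math.exp(-2.0 * t)             # grains per second, peaks near t = 0.5 s

DURATION_S = 4.0
RATE_HZ = 20.0                                        # 20 control-rate updates per second
for i in range(int(DURATION_S * RATE_HZ)):
    t = i / RATE_HZ
    client.send_message("/grain/density", gesture(t))
    time.sleep(1.0 / RATE_HZ)
```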

    An investigation of audio signal-driven sound synthesis with a focus on its use for bowed stringed synthesisers

    This thesis proposes an alternative approach to sound synthesis. It seeks to offer traditional string players a synthesiser which will allow them to make use of their existing skills in performance. A theoretical apparatus reflecting on the constraints of formalisation is developed and used to shed light on construction-related shortcomings in the instrumental developments of related research. Historical aspects and methods of sound synthesis, and the act of musical performance, are addressed with the aim of drawing conclusions for the construction of algorithms and interfaces. The alternative approach creates an openness and responsiveness in the synthesis instrument by using implicit playing parameters without the necessity to define, specify or measure all of them. In order to investigate this approach, several synthesis algorithms are developed, sounds are designed and a selection of them empirically compared to conventionally synthesised sounds. The algorithms are used in collaborative projects with other musicians in order to examine their practical musical value. The results provide evidence that implementations using the approach presented can offer musically significant differences as compared to similarly complex conventional implementations, and that - depending on the disposition of the musician - they can form a valuable contribution to the sound repertoire of performers and composers
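    One very simple way to illustrate the general idea of audio signal-driven synthesis (not the algorithms developed in the thesis) is to let the input signal itself excite the synthesis stage, so that dynamics and articulation carry over without being explicitly measured. The sketch below waveshapes and ring-modulates a mono input buffer; the carrier frequency, gain values and the synthetic test signal are illustrative assumptions.

```python
# Minimal illustration of audio signal-driven synthesis, not the thesis's algorithms:
# the string input itself drives the output, so playing nuances (dynamics,
# articulation) are carried implicitly rather than measured as explicit parameters.
import numpy as np

SAMPLE_RATE = 44_100

def signal_driven_voice(input_mono: np.ndarray, carrier_hz: float = 220.0) -> np.ndarray:
    """Waveshape the input, then ring-modulate it with a sine carrier."""
    x = np.asarray(input_mono, dtype=float)
    shaped = np.tanh(3.0 * x)                          # soft-clipping waveshaper
    t = np.arange(len(x)) / SAMPLE_RATE
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return 0.5 * shaped * carrier                      # ring modulation

# Example input: a decaying 196 Hz tone standing in for a bowed open G string.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
fake_string = np.exp(-2.0 * t) * np.sin(2 * np.pi * 196.0 * t)
out = signal_driven_voice(fake_string)
```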

    From Musical Grammars to Music Cognition in the 1980s and 1990s: Highlights of the History of Computer-Assisted Music Analysis

    While approaches with established historical precedents – computer-assisted analytical approaches drawing on statistics and information theory – developed further, many research projects conducted during the 1980s aimed at the development of new methods of computer-assisted music analysis. Some projects explored new possibilities of using computers to simulate human cognition and perception, drawing on cognitive musicology and Artificial Intelligence, areas that were themselves spurred on by new technical developments and by developments in computer program design. The 1990s ushered in revolutionary methods of music analysis, especially those drawing on Artificial Intelligence research. Some of these approaches started to focus on musical sound rather than scores, allowing music analysis to address how music is actually perceived. In some approaches, the analysis of music and of music cognition merged. This article provides an overview of computer-assisted music analysis of the 1980s and 1990s as it relates to music cognition, and discusses selected approaches.