19 research outputs found

    Real-time sound synthesis on a multi-processor platform

    Real-time sound synthesis means that the calculation and output of each sound sample for a channel of audio information must be completed within a sample period. At the broadcasting-standard sampling rate of 32,000 Hz, the maximum period available is 31.25 μs. Such requirements demand a large amount of data-processing power. An effective solution to this problem is a multi-processor platform: a parallel and distributed processing system. The suitability of the MIDI (Musical Instrument Digital Interface) standard, published in 1983, as a controller for real-time applications is examined. Many musicians have expressed doubts about the decade-old standard's ability to support real-time performance. These doubts were investigated by measuring the timing of various musical gestures and comparing the results with the subjective characteristics of human perception. The implementation and optimisation of real-time additive synthesis programs on a multi-transputer network are described. A prototype 81-polyphonic-note organ configuration was implemented. By devising and deploying monitoring processes, the network's performance was measured and enhanced, leading to more efficient usage: an 88-note configuration. Since 88 simultaneous notes are rarely necessary in most performances, a scheduling program for dynamic note allocation was then introduced to achieve further efficiency gains. To exploit remaining calculation redundancies, a multi-sampling-rate approach was applied as a final optimisation step. The theories underlying sound granulation, as a means of constructing complex sounds from grains, and the real-time implementation of this technique are outlined. The idea of sound granulation closely parallels the quantum-wave notion of "acoustic quanta". Despite its conceptual simplicity, the signal-processing requirements set tough demands, providing a challenge for this audio synthesis engine. Three issues arising from the results of these implementations are discussed: the efficiency of the applications implemented, provisions for new processors, and an optimal network architecture for sound synthesis.
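
    To make the sample-period budget concrete, the following minimal sketch (in Python, purely illustrative; the function name, partial counts, and frequencies are invented here, not taken from the thesis) computes one voice of additive synthesis. In a real-time system, the entire inner partial-summing loop would have to complete within 1/32,000 s = 31.25 μs per sample, which is what motivates distributing voices across processors:

        import math

        SAMPLE_RATE = 32000                          # broadcasting-standard rate cited above
        SAMPLE_PERIOD_US = 1_000_000 / SAMPLE_RATE   # 31.25 microseconds per sample

        def additive_voice(freqs, amps, n_samples, sr=SAMPLE_RATE):
            # Sum of sinusoidal partials; in a real-time system this whole
            # per-sample loop is the work that must fit inside the budget.
            phases = [0.0] * len(freqs)
            out = []
            for _ in range(n_samples):
                s = sum(a * math.sin(p) for a, p in zip(amps, phases))
                phases = [p + 2 * math.pi * f / sr for p, f in zip(phases, freqs)]
                out.append(s)
            return out

        # One second of a harmonic tone with eight partials (amplitudes 1/k).
        tone = additive_voice([220 * k for k in range(1, 9)],
                              [1.0 / k for k in range(1, 9)], SAMPLE_RATE)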

    Complex musical behaviours via time-variant audio feedback networks and distributed adaptation: a study of autopoietic infrastructures for real-time performance systems

    The research project presented here is a study of the application of complex adaptive systems (CASes) to live music performance and composition by fully or semi-autonomous machines. The fundamental artistic concept and motivating idea behind the project is that complex systems are an optimal model for creative music practice because they operate at the edge of chaos: a condition of interplay between order and disorder, or patterns and surprise. Arguably, this is an essential element of music regardless of genre or style. The central research questions addressed by this project are: how can music systems be realised whose abstract yet structurally coherent and contextually complex output displays organicity and resembles aliveness? How can music organisms be designed so that artificial expressiveness and formal developments emerge and generate musical behaviours that contribute to the ongoing exploration at the edges of new music practices? How can audio networks be created that are responsible for their own structure and organisation, and whose substantial autonomy expands the paradigm of human-machine interaction? The methodology used to implement the music systems applies theories from complexity and adaptation, biology, and philosophy within nonlinear, time-variant, self-modulating feedback networks. The structural coupling between system and performer follows a cybernetic approach to human-machine interaction and interfacing. The music systems developed for the creative works in this research combine interdependent algorithms for the processing of information and the synthesis of sound and music. A technique formulated to increase the complexity of these systems, related to the notion of evolvability in biology and genetic algorithms, is referred to as distributed adaptation. Distributed adaptation consists of making the adaptation infrastructure itself adaptive and time-variant by employing emergent sensing mechanisms for the generation of information signals, and emergent mapping functions between information signals and state variables. This framework realises the idea that information and information processing recursively determine each other in a radical constructivist fashion, with the important consequence that the machine ultimately constructs its own reality as a self-sensing, self-performing, and context-dependent entity. The research includes seven music performances, each implementing CASes with or without human intervention, and a library of original software algorithms for low-level and high-level information processing written in Faust. Ordered chronologically, the performances depict the progress of the study, starting with systems having basic adaptive characteristics and culminating in more advanced ones where the distributed adaptation design is applied. Through self-reflection and post-hoc analysis, case studies illustrate that the combination of CASes with cybernetic performance and interfacing, and particularly distributed adaptation systems with or without human agents, produces a sophisticated musical output whose evolution is complex, coherent, and expressive. These results suggest that the emergent behaviours of such systems can be deployed as a means of exploring new music in practice. Furthermore, the autonomy and contextual nature of these systems suggest that promising results can be achieved when applying them to other audio-related fields, especially where interactivity with the surrounding environment is crucial, or when using them as musical instruments for users with special needs.
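
    As a rough illustration of the adaptive loop underlying this approach (a sketch under stated assumptions, written in Python rather than Faust; all names and constants are invented, not drawn from the thesis's library), the function below extracts an information signal from its own output and maps it back onto a state variable, so the feedback network senses and reshapes its own behaviour over time:

        import math, random

        def adaptive_feedback(excitation, base_gain=0.9, rate=0.01):
            # Nonlinear time-variant self-modulating feedback loop:
            # a sensing mechanism (running estimate of output power)
            # generates an information signal that is mapped onto a state
            # variable (the feedback gain), which in turn shapes the output.
            y_prev, energy, gain = 0.0, 0.0, base_gain
            out = []
            for x in excitation:
                y = math.tanh(x + gain * y_prev)            # saturating feedback path
                energy = (1 - rate) * energy + rate * y * y # information signal (smoothed power)
                gain = base_gain * (1.0 - energy)           # mapping: more energy, less feedback
                y_prev = y
                out.append(y)
            return out

        # A noisy burst settles into self-regulated behaviour as the gain adapts.
        signal = adaptive_feedback([random.uniform(-1, 1) for _ in range(2000)])

    In the distributed-adaptation design described above, the sensing mechanism and the mapping function would themselves be emergent and time-variant, rather than fixed as they are in this sketch.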

    Audiovisual granular synthesis: creating synergistic relationships between sound and image

    The aims of this research were to investigate how an audio processing technique known as granular synthesis can be translated into a visual processing equivalent, and to develop software that fuses audiovisual relationships for the creation of real-time audiovisual art. Two main research questions were posed: first, how can audio processing techniques such as granular synthesis be adapted and applied to inform new visual performance techniques? Second, how can computer software synergistically integrate audio and visuals to enable the real-time creation and performance of audiovisual art? The project at the centre of the research was the creation of a real-time audiovisual granular synthesis instrument named Kortex. The project followed a practice-based methodology and used an iterative performance cycle to evaluate and develop the Kortex prototype, performing successive iterations at a number of local, interstate, and international events. Kortex facilitates the identification of characteristics shared between sound and image at the micro and macro levels. The micro level addresses individual audiovisual segments, or grains, while the macro level addresses post-processing effects applied to the stream of audiovisual grains. Audiovisual characteristics are paired by the user at each level, enabling composition with both media simultaneously and giving the audiovisual artist a dynamic approach to the creation of new works. Creating relationships between image and sound is highly subjective, yet an artist may use a mathematical, metaphorical/intuitive, or intrinsic approach to create a convincing correlation between the two media. The mathematical approach expresses the relationship between sound and image as an equation. Metaphorical/intuitive relationships are formed when the two media share similar emotional or perceptual characteristics, while intrinsic relationships occur when audio and visual media are synthesised from the same source. Performers need powerful control strategies to manipulate large collections of variables in real time. I found that pattern-generating modulation sources created overlapping phrases that evolved the behaviour of audiovisual relationships. Furthermore, saving interesting emergent aesthetics into banks of presets, together with the ability to slide from one preset to the next, enabled powerful transformations during a performance. The project has contributed to the field of audiovisual art, specifically to the performance work of DJs and VJs. Kortex provides a single audiovisual composition and performance environment that DJs and VJs can use for creative collaboration, and has strong potential for adoption by that community in producing tightly synchronised real-time audiovisual performances.
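
    As a minimal sketch of the underlying audio technique (illustrative Python, not Kortex code; the function name and parameter values are invented), granular synthesis reads short windowed grains from a source buffer and overlaps them in the output stream. In an audiovisual instrument, the same grain parameters (onset, length, overlap) could simultaneously drive a visual grain stream:

        import math, random

        def granulate(source, grain_len=441, overlap=0.5, out_len=44100):
            # Overlap-add of Hann-windowed grains taken from random
            # positions in the source buffer (asynchronous granulation).
            out = [0.0] * out_len
            hop = max(1, int(grain_len * (1.0 - overlap)))
            for start in range(0, out_len - grain_len, hop):
                pos = random.randrange(len(source) - grain_len)  # random grain onset
                for i in range(grain_len):
                    env = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)  # Hann window
                    out[start + i] += env * source[pos + i]
            return out

        # Granulate one second of a 440 Hz sine sampled at 44.1 kHz.
        src = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
        grains = granulate(src)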

    Proceedings of the 19th Sound and Music Computing Conference

    Proceedings of the 19th Sound and Music Computing Conference, June 5-12, 2022, Saint-Étienne (France). https://smc22.grame.fr

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of SMC2010, the 7th Sound and Music Computing Conference, July 21st-24th, 2010

    Users, systems, and technology in high-end audio

    Thesis (Ph.D. in History, Anthropology, and Science, Technology and Society (HASTS)), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2009. Includes bibliographical references (p. 401-413). This is a story about technology, users, and music. It is about an approach to the design, manipulation, and arrangement of technologies in small-scale systems to achieve particular aesthetic goals - goals that are at once subjective and contingent. These goals emerge from enthusiasm for technology, for system-building, and for music among members of a community of users, and from the promise of the emotional rewards derived from these elements in combination. It is a story about how enthusiasm and passion become practice, and how particular technologies, system-building activities, listening, debating, innovating, and interacting form that practice. Using both historical and ethnographic research methods, including fieldwork and oral history interviews, this dissertation focuses on how and why user communities mobilize around particular technologies and socio-technical systems. In particular, it concerns how users' aesthetic sensibilities and enthusiasm for technology can shape both technologies themselves and the processes of technological innovation. These issues are explored through a study of the small but enthusiastic high-end audio community in the United States. These users express needs, desires, and aesthetic motivations towards technology that set them apart from mainstream consumers, but also reveal important and under-recognized aspects of human relationships with technology more broadly. Covering the emergence and growth of high-end audio from the early 1970s to 2000, I trace some of the major technology transitions of this period and their associated social elements, including the shift from vacuum-tube to solid-state electronics in the 1970s, and from analog vinyl records to digital compact discs in the 1980s. I show how this community came to understand technology, science, and its own social behavior through powerful emotional and aesthetic responses to music and to the technologies used to reproduce music in the home. I further show how focusing on technology's users can recast assumptions about the ingredients and conditions necessary to foster technological innovation. By Kieran Downes, Ph.D. in History, Anthropology, and Science, Technology and Society (HASTS).