    Rich Contacts: Corpus-Based Convolution of Audio Contact Gestures for Enhanced Musical Expression

    We propose ways of enriching the timbral potential of gestural sonic material captured via piezo or contact microphones, through latency-free convolution of the microphone signal with grains from a sound corpus. This creates a new way to combine the sonic richness of large sound corpora, easily accessible via navigation through a timbral descriptor space, with intuitive gestural interaction with a surface, captured by any contact microphone. We use convolution to excite the grains from the corpus via the microphone input, capturing the contact interaction sounds, which allows articulation of the corpus by hitting, scratching, or strumming a surface with various parts of the hands or with objects. We also show how grain changes must be handled carefully, how one can smoothly interpolate between neighbouring grains, and finally evaluate the system against previous approaches.
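    The convolution stage lends itself to a short illustration. The following Python sketch (a minimal sketch under assumed names, not the authors' implementation) convolves successive contact-microphone blocks with a single corpus grain via FFT overlap-add, carrying the convolution tail across block boundaries; true latency-free operation would additionally need partitioned convolution, and grain changes would need the careful handling and interpolation the abstract mentions.

    ```python
    import numpy as np

    def block_convolve(blocks, grain, block_size):
        """Stream-convolve fixed-size input blocks with one corpus grain
        (treated as an FIR filter) using FFT overlap-add; the tail of each
        block's convolution is added into the start of the next output."""
        tail = np.zeros(len(grain) - 1)
        n_fft = 1 << int(np.ceil(np.log2(block_size + len(grain) - 1)))
        G = np.fft.rfft(grain, n_fft)                  # grain spectrum, precomputed
        for x in blocks:
            y = np.fft.irfft(np.fft.rfft(x, n_fft) * G, n_fft)
            y = y[:block_size + len(grain) - 1]        # valid part of the product
            y[:len(tail)] += tail                      # overlap-add the carried tail
            yield y[:block_size]
            tail = y[block_size:]
    ```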

    earGram Actors: an interactive audiovisual system based on social behavior

    In multi-agent systems, local interactions among system components following relatively simple rules often result in complex overall systemic behavior. Complex behavioral and morphological patterns have been used to generate and organize audiovisual systems with artistic purposes. In this work, we propose to use the Actor model of social interactions to drive a concatenative synthesis engine called earGram in real time. The Actor model was originally developed to explore the emergence of complex visual patterns; earGram, in turn, was originally developed to facilitate the creative exploration of concatenative sound synthesis. The integrated audiovisual system allows a human performer to interact with the system dynamics while receiving visual and auditory feedback. The interaction happens indirectly, by disturbing the rules governing the social relationships amongst the actors, which results in a wide range of dynamic spatiotemporal patterns. A performer thus improvises within the behavioral scope of the system while evaluating the apparent connections between parameter values and the actual complexity of the system's output.
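    As a concrete, speculative reading of the indirect interaction, the toy Python sketch below (not the earGram/Actor implementation; all parameters are invented) has a small population of actors obey two social rules, attraction to the flock centroid and repulsion from the nearest neighbour, and lets each actor's position in a 2-D descriptor space pick the corpus unit to trigger. A performer would perturb the rule parameters rather than the actors themselves.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    units = rng.random((500, 2))      # corpus units located in a 2-D descriptor space
    pos = rng.random((16, 2))         # actor positions in the same space
    vel = np.zeros_like(pos)

    def step(attraction=0.05, repulsion=0.002, dt=0.1):
        """Advance the actors by one tick of the social rules and return
        the index of the corpus unit each actor triggers afterwards."""
        global pos, vel
        dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        np.fill_diagonal(dist, np.inf)                 # ignore self-distances
        nearest = dist.argmin(axis=1)
        d_near = dist[np.arange(len(pos)), nearest, None]
        vel += attraction * (pos.mean(axis=0) - pos)          # cohesion rule
        vel += repulsion * (pos - pos[nearest]) / d_near**2   # separation rule
        pos = np.clip(pos + dt * vel, 0.0, 1.0)
        return [np.linalg.norm(units - p, axis=1).argmin() for p in pos]
    ```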

    Introducing CatOracle: Corpus-based concatenative improvisation with the Audio Oracle algorithm

    CatOracle responds to the need to join high-level control of audio timbre with the organization of musical form in time. It is inspired by two powerful existing tools: CataRT for corpus-based concatenative synthesis, based on the MuBu for Max library, and PyOracle for computer improvisation, combining for the first time audio descriptor analysis with the learning and generation of musical structures. Harnessing a user-defined list of audio features, CatOracle analyzes live or prerecorded audio to construct an "Audio Oracle" as a basis for improvisation. CatOracle also extends features of classic concatenative synthesis to include live interactive audio mosaicking and score-based transcription using the bach library for Max. The project suggests applications not only to live performance of written and improvised electroacoustic music, but also to computer-assisted composition and musical analysis.
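    At CatOracle's core, the Audio Oracle generalizes the factor oracle from exact symbols to descriptor frames compared by distance. As a hedged sketch, the Python below implements only the classical online factor-oracle construction over pre-quantised symbols; the Audio Oracle variant would replace exact symbol equality with a similarity threshold on feature vectors. Improvisation then alternates between following forward transitions (continuation) and suffix links (recombination).

    ```python
    def new_oracle():
        # State 0 represents the empty prefix; its suffix link is undefined.
        return {'trans': [{}], 'sfx': [None]}

    def add_symbol(oracle, sym):
        """Online factor-oracle update for one new symbol (Allauzen et al.):
        add a state and its forward transition, then follow suffix links to
        add shortcut transitions and the new state's own suffix link."""
        trans, sfx = oracle['trans'], oracle['sfx']
        trans.append({})
        new = len(trans) - 1
        trans[new - 1][sym] = new                  # primary forward transition
        k = sfx[new - 1]
        while k is not None and sym not in trans[k]:
            trans[k][sym] = new                    # shortcut transition
            k = sfx[k]
        sfx.append(0 if k is None else trans[k][sym])

    oracle = new_oracle()
    for s in "abbab":                              # stand-in for quantised descriptor frames
        add_symbol(oracle, s)
    ```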

    Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning & Concatenative Synthesis

    This paper presents a method for mapping embodied gesture, acquired with electromyography and motion sensing, to a corpus of small sound units, organised by derived timbral features using concatenative synthesis. Gestures and sounds can be associated directly, using individual units and static poses, or by using a sound tracing method that leverages our intuitive associations between sound and embodied movement. We propose a method for augmenting corpus density to enable expressive variation on the original gesture-timbre space.
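    One plausible realisation of the learned mapping is a small regressor from gesture features to timbre descriptors, followed by a nearest-unit lookup in the corpus. The sketch below assumes scikit-learn and hypothetical training data recorded during a sound-tracing session; the file names and dimensions are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical paired training data from a sound-tracing session:
    # per-frame gesture features (e.g. EMG envelopes, motion statistics)
    # and the timbre descriptors of the sound heard at that moment.
    X = np.load("gesture_features.npy")    # (n_frames, n_gesture_dims)
    Y = np.load("unit_descriptors.npy")    # (n_frames, n_timbre_dims)

    model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000).fit(X, Y)

    def select_unit(gesture_frame, corpus_descriptors):
        """Predict a timbre-descriptor target for one gesture frame and
        return the index of the closest unit in the corpus."""
        target = model.predict(gesture_frame[None, :])[0]
        return int(np.linalg.norm(corpus_descriptors - target, axis=1).argmin())
    ```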

    Composing Music for Acoustic Instruments and Electronics Mediated Through the Application of Microsound

    This project seeks to extend, through a portfolio of compositions, the use of microsound to mixed works incorporating acoustic instruments and electronics. Issues relating to the notation of microsound when used with acoustic instruments are explored, and the adoption of a clear and intuitive system of graphical notation is proposed. The design of the performance environment for the electroacoustic part is discussed, and different models for the control of the electronics are considered. Issues relating to structure and form when applied to compositions that mix note-based material with texture-based material are also considered. A framework based on a pure sound/noise continuum, used in conjunction with a hierarchy of gestural archetypes, is adopted as a possible solution to the challenges of structuring mixed compositions. Gestural and textural relationships between different parts of the compositions are also explored, and the use of extended instrumental techniques to create continua between the acoustic and the electroacoustic is adopted. The role of aleatoric techniques and improvisation in both the acoustic and the electroacoustic parts is explored through the adoption of an interactive performance environment incorporating a pitch-tracking algorithm. Finally, the advantages and disadvantages of real-time recording and processing of the electronic part, as compared with live processing of pre-existing sound-files, are discussed.

    Algorithmic Compositional Methods and their Role in Genesis: A Multi-Functional Real-Time Computer Music System

    Algorithmic procedures have been applied in computer music systems to generate compositional products using conventional musical formalism, extensions of such formalism, and extra-musical disciplines such as mathematical models. This research investigates the applicability of such algorithmic methodologies to real-time musical composition, culminating in Genesis, a multi-functional real-time computer music system written for Mac OS X in the SuperCollider object-oriented programming language and contained on the accompanying DVD. Through an extensive graphical user interface, Genesis offers musicians the opportunity to apply the sonic features of real-time sound-objects to designated generative processes via different models of interaction, such as unsupervised musical composition by Genesis and networked control of external Genesis instances. As a result of the applied interactive, generative and analytical methods, Genesis forms a unique compositional process, with a compositional product that reflects the character of the interactions between the sonic features of real-time sound-objects and the selected algorithmic procedures. This thesis describes the technologies involved in algorithmic compositional methodologies and the concepts that define their constructs, then details their selection and application in Genesis, with audio examples of the algorithmic compositional methods demonstrated on the accompanying DVD. To demonstrate the real-time compositional abilities of Genesis, free explorations with instrumentalists, along with studio recordings of the compositional processes available in Genesis, are presented as audiovisual examples on the accompanying DVD. The evaluation of the system's capability to form a real-time compositional process, maintaining real-time interaction between the sonic features of sound-objects and the selected algorithmic compositional methods, draws on existing evaluation techniques founded in HCI and considers the qualitative issues such evaluation methods present. Regarding the compositional products generated by Genesis, the challenges of quantifying and qualifying its outputs are identified, demonstrating the intricacies of assessing generative methods of composition and their impact on the resulting compositional product. The thesis concludes by considering further advances and applications of Genesis, and by inviting further dissemination of the system and promotion of research into evaluative methods for generative techniques, in the hope that this may provide additional insight into the relative success of products generated by real-time algorithmic compositional processes.
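    Genesis itself is written in SuperCollider, but the central mechanism, sonic features of real-time sound-objects steering a generative process, can be sketched independently of the system. The Python toy below is purely illustrative of that idea, not Genesis code: frame loudness drives event density and spectral centroid drives register in a trivial stochastic note generator.

    ```python
    import numpy as np

    def analyse(frame, sr=44100):
        """Two crude sonic features of one audio frame: RMS and spectral centroid."""
        mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), 1 / sr)
        centroid = (freqs * mag).sum() / (mag.sum() + 1e-9)
        rms = np.sqrt((frame ** 2).mean())
        return rms, centroid

    def generate(rms, centroid, rng=np.random.default_rng()):
        """Toy generative mapping: louder input yields denser events,
        brighter input shifts the register upward. Returns MIDI notes."""
        n_events = 1 + int(rms * 20)
        base = 48 + 24 * min(centroid / 8000.0, 1.0)
        return [int(base) + int(rng.integers(-7, 8)) for _ in range(n_events)]
    ```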

    Using audio feature extraction for interactive feature-based sonification of sound

    Presented at the 21st International Conference on Auditory Display (ICAD2015), July 6-10, 2015, Graz, Styria, Austria.

    Feature extraction from an audio stream is usually used for visual analysis and measurement of sound. This paper describes a set of methods for using feature extraction to manipulate concatenative synthesis, and develops experiments with reconfigurations of feature-based concatenative synthesis systems within a live, interactive context. The aim is to explore sound creation and manipulation within an interactive, creative feedback loop.
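    One way to make the feedback loop concrete: re-analyse each unit as it is played and let the extracted features select the next unit. The hedged Python sketch below assumes librosa for feature extraction and a corpus held as a list of audio arrays; in a live version, features of the incoming audio stream would be blended into the selection.

    ```python
    import numpy as np
    import librosa   # assumed available for feature extraction

    def unit_features(y, sr):
        """Summarise one sound unit as its mean MFCC vector."""
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

    def feedback_loop(corpus, sr, n_steps=64, start=0):
        """Closed analysis/synthesis loop: the unit just played is
        re-analysed, and its features choose the next-nearest unit."""
        feats = np.array([unit_features(u, sr) for u in corpus])
        idx, out = start, []
        for _ in range(n_steps):
            out.append(corpus[idx])
            f = unit_features(corpus[idx], sr)          # analyse what was played
            order = np.linalg.norm(feats - f, axis=1).argsort()
            idx = int(order[1])                         # nearest unit besides itself
        return np.concatenate(out)
    ```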

    Physics-based Concatenative Sound Synthesis of Photogrammetric models for Aural and Haptic Feedback in Virtual Environments

    We present a novel physics-based concatenative sound synthesis (CSS) methodology for congruent interactions across the physical, graphical, aural and haptic modalities in Virtual Environments. Navigation in aural and haptic corpora of annotated audio units is driven by user interactions with highly realistic photogrammetry-based models in a game engine, where automated and interactive positional, physics and graphics data are supported. From a technical perspective, the contribution extends existing CSS frameworks by avoiding the mapping or mining of annotation data to real-time performance attributes, while guaranteeing a degree of novelty and variation for the same gesture.
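    As a speculative illustration of physics-driven unit selection (a generic sketch in the spirit of the approach, not the paper's pipeline, with invented names throughout), a collision event reported by the game engine can be mapped to a descriptor target, with a small noise term supplying novelty and variation when the same gesture repeats.

    ```python
    import numpy as np

    # Hypothetical annotated corpus: one row per unit, with two offline
    # descriptors, say normalised loudness and spectral brightness.
    descriptors = np.load("corpus_descriptors.npy")   # (n_units, 2)

    def on_collision(impact_velocity, material_brightness,
                     rng=np.random.default_rng()):
        """Map one physics collision to a corpus unit: impact velocity sets
        the loudness target, material sets the brightness target, and a
        jitter term varies repeated identical gestures."""
        target = np.array([impact_velocity, material_brightness])
        target += rng.normal(scale=0.02, size=2)      # novelty/variation jitter
        return int(np.linalg.norm(descriptors - target, axis=1).argmin())
    ```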