
    Star Interpolator – A Novel Visualization Paradigm for Graphical Interpolators

    This paper presents a new visualization paradigm for graphical interpolation systems, known as Star Interpolation, that has been specifically created for sound design applications. The presented investigation of previous visualizations makes it apparent that the existing visuals in this class of system generally relate to the interpolation model that determines the weightings of the presets, and not to the sonic output. The Star Interpolator looks to resolve this deficiency by providing visual cues that relate to the parameter space. Through comparative exploration it has been found that this visualization provides a number of benefits over previous systems. It is also shown that hybrid visualizations can be generated that combine the benefits of the new visualization with the existing interpolation models; these can then be accessed using an Interactive Visualization (IV) approach. The results from our exploration of these visualizations are encouraging, and they appear to be advantageous when using the interpolators for sound design tasks.

    A Journey in (Interpolated) Sound: Impact of Different Visualizations in Graphical Interpolators

    Graphical interpolation systems provide a simple mechanism for the control of sound synthesis systems by providing a level of abstraction above the parameters of the synthesis engine, allowing users to explore different sounds without awareness of the synthesis details. While a number of graphical interpolator systems have been developed over many years, with a variety of user-interface designs, few have been subject to user evaluations. We present the testing and evaluation of alternative visualizations for a graphical interpolator, in order to establish whether the visual feedback provided through the interface aids the navigation and identification of sounds within the system. The testing took the form of comparing the users’ mouse traces, showing the journey they made through the interpolated sound space when different visual interfaces were used. Sixteen participants took part, and a summary of the results is presented, showing that the visuals provide users with additional cues that lead to better interaction with the interpolator.

    A framework for the development and evaluation of graphical interpolation for synthesizer parameter mappings

    This paper presents a framework that supports the development and evaluation of graphical interpolated parameter mapping for the purpose of sound design. These systems present the user with a graphical pane, usually two-dimensional, where synthesizer presets can be located. Moving an interpolation point cursor within the pane then creates new sounds by calculating new parameter values, based on the cursor position and the interpolation model used. The exploratory nature of these systems lends itself to sound design applications, which also have a highly exploratory character. However, populating the interpolation space with “known” preset sounds allows the parameter space to be constrained, reducing the design complexity otherwise associated with synthesizer-based sound design. An analysis of previous graphical interpolators is presented, and from this a framework is formalized and tested to show its suitability for the evaluation of such systems. The framework has then been used to compare the functionality of a number of systems that have been previously implemented. This has led to a better understanding of the different sonic outputs that each can produce and has highlighted areas for further investigation.
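    The core calculation such systems perform can be sketched concretely. Below is a minimal illustration of one common interpolation model, inverse-distance weighting, for a two-dimensional pane; the preset layout, the power parameter, and the function names are illustrative assumptions, not the specific framework or models compared in the paper.

```python
# Minimal sketch: inverse-distance-weighted interpolation of synth presets.
# All names and values here are illustrative assumptions.
import numpy as np

def interpolate_parameters(cursor, preset_positions, preset_params, power=2.0):
    """Weight each preset by inverse distance to the cursor position and
    blend the preset parameter vectors into one new parameter vector."""
    cursor = np.asarray(cursor, dtype=float)
    positions = np.asarray(preset_positions, dtype=float)  # shape (n, 2)
    params = np.asarray(preset_params, dtype=float)        # shape (n, p)

    dists = np.linalg.norm(positions - cursor, axis=1)
    if np.any(dists < 1e-9):                # cursor sits exactly on a preset
        return params[np.argmin(dists)]
    weights = 1.0 / dists**power            # closer presets weigh more
    weights /= weights.sum()                # normalise weights to sum to 1
    return weights @ params                 # weighted blend of preset vectors

# Example: three presets in the pane, each with four synth parameters.
positions = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
params = [(0.1, 0.9, 0.3, 0.5), (0.8, 0.2, 0.6, 0.4), (0.5, 0.5, 0.9, 0.1)]
print(interpolate_parameters((0.4, 0.3), positions, params))
```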

    Designing musical games for electroacoustic improvisation

    This paper describes the background and motivations behind the author’s electroacoustic game-pieces Pathfinder (2016) and ICARUS (2019), designed specifically for his performance practice with an augmented drum kit. The use of game structures in music is outlined, and musical expression in the context of commercial musical games using conventional game controllers is discussed. Notions such as agility, agency and authorship in music composition and improvisation are considered in parallel with game design and play, where players are asked to develop skills through affordances within a digital game-space. It is argued that the recent democratisation of game engines opens a wide range of expressive opportunities for real-time game-based improvisation and performance. Some of the design decisions and performance strategies for the two instrument-controlled games are presented to illustrate the discussion; this is done in terms of game design, physical control through the augmented instrument, live electronics and the overall artistic goals of the pieces. Finally, future directions for instrument-controlled electroacoustic game-pieces are suggested.

    The Gestural Control of Audio Processing

    Gesture-enabled devices have become so ubiquitous in recent years that commands such as ‘pinch to zoom in on an image’ are part of most people’s gestural vocabulary. Despite this, gestural interfaces have been used sparingly within the audio industry. The aim of this research project is to evaluate the effectiveness of a gestural interface for the control of audio processing; in particular, the ability of a gestural system to streamline workflow and rationalise the number of control parameters, thus reducing the complexity of Human-Computer Interaction (HCI). A literature review of gestural technology explores the ways in which it can improve HCI, before focussing on areas of implementation in audio systems. Case studies of previous research projects were conducted to evaluate the benefits and pitfalls of gestural control over audio. The findings from these studies concluded that the scope of this project should be limited to two-dimensional gestural control. An elicitation of gestural preferences was performed to identify expert users’ gestural associations. This data was used to compile a taxonomy of gestures and their most widely intuitive parameter mappings. A novel interface was then produced using a popular tablet computer, facilitating the control of equalisation, compression and gating. Objective testing determined the performance of the gestural interface in comparison to traditional WIMP (Windows, Icons, Menus, Pointer) techniques, thus producing a benchmark for the system under test. Further testing was carried out to observe the effects of graphical user interfaces (GUIs) in a gestural system, in particular the suitability of skeuomorphic (knobs and faders) designs in modern DAWs (Digital Audio Workstations). A novel visualisation method, deemed more suitable for gestural interaction, is proposed and tested. Semantic descriptors are explored as a means of further improving the speed and usability of gestural interfaces, through the simultaneous control of multiple parameters. This rationalisation of control moves towards the implementation of gestural shortcuts and ‘continuous presets’.

    Making Mappings: Design Criteria for Live Performance

    We present new results combining data from a previously published study of the mapping design process with a new replication of the same method with a group of participants having different background expertise. Our thematic analysis of participants’ interview responses reveals some design criteria common to both groups of participants: mappings must manage the balance of control between the instrument and the player, and they should be easy to understand for the player and audience. We also consider several criteria that distinguish the two groups’ evaluation strategies. We conclude with a discussion of the mapping designer’s perspective, performance with gestural controllers, and the difficulties of evaluating mapping designs and musical instruments in general.

    Scanning Spaces: Paradigms for Spatial Sonification and Synthesis

    In 1962 Karlheinz Stockhausen’s “Concept of Unity in Electronic Music” introduced a connection between the parameters of intensity, duration, pitch, and timbre using an accelerating pulse train. In 1973 John Chowning discovered that complex audio spectra could be synthesized by increasing vibrato rates past 20 Hz. In both cases the notion of acceleration to produce timbre was critical to the discovery. Although both composers also utilized sound spatialization in their works, spatial parameters were not unified with their synthesis techniques. This dissertation examines software studies and multimedia works involving the use of spatial and visual data to produce complex sound spectra. The culmination of these experiments, Spatial Modulation Synthesis, is introduced as a novel, mathematical control paradigm for audio-visual synthesis, providing unified control of spatialization, timbre, and visual form using high-speed sound trajectories. The unique visual sonification and spatialization rendering paradigms of this dissertation necessitated the development of an original audio-sample-rate graphics rendering implementation, which, unlike typical multimedia frameworks, provides an exchange of audio-visual data without downsampling or interpolation.
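    The Chowning result the abstract refers to can be illustrated in a few lines: vibrato is simply frequency modulation, and once the modulation rate passes into the audio range the sidebands it generates are heard as timbre rather than as pitch wobble. A minimal sketch with illustrative values, not taken from the dissertation:

```python
# Minimal sketch of simple two-oscillator FM synthesis. As the modulation
# rate crosses ~20 Hz, sidebands at carrier ± k*mod_hz become audible timbre.
import numpy as np

def fm_tone(carrier_hz, mod_hz, mod_index, dur=1.0, sr=44100):
    """Sine carrier phase-modulated by a sine modulator."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * carrier_hz * t
                  + mod_index * np.sin(2 * np.pi * mod_hz * t))

slow_vibrato = fm_tone(440.0, 5.0, 2.0)    # sub-audio rate: pitch wobble
audio_rate = fm_tone(440.0, 110.0, 2.0)    # audio rate: a new, complex timbre
```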

    Creativity, Exploration and Control in Musical Parameter Spaces.

    This thesis investigates the use of multidimensional control of synthesis parameters in electronic music, and the impact of controller mapping techniques on creativity. The theoretical contribution of this work, the EARS model, provides a rigorous application of creative cognition research to this topic. EARS provides a cognitive model of creative interaction with technology, retrodicting numerous prior findings in musical interaction research. The model proposes four interaction modes and characterises them in terms of parameter-space traversal mechanisms. Recommendations are given for properties of controller-synthesiser mappings that support each of the modes. This thesis also proposes a generalisation of Fitts’ law that enables throughput-based evaluation of multidimensional control devices. Three experiments were run that studied musicians performing sound design tasks with various interfaces; mappings suited to three of the four EARS modes were quantitatively evaluated. Experiment one investigated the notion of a ‘divergent interface’. A mapping geometry that caters to early-stage exploratory creativity was developed and evaluated via a publicly available tablet application. Dimension reduction of a 10D synthesiser parameter space to a 2D surface was achieved using Hilbert space-filling curves. Interaction data indicated that this divergent mapping was used for early-stage creativity, and that the traditional sliders were used for late-stage tuning. Experiment two established a ‘minimal experimental paradigm’ for sound design interface evaluation. This experiment showed that multidimensional controllers were faster than 1D sliders for locating a target sound in two and three timbre dimensions. The final study tested a novel embodied interaction technique: ViBEAMP. This system utilised a hand tracker and a 3D visualisation to train users to control six synthesis parameters simultaneously. Throughput was recorded as triple that of six sliders, and working memory load was significantly reduced. This experiment revealed that musical, time-targeted interactions obey a different speed-accuracy trade-off law from accuracy-targeted interactions.
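    For context, the standard (Shannon) formulation of Fitts’ law that the thesis generalises expresses task difficulty in bits and throughput in bits per second; the multidimensional extension itself is not reproduced here. A minimal sketch with illustrative values:

```python
# Minimal sketch of the standard one-dimensional Fitts' law and the
# throughput measure built on it. Numbers are illustrative.
import math

def index_of_difficulty(distance, width):
    """ID in bits for a target of a given width at a given distance:
    ID = log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Throughput in bits/s: task difficulty over observed movement time."""
    return index_of_difficulty(distance, width) / movement_time_s

# E.g. a 10-unit-wide target 300 units away, reached in 0.8 s:
print(throughput(300, 10, 0.8))   # ~6.2 bits/s
```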

    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, 28–31 May 2019, and was organized by the Application of Information and Communication Technologies Research group (ATIC) of the University of Malaga (UMA). The associated SMC 2019 Summer School took place 25–28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest covered a wide selection of areas related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, and more.

    A perceptual sound space for auditory displays based on sung-vowel synthesis

    When designing displays for the human senses, perceptual spaces are of great importance in giving intuitive access to physical attributes. Similar to how perceptual spaces based on hue, saturation, and lightness were constructed for visual color, research has explored perceptual spaces for sounds of a given timbral family based on timbre, brightness, and pitch. To promote an embodied approach to the design of auditory displays, we introduce the Vowel-Type-Pitch (VTP) space, a cylindrical sound space based on human sung vowels, whose timbres can be synthesized by the composition of acoustic formants and can be categorically labeled. Vowels are arranged along the circular dimension, while the voice type and pitch of the vowel correspond to the remaining two axes of the cylindrical VTP space. The decoupling and perceptual effectiveness of the three dimensions of the VTP space are tested through a vowel-labeling experiment, whose results are visualized as maps on circular slices of the VTP cylinder. We discuss implications for the design of auditory and multi-sensory displays that account for human perceptual capabilities.
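    As a rough illustration of how such a cylindrical space might be decoded into sound parameters, the sketch below maps angle to vowel, radius to voice type, and height to pitch. The assignment of voice type and pitch to the radial and vertical axes, the vowel set, and the formant values are assumptions and textbook approximations, not the paper’s synthesis model.

```python
# Hypothetical decoding of a cylindrical VTP coordinate into sound
# parameters. Axis assignments and formant values are assumptions.
import math

VOWELS = ["a", "e", "i", "o", "u"]                  # circular dimension
FORMANTS = {"a": (800, 1200), "e": (400, 2000),     # approximate F1/F2 in Hz
            "i": (300, 2300), "o": (500, 900), "u": (350, 800)}
VOICE_TYPES = ["bass", "tenor", "alto", "soprano"]  # assumed radial dimension

def decode_vtp(angle_rad, radius, height):
    """Map a point in the VTP cylinder to (vowel, voice type, pitch, F1/F2).
    radius and height are assumed normalised to [0, 1]."""
    sector = int(angle_rad % (2 * math.pi) / (2 * math.pi) * len(VOWELS))
    vowel = VOWELS[sector]
    voice = VOICE_TYPES[min(int(radius * len(VOICE_TYPES)),
                            len(VOICE_TYPES) - 1)]
    pitch = 80.0 * 2 ** (height * 3)   # height spans roughly three octaves
    return vowel, voice, pitch, FORMANTS[vowel]

print(decode_vtp(math.pi / 2, 0.7, 0.5))   # -> ('e', 'alto', ~226 Hz, ...)
```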