4 research outputs found

    Interacting with 3D Reactive Widgets for Musical Performance

    While virtual reality and 3D interaction open new prospects for musical performance, existing immersive virtual instruments are often limited to single-process instruments or musical navigation tools. We believe that immersive virtual environments may be used to design expressive and efficient multi-process instruments. In this paper we present the 3D reactive widgets. These graphical elements enable efficient and simultaneous control and visualization of musical processes. We then describe Piivert, a novel input device that we have developed to manipulate these widgets, and several techniques for 3D musical interaction.
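
    The core idea of a 3D reactive widget is a graphical object whose visual parameters both control and display a running sound process. The sketch below is a minimal illustration of that two-way binding, assuming illustrative class, parameter and mapping names; it is not the paper's implementation, and the Piivert-style manipulation is reduced to a plain method call.

    # Minimal sketch of a "3D reactive widget": a graphical object whose visual
    # parameters both drive and visualize a sound process. All names and value
    # ranges are illustrative assumptions, not taken from the paper.

    from dataclasses import dataclass, field

    @dataclass
    class ReactiveWidget3D:
        """A 3D object whose appearance is bound bidirectionally to a sound process."""
        position: tuple = (0.0, 0.0, 0.0)
        scale: float = 1.0          # e.g. mapped to amplitude
        color_hue: float = 0.0      # e.g. mapped to pitch
        roughness: float = 0.0      # e.g. mapped to noisiness
        sound_params: dict = field(default_factory=dict)

        def on_manipulate(self, scale=None, color_hue=None, roughness=None):
            """Called when the performer grabs or hits the widget."""
            if scale is not None:
                self.scale = scale
                self.sound_params["amplitude"] = scale
            if color_hue is not None:
                self.color_hue = color_hue
                self.sound_params["pitch"] = 200.0 + 800.0 * color_hue
            if roughness is not None:
                self.roughness = roughness
                self.sound_params["noisiness"] = roughness

        def on_audio_analysis(self, amplitude, pitch, noisiness):
            """Reverse path: the running sound process updates the visuals."""
            self.scale = amplitude
            self.color_hue = (pitch - 200.0) / 800.0
            self.roughness = noisiness

    widget = ReactiveWidget3D()
    widget.on_manipulate(scale=0.8, color_hue=0.5)
    print(widget.sound_params)   # {'amplitude': 0.8, 'pitch': 600.0}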

    A Multidimensional Sketching Interface for Visual Interaction with Corpus-Based Concatenative Sound Synthesis

    The present research sought to investigate the correspondence between auditory and visual feature dimensions and to utilise this knowledge to inform the design of audio-visual mappings for visual control of sound synthesis. The first stage of the research involved the design and implementation of Morpheme, a novel interface for interaction with corpus-based concatenative synthesis. Morpheme uses sketching as a model for interaction between the user and the computer. The purpose of the system is to facilitate the expression of sound design ideas by describing the qualities of the sound to be synthesised in visual terms, using a set of perceptually meaningful audio-visual feature associations. The second stage of the research involved the preparation of two multidimensional mappings for the association between auditory and visual dimensions. The third stage involved the evaluation of the audio-visual (A/V) mappings and of Morpheme's user interface. The evaluation comprised two controlled experiments, an online study and a user study. Our findings suggest that the strength of the perceived correspondence between the A/V associations prevails over the timbre characteristics of the sounds used to render the complementary polar features. Hence, the empirical evidence gathered by previous research is generalizable and applicable to different contexts, and the overall dimensionality of the sounds used for rendering should not have a very significant effect on the comprehensibility and usability of an A/V mapping. However, the findings of the present research also show that there is a non-linear interaction between the harmonicity of the corpus and the perceived correspondence of the audio-visual associations. For example, strongly correlated cross-modal cues such as size-loudness or vertical position-pitch are affected less by the harmonicity of the audio corpus than more weakly correlated dimensions (e.g. texture granularity-sound dissonance). No significant differences were revealed as a result of musical or audio training. The third study consisted of an evaluation of Morpheme's user interface, in which participants were asked to use the system to design a sound for a given video footage; the usability of the system was found to be satisfactory. An interface for drawing visual queries was developed for high-level control of the retrieval and signal-processing algorithms of concatenative sound synthesis. This thesis elaborates on previous research findings and proposes two methods for empirically driven validation of audio-visual mappings for sound synthesis. These methods could be applied to a wide range of contexts to inform the design of cognitively useful multimodal interfaces and the representation and rendering of multimodal data. Moreover, this research contributes to the broader understanding of multimodal perception by gathering empirical evidence about the correspondence between auditory and visual feature dimensions and by investigating which factors affect the perceived congruency between aural and visual structures.
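
    To make the kind of mapping the abstract names more concrete, the sketch below shows one way visual sketch features could be turned into target audio descriptors and used to pick the closest unit from a corpus, in the spirit of corpus-based concatenative synthesis. The feature names, ranges, weighting and distance measure are assumptions for illustration only, not Morpheme's actual mapping or retrieval algorithm.

    # Hedged sketch: visual features of a sketch (size, vertical position,
    # texture granularity) mapped to target audio descriptors (loudness, pitch,
    # dissonance), followed by nearest-unit selection from a tiny corpus.
    # All names, ranges and weights are illustrative assumptions.

    import math

    def sketch_to_targets(stroke_size, vertical_pos, granularity):
        """Map normalised visual features (0..1) to target audio descriptors."""
        return {
            "loudness": stroke_size,                    # size-loudness association
            "pitch": 100.0 * 2 ** (vertical_pos * 4),   # vertical position-pitch (100-1600 Hz)
            "dissonance": granularity,                  # texture granularity-dissonance
        }

    def select_unit(corpus, targets, weights=None):
        """Pick the corpus unit whose descriptors are closest to the targets."""
        weights = weights or {k: 1.0 for k in targets}
        def distance(unit):
            return math.sqrt(sum(
                weights[k] * (unit[k] - targets[k]) ** 2 for k in targets))
        return min(corpus, key=distance)

    corpus = [
        {"file": "a.wav", "loudness": 0.2, "pitch": 220.0, "dissonance": 0.1},
        {"file": "b.wav", "loudness": 0.8, "pitch": 880.0, "dissonance": 0.6},
    ]
    targets = sketch_to_targets(stroke_size=0.7, vertical_pos=0.75, granularity=0.5)
    print(select_unit(corpus, targets)["file"])   # -> "b.wav"

    In practice the descriptors would need to be normalised to comparable scales before computing distances; the raw Hz values here are kept only to keep the sketch short.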

    The Seaboard: discreteness and continuity in musical interface design

    The production of acoustic music bridges two senses, touch and hearing, by connecting physical movements, gestures, and tactile interactions with the creation of sound. Mastery of acoustic music depends on the development and refinement of muscle memory and ear training in concert. This process leads to a capacity for great depth of expression even though the actual timbral palette of each given acoustic instrument is relatively limited. By contrast, modern modes of music creation involving recorded music and digital sound manipulation sacrifice this immediate bridge and substitute more abstract processes that enable sonic possibilities extending far beyond the acoustic palette. Mastery of abstract approaches to music making does not necessarily rely on muscle memory or ear training, as many key processes do not need to happen in real time. This freedom from the limits of time and practiced physical manipulation radically increases the range of achievable sounds, rhythms and effects, but sometimes results in a loss of subtlety of expressiveness. This practice-based PhD asks whether it is possible, and if so how, to achieve an integration of relevant sensor technologies, design concepts, and formation techniques to create a new kind of musical instrument and sound creation tool that bridges this gap with a satisfying result for musicians and composers. In other words, can one create new, multi-dimensional interfaces which provide more effective ways to control the expressive capabilities of digital music creation in real time? In particular, can one build on the intuitive, logical, and well-known layout of the piano keyboard to create a new instrument that more fully enables both continuous and discrete approaches to music making? My research practice proposes a new musical instrument called the Seaboard, documents its invention, development, design, and refinement, and evaluates the extent to which it positively answers the above question. The Seaboard is a reinterpretation of the piano keyboard as a soft, continuous, wavelike surface that places polyphonic pitch bend, vibrato and continuous touch right at the musician's fingertips. The addition of new real-time parameters to a familiar layout means it combines the intuitiveness of the traditional instrument with some of the versatility of digital technology. Designing and prototyping the Seaboard to the point of successfully proving that a new synthesis between acoustic techniques and digital technologies is possible is shown to require significant coordination and integration of a range of technical disciplines. The research approach has been to build and refine a series of prototypes that successively grapple with the integration of these elements, whilst rigorously documenting the design issues, engineering challenges, and ultimate decisions that determine whether an intervention in the field of musical instrumentation is fruitful.
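
    The "discrete plus continuous" idea at the heart of this abstract can be illustrated with a small data model: each key press starts a discrete note, but per-note pitch bend and pressure remain continuously controllable for the note's whole lifetime. The sketch below is an assumption-laden illustration of such a model; it is not the Seaboard's actual protocol or firmware.

    # Sketch of a discrete-plus-continuous note model: a discrete chromatic
    # onset carrying continuous per-note expression. The message format and
    # field names are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class ContinuousNote:
        note_number: int              # discrete chromatic pitch, as on a keyboard
        velocity: float               # strike intensity at onset (0..1)
        bend_semitones: float = 0.0   # continuous per-note pitch bend
        pressure: float = 0.0         # continuous aftertouch-like pressure

        def frequency(self, a4=440.0):
            """Equal-tempered frequency including the continuous bend."""
            midi = self.note_number + self.bend_semitones
            return a4 * 2 ** ((midi - 69) / 12)

    # One finger: discrete onset, then continuous vibrato via small bend updates.
    note = ContinuousNote(note_number=60, velocity=0.9)
    for i in range(4):
        note.bend_semitones = 0.25 * ((-1) ** i)   # simple vibrato-like wiggle
        note.pressure = 0.5 + 0.1 * i
        print(round(note.frequency(), 2))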

    Combining Audiovisual Mappings for 3D Musical Interaction

    3D environments provide new possibilities for musical interaction. They allow musicians to manipulate and visualize large sets of sound processes associated with 3D objects by connecting graphical parameters to sound parameters. Several of these audiovisual mappings can be combined on a single 3D object. However, this raises the issues of the choice of these mappings and of their combinations. We conducted a user study with sixteen musicians to evaluate audiovisual mappings and their combinations in the context of 3D musical interaction. This user study is composed of three experiments. The first experiment investigates subjects' preferences for mappings between four perceptual sound parameters (amplitude, pitch, spectral centroid and noisiness) and ten graphical parameters, some of them specific to 3D environments. The second experiment focuses on the efficiency of single mappings in an audiovisual identification task. The results show almost no significant differences, but some tendencies, which may indicate that the choice of mapping scales is more important than the choice of the mappings themselves. The third experiment investigates the efficiency of mapping combinations. The results indicate no significant differences, which suggests that it may be possible to combine up to four audiovisual mappings on a single graphical object without any performance loss for musicians, provided the mappings do not disrupt each other.
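
    A combination of several single mappings on one object can be expressed as the union of independent parameter bindings, which is the spirit of what this study evaluates. The sketch below illustrates that idea; the particular sound-to-graphics pairings, scaling functions and ranges are assumptions, not the pairings tested in the study.

    # Sketch of combining up to four audiovisual mappings on one 3D object.
    # Each perceptual sound parameter (normalised 0..1) drives its own
    # graphical parameter; the pairings and scaling functions are assumptions.

    def clamp(x, lo=0.0, hi=1.0):
        return max(lo, min(hi, x))

    MAPPINGS = {
        "amplitude":         ("scale",      lambda v: 0.5 + v),     # louder = bigger
        "pitch":             ("height",     lambda v: 2.0 * v),     # higher pitch = higher up
        "spectral_centroid": ("brightness", lambda v: v),           # brighter timbre = brighter color
        "noisiness":         ("distortion", lambda v: v ** 2),      # noisier = more deformed mesh
    }

    def apply_mappings(sound_params, active=tuple(MAPPINGS)):
        """Combine the selected single mappings on one graphical object."""
        graphics = {}
        for sp in active:
            gp, fn = MAPPINGS[sp]
            graphics[gp] = fn(clamp(sound_params.get(sp, 0.0)))
        return graphics

    print(apply_mappings({"amplitude": 0.7, "pitch": 0.4,
                          "spectral_centroid": 0.9, "noisiness": 0.2}))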