1,700 research outputs found

    A hybrid keyboard-guitar interface using capacitive touch sensing and physical modeling

    This paper was presented at the 9th Sound and Music Computing Conference, Copenhagen, Denmark. This paper presents a hybrid interface based on a touch-sensing keyboard which gives detailed expressive control over a physically-modeled guitar. Physical modeling allows realistic guitar synthesis incorporating many expressive dimensions commonly employed by guitarists, including pluck strength and location, plectrum type, hand damping and string bending. Often, when a physical model is used in performance, most control dimensions go unused because the interface fails to provide a way to control them intuitively. Techniques as foundational as strumming lack a natural analog on the MIDI keyboard, and few digital controllers provide the independent control of pitch, volume and timbre that even novice guitarists achieve. Our interface combines gestural aspects of keyboard and guitar playing. Most dimensions of guitar technique are controllable polyphonically, some of them continuously within each note. Mappings are evaluated in a user study of keyboardists and guitarists, and the results demonstrate the interface's playability by performers of both instruments.
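    The paper's own physical model and mappings are not detailed in this abstract; as a purely illustrative sketch of plucked-string physical modeling, the Python fragment below uses the classic Karplus-Strong algorithm, with hypothetical pluck_strength and damping parameters standing in for the kinds of expressive dimensions mentioned (pluck strength, hand damping).

    import numpy as np

    def pluck_string(freq=110.0, sr=44100, duration=2.0,
                     pluck_strength=0.8, damping=0.996):
        """Minimal Karplus-Strong plucked string (illustrative sketch only,
        not the model described in the paper)."""
        n_samples = int(sr * duration)
        delay_len = int(sr / freq)               # delay-line length sets the pitch
        # Noise burst scaled by pluck strength stands in for the pluck gesture
        buf = pluck_strength * (2.0 * np.random.rand(delay_len) - 1.0)
        out = np.zeros(n_samples)
        for i in range(n_samples):
            out[i] = buf[i % delay_len]
            # Averaged, damped feedback lowpasses the travelling wave each period,
            # loosely analogous to hand damping on a real string
            buf[i % delay_len] = damping * 0.5 * (buf[i % delay_len]
                                                  + buf[(i + 1) % delay_len])
        return out

    # Example: a softly plucked, heavily damped low A
    signal = pluck_string(freq=110.0, pluck_strength=0.4, damping=0.99)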

    Enhancing the Visualization of Percussion Gestures by Virtual Character Animation

    A new interface for visualizing and analyzing percussion gestures is presented, proposing enhancements to existing motion capture analysis tools. This is achieved by offering a percussion gesture analysis protocol using motion capture. A virtual character dynamic model is then designed to take advantage of gesture characteristics, yielding improved gesture analysis through visualization and interaction cues of different types.

    Designing and Composing for Interdependent Collaborative Performance with Physics-Based Virtual Instruments

    Interdependent collaboration is a system of live musical performance in which performers can directly manipulate each other’s musical outcomes. While most collaborative musical systems implement electronic communication channels between players that allow for parameter mappings, remote transmissions of actions and intentions, or exchanges of musical fragments, they interrupt the energy continuum between gesture and sound, breaking our cognitive representation of gesture-to-sound dynamics. Physics-based virtual instruments allow for acoustically and physically plausible behaviors that are related to (and can be extended beyond) our experience of the physical world. They inherently maintain and respect a representation of the gesture-to-sound energy continuum. This research explores the design and implementation of custom physics-based virtual instruments for real-time interdependent collaborative performance. It leverages the inherently physically plausible behaviors of physics-based models to create dynamic, nuanced, and expressive interconnections between performers. Design considerations, criteria, and frameworks are distilled from the literature in order to develop three new physics-based virtual instruments and associated compositions intended for dissemination and live performance by the electronic music and instrumental music communities. Conceptual, technical, and artistic details and challenges are described, and reflections and evaluations by the composer-designer and performers are documented.
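    The instruments themselves are not described in this abstract; as a minimal sketch of the kind of building block used in physics-based sound models generally, the fragment below integrates a single mass-spring-damper driven by an excitation signal (hypothetical names and parameters), showing how a brief gestural impulse yields a physically plausible, decaying vibration.

    import numpy as np

    def mass_spring_damper(force, mass=0.001, stiffness=4000.0,
                           damping=0.02, sr=44100):
        """Semi-implicit Euler integration of one mass-spring-damper
        (a generic illustration, not the instruments from this research).
        `force` is a per-sample excitation, e.g. an impulse for a strike."""
        x, v = 0.0, 0.0                     # displacement and velocity
        dt = 1.0 / sr
        out = np.zeros(len(force))
        for i, f in enumerate(force):
            a = (f - stiffness * x - damping * v) / mass   # Newton's second law
            v += a * dt
            x += v * dt
            out[i] = x
        return out

    # Example: a single impulse excites the resonator, which rings and decays
    excitation = np.zeros(44100)
    excitation[0] = 1.0
    signal = mass_spring_damper(excitation)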

    Ontology of music performance variation

    Performance variation in rhythm determines the extent to which humans perceive and feel the effect of rhythmic pulsation and music in general. In many cases, these rhythmic variations can be linked to percussive performance. Such percussive performance variations are often absent in current percussive rhythmic models. The purpose of this thesis is to present an interactive computer model, called the PD-103, that simulates the micro-variations in human percussive performance. This thesis makes three main contributions to existing knowledge: firstly, by formalising a new method for modelling percussive performance; secondly, by developing a new compositional software tool called the PD-103 that models human percussive performance; and finally, by creating a portfolio of different musical styles to demonstrate the capabilities of the software. A large database of recorded samples is classified into zones based upon the vibrational characteristics of the instruments, to model timbral variation in human percussive performance. The degree of timbral variation is governed by principles of biomechanics and human percussive performance. A fuzzy logic algorithm is applied to analyse current and first-order sample selection in order to formulate an ontological description of music performance variation. Asynchrony values were extracted from recorded performances at three different performance skill levels to create "timing fingerprints" which characterise features unique to each percussionist. The PD-103 uses real performance timing data to determine asynchrony values for each synthesised note. The spectral content of the sample database forms a three-dimensional loudness/timbre space, intersecting instrumental behaviour with music composition. The reparameterisation of the sample database, following the analysis of loudness, spectral flatness, and spectral centroid, provides an opportunity to creatively explore the timbral variations inherent in percussion instruments. The PD-103 was used to create a music portfolio exploring different rhythmic possibilities, with a focus on meso-periodic rhythms common to parts of West Africa, jazz drumming, and electroacoustic music. The portfolio also includes new timbral percussive works based on spectral features and demonstrates the central aim of this thesis, which is the creation of a new compositional software tool that integrates human percussive performance and subsequently extends this model to different genres of music.
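    The abstract names loudness, spectral flatness, and spectral centroid as the axes of the sample database's loudness/timbre space; the fragment below sketches the standard definitions of the two spectral features (this is not the PD-103 code, and the windowing choice is an assumption).

    import numpy as np

    def spectral_features(samples, sr=44100):
        """Spectral centroid and flatness of a mono sample, by their
        standard definitions (illustrative, not the PD-103 implementation)."""
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sr)
        power = spectrum ** 2 + 1e-12          # small offset avoids log(0)

        # Centroid: amplitude-weighted mean frequency, a correlate of brightness
        centroid = np.sum(freqs * spectrum) / np.sum(spectrum)

        # Flatness: geometric mean over arithmetic mean of the power spectrum;
        # near 1 for noise-like sounds, near 0 for strongly tonal sounds
        flatness = np.exp(np.mean(np.log(power))) / np.mean(power)
        return centroid, flatness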

    A design exploration on the effectiveness of vocal imitations

    Among sonic interaction design practices, rising interest is given to the use of the voice as a tool for producing fast and rough sketches. The goal of the EU project SkAT-VG (Sketching Audio Technologies using Vocalization and Gestures, 2014-2016) is to develop vocal sketching as a reference practice for sound design by (i) improving our understanding of how sounds are communicated through vocalizations and gestures, (ii) looking for physical relations between vocal sounds and sound-producing phenomena, and (iii) designing tools for converting vocalizations and gestures into parametrized sound models. We present the preliminary outcomes of a vocal sketching workshop held at the Conservatory of Padova, Italy. Research-through-design activities focused on how teams of potential designers make use of vocal imitations, and how morphological attributes of sound may inform the training of basic vocal techniques.

    Relative effectiveness of three diverse instructional conditions on seventh-grade wind band students' expressive musical performance

    Thesis (D.M.A.)--Boston University. In this study, the researcher examined the relative effectiveness of three diverse instructional conditions on seventh-grade wind band students' expressive musical performance: an aural model (AM), concrete musical instruction (CM), and verbal instruction using imagery/metaphor statements (MI). This study was based, in part, on Woody's (2006a) research, with adaptations to include developmentally appropriate instructional conditions for seventh-grade wind band students. In the AM condition, the aural model was recorded by an advanced pianist who synthesized elements from both the CM and MI conditions and exaggerated the expressive properties of loudness, tempo, and style/note duration. In the CM instructional condition, the researcher notated musical markings corresponding to the intended emotion on the printed score for three melodies. Finally, in the MI condition, high-quality examples of imagery/metaphor statements were gathered from experienced wind band instructors and the best-rated statement for each melody was utilized. Participants were enrolled in two seventh-grade wind band programs located in Cobb County, Georgia. Sixty randomly sampled seventh-grade wind band musicians participated in an expressive performance procedure (EPP) consisting of a pretest recording, an instructional condition, and a posttest recording, followed by computer analysis of the loudness, tempo, and style/note duration expressive properties. Data were analyzed through matched-pairs t-tests to determine whether the instructional conditions produced statistically significant differences (p < .05) from pretest to posttest scores on expressive music performance. Data were further analyzed using an ANCOVA and Tukey HSD post-hoc tests to examine statistically significant (p < .05) differences in the relative effectiveness of the three instructional conditions. The results of the matched-pairs t-tests indicated that the AM, CM, and MI instructional conditions each produced statistically significant changes in the mean difference score sets. Furthermore, the AM and MI conditions were found to be significantly (p < .005) more effective in affecting the mean difference score sets than the CM condition; however, the AM and MI conditions did not appear to be significantly more effective compared to one another. The analysis provided evidence supporting the notion that diverse instructional conditions may be effective alternatives for teaching expressive performance.
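    The matched-pairs t-test step described above is standard; the fragment below sketches its shape with hypothetical pretest/posttest scores for one condition (the study's actual data were computer-analysed loudness, tempo, and style/note duration measures, and the ANCOVA with Tukey HSD follow-up is not shown).

    import numpy as np
    from scipy import stats

    # Hypothetical pretest/posttest expressivity scores for one condition (e.g. AM)
    pretest = np.array([3.1, 2.8, 3.5, 2.9, 3.2, 3.0, 2.7, 3.4])
    posttest = np.array([3.6, 3.1, 3.9, 3.3, 3.5, 3.4, 3.0, 3.8])

    # Matched-pairs (paired-samples) t-test on the pretest-to-posttest change
    t_stat, p_value = stats.ttest_rel(posttest, pretest)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Pretest-to-posttest change is statistically significant at p < .05")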

    Algorithmic Compositional Methods and their Role in Genesis: A Multi-Functional Real-Time Computer Music System

    Algorithmic procedures have been applied in computer music systems to generate compositional products using conventional musical formalism, extensions of such formalism, and extra-musical disciplines such as mathematical models. This research investigates the applicability of such algorithmic methodologies for real-time musical composition, culminating in Genesis, a multi-functional real-time computer music system written for Mac OS X in the SuperCollider object-oriented programming language and contained in the accompanying DVD. Through an extensive graphical user interface, Genesis offers musicians the opportunity to explore the application of the sonic features of real-time sound-objects to designated generative processes via different models of interaction, such as unsupervised musical composition by Genesis and networked control of external Genesis instances. As a result of the applied interactive, generative and analytical methods, Genesis forms a unique compositional process, with a compositional product that reflects the character of the interactions between the sonic features of real-time sound-objects and its selected algorithmic procedures. Within this thesis, the technologies involved in the algorithmic methodologies used for compositional processes, and the concepts that define their constructs, are described, followed by a detailed account of their selection and application in Genesis; audio examples of algorithmic compositional methods are demonstrated on the accompanying DVD. To demonstrate the real-time compositional abilities of Genesis, free explorations with instrumentalists, along with studio recordings of the compositional processes available in Genesis, are presented in audiovisual examples contained in the accompanying DVD. The evaluation of the Genesis system’s capability to form a real-time compositional process, thereby maintaining real-time interaction between the sonic features of real-time sound-objects and its selected algorithmic compositional methods, focuses on existing evaluation techniques founded in HCI and the qualitative issues such evaluation methods present. In terms of the compositional products generated by Genesis, the challenges in quantifying and qualifying its compositional outputs are identified, demonstrating the intricacies of assessing generative methods of compositional processes and their impact on a resulting compositional product. The thesis concludes by considering further advances and applications of Genesis, and by inviting further dissemination of the Genesis system and promotion of research into evaluative methods of generative techniques, in the hope that this may provide additional insight into the relative success of products generated by real-time algorithmic compositional processes.
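    Genesis itself is written in SuperCollider and is not reproduced here; as a generic example of the kind of conventional algorithmic procedure the thesis refers to, the fragment below sketches a first-order Markov melody generator in Python (the transition table and note names are invented for illustration).

    import random

    # Hypothetical first-order Markov transition table over pitch names
    TRANSITIONS = {
        "C4": ["D4", "E4", "G4"],
        "D4": ["C4", "E4"],
        "E4": ["D4", "G4", "C5"],
        "G4": ["E4", "C5", "C4"],
        "C5": ["G4", "E4"],
    }

    def generate_melody(start="C4", length=16, seed=None):
        """Walk the transition table to produce a melody (illustration only)."""
        rng = random.Random(seed)
        note, melody = start, [start]
        for _ in range(length - 1):
            note = rng.choice(TRANSITIONS[note])
            melody.append(note)
        return melody

    print(generate_melody(seed=1))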