8,739 research outputs found

    Advanced Media Control Through Drawing: Using a graphics tablet to control complex audio and video data in a live context

    Get PDF
    This paper demonstrates the results of the authors’ Wacom tablet MIDI user interface. This application enables users’ drawing actions on a graphics tablet to control audio and video parameters in real time. The programming affords five degrees of concurrent control (x, y, pressure, x tilt, y tilt) for use in any audio or video software capable of receiving and processing MIDI data. Drawing gesture can therefore form the basis of dynamic control simultaneously in the auditory and visual realms. This creates a play of connections between parameters in both media, and illustrates a direct correspondence between drawing action and media transformation that is immediately apparent to viewers. The paper considers the connection between drawing technique and media control both generally and specifically, postulating that dynamic drawing in a live context creates a performance mode not dissimilar to performing on a musical instrument or conducting with a baton. The use of a dynamic and physical real-time media interface re-inserts body actions into live media performance in a compelling manner. Performers can learn to “draw/play” the graphics tablet as a musical and visual “instrument”, creating a new and uniquely idiomatic form of electronic drawing. The paper also discusses how to practically program the application and presents examples of its use as a media manipulation tool.
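    The core idea — five tablet dimensions fanned out as concurrent MIDI controllers — can be sketched in a few lines. This is not the authors' code; the CC numbers and axis ranges below are assumptions (typical drivers report normalized x/y/pressure in 0..1 and tilt in degrees, assumed here as -60..60).

```python
# Sketch: scale the five tablet degrees of freedom to 7-bit MIDI CC values.

def to_cc(value, lo, hi):
    """Clamp value to [lo, hi] and scale it to the MIDI CC range 0..127."""
    value = max(lo, min(hi, value))
    return round((value - lo) / (hi - lo) * 127)

# One hypothetical CC number per degree of freedom: (cc, range_lo, range_hi).
AXES = {
    "x":        (20, 0.0, 1.0),
    "y":        (21, 0.0, 1.0),
    "pressure": (22, 0.0, 1.0),
    "x_tilt":   (23, -60.0, 60.0),
    "y_tilt":   (24, -60.0, 60.0),
}

def tablet_to_midi(sample):
    """Turn one tablet sample (dict of axis -> raw value) into
    (cc_number, cc_value) pairs ready to send to any MIDI receiver."""
    return [(cc, to_cc(sample[axis], lo, hi))
            for axis, (cc, lo, hi) in AXES.items() if axis in sample]

print(tablet_to_midi({"x": 0.5, "pressure": 1.0, "x_tilt": 0.0}))
# prints [(20, 64), (22, 127), (23, 64)]
```

    Any MIDI-capable audio or video software can then map these CC streams onto synthesis or rendering parameters, which is what lets one drawing gesture drive both media at once.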

    Design Strategies for Adaptive Social Composition: Collaborative Sound Environments

    Get PDF
    In order to develop successful collaborative music systems a variety of subtle interactions need to be identified and integrated. Gesture capture, motion tracking, real-time synthesis, environmental parameters and ubiquitous technologies can each be effectively used for developing innovative approaches to instrument design, sound installations, interactive music and generative systems. Current solutions tend to prioritise one or more of these approaches, refining a particular interface technology, software design or compositional approach developed for a specific composition, performer or installation environment. Within this diverse field a group of novel controllers, described as ‘Tangible Interfaces’, have been developed. These are intended for use by novices and in many cases follow a simple model of interaction, controlling synthesis parameters through simple user actions. Other approaches offer sophisticated compositional frameworks, but many of these are idiosyncratic and highly personalised. As such they are difficult to engage with and ineffective for groups of novices. The objective of this research is to develop effective design strategies for implementing collaborative sound environments using key terms and vocabulary drawn from the available literature. This is articulated by combining an empathic design process with controlled sound perception and interaction experiments. The identified design strategies have been applied to the development of a new collaborative digital instrument. A range of technical and compositional approaches was considered to define this process, which can be described as Adaptive Social Composition.
    Dan Livingston

    Exciting Instrumental Data: Toward an Expanded Action-Oriented Ontology for Digital Music Performance

    Get PDF
    Musical performance using digital musical instruments has obfuscated the relationship between observable musical gestures and the resultant sound. This is due to the sound-producing mechanisms of digital musical instruments being hidden within the digital music making system. The difficulty in observing embodied artistic expression is especially true for musical instruments that are composed only of digital components. Despite this characteristic of digital music performance practice, this thesis argues that it is possible to bring digital musical performance further within our action-oriented ontology by understanding the digital musician through the lens of Lévi-Strauss’ notion of the bricoleur. Furthermore, by examining musical gestures with these instruments through a multi-tiered analytical framework that accounts for the physical computing elements necessarily present in all digital music making systems, we can further understand and appreciate the intricacies of digital music performance practice and culture.

    Gesture cutting through textual complexity: Towards a tool for online gestural analysis and control of complex piano notation processing

    Get PDF
    This project introduces a recently developed prototype for real-time processing and control of complex piano notation through the pianist’s gesture. The tool materializes an embodied cognition-influenced paradigm of pianists' interaction with complex notation (embodied or corporeal navigation), drawing on the latest developments in the computer music fields of musical representation (augmented and interactive musical scores via INScore) and of multimodal interaction (Gesture Follower). Gestural, video, audio and MIDI data are appropriately mapped on the musical score, turning it into a personalized, dynamic, multimodal tablature. This tablature may be used for efficient learning, performance and archiving, with potential applications in pedagogy, composition, improvisation and score following. The underlying metaphor for such a tool is that instrumentalists touch or cut through notational complexity using performative gestures, as much as they touch their own keyboards. Their action on the instrument forms an integral part of their understanding, which can be represented as a gestural processing of the notation. Beyond the applications already mentioned, new perspectives in piano performance of post-1945 complex notation and in musicology (the ‘performative turn’), as well as the emerging field of ‘embodied and extended cognition’, are indispensable to this project.

    Improvisatory music and painting interface

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (p. 101-104).
    Shaping collective free improvisations in order to obtain solid and succinct works with surprising and synchronized events is not an easy task. This thesis is a proposal towards that goal. It presents the theoretical, philosophical and technical framework of the Improvisatory Music and Painting Interface (IMPI) system: a new computer program for the creation of audiovisual improvisations performed in real time by ensembles of acoustic musicians. The coordination of these improvisations is obtained using a graphical language. This language is employed by one "conductor" in order to generate musical scores and abstract visual animations in real time. Doodling on a digital tablet following the syntax of the language allows both the creation of musical material with different levels of improvisatory participation from the ensemble and also the manipulation of the projected graphics in coordination with the music. The generated musical information is displayed in several formats on multiple computer screens that members of the ensemble play from. The digital graphics are also projected on a screen to be seen by an audience. This system is intended for a non-tonal, non-rhythmic, and texture-oriented musical style, which means that strong emphasis is put on the control of timbral qualities and continuum transitions. One of the main goals of the system is the translation of planned compositional elements (such as precise structure and synchronization between instruments) into the improvisatory domain. The graphics that IMPI generates are organic, fluid, vivid, dynamic, and unified with the music. The concept of controlled improvisation as well as the paradigm of the relationships between acoustic and visual material are both analyzed from an aesthetic point of view. The theoretical section is accompanied by descriptions of historic and contemporary works that have influenced IMPI.
    Hugo Solís García

    Tangible user interfaces : past, present and future directions

    Get PDF
    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Introduction to Gestural Similarity in Music. An Application of Category Theory to the Orchestra

    Full text link
    Mathematics, and more generally the computational sciences, intervene in several aspects of music. Mathematics describes the acoustics of sounds, giving formal tools to physics, and the matter of music itself in terms of compositional structures and strategies. Mathematics can also be applied to the entire making of music, from the score to the performance, connecting compositional structures to the acoustical reality of sounds. Moreover, the precise concept of gesture has a decisive role in understanding musical performance. In this paper, we apply some concepts of category theory to compare the gestures of orchestral musicians, and to investigate the relationship between orchestra and conductor, as well as between listeners and conductor/orchestra. To this aim, we introduce the concept of gestural similarity. The mathematical tools used can be applied to gesture classification, and to interdisciplinary comparisons between music and the visual arts.
    Comment: The final version of this paper has been published in the Journal of Mathematics and Music.
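    The categorical setup this abstract alludes to can be made slightly more concrete. The sketch below follows Mazzola's definition of gesture, on which the gestural-similarity literature builds; the notation is a hedged reconstruction, not necessarily the paper's own.

```latex
% A gesture is a morphism of directed graphs from a skeleton \Delta
% (a digraph of abstract movement arrows) into the spatial digraph
% \vec{X} of a topological space X, whose vertices are the points of X
% and whose arrows are continuous curves c : [0,1] \to X:
g \;\colon\; \Delta \longrightarrow \vec{X}.
% Two gestures g_1 : \Delta \to \vec{X} and g_2 : \Delta \to \vec{Y}
% sharing a skeleton can then be compared via a continuous map
% f : X \to Y whose induced digraph map closes the triangle:
\vec{f} \circ g_1 = g_2.
```

    On this reading, "gestural similarity" between, say, a violinist's bowing and the conductor's beat amounts to the existence of such a connecting morphism between their gesture spaces.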

    Moving sounds and sonic moves : exploring interaction quality of embodied music mediation technologies through a user-centered perspective

    Get PDF
    This research project deals with the user-experience related to embodied music mediation technologies. More specifically, adoption and policy problems surrounding new media (art) are considered, which arise from the usability issues that to date pervade new interfaces for musical expression. Since the emergence of new wireless mediators and control devices for musical expression, there is an explicit aspiration of the creative industries and various research centers to embed such technologies into different areas of the cultural industries. The number of applications and their uses have exponentially increased over the last decade. Conversely, many of the applications to date still suffer from severe usability problems, which not only hinder the adoption by the cultural sector, but also make culture participants take a rather cautious, hesitant, or even downright negative stance towards these technologies. Therefore, this thesis takes a vantage point that is in part sociological in nature, yet has a link to cultural studies as well. It combines this with a musicological frame of reference to which it introduces empirical user-oriented approaches, predominantly taken from the field of human-computer-interaction studies. This interdisciplinary strategy is adopted to cope with the complex nature of digital embodied music controlling technologies. Within the Flanders cultural (and creative) industries, opportunities of systems affiliated with embodied interaction are created and examined. 
    This constitutes an epistemological jigsaw that looks into 1) “which stakeholders require what various levels of involvement, what interactive means and what artistic possibilities?”, 2) “the way in which artistic aspirations, cultural prerequisites and operational necessities of (prospective) users can be defined?”, 3) “how functional, artistic and aesthetic requirements can be accommodated?”, and 4) “how quality of use and quality of experience can be achieved, quantified, evaluated and, eventually, improved?”. Within this multi-faceted problem, the eventual aim is to assess the applicability of the aforesaid technology, on both a theoretically and empirically sound basis, and to facilitate widening and enhancing the adoption of said technologies. Methodologically, this is achieved by 1) applied experimentation, 2) interview techniques, 3) self-reporting and survey research, 4) usability evaluation of existing devices, and 5) human-computer interaction methods applied – and attuned – to the specific case of embodied music mediation technologies. Within that scope, concepts related to usability, flow, presence, goal assessment and game enjoyment are scrutinized and applied, and both task- and experience-oriented heuristics and metrics are developed and tested. In the first part, covering three chapters, the general context of the thesis is given. In the first chapter, an introduction to the topic is offered and the current problems are enumerated. In the second chapter, a broader theoretical background is presented of the concepts that underpin the project, namely 1) the paradigm of embodiment and its connection to musicology, 2) a state of the art concerning new interfaces for musical expression, 3) an introduction into HCI-usability and its application domain in systematic musicology, 4) an insight into user-centered digital design procedures, and 5) the challenges brought about by e-culture and digitization for the cultural-creative industries.
    In the third chapter, the state of the art concerning the methodologies available to the thesis’ endeavor is discussed, a set of literature-based design guidelines is enumerated, and from this a conceptual model is deduced which is gradually presented throughout the thesis, and fully deployed in the “SoundField”-project (as described in Chapter 9). The following chapters, contained in the second part of the thesis, give a quasi-chronological overview of how methodological concepts have been applied throughout the empirical case studies, aimed specifically at the exploration of the various aspects of the complex status quaestionis. In the fourth chapter, a series of application-based tests, predominantly revolving around interface evaluation, illustrate the complex relation between gestural interfaces and meaningful musical expression, advocating a more user-centered development approach. In the fifth chapter, a multi-purpose questionnaire dubbed “What Moves You” is discussed, which aimed at creating a survey of the (prospective) end-users of embodied music mediation technologies. It primarily focused on cultural background, musical profile and preferences, views on embodied interaction, literacy of and attitudes towards new technology, and participation in digital culture. In the sixth chapter, the ethnographical studies that accompanied the exhibition of two interactive art pieces, entitled "Heart as an Ocean" & "Lament", are discussed. In these studies, the use of interview and questionnaire methodologies together with the presentation and reception of interactive art pieces are probed. In the seventh chapter, the development of the collaboratively controlled music-game “Sync-In-Team” is presented, in which interface evaluation, presence, game enjoyment and goal assessment are the pivotal topics.
    In the eighth chapter, two usability studies are considered that were conducted on prototype systems/interfaces, namely a heuristic evaluation of the “Virtual String” and a usability metrics evaluation of the “Multi-Level Sonification Tool”. The findings of these two studies, in conjunction with the exploratory studies performed in association with the interactive art pieces, finally gave rise to the “SoundField”-project, which is recounted in full throughout the ninth chapter. The integrated participatory design and evaluation method presented in the conceptual model is fully applied over the course of the “SoundField”-project, in which technological opportunities and ecological validity and applicability are investigated through user-informed development of numerous use cases. The third and last part of the thesis renders the final conclusions of this research project. The tenth chapter sets out with an epilogue in which a brief overview is given of how the state of the art has evolved since the end of the project (as the research ended in 2012, but the research field has obviously moved on), and attempts to consolidate the implications of the research studies with some of the realities of the Flemish cultural-creative industries. Chapter eleven continues by discussing the strengths and weaknesses of the conceptual model throughout the various stages of the project. It also comprises the evaluation of the hypotheses, how the assumptions that were made held up, and how the research questions eventually could be assessed. Finally, the twelfth and last chapter concludes with the most important findings of the project. It also discusses some of the implications for cultural production and artistic research policy, and offers an outlook on future research beyond the scope of the “SoundField” project.

    Not All Gestures Are Created Equal: Gesture and Visual Feedback in Interaction Spaces.

    Full text link
    As multi-touch mobile computing devices and open-air gesture sensing technology become increasingly commoditized and affordable, they are also becoming more widely adopted. It has therefore become necessary to create interaction designs specifically for gesture-based interfaces to meet the growing needs of users. However, a deeper understanding of the interplay between gesture and visual and sonic output is needed to make meaningful advances in design. This thesis addresses this crucial step by investigating the interrelation between gesture-based input and visual representation and feedback in gesture-driven creative computing. This thesis underscores that not all gestures are created equal, and that multiple factors affect their performance. For example, a drag gesture in a visual programming scenario performs differently than in a target acquisition task. The work presented here (i) examines the role of visual representation and mapping in gesture input, (ii) quantifies user performance differences in gesture input to examine the effect of multiple factors on gesture interactions, and (iii) develops tools and platforms for exploring visual representations of gestures. A range of gesture spaces and scenarios, from continuous sound control with open-air gestures to mobile visual programming with discrete gesture-driven commands, was assessed. Findings from this thesis reveal a rich space of complex interrelations between gesture input and visual feedback and representations. The contributions of this thesis also include the development of an augmented musical keyboard with 3-D continuous gesture input and projected visualization, as well as a touch-driven visual programming environment for interactively constructing dynamic interfaces.
    These designs were evaluated in a series of user studies in which gesture-to-sound mapping was found to have a significant effect on user performance, along with other factors such as the choice of visual representation and device size. A number of counter-intuitive findings point to potentially complex interactions between factors such as device size, task and scenario, which exposes the need for further research. For example, the size of the device was found to have contradictory effects in two different scenarios. Furthermore, this work presents a multi-touch gestural environment to support the prototyping of gesture interactions.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113456/1/yangqi_1.pd
    • 

    corecore