
    Not All Gestures Are Created Equal: Gesture and Visual Feedback in Interaction Spaces.

    As multi-touch mobile computing devices and open-air gesture sensing technology become increasingly commoditized and affordable, they are also becoming more widely adopted. It has become necessary to create new interaction designs specifically for gesture-based interfaces to meet the growing needs of users. However, a deeper understanding of the interplay between gesture and visual and sonic output is needed to make meaningful advances in design. This thesis addresses this crucial step by investigating the interrelation between gesture-based input and visual representation and feedback in gesture-driven creative computing. It underscores that not all gestures are created equal: multiple factors affect their performance. For example, a drag gesture in a visual programming scenario performs differently than in a target acquisition task. The work presented here (i) examines the role of visual representation and mapping in gesture input, (ii) quantifies user performance differences in gesture input to examine the effect of multiple factors on gesture interactions, and (iii) develops tools and platforms for exploring visual representations of gestures. A range of gesture spaces and scenarios, from continuous sound control with open-air gestures to mobile visual programming with discrete gesture-driven commands, was assessed. Findings from this thesis reveal a rich space of complex interrelations between gesture input and visual feedback and representation. Its contributions also include the development of an augmented musical keyboard with 3-D continuous gesture input and projected visualization, as well as a touch-driven visual programming environment for interactively constructing dynamic interfaces. These designs were evaluated in a series of user studies in which gesture-to-sound mapping was found to have a significant effect on user performance, along with other factors such as the choice of visual representation and device size. A number of counterintuitive findings point to potentially complex interactions between factors such as device size, task, and scenario, exposing the need for further research. For example, the size of the device was found to have contradictory effects in two different scenarios. Furthermore, this work presents a multi-touch gestural environment to support the prototyping of gesture interactions.
    PhD. Computer Science and Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113456/1/yangqi_1.pd
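
    To make the mapping question concrete, the following minimal sketch contrasts two ways the same 3-D gesture data could be mapped to sound parameters. The interfaces, ranges, and functions here are illustrative assumptions, not designs from the thesis:

    ```typescript
    // Hypothetical sketch of two gesture-to-sound mapping strategies. Neither
    // function is from the thesis; they only illustrate that the same gesture
    // data can drive sound in structurally different ways.

    interface Gesture3D { x: number; y: number; z: number } // normalized 0..1
    interface SoundParams { pitchHz: number; amplitude: number; brightness: number }

    // One-to-one: each gesture axis controls exactly one sound parameter.
    function mapOneToOne(g: Gesture3D): SoundParams {
      return {
        pitchHz: 110 + g.x * 770,   // horizontal position -> pitch (110..880 Hz)
        amplitude: g.y,             // height -> loudness
        brightness: g.z,            // depth -> filter brightness
      };
    }

    // Many-to-one: several axes jointly shape a single parameter, trading
    // directness for potentially richer control.
    function mapManyToOne(g: Gesture3D): SoundParams {
      const energy = Math.hypot(g.x, g.y, g.z) / Math.sqrt(3); // overall "effort"
      return {
        pitchHz: 110 + energy * 770,
        amplitude: energy,
        brightness: (g.x + g.z) / 2,
      };
    }

    console.log(mapOneToOne({ x: 0.5, y: 0.8, z: 0.2 }));
    console.log(mapManyToOne({ x: 0.5, y: 0.8, z: 0.2 }));
    ```

    The point is only that structurally different mappings yield different control behavior; differences of this kind are what the thesis's user studies quantify.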

    Improving User Involvement Through Live Collaborative Creation

    Creating an artifact - such as writing a book, developing software, or performing a piece of music - is often limited to those with domain-specific experience or training. As a consequence, effectively involving non-expert end users in such creative processes is challenging. This work explores how computational systems can facilitate collaboration, communication, and participation in the context of involving users in the process of creating artifacts while mitigating the challenges inherent to such processes. In particular, the interactive systems presented in this work support live collaborative creation, in which artifact users collaboratively participate in the artifact creation process with creators in real time. In the systems that I have created, I explored liveness, the extent to which the process of creating artifacts and the state of the artifacts are immediately and continuously perceptible, for applications such as programming, writing, music performance, and UI design. Liveness helps preserve natural expressivity, supports real-time communication, and facilitates participation in the creative process. Live collaboration is beneficial for users and creators alike: making the process of creation visible encourages users to engage in the process and better understand the final artifact. Additionally, creators can receive immediate feedback in a continuous, closed loop with users. Through these interactive systems, non-expert participants help create such artifacts as GUI prototypes, software, and musical performances. This dissertation explores three topics: (1) the challenges inherent to collaborative creation in live settings, and computational tools that address them; (2) methods for reducing the barriers of entry to live collaboration; and (3) approaches to preserving liveness in the creative process, affording creators more expressivity in making artifacts and affording users access to information traditionally available only in real-time processes. In this work, I show that enabling collaborative, expressive, and live interactions in computational systems allows the broader population to take part in various creative practices.
    PhD. Computer Science & Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145810/1/snaglee_1.pd

    Prototyping of Ubiquitous Music Ecosystems

    This paper focuses on the prototyping stage of the design cycle of ubiquitous music (ubimus) ecosystems. We present three case studies of prototype deployments for creative musical activities. The first case exemplifies a ubimus system for synchronous musical interaction using a hybrid Java-JavaScript development platform, mow3s-ecolab. The second case study uses the HTML5 Web Audio library to implement a loop-based sequencer. The third prototype - an HTML-controlled sine-wave oscillator - provides an example of using Portable Native Client (PNaCl), the Chromium project's open-source sandboxing technology, for audio programming on the web. This new approach involved porting the Csound language and audio engine to PNaCl. The Csound PNaCl environment provides programming tools for ubiquitous audio applications that go beyond the HTML5 Web Audio framework. The limitations and advantages of the three approaches proposed - the hybrid Java/JavaScript environment, the HTML5 audio library, and the Csound PNaCl infrastructure - are discussed in the context of rapid prototyping of ubimus ecosystems.
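
    For reference, here is a minimal browser sketch of the building blocks behind the second and third cases, using only the standard Web Audio API. It is illustrative only and is not code from the paper's mow3s-ecolab, Web Audio, or Csound PNaCl prototypes:

    ```typescript
    // Minimal Web Audio sketch (browser): a sine-wave oscillator plus a
    // four-step loop sequencer scheduled against the audio clock. Note that
    // browsers require a user gesture (e.g. a click) before an AudioContext
    // is allowed to start producing sound.

    const ctx = new AudioContext();

    // One-shot sine tone, analogous to the oscillator case in the paper.
    function playSine(freqHz: number, startAt: number, durationSec: number): void {
      const osc = ctx.createOscillator();
      const gain = ctx.createGain();
      osc.type = "sine";
      osc.frequency.value = freqHz;
      gain.gain.value = 0.2;
      osc.connect(gain).connect(ctx.destination);
      osc.start(startAt);
      osc.stop(startAt + durationSec);
    }

    // Loop-based sequencing: schedule a repeating four-note pattern ahead of
    // time against ctx.currentTime, the usual Web Audio timing approach.
    const pattern = [220, 330, 440, 330]; // step frequencies in Hz
    const stepSec = 0.25;

    function scheduleLoop(loops: number): void {
      const t0 = ctx.currentTime + 0.1; // small lead time before the first note
      for (let i = 0; i < loops * pattern.length; i++) {
        playSine(pattern[i % pattern.length], t0 + i * stepSec, stepSec * 0.9);
      }
    }

    scheduleLoop(4); // play the pattern four times
    ```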

    Data-to-music API: Real-time data-agnostic sonification with musical structure models

    Presented at the 21st International Conference on Auditory Display (ICAD 2015), July 6-10, 2015, Graz, Styria, Austria.
    In sonification methodologies that aim to represent the underlying data accurately, musical or artistic approaches are often dismissed as not transparent, likely to distort the data, not generalizable, or not reusable for different data types. Scientific applications of sonification have therefore been hesitant to use approaches guided by artistic aesthetics and musical expressivity. All sonifications, however, may have musical effects on listeners, as our ears, trained by daily exposure to music, naturally tend to distinguish musical and non-musical sound relationships, such as harmony, rhythmic stability, or timbral balance. This study proposes to take advantage of the musical effects of sonification in a systematic manner. Data may be mapped to high-level musical parameters rather than one-to-one to low-level audio parameters. An approach to creating models that encapsulate modulatable musical structures is proposed in the context of the new DataToMusic JavaScript API. The API provides an environment for rapid development of data-agnostic sonification applications in a web browser, with a model-based modular musical structure system. The proposed model system is compared to existing sonification frameworks as well as music theory and composition models. Issues regarding the distortion of original data, transparency, and reusability of musical models are also discussed.
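
    The contrast between low-level and high-level mapping can be sketched generically. This is not the DataToMusic API itself; the scale, functions, and values below are assumptions for illustration:

    ```typescript
    // Illustrative contrast between low-level and high-level (musical) mapping
    // of the same data stream. Generic sketch, not the DataToMusic API.

    const data = [0.12, 0.57, 0.33, 0.91, 0.48]; // normalized input stream

    // Low-level mapping: data -> raw frequency in Hz. Faithful to the data,
    // but successive values bear no guaranteed musical relationship.
    const rawFreqs = data.map(v => 200 + v * 1800);

    // High-level mapping: data -> degrees of a musical scale, so successive
    // values always land on harmonically related pitches.
    const C_MAJOR = [0, 2, 4, 5, 7, 9, 11]; // semitone offsets from the tonic
    function toScalePitch(v: number, baseMidi = 60): number {
      const degree = Math.min(C_MAJOR.length - 1, Math.floor(v * C_MAJOR.length));
      return baseMidi + C_MAJOR[degree];
    }
    const midiToHz = (m: number): number => 440 * Math.pow(2, (m - 69) / 12);

    const musicalFreqs = data.map(v => midiToHz(toScalePitch(v)));

    console.log(rawFreqs.map(f => f.toFixed(1)));     // arbitrary frequencies
    console.log(musicalFreqs.map(f => f.toFixed(1))); // in-scale frequencies
    ```

    The second mapping illustrates the trade-off the paper discusses: it imposes musical structure at the cost of quantizing, and thus distorting, the original values.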

    Scanning Spaces: Paradigms for Spatial Sonification and Synthesis

    In 1962, Karlheinz Stockhausen's "Concept of Unity in Electronic Music" introduced a connection between the parameters of intensity, duration, pitch, and timbre using an accelerating pulse train. In 1973, John Chowning discovered that complex audio spectra could be synthesized by increasing vibrato rates past 20 Hz. In both cases, the notion of acceleration producing timbre was critical to the discovery. Although both composers also utilized sound spatialization in their works, spatial parameters were not unified with their synthesis techniques. This dissertation examines software studies and multimedia works involving the use of spatial and visual data to produce complex sound spectra. The culmination of these experiments, Spatial Modulation Synthesis, is introduced as a novel mathematical control paradigm for audio-visual synthesis, providing unified control of spatialization, timbre, and visual form using high-speed sound trajectories. The unique visual sonification and spatialization rendering paradigms of this dissertation necessitated the development of an original audio-sample-rate graphics rendering implementation, which, unlike typical multimedia frameworks, provides an exchange of audio-visual data without downsampling or interpolation.
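
    Chowning's observation can be reproduced with a few lines of classic frequency modulation. This is a generic sketch with assumed parameter values, not the dissertation's Spatial Modulation Synthesis code:

    ```typescript
    // Sketch of Chowning's observation: vibrato (frequency modulation) pushed
    // past ~20 Hz stops sounding like pitch wobble and instead creates audible
    // sidebands at fc +/- k*fm, i.e. new timbre. Parameter values are
    // illustrative assumptions.

    const SR = 44100;   // sample rate in Hz
    const fc = 440;     // carrier frequency in Hz
    const index = 2.0;  // modulation index (vibrato depth, in radians)

    // Classic FM: the modulator's output phase-modulates the carrier.
    function fmSample(t: number, fm: number): number {
      return Math.sin(2 * Math.PI * fc * t + index * Math.sin(2 * Math.PI * fm * t));
    }

    // Slow modulator (5 Hz): heard as vibrato around 440 Hz.
    // Fast modulator (100 Hz): heard as a complex spectrum with sidebands at
    // 440 +/- 100k Hz. Accelerating the modulator is what transforms timbre.
    const vibrato = Array.from({ length: SR }, (_, n) => fmSample(n / SR, 5));
    const timbre  = Array.from({ length: SR }, (_, n) => fmSample(n / SR, 100));

    console.log(vibrato.length, timbre.length); // one second of samples each
    ```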

    Towards a Practitioner Model of Mobile Music

    This practice-based research investigates the mobile paradigm in the context of electronic music, sound, and performance; it considers the idea of the mobile as a lens through which a new model of electronic music performance can be interrogated. This research explores mobile media devices as tools and modes of artistic expression in everyday contexts and situations. While many previous studies have tended to focus on the design and construction of new hardware and software systems, this research puts performance practice at the centre of its analysis. It builds a methodological and practical framework that draws upon theories of mobile-mediated aurality, rhetoric on the practice of walking, relational aesthetics, and urban and natural environments as sites for musical performance. The aim is to question the spaces commonly associated with electronic music – where it is situated, listened to, and experienced. This thesis concentrates on the creative use of existing systems built from generic mobile devices – smartphones, tablets and HD cameras – and commercially available apps. It describes the development, implementation, and evaluation of a self-contained performance system utilising digital signal processing apps and the interconnectivity of an inter-app routing system, an area of investigation that other research programmes have not addressed in any depth. The research's enquiries take place in dynamic and often unpredictable conditions, from navigating busy streets to the fold-down shelf on the back of a train seat, whether as a solo performer or in larger groups of players, working with musicians, non-musicians, and other participants. Along the way, it examines how ubiquitous mobile technology and the total access it affords might promote inclusivity and creativity through the cultural adhesive of mobile media. This research aims to explore how being mobile has unrealised potential to change the methods and experiences of making electronic music, to generate a new kind of performer identity, and, as a consequence, to lead towards a practitioner model of mobile music.

    Steps to an Ecology of Networked Knowledge and Innovation: Enabling new forms of collaboration among sciences, engineering, arts, and design

    SEAD network White Papers Report.
    The final White Papers (posted at http://seadnetwork.wordpress.com/white-paper-abstracts/final-white-papers/) represent a spectrum of interests advocating for transdisciplinarity among arts, sciences, and technologies. All authors submitted plans of action and identified stakeholders they perceived as instrumental in carrying out such plans. Together, the individual efforts gave the collection an international scope. An important characteristic of this collection is that the papers do not represent a collective aim toward an explicit initiative; rather, they offer a broad array of views on barriers faced and prospective solutions. In summary, the collected White Papers and associated Meta-analyses began as an effort to take the pulse of the SEAD community as broadly as possible. The ideas they generated provide a fruitful basis for gauging trends and challenges in facilitating the growth of the network and implementing future SEAD initiatives.
    National Science Foundation Grant No. 1142510. Additional funding was provided by the ATEC program at the University of Texas at Dallas and the Institute for Applied Creativity at Texas A&M University.

    Bodily Expression Support for Creative Dance Education by Grasping-Type Musical Interface with Embedded Motion and Grasp Sensors

    Dance has been made mandatory as one of the physical education courses in Japan because it can cultivate capacities for expression and communication. Among the several types of dance education, creative dance contributes especially to the cultivation of these capacities. However, creative dance requires certain skills as well as creativity, and these prerequisites are difficult to presuppose in beginner-level dancers without experience. We propose a novel supporting device for dance beginners that encourages creative dance performance by continuously generating musical sounds in real time in accordance with their bodily movements; it has embedded motion and grasp sensors developed for this purpose. Experiments to evaluate the effectiveness of the device were conducted with ten beginner-level dancers. Using the proposed device, the subjects demonstrated enhanced creative dance movements with greater variety, evaluated in terms of Laban dance movement description. They also performed with better accuracy and repeatability in a task in which they traced an imagined circular trajectory by hand. The proposed interface is thus effective in terms of creative dance activity and accuracy of motion generation for beginner-level dancers.
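
    A hypothetical sketch of the kind of continuous motion-to-sound mapping such a device might perform is shown below. The sensor frame, value ranges, and mapping are assumptions for illustration and are not taken from the paper:

    ```typescript
    // Hypothetical sketch: map one frame of embedded-sensor data to a note.
    // Bigger, faster gestures sound higher; grasp pressure drives loudness.
    // All names, units, and constants here are illustrative assumptions.

    interface SensorFrame {
      accel: [number, number, number]; // m/s^2 from the embedded motion sensor
      grasp: number;                   // normalized grasp pressure, 0..1
    }

    interface Note { pitchHz: number; loudness: number }

    function frameToNote(f: SensorFrame): Note {
      const [ax, ay, az] = f.accel;
      // Movement energy beyond gravity drives pitch.
      const energy = Math.max(0, Math.hypot(ax, ay, az) - 9.81);
      return {
        pitchHz: 220 + Math.min(energy, 20) * 33, // clamp to roughly 220..880 Hz
        loudness: f.grasp,
      };
    }

    console.log(frameToNote({ accel: [3, 12, 4], grasp: 0.7 }));
    ```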