
    miMic: The microphone as a pencil

    miMic, a sonic analogue of paper and pencil, is proposed: an augmented microphone for vocal and gestural sonic sketching. Vocalizations are classified and interpreted as instances of sound models, which the user can play with by vocal and gestural control. The physical device is based on a modified microphone with embedded inertial sensors and buttons. Sound models are selected by vocal imitations that are automatically classified, and each model is mapped to vocal and gestural features for real-time control. With miMic, the sound designer can explore a vast sonic space and quickly produce expressive sonic sketches, which may be turned into sound prototypes by further adjustment of model parameters.
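
    To make the interaction flow concrete, the following minimal Python sketch shows the two-phase pattern the abstract describes: a vocal imitation selects a sound model, and continuous vocal and gestural features then drive its parameters. All names here (classify_imitation, the feature and parameter keys) are illustrative assumptions, not taken from the miMic implementation.

        # Minimal sketch of the select-then-control flow: a vocal imitation
        # picks a sound model, then continuous features drive its parameters.
        # All names are hypothetical, not from the paper.
        from dataclasses import dataclass, field

        @dataclass
        class SoundModel:
            name: str
            params: dict = field(default_factory=dict)  # real-time controllable parameters

        MODELS = {
            "impact":   SoundModel("impact",   {"force": 0.0, "hardness": 0.5}),
            "friction": SoundModel("friction", {"pressure": 0.0, "speed": 0.0}),
        }

        def classify_imitation(audio) -> str:
            """Stand-in for the vocal-imitation classifier; returns a model key."""
            return "friction"  # placeholder decision

        def update(model, vocal, gesture):
            """Map vocal loudness and device tilt onto two model parameters (arbitrary mapping)."""
            keys = list(model.params)
            model.params[keys[0]] = vocal["rms"]      # e.g. loudness -> pressure/force
            model.params[keys[1]] = gesture["tilt"]   # e.g. device tilt -> speed/hardness

        model = MODELS[classify_imitation(audio=None)]             # selection phase
        update(model, vocal={"rms": 0.7}, gesture={"tilt": 0.3})   # per-frame control phase
        print(model.params)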

    Body as Instrument – Performing with Gestural Interfaces

    This paper explores the challenge of achieving nuanced control and physical engagement with gestural interfaces in performance. Performances with a prototype gestural performance system, Gestate, provide the basis for insights into the application of gestural systems in live contexts. These reflections stem from a performer's perspective, summarising the experience of prototyping and performing with augmented instruments that extend vocal or instrumental technique through gestural control. Successful implementation of rapidly evolving gestural technologies in real-time performance calls for new approaches to performing and musicianship, centred on a growing understanding of the body's physical and creative potential. For musicians hoping to incorporate gestural control seamlessly into their performance practice, a balance of technical mastery and kinaesthetic awareness is needed to adapt existing approaches to their own purposes. Within non-tactile systems, visual feedback mechanisms can support this process by providing explicit visual cues that compensate for the absence of haptic feedback. Experience gained through prototyping and performance can yield a deeper understanding of the broader nature of gestural control and the way in which performers inhabit their own bodies.

    An Expressive Multidimensional Physical Modelling Percussion Instrument

    This paper describes the design, implementation and evaluation of a digital percussion instrument with multidimensional polyphonic control of a real-time physical modelling system. The system utilises modular parametric control of different physical models, excitations and couplings, alongside continuous morphing and unique interaction capabilities, to explore and enhance expressivity and gestural interaction for a percussion instrument. Details of the instrument and audio engine are provided, together with an experiment that tested the real-time capabilities of the system and the expressive qualities of the instrument. Testing showed that advances in sensor technology have the potential to enhance creativity in percussive instruments and extend gestural manipulation, but that doing so will require well-designed and inherently complex mapping schemes.
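
    The continuous-morphing idea can be illustrated with a small Python sketch that interpolates between the parameter sets of two physical models under a single morph control. The parameter names and the linear crossfade are assumptions for illustration, not details from the paper.

        # Sketch of continuous morphing: interpolate between the parameter sets
        # of two resonator models as one morph control moves from 0 to 1.
        # Parameter names and the linear interpolation are assumptions.

        membrane = {"tension": 0.8, "damping": 0.2, "size": 0.5}
        plate    = {"tension": 0.4, "damping": 0.6, "size": 0.9}

        def morph(a: dict, b: dict, t: float) -> dict:
            """Linearly interpolate two parameter dicts; t=0 gives a, t=1 gives b."""
            return {k: (1.0 - t) * a[k] + t * b[k] for k in a}

        def on_strike(position: float, velocity: float, morph_amount: float) -> dict:
            """Combine per-strike excitation data with the morphed resonator parameters."""
            params = morph(membrane, plate, morph_amount)
            params["excitation_gain"] = velocity   # strike velocity -> excitation level
            params["strike_position"] = position   # strike position -> modal weighting
            return params

        print(on_strike(position=0.3, velocity=0.9, morph_amount=0.5))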

    Real-time gestural control of robot manipulator through Deep Learning human-pose inference

    With the rise of collaborative robots, human-robot interaction needs to be as natural as possible. In this work, we present a framework for real-time continuous motion control of a real collaborative robot (cobot) from gestures captured by an RGB camera. Using existing deep learning techniques, we obtain human skeletal pose information in both 2D and 3D. We use it to design a controller that makes the robot mirror the movements of a human arm or hand in real time.
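
    A minimal sketch of such a mirroring controller, assuming a hypothetical pose estimator and cobot interface (estimate_pose_3d, cobot.servo_to), might look like this in Python: each frame, the wrist keypoint is rescaled into the robot workspace and streamed as a motion target.

        # Sketch of the mirroring controller described above. estimate_pose_3d
        # and cobot are hypothetical stand-ins; the paper uses RGB-based
        # deep-learning pose inference.

        WORKSPACE_SCALE = 0.6   # metres of robot travel per metre of human motion (assumed)

        def to_robot_frame(wrist_xyz, origin_xyz):
            """Express the wrist position relative to a calibration origin, scaled."""
            return [WORKSPACE_SCALE * (w - o) for w, o in zip(wrist_xyz, origin_xyz)]

        def mirror_loop(camera, cobot, estimate_pose_3d, origin_xyz):
            while True:
                frame = camera.read()
                skeleton = estimate_pose_3d(frame)    # 3D keypoints from the RGB image
                if skeleton is None:
                    continue                          # no person detected this frame
                target = to_robot_frame(skeleton["wrist"], origin_xyz)
                cobot.servo_to(target)                # stream the target; the robot
                                                      # controller handles smoothing/limits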

    Altering speech synthesis prosody through real time natural gestural control

    A significant amount of research has been and continues to be undertaken into generating expressive prosody within speech synthesis. Separately, recent developments in HMM-based synthesis (specifically pHTS, developed at the University of Mons) provide a platform for reactive speech synthesis, able to react in real time to its surroundings or to user interaction. Considering both of these elements, this project explores whether it is possible to generate superior prosody in a speech synthesis system, using natural gestural controls, in real time. Building on previous work undertaken at The University of Edinburgh, a system is constructed in which a user may apply a variety of prosodic effects in real time through natural gestures, recognised by a Microsoft Kinect sensor. Gestures are recognised and prosodic adjustments made through a series of hand-crafted rules (based on data gathered from preliminary experiments), though machine learning techniques are also considered within this project and recommended for future iterations of the work. Two sets of formal experiments are implemented, both of which suggest that, with further development, the system may work successfully in a real-world environment. Firstly, user tests show that subjects can learn to control the device successfully, adding prosodic effects to the intended words in the majority of cases with practice. Results are likely to improve further as buffering issues are resolved. Secondly, listening tests show that the prosodic effects currently implemented significantly increase perceived naturalness, and in some cases are able to alter the semantic perception of a sentence in an intended way. Alongside this paper, a demonstration video of the project may be found on the accompanying CD, or online at http://tinyurl.com/msc-synthesis. The reader is advised to view this demonstration as a way of understanding how the system functions and sounds in action.
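
    A hand-crafted rule of the kind described might be sketched as follows in Python; the joint names, thresholds and pitch mapping are purely illustrative assumptions, not the project's actual rules.

        # Sketch of one hand-crafted gesture-to-prosody rule: raising a hand
        # above the shoulder scales up pitch on the word being synthesised.
        # Joint names, thresholds and scalings are assumptions.

        def prosody_adjustment(skeleton: dict) -> dict:
            """Map Kinect-style joint positions (y grows upward) to prosodic scaling."""
            lift = skeleton["hand_right"][1] - skeleton["shoulder_right"][1]
            if lift <= 0.0:
                return {"pitch_scale": 1.0, "duration_scale": 1.0}   # neutral
            return {
                "pitch_scale": 1.0 + min(lift, 0.5),   # up to +50% F0 for a fully raised hand
                "duration_scale": 1.2,                 # slightly lengthen the emphasised word
            }

        print(prosody_adjustment({"hand_right": (0.2, 1.6, 2.0),
                                  "shoulder_right": (0.2, 1.4, 2.0)}))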

    BitBox!: A case study interface for teaching real-time adaptive music composition for video games

    Real-time adaptive music is now well-established as a popular medium, largely through its use in video game soundtracks. Commercial packages, such as fmod, make the underlying technical methods freely available for use in educational contexts, making adaptive music technologies accessible to students. Writing adaptive music, however, presents a significant learning challenge, not least because it requires a different mode of thought, and tutor and learner may have few mutual points of connection in discovering and understanding the musical drivers, relationships and structures in these works. This article discusses the creation of ‘BitBox!’, a gestural music interface designed to deconstruct and explain the component elements of adaptive composition through interactive play. The interface was displayed at the Dare Protoplay games exposition in Dundee in August 2014. The initial proof-of-concept study proved successful, suggesting possible refinements in design and a broader range of applications.
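
    The core adaptive-composition mechanism such an interface deconstructs can be sketched in a few lines of Python: a single game-state value drives the gains of layered musical stems. The stem names and fade curve are illustrative assumptions, not details of BitBox! itself.

        # Sketch of vertical layering, a common adaptive-music technique:
        # a one-number game state ("intensity") drives stacked stem gains.

        STEMS = ["pads", "percussion", "lead"]   # layered from calm to intense

        def layer_gains(intensity: float) -> dict:
            """Fade each stem in over its own third of the 0..1 intensity range."""
            gains = {}
            for i, stem in enumerate(STEMS):
                lo = i / len(STEMS)   # where this stem starts fading in
                gains[stem] = max(0.0, min(1.0, (intensity - lo) * len(STEMS)))
            return gains

        print(layer_gains(0.2))   # mostly pads
        print(layer_gains(0.9))   # everything audible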

    Toward natural interaction in the real world: real-time gesture recognition

    Using a new hand tracking technology capable of tracking 3D hand postures in real time, we developed a recognition system for continuous natural gestures. By natural gestures, we mean those encountered in spontaneous interaction, rather than a set of artificial gestures chosen to simplify recognition. To date we have achieved 95.6% accuracy on isolated gesture recognition, and a 73% recognition rate on continuous gesture recognition, with data from three users and twelve gesture classes. We connected our gesture recognition system to Google Earth, enabling real-time gestural control of a 3D map. We describe the challenges of signal accuracy and signal interpretation presented by working in a real-world environment, and detail how we overcame them.
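
    A typical continuous-recognition loop of this kind can be sketched in Python as a sliding window plus debouncing before a command is sent to the map client; classify_window and the command labels are hypothetical stand-ins, not the authors' code.

        # Sketch of a continuous-recognition loop: classify a sliding window of
        # hand-posture frames, then debounce before emitting a map command.
        from collections import deque

        WINDOW = 30   # frames per classification window (assumed)
        HOLD = 5      # consecutive agreeing windows required before emitting

        def recognition_loop(frames, classify_window, send_command):
            """frames: iterable of hand-posture feature vectors, one per frame."""
            window = deque(maxlen=WINDOW)
            last, streak = None, 0
            for frame in frames:
                window.append(frame)
                if len(window) < WINDOW:
                    continue
                label = classify_window(list(window))   # e.g. "pan_left", "zoom_in"
                streak = streak + 1 if label == last else 1
                last = label
                if streak == HOLD and label != "rest":
                    send_command(label)                  # debounced gesture -> map command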