    LAMI: A gesturally controlled three-dimensional stage Leap (Motion-based) Audio Mixing Interface

    Interface designers are increasingly exploring alternative approaches to user input/control. LAMI is a Leap (Motion-based) AMI which takes a user’s hand gestures and maps them to a three-dimensional stage displayed on a computer monitor. Audio channels are visualised as spheres whose Y coordinate represents spectral centroid, and whose X and Z coordinates, controlled by hand position, represent pan and level respectively. Auxiliary send levels are controlled via wrist rotation and vertical hand position and are visually represented as dial-like arcs. The channel EQ curve is controlled by manipulating a lathed column visualisation. The design of LAMI followed an iterative design cycle, with candidate interfaces rapidly prototyped, evaluated and refined. LAMI was evaluated against Logic Pro X in a defined audio mixing task.
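    As an illustrative sketch of the mapping described above (assuming normalised palm coordinates and FFT magnitude frames; the function and parameter names are hypothetical, not taken from the paper):

        import numpy as np

        def spectral_centroid(magnitudes, freqs):
            # Magnitude-weighted mean frequency of one FFT frame.
            total = magnitudes.sum()
            return float((freqs * magnitudes).sum() / total) if total > 0 else 0.0

        def channel_sphere_position(palm_x, palm_z, magnitudes, freqs):
            # Map one channel to a sphere on the 3D stage: X (pan) and
            # Z (level) follow hand position; Y follows spectral centroid.
            pan = float(np.clip(palm_x, -1.0, 1.0))    # left/right placement
            level = float(np.clip(palm_z, 0.0, 1.0))   # depth as channel level
            nyquist = freqs[-1] if freqs[-1] > 0 else 1.0
            y = spectral_centroid(magnitudes, freqs) / nyquist  # 0..1 height
            return pan, y, level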

    Exploring the Container Metaphor for Equalisation Manipulation

    This paper presents the first stage in the design and evaluation of a novel container metaphor interface for equalisation control. The prototype system harnesses the Pepper's Ghost illusion to project, in mid-air, a holographic data visualisation of an audio track's long-term average and real-time frequency content as a deformable shape manipulated directly via hand gestures. The system uses HTML5, JavaScript and the Web Audio API in conjunction with a Leap Motion controller and a bespoke low-budget projection system. During subjective evaluation, users commented that the novel system was simpler and more intuitive to use than commercially established equalisation interface paradigms and best suited to creative, expressive and explorative equalisation tasks.
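    The prototype itself is built on the Web Audio API; purely as a conceptual sketch of combining a long-term average with real-time frequency content (written here in Python with NumPy, not the authors' code), an exponential moving average over FFT magnitude frames would suffice:

        import numpy as np

        class SpectrumModel:
            # Track real-time and long-term average magnitude spectra,
            # e.g. to drive a deformable EQ shape.
            def __init__(self, frame_size, smoothing=0.99):
                self.realtime = np.zeros(frame_size // 2 + 1)
                self.average = np.zeros(frame_size // 2 + 1)
                self.smoothing = smoothing  # closer to 1.0 = longer memory

            def update(self, frame):
                # frame: one window of time-domain samples.
                self.realtime = np.abs(np.fft.rfft(frame))
                # Exponential moving average approximates the long-term spectrum.
                self.average = (self.smoothing * self.average
                                + (1.0 - self.smoothing) * self.realtime)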

    Concepts for microgravity experiments utilizing gloveboxes

    The need for glovebox facilities on spacecraft in which microgravity materials-processing experiments are performed is discussed. Such facilities are currently being designed, and some of their capabilities are briefly described. A list of experiment concepts that would require or benefit from such facilities is presented.

    Master Hand Technology For The HMI Using Hand Gesture And Colour Detection

    Master Hand Technology uses different hand gestures and colors to give various commands for human-machine (here, computer) interfacing. Gesture recognition deals with the goal of interpreting human gestures via mathematical algorithms. Gestures made by users, with the help of a color band and/or body pose, in two or three dimensions, are translated by software/image processing into predefined commands; the computer then acts according to the command. A lot of work has already been done in this field, either by extracting the hand gesture alone or by extracting the hand with the help of color segmentation. In this project, both hand gesture extraction and color detection are used for better, faster, more robust, more accurate and real-time applications. Red, green and blue are most efficiently detected if the RGB color space is used; using the HSV color space, the approach can be extended to any number of colors. For hand gesture detection, the default background is captured and stored for further processing. By comparing each newly captured image with the background image and performing the necessary extraction and filtering, the hand portion can be extracted; different mathematical algorithms are then applied to detect different hand gestures. All of this work was done using MATLAB. By interfacing a portion of the master hand and/or a color to the mouse of a computer, the computer can be controlled just as with the mouse, and many virtual (augmented reality) or PC-based applications can then be developed (e.g. a calculator or paint program). It does not matter whether the system is within reach, but a camera linked to the system must be nearby. By showing different gestures with the master hand, the computer can be controlled remotely; if the camera can be set up online, the computer can be controlled even from a very distant place.
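    The pipeline described above (implemented by the authors in MATLAB) can be sketched in Python with OpenCV; the threshold and HSV range values below are illustrative assumptions, not the paper's settings:

        import cv2

        def extract_hand(frame, background):
            # Isolate the hand by differencing against the stored background frame.
            gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            gray_bg = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
            diff = cv2.absdiff(gray_frame, gray_bg)
            _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # assumed threshold
            return cv2.medianBlur(mask, 5)  # filter sensor noise from the mask

        def detect_color_band(frame):
            # Detect a red color band in HSV space (ranges are illustrative).
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            return cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))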

    The Gestural Control of Audio Processing

    Gesture-enabled devices have become so ubiquitous in recent years that commands such as ‘pinch to zoom in on an image’ are part of most people’s gestural vocabulary. Despite this, gestural interfaces have been used sparingly within the audio industry. The aim of this research project is to evaluate the effectiveness of a gestural interface for the control of audio processing; in particular, the ability of a gestural system to streamline workflow and rationalise the number of control parameters, thus reducing the complexity of human-computer interaction (HCI). A literature review of gestural technology explores the ways in which it can improve HCI, before focusing on areas of implementation in audio systems. Case studies of previous research projects were conducted to evaluate the benefits and pitfalls of gestural control over audio; the findings from these studies led to the scope of this project being limited to two-dimensional gestural control. An elicitation of gestural preferences was performed to identify expert users’ gestural associations. This data was used to compile a taxonomy of gestures and their most widely intuitive parameter mappings. A novel interface was then produced using a popular tablet computer, facilitating the control of equalisation, compression and gating. Objective testing determined the performance of the gestural interface in comparison to traditional WIMP (Windows, Icons, Menus, Pointer) techniques, thus producing a benchmark for the system under test. Further testing was carried out to observe the effects of graphical user interfaces (GUIs) in a gestural system, in particular the suitability of skeuomorphic (knobs and faders) designs in modern DAWs (Digital Audio Workstations). A novel visualisation method, deemed more suitable for gestural interaction, was proposed and tested. Semantic descriptors were explored as a means of further improving the speed and usability of gestural interfaces through the simultaneous control of multiple parameters; this rationalisation of control moves towards the implementation of gestural shortcuts and ‘continuous pre-sets’.
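    As a toy illustration of the semantic-descriptor idea, in which one gesture-driven value steers several processing parameters at once, a single ‘warmth’ control might be mapped as follows (all mappings here are hypothetical, not the project's):

        def apply_warmth(amount):
            # Map a single 0..1 'warmth' descriptor, e.g. driven by a 2D gesture,
            # onto several parameters simultaneously (illustrative mappings only).
            amount = max(0.0, min(1.0, amount))
            return {
                "low_shelf_gain_db": 6.0 * amount,       # boost lows as warmth rises
                "high_shelf_gain_db": -4.0 * amount,     # tame highs
                "compressor_ratio": 1.0 + 3.0 * amount,  # gentle to firmer compression
            }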