
    Issues and techniques for collaborative music making on multi-touch surfaces

    A range of systems exist for collaborative music making on multi-touch surfaces. Some have been highly successful, but there is currently no systematic way of designing them to maximise collaboration for a particular user group. We are particularly interested in systems that engage both novices and experts. As an initial attempt to analyse some of these issues clearly, we designed a simple application that allows groups of users to express themselves in collaborative music making using pre-composed materials. User studies were video recorded and analysed using two techniques derived from Grounded Theory and Content Analysis. A questionnaire was also administered and evaluated. Findings suggest that the application affords engaging interaction. Enhancements for collaborative music making on multi-touch surfaces are discussed. Finally, future work on the prototype is proposed to maximise engagement.

    Gsi demo: Multiuser gesture/speech interaction over digital tables by wrapping single user applications

    Most commercial software applications are designed for a single user working with a keyboard and mouse at an upright monitor. Our interest is in exploiting these systems so that they work over a digital table. Mirroring what people do when working over traditional tables, we want to allow multiple people to interact naturally with the tabletop application and with each other via rich speech and hand gesture interaction; we have applied this approach to commercial applications such as Google Earth, Warcraft III and The Sims. In this paper, we describe our underlying architecture: GSI Demo. First, GSI Demo creates a run-time wrapper around existing single-user applications: it accepts and translates speech and gestures from multiple people into a single stream of keyboard and mouse inputs recognized by the application. Second, it lets people use multimodal demonstration, instead of programming, to quickly map their own speech and gestures to these keyboard/mouse inputs. For example, continuous gestures are trained by saying "Computer, when I do (one finger gesture), you do (mouse drag)". Similarly, discrete speech commands can be trained by saying "Computer, when I say (layer bars), you do (keyboard and mouse macro)". The end result is that end users can rapidly transform single-user commercial applications into a multi-user, multimodal digital tabletop system.
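The wrapper idea described in this abstract — translating trained gesture/speech events from several people into one serialized stream of keyboard/mouse inputs — can be sketched as follows. This is an illustrative toy, not the GSI Demo implementation; all names (`InputMapper`, `train`, `translate`) are hypothetical.

```python
# Toy sketch of a GSI Demo-style run-time wrapper: mappings learned "by
# demonstration" translate each user's gesture/speech event into a single
# synthetic keyboard/mouse action for the wrapped single-user application.
from dataclasses import dataclass, field


@dataclass
class InputMapper:
    # event name -> synthetic keyboard/mouse input, recorded by demonstration
    bindings: dict = field(default_factory=dict)

    def train(self, event: str, synthetic_input: str) -> None:
        """Record a demonstrated mapping, e.g. 'one finger gesture' -> 'mouse drag'."""
        self.bindings[event] = synthetic_input

    def translate(self, user: str, event: str):
        """Translate one user's event into an action the application understands."""
        action = self.bindings.get(event)
        return (user, action) if action else None


mapper = InputMapper()
mapper.train("one finger gesture", "mouse drag")
mapper.train("layer bars", "keyboard+mouse macro")

# Events from multiple people collapse into one serialized input stream.
stream = [mapper.translate(u, e) for u, e in
          [("alice", "one finger gesture"), ("bob", "layer bars")]]
```

The key design point the abstract makes is that the wrapped application never knows about multiple users: it only ever sees the single merged input stream.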

    Toward natural interaction in the real world: real-time gesture recognition

    Using a new hand tracking technology capable of tracking 3D hand postures in real time, we developed a recognition system for continuous natural gestures. By natural gestures, we mean those encountered in spontaneous interaction, rather than a set of artificial gestures chosen to simplify recognition. To date we have achieved 95.6% accuracy on isolated gesture recognition, and a 73% recognition rate on continuous gesture recognition, with data from three users and twelve gesture classes. We connected our gesture recognition system to Google Earth, enabling real-time gestural control of a 3D map. We describe the challenges of signal accuracy and signal interpretation presented by working in a real-world environment, and detail how we overcame them.
    Funding: National Science Foundation (U.S.) (award IIS-1018055); Pfizer Inc.; Foxconn Technolog

    Using natural user interfaces to support synchronous distributed collaborative work

    Synchronous Distributed Collaborative Work (SDCW) occurs when group members work together at the same time from different places to achieve a common goal. Effective SDCW requires good communication, continuous coordination and shared information among group members. SDCW is possible because of groupware, a class of computer software systems that supports group work. Shared-workspace groupware systems provide a common workspace that aims to replicate aspects of a physical workspace shared among group members in a co-located environment. Such systems have failed to provide the same degree of coordination and awareness among distributed group members that exists in co-located groups, owing to the unintuitive interaction techniques they have incorporated. Natural User Interfaces (NUIs) focus on reusing natural human abilities such as touch, speech, gestures and proximity awareness to allow intuitive human-computer interaction. These interaction techniques could address the existing issues of groupware systems by breaking down the barrier between people and technology created by the interaction techniques currently utilised. The aim of this research was to investigate how NUI interaction techniques could be used to effectively support SDCW. An architecture for such a shared-workspace groupware system was proposed, and a prototype, called GroupAware, was designed and developed based on this architecture. GroupAware allows multiple users in distributed locations to simultaneously view and annotate text documents, and to create graphic designs in a shared workspace. Documents are represented as visual objects that can be manipulated through touch gestures. Group coordination and awareness are maintained through document updates via immediate workspace synchronization, user action tracking via user labels, and user availability identification via basic proxemic interaction. 
Members can communicate effectively via audio and video conferencing. A user study was conducted to evaluate GroupAware and determine whether NUI interaction techniques effectively supported SDCW. Ten groups of three members each participated in the study. High levels of performance, user satisfaction and collaboration demonstrated that GroupAware was an effective groupware system that was easy to learn and use, and that it effectively supported group work in terms of communication, coordination and information sharing. Participants gave highly positive comments about the system that further supported these results. The successful implementation of GroupAware and the positive results obtained from the user evaluation provide evidence that NUI interaction techniques can effectively support SDCW.
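The "immediate workspace synchronization" this abstract describes — every annotation update propagated at once so all members see the same document state — can be illustrated with a minimal sketch. This is not the GroupAware implementation; `SharedWorkspace` and its methods are hypothetical names, and a real system would push updates over the network rather than mutate in-process replicas.

```python
# Minimal sketch of immediate workspace synchronization: each member holds
# a replica of the shared document, and every annotation is applied to all
# replicas as soon as it is made, so the replicas never diverge.
class SharedWorkspace:
    def __init__(self):
        self.members = {}  # member name -> local replica (list of annotations)

    def join(self, member: str) -> None:
        """Add a member with an empty local replica."""
        self.members[member] = []

    def annotate(self, author: str, annotation: str) -> None:
        """Apply one annotation update to every member's replica immediately."""
        for replica in self.members.values():
            replica.append((author, annotation))


ws = SharedWorkspace()
ws.join("ann")
ws.join("ben")
ws.annotate("ann", "highlight paragraph 2")
```

After the update, both members hold identical replicas, which is the property the abstract attributes to immediate synchronization.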

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
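The three-stage pipeline described above (vision recognizer emitting gesture names, a scripting layer mapping them to application commands, audio feedback in place of a GUI) can be sketched as a simple dispatcher. This is an illustrative assumption about the architecture, not the actual Ambient Gestures code; `make_dispatcher` and the tone names are hypothetical.

```python
# Sketch of an Ambient Gestures-style pipeline: a recognized gesture name
# is routed to a script, and an audio cue (stand-in strings here) confirms
# or rejects the action, so no graphical interface is needed.
def make_dispatcher(scripts: dict):
    """Return a gesture handler plus a log of the audio feedback it emitted."""
    feedback_log = []

    def on_gesture(gesture: str):
        script = scripts.get(gesture)
        if script is None:
            feedback_log.append(("error-tone", gesture))  # unrecognized gesture
            return None
        result = script()
        feedback_log.append(("confirm-tone", gesture))  # audible confirmation
        return result

    return on_gesture, feedback_log


on_gesture, log = make_dispatcher({
    "swipe-left": lambda: "previous item",
    "swipe-right": lambda: "next item",
})
result = on_gesture("swipe-right")
```

Keeping the gesture-to-script mapping in a plain table mirrors the paper's separation between recognition and the scripting application that acts on it.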

    BubbleType: Enabling Text Entry within a Walk-Up Tabletop Installation
