37 research outputs found

    SketchWizard: Wizard of Oz Prototyping of Pen-based User Interfaces

    SketchWizard allows designers to create Wizard of Oz prototypes of pen-based user interfaces in the early stages of design. In the past, designers have been inhibited from participating in the design of pen-based interfaces because of the inadequacy of paper prototypes and the difficulty of developing functional prototypes. In SketchWizard, designers and end users share a drawing canvas between two computers, allowing the designer to simulate the behavior of recognition or other technologies. Special editing features are provided to help designers respond quickly to end-user input. This paper describes the SketchWizard system and presents two evaluations of our approach. The first is an early feasibility study in which Wizard of Oz was used to prototype a pen-based user interface. The second is a laboratory study in which designers used SketchWizard to simulate existing pen-based interfaces. Both studies showed that end users gave valuable feedback in spite of delays between end-user actions and wizard updates.
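
    As a rough illustration of the shared-canvas architecture described above, the sketch below forwards pen strokes between two machines as newline-delimited JSON over TCP so the wizard can intercept and edit them before echoing them back. The message format, port, and function names are assumptions made for this sketch; they are not SketchWizard's actual protocol.

    # Minimal sketch, assuming a hypothetical JSON-over-TCP stroke protocol.
    import json
    import socket

    PORT = 9400  # illustrative port, not from the paper

    def send_stroke(sock, points, author):
        """Serialize one pen stroke as a newline-delimited JSON message."""
        msg = {"type": "stroke", "author": author, "points": points}
        sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

    def wizard_relay(listen_port=PORT):
        """Wizard side: accept the end user's connection and echo strokes
        back; the wizard would edit or replace strokes here to simulate
        recognition before they reappear on the shared canvas."""
        with socket.create_server(("", listen_port)) as server:
            conn, _ = server.accept()
            buf = b""
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                buf += data
                while b"\n" in buf:
                    line, buf = buf.split(b"\n", 1)
                    stroke = json.loads(line)
                    # Wizard edits would be applied to `stroke` here.
                    conn.sendall((json.dumps(stroke) + "\n").encode("utf-8"))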

    Interaction in motion: designing truly mobile interaction

    The use of technology while mobile now takes place in many areas of people’s lives, in a wide range of scenarios: users cycle, climb, run, and even swim while interacting with devices. Conflict between locomotion and system use can reduce interaction performance and also the ability to move safely. We discuss the risks of such “interaction in motion”, which we argue make it desirable to design with locomotion in mind. To aid such design we present a taxonomy and framework based on two key dimensions: the relation of the interaction task to the locomotion task, and the degree to which a locomotion activity inhibits use of input and output interfaces. We accompany this with four strategies for interaction in motion. With this work, we ultimately aim to enhance our understanding of what being “mobile” actually means for interaction, and to help practitioners design truly mobile interactions.
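
    To make the framework's two dimensions concrete, here is a minimal sketch that encodes them as a data structure; the level names and example scenarios are illustrative assumptions for this sketch, not the authors' own labels.

    # Illustrative encoding of the two-dimensional design space (assumed levels).
    from dataclasses import dataclass
    from enum import Enum

    class TaskRelation(Enum):
        UNRELATED = 1   # e.g. texting while walking
        RELATED = 2     # e.g. a cycling computer showing cadence
        SAME = 3        # e.g. a game whose input is the locomotion itself

    class IOInhibition(Enum):
        NONE = 0
        PARTIAL = 1     # hands busy, but glances at a display are possible
        SEVERE = 2      # e.g. swimming: touch input and visual output impaired

    @dataclass
    class MotionScenario:
        activity: str
        relation: TaskRelation
        inhibition: IOInhibition

    runner = MotionScenario("running", TaskRelation.RELATED, IOInhibition.PARTIAL)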

    Opisthenar: hand poses and finger tapping recognition by observing back of hand using embedded wrist camera

    We introduce a vision-based technique to recognize static hand poses and dynamic finger tapping gestures. Our approach employs a camera on the wrist, with a view of the opisthenar (back of the hand) area. We envisage such cameras being included in a wrist-worn device such as a smartwatch, fitness tracker or wristband. Indeed, selected off-the-shelf smartwatches now incorporate a built-in camera on the side for photography purposes. However, in this configuration, the fingers are occluded from the view of the camera. The oblique angle and placement of the camera make typical vision-based techniques difficult to adopt. Our alternative approach observes small movements and changes in the shape, tendons, skin and bones of the opisthenar area. We train deep neural networks to recognize both hand poses and dynamic finger tapping gestures. While this is a challenging configuration for sensing, we tested the recognition in a real-time user test and achieved recognition rates of 89.4% (static poses) and 67.5% (dynamic gestures). Our results further demonstrate that our approach can generalize across sessions and to new users: users can remove and replace the wrist-worn device, and new users can, to a certain degree, employ a previously trained system. We conclude by demonstrating three applications and suggesting future avenues of work based on sensing the back of the hand.
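
    The kind of pose classifier described above could be sketched along the following lines: a small convolutional network mapping back-of-hand camera frames to pose classes. The architecture, input size, and class count are assumptions for illustration only; this is not the paper's actual network.

    # Minimal sketch of a back-of-hand pose classifier (assumed architecture).
    import torch
    import torch.nn as nn

    class PoseNet(nn.Module):
        def __init__(self, n_classes=5):
            super().__init__()
            # Two conv/pool stages reduce a 64x64 grayscale frame to 16x16.
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):          # x: (batch, 1, 64, 64)
            x = self.features(x)
            return self.head(x.flatten(1))

    model = PoseNet()
    logits = model(torch.randn(8, 1, 64, 64))  # 8 frames -> 8 pose predictions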

    WRIST: Watch-Ring Interaction and Sensing Technique for wrist gestures and macro-micro pointing

    Funding: Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT (NRF-2017M3C4A7066316) and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-01270, WISE AR UI/UX Platform Development for Smartglasses).

    To better explore the incorporation of pointing and gesturing into ubiquitous computing, we introduce WRIST, an interaction and sensing technique that leverages the dexterity of human wrist motion. WRIST employs a sensor fusion approach that combines inertial measurement unit (IMU) data from a smartwatch and a smart ring. The relative orientation difference of the two devices is measured as the wrist rotation, which is independent of arm rotation and invariant to position and orientation. Using our test hardware, we demonstrate that WRIST affords a number of novel yet simple interaction techniques, such as (i) macro-micro pointing without explicit mode switching and (ii) wrist gesture recognition when the hand is held in different orientations (e.g., raised or lowered). We report on two studies evaluating the proposed techniques and present a set of applications that demonstrate the benefits of WRIST. We conclude with a discussion of the limitations and highlight possible future pathways for research in pointing and gesturing with wearable devices.
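
    The core fusion step lends itself to a short sketch: treating wrist rotation as the relative orientation between the ring's and the watch's IMU quaternions, so that arm rotation common to both devices cancels out. The (w, x, y, z) component order and the names below are assumptions for illustration.

    # Sketch of relative-orientation sensor fusion between two IMU quaternions.
    import numpy as np

    def quat_conj(q):
        """Conjugate (inverse for a unit quaternion), order (w, x, y, z)."""
        w, x, y, z = q
        return np.array([w, -x, -y, -z])

    def quat_mul(a, b):
        """Hamilton product of two quaternions."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def wrist_rotation(q_watch, q_ring):
        """Relative orientation of ring w.r.t. watch; shared arm motion cancels."""
        return quat_mul(quat_conj(q_watch), q_ring)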
