
    A comparison of surface and motion user-defined gestures for mobile augmented reality.

    Augmented Reality (AR) technology permits interaction between the virtual and physical worlds. Recent advancements in mobile devices allow for a better mobile AR experience, in turn improving the user adoption rate and increasing the number of mobile AR applications across a wide range of disciplines. Nevertheless, the majority of the mobile AR applications we surveyed adopt surface gestures as the default interaction method and do not utilise the three-dimensional (3D) spatial interaction that AR interfaces support. This research investigates two types of gestures for interacting with mobile AR applications: surface gestures, which mainstream applications already deploy, and motion gestures, which take advantage of the 3D movement of the handheld device. Our goal is to find out whether there exists a gesture-based interaction suitable for handheld devices that can exploit the 3D interaction space of mobile AR applications. We conducted two user studies: an elicitation study and a validation study. In the elicitation study, we elicited two sets of gestures, surface and motion, for mobile AR applications. We recruited twenty-one participants to perform twelve common mobile AR tasks, which yielded a total of five hundred and four gestures. We classified and illustrated the two sets of gestures and compared them in terms of goodness, ease of use, and engagement. The elicitation process yielded two separate sets of user-defined gestures: legacy surface gestures, which participants found familiar and easy to use, and motion gestures, which were found to be more engaging. From the design patterns of the motion gestures, we propose a novel interaction technique for mobile AR called TMR (Touch-Move-Release). To validate the elicited gestures in an actual application, we conducted a second study: we developed a mobile AR game similar to Pokémon GO and implemented the selected gestures from the elicitation study. The study was conducted with ten participants, and we found that the motion gestures provided more engagement and a better game experience, whereas the surface gestures were more accurate and easier to use. We discuss the implications of our findings and give design recommendations on the usage of the elicited gestures. Our research can serve as a "prequel" to the design of better gesture-based interaction techniques for different tasks in various mobile AR applications.
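    The abstract names TMR (Touch-Move-Release) but does not spell out its mechanics. Below is a minimal Python sketch of one plausible reading of the name: touch the screen to grab an AR object, move the handheld device in 3D to manipulate it, and release to commit. The state machine, the translate/rotate interface on the target object, and the callback names are illustrative assumptions, not the authors' published design.

        from enum import Enum, auto

        class TMRState(Enum):
            IDLE = auto()
            HOLDING = auto()  # finger is down on a target; device motion manipulates it

        class TMRController:
            # Sketch of a Touch-Move-Release loop; the semantics are assumed
            # from the technique's name, not taken from the paper.
            def __init__(self):
                self.state = TMRState.IDLE
                self.target = None

            def on_touch_down(self, hit_object):
                # Touch: grab the AR object under the finger, if any.
                if hit_object is not None:
                    self.state = TMRState.HOLDING
                    self.target = hit_object

            def on_device_pose_delta(self, delta_position, delta_rotation):
                # Move: while holding, map the handheld's 3D motion onto the
                # object (hypothetical translate/rotate methods on the target).
                if self.state is TMRState.HOLDING:
                    self.target.translate(delta_position)
                    self.target.rotate(delta_rotation)

            def on_touch_up(self):
                # Release: commit the manipulation and return to idle.
                self.state = TMRState.IDLE
                self.target = None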

    3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit

    We present 3DTouch, a novel 3D wearable input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap left by the absence of a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost solution to designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. The device employs touch input for the benefits of passive haptic feedback and movement stability; with touch interaction, 3DTouch is also conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to build on.
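    To make the relative positioning idea concrete, here is a small Python sketch of one fusion step under stated assumptions: the optical sensor reports a 2D displacement in the fingertip's local contact plane, and the IMU orientation lifts that displacement into world coordinates. The function name, frame conventions, and quaternion ordering are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.spatial.transform import Rotation

        def fuse_step(optical_dx_dy, imu_quat_xyzw, position):
            # One assumed fusion step: the optical sensor gives a 2D
            # displacement in the fingertip's local plane (it cannot move
            # along its own normal while in contact, hence z = 0), and the
            # 9-DOF IMU gives the fingertip's orientation.
            dx, dy = optical_dx_dy
            local_delta = np.array([dx, dy, 0.0])
            # Rotate the local displacement into the world frame using the
            # IMU orientation (quaternion in x, y, z, w order).
            world_delta = Rotation.from_quat(imu_quat_xyzw).apply(local_delta)
            return position + world_delta

        # Example: a fingertip pitched 90 degrees about x maps local +y
        # motion onto the world z axis.
        p = np.zeros(3)
        q = Rotation.from_euler("x", 90, degrees=True).as_quat()
        p = fuse_step((0.0, 1.0), q, p)
        print(p)  # approximately [0, 0, 1]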

    dWatch: a Personal Wrist Watch for Smart Environments

    Intelligent environments, such as smart homes or domotic systems, have the potential to support people in many of their ordinary activities by allowing complex control strategies for managing the various capabilities of a house or a building: lights, doors, temperature, power and energy, music, etc. Such environments typically provide these control strategies by means of computers, touch screen panels, mobile phones, tablets, or in-house displays. An unobtrusive wearable device, like a bracelet or a wrist watch, that lets users perform various operations in their homes and receive notifications from the environment could strengthen the interaction with such systems, in particular for people not accustomed to computer systems (e.g., the elderly) or in contexts where they are not in front of a screen. Moreover, such wearable devices reduce the technological gap introduced in the environment by home automation systems, thus permitting a higher level of acceptance in daily activities and improving the interaction between the environment and its inhabitants. In this paper, we introduce the dWatch, an off-the-shelf personal wearable notification and control device, integrated into an intelligent platform for domotic systems and designed to optimize the way people use the environment. It is built as a wrist watch so that it is easily accessible, worn by people on a regular basis, and unobtrusive.

    Direct combination: a new user interaction principle for mobile and ubiquitous HCI

    Direct Combination (DC) is a recently introduced user interaction principle. The principle (previously applied to desktop computing) can greatly reduce the degree of search, time, and attention required to operate user interfaces. We argue that Direct Combination applies particularly well to mobile computing devices, given appropriate interaction techniques, examples of which are presented here. The reduction in search afforded to users can be applied to address several issues in mobile and ubiquitous user interaction, including limited feedback bandwidth, minimal-attention situations, and the need for ad-hoc spontaneous interoperation and dynamic reconfiguration of multiple devices. When Direct Combination is extended and adapted to fit the demands of mobile and ubiquitous HCI, we refer to it as Ambient Combination (AC). Direct Combination allows the user to exploit objects in the environment to narrow down the range of interactions that need to be considered (by system and user). When the DC technique of pairwise or n-fold combination is applicable, it can greatly lessen the demands on users for memorisation and interface navigation. Direct Combination also appears to offer a new way of applying context-aware information. In this paper, we present Direct Combination as applied ambiently through a series of interaction scenarios, using an implemented prototype system.
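    The core mechanism the abstract describes, selecting a pair of objects to narrow the applicable operations, can be sketched in Python as a lookup from object pairs to actions. The registry contents, object names, and function below are invented for illustration; they are not the authors' prototype.

        # Hypothetical pairwise Direct Combination registry: each pair of
        # object types maps to the operations defined for that combination.
        DC_REGISTRY = {
            frozenset({"photo", "printer"}): ["print photo"],
            frozenset({"photo", "contact"}): ["send photo to contact"],
            frozenset({"document", "projector"}): ["project document"],
        }

        def combine(obj_a, obj_b):
            # Selecting two objects narrows the interaction space to the
            # actions registered for that pair, instead of one global menu.
            return DC_REGISTRY.get(frozenset({obj_a, obj_b}), [])

        print(combine("photo", "printer"))    # ['print photo']
        print(combine("printer", "photo"))    # order-independent: same result
        print(combine("photo", "projector"))  # [] -> no combination defined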

    Prototype gesture recognition interface for vehicular head-up display system


    Establishing the design knowledge for emerging interaction platforms

    In anticipation of a variety of innovative interactive products and services appearing in the market in the near future, such as interactive tabletops, interactive TVs, public multi-touch walls, and other embedded appliances, this paper calls for preparation for the arrival of such interactive platforms based on their interactivity. We advocate studying, understanding, and establishing the foundations for the interaction characteristics, affordances, and design implications of these platforms, which we know will soon emerge and penetrate our everyday lives. We review some of the archetypal interaction platform categories of the future and highlight, for each of these, the current status of the design knowledge base accumulated to date and its current rate of growth. We use example designs to illustrate design issues and considerations, based on the authors’ 12-year experience in pioneering novel applications in various forms and styles.