3,018 research outputs found

    Enabling single-handed interaction in mobile and wearable computing

    Mobile and wearable computing are increasingly pervasive as people carry and use personal devices in everyday life. Screen sizes of such devices are diverging to accommodate both intimate and practical uses: some mobile device screens are becoming larger to support new experiences (e.g., phablets, tablets, eReaders), whereas screens on wearable devices are becoming smaller so the devices can fit in more places (e.g., smartwatches, wristbands and eyewear). However, these trends make such devices difficult to use with only one hand, due to their placement, limited thumb reach and the fat-finger problem. This is especially true on the many occasions when a user's other hand is occupied (encumbered) or unavailable. This thesis work explores, creates and studies novel interaction techniques that enable effective single-handed use of mobile and wearable devices, empowering users to achieve more with their smart devices when only one hand is available.

    Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts

    The use of context in mobile devices is receiving increasing attention in mobile and ubiquitous computing research. In this article we consider how to augment mobile devices with awareness of their environment and situation as context. Most work to date has been based on the integration of generic context sensors, in particular for location and visual context. We propose a different approach based on the integration of multiple diverse sensors for awareness of situational context that cannot be inferred from location, targeted at mobile device platforms that typically do not permit processing of visual context. We have investigated multi-sensor context-awareness in a series of projects, and report experience from the development of a number of device prototypes. These include an awareness module for augmenting a mobile phone, the Mediacup, exemplifying context-enabled everyday artefacts, and the Smart-Its platform for aware mobile devices. The prototypes have been explored in various applications to validate the multi-sensor approach to awareness, and to develop new perspectives on how embedded context-awareness can be applied in mobile and ubiquitous computing.

    Exploring user-defined gestures for alternate interaction space for smartphones and smartwatches

    Spring 2016. Includes bibliographical references. In smartphones and smartwatches, the input space is limited due to their small form factor. Although many studies have highlighted the possibility of expanding the interaction space for these devices, limited work has been conducted on exploring end-user preferences for gestures in the proposed interaction spaces. In this dissertation, I present the results of two elicitation studies that explore end-user preferences for creating gestures in the proposed alternate interaction spaces for smartphones and smartwatches. Using the data collected from the two elicitation studies, I present gestures preferred by end-users for common tasks that can be performed using smartphones and smartwatches. I also present the end-user mental models for interaction in the proposed interaction spaces for these devices, and highlight common user motivations and preferences for suggested gestures. Based on the findings, I present design implications for incorporating the proposed alternate interaction spaces for smartphones and smartwatches.

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous 'in the environment' interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.

    WRISTBAND.IO: expanding input and output spaces of a Smartwatch

    Smartwatches are characterized by their small size, designed for wearability, discretion, and mobile interactions. Most of the interactivity, however, is limited to the size of the display, introducing issues such as screen occlusion and limited information density. We introduce Wristband.io, a smartwatch with an extended interaction space along the wristband, enabling (i) back-of-band interaction using a touchpad, (ii) a low-resolution ambient watchband display for off-screen notifications, and (iii) tangible buttons for quick, eyes-free input. Insights gained through a study show that back-of-band input increases accuracy and task completion rates for smaller on-screen targets. We probe the design space of Wristband.io with three applications.
