
    Enabling single-handed interaction in mobile and wearable computing

    Mobile and wearable computing are increasingly pervasive as people carry and use personal devices in everyday life. The screen sizes of such devices are diverging to accommodate both intimate and practical uses: some mobile device screens are becoming larger to accommodate new experiences (e.g., phablets, tablets, eReaders), whereas screens on wearable devices are becoming smaller so that they can fit into more places (e.g., smartwatches, wristbands and eyewear). However, these trends are making it difficult to use such devices with only one hand, due to their placement, limited thumb reach and the fat-finger problem. This is especially true on the many occasions when a user's other hand is occupied (encumbered) or otherwise unavailable. This thesis explores, creates and studies novel interaction techniques that enable effective single-handed usage of mobile and wearable devices, empowering users to achieve more with their smart devices when only one hand is available.

    Tangible UI by object and material classification with radar

    Radar signals penetrate, scatter, absorb and reflect energy into proximate objects; ground-penetrating and aerial radar systems are well established. We describe a highly accurate system that combines a monostatic radar (Google Soli) with supervised machine learning to support user interfaces based on object and material classification. Building on RadarCat techniques, we explore the development of tangible user interfaces without modifying the objects or requiring complex infrastructure. This affords new forms of interaction with digital devices, proximate objects and micro-gestures.
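
    As a minimal sketch of the classification step this describes, the Python snippet below trains a supervised classifier on hand-crafted features from raw radar captures. The feature choices, channel layout and helper names are illustrative assumptions, not the published RadarCat pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def radar_features(frame):
        # frame: (channels, samples) array of raw radar amplitudes for one capture (assumed layout)
        per_channel = []
        for ch in frame:
            spectrum = np.abs(np.fft.rfft(ch))[:8]  # a few low-frequency spectral bins
            per_channel.append(np.concatenate([[ch.mean(), ch.std(), ch.max(), ch.min()], spectrum]))
        return np.concatenate(per_channel)

    def train_material_classifier(frames, labels):
        # frames: list of radar captures; labels: material/object name per capture
        X = np.stack([radar_features(f) for f in frames])
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
        return clf.fit(X, labels)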

    Workshop on object recognition for input and mobile interaction

    Today we can see an increasing number of object recognition systems of very different sizes, portability, embeddability and form factors, which are starting to become part of the ubiquitous, tangible, mobile and wearable computing ecosystems we make use of in our daily lives. These systems rely on a variety of technologies, including computer vision, radar, acoustic sensing, tagging and smart objects. Such systems open up a wide range of new forms of touchless interaction. With systems deployed in mobile products and used with everyday objects found in the office or home, we can realise new applications and novel types of interaction. Object-based interactions might revolutionise how people interact with computers. Such a system could be used in conjunction with a mobile phone; for example, it could be trained to open a recipe app when you hold the phone to your stomach, or change its settings when operated with a gloved hand. Although the last few years have seen an increasing amount of research in this area, the subject remains under-explored and fragmented, and cuts across a set of related but heterogeneous issues. This workshop brings together researchers and practitioners interested in the challenges posed by object recognition for input and mobile interaction.

    WatchMI: pressure touch, twist and pan gesture input on unmodified smartwatches

    The screen size of a smartwatch provides limited space for expressive multi-touch input, resulting in a markedly difficult and limited experience. We present WatchMI: Watch Movement Input, which enhances touch interaction on a smartwatch to support continuous pressure touch, twist and pan gestures, and their combinations. Our novel approach relies on software that analyzes, in real time, the data from the built-in Inertial Measurement Unit (IMU) in order to determine, with great accuracy and at different levels of granularity, the actions performed by the user, without requiring additional hardware or modification of the watch. We report the results of an evaluation of the system and demonstrate that the three proposed input interfaces are accurate, noise-resistant, easy to use and can be deployed on a variety of smartwatches. We then showcase the potential of this work with seven different applications, including map navigation, an alarm clock, a music player, pan gesture recognition, text entry, a file explorer, and controlling remote devices or a game character.
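
    A minimal sketch of how raw IMU samples might be mapped to these inputs is given below; the axis conventions and thresholds are assumptions for illustration, not the values used by WatchMI's real-time pipeline.

    import math
    from dataclasses import dataclass

    @dataclass
    class ImuSample:
        ax: float  # accelerometer x (g)
        ay: float  # accelerometer y (g)
        az: float  # accelerometer z (g)
        gz: float  # gyroscope rate about the watch face (deg/s)

    def classify(s):
        tilt = math.degrees(math.atan2(math.hypot(s.ax, s.ay), s.az))  # deflection from flat
        if abs(s.gz) > 60:            # fast rotation about the face -> twist
            return "twist"
        if tilt > 25:                 # large sustained tilt while touching -> pan
            return "pan right" if s.ax > 0 else "pan left"
        if tilt > 5:                  # small deflection under a finger press -> pressure level
            return "firm press" if tilt > 15 else "light press"
        return "idle"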

    WatchMI: applications of watch movement input on unmodified smartwatches

    In this demo, we show that it is possible to enhance touch interaction on an unmodified smartwatch to support continuous pressure touch, twist and pan gestures, by analyzing only the real-time data from the built-in Inertial Measurement Unit (IMU). Our evaluation results show that the three proposed input interfaces are accurate, noise-resistant, easy to use and can be deployed on a variety of smartwatches. We then showcase the potential of this work with seven example applications. During the demo session, users can try the prototype.

    Augmented learning for sports using wearable head-worn and wrist-worn devices

    Novices can learn sports in a variety of ways, ranging from guidance by an instructor to watching video tutorials. In each case, subsequent and repeated self-directed practice sessions are an essential step. However, during such self-directed practice, constant guidance and feedback are absent. As a result, novices do not know whether they are making mistakes or whether there are areas for improvement. In this position paper, we propose using wearable devices to augment such self-directed practice sessions by providing augmented guidance and feedback. In particular, a head-worn display can provide real-time guidance, whilst wrist-worn devices provide real-time tracking and monitoring of various states. We envision this approach being applied to various sports; it is particularly suitable for sports that rely on precise hand motion, such as snooker, billiards, golf, archery, cricket, tennis and table tennis.

    Multi-scale gestural interaction for augmented reality

    We present a multi-scale gestural interface for augmented reality applications. With virtual objects, gestural interactions such as pointing and grasping can be convenient and intuitive; however, they are imprecise, socially awkward and susceptible to fatigue. Our prototype application uses multiple sensors to detect gestures from both arm and hand motions (macro scale) and finger gestures (micro scale). Micro-gestures can provide precise input through a belt-worn sensor configuration, with the hand in a relaxed posture. We present an application that combines direct manipulation with micro-gestures for precise interaction, beyond the capabilities of direct manipulation alone.
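
    The sketch below illustrates how the two scales could be combined in an update loop: coarse manipulation from arm/hand tracking, refined by micro-gestures from the belt-worn sensor. The gesture names and data layout are placeholders, not the prototype's actual API.

    def update_object(obj, arm_pose, finger_gesture):
        # Macro scale: arm/hand pose drives coarse, direct manipulation of the virtual object.
        if arm_pose is not None:
            obj["position"] = arm_pose["hand_position"]
        # Micro scale: relaxed-hand finger gestures make small, precise adjustments.
        if finger_gesture == "thumb_slide_forward":
            obj["scale"] *= 1.01
        elif finger_gesture == "thumb_slide_back":
            obj["scale"] *= 0.99
        return obj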

    SpeCam: sensing surface color and material with the front-facing camera of a mobile device

    SpeCam is a lightweight surface color and material sensing approach for mobile devices that uses only the front-facing camera and the display as a multi-spectral light source. We leverage the natural use of mobile devices (placing them face-down) to detect the material underneath and thereby infer the location or placement of the device. SpeCam can then be used to support discreet micro-interactions that avoid the numerous distractions users face daily with today's mobile devices. Our two-part study shows that SpeCam can i) recognize colors in the HSB space that are 10 degrees apart near the three dominant colors and 4 degrees apart otherwise, and ii) recognize 30 types of surface materials with 99% accuracy. These findings are further supported by a spectroscopy study. Finally, we suggest a series of applications based on simple mobile micro-interactions suitable for using the phone when it is placed face-down.
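
    The idea can be sketched as follows: fill the display with a sequence of solid colors, capture each reflection with the front camera, and summarise the responses into a feature vector for a classifier. The device hooks show_color() and capture_frame() are hypothetical placeholders rather than a real API.

    import numpy as np

    ILLUMINANTS = {"red": (255, 0, 0), "green": (0, 255, 0),
                   "blue": (0, 0, 255), "white": (255, 255, 255)}

    def sample_surface(show_color, capture_frame):
        # Returns one feature vector: mean camera RGB response under each display illuminant.
        features = []
        for rgb in ILLUMINANTS.values():
            show_color(rgb)            # light the surface with one solid display color
            frame = capture_frame()    # H x W x 3 uint8 image from the front camera
            features.extend(frame.reshape(-1, 3).mean(axis=0) / 255.0)
        return np.asarray(features)    # feed into any off-the-shelf classifier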

    Automated data gathering and training tool for personalized "Itchy Nose"

    In "Itchy Nose" we proposed a sensing technique for detecting finger movements on the nose for supporting subtle and discreet interaction. It uses the electrooculography sensors embedded in the frame of a pair of eyeglasses for data gathering and uses machine-learning technique to classify different gestures. Here we further propose an automated training and visualization tool for its classifier. This tool guides the user to make the gesture in proper timing and records the sensor data. It automatically picks the ground truth and trains a machine-learning classifier with it. With this tool, we can quickly create trained classifier that is personalized for the user and test various gestures.Postprin