
    Tangible UI by object and material classification with radar

    Radar signals penetrate, scatter, absorb, and reflect energy from proximate objects, and ground-penetrating and aerial radar systems are well established. We describe a highly accurate system that combines a monostatic radar (Google Soli) with supervised machine learning to support object- and material-classification-based UIs. Building on RadarCat techniques, we explore the development of tangible user interfaces without modification of the objects or complex infrastructure. This affords new forms of interaction with digital devices, proximate objects and micro-gestures.
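The classification step described above can be sketched minimally. This is not the RadarCat pipeline itself: the material labels, the three-element feature vectors, and the nearest-centroid rule are all illustrative assumptions standing in for the paper's radar features and supervised classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-material radar signatures: mean feature vectors
# (e.g. amplitude, phase, energy over range bins) -- synthetic values,
# not measurements from the paper.
materials = {
    "glass":     np.array([0.2, 0.8, 0.1]),
    "aluminium": np.array([0.9, 0.3, 0.7]),
    "wood":      np.array([0.5, 0.5, 0.4]),
}

def classify(sample):
    """Nearest-centroid stand-in for the supervised classifier."""
    return min(materials, key=lambda m: np.linalg.norm(sample - materials[m]))

# A slightly noisy reading near the aluminium signature still maps
# to the aluminium label.
reading = materials["aluminium"] + rng.normal(0.0, 0.05, 3)
print(classify(reading))
```

In practice a trained model (e.g. a random forest over many radar channels) replaces the nearest-centroid rule, but the interface is the same: feature vector in, object or material label out.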

    Inertia compensation while scanning screw threads on coordinate-measuring machines

    The use of scanning coordinate-measuring machines for the inspection of screw threads has become common practice. Compared to touch-trigger probing, scanning speeds up the measuring process while still maintaining high accuracy. In some cases, however, accuracy depends drastically on the scanning speed. In this paper, a compensation method is proposed that reduces the influence of certain dynamic effects while scanning screw threads on coordinate-measuring machines.
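The idea of such a compensation can be sketched with a simple first-order model; the model form and the compliance coefficient below are assumptions for illustration, not the method proposed in the paper.

```python
# Hypothetical model: at scanning speed v along a thread flank of
# effective radius r, the probe tip deflects inward by an amount
# proportional to the centripetal acceleration a = v**2 / r.
K = 2.5e-7  # compliance coefficient, m per (m/s^2) -- illustrative value

def compensate(measured_radius, v, r):
    """Add back the modelled dynamic deflection (all units SI)."""
    deflection = K * v**2 / r
    return measured_radius + deflection

# At zero speed there is nothing to compensate; at higher speeds the
# correction grows quadratically with v.
print(compensate(10.0e-3, v=5.0e-3, r=10.0e-3))
```

The key property, reflected in the abstract, is that the error is speed-dependent: a static calibration cannot remove it, whereas a model driven by the commanded scanning speed can.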

    WatchMI: pressure touch, twist and pan gesture input on unmodified smartwatches

    The screen size of a smartwatch provides limited space for expressive multi-touch input, resulting in a markedly difficult and limited experience. We present WatchMI: Watch Movement Input, which enhances touch interaction on a smartwatch to support continuous pressure touch, twist, pan gestures and their combinations. Our novel approach relies on software that analyzes, in real time, the data from a built-in Inertial Measurement Unit (IMU) in order to determine, with great accuracy and at different levels of granularity, the actions performed by the user, without requiring additional hardware or modification of the watch. We report the results of an evaluation of the system, and demonstrate that the three proposed input interfaces are accurate, noise-resistant, easy to use and can be deployed on a variety of smartwatches. We then showcase the potential of this work with seven different applications, including map navigation, an alarm clock, a music player, pan gesture recognition, text entry, a file explorer and controlling remote devices or a game character.
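The kind of IMU-based discrimination described above can be illustrated with a toy per-sample classifier. The axis assignments and all thresholds here are assumptions, not values from WatchMI: twist is taken as rotation about the watch-face normal, pan as rotation about the in-plane axes, and pressure as excess acceleration beyond resting gravity.

```python
# Illustrative single-sample gesture classification from IMU readings.
# gyro_* in rad/s, accel_mag in m/s^2; thresholds are assumed values.
def classify_gesture(gyro_x, gyro_y, gyro_z, accel_mag):
    TWIST_T = 0.5    # rotation rate about the face normal
    PAN_T = 0.5      # rotation rate about the in-plane axes
    PRESS_T = 10.5   # above ~9.81 m/s^2 resting gravity

    if abs(gyro_z) > TWIST_T:
        return "twist"
    if abs(gyro_x) > PAN_T or abs(gyro_y) > PAN_T:
        return "pan"
    if accel_mag > PRESS_T:
        return "pressure"
    return "idle"

print(classify_gesture(0.0, 0.0, 1.2, 9.8))  # twist
```

A real implementation would filter the signal over a window and estimate continuous magnitudes (e.g. twist angle, pan distance, pressure level) rather than emitting one discrete label per sample, which is what enables the continuous control the abstract describes.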

    Workshop on object recognition for input and mobile interaction

    Today we can see an increasing number of object recognition systems of very different sizes, portability, embeddability and form factors, which are starting to become part of the ubiquitous, tangible, mobile and wearable computing ecosystems that we might make use of in our daily lives. These systems rely on a variety of technologies including computer vision, radar, acoustic sensing, tagging and smart objects. Such systems open up a wide range of new forms of touchless interaction. With systems deployed in mobile products and using everyday objects found in the office or home, we can realise new applications and novel types of interaction. Object-based interactions might revolutionise how people interact with a computer. A system could be used in conjunction with a mobile phone; for example, it could be trained to open a recipe app when you hold the phone to your stomach, or change its settings when operated with a gloved hand. Although the last few years have seen an increasing amount of research in this area, knowledge about this subject remains under-explored, fragmented, and cuts across a set of related but heterogeneous issues. This workshop brings together researchers and practitioners interested in the challenges posed by object recognition for input and mobile interaction.
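The object-triggered behaviours mentioned above (opening a recipe app, switching to a glove-friendly mode) reduce to a dispatch from recognised labels to actions. The labels and action strings below are hypothetical examples drawn from the abstract, not an API from the workshop.

```python
# Map recognised object labels to phone actions; labels and actions
# are illustrative examples only.
ACTIONS = {
    "body": "open recipe app",
    "gloved hand": "enable glove mode",
}

def on_object_recognized(label):
    """Dispatch a recognised object label to its configured action."""
    return ACTIONS.get(label, "no action")

print(on_object_recognized("gloved hand"))  # enable glove mode
```

The recogniser (vision, radar, acoustic or tag-based) only needs to emit labels; the interaction layer stays a simple, user-configurable table.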