Tangible UI by object and material classification with radar
Radar signals penetrate, scatter, absorb, and reflect energy in proximate objects, and ground-penetrating and aerial radar systems are well established. We describe a highly accurate system that combines a monostatic radar (Google Soli) with supervised machine learning to support object- and material-classification-based UIs. Building on RadarCat techniques, we explore the development of tangible user interfaces that require neither modification of the objects nor complex infrastructure. This affords new forms of interaction with digital devices, proximate objects, and micro-gestures.
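As a rough illustration of the classification step described above, the sketch below trains a supervised classifier on per-frame radar feature vectors. The feature dimensionality, the synthetic data, and the choice of a random forest are assumptions for illustration, not the published RadarCat pipeline.

```python
# Minimal sketch of radar-based object/material classification in the spirit
# of RadarCat: a supervised classifier over per-frame radar feature vectors.
# The features and labels below are synthetic placeholders, so the reported
# accuracy is chance-level; with real radar data the same pipeline applies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: one 64-dim feature vector per radar frame
# (e.g. summary statistics of the received signal per channel),
# labelled with the object/material touching the sensor.
X = rng.normal(size=(600, 64))
y = rng.integers(0, 4, size=600)          # 4 example material classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```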
Live calcium imaging of Aedes aegypti neuronal tissues reveals differential importance of chemosensory systems for life-history-specific foraging strategies.
Background: The mosquito Aedes aegypti has a wide variety of sensory pathways that have supported its success as a species as well as its role as a highly competent vector of numerous debilitating infectious pathogens. Investigations into mosquito sensory systems and their effects on behavior are valuable resources for the advancement of mosquito control strategies. Numerous studies have elucidated key aspects of mosquito sensory systems; however, critical gaps remain within the field. In particular, the immature life stages have received far less attention than the adult form. Additionally, although numerous studies have pinpointed specific sensory receptors as well as the responding motor outputs, few have been able to monitor both concurrently. Results: To begin filling these gaps, we engineered Ae. aegypti to ubiquitously express a genetically encoded calcium indicator, GCaMP6s. Using this strain, combined with advanced microscopy, we simultaneously measured live stimulus-evoked calcium responses in both neuronal and muscle cells with a wide spatial range and resolution. Conclusions: By coupling in vivo live calcium imaging with behavioral assays, we gained functional insights into how stimulus-evoked neural and muscle activities are represented, modulated, and transformed in mosquito larvae, enabling us to elucidate mosquito sensorimotor properties important for life-history-specific foraging strategies.
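Stimulus-evoked calcium responses of the kind reported above are typically quantified as a ΔF/F0 ratio of the GCaMP fluorescence trace. The sketch below shows that standard computation on a synthetic trace; the baseline window and trace shape are assumptions, not data from the study.

```python
# Sketch of the standard dF/F0 normalisation used to quantify
# GCaMP calcium responses. The trace and baseline window are
# illustrative assumptions, not data from the study.
import numpy as np

def delta_f_over_f(trace, baseline_frames=50):
    """Return (F - F0) / F0, with F0 taken as the pre-stimulus mean."""
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

# Synthetic example: flat baseline followed by a stimulus-evoked transient.
t = np.arange(300)
trace = 100 + 40 * np.exp(-((t - 120) / 30) ** 2)
dff = delta_f_over_f(trace)
print(f"peak dF/F0: {dff.max():.2f}")   # ~0.40 for this synthetic transient
```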
Inertia compensation while scanning screw threads on coordinate-measuring machines
Scanning coordinate-measuring machines are now commonly used for the inspection of screw threads. Compared to touch-trigger probing, scanning speeds up the measuring process while still maintaining high accuracy. In some cases, however, accuracy depends drastically on the scanning speed. In this paper, a compensation method is proposed that reduces the influence of certain dynamic effects while scanning screw threads on coordinate-measuring machines.
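The abstract does not detail the compensation method, but one simple way to picture inertia compensation is a mass-spring model of the probing system, in which the acceleration-proportional deflection is subtracted from the scanned profile. The sketch below illustrates that idea; the model, constants, and function names are assumptions, not the paper's method.

```python
# Illustrative sketch of one simple form of inertia compensation: treat the
# probing system as a mass-spring and subtract the acceleration-proportional
# deflection from the measured profile. Model and constants are assumptions.
import numpy as np

def compensate_inertia(deflection, positions, dt, m_over_k):
    """Subtract (m/k) * a from the measured probe deflection.

    deflection : measured probe deflection samples along the scan
    positions  : stylus position samples along the scan direction
    dt         : sampling interval in seconds
    m_over_k   : effective moving mass / probe stiffness ratio (s^2)
    """
    accel = np.gradient(np.gradient(positions, dt), dt)   # numeric d2x/dt2
    return deflection - m_over_k * accel

# Example with a synthetic sinusoidal thread profile scanned at 0.5 ms steps:
t = np.arange(0, 0.2, 0.0005)
positions = 2.0 * np.sin(2 * np.pi * 25 * t)    # stylus path (mm)
deflection = 0.05 * np.ones_like(t)             # nominal deflection (mm)
corrected = compensate_inertia(deflection, positions, 0.0005, m_over_k=1e-6)
```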
WatchMI: pressure touch, twist and pan gesture input on unmodified smartwatches
The screen size of a smartwatch provides limited space for expressive multi-touch input, resulting in a markedly difficult and limited experience. We present WatchMI: Watch Movement Input, which enhances touch interaction on a smartwatch to support continuous pressure touch, twist, and pan gestures and their combinations. Our novel approach relies on software that analyzes, in real time, data from the built-in Inertial Measurement Unit (IMU) in order to determine, with great accuracy and at different levels of granularity, the actions performed by the user, without requiring additional hardware or modification of the watch. We report the results of an evaluation of the system and demonstrate that the three proposed input interfaces are accurate, noise-resistant, easy to use, and deployable on a variety of smartwatches. We then showcase the potential of this work with seven different applications, including map navigation, an alarm clock, a music player, pan gesture recognition, text entry, a file explorer, and controlling remote devices or a game character.
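As a toy illustration of inferring such gestures from IMU data, the sketch below thresholds accelerometer and gyroscope samples to distinguish twist, pan, and pressure. The axis conventions, thresholds, and class names are assumptions for illustration; the published system performs a more careful real-time analysis.

```python
# Toy sketch of inferring WatchMI-style inputs from a smartwatch IMU stream.
# Axis conventions and thresholds are illustrative assumptions, not the
# WatchMI implementation.
from dataclasses import dataclass

@dataclass
class ImuSample:
    ax: float; ay: float; az: float   # accelerometer (g)
    gx: float; gy: float; gz: float   # gyroscope (deg/s)

def classify(sample: ImuSample) -> str:
    if abs(sample.gz) > 60:                            # rotation about the screen normal
        return "twist"
    if abs(sample.gx) > 60 or abs(sample.gy) > 60:     # tilting the watch body
        return "pan"
    if abs(sample.az - 1.0) > 0.3:                     # extra force on the case
        return "pressure"
    return "idle"

print(classify(ImuSample(0.0, 0.0, 1.5, 0.0, 0.0, 5.0)))  # -> "pressure"
```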
Workshop on object recognition for input and mobile interaction
Today we see an increasing number of object recognition systems of very different sizes, portability, embeddability, and form factors, which are starting to become part of the ubiquitous, tangible, mobile, and wearable computing ecosystems we might use in our daily lives. These systems rely on a variety of technologies, including computer vision, radar, acoustic sensing, tagging, and smart objects. Such systems open up a wide range of new forms of touchless interaction. With systems deployed in mobile products and using everyday objects found in the office or home, we can realise new applications and novel types of interaction. Object-based interactions might revolutionise how people interact with a computer. A system could be used in conjunction with a mobile phone; for example, it could be trained to open a recipe app when you hold the phone to your stomach, or to change its settings when operated with a gloved hand. Although the last few years have seen an increasing amount of research in this area, knowledge of the subject remains underexplored and fragmented, and cuts across a set of related but heterogeneous issues. This workshop brings together researchers and practitioners interested in the challenges posed by object recognition for input and mobile interaction.
Health status of older adults with Type 2 diabetes mellitus after aerobic or resistance training: A randomised trial
Health and Quality of Life Outcomes, 9. doi:10.1186/1477-7525-9-59
Plasma lipoprotein subfraction concentrations are associated with lipid metabolism and age-related macular degeneration
Journal of Lipid Research, 58(9), 1785-179. doi:10.1194/jlr.M073684
