6 research outputs found
SenseBelt: a belt-worn sensor to support cross-device interaction
Mobile interaction is shifting from a single device to simultaneous interaction with ensembles of devices such as phones, tablets, or watches. Spatially-aware cross-device interaction between mobile devices typically requires a fixed tracking infrastructure, which limits mobility. In this paper, we present SenseBelt – a sensing belt that enhances existing mobile interactions and enables low-cost, ad hoc sensing of cross-device gestures and interactions. SenseBelt enables proxemic interactions between people and their personal devices. SenseBelt also supports cross-device interaction between personal devices and stationary devices, such as public displays. We discuss the design and implementation of SenseBelt together with possible applications. With an initial evaluation, we provide insights into the benefits and drawbacks of a belt-worn mediating sensor to support cross-device interactions.
Proxemics of screen mediation: engagement with reading on screen manifests as diminished variation due to self-control, rather than diminished mean distance from screen
Objective: Burgoon's theory of conversational involvement suggests that when people engage with a person, they move slightly closer to them, often subtly and subconsciously. However, some studies have failed to extend this finding to human-computer interaction. Our hypothesis is that during online reading, engagement is associated with an expenditure of effort to hold the head upright, still, and central.
Method: We presented two reading stimuli, in counterbalanced order, to 27 participants (age 21.00 ± 2.89 years; 15 female) seated in front of a 47.5 × 27 cm monitor: one (interesting) based on a best-selling novel and the other (boring) based on European Union banking regulations. The participants were video-recorded during their reading while wearing reflective motion-tracking markers. The markers were video-tracked offline using Kinovea 0.8.
Results: Subjective VAS ratings showed that the stimuli elicited the bored and interested states as expected. Video tracking showed that the boring stimulus (compared to the interesting reading) elicited a greater head-to-screen velocity, a greater head-to-screen distance range, and a greater head-to-screen distance standard deviation, but not a greater mean head-to-screen distance.
Conclusions: The more interesting reading led to efforts to hold the head at a more central viewing position while suppressing head fidgeting.
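The movement measures this abstract compares (mean distance, standard deviation, range, and velocity of the head-to-screen distance) can all be derived from a single tracked distance trace. The sketch below is purely illustrative, not the authors' analysis code; the function name, sampling rate, and synthetic traces are assumptions for demonstration.

```python
import numpy as np

def head_screen_metrics(dist_cm, fps=30.0):
    """Summarize a head-to-screen distance trace (in cm) sampled at fps Hz."""
    d = np.asarray(dist_cm, dtype=float)
    speed = np.abs(np.diff(d)) * fps  # frame-to-frame speed, cm/s
    return {
        "mean_distance": d.mean(),            # central tendency (did not differ)
        "distance_sd": d.std(ddof=1),         # fidgeting spread (greater when bored)
        "distance_range": d.max() - d.min(),  # extent of movement (greater when bored)
        "mean_velocity": speed.mean(),        # movement speed (greater when bored)
    }

# Synthetic example: a fidgety "boring" trace vs. a stiller "interesting" trace,
# both centered on the same 60 cm mean distance.
t = np.linspace(0, 20, 300)
boring = 60 + 5.0 * np.sin(t)
interesting = 60 + 0.5 * np.sin(t)
```

Under this construction the two traces share the same mean distance but differ in standard deviation, range, and velocity, mirroring the pattern of results reported above.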
Investigating Precise Control in Spatial Interactions: Proxemics, Kinesthetics, and Analytics
Augmented and Virtual Reality (AR/VR) technologies have reshaped the way in which we perceive the virtual world. In fact, recent technological advancements provide experiences that make the physical and virtual worlds almost indistinguishable. However, the physical world affords subtle sensorimotor cues that we subconsciously utilize to perform simple and complex tasks in our daily lives. The lack of this affordance in existing AR/VR systems makes it difficult for them to be adopted over conventional user interfaces. As a case in point, existing spatial user interfaces (SUIs) lack the intuitiveness needed to perform tasks in a manner that is perceptually familiar from the physical world. The broader goal of this dissertation lies in facilitating an intuitive spatial manipulation experience, specifically for motor control.
We begin by investigating the role of proximity to an action on precise motor control in spatial tasks. We do so by introducing a new SUI called the Clock-Maker's Work-Space (CMWS), with the goal of enabling precise actions close to the body, akin to the physical world. On evaluating our setup in comparison to conventional mixed-reality interfaces, we find CMWS to afford precise actions for bi-manual spatial tasks. We further compare our SUI with a physical manipulation task and observe similarities in user behavior across both tasks.
We subsequently narrow our focus to studying precise spatial rotation. We utilize haptics, specifically force feedback (kinesthetics), to augment fine motor control in spatial rotational tasks. By designing three kinesthetic rotation metaphors, we evaluate precise rotational control with and without haptic feedback for 3D shape manipulation. Our results show that haptics-based rotation algorithms allow for precise motor control in 3D space and also help reduce hand fatigue.
In order to understand precise control in its truest form, we investigate orthopedic surgery training through the analysis of bone-drilling tasks. We designed a hybrid physical-virtual simulator for bone-drilling training and collected physical data for analyzing precise drilling action. We also developed a Laplacian-based performance metric to help expert surgeons evaluate residents' training progress across successive years of orthopedic residency.