16 research outputs found

    Pointing Without a Pointer

We present a method for performing selection tasks based on continuous control of multiple, competing agents that try to determine the user's intentions from their control behaviour without requiring an explicit pointer. The entropy in the selection process decreases in a continuous fashion -- we provide experimental evidence of selection from 500 initial targets. The approach allows adaptation over time to best make use of the multimodal communication channel between the human and the system. This general approach is well suited to mobile and wearable applications, shared displays and security-conscious settings.
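The selection mechanism in the paper is agent-based; as a rough illustration of the underlying principle of continuous entropy reduction over a large target set, the sketch below uses a generic Bayesian belief update over 500 hypothetical targets and stops once the remaining uncertainty falls below one bit. The evidence model and all names are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def update_belief(belief, likelihoods):
    """One Bayesian step: likelihoods[i] = p(observed control | target i)."""
    belief = belief * likelihoods
    return belief / belief.sum()

def entropy_bits(belief):
    p = belief[belief > 0]
    return -np.sum(p * np.log2(p))

n_targets = 500                       # same scale as the 500-target experiment
belief = np.full(n_targets, 1.0 / n_targets)
rng = np.random.default_rng(0)
true_target = 123                     # hypothetical target the user is steering towards

for step in range(200):
    # Hypothetical noisy evidence: the true target explains the user's control
    # behaviour slightly better than the competing targets on average.
    likelihoods = rng.uniform(0.4, 0.6, n_targets)
    likelihoods[true_target] += 0.1
    belief = update_belief(belief, likelihoods)
    if entropy_bits(belief) < 1.0:    # less than one bit of uncertainty remains
        print(f"selected target {belief.argmax()} after {step + 1} updates")
        break
```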

    It’s a long way to Monte-Carlo: probabilistic display in GPS navigation

We present a mobile, GPS-based multimodal navigation system, equipped with inertial control that allows users to explore and navigate through an augmented physical space, incorporating and displaying the uncertainty resulting from inaccurate sensing and unknown user intentions. The system propagates uncertainty appropriately via Monte Carlo sampling and predicts at a user-controllable time horizon. Control of the Monte Carlo exploration is entirely tilt-based. The system output is displayed both visually and in audio. Audio is rendered via granular synthesis to accurately display the probability of the user reaching targets in the space. We also demonstrate the use of uncertain prediction in a trajectory following task, where a section of music is modulated according to the changing predictions of user position with respect to the target trajectory. We show that appropriate display of the full distribution of potential future user positions with respect to sites-of-interest can improve the quality of interaction over a simplistic interpretation of the sensed data.
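As an illustration of the kind of uncertainty propagation described here, the following sketch draws Monte Carlo samples from an uncertain position and velocity estimate, propagates them to a time horizon with added process noise, and reports the probability of reaching a hypothetical site of interest. The noise magnitudes, horizon and target are illustrative assumptions, not values from the system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles = 2000

# Current state estimate: position (m) and velocity (m/s), both uncertain.
pos = rng.normal(loc=[0.0, 0.0], scale=5.0, size=(n_particles, 2))   # GPS noise
vel = rng.normal(loc=[1.2, 0.3], scale=0.4, size=(n_particles, 2))   # speed/heading noise

def predict(pos, vel, horizon_s, dt=1.0):
    """Propagate each particle forward, adding process noise at every step."""
    for _ in range(int(horizon_s / dt)):
        pos = pos + vel * dt + rng.normal(scale=0.5, size=pos.shape)
    return pos

target_centre = np.array([30.0, 10.0])    # hypothetical site of interest
target_radius = 10.0

future = predict(pos, vel, horizon_s=20)  # the horizon is user-controllable in the system
dist = np.linalg.norm(future - target_centre, axis=1)
p_reach = np.mean(dist < target_radius)
print(f"P(reach site within horizon) ~ {p_reach:.2f}")
```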

    BodySpace: inferring body pose for natural control of a music player

We describe the BodySpace system, which uses inertial sensing and pattern recognition to allow the gestural control of a music player by placing the device at different parts of the body. We demonstrate a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based techniques can shape gestural interaction.
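BodySpace itself relies on trained gesture segmentation and recognition; purely as a toy illustration of inferring device placement on the body from inertial data, the sketch below compares the measured gravity direction against hand-picked per-location templates. The template vectors and location names are invented for the example and are not taken from the paper.

```python
import numpy as np

templates = {                      # assumed unit gravity directions per placement
    "hip":      np.array([0.0, -1.0, 0.0]),
    "ear":      np.array([0.7, -0.7, 0.0]),
    "shoulder": np.array([0.0, -0.7, 0.7]),
}

def classify_placement(accel_sample):
    """Return the body location whose template best matches the sensed gravity vector."""
    g = accel_sample / np.linalg.norm(accel_sample)
    return max(templates, key=lambda name: float(np.dot(g, templates[name])))

print(classify_placement(np.array([0.65, -0.75, 0.05])))   # classified as "ear"
```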

    Smooth Pursuit Target Speeds and Trajectories

In this paper we present an investigation of how the speed and trajectory of smooth pursuit targets impact detection rates in gaze interfaces. Previous work optimized these values for the specific application for which smooth pursuit eye movements were employed. However, this may not always be possible. For example, UI designers may want to minimize distraction caused by the stimulus, integrate it with a certain UI element (e.g., a button), or limit it to a certain area of the screen. In these cases an in-depth understanding of the interplay between speed, trajectory, and accuracy is required. To achieve this, we conducted a user study with 15 participants who had to follow targets with different speeds and on different trajectories using their gaze. We evaluated the data with respect to detectability. As a result, we obtained reasonable ranges for target speeds and demonstrate the effects of trajectory shapes. We show that slow-moving targets are hard to detect by correlation and that introducing a delay improves the detection rate for fast-moving targets. Our research is complemented by design rules which enable designers to implement better pursuit detectors and pursuit-based user interfaces.
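The detection-by-correlation idea referred to here is commonly implemented by correlating gaze and target trajectories over a sliding window; the sketch below shows such a detector and how shifting the target signal by a fixed delay can recover the match for a fast target that the eye follows with a lag. The window length, threshold, sampling rate and delay are illustrative assumptions, not the paper's values.

```python
import numpy as np

def pursuit_detected(gaze_x, gaze_y, tgt_x, tgt_y, threshold=0.8, delay=0):
    """Correlate gaze with the (optionally delayed) target trajectory."""
    if delay > 0:
        tgt_x, tgt_y = tgt_x[:-delay], tgt_y[:-delay]
        gaze_x, gaze_y = gaze_x[delay:], gaze_y[delay:]
    rx = np.corrcoef(gaze_x, tgt_x)[0, 1]
    ry = np.corrcoef(gaze_y, tgt_y)[0, 1]
    return min(rx, ry) > threshold

# Synthetic demo: gaze follows a fast circular target with ~100 ms pursuit lag.
rng = np.random.default_rng(2)
fs = 60.0                                  # assumed gaze sampling rate (Hz)
t = np.arange(120) / fs                    # 2 s window
freq = 1.5                                 # fast target: 1.5 revolutions per second
tgt_x, tgt_y = np.cos(2 * np.pi * freq * t), np.sin(2 * np.pi * freq * t)
lag = 6                                    # 6 samples at 60 Hz is roughly 100 ms
gaze_x = np.roll(tgt_x, lag) + rng.normal(0, 0.05, t.size)
gaze_y = np.roll(tgt_y, lag) + rng.normal(0, 0.05, t.size)

print(pursuit_detected(gaze_x, gaze_y, tgt_x, tgt_y))             # lag hurts the correlation
print(pursuit_detected(gaze_x, gaze_y, tgt_x, tgt_y, delay=lag))  # delay compensates for it
```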

TraceMatch: a computer vision technique for user input by tracing of animated controls

Recent work has explored the concept of movement correlation interfaces, in which moving objects can be selected by matching the movement of the input device to that of the desired object. Previous techniques relied on a single modality (e.g. gaze or mid-air gestures) and specific hardware to issue commands. TraceMatch is a computer vision technique that enables input by movement correlation while abstracting from any particular input modality. The technique relies only on a conventional webcam to enable users to produce matching gestures with any given body part, even whilst holding objects. We describe an implementation of the technique for acquisition of orbiting targets, evaluate algorithm performance for different target sizes and frequencies, and demonstrate use of the technique for remote control of graphical as well as physical objects with different body parts.
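A minimal sketch of the movement-correlation principle follows: the motion of a tracked feature (here synthesised, rather than extracted from webcam images as in TraceMatch) is compared against the known motion of an orbiting control, and the control is triggered only when both axes correlate strongly. Frequencies, window length and threshold are assumptions for illustration.

```python
import numpy as np

def orbit(n_frames, freq_hz, fps=30.0, phase=0.0, clockwise=False):
    """x/y positions of an orbiting target sampled at fps for n_frames."""
    t = np.arange(n_frames) / fps
    sign = -1.0 if clockwise else 1.0
    a = sign * 2 * np.pi * freq_hz * t + phase
    return np.cos(a), np.sin(a)

def matches(feature_xy, target_xy, threshold=0.85):
    """True if the tracked feature's motion correlates with the target's on both axes."""
    rx = np.corrcoef(feature_xy[0], target_xy[0])[0, 1]
    ry = np.corrcoef(feature_xy[1], target_xy[1])[0, 1]
    return min(rx, ry) > threshold

# A user circling a hand roughly in time with a 0.5 Hz orbiting widget.
rng = np.random.default_rng(3)
n = 90                                               # 3 s window at 30 fps
tx, ty = orbit(n, freq_hz=0.5)
hand = (tx + rng.normal(0, 0.1, n), ty + rng.normal(0, 0.1, n))

print(matches(hand, orbit(n, freq_hz=0.5)))                   # the control being traced
print(matches(hand, orbit(n, freq_hz=0.5, clockwise=True)))   # opposite direction: no match
```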

    Control theoretic models of pointing

This article presents an empirical comparison of four models from manual control theory on their ability to model targeting behaviour by human users using a mouse: McRuer’s Crossover, Costello’s Surge, second-order lag (2OL), and the Bang-bang model. Such dynamic models are generative, estimating not only movement time, but also pointer position, velocity, and acceleration on a moment-to-moment basis. We describe an experimental framework for acquiring pointing actions and automatically fitting the parameters of mathematical models to the empirical data. We present the use of time-series, phase space, and Hooke plot visualisations of the experimental data, to gain insight into human pointing dynamics. We find that the identified control models can generate a range of dynamic behaviours that captures aspects of human pointing behaviour to varying degrees. Conditions with a low index of difficulty (ID) showed poorer fit because their unconstrained nature leads naturally to more behavioural variability. We report on characteristics of human surge behaviour (the initial, ballistic sub-movement) in pointing, as well as differences in a number of controller performance measures, including overshoot, settling time, peak time, and rise time. We describe trade-offs among the models. We conclude that control theory offers a promising complement to Fitts’ law based approaches in HCI, with models providing representations and predictions of human pointing dynamics, which can improve our understanding of pointing and inform design.
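To make the generative nature of these models concrete, the sketch below simulates a simple second-order lag (2OL) pointer responding to a step change in target position, yielding full position and velocity traces from which measures such as overshoot and peak speed can be read off. The gains and step size are illustrative assumptions, not parameters fitted in the article.

```python
import numpy as np

def simulate_2ol(target, k=120.0, d=18.0, dt=0.001, duration=1.5):
    """Pointer modelled as x'' = k * (target - x) - d * x' (a damped second-order system)."""
    steps = int(duration / dt)
    x = np.zeros(steps)
    v = np.zeros(steps)
    for i in range(1, steps):
        a = k * (target - x[i - 1]) - d * v[i - 1]
        v[i] = v[i - 1] + a * dt                 # semi-implicit Euler integration
        x[i] = x[i - 1] + v[i] * dt
    return x, v

x, v = simulate_2ol(target=0.20)                 # a 20 cm step change in target position
overshoot = max(x.max() - 0.20, 0.0)
print(f"peak speed: {v.max():.2f} m/s, overshoot: {overshoot * 1000:.1f} mm")
```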

    Dynamic motion coupling of body movement for input control

Touchless gestures are used for input when touch is unsuitable or unavailable, such as when interacting with displays that are remote, large, or public, or when touch is prohibited for hygienic reasons. Traditionally, user input is spatially or semantically mapped to system output; however, in the context of touchless gestures these interaction principles suffer from several disadvantages, including memorability, fatigue, and ill-defined mappings. This thesis investigates motion correlation as the third interaction principle for touchless gestures, which maps user input to system output based on spatiotemporal matching of reproducible motion. We demonstrate the versatility of motion correlation by using movement as the primary sensing principle, relaxing the restrictions on how a user provides input. Firstly, using TraceMatch, a novel computer vision-based system, we show, through an investigation of input performance, how users can provide effective input with different parts of the body, and how users can switch modes of input spontaneously in realistic application scenarios. Secondly, spontaneous spatial coupling shows how motion correlation can bootstrap spatial input, allowing any body movement, or movement of tangible objects, to be appropriated for ad hoc touchless pointing on a per-interaction basis. We operationalise the concept in MatchPoint, and demonstrate its unique capabilities through an exploration of the design space with application examples. Thirdly, we explore how users synchronise with moving targets in the context of motion correlation, revealing how simple harmonic motion leads to better synchronisation. Using the insights gained, we explore the robustness of the algorithms used for motion correlation, showing how it is possible to successfully detect a user's intent to interact whilst suppressing accidental activations from common spatial and semantic gestures. Finally, we look across our work to distil guidelines for interface design, and further considerations of how motion correlation can be used, both in general and for touchless gestures.

    Efficient human-machine control with asymmetric marginal reliability input devices

Input devices such as motor-imagery brain-computer interfaces (BCIs) are often unreliable. In theory, channel coding can be used in the human-machine loop to robustly encapsulate intention through noisy input devices, but standard feedforward error-correction codes cannot be practically applied. We present a practical and general probabilistic user interface for binary input devices with very high noise levels. Our approach allows any level of robustness to be achieved, regardless of noise level, where reliable feedback such as a visual display is available. In particular, we show efficient zooming interfaces based on feedback channel codes for two-class binary problems with noise levels characteristic of modalities such as motor-imagery-based BCI, with accuracy <75%. We outline general principles based on separating channel, line and source coding in human-machine loop design. We develop a novel selection mechanism which can achieve arbitrarily reliable selection with a noisy two-state button. We show automatic online adaptation to changing channel statistics, and operation without precise calibration of error rates. A range of visualisations are used to construct user interfaces which implicitly code for these channels in a way that is transparent to users. We validate our approach with a set of Monte Carlo simulations, and with empirical results from a human-in-the-loop experiment showing that the approach operates effectively at 50-70% of the theoretical optimum across a range of channel conditions.
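The following sketch illustrates the general principle of reliable selection over a noisy binary channel with feedback, in the spirit of the approach described here but greatly simplified (it is not the paper's zooming interface): the display repeatedly highlights the half of the items carrying the most posterior mass, the user's yes/no response arrives through a channel that flips it with probability eps, and Bayes' rule accumulates the evidence until one item dominates. Item count, noise level and stopping threshold are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
n_items, eps, target = 64, 0.3, 41            # 30% of button presses are flipped
posterior = np.full(n_items, 1.0 / n_items)

for press in range(1, 101):
    # Feedback: highlight items until roughly half the posterior mass is covered.
    order = np.argsort(-posterior)
    highlighted, mass = set(), 0.0
    for i in order:
        if mass >= 0.5:
            break
        highlighted.add(i)
        mass += posterior[i]

    truth = target in highlighted
    answer = truth if rng.random() > eps else not truth   # noisy two-state channel

    # Bayesian update of every item given the (possibly corrupted) answer.
    for i in range(n_items):
        agrees = (i in highlighted) == answer
        posterior[i] *= (1 - eps) if agrees else eps
    posterior /= posterior.sum()

    if posterior.max() > 0.99:
        print(f"selected item {posterior.argmax()} after {press} presses")
        break
```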

    Remote control by body movement in synchrony with orbiting widgets: an evaluation of TraceMatch

In this work we consider how users can employ body movement for remote control with minimal effort and maximum flexibility. TraceMatch is a novel technique in which the interface displays available controls as circular widgets with orbiting targets, and users can trigger a control by mimicking the displayed motion. The technique uses computer vision to detect circular motion as a uniform type of input, but is highly appropriable, as users can produce matching motion with any part of their body. We present three studies that investigate input performance with different parts of the body, user preferences, and spontaneous choice of movements for input in realistic application scenarios. The results show that users can provide effective input with their head, with their hands, and while holding objects; that multiple controls can be effectively distinguished by differences in the presented phase and direction of movement; and that users choose and switch modes of input seamlessly.
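As a small illustration of how controls presented at the same speed can still be told apart by phase, the sketch below correlates a user's (synthetic) circular motion against two orbiting controls that differ only in phase offset; only the control actually being mimicked scores highly. All values are assumptions for the example, not taken from the evaluation.

```python
import numpy as np

fps, seconds = 30, 3
t = np.arange(fps * seconds) / fps
freq = 0.5                                    # both controls orbit at 0.5 Hz

def control(phase):
    a = 2 * np.pi * freq * t + phase
    return np.cos(a), np.sin(a)

ctrl_a = control(phase=0.0)
ctrl_b = control(phase=np.pi)                 # half a revolution out of phase

# User mimics control A with a little motion noise.
rng = np.random.default_rng(4)
user = (ctrl_a[0] + rng.normal(0, 0.1, t.size),
        ctrl_a[1] + rng.normal(0, 0.1, t.size))

def score(user_xy, ctrl_xy):
    return min(np.corrcoef(user_xy[0], ctrl_xy[0])[0, 1],
               np.corrcoef(user_xy[1], ctrl_xy[1])[0, 1])

print(f"control A score: {score(user, ctrl_a):.2f}")   # high: this is the mimicked control
print(f"control B score: {score(user, ctrl_b):.2f}")   # low: anti-phase motion
```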

    Updating During Lateral Movement Using Visual and Non-Visual Motion Cues

Spatial updating, the ability to track the egocentric positions of surrounding objects during self-motion, is fundamental to navigating the world. Past studies show that people make systematic errors when updating after linear self-motion. To determine the source of these errors, I measured errors in remembered target position with and without passive lateral movements. I also varied the visual (Oculus Rift) and physical (motion platform) self-motion feedback. In general, people remembered targets as less eccentric than they were, with greater underestimation for more eccentric targets. They could use physical cues for updating, but they made larger errors than when they had only visual cues. Visual motion cues alone were enough to produce updating, and physical cues were not needed when visual cues were available. People also remembered targets within the range of movement as closer to the position at which they were perceived before moving. However, an individual's perceived distance of the target did not affect their updating.