Assessing the effectiveness of multi-touch interfaces for DP operation
Navigating a vessel with a dynamic positioning (DP) system close to offshore installations is challenging. The operator's only means of manipulating the system is its interface, which comprises the physical equipment and the visualization of the system. Are there forms of interaction between operator and system that can reduce strain and cognitive load during DP operations? Can parts of the system (e.g. displays) be physically brought closer to the user to enhance the feeling of control? Can such changes make DP operations more efficient and safer? These questions inspired this research project, which investigates the use of multi-touch and hand gestures, familiar from consumer products, to directly manipulate the visualization of a vessel in the 3D scene of a DP system. Usability methodologies and evaluation techniques widely used in consumer market research were applied to investigate how these interaction techniques, new to the maritime domain, could make interaction with the DP system more efficient and transparent during both standard and safety-critical operations. After user tests with a paper prototype established which gestures felt natural, the gestures were implemented in a Rolls-Royce DP system and tested in a static environment. The results showed that participants performed significantly faster with direct gesture manipulation than with traditional button/menu interaction. To corroborate these results, further tests investigated how gestures are performed in a moving environment, using a motion platform to simulate rough sea conditions. This paper discusses the key results and lessons learned from a collection of four user experiments, together with the choice of evaluation techniques.
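The abstract describes mapping familiar multi-touch gestures onto direct manipulation of the vessel's 3D view. A minimal sketch of how such a mapping could work is shown below; the function names, sensitivity value, and zoom limits are illustrative assumptions, not details of the Rolls-Royce DP system.

```python
def pinch_zoom(dist_prev, dist_curr, zoom):
    """Map a two-finger pinch to camera zoom (hypothetical mapping).

    dist_prev/dist_curr: distance in pixels between the two touch
    points in successive frames; zoom: current camera zoom factor.
    """
    if dist_prev <= 0:
        return zoom
    # Scale zoom by the ratio of finger distances, clamped to a sane range.
    return min(10.0, max(0.1, zoom * dist_curr / dist_prev))

def drag_rotate(dx, dy, yaw, pitch, sensitivity=0.4):
    """Map a one-finger drag (in pixels) to yaw/pitch of the 3D view."""
    yaw = (yaw + dx * sensitivity) % 360.0
    # Clamp pitch so the camera cannot flip over the vertical axis.
    pitch = max(-89.0, min(89.0, pitch + dy * sensitivity))
    return yaw, pitch
```

Clamping both zoom and pitch is the kind of constraint that keeps direct manipulation predictable, which matters when the same gestures must remain usable in safety-critical operations.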
Tangible user interfaces : past, present and future directions
In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.
The Treachery of Images: Bayesian Scene Keypoints for Deep Policy Learning in Robotic Manipulation
In policy learning for robotic manipulation, sample efficiency is of paramount importance. Thus, learning and extracting more compact representations from camera observations is a promising avenue. However, current methods often assume full observability of the scene and struggle with scale invariance. In many tasks and settings, this assumption does not hold, as objects in the scene are often occluded or lie outside the field of view of the camera, rendering the camera observation ambiguous with regard to their location. To tackle this problem, we present BASK, a Bayesian approach to tracking scale-invariant keypoints over time. Our approach successfully resolves inherent ambiguities in images, enabling keypoint tracking on symmetrical objects and occluded and out-of-view objects. We employ our method to learn challenging multi-object robot manipulation tasks from wrist camera observations and demonstrate superior utility for policy learning compared to other representation learning techniques. Furthermore, we show outstanding robustness towards disturbances such as clutter, occlusions, and noisy depth measurements, as well as generalization to unseen objects both in simulation and real-world robotic experiments.
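The core idea of resolving ambiguity by accumulating evidence over time can be illustrated with a generic discrete Bayes filter over candidate keypoint locations. This is a sketch of the filtering principle only, not the BASK method itself; the cell discretization and likelihood values are made up for illustration.

```python
def bayes_update(prior, likelihood):
    """One Bayesian measurement update over discretized candidate
    keypoint locations.

    prior: list of probabilities over N candidate cells.
    likelihood: list of p(observation | keypoint in cell) values.
    """
    # Pointwise product of prior and likelihood, then renormalize.
    posterior = [p * l for p, l in zip(prior, likelihood)]
    total = sum(posterior)
    if total == 0:
        # Observation inconsistent with every cell; keep the prior.
        return list(prior)
    return [p / total for p in posterior]

# A single ambiguous view: two symmetric cells are equally likely.
prior = [0.25, 0.25, 0.25, 0.25]
post1 = bayes_update(prior, [0.5, 0.0, 0.5, 0.0])
# A second view is only consistent with cell 0 among the survivors,
# so the symmetry is resolved.
post2 = bayes_update(post1, [0.9, 0.1, 0.0, 0.0])
```

After the first update the posterior is split evenly between the two symmetric cells; the second observation eliminates one of them, which is the sense in which sequential Bayesian updates disambiguate symmetric or partially observed objects.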