Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface
Most current CAD systems support only the two most common input devices, a mouse and a keyboard, which imposes a limit on the degree of interaction that a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand is used to orient the object while the other hand is used to perform some operation on it. The same approach can be applied to computer modelling in the conceptual phase of the design process. A designer can rotate and position an object with one hand and manipulate the shape (deform it) with the other. Accordingly, the 3D object can be easily and intuitively changed through interactive manipulation with both hands. The research investigates the manipulation and creation of free-form geometries through the use of interactive interfaces with multiple input devices. First, the creation of the 3D model will be discussed; several different types of models will be illustrated. Furthermore, different tools that allow the user to control the 3D model interactively will be presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally, it will be demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
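The bimanual split described above (one hand orienting, the other deforming) can be caricatured in a few lines: the sketch below applies a rotation driven by a hypothetical orienting hand and a non-uniform scale driven by a deforming hand. The function name and the input mapping are illustrative assumptions, not the paper's implementation.

```python
import math

def apply_bimanual_input(vertices, orient_angle, deform_scale):
    """Rotate the model about the z-axis from the orienting hand's
    input, then stretch it along x from the deforming hand's input.
    Both parameters stand in for tracked hand states (assumption)."""
    cos_a, sin_a = math.cos(orient_angle), math.sin(orient_angle)
    result = []
    for x, y, z in vertices:
        # Orienting hand: rotate the vertex about the z-axis.
        rx = x * cos_a - y * sin_a
        ry = x * sin_a + y * cos_a
        # Deforming hand: non-uniform scale along x.
        result.append((rx * deform_scale, ry, z))
    return result

# Two vertices, rotated 90 degrees and stretched 2x along x.
print(apply_bimanual_input([(1, 0, 0), (0, 1, 0)], math.pi / 2, 2.0))
```

In a real system both inputs would arrive concurrently from separate devices; here they are passed as plain arguments to keep the mapping visible.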
Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
A major challenge for the realization of intelligent robots is to supply them
with cognitive abilities in order to allow ordinary users to program them
easily and intuitively. One way of such programming is teaching work tasks by
interactive demonstration. To make this effective and convenient for the user,
the machine must be capable of establishing a common focus of attention and be
able to use and integrate spoken instructions, visual perceptions, and
non-verbal cues such as gestural commands. We report progress in building a
hybrid architecture that combines statistical methods, neural networks, and
finite state machines into an integrated system for instructing grasping tasks
by man-machine interaction. The system combines the GRAVIS-robot for visual
attention and gestural instruction with an intelligent interface for speech
recognition and linguistic interpretation, and a modality fusion module to
allow multi-modal, task-oriented man-machine communication with respect to
dexterous robot manipulation of objects.
Comment: 7 pages, 8 figures
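The finite-state component of such a hybrid architecture can be pictured as a small transition table driven by events from the different modalities. The states and event names below are invented for illustration and stand in for the far richer GRAVIS machinery.

```python
# Minimal finite-state sketch of a task-oriented instruction dialogue.
# State and event names are illustrative assumptions, not GRAVIS's.
TRANSITIONS = {
    ("idle", "speech:attend"): "attending",
    ("attending", "gesture:point"): "object_selected",
    ("object_selected", "speech:grasp"): "grasping",
    ("grasping", "vision:grasp_done"): "idle",
}

def step(state, event):
    # Unknown events leave the state unchanged (tolerates modality noise).
    return TRANSITIONS.get((state, event), state)

state = "idle"
for ev in ["speech:attend", "gesture:point", "speech:grasp", "vision:grasp_done"]:
    state = step(state, ev)
print(state)  # prints "idle" after a completed grasp cycle
```

In the real system each event would carry fused confidence information from the statistical and neural components; the table only shows how the state machine sequences the interaction.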
Exploring the Front Touch Interface for Virtual Reality Headsets
In this paper, we propose a new interface for virtual reality headsets: a
touchpad on the front of the headset. To demonstrate the feasibility of the front
touch interface, we built a prototype device, explored the expansion of the VR UI
design space, and performed various user studies. We started with preliminary
tests to see how intuitively and accurately people can interact with the front
touchpad. Then, we further experimented with various user interfaces such as
binary selection, a typical menu layout, and a keyboard. Two-Finger and
Drag-n-Tap were also explored to find the appropriate selection technique. As a
low-cost, light-weight, and low-power technology, a touch sensor can
make an ideal interface for mobile headsets. Also, the front touch area can be large
enough to allow a wide range of interaction types such as multi-finger
interactions. With this novel front touch interface, we pave the way to new
virtual reality interaction methods.
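The menu-layout condition described above can be imagined as quantizing a touch point on the front pad into grid cells. The mapping, pad dimensions, and grid size below are hypothetical sketches, not the study's actual implementation.

```python
def touch_to_menu_item(x, y, pad_width, pad_height, cols, rows):
    """Quantize a front-touchpad contact point into a menu cell index.
    All dimensions and the grid layout are illustrative assumptions."""
    col = min(int(x / pad_width * cols), cols - 1)
    row = min(int(y / pad_height * rows), rows - 1)
    return row * cols + col

# A 3x2 menu on a hypothetical 100x60 mm pad: top-left cell is item 0.
print(touch_to_menu_item(5, 5, 100, 60, 3, 2))    # prints 0
print(touch_to_menu_item(95, 55, 100, 60, 3, 2))  # prints 5
```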
An Introduction to 3D User Interface Design
3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3D interaction design, and on dispelling widely held myths. Finally, we briefly discuss two approaches to 3D interaction design and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
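The three task categories can be encoded directly, for instance as a lookup that routes techniques to the category they serve. The technique names below are common examples chosen for illustration, not an exhaustive list from the article.

```python
from enum import Enum

class TaskCategory(Enum):
    # The survey's three categories of 3D user interaction tasks.
    NAVIGATION = "navigation"
    SELECTION_MANIPULATION = "selection/manipulation"
    SYSTEM_CONTROL = "system control"

# Illustrative routing of familiar 3D techniques into those categories.
TECHNIQUE_CATEGORY = {
    "gaze-directed steering": TaskCategory.NAVIGATION,
    "ray-casting": TaskCategory.SELECTION_MANIPULATION,
    "floating menu": TaskCategory.SYSTEM_CONTROL,
}

print(TECHNIQUE_CATEGORY["ray-casting"].value)  # prints "selection/manipulation"
```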
An evaluation of discrete and continuous mid-air loop and marking menu selection in optical see-through HMDs
© 2019 Copyright held by the owner/author(s). This paper investigates discrete and continuous hand-drawn loops and marks in mid-air as a selection input for gesture-based menu systems on optical see-through head-mounted displays (OST HMDs). We explore two fundamental methods of providing menu selection, the marking menu and the loop menu, as well as a hybrid method that combines the two. The loop menu design uses a selection mechanism with loops to approximate directional selections in a menu system. We evaluate the merits of loop and marking menu selection in a two-phase experiment and report that 1) the loop-based selection mechanism provides smooth and effective interaction; 2) users prioritize accuracy and comfort over speed for mid-air gestures; 3) users can exploit the flexibility of a final hybrid marking/loop menu design; and, finally, 4) users tend to chunk gestures depending on the selection task and their level of familiarity with the menu layout.
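A directional, marking-menu-style selection ultimately reduces to binning the stroke direction into angular sectors. The sketch below assumes eight items with item 0 straight up and indices increasing clockwise, a common marking-menu convention rather than the paper's exact design.

```python
import math

def marking_menu_item(dx, dy, n_items=8):
    """Map a stroke direction (dx, dy; +y = up) to a menu item index,
    item 0 centered on 'up', indices increasing clockwise (assumed
    convention, not necessarily the paper's)."""
    angle = math.atan2(dx, dy) % (2 * math.pi)  # 0 rad = straight up
    sector = 2 * math.pi / n_items
    # Shift by half a sector so each item is centered on its direction.
    return int(((angle + sector / 2) % (2 * math.pi)) // sector)

print(marking_menu_item(0, 1))   # up    -> prints 0
print(marking_menu_item(1, 0))   # right -> prints 2
print(marking_menu_item(0, -1))  # down  -> prints 4
```

A loop-based variant would accumulate the stroke's turning rather than take a single direction, which is one reason the paper treats the two mechanisms separately.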
Real-Time Markerless Tracking of the Human Hands for 3D Interaction
This thesis presents methods for enabling suitable human-computer interaction using only movements of the bare human hands in free space. This kind of interaction is natural and intuitive, particularly because actions familiar from our everyday life can be reflected. Furthermore, the input is contact-free, which is of great advantage, e.g., in medical applications due to hygiene factors. For enabling the translation of hand movements into control signals, an automatic method for tracking the pose and/or posture of the hand is needed. In this context, the simultaneous recognition of both hands is desirable to allow for more natural input. The first contribution of this thesis is a novel video-based method for the real-time detection of the positions and orientations of both bare human hands in four different predefined postures, respectively. Based on such a system, novel interaction interfaces can be developed. However, the design of such interfaces is a non-trivial task. Additionally, the development of novel interaction techniques is often mandatory in order to enable the design of efficient and easily operable interfaces. To this end, several novel interaction techniques are presented and investigated in this thesis, which solve existing problems and substantially improve the applicability of such a new device. These techniques are not restricted to this input instrument and can also be employed to improve the handling of other interaction devices. Finally, several new interaction interfaces are described and analyzed to demonstrate possible applications in specific interaction scenarios.
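One simple way such a detector feeds an interface is through a posture-to-command table applied to each tracked hand. The four posture names below are invented placeholders for the thesis's predefined postures, and the command set is an assumption.

```python
# Hypothetical mapping from four predefined hand postures to commands;
# the posture labels are placeholders, not the thesis's actual set.
POSTURE_COMMANDS = {
    "flat_hand": "translate",
    "pointing": "select",
    "pinch": "grab",
    "fist": "rotate",
}

def interpret(posture, position):
    """Turn one tracked hand observation (posture label + 3D position)
    into an interaction command; unrecognized postures are idle."""
    return (POSTURE_COMMANDS.get(posture, "idle"), position)

print(interpret("pinch", (0.1, 0.2, 0.5)))  # prints ('grab', (0.1, 0.2, 0.5))
```

With both hands tracked simultaneously, two such observations per frame would drive bimanual techniques of the kind the thesis investigates.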
Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces
With the recently increasing capabilities of modern vehicles, novel
approaches for interaction have emerged that go beyond traditional touch-based and
voice command approaches. In particular, hand gestures, head pose, eye gaze, and
speech have been extensively investigated in automotive applications for object
selection and referencing. Despite these significant advances, existing
approaches mostly employ a one-model-fits-all strategy that is unsuitable for varying
user behavior and individual differences. Moreover, current referencing
approaches either consider these modalities separately or focus on a stationary
situation, whereas the situation in a moving vehicle is highly dynamic and
subject to safety-critical constraints. In this paper, I propose a research
plan for a user-centered adaptive multimodal fusion approach for referencing
external objects from a moving vehicle. The proposed plan aims to provide an
open-source framework for user-centered adaptation and personalization using
user observations and heuristics, multimodal fusion, clustering,
transfer learning for model adaptation, and continuous learning, moving
towards trusted human-centered artificial intelligence.
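At its simplest, the proposed personalization can be pictured as weighted late fusion: each modality scores candidate objects, and per-user weights adjust how much each modality counts. All names, scores, and weights below are illustrative assumptions, not the proposal's actual model.

```python
def fuse_modalities(scores, weights):
    """Weighted late fusion of per-object confidence scores from gaze,
    gesture, and speech. Per-user weights stand in for the proposed
    personalization; every number here is illustrative."""
    fused = {}
    for modality, per_object in scores.items():
        w = weights.get(modality, 1.0)  # unweighted modalities count fully
        for obj, score in per_object.items():
            fused[obj] = fused.get(obj, 0.0) + w * score
    return max(fused, key=fused.get)

# Hypothetical per-object confidences from three modalities.
scores = {
    "gaze":    {"building": 0.7, "sign": 0.3},
    "gesture": {"building": 0.4, "sign": 0.6},
    "speech":  {"building": 0.2, "sign": 0.8},
}
# A user whose gaze is most reliable gets gaze weighted highest.
user_weights = {"gaze": 0.5, "gesture": 0.3, "speech": 0.2}
print(fuse_modalities(scores, user_weights))  # prints "building"
```

With no personalization (all weights 1.0) the same scores would pick "sign"; the example shows how per-user weighting can flip the referenced object, which is the kind of individual difference the plan targets.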