Barehand Mode Switching in Touch and Mid-Air Interfaces
Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects that productivity. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
Comparing Hand Gestures and a Gamepad Interface for Locomotion in Virtual Environments
Hand gestures are a new and promising interface for locomotion in virtual
environments. While several previous studies have proposed hand gestures for
virtual locomotion, little is known about their differences in terms of
performance and user preference in virtual locomotion tasks. In this paper,
we present three hand gesture interfaces and their
algorithms for locomotion, which are called the Finger Distance gesture, the
Finger Number gesture and the Finger Tapping gesture. These gestures were
inspired by previous studies of gesture-based locomotion interfaces and are
typical gestures that people are familiar with in their daily lives.
Implementing these hand gesture interfaces in the present study enabled us to
systematically compare the differences between these gestures. In addition, to
compare the usability of these gestures to locomotion interfaces using
gamepads, we also designed and implemented a gamepad interface based on the
Xbox One controller. We conducted empirical studies to compare these four
interfaces through two virtual locomotion tasks. A desktop setup was used
instead of a shared head-mounted display owing to concerns about Covid-19.
Through these tasks, we assessed the performance and user preference of these
interfaces for speed control and waypoint navigation.
Results showed that the user preference and performance of the Finger Distance
gesture were close to those of the gamepad interface. The Finger Number gesture
also had performance and user preference close to those of the Finger Distance
gesture. Our study demonstrates that the Finger Distance gesture and the Finger
Number gesture are very promising interfaces for virtual locomotion. We also
discuss why the Finger Tapping gesture needs further improvement before it can
be used for virtual walking.
Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR
3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, generating an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, and thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction: while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs.
In this thesis, we aim to generate approaches that better exploit human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focusses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction.
We specifically focus on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices; (2) fatigue in mid-air object manipulation; (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection.
Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focusses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to address user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method for the drift effects involved in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds). Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines to design more natural 3DUIs.
Two Hand Gesture Based 3D Navigation in Virtual Environments
Natural interaction is gaining popularity due to its simple, attractive, and realistic nature, which realizes direct Human Computer Interaction (HCI). In this paper, we present a novel two-hand-gesture-based interaction technique for three-dimensional (3D) navigation in Virtual Environments (VEs). The system uses computer vision techniques to detect hand gestures (colored thumbs) in the real scene and performs different navigation tasks (forward, backward, up, down, left, and right) in the VE. The proposed technique also allows users to efficiently control speed during navigation. The technique was implemented in a VE for experimental purposes, and forty (40) participants took part in the experimental study. Experiments revealed that the proposed technique is feasible, easy to learn and use, and imposes little cognitive load on users. Finally, gesture recognition engines were used to assess the accuracy and performance of the proposed gestures: kNN achieved a higher accuracy rate (95.7%) than SVM (95.3%), and also performed better in terms of training time (3.16 secs) and prediction speed (6600 obs/sec) compared to SVM's 6.40 secs and 2900 obs/sec.
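The abstract reports kNN slightly outperforming SVM for gesture classification. A minimal k-nearest-neighbour classifier of the kind compared here can be sketched as below; the feature vectors, labels, and `k` value are synthetic stand-ins for illustration, not the study's data.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.

    Returns the majority label among the k training samples nearest to
    query under squared Euclidean distance.
    """
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 2D features (e.g. normalized thumb positions) for two navigation gestures.
train = [((0.1, 0.1), "forward"), ((0.2, 0.0), "forward"),
         ((0.9, 0.8), "backward"), ((1.0, 0.9), "backward"),
         ((0.8, 1.0), "backward")]
print(knn_predict(train, (0.15, 0.05)))  # -> forward
```

In practice a library implementation (e.g. scikit-learn's `KNeighborsClassifier`) would be used; this sketch only shows the decision rule the reported accuracy figures rest on.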
3D Multi-user interactive visualization with a shared large-scale display
When multiple users interact with a virtual environment on a large-scale
display, there are several issues that need to be addressed to facilitate the
interaction. In this thesis, three main topics for collaborative visualization
are discussed: display setup, interactive visualization, and visual fatigue.
The problems that the author addresses in this thesis are how multiple users
can interact with a shared large-scale display depending on the display setup,
and how they can interact with the shared visualization in a way that does not
lead to visual fatigue.
The first user study (Chapter 3) explores display setups for multi-user
interaction with a shared large display. The author describes the design of
three main display setups (a shared view, a split screen, and a split screen
with navigation information) and a demonstration using these setups. The user
study found that the split screen and the split screen with navigation
information can improve users' confidence, reduce frustration, and are
preferred over a shared view. However, a shared view can still provide
effective interaction and collaboration, and the display setup does not have
a large impact on usability and workload.
Following the first study, the author employed a shared view for multi-user
interactive visualization with a shared large-scale display because of the
advantages of the shared view. To improve interactive visualization with a
shared view for multiple users, the author designed and conducted the second
user study (Chapter 4). A conventional interaction technique, the mean
tracking method, was not effective for more than three users. To overcome the
limitations of current multi-user interactive visualization techniques, two
new techniques (the Object Shift Technique and the Activity-based Weighted
Mean Tracking method) were developed and evaluated in the second user study.
The Object Shift Technique translates the virtual objects in the direction
opposite to the movement of the Point of View (PoV), and the Activity-based
Weighted Mean Tracking method assigns a higher weight to active users than to
stationary users when determining the location of the PoV. The results of the
user study showed that these techniques can support collaboration, improve
interactivity, and induce visual discomfort comparable to the conventional
method.
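The Activity-based Weighted Mean Tracking update described above can be sketched as a weighted mean of tracked user positions, with moving ("active") users weighted more heavily than stationary ones. The weight values, the movement threshold, and the use of 1D positions are illustrative assumptions, not the thesis's parameters.

```python
def weighted_pov(users, active_w=3.0, idle_w=1.0, move_thresh=0.05):
    """Compute the PoV location from a list of (position, displacement) pairs.

    position: the user's tracked position (1D here, for simplicity).
    displacement: how far that user moved since the last frame (metres).
    Users whose displacement exceeds move_thresh count as active and get
    a larger weight, pulling the PoV toward them.
    """
    weights = [active_w if abs(d) > move_thresh else idle_w for _, d in users]
    total = sum(weights)
    return sum(w * p for (p, _), w in zip(users, weights)) / total

# One walking user (displacement 0.2 m) pulls the PoV toward themselves,
# while two stationary users contribute less than a plain mean would give.
print(weighted_pov([(0.0, 0.0), (2.0, 0.0), (10.0, 0.2)]))  # -> 6.4
```

With equal weights this reduces to the conventional mean tracking method (the plain mean here would be 4.0), which is the behaviour the thesis found ineffective for more than three users.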
The third study (Chapter 5) describes how to reduce visual fatigue in 3D
stereoscopic visualization with a single point of view (PoV). When multiple
users interact with 3D stereoscopic VR using multi-user interactive
visualization techniques and are close to the virtual objects, they can
perceive 3D visual fatigue caused by the large disparity. To reduce this
fatigue, an Adaptive Interpupillary Distance (Adaptive IPD) adjustment
technique was developed. To evaluate the Adaptive IPD method, the author
compared it to traditional stereoscopic and monoscopic visualization
techniques. The user experiments confirmed that the proposed method can
reduce visual discomfort yet maintain compelling depth perception, providing
the most preferred 3D stereoscopic visualization experience.
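One plausible reading of the Adaptive IPD adjustment is a rendered interpupillary distance that shrinks as the viewer approaches the nearest virtual object, limiting the on-screen disparity that causes fatigue. The distance thresholds, the default IPD, and the linear interpolation below are assumptions for illustration only.

```python
def adaptive_ipd(nearest_obj_dist, full_ipd=0.064, near=0.3, far=1.5):
    """Return the IPD (metres) to render with, given the distance (metres)
    to the nearest virtual object.

    Beyond `far`, full stereo separation is used; within `near`, rendering
    collapses toward monoscopic (zero disparity); in between, the IPD is
    linearly interpolated. All parameter values are illustrative.
    """
    if nearest_obj_dist >= far:   # far content: full stereo separation
        return full_ipd
    if nearest_obj_dist <= near:  # very close content: collapse toward mono
        return 0.0
    t = (nearest_obj_dist - near) / (far - near)
    return t * full_ipd

print(adaptive_ipd(2.0))  # -> 0.064
print(adaptive_ipd(0.2))  # -> 0.0
```

This matches the comparison in the study: the two extremes of the interpolation are exactly the traditional stereoscopic (full IPD) and monoscopic (zero IPD) baselines.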
For these studies, the author developed a software framework and designed a
set of experiments (Chapter 6). The framework architecture, which contains
the three main ideas, is described. A demonstration application for
multi-dimensional decision making was developed using the framework.
The primary contributions of this thesis include a literature review of
multi-user interaction with a shared large-scale display; deeper insights
into three display setups for multi-user interaction; development of the
Object Shift Technique, the Activity-based Weighted Mean Tracking method, and
the Adaptive Interpupillary Distance adjustment technique; the evaluation of
these three novel interaction techniques; and the development of a framework
supporting multi-user interaction with a shared large-scale display and its
application to a multi-dimensional decision-making VR system.
Exploring Robot Teleoperation in Virtual Reality
This thesis presents research on VR-based robot teleoperation, focusing on remote-environment visualisation in virtual reality, the effect of the remote environment's reconstruction scale on the human operator's ability to control the robot, and the operator's visual attention patterns when teleoperating a robot from virtual reality.
A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing teleoperation and supervised control of any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices.
Point clouds are a common way to visualise remote environments in 3D, but they often have distortions and occlusions, making it difficult to accurately represent objects' textures. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using a relatively small bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation.
Two studies have been conducted to understand the effect of virtual world dynamic scaling on teleoperation flow. The first study investigated the use of rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively but at the cost of increased perceived workload.
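The constant versus variable rate-mode mappings compared in the first study can be sketched as follows: constant mapping applies a fixed gain from joystick deflection to end-effector speed, while variable mapping scales that gain by the current virtual-world scale factor. The gain value and function name are assumptions, not the study's implementation.

```python
def rate_command(joystick, world_scale, variable=True, base_gain=0.10):
    """Map joystick deflection to an end-effector speed command (m/s).

    joystick: deflection in [-1, 1].
    world_scale: current virtual-world scale factor (1.0 = life size).
    With variable=True the gain grows with the world scale, so the same
    deflection moves the end-effector faster in a zoomed-out world.
    """
    gain = base_gain * world_scale if variable else base_gain
    return joystick * gain

# At 2x world scale the variable mapping doubles the commanded speed,
# while the constant mapping ignores the scale.
print(rate_command(0.5, 2.0, variable=True))   # -> 0.1
print(rate_command(0.5, 2.0, variable=False))  # -> 0.05
```

Scaling the gain keeps the end-effector's apparent speed in the scaled VR view roughly constant, which is consistent with the reported finding that variable mapping was more effective but felt more demanding.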
The second study examined how operators used the virtual world scale in supervised control, comparing the scales participants chose at the beginning and end of a 3-day experiment. The results showed that as operators became more proficient at the task they, as a group, used a different virtual world scale, and that participants' prior video-gaming experience also affected the scale they chose.
Similarly, a visual attention study investigated how the human operator's visual attention changes as they become better at teleoperating a robot using the framework.
The results revealed which objects in the VR-reconstructed remote environment were most important, as indicated by operators' visual attention patterns, and showed how their visual priorities shifted as they became better at teleoperating the robot. The study also demonstrated that operators' prior video-gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions
Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free-space or full-body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction design research for such systems is still focused on desktop and touch interactions. Current thinking on free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, such that first-time users can use the system without any training; flexible; and not fatiguing, especially for longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, there is a need to understand which components of these gestures are semantically meaningful (i.e. do users distinguish between using their left and right hand, or are those semantically the same thing?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on the features most important to users' mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g. changes in speed are meaningful for scroll actions in walk-up-and-use displays but not for paging or selection).
The major contributions of this work are: (1) an elicitation methodology that aids users in overcoming biases from existing interaction modalities; (2) a better understanding of the gestural features that matter, i.e. those that capture the intent of a gesture; and (3) generalizable design principles for walk-up-and-use public displays.
Doctoral Dissertation, Computer Science, 201
Multi-touch interaction with stereoscopically rendered 3D objects
Initially considered mainly in a 2D context, multi-touch interfaces are gaining importance in three-dimensional environments and, in recent years, also in connection with stereoscopic visualizations. However, touch-based interaction with stereoscopically displayed objects is problematic, because the objects appear to float near the display surface while touch points can only be robustly detected on direct contact with the display. This thesis examines the problems of touch interaction in stereoscopic environments and develops interaction concepts in this context. In particular, the applicability of different perceptual illusions to 3D touch interaction with stereoscopically displayed objects is investigated in a series of psychological experiments. Based on the experimental data, several practical interaction techniques are developed and evaluated for their applicability.
While touch technology has proven its usability for 2D interaction and has already become a standard input modality for many devices, the challenges of exploiting its applicability with stereoscopically rendered content have barely been studied. In this thesis we exploit different hardware- and perception-based techniques to allow users to touch stereoscopically displayed objects when the input is constrained to a 2D surface. We analyze the relation between the 3D positions of stereoscopically displayed objects and the on-surface touch points where users touch the interactive surface, and we have conducted a series of experiments to investigate the user's ability to discriminate small induced shifts while performing a touch gesture. The results were then used to design practical interaction techniques suitable for numerous application scenarios.