Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR
3D interaction provides a natural form of interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, generating an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, and thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction: while 3D interaction enables naturalness, it also introduces complexity and limitations in the use of 3DUIs.
In this thesis, we aim to generate approaches that better exploit the high potential of human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focuses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction.
We specifically focused on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices; (2) fatigue in mid-air object manipulation; (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection.
Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focuses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis of, and a correction method for, the drift effects involved in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds). Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines to design more natural 3DUIs.
Advancing proxy-based haptic feedback in virtual reality
This thesis advances haptic feedback for Virtual Reality (VR). Our work is guided by Sutherland's 1965 vision of the ultimate display, which calls for VR systems to control the existence of matter. To push towards this vision, we build upon proxy-based haptic feedback, a technique characterized by the use of passive tangible props. The goal of this thesis is to tackle the central drawback of this approach, namely its inflexibility, which still hinders it from fulfilling the vision of the ultimate display. Guided by four research questions, we first showcase the applicability of proxy-based VR haptics by employing the technique for data exploration. We then extend the VR system's control over users' haptic impressions in three steps. First, we contribute the class of Dynamic Passive Haptic Feedback (DPHF), alongside two novel concepts for conveying kinesthetic properties, such as virtual weight and shape, through weight-shifting and drag-changing proxies. Conceptually orthogonal to this, we study how visual-haptic illusions can be leveraged to unnoticeably redirect the user's hand when reaching towards props. Here, we contribute a novel perception-inspired algorithm for Body Warping-based Hand Redirection (HR), an open-source framework for HR, and psychophysical insights. The thesis concludes by showing that the combination of DPHF and HR can outperform the individual techniques in terms of the achievable flexibility of proxy-based haptic feedback.
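Body warping-based hand redirection of the kind named above is commonly formalized as shifting the virtual hand by a fraction of the real-to-virtual target offset, proportional to the real hand's progress toward the real target. The following minimal sketch illustrates that general idea only; it is not the thesis's algorithm, and all function and variable names are assumptions.

```python
import numpy as np

def warped_hand_position(real_hand, origin, real_target, virtual_target):
    """Illustrative body-warping sketch (names are hypothetical).

    The virtual hand is offset toward the virtual target in proportion
    to how far the real hand has travelled from its starting point
    toward the real target, so the drift accumulates unnoticeably.
    """
    total = np.linalg.norm(real_target - origin)
    travelled = np.linalg.norm(real_hand - origin)
    # Progress ratio in [0, 1]; full offset once the real target is reached.
    progress = np.clip(travelled / total, 0.0, 1.0) if total > 0 else 1.0
    offset = virtual_target - real_target
    return real_hand + progress * offset
```

At the start of the reach the virtual and real hands coincide; when the real hand arrives at the real target, the virtual hand has been fully redirected to the virtual target.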
Interfaces for human-centered production and use of computer graphics assets
The abstract is provided in the attachment.
Learning Generalizable Dexterous Manipulation
Dexterous manipulation using multi-fingered robotic hands is a crucial area in robotics, aimed at performing intricate tasks with various objects in everyday environments. However, this field presents significant challenges. Modeling the complex contact patterns between a dexterous hand and manipulated objects is difficult, hindering the effectiveness of model-based control methods. Furthermore, the high number of Degrees of Freedom (DoF) in the hand's joints dramatically increases the complexity of training data-driven policies for dexterous manipulation. This dissertation addresses the challenging task of learning highly generalizable dexterous manipulation skills applicable across diverse scenarios. We investigate two principal directions to enhance the learning capabilities of dexterous manipulation. First, we leverage the inherent structural similarities between human and robotic hands, employing human data to guide robot manipulation skills. This approach is motivated by the bio-inspired design of dexterous hands, which offers a unique opportunity to learn from human demonstrations. To facilitate efficient data collection, we develop AnyTeleop, a general vision-based teleoperation system for dexterous robot arm-hand systems. AnyTeleop utilizes readily available devices like web cameras to provide a versatile interface for teleoperating various arm-hand systems. Furthermore, we introduce CyberDemo, a data augmentation technique that expands the original human demonstrations, generating a dataset hundreds of times larger than the initial set. This approach allows for training policies capable of handling a wider range of scenarios without requiring additional human effort. Second, we explore the potential of using vast amounts of simulated data to learn dexterous manipulation policies. The primary challenge in this direction lies in bridging the domain gap between simulation and the real world, encompassing both dynamics and visual discrepancies.
This sim-to-real gap is particularly pronounced for high-DoF dexterous hands. To address this, we propose a sim-to-real reinforcement learning framework, DexPoint, that leverages point cloud and proprioceptive data. This framework integrates multi-modal sensory information into a unified 3D space, preserving the spatial relationships between robot components, sensors, and manipulated objects. This unified representation enables faster policy learning in simulation and smoother transfer to real-world applications.
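A unified 3D observation of the kind described above can be approximated by merging camera-observed scene points with points sampled from the robot's own link geometry, tagged with their origin. The sketch below is only a schematic illustration of that idea under assumed shapes and names; it is not DexPoint's actual implementation.

```python
import numpy as np

def fused_point_cloud_observation(scene_points, robot_link_points, proprio_state):
    """Illustrative unified 3D observation (names are hypothetical).

    Scene points (from cameras) and robot points (sampled from link
    meshes posed via forward kinematics using proprioception) are
    concatenated into one cloud; a one-hot feature marks each point's
    origin so a policy can distinguish robot from environment.
    """
    scene = np.hstack([scene_points, np.tile([1.0, 0.0], (len(scene_points), 1))])
    robot = np.hstack([robot_link_points, np.tile([0.0, 1.0], (len(robot_link_points), 1))])
    cloud = np.vstack([scene, robot])        # (N, 3 + 2): xyz plus origin flags
    return cloud, np.asarray(proprio_state)  # cloud plus raw joint readings
```

Keeping both modalities in one spatial frame is what preserves the relative geometry between hand, sensors, and object that the abstract highlights.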
VR Lab: User Interaction in Virtual Environments using Space and Time Morphing
Virtual Reality (VR) allows exploring changes in space and time that would otherwise be difficult to simulate in the real world. It becomes possible to transform the virtual world by increasing or diminishing distances or playing with time delays. Analysing the adaptability of users to different space-time conditions allows studying human perception and finding the right combination of interaction paradigms. Different methods have been proposed in the literature to offer users intuitive techniques for navigating wide virtual spaces, even if restricted to small physical play areas. Other studies investigate latency tolerance, suggesting humans' inability to detect slight discrepancies between visual and proprioceptive sensory information. These studies contribute valuable insights for designing immersive virtual experiences and interaction techniques suitable for each task.
This dissertation presents the design, implementation, and evaluation of a tangible VR Lab where spatiotemporal morphing scenarios can be studied. As a case study, we restricted the scope of the research to three spatial morphing scenarios and one temporal morphing scenario. The spatial morphing scenarios compared Euclidean and hyperbolic geometries, studied size discordance between physical and virtual objects, and examined the representation of hands in VR. The temporal morphing scenario investigated the visual delay from which task performance is affected. The users' adaptability to the different spatiotemporal conditions was assessed based on task completion time, questionnaires, and observed behaviours.
The results revealed significant differences between Euclidean and hyperbolic spaces. They also showed a preference for handling virtual and physical objects with concordant sizes, without any virtual representation of the hands. Although task performance was affected from 200 ms onwards, participants considered the ease of the task to be affected only from a 500 ms visual delay onwards.
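A visual-delay manipulation like the one studied above can, in principle, be implemented as a frame FIFO that releases each rendered frame a fixed number of frames late. The following minimal sketch is illustrative only and not taken from the dissertation; the class name and parameters are assumptions.

```python
from collections import deque

class VisualDelayBuffer:
    """Illustrative fixed visual-latency buffer (hypothetical design).

    Holds rendered frames in a FIFO and releases each one delay_frames
    later, e.g. a 200 ms delay at a 90 Hz refresh rate corresponds to
    roughly 18 frames of buffering.
    """

    def __init__(self, delay_frames):
        self.delay_frames = delay_frames
        self.buffer = deque()

    def push(self, frame):
        """Store the newest frame; return the frame to display, if any."""
        self.buffer.append(frame)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()  # frame captured delay_frames ago
        return None  # buffer still filling: nothing to display yet
```

Varying `delay_frames` per condition is one straightforward way to expose participants to the 200 ms and 500 ms delays the study reports.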
All Hands on Deck: Choosing Virtual End Effector Representations to Improve Near Field Object Manipulation Interactions in Extended Reality
Extended reality, or XR, is the adopted umbrella term, now heavily gaining traction, that collectively describes Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) technologies. Together, these technologies extend the reality that we experience, either by creating a fully immersive experience, as in VR, or by blending the virtual and real worlds, as in AR and MR.
The sustained success of XR in the workplace largely hinges on its ability to facilitate efficient user interactions. As when interacting with objects in the real world, users in XR typically interact with virtual elements such as objects, menus, windows, and information that combine to form the overall experience. Most of these interactions involve near-field object manipulation, for which users are generally provided with visual representations of themselves, also called self-avatars. Representations that involve only the distal entity are called end-effector representations, and they shape how users perceive XR experiences.
Through a series of investigations, this dissertation evaluates the effects of virtual end-effector representations on near-field object retrieval interactions in XR settings. Through studies conducted in virtual, augmented, and mixed reality, implications for the virtual representation of end-effectors are discussed, and inferences are made for the future of near-field interaction in XR to draw upon. This body of research aids technologists and designers by providing details that help in tailoring the right end-effector representation to improve near-field interactions, thereby establishing knowledge that shapes the future of interactions in XR.
Docking Haptics: Extending the Reach of Haptics by Dynamic Combinations of Grounded and Worn Devices
Grounded haptic devices can provide a variety of forces but have limited working volumes. Wearable haptic devices operate over a large volume but are relatively restricted in the types of stimuli they can generate. We propose the concept of docking haptics, in which different types of haptic devices are dynamically docked at run time. This creates a hybrid system, where the potential feedback depends on the user's location. We show a prototype docking haptic workspace, combining a grounded six degree-of-freedom force-feedback arm with a hand exoskeleton. We are able to create the sensation of weight on the hand when it is within reach of the grounded device, while away from the grounded device, hand-referenced force feedback is still available. A user study demonstrates that users can successfully discriminate weight when using docking haptics, but not with the exoskeleton alone. Such hybrid systems could change configuration further, for example docking two grounded devices to a hand to deliver twice the force or to extend the working volume. We suggest that the docking haptics concept can thus extend the practical utility of haptics in user interfaces.
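The run-time docking decision described above reduces, at its simplest, to checking whether the hand lies inside the grounded arm's working volume. The sketch below illustrates only that selection logic under assumed names; it is not the paper's implementation.

```python
import numpy as np

def select_haptic_mode(hand_pos, grounded_base, reach_radius):
    """Illustrative docking decision (names are hypothetical).

    Inside the grounded arm's working volume, the worn exoskeleton can
    dock to the arm so grounded forces such as weight are rendered
    through it; outside that volume, only hand-referenced exoskeleton
    feedback remains available.
    """
    distance = np.linalg.norm(np.asarray(hand_pos, dtype=float)
                              - np.asarray(grounded_base, dtype=float))
    if distance <= reach_radius:
        return "docked: grounded + exoskeleton"
    return "undocked: exoskeleton only"
```

A real system would add hysteresis around the boundary so the configuration does not flicker as the hand crosses the edge of the working volume.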
The Rocketbox Library and the Utility of Freely Available Rigged Avatars
As part of the open sourcing of the Microsoft Rocketbox avatar library for research and academic purposes, here we discuss the importance of rigged avatars for the Virtual and Augmented Reality (VR, AR) research community. Avatars, virtual representations of humans, are widely used in VR applications. Furthermore, many research areas ranging from crowd simulation to neuroscience, psychology, or sociology have used avatars to investigate new theories or to demonstrate how they influence human performance and interactions. We divide this paper into two main parts: the first gives an overview of the different methods available to create and animate avatars. We cover the current main alternatives for face and body animation, as well as upcoming capture methods. The second part presents the scientific evidence for the utility of rigged avatars for embodiment, but also for applications such as crowd simulation and entertainment. All in all, this paper attempts to convey why rigged avatars will be key to the future of VR and its wide adoption.
Beyond the Screen: Reshaping the Workplace with Virtual and Augmented Reality
Although extended reality technologies have enjoyed an explosion in popularity in recent years, few applications are effectively used outside the entertainment or academic contexts. This work consists of a literature review regarding the effective integration of such technologies in the workplace. It aims to provide an updated view of how they are being used in that context. First, we examine existing research concerning virtual, augmented, and mixed-reality applications. We also analyze which have made their way into the workflows of companies and institutions. Furthermore, we circumscribe the aspects of extended reality technologies that determine this applicability.