    Natural freehand grasping of virtual objects for augmented reality

    Grasping is a primary form of interaction with the surrounding world, and the highly dexterous structure of the human hand makes it an inherently intuitive interaction technique. Translating this versatile technique to Augmented Reality (AR) gives interaction designers more opportunities to build intuitive and realistic AR applications. The work presented in this thesis uses quantifiable measures to evaluate the accuracy and usability of natural grasping of virtual objects in AR environments, and presents methods for improving this natural form of interaction. Following a review of physical grasping parameters and current methods of mediating grasping interactions in AR, a comprehensive analysis of natural freehand grasping of virtual objects in AR is presented to assess the accuracy, usability and transferability of this natural form of grasping to AR environments. The analysis spans four independent user studies (120 participants, 30 per study, and 5,760 grasping tasks in total), in which natural freehand grasping performance is assessed for a range of virtual object sizes, positions and types in terms of grasping accuracy, task completion time and overall system usability. Findings from the first user study highlighted two key problems for natural grasping in AR: inaccurate depth estimation and inaccurate size estimation of virtual objects. Following the quantification of these errors, three methods for mitigating user errors and assisting users during natural grasping were presented and analysed: dual view visual feedback, drop shadows, and additional visual feedback combined with user-based tolerances during interaction tasks. Dual view visual feedback significantly improved user depth estimation; however, it also significantly increased task completion time. Drop shadows provided a more usable alternative, significantly improving depth estimation, task completion time and the overall usability of natural grasping. User-based tolerances negated the fundamental problem of inaccurate size estimation by enabling users to grasp naturally without needing to be highly accurate, providing evidence that natural grasping can be usable in task-based AR environments. Finally, recommendations for enabling and further improving natural grasping interaction in AR environments are provided, along with guidelines for translating this form of natural grasping to other AR environments and user interfaces.
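    The user-based tolerance idea above can be made concrete with a small sketch: a pinch grasp on a virtual sphere is accepted when both fingertips fall within a tolerance band around the surface, rather than requiring exact contact. All names and the tolerance value below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Hypothetical sketch of a user-based grasp tolerance: accept a pinch grasp
# if both fingertips lie within `tolerance` metres of the sphere surface,
# compensating for the depth- and size-estimation errors described above.
def grasp_accepted(thumb_tip, index_tip, centre, radius, tolerance=0.02):
    for tip in (thumb_tip, index_tip):
        surface_error = abs(np.linalg.norm(tip - centre) - radius)
        if surface_error > tolerance:
            return False
    return True

# Fingertips slightly off the 5 cm sphere surface still count as a grasp.
centre = np.array([0.0, 0.0, 0.5])
print(grasp_accepted(np.array([0.0, 0.048, 0.5]),
                     np.array([0.0, -0.052, 0.5]),
                     centre, radius=0.05))  # True
```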

    Robust Hand Motion Capture and Physics-Based Control for Grasping in Real Time

    Hand motion capture technologies are in high demand in fields such as video games, virtual reality, sign language recognition, human-computer interaction, and robotics. However, existing systems suffer from several limitations: they are costly (expensive capture devices), intrusive (additional worn sensors or complex configurations), and restrictive (limited motion varieties and confined capture space). This dissertation focuses on algorithms and applications for a hand motion capture system that is low-cost, non-intrusive, unrestrictive, accurate, and robust. More specifically, we develop a real-time, fully automatic hand tracking system using a low-cost depth camera. We first introduce an efficient shape-indexed cascaded pose regressor that directly estimates 3D hand poses from depth images. A unique property of our regressor is that it utilizes a low-dimensional parametric hand geometry model to learn 3D shape-indexed features robust to variations in hand shape, viewpoint and pose. We further introduce a hybrid tracking scheme that effectively complements our hand pose regressor with model-based hand tracking. In addition, we develop a rapid 3D hand shape modeling method that uses a small number of depth images to accurately construct a subject-specific skinned mesh model for hand tracking. This step not only automates the whole tracking system but also improves the robustness and accuracy of model-based tracking and hand pose regression. We also propose a physically realistic human grasping synthesis method capable of grasping a wide variety of objects. Given an object to be grasped, our method computes the controls (e.g. forces and torques) required to advance the simulation toward realistic grasping. Our method combines the power of data-driven synthesis and physics-based grasping control: we first synthesize a realistic grasping motion from large sets of prerecorded grasping motion data, and then transform the synthesized kinematic motion into a physically realistic one using our online physics-based motion control method. Finally, we provide a performance interface that allows the user to act in front of a depth camera to control a virtual object.
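    The cascaded pose regression loop described above can be sketched schematically: each stage extracts features indexed by the current pose estimate and applies a learned additive update. The feature extractor and per-stage regressors below are toy stand-ins (random projections), not the dissertation's learned shape-indexed depth features.

```python
import numpy as np

def pose_indexed_features(depth_image, pose, n_features=32, seed=0):
    """Stub: sample depth values at locations derived from the current pose.
    A real system samples depth around projected hand-joint positions."""
    rng = np.random.default_rng(seed + int(1000 * np.abs(pose).sum()))
    idx = rng.integers(0, depth_image.size, n_features)
    return depth_image.ravel()[idx]

def cascaded_regression(depth_image, initial_pose, stages):
    pose = initial_pose.copy()
    for regressor in stages:
        features = pose_indexed_features(depth_image, pose)
        pose = pose + regressor(features)   # each stage refines the estimate
    return pose

# Toy usage: 3 stages, each a fixed linear map from features to a 26-DOF update.
rng = np.random.default_rng(1)
depth = rng.random((64, 64))
stages = [(lambda f, W=rng.normal(scale=1e-3, size=(26, 32)): W @ f)
          for _ in range(3)]
print(cascaded_regression(depth, np.zeros(26), stages).shape)  # (26,)
```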

    Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping

    Instrumenting and collecting annotated visual grasping datasets to train modern machine learning algorithms can be extremely time-consuming and expensive. An appealing alternative is to use off-the-shelf simulators to render synthetic data for which ground-truth annotations are generated automatically. Unfortunately, models trained purely on simulated data often fail to generalize to the real world. We study how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images. We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN. We show that, by using synthetic data and domain adaptation, we are able to reduce the number of real-world samples needed to achieve a given level of performance by up to 50 times, using only randomly generated simulated objects. We also show that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with 939,777 labeled real-world samples.
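    A highly simplified sketch of pixel-level domain adaptation in the spirit of GraspGAN follows; every architecture, size and loss here is a toy stand-in rather than the paper's. A generator G maps simulated images toward the real domain, a discriminator D tries to tell adapted images from real ones, and a grasp predictor is trained on the adapted images using the free simulation labels, so no real-world labels are required.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the GraspGAN components (not the paper's architectures).
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1))        # sim image -> "real"

def critic():  # shared toy CNN shape for the domain critic and the grasp head
    return nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                         nn.Flatten(), nn.Linear(16 * 31 * 31, 1))

D, grasp_net = critic(), critic()

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(list(G.parameters()) + list(grasp_net.parameters()),
                         lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

sim = torch.rand(8, 3, 64, 64)                    # rendered images
real = torch.rand(8, 3, 64, 64)                   # unlabeled real images
sim_labels = torch.randint(0, 2, (8, 1)).float()  # grasp outcomes, free in sim

# Discriminator step: push real images toward 1, adapted sim images toward 0.
fake = G(sim)
d_loss = (bce(D(real), torch.ones(8, 1)) +
          bce(D(fake.detach()), torch.zeros(8, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator + task step: fool the critic while keeping sim labels predictive.
g_loss = bce(D(fake), torch.ones(8, 1)) + bce(grasp_net(fake), sim_labels)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```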

    Monitoring a Realistic Virtual Hand using a Passive Haptic Device to Interact with Virtual Worlds

    We present a prototype of a hands-on immersive peripheral device for controlling a virtual hand with high dexterity. The prototype is as easy to use as a mouse and allows control of a large number of degrees of freedom (DOFs) with tactile feedback. The design goals, driven by physiological considerations, include the choice of sensor technology and sensor placement on the device, low forces exerted while using the device, relevant multi-sensory feedback, and good performance on the achieved tasks.

    Simplified Hand Configuration for Object Manipulation

    This work focuses on obtaining realistic human hand models suitable for manipulation tasks. Firstly, a 24 DOF kinematic model of the human hand, based on the human skeleton, is defined. Intra-finger and inter-finger constraints are included in order to improve movement realism. Secondly, two simplified hand descriptions (9 and 6 DOF) are developed according to the predefined constraints. These simplified models introduce some error when reconstructing hand posture; these errors are calculated with respect to the 24 DOF model and evaluated across hand gestures. Finally, criteria are defined for selecting the hand description best suited to the features of the manipulation task.
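    A minimal sketch of the DOF-reduction idea: reconstruct a full finger flexion posture from two parameters using the commonly assumed intra-finger coupling DIP ≈ (2/3) × PIP, then measure the reconstruction error against the full posture. The coupling ratio and error metric are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

# Hypothetical reduced-DOF finger model: the DIP flexion is not measured
# but reconstructed from the PIP angle via the common 2/3 coupling.
def reconstruct_finger(mcp, pip):
    """Full (MCP, PIP, DIP) flexion from two parameters."""
    return np.array([mcp, pip, (2.0 / 3.0) * pip])

full_pose = np.radians([40.0, 55.0, 35.0])      # measured 3-DOF posture
approx = reconstruct_finger(*np.radians([40.0, 55.0]))
error = np.degrees(np.abs(full_pose - approx))  # per-joint error in degrees
print(error)  # DIP error is ~1.7 degrees for this posture
```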

    Causative role of left aIPS in coding shared goals during human-avatar complementary joint actions

    Successful motor interactions require agents to anticipate what a partner is doing in order to predictively adjust their own movements. Although the neural underpinnings of the ability to predict others' action goals have been well explored during passive action observation, no study has yet clarified any critical neural substrate supporting interpersonal coordination during active, non-imitative (complementary) interactions. Here, we combine non-invasive inhibitory brain stimulation (continuous Theta Burst Stimulation) with a novel human-avatar interaction task to investigate a causal role for higher-order motor cortical regions in supporting the ability to predict and adapt to others' actions. We demonstrate that inhibition of the left anterior intraparietal sulcus (aIPS), but not the ventral premotor cortex, selectively impaired individuals' performance during complementary interactions. Thus, in addition to coding observed and executed action goals, aIPS is crucial in coding 'shared goals', that is, in integrating predictions about one's own and others' complementary actions.