61 research outputs found

    Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration

    We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission, and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of users' body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system. National Science Foundation (U.S.), Graduate Research Fellowship Program (Grant No. 1122374).
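    The shape-transmission idea above can be illustrated with a minimal sketch (not the paper's pipeline): a captured depth map is quantized into discrete pin heights for a remote shape display. The function name, depth range, and pin resolution are all invented assumptions.

    ```python
    # Hypothetical sketch of shape transmission for a pin-based shape display:
    # clamp each captured depth value, then quantize it into a pin height.

    def depth_to_pin_heights(depth_m, d_min=0.5, d_max=1.0, levels=100):
        """Convert per-cell depths (meters) to integer pin heights (0..levels)."""
        heights = []
        for d in depth_m:
            d = min(max(d, d_min), d_max)
            # Nearer surfaces raise the pins higher.
            t = (d_max - d) / (d_max - d_min)
            heights.append(round(t * levels))
        return heights

    print(depth_to_pin_heights([0.5, 0.75, 1.0]))  # [100, 50, 0]
    ```

    In a real system this quantization would run per frame over a full depth image, with the height array streamed to the remote display's actuators.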

    Human to robot hand motion mapping methods: review and classification

    In this article, the variety of approaches proposed in the literature to address the problem of mapping human to robot hand motions are summarized and discussed. We organize the large number of presented methods into macro-categories, since differing fields of application, choices of algorithm, terminology, and declared goals of the mappings make them difficult to view from a general point of view. First, a brief historical overview traces the emergence of the human to robot hand mapping problem as both a conceptual and an analytical challenge that remains open today. Thereafter, the survey focuses on a classification of modern mapping methods under six categories: direct joint, direct Cartesian, task-oriented, dimensionality-reduction-based, pose-recognition-based, and hybrid mappings. For each of these categories, we provide the general view that unites the related studies and highlight representative references. Finally, a concluding discussion along with the authors' point of view regarding desirable future trends is reported. This work was supported in part by the European Commission's Horizon 2020 Framework Programme with the project REMODEL under Grant 870133 and in part by the Spanish Government under Grant PID2020-114819GB-I00.
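    The simplest of the six categories, direct joint mapping, can be sketched as a one-to-one rescaling of each measured human joint angle into the corresponding robot joint's range. This is an illustrative example, not any specific method from the survey; the joint name and limit values are invented.

    ```python
    # Hypothetical sketch of direct joint mapping: each human joint angle is
    # linearly rescaled from the human joint's range into the robot joint's range.

    def direct_joint_mapping(human_angles, human_limits, robot_limits):
        """Map human joint angles (radians) onto robot joints one-to-one."""
        robot_angles = {}
        for joint, theta in human_angles.items():
            h_lo, h_hi = human_limits[joint]
            r_lo, r_hi = robot_limits[joint]
            # Normalize within the human range, then rescale to the robot range.
            t = (theta - h_lo) / (h_hi - h_lo)
            robot_angles[joint] = r_lo + t * (r_hi - r_lo)
        return robot_angles

    mapped = direct_joint_mapping(
        {"index_mcp": 0.6},
        human_limits={"index_mcp": (0.0, 1.5)},
        robot_limits={"index_mcp": (0.0, 1.2)},
    )
    # mapped["index_mcp"] == 0.48
    ```

    Direct Cartesian mapping would instead match fingertip positions in task space, and task-oriented or dimensionality-reduction-based methods replace this per-joint rescaling entirely.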

    The Rutgers Master II-new design force-feedback glove


    Extending the Knowledge Driven Approach for Scalable Autonomy Teleoperation of a Robotic Avatar

    Crewed missions to celestial bodies such as the Moon and Mars are the focus of an increasing number of space agencies. Precautions to ensure a safe landing of the crew on the extraterrestrial surface, as well as reliable infrastructure at the remote location for bringing the crew back home, are key considerations for mission planning. The European Space Agency (ESA) identified in its Terrae Novae 2030+ roadmap that robots are needed as precursors and scouts to ensure the success of such missions. An important role these robots will play is supporting the astronaut crew in orbit in carrying out scientific work, and ultimately ensuring nominal operation of the support infrastructure for astronauts on the surface. The METERON SUPVIS Justin ISS experiments demonstrated that supervised-autonomy robot command can be used to execute inspection, maintenance, and installation tasks using a robotic co-worker on the planetary surface. The knowledge driven approach utilized in the experiments reached its limits only when situations arose that were not anticipated by the mission design. In deep space scenarios, the astronauts must be able to overcome these limitations. An approach toward more direct command of a robot was demonstrated in the METERON ANALOG-1 ISS experiment, in which an astronaut used haptic telepresence to command a robotic avatar on the surface to execute sampling tasks. In this work, we propose a system that combines supervised autonomy and telepresence by extending the knowledge driven approach. The knowledge management is based on organizing the prior knowledge of the robot in an object-centered context. Action Templates are used to define the knowledge on the handling of the objects on a symbolic and geometric level. This robot-agnostic system can be used for supervisory command of any robotic co-worker.
    By integrating the robot itself as an object into the object-centered domain, robot-specific skills and (tele-)operation modes can be injected into the existing knowledge management system by formulating respective Action Templates. To use advanced teleoperation modes such as haptic telepresence efficiently, a variety of input devices are integrated into the proposed system. This work shows how the integration of these devices is realized in a way that is agnostic to both input devices and operation modes. The proposed system is evaluated in the Surface Avatar ISS experiment. This work shows how the system is integrated into a Robot Command Terminal featuring a 3-degree-of-freedom joystick and a 7-degree-of-freedom haptic input device in the Columbus module of the ISS. In the preliminary experiment sessions of Surface Avatar, two astronauts in orbit took command of the humanoid service robot Rollin' Justin in Germany. This work presents and discusses the results of these ISS-to-ground sessions and derives requirements for extending the scalable autonomy system for use with a heterogeneous robotic team.
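    The object-centered idea above can be sketched as a small data structure: knowledge is stored per object, Action Templates attach actions to objects, and the robot itself appears as an object so that robot-specific operation modes plug into the same structure. All field names and entries here are invented for illustration, not the experiment's actual knowledge base.

    ```python
    # Hypothetical sketch of object-centered Action Templates. Each object maps
    # to a list of templates; the robot is itself an object, so teleoperation
    # modes are injected the same way as manipulation skills.

    action_templates = {
        "power_outlet": [
            {"action": "inspect", "level": "symbolic"},
            {"action": "plug_in", "level": "geometric", "approach_axis": "z"},
        ],
        # The robot as an object: robot-specific skills and operation modes.
        "rollin_justin": [
            {"action": "haptic_telepresence", "input_device": "7dof_haptic"},
            {"action": "joystick_drive", "input_device": "3dof_joystick"},
        ],
    }

    def available_actions(obj):
        """List the action names the knowledge base offers for an object."""
        return [t["action"] for t in action_templates.get(obj, [])]

    print(available_actions("rollin_justin"))
    # ['haptic_telepresence', 'joystick_drive']
    ```

    A supervisory interface can then present exactly these per-object actions to the astronaut, regardless of which robot or input device is currently in use.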

    Intent-Recognition-Based Traded Control for Telerobotic Assembly over High-Latency Telemetry

    As we deploy robotic manipulation systems into unstructured real-world environments, the tasks which those robots are expected to perform grow very quickly in complexity. These tasks require a greater number of possible actions, more variable environmental conditions, and larger varieties of objects and materials which need to be manipulated. This in turn leads to a greater number of ways in which elements of a task can fail. When the cost of task failure is high, such as in the case of surgery or on-orbit robotic interventions, effective and efficient task recovery is essential. Despite ever-advancing capabilities, however, the current and near future state-of-the-art in fully autonomous robotic manipulation is still insufficient for many tasks in these critical applications. Thus, successful application of robotic manipulation in many application domains still necessitates a human operator to directly teleoperate the robots over some communications infrastructure. However, any such infrastructure always incurs some unavoidable round-trip telemetry latency depending on the distances involved and the type of remote environment. While direct teleoperation is appropriate when a human operator is physically close to the robots being controlled, there are still many applications in which such proximity is infeasible. In applications which require a robot to be far from its human operator, this latency can approach the speed of the relevant task dynamics, and performing the task with direct telemanipulation can become increasingly difficult, if not impossible. For example, round-trip delays for ground-controlled on-orbit robotic manipulation can reach multiple seconds depending on the infrastructure used and the location of the remote robot. 
    The goal of this thesis is to advance the state-of-the-art in semi-autonomous telemanipulation under multi-second round-trip communications latency between a human operator and a remote robot in order to enable more telerobotic applications. We propose a new intent-recognition-based traded control (IRTC) approach which automatically infers operator intent and executes task elements which the human operator would otherwise be unable to perform. What makes our approach more powerful than current approaches is that we prioritize preserving the operator's direct manual interaction with the remote environment, trading control over to an autonomous subsystem only when the operator-local intent recognition system automatically determines what the operator is trying to accomplish. This enables operators to perform unstructured and a priori unplanned actions in order to quickly recover from critical task failures. Furthermore, this thesis also describes a methodology for introducing and improving semi-autonomous control in critical applications. Specifically, this thesis reports (1) the demonstration of a prototype system for IRTC-based grasp assistance in the context of transatlantic telemetry delays, (2) the development of a systems framework for IRTC in semi-autonomous telemanipulation, and (3) an evaluation of the usability and efficacy of that framework with an increasingly complex assembly task. The results from our human subjects experiments show that, when incorporated with sufficient lower-level capabilities, IRTC is a promising approach to extend the reach and capabilities of on-orbit telerobotics and future in-space operations.
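    The trading logic described above can be sketched in a few lines: operator commands pass through directly unless the intent recognizer is confident about the intended action, at which point control is traded to an autonomous executor. This is an illustrative sketch, not the thesis system; the function names and the 0.9 confidence threshold are assumptions.

    ```python
    # Hypothetical sketch of intent-recognition-based traded control (IRTC):
    # direct teleoperation is the default, and control is traded to autonomy
    # only once an intent is recognized with high confidence.

    def traded_control_step(operator_cmd, intent_probs, threshold=0.9):
        """Return (mode, action): direct teleoperation or autonomous execution."""
        intent, p = max(intent_probs.items(), key=lambda kv: kv[1])
        if p >= threshold:
            # Confident recognition: trade control to autonomy for this action.
            return ("autonomous", intent)
        # Otherwise preserve the operator's direct manual interaction.
        return ("direct", operator_cmd)

    mode, action = traded_control_step(
        operator_cmd="jog +x",
        intent_probs={"insert_peg": 0.95, "grasp_bolt": 0.05},
    )
    # mode == "autonomous", action == "insert_peg"
    ```

    Running the recognizer on the operator's side of the link is what makes this useful under multi-second latency: the trade decision is made locally, and only the resulting high-level action must cross the delayed channel.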

    Design and implementation of haptic interactions

    This thesis addresses current haptic display technology, in which the user interacts with a virtual environment by means of specialized interface devices. The user manipulates computer-generated virtual objects and is able to feel the sense of touch through haptic feedback. The objective of this work is to design high-performance haptic interactions by developing multi-purpose virtual tools and new control schemes to implement a PUMA 560 robotic manipulator as the haptic interface device. The interactions are modeled by coupling the motions of the virtual tool with those of the PUMA 560 robotic manipulator.
    The work presented in this dissertation uses both kinematic- and dynamic-based virtual manipulators as virtual simulators to address problems associated with both free and constrained motion. Both implementations are general enough to allow researchers with any six degree-of-freedom robot to apply the approaches and continue this line of research. The results are expected to improve on current haptic display technology through a new type of optimal position controller and better algorithms to handle both holonomic and nonholonomic constraints.
    Kane's method is introduced to model the dynamics of multibody systems. The multibody dynamics of a virtual simulator, a dumbbell, are developed, and the advantages of Kane's method in handling nonholonomic constraints are presented. The resulting model is used to develop an approach to dynamic simulation for use in interactive haptic display, including switching constraints. Experimental data are collected to show various contact configurations.
    A two-degree-of-freedom virtual manipulator is modeled to feel the surface of a torus shape. An optimal position controller is designed to achieve kinematic coupling between the virtual manipulator and the haptic display device, imposing motion constraints and the virtual interactions. Stability of the haptic interface is studied and proved using Lyapunov's direct method. Experimental data in various positions of the robotic manipulator are obtained to support the theoretical results. A shift mechanism is then implemented on the torus shape, further constraining the motions of the robotic manipulator. The difficulties in handling the motion constraints are discussed and an alternative approach is presented.
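    The coupling between the haptic device and the virtual tool is commonly rendered as a virtual spring-damper, a standard technique in haptic display (not necessarily the controller used in this thesis). The minimal 1-D sketch below uses invented gains.

    ```python
    # Illustrative sketch of a virtual spring-damper coupling: the force rendered
    # to the haptic device pulls it toward the virtual tool's state. Gains k, b
    # and the 1-D state are assumptions for illustration.

    def coupling_force(x_device, v_device, x_tool, v_tool, k=500.0, b=5.0):
        """Spring-damper force (N) pulling the device toward the virtual tool."""
        return k * (x_tool - x_device) + b * (v_tool - v_device)

    f = coupling_force(x_device=0.01, v_device=0.0, x_tool=0.0, v_tool=0.0)
    # Device displaced 1 cm past the tool: f = -5.0 N, pushing it back.
    ```

    When the virtual tool contacts a constraint surface (such as the torus), the tool stops while the device keeps moving, and this coupling force is what the user feels as contact.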

    Computational haptics : the Sandpaper system for synthesizing texture for a force-feedback display

    Thesis (Ph.D.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1995. Includes bibliographical references (p. 155-180). By Margaret Diane Rezvan Minsky.