
    Shared control of human and robot by approximate dynamic programming

    This paper proposes a general framework for human-robot shared control aimed at a natural and effective interface. A typical human-robot collaboration scenario is investigated, and a shared-control framework is developed by formulating and solving an optimization problem. Human dynamics are taken into account in the analysis of the coupled human-robot system, and the objectives of both the human and the robot are considered. Approximate dynamic programming is employed to solve the optimization problem in the presence of unknown human and robot dynamics. The validity of the proposed method is verified through simulation studies.
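    To make the role of dynamic programming concrete, below is a minimal sketch of value iteration (a Riccati-style backward recursion) for a linear-quadratic shared-control problem; the dynamics matrices, the combined human/robot cost weights, and the stopping tolerance are illustrative placeholders rather than the paper's formulation, which additionally handles unknown dynamics via approximation.

```python
import numpy as np

# Minimal sketch (not the paper's ADP formulation): value iteration / Riccati
# recursion for a linear-quadratic shared-control problem. All matrices below
# are illustrative placeholders.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])                 # assumed coupled human-robot dynamics
B = np.array([[0.0],
              [0.1]])                      # assumed shared input channel
Q = np.diag([1.0, 0.1]) + np.diag([0.5, 0.05])   # human cost + robot cost (placeholders)
R = np.array([[0.01]])                     # effort penalty

P = np.zeros_like(Q)                       # quadratic value function V(x) = x' P x
for _ in range(1000):                      # iterate the Bellman backup to convergence
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # greedy feedback gain
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-9:
        P = P_next
        break
    P = P_next

shared_control = lambda x: -K @ x          # resulting shared-control law
print(K)
```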

    Learning a Unified Control Policy for Safe Falling

    Being able to fall safely is a necessary motor skill for humanoids performing highly dynamic tasks, such as running and jumping. We propose a new method to learn a policy that minimizes the maximal impulse during the fall. The optimization solves both a discrete contact-planning problem and a continuous optimal-control problem. Once trained, the policy can compute the optimal next contacting body part (e.g. left foot, right foot, or hands), the contact location and timing, and the required joint actuation. We represent the policy as a mixture of actor-critic neural networks, consisting of n control policies and the corresponding value functions. Each actor-critic pair is associated with one of the n possible contacting body parts. During execution, the policy corresponding to the highest value function is executed, and the associated body part becomes the next contact with the ground. With this mixture-of-actor-critic architecture, the discrete contact-sequence planning is solved by selecting the best critic, while the continuous control problem is solved by optimizing the actors. We show that our policy can achieve comparable, and sometimes even higher, rewards than a recursive search of the action space using dynamic programming, while running 50 to 400 times faster during online execution.
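    The selection step of the mixture can be illustrated with a short sketch: n actor-critic pairs, one per candidate contacting body part, are evaluated on the current state, the critic with the highest value picks the next contact, and its actor produces the continuous control. The linear "networks" and the body-part list below are placeholders, not the trained policy from the paper.

```python
import numpy as np

# Minimal sketch of the mixture-of-actor-critic selection step; the linear
# critics/actors below are placeholders standing in for trained networks.
BODY_PARTS = ["left_foot", "right_foot", "hands"]   # illustrative contact set
rng = np.random.default_rng(0)

critics = [rng.standard_normal(4) for _ in BODY_PARTS]        # V_i(s) ~ w_i . s
actors = [rng.standard_normal((2, 4)) for _ in BODY_PARTS]    # pi_i(s) ~ W_i s

def select_and_act(state):
    values = [w @ state for w in critics]   # evaluate every critic on the state
    i = int(np.argmax(values))              # discrete choice: next contacting part
    action = actors[i] @ state              # continuous control from that actor
    return BODY_PARTS[i], action

part, action = select_and_act(np.array([0.3, -1.2, 0.05, 0.9]))
print(part, action)
```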

    Simulation of Rapidly-Exploring Random Trees in Membrane Computing with P-Lingua and Automatic Programming

    Methods based on Rapidly-exploring Random Trees (RRTs) have been widely used in robotics to solve motion planning problems. In the membrane computing framework, models based on Enzymatic Numerical P systems (ENPS) have been applied to robot controllers, but there is still a lack of planning algorithms based on membrane computing for robotics. With this motivation, we provide a variant of ENPS called Random Enzymatic Numerical P systems with Proteins and Shared Memory (RENPSM), aimed at implementing RRT algorithms, and we illustrate it by simulating the bidirectional RRT algorithm. This paper is an extension of [21]. The software presented in [21] was an ad-hoc simulator, i.e., a tool for simulating the computations of one and only one model, which was hard-coded. The main contribution of this paper with respect to [21] is a novel solution for membrane computing simulators based on automatic programming. First, we have extended the P-Lingua syntax (a language to define membrane computing models) to write RENPSM models. Second, we have implemented a new parser based on Flex and Bison that reads RENPSM models and produces C source code for multicore processors with OpenMP. Finally, additional experiments are presented. Funding: Ministerio de Economía, Industria y Competitividad TIN2017-89842-
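    For readers unfamiliar with the underlying planner, a minimal single-tree RRT in 2-D is sketched below; it is the standard algorithm, not the RENPSM membrane-computing encoding or the bidirectional variant, and obstacle checking is omitted for brevity.

```python
import math
import random

# Minimal 2-D RRT sketch (standard algorithm; obstacle checking omitted).
STEP = 0.5       # extension step size

def rrt(start, goal, iters=2000, goal_tol=0.5):
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        q_rand = (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        q_near = min(nodes, key=lambda q: math.dist(q, q_rand))   # nearest node
        d = math.dist(q_near, q_rand)
        if d == 0.0:
            continue
        q_new = (q_near[0] + STEP * (q_rand[0] - q_near[0]) / d,  # steer toward sample
                 q_near[1] + STEP * (q_rand[1] - q_near[1]) / d)
        nodes.append(q_new)
        parent[q_new] = q_near
        if math.dist(q_new, goal) < goal_tol:     # goal reached: backtrack the path
            path = [q_new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
    return None

print(rrt((1.0, 1.0), (9.0, 9.0)))
```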

    Real-time, interactive, visually updated simulator system for telepresence

    Time delays and limited sensory feedback in remote telerobotic systems tend to disorient teleoperators and dramatically decrease the operator's performance. To remove the effects of time delays, key components of a prototype forward-simulation subsystem, the Global-Local Environment Telerobotic Simulator (GLETS), were designed and developed to buffer the operator from the remote task. GLETS totally immerses an operator in a real-time, interactive, visually updated, simulated artificial environment of the remote telerobotic site. Using GLETS, the operator will, in effect, enter a telerobotic virtual reality and can easily form a gestalt of the virtual 'local site' that matches the operator's normal interactions with the remote site. In addition to its use in space-based telerobotics, GLETS, owing to its extendable architecture, can also be used in other teleoperation environments such as toxic material handling, construction, and undersea exploration.
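    The core idea of buffering the operator from the delay can be sketched with a toy forward simulation: operator commands update a local virtual replica immediately, while the same commands reach the remote site only after the communication delay. The one-dimensional dynamics and the delay length below are purely illustrative, not part of the GLETS design.

```python
from collections import deque

# Toy sketch of forward simulation under communication delay (illustrative only).
DELAY_STEPS = 20                     # assumed round-trip delay in control ticks

def step(state, command):
    return state + 0.1 * command     # placeholder 1-D dynamics

local_state = 0.0                    # virtual replica shown to the operator
remote_state = 0.0                   # real remote site, lagging by the delay
in_transit = deque([0.0] * DELAY_STEPS)

for t in range(100):
    command = 1.0 if t < 50 else -1.0          # operator input
    local_state = step(local_state, command)   # immediate visual feedback
    in_transit.append(command)
    remote_state = step(remote_state, in_transit.popleft())  # delayed execution

print(local_state, remote_state)
```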

    Dynamic update of a virtual cell for programming and safe monitoring of an industrial robot

    A hardware/software architecture for robot motion planning and on-line safe monitoring has been developed with the objective of assuring high flexibility in production control, safety for workers and machinery, and a user-friendly interface. The architecture, developed using Microsoft Robotics Developers Studio and implemented for a six-dof COMAU NS 12 robot, establishes bidirectional communication between the robot controller and a virtual replica of the real robotic cell. The working space of the real robot can then be easily limited for safety reasons by inserting virtual objects (or sensors) into this virtual environment. This paper investigates the possibility of achieving an automatic, dynamic update of the virtual cell by using a low-cost depth sensor (i.e., a commercial Microsoft Kinect) to detect the presence of completely unknown objects moving inside the real cell. The experimental tests show that the developed architecture is able to recognize variously shaped mobile objects inside the monitored area and to stop the robot before it collides with them, provided the objects are not too small.
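    A possible shape of the monitoring check is sketched below: points belonging to a detected object (e.g. extracted from a depth image) trigger a stop request when any of them comes closer to the robot than a safety margin, while very small detections are ignored. The thresholds, point clouds, and function names are hypothetical, not taken from the described architecture.

```python
import numpy as np

# Hypothetical sketch of a distance-based safety check (not the architecture
# described above). Point clouds and thresholds are placeholders.
STOP_DISTANCE_M = 0.30       # illustrative safety margin
MIN_OBJECT_POINTS = 50       # ignore detections that are too small

def should_stop(robot_points, object_points):
    if len(object_points) < MIN_OBJECT_POINTS:
        return False
    # pairwise distances between sampled robot surface points and object points
    d = np.linalg.norm(robot_points[:, None, :] - object_points[None, :, :], axis=2)
    return bool(d.min() < STOP_DISTANCE_M)

robot = np.random.rand(100, 3)             # sampled robot surface (placeholder)
obstacle = np.random.rand(200, 3) + 0.2    # detected moving object (placeholder)
print(should_stop(robot, obstacle))
```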

    Probabilistic movement modeling for intention inference in human-robot interaction.

    Intention inference can be an essential step toward efficient human-robot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM allows the intention to be inferred from observed movements using Bayes' theorem. The IDDM simultaneously finds a latent state representation of noisy and high-dimensional observations and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.
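    The use of Bayes' theorem can be illustrated with a short sketch that maintains a posterior over a discrete set of intentions as observations arrive; the intention labels and the Gaussian observation likelihoods below are placeholders, whereas the IDDM itself learns a latent dynamics model rather than using fixed likelihoods.

```python
import numpy as np

# Minimal sketch of Bayesian intention inference over a discrete intention set.
# Labels and likelihoods are illustrative placeholders, not the IDDM.
INTENTIONS = ["forehand", "backhand", "lob"]
prior = np.full(len(INTENTIONS), 1.0 / len(INTENTIONS))

def log_likelihood(obs, intention_idx):
    means = np.array([-1.0, 0.0, 1.0])       # placeholder observation model
    return -0.5 * (obs - means[intention_idx]) ** 2

def posterior(observations):
    log_p = np.log(prior)
    for obs in observations:                  # accumulate evidence over time
        log_p += np.array([log_likelihood(obs, i) for i in range(len(INTENTIONS))])
    p = np.exp(log_p - log_p.max())           # normalize in a numerically stable way
    return p / p.sum()

print(dict(zip(INTENTIONS, posterior([0.8, 1.1, 0.9]))))
```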