63 research outputs found

    Analysis of manipulator structures under joint-failure with respect to efficient control in task-specific contexts

    Abstract — Robots can now perform a variety of tasks. But what happens if one or more of a robot's joints fail? Is the robot still able to perform the required tasks? Which of its capabilities are degraded, and which are lost entirely? We propose an analysis of manipulator structures for comparing a robot's capabilities with respect to efficient control. The comparison is carried out (1) within a single robot in the case of joint failures and (2) between robots with or without joint failures. It is important that the analysis can be performed independently of the manipulator's structure, so that results are comparable across different manipulator structures. This requires an abstract representation of the robot's dynamic capabilities, for which we introduce the Maneuverability Volume and the Spinning Pencil. The Maneuverability Volume shows how efficiently the end-effector can be moved to any other position; the Spinning Pencil reflects the robot's capability to change its end-effector orientation efficiently. Our experiments show not only the different capabilities of two manipulator structures, but also how these capabilities change when one or multiple joints fail.
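
    The abstract does not define the Maneuverability Volume formally, but it is closely related to the classic Yoshikawa manipulability measure, which scales with the volume of the end-effector velocity ellipsoid. A minimal sketch of that idea, including the effect of a locked (failed) joint, might look as follows; the toy Jacobian is purely illustrative and not from the paper:

    ```python
    import numpy as np

    def manipulability_volume(J: np.ndarray) -> float:
        """Yoshikawa-style manipulability for a 3xN positional Jacobian.

        Proportional to the volume of the velocity ellipsoid
        {J q_dot : ||q_dot|| <= 1}, i.e. how efficiently joint motion
        maps to end-effector motion.
        """
        # Clamp at zero: near singularities the determinant can go
        # slightly negative numerically.
        return float(np.sqrt(max(np.linalg.det(J @ J.T), 0.0)))

    def volume_after_joint_failure(J: np.ndarray, failed: list) -> float:
        """Model a locked joint by deleting its Jacobian column."""
        return manipulability_volume(np.delete(J, failed, axis=1))

    # Toy 3-DoF Jacobian at some configuration (columns = joints).
    J = np.array([[0.0, 0.5, 0.8],
                  [1.0, 0.9, 0.3],
                  [0.0, 0.0, 0.1]])
    print(manipulability_volume(J))            # healthy robot
    print(volume_after_joint_failure(J, [1]))  # joint 1 failed -> 0.0
    ```

    With only two working joints in a 3D task space, the volume collapses to zero, which is exactly the kind of capability loss such an analysis is meant to expose.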

    Representation of manipulation-relevant object properties and actions for surprise-driven exploration

    Abstract — We propose a framework for the sensor-based estimation of manipulation-relevant object properties and the abstraction of known actions, learned from the observation of humans. The descriptors consist of an object-centric representation of manipulation constraints and a scene-specific action graph that spans the typical places where objects are placed. This framework makes it possible to abstract the strongly varying actions of a human operator and to detect unexpected new actions that require a modification of the knowledge stored in the system. Using an abstract, object-centric structure enables not only the application of knowledge in the same situation, but also its transfer to similar environments; furthermore, the information can be derived from different sensing modalities. The proposed system builds up the representation of manipulation-relevant properties and actions: properties directly related to the object are stored in the Object Container, while the Functionality Map links actions with the typical action areas in the environment. We present experimental results on real human actions, showing the quality of the results that can be obtained with our system.
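
    The abstract names two data structures, the Object Container and the Functionality Map, without giving their internals. A minimal sketch of how an object-centric property store and a surprise-detecting action graph could be organized; all field names and the surprise criterion are assumptions for illustration, not the paper's definitions:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ObjectContainer:
        """Object-centric store of manipulation-relevant properties
        (field contents are illustrative assumptions)."""
        name: str
        properties: dict = field(default_factory=dict)

    @dataclass
    class FunctionalityMap:
        """Scene-specific action graph: nodes are typical action areas,
        edges are observed actions moving objects between them."""
        areas: set = field(default_factory=set)
        actions: dict = field(default_factory=dict)  # (src, dst) -> label

        def observe(self, src: str, dst: str, action: str) -> bool:
            """Record a transition; return True if it was unexpected,
            i.e. a surprise that should trigger a knowledge update."""
            self.areas.update((src, dst))
            surprise = (src, dst) not in self.actions
            self.actions[(src, dst)] = action
            return surprise

    mug = ObjectContainer("mug", {"graspable": True})
    fmap = FunctionalityMap()
    print(fmap.observe("table", "shelf", "place"))  # True: new -> surprise
    print(fmap.observe("table", "shelf", "place"))  # False: already known
    ```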

    Speeding Up Optimization-based Motion Planning through Deep Learning

    Planning collision-free motions for robots with many degrees of freedom is challenging in environments with complex obstacle geometries. Recent work introduced the idea of speeding up planning by encoding prior experience of successful motion plans in a neural network. However, this 'neural motion planning' did not scale to complex robots in unseen 3D environments as needed for real-world applications. Here, we introduce the 'basis point set', well known in computer vision, to neural motion planning as a compact environment encoding that enables efficient supervised training of networks that generalize well over diverse 3D worlds. Combined with a new, elaborate training scheme, we reach a planning success rate of 100%. We use the network to predict an educated initial guess for an optimization-based motion planner (OMP), which then quickly converges to a feasible solution, massively outperforming random multi-starts when tested on previously unseen environments. For the DLR humanoid Agile Justin with 19 DoF and in challenging obstacle environments, optimal paths can be generated in 200 ms using only a single CPU core. We also show a first successful real-world experiment based on a high-resolution world model from an integrated 3D sensor.
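
    The basis point set (BPS) encoding referenced here is a standard technique from computer vision: a fixed set of basis points is sampled once, and a point cloud of arbitrary size is summarized by the distance from each basis point to its nearest cloud point, yielding a fixed-length vector suitable for a plain feed-forward network. A minimal sketch (the sizes 256 and 5000 are illustrative, not the paper's):

    ```python
    import numpy as np

    def bps_encode(cloud: np.ndarray, basis: np.ndarray) -> np.ndarray:
        """For each fixed basis point, the distance to its nearest
        neighbour in the cloud. Output length equals len(basis)
        regardless of cloud size, so it can feed a plain MLP."""
        # (B, P) pairwise distances, then min over the cloud axis.
        d = np.linalg.norm(basis[:, None, :] - cloud[None, :, :], axis=-1)
        return d.min(axis=1)

    rng = np.random.default_rng(0)
    basis = rng.uniform(-1.0, 1.0, size=(256, 3))   # sampled once, then fixed
    cloud = rng.uniform(-1.0, 1.0, size=(5000, 3))  # environment point cloud
    feature = bps_encode(cloud, basis)
    print(feature.shape)  # (256,) -- fixed length for any cloud
    ```

    The fixed-length output is the key design property: it lets diverse 3D environments share one compact input representation for supervised training.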

    Self-Contained Calibration of an Elastic Humanoid Upper Body with a Single Head-Mounted RGB Camera

    When a humanoid robot performs a manipulation task, it first builds a model of the world using its visual sensors and then plans the motion of its body in this model. This requires precise calibration of the camera parameters and the kinematic tree. Besides the accuracy of the calibrated model, the calibration process should be fast and self-contained, i.e., no external measurement equipment should be used. We therefore extend our prior work on calibrating the elastic upper body of DLR's Agile Justin by now using only its internal head-mounted RGB camera. We use simple visual markers at the ends of the kinematic chain, plus one mounted on a pole in front of the robot, to obtain measurements for the whole kinematic tree. To ensure that the task-relevant Cartesian error at the end-effectors is minimized, we introduce virtual noise when fitting our imperfect robot model, so that the pixel error is weighted more heavily the further the marker is from the camera. This correction reduces the Cartesian error by more than 20%, resulting in a final accuracy of 3.9 mm on average and 9.1 mm in the worst case. In this way, we achieve the same precision as in our previous work, where an external Cartesian tracking system was used.
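
    The depth-dependent weighting described above can be illustrated with a toy least-squares calibration: a pixel of reprojection error at depth z corresponds to roughly z/f metres of Cartesian error, so residuals of far markers are up-weighted. A minimal sketch with a single unknown focal length standing in for the full kinematic/camera model, which the abstract does not detail:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Synthetic marker points in the camera frame (toy stand-in data).
    rng = np.random.default_rng(0)
    points = rng.uniform([-0.5, -0.5, 0.5], [0.5, 0.5, 3.0], size=(20, 3))
    f_true = 600.0
    pixels = f_true * points[:, :2] / points[:, 2:3]  # pinhole projection

    def residuals(params):
        f = params[0]
        pred = f * points[:, :2] / points[:, 2:3]
        err = pred - pixels
        # 'Virtual noise' idea: up-weight pixel residuals of far markers,
        # since a pixel of error at depth z is ~z/f metres of Cartesian
        # error at the marker.
        w = points[:, 2:3]  # per-marker depth weight
        return (w * err).ravel()

    sol = least_squares(residuals, x0=[500.0])
    print(sol.x)  # recovers ~600.0
    ```

    Weighting residuals this way makes the optimizer minimize Cartesian rather than raw pixel error, which is the task-relevant quantity at the end-effectors.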