
    Versatile Multi-Contact Planning and Control for Legged Loco-Manipulation

    Loco-manipulation planning skills are pivotal for expanding the utility of robots in everyday environments. These skills can be assessed based on a system's ability to coordinate complex holistic movements and multiple contact interactions when solving different tasks. However, existing approaches have only been able to shape such behaviors through hand-crafted state machines, densely engineered rewards, or pre-recorded expert demonstrations. Here, we propose a minimally guided framework that automatically discovers whole-body trajectories jointly with contact schedules for solving general loco-manipulation tasks in pre-modeled environments. The key insight is that multi-modal problems of this nature can be formulated and treated within the context of integrated Task and Motion Planning (TAMP). An effective bilevel search strategy is achieved by incorporating domain-specific rules and adequately combining the strengths of different planning techniques: trajectory optimization and informed graph search coupled with sampling-based planning. We showcase emergent behaviors for a quadrupedal mobile manipulator exploiting both prehensile and non-prehensile interactions to perform real-world tasks such as opening/closing heavy dishwashers and traversing spring-loaded doors. These behaviors are also deployed on the real system using a two-layer whole-body tracking controller.
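    The bilevel structure described in this abstract can be sketched in miniature: an outer discrete search over contact schedules, with an inner layer that scores each candidate schedule by a trajectory cost. The sketch below is a heavily simplified illustration under assumed costs; the mode names, the exhaustive enumeration standing in for informed graph search, and the surrogate cost function are all hypothetical, not the paper's actual system.

```python
import itertools

def inner_trajectory_cost(schedule, start=0.0, goal=1.0):
    """Stand-in for trajectory optimization: penalize contact-mode
    switches plus a mode-dependent effort for moving start -> goal.
    The effort weights are made-up illustrative values."""
    switch_cost = sum(a != b for a, b in zip(schedule, schedule[1:]))
    effort = {"arm": 1.0, "leg": 0.6, "none": 2.0}
    path_cost = sum(effort[m] for m in schedule) * abs(goal - start) / len(schedule)
    return switch_cost + path_cost

def outer_search(modes=("arm", "leg", "none"), horizon=3):
    """Outer layer: enumerate discrete contact schedules (informed graph
    search reduced to exhaustive enumeration for clarity) and keep the
    schedule with the cheapest inner trajectory cost."""
    best = min(itertools.product(modes, repeat=horizon),
               key=inner_trajectory_cost)
    return best, inner_trajectory_cost(best)

schedule, cost = outer_search()
print(schedule, round(cost, 3))
```

In the real framework the inner evaluation is a full whole-body trajectory optimization and the outer layer a sampling-augmented graph search, but the division of labor is the same: discrete contact decisions outside, continuous motion quality inside.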

    Non-Prehensile Object Transportation via Model Predictive Non-Sliding Manipulation Control

    This article proposes a model predictive non-sliding manipulation (MPNSM) control approach to safely transport an object on a tray-like end-effector of a robotic manipulator. For the considered non-prehensile transportation task to succeed, both non-sliding manipulation and the robotic system constraints must always be satisfied. To tackle this problem, we devise a model predictive controller enforcing sticking contacts, i.e., preventing sliding between the object and the tray, and assuring that physical limits such as extreme joint positions, velocities, and input torques are never exceeded. The combined dynamic model of the physical system, comprising the manipulator and the object in contact, is derived in a compact form. The associated non-sliding manipulation constraint is formulated such that the parametrized contact forces belong to a conservatively approximated friction cone space. This constraint is enforced by the proposed MPNSM controller, formulated as an optimal control problem that optimizes the objective of tracking the desired trajectory while always satisfying both manipulation and robotic system constraints. We validate our approach by showing extensive dynamic simulations using a torque-controlled 7-degree-of-freedom (DoF) KUKA LBR IIWA robotic manipulator. Finally, demonstrative results from real experiments conducted on a 21-DoF humanoid robotic platform are shown.
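    The "conservatively approximated friction cone" mentioned above is a standard construction: the sticking condition ||f_t|| <= mu * f_n is nonlinear in the contact force, so an inscribed pyramid of linear inequalities is used instead, which fits directly into an MPC/QP formulation. A minimal sketch, with an assumed friction coefficient and illustrative force values (not the paper's parameters):

```python
import math

def in_friction_cone(f, mu):
    """Exact sticking condition: tangential force inside the cone
    ||(fx, fy)|| <= mu * fz, with non-negative normal force."""
    fx, fy, fz = f
    return fz >= 0 and math.hypot(fx, fy) <= mu * fz

def in_pyramid(f, mu):
    """Inner-pyramid approximation: four linear inequalities,
    conservative in that it never accepts a force the true cone
    rejects (bound mu/sqrt(2) inscribes the pyramid in the cone)."""
    fx, fy, fz = f
    bound = mu / math.sqrt(2) * fz
    return fz >= 0 and abs(fx) <= bound and abs(fy) <= bound

f_stick = (0.2, 0.1, 1.0)   # tangential force well inside the cone
f_slide = (0.8, 0.0, 1.0)   # tangential force that would slip
print(in_pyramid(f_stick, mu=0.5), in_pyramid(f_slide, mu=0.5))
```

Because the pyramid is linear in the force variables, an MPC solver can impose it at every contact point and every step of the prediction horizon while remaining a convex (quadratic) program.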

    Cognitive Reasoning for Compliant Robot Manipulation

    Physically compliant contact is a major element of many tasks in everyday environments. A universal service robot that is utilized to collect leaves in a park, polish a workpiece, or clean solar panels requires the cognition and manipulation capabilities to facilitate such compliant interaction. Evolution equipped humans with advanced mental abilities to envision physical contact situations and their resulting outcomes, dexterous motor skills to perform the actions accordingly, and a sense of quality to rate the outcome of the task. In order to achieve human-like performance, a robot must provide the necessary methods to represent, plan, execute, and interpret compliant manipulation tasks. This dissertation covers those four steps of reasoning in the concept of intelligent physical compliance. The contributions advance the capabilities of service robots by combining artificial intelligence reasoning methods and control strategies for compliant manipulation. A classification of manipulation tasks is conducted to identify the central research questions of the addressed topic. Novel representations are derived to describe the properties of physical interaction. Special attention is given to wiping tasks, which are predominant in everyday environments. It is investigated how symbolic task descriptions can be translated into meaningful robot commands. A particle distribution model is used to plan goal-oriented wiping actions and predict the quality according to the anticipated result. The planned tool motions are converted into the joint space of the humanoid robot Rollin' Justin to perform the tasks in the real world. In order to execute the motions in a physically compliant fashion, a hierarchical whole-body impedance controller is integrated into the framework. The controller is automatically parameterized with respect to the requirements of the particular task. Haptic feedback is utilized to infer contact and interpret the performance semantically. Finally, the robot is able to compensate for possible disturbances by planning additional recovery motions, effectively closing the cognitive control loop. Among other applications, the developed concept is applied in an actual space robotics mission, in which an astronaut aboard the International Space Station (ISS) commands Rollin' Justin to maintain a Martian solar panel farm in a mock-up environment. This application demonstrates the far-reaching impact of the proposed approach and the associated opportunities that emerge with the availability of cognition-enabled service robots.
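    The particle-distribution idea for wiping tasks can be illustrated with a toy model: dirt is a set of particles on a unit surface, a straight wipe removes every particle swept by the tool, and the predicted task quality is the fraction of particles removed. The geometry, the greedy row-by-row plan, and the quality metric below are assumptions for this sketch, not the dissertation's actual model.

```python
import random

random.seed(0)
# dirt particles scattered on a unit square surface
particles = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]

def wipe(particles, y_center, tool_width):
    """Remove particles swept by one horizontal stroke of given width."""
    half = tool_width / 2
    return [(x, y) for x, y in particles if abs(y - y_center) > half]

def plan_wipes(particles, tool_width=0.25):
    """Greedy plan: stroke row by row until the surface is covered,
    recording the predicted quality (fraction removed) after each stroke."""
    qualities, n0 = [], len(particles)
    y = tool_width / 2
    while y < 1.0:
        particles = wipe(particles, y, tool_width)
        qualities.append(1 - len(particles) / n0)
        y += tool_width
    return particles, qualities

remaining, qualities = plan_wipes(particles)
print(len(remaining), qualities[-1])
```

The same representation supports both planning (choose strokes that maximize predicted removal) and semantic interpretation (the remaining distribution after execution rates how well the task went).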

    A Holistic Approach to Human-Supervised Humanoid Robot Operations in Extreme Environments

    Nuclear energy will play a critical role in meeting clean energy targets worldwide. However, nuclear environments are dangerous for humans to operate in due to the presence of highly radioactive materials. Robots can help address this issue by allowing remote access to nuclear and other highly hazardous facilities under human supervision to perform inspection and maintenance tasks during normal operations, help with clean-up missions, and aid in decommissioning. This paper presents our research to help realize humanoid robots in supervisory roles in nuclear environments. Our research focuses on the National Aeronautics and Space Administration's (NASA's) humanoid robot, Valkyrie, in the areas of constrained manipulation and motion planning, increasing stability using support contact, dynamic non-prehensile manipulation, locomotion on deformable terrains, and human-in-the-loop control interfaces.

    Motion Control of the Hybrid Wheeled-Legged Quadruped Robot Centauro

    Emerging applications will demand robots to deal with a complex environment, which lacks the structure and predictability of the industrial workspace. Complex scenarios will require robot complexity to increase as well, compared to classical topologies such as fixed-base manipulators, wheeled mobile platforms, tracked vehicles, and their combinations. Legged robots, such as humanoids and quadrupeds, promise to provide platforms which are flexible enough to handle real-world scenarios; however, the improved flexibility comes at the cost of significantly higher control complexity. As a trade-off, hybrid wheeled-legged robots have been proposed, mitigating control complexity whenever the ground surface is suitable for driving. Following this idea, a new hybrid robot called Centauro has been developed at the Humanoid and Human Centered Mechatronics lab at Istituto Italiano di Tecnologia (IIT). Centauro is a wheeled-legged quadruped with a humanoid bi-manual upper body. Unlike other platforms of similar concept, Centauro employs customized actuation units, which provide high torque outputs, moderately fast motions, and the possibility to control the exerted torque. Moreover, with more than forty motors moving its limbs, Centauro is a very redundant platform, with the potential to execute many different tasks at the same time. This thesis deals with the design and development of a software architecture, and a control system, tailored to such a robot; both wheeled and legged locomotion strategies have been studied, as well as prioritized whole-body and interaction controllers exploiting the robot's torque-control capabilities and capable of handling the system redundancy. A novel software architecture, made of (i) a real-time robotic middleware and (ii) a framework for online, prioritized Cartesian control, forms the basis of the entire work.
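    "Prioritized" control of a redundant platform like Centauro typically means null-space projection: a secondary task is only allowed to act in directions that do not disturb the primary task. A minimal kinematic sketch of that idea follows; the Jacobians and task velocities are made-up illustrative values, not Centauro's actual tasks.

```python
import numpy as np

def prioritized_qdot(J1, x1_dot, J2, x2_dot):
    """Two-level prioritized differential IK:
    qdot = J1+ x1_dot + (J2 N1)+ (x2_dot - J2 J1+ x1_dot),
    where N1 projects onto the null space of the primary Jacobian J1,
    so the secondary task cannot perturb the primary one."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1          # null-space projector
    qdot = J1_pinv @ x1_dot                          # primary task
    qdot = qdot + np.linalg.pinv(J2 @ N1) @ (x2_dot - J2 @ qdot)
    return qdot

# illustrative 3-DoF example: primary task uses only the first joint,
# secondary task uses the remaining redundancy
J1 = np.array([[1.0, 0.0, 0.0]])
J2 = np.array([[0.0, 1.0, 1.0]])
qdot = prioritized_qdot(J1, np.array([0.5]), J2, np.array([1.0]))
print(np.round(J1 @ qdot, 6), np.round(J2 @ qdot, 6))
```

With forty-plus motors, many such levels can be stacked, each consuming part of the redundancy left over by the levels above it.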

    A Visuo-Tactile Control Framework for Manipulation and Exploration of Unknown Objects

    Li Q, Haschke R, Ritter H. A Visuo-Tactile Control Framework for Manipulation and Exploration of Unknown Objects. Presented at Humanoids 2015, Seoul, Korea.
    We present a novel hierarchical control framework that unifies our previous work on tactile servoing with visual-servoing approaches to allow for robust manipulation and exploration of unknown objects, including, but not limited to, robust grasping, online grasp optimization, in-hand manipulation, and exploration of object surfaces. The control framework is divided into three layers: a joint-level position-control layer, a tactile-servoing control layer, and a high-level visual-servoing control layer. While the middle layer provides “blind” surface-exploration skills, maintaining desired contact patterns, the visual layer monitors and controls the actual object pose, providing high-level fingertip motion commands that are merged with the tactile-servoing control commands. Because a high-spatial-resolution tactile array and the tactile-servoing method are used, the robot end-effector can actively perform slide, roll, and twist motions to improve the contact quality with the unknown object, relying only on tactile feedback. Our control method can be considered an alternative to vision-force shared control and vision-force-tactile control methods, which depend heavily on a 3D force/torque sensor to perform fine end-effector manipulation after contact occurs. We illustrate the efficiency of the proposed framework using a series of manipulation actions performed with two KUKA LWR arms equipped with a tactile sensor array as a “sensitive fingertip”. The two considered objects are unknown to the robot, i.e., neither shape nor friction properties are available.
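    One common way such visual and tactile commands are merged hierarchically is to let the tactile layer own the contact-normal direction (regulating contact force) while the visual layer's motion command is projected onto the tangent plane. The sketch below illustrates that decomposition only; the gain, contact normal, and force values are assumed, not the paper's parameters.

```python
def merge_commands(v_visual, f_meas, f_des, normal, k_f=0.05):
    """Merge a visual-servoing velocity with a tactile force-regulation
    term: project the visual command onto the contact tangent plane,
    then add a proportional force correction along the contact normal."""
    dot = sum(a * b for a, b in zip(v_visual, normal))
    v_tangent = [a - dot * n for a, n in zip(v_visual, normal)]
    v_tactile = [k_f * (f_des - f_meas) * n for n in normal]
    return [t + c for t, c in zip(v_tangent, v_tactile)]

normal = (0.0, 0.0, 1.0)                 # assumed contact normal
v = merge_commands(v_visual=(0.1, 0.0, 0.2),  # fingertip motion from vision
                   f_meas=1.0, f_des=2.0,     # tactile reading vs. target force
                   normal=normal)
print([round(c, 3) for c in v])
```

The visual layer thus steers the fingertip over the surface while the tactile layer keeps the desired contact pattern, with neither overriding the other's direction of authority.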