
    Whole-Body Impedance Control of Wheeled Humanoid Robots


    Bringing a Humanoid Robot Closer to Human Versatility: Hard Realtime Software Architecture and Deep Learning Based Tactile Sensing

    For centuries, it has been a vision of humankind to create humanoid robots, i.e., machines that not only resemble the shape of the human body but also have similar capabilities, especially in dexterously manipulating their environment. But only in recent years has it become possible to build actual humanoid robots with many degrees of freedom (DOF) and torque-controlled joints, which are a prerequisite for acting sensitively in the world. In this thesis, we extend DLR's advanced mobile torque-controlled humanoid robot Agile Justin in two important directions to get closer to human versatility. First, we enable Agile Justin, which was originally built as a research platform for dexterous mobile manipulation, to also execute complex dynamic manipulation tasks. We demonstrate this with the challenging task of catching up to two simultaneously thrown balls with its hands. Second, we equip Agile Justin with highly developed, deep-learning-based tactile sensing capabilities that are critical for dexterous fine manipulation. We demonstrate these tactile capabilities with the delicate task of identifying an object's material simply by gently sweeping a fingertip over its surface. Key to the realization of complex dynamic manipulation tasks is a software framework that allows for a component-based system architecture to cope with the complexity and the parallel, distributed computational demands of deep sensor-perception-planning-action loops -- all under tight timing constraints. This thesis presents the communication layer of our aRDx (agile robot development -- next generation) software framework, which provides hard realtime determinism and optimal transport of data packets: zero-copy for intra- and inter-process communication and copy-once for distributed communication. In the implementation of the challenging ball-catching application on Agile Justin, we take full advantage of aRDx's performance and advanced features such as channel synchronization. Besides the challenging visual ball tracking, which uses only onboard sensing while everything is moving, and the automatic, self-contained calibration procedure that provides the necessary precision, the major contribution is the unified generation of the reaching motion for the arms: the catch point selection, motion planning, and joint interpolation steps are subsumed in one nonlinear constrained optimization problem, which is solved in realtime and allows for the realization of different catch behaviors. For the highly sensitive task of tactile material classification with a flexible pressure-sensitive skin on Agile Justin's fingertip, we present our deep convolutional network architecture TactNet-II. Its input is the raw 16,000-dimensional, complex and noisy spatio-temporal tactile signal generated when sweeping over an object's surface. For comparison, we perform a thorough human-performance experiment with 15 subjects, which shows that Agile Justin reaches superhuman performance both in the high-level material classification task (What material is it?) and in the low-level material differentiation task (Are two materials the same?). To increase the sample efficiency of TactNet-II, we adapt state-of-the-art deep end-to-end transfer learning to tactile material classification, leading to an up to 15-fold reduction in the number of training samples needed. The presented methods led to six publication awards and award-finalist nominations as well as international media coverage, and they also worked robustly at many trade fairs and lab demos.
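    The unified catch-motion generation can be pictured as a single nonlinear program. The following is a minimal planar sketch, not the thesis implementation: it assumes a hypothetical 2-DOF arm, an invented ball state, and SciPy's SLSQP solver (the abstract does not name one), and it omits the joint-interpolation part; it only shows how catch-point selection (the catch time) and motion planning (the catch configuration) can be solved jointly under a hand-meets-ball constraint.

```python
# Illustrative sketch only: catch-point selection and final-configuration
# planning posed as ONE nonlinear constrained optimization, as the abstract
# describes. Hypothetical 2-DOF planar arm and invented ball state; the real
# system works on Agile Justin's full 3-D kinematics and also subsumes the
# joint interpolation step.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.4, 0.4                     # link lengths [m] (assumed)
q_start = np.array([0.3, 0.5])        # current joint angles [rad]
p0 = np.array([1.5, 1.2])             # ball release position [m] (invented)
v0 = np.array([-2.0, 1.0])            # ball release velocity [m/s] (invented)
g = np.array([0.0, -9.81])            # gravity in the arm's plane

def ball(t):
    """Ballistic flight prediction of the ball at time t."""
    return p0 + v0 * t + 0.5 * g * t**2

def fk(q):
    """Planar forward kinematics: hand position for joint angles q."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def cost(x):
    """x = [t_catch, q1, q2]; prefer small joint motion and early catches."""
    t, q = x[0], x[1:]
    return np.sum((q - q_start) ** 2) + 0.1 * t

def hand_meets_ball(x):
    """Equality constraint: the hand must coincide with the ball at t_catch."""
    return fk(x[1:]) - ball(x[0])

res = minimize(cost, x0=np.array([0.4, 0.0, 1.0]), method="SLSQP",
               bounds=[(0.05, 1.0), (-2.6, 2.6), (-2.6, 2.6)],  # time/joint limits
               constraints=[{"type": "eq", "fun": hand_meets_ball}])
print("catch at t = %.3f s, joint target = %s" % (res.x[0], np.round(res.x[1:], 3)))
```

    Because catch time, catch point, and arm configuration are decided together, changing the cost terms or adding constraints yields different catch behaviors from the same formulation, which mirrors the flexibility the abstract claims for the unified problem.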

    Cognitive Reasoning for Compliant Robot Manipulation

    Physically compliant contact is a major element of many tasks in everyday environments. A universal service robot that is utilized to collect leaves in a park, polish a workpiece, or clean solar panels requires the cognition and manipulation capabilities to facilitate such compliant interaction. Evolution equipped humans with advanced mental abilities to envision physical contact situations and their resulting outcome, dexterous motor skills to perform the actions accordingly, as well as a sense of quality to rate the outcome of the task. In order to achieve human-like performance, a robot must provide the necessary methods to represent, plan, execute, and interpret compliant manipulation tasks. This dissertation covers those four steps of reasoning in the concept of intelligent physical compliance. The contributions advance the capabilities of service robots by combining artificial intelligence reasoning methods and control strategies for compliant manipulation. A classification of manipulation tasks is conducted to identify the central research questions of the addressed topic. Novel representations are derived to describe the properties of physical interaction. Special attention is given to wiping tasks, which are predominant in everyday environments. It is investigated how symbolic task descriptions can be translated into meaningful robot commands. A particle distribution model is used to plan goal-oriented wiping actions and to predict the quality of the anticipated result. The planned tool motions are converted into the joint space of the humanoid robot Rollin' Justin to perform the tasks in the real world. In order to execute the motions in a physically compliant fashion, a hierarchical whole-body impedance controller is integrated into the framework and automatically parameterized with respect to the requirements of the particular task. Haptic feedback is utilized to infer contact and to interpret the performance semantically. Finally, the robot is able to compensate for possible disturbances by planning additional recovery motions, effectively closing the cognitive control loop. Among other applications, the developed concept is employed in an actual space robotics mission, in which an astronaut aboard the International Space Station (ISS) commands Rollin' Justin to maintain a Martian solar panel farm in a mock-up environment. This application demonstrates the far-reaching impact of the proposed approach and the opportunities that emerge with the availability of cognition-enabled service robots.
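    To make the particle distribution idea concrete, here is a heavily simplified sketch under invented assumptions (uniformly scattered dirt particles, straight vertical strokes, a greedy planner): the medium to be wiped is a set of particles, a stroke absorbs the particles under the tool, and the predicted quality is the fraction removed.

```python
# Sketch of a particle-based wiping model in the spirit of the abstract.
# Surface size, stroke model, and the greedy planner are assumptions here,
# not the dissertation's actual method.
import numpy as np

rng = np.random.default_rng(0)
particles = rng.uniform(0.0, 1.0, size=(500, 2))   # dirt on a 1 m x 1 m surface
TOOL_HALF_WIDTH = 0.05                             # 10 cm wide wiping tool

def stroke_removes(p, x_line):
    """Particles swept by a full vertical stroke at horizontal position x_line."""
    return np.abs(p[:, 0] - x_line) <= TOOL_HALF_WIDTH

def plan_strokes(p, n_strokes):
    """Greedy planner: repeatedly choose the stroke that removes most particles."""
    strokes = []
    for _ in range(n_strokes):
        candidates = np.linspace(TOOL_HALF_WIDTH, 1 - TOOL_HALF_WIDTH, 50)
        best = max(candidates, key=lambda x: stroke_removes(p, x).sum())
        strokes.append(best)
        p = p[~stroke_removes(p, best)]            # predicted post-stroke state
    return strokes, p

strokes, remaining = plan_strokes(particles, n_strokes=8)
quality = 1.0 - len(remaining) / len(particles)    # predicted task quality
print("strokes at x =", np.round(strokes, 2), "predicted quality: %.0f%%" % (100 * quality))
```

    The same predicted post-stroke particle state that drives planning also serves as the quality estimate, which is the dual role the abstract attributes to the model.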

    Development of a Humanoid Robot Arm for Use in Urban Environments

    The Bucknell Humanoid Robot Arm project was developed in order to provide a lightweight robotic arm for the IHMC / Bucknell University bipedal robot that offers a means of manipulation and facilitates operations in urban environments. The resulting fabricated arm described in this thesis weighs only 13 pounds and is capable of holding 11 pounds fully outstretched, lifting objects such as tools, and opening doors. It can also be easily integrated with the IHMC / Bucknell University biped. This thesis provides an introduction to robots, discusses the goals of the Bucknell Humanoid Robot Arm project, gives background on some existing robots, and shows how the Bucknell Humanoid Robot Arm fits in with the studies that have been completed. From these studies, items such as design trees and operational scenarios were developed, which led to measurable specifications and, later, the design requirements and specifications. A significant contribution of this thesis to the robotics discipline is the design of the actuator itself. The arm uses individual, lightweight, compactly designed actuators to achieve the desired capabilities and performance requirements. Many iterations were completed to arrive at the final design of each actuator. After the actuators were completed, the design of the intermediate links and brackets was finalized. Completion of the design led to the development of a complex control system implemented in a combination of the C language and Java.

    Framework for context analysis and planning of an assistive robot

    This paper presents developments with the SAM robot, established in the ARMEN project, in the area of cognitive robotics. We have developed two complementary modules: the first deals with knowledge representation, while the second handles scenario generation. The knowledge representation describes the scene, the current state of the robot, and the strategy the robot should adopt to achieve goals specified by an assisted person. The information extracted from the knowledge representation is the starting point for generating the action plan and for the robot's execution of the scenario.
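    As a toy illustration of that two-module split (the abstract does not specify the actual formalism, so every name and structure below is invented), a knowledge base describing the scene can feed a scenario generator that emits an ordered action plan:

```python
# Toy illustration of the two complementary modules: a knowledge base about
# the scene and robot state feeds a scenario generator that derives an
# ordered action plan. Not the ARMEN/SAM formalism; names are invented.

# Module 1: knowledge representation -- scene, robot state, object locations.
knowledge = {
    "robot_at": "hallway",
    "person_at": "living_room",
    "object_location": {"cup": "kitchen_table"},
}

# Module 2: scenario generation -- derive the action plan for a goal
# specified by the assisted person (here: "bring me the cup").
def generate_scenario(goal_object, kb):
    source = kb["object_location"][goal_object]
    return [
        ("navigate", kb["robot_at"], source),
        ("grasp", goal_object),
        ("navigate", source, kb["person_at"]),
        ("handover", goal_object),
    ]

for step in generate_scenario("cup", knowledge):
    print(step)
```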

    Context-aware design and motion planning for autonomous service robots


    Development of a Multisensory Saddle as Input Device for Teleoperation

    Due to demographic change and a skilled labour shortage in healthcare professions, it is of high importance to explore technological assistance in healthcare. This thesis aims to develop a multisensory saddle as an additional input device for the HUG (Haptisches User Gerät, 'haptic user device') to be used for teleoperation in the SMiLE (Servicerobotik für Menschen in Lebenssituationen mit Einschränkungen, 'service robotics for people in life situations with impairments') ecosystem. Since the HUG's input is limited to two robotic arms, the operator cannot control the mobile platform and the arms of a robot at the same time. Adding the saddle developed in this thesis to the HUG setup resolves this, allowing the operator to control a robotic platform by pressing sensors with their legs. After an evaluation of different sensors, a sensor module is developed around a spring-loaded potentiometer and mounted adjustably on the saddle to account for differences in operator height. Following the implementation of the necessary software infrastructure, the saddle is validated in a user study comparing two different steering approaches: differential driving with and without the additional option to drive sideways. The study yielded mostly positive results on both the sideways-driving functionality and the saddle as an input device, concluding the development of a first prototype of the saddle.
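    A minimal sketch of how such leg-operated sensor readings might map to platform velocity commands under the two steering approaches; the sensor layout, gains, and dead zone are assumptions, since the abstract names the hardware (a spring-loaded potentiometer module) but not the control mapping.

```python
# Hypothetical mapping from normalized saddle sensor readings to platform
# velocity commands for the two steering approaches compared in the study.
from dataclasses import dataclass

DEAD_ZONE = 0.05  # ignore small accidental presses (assumed threshold)

@dataclass
class SaddleReading:
    left: float    # normalized sensor values in [0, 1]
    right: float
    front: float
    back: float

def _dz(x: float) -> float:
    """Suppress readings below the dead zone."""
    return 0.0 if abs(x) < DEAD_ZONE else x

def to_velocity(r: SaddleReading, sideways_mode: bool):
    """Return (forward m/s, lateral m/s, yaw rad/s) for the mobile platform."""
    forward = 0.5 * _dz(r.front - r.back)   # front/back pressure drives
    side = _dz(r.right - r.left)            # asymmetric leg pressure
    if sideways_mode:                       # with the sideways option, side
        return forward, 0.3 * side, 0.0     # pressure translates laterally
    return forward, 0.0, 1.0 * side         # differential mode: pressure turns

print(to_velocity(SaddleReading(0.2, 0.6, 0.8, 0.0), sideways_mode=True))
```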

    Comparison of interaction modalities for mobile indoor robot guidance: direct physical interaction, person following, and pointing control

    Get PDF
    Three advanced natural interaction modalities for mobile robot guidance in an indoor environment were developed and compared using two tasks and quantitative metrics to measure performance and workload. The first interaction modality is based on direct physical interaction, requiring the human user to push the robot in order to displace it. The second and third interaction modalities exploit 3-D vision-based human-skeleton tracking, allowing the user to guide the robot either by walking in front of it or by pointing toward a desired location. In the first task, the participants were asked to guide the robot between different rooms in a simulated physical apartment, requiring rough movement of the robot through designated areas. The second task evaluated robot guidance in the same environment through a set of waypoints, which required accurate movements. The three interaction modalities were implemented on a generic differential-drive mobile platform equipped with a pan-tilt system and a Kinect camera. Task completion time and accuracy were used as metrics to assess the users' performance, while the NASA-TLX questionnaire was used to evaluate the users' workload. A study with 24 participants indicated that the choice of interaction modality had a significant effect on completion time (F(2,61)=84.874, p<0.001), accuracy (F(2,29)=4.937, p=0.016), and workload (F(2,68)=11.948, p<0.001). The direct physical interaction required less time, provided more accuracy, and induced less workload than the two contactless interaction modalities. Between the two contactless interaction modalities, the person-following modality was systematically better than the pointing-control one: the participants completed the tasks faster and with less workload.
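    The pointing-control modality presumably reduces to a ray-casting computation on the tracked skeleton; the abstract gives no implementation details, so the following is only a plausible sketch: intersect the shoulder-to-hand ray with the floor plane to obtain a navigation goal.

```python
# Plausible geometry behind a pointing-control modality: cast a ray from the
# user's shoulder through the hand (both from skeleton tracking) and
# intersect it with the floor plane to get a navigation goal. Frame
# conventions and point values are invented for illustration.
import numpy as np

def pointing_target(shoulder: np.ndarray, hand: np.ndarray, floor_z: float = 0.0):
    """Intersect the shoulder->hand ray with the horizontal plane z = floor_z."""
    direction = hand - shoulder
    if direction[2] >= 0:                 # ray points upward: no floor hit
        return None
    t = (floor_z - shoulder[2]) / direction[2]
    return shoulder + t * direction       # 3-D goal point on the floor

# Example skeleton points in meters (z up, invented values):
goal = pointing_target(np.array([0.0, 0.0, 1.4]), np.array([0.3, 0.1, 1.1]))
print("send robot to:", goal)             # -> roughly [1.40, 0.47, 0.00]
```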