
    Bringing a Humanoid Robot Closer to Human Versatility: Hard Realtime Software Architecture and Deep Learning Based Tactile Sensing

    For centuries, it has been a vision of man to create humanoid robots, i.e., machines that not only resemble the shape of the human body but have similar capabilities, especially in dextrously manipulating their environment. But only in recent years has it become possible to build actual humanoid robots with many degrees of freedom (DOF) and equipped with torque-controlled joints, which are a prerequisite for acting sensitively in the world. In this thesis, we extend DLR's advanced mobile torque-controlled humanoid robot Agile Justin in two important directions to get closer to human versatility. First, we enable Agile Justin, which was originally built as a research platform for dextrous mobile manipulation, to also execute complex dynamic manipulation tasks. We demonstrate this with the challenging task of catching up to two simultaneously thrown balls with its hands. Second, we equip Agile Justin with highly developed, deep-learning-based tactile sensing capabilities that are critical for dextrous fine manipulation. We demonstrate its tactile capabilities with the delicate task of identifying an object's material simply by gently sweeping a fingertip over its surface. Key to the realization of complex dynamic manipulation tasks is a software framework that allows for a component-based system architecture to cope with the complexity and the parallel and distributed computational demands of deep sensor-perception-planning-action loops -- but under tight timing constraints. This thesis presents the communication layer of our aRDx (agile robot development -- next generation) software framework, which provides hard realtime determinism and optimal transport of data packets, with zero-copy for intra- and inter-process communication and copy-once for distributed communication. In the implementation of the challenging ball-catching application on Agile Justin, we take full advantage of aRDx's performance and advanced features like channel synchronization.
    Besides developing the challenging visual ball tracking, using only onboard sensing while everything is moving, and the automatic and self-contained calibration procedure that provides the necessary precision, the major contribution is the unified generation of the reaching motion for the arms. Catch-point selection, motion planning, and the joint interpolation steps are subsumed in one nonlinear constrained optimization problem, which is solved in realtime and allows for the realization of different catch behaviors. For the highly sensitive task of tactile material classification with a flexible pressure-sensitive skin on Agile Justin's fingertip, we present our deep convolutional network architecture TactNet-II. The input is the raw 16000-dimensional, complex, and noisy spatio-temporal tactile signal generated when sweeping over an object's surface. For comparison, we perform a thorough human-performance experiment with 15 subjects, which shows that Agile Justin reaches superhuman performance in the high-level material classification task (which material is it?) as well as in the low-level material differentiation task (are two materials the same?). To increase the sample efficiency of TactNet-II, we adapt state-of-the-art deep end-to-end transfer learning to tactile material classification, leading to an up to 15-fold reduction in the number of training samples needed. The presented methods led to six publication awards and award-finalist nominations as well as international media coverage, and also worked robustly at many trade fairs and lab demos.
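    The idea of subsuming catch-point selection, timing, and motion generation in one optimization can be illustrated with a deliberately tiny sketch. Everything below -- the 2-D geometry, the speed-limited hand model, and all numbers -- is an invented toy, not the thesis's actual formulation: the catch reduces to finding the earliest time at which the ballistic ball is reachable.

```python
import math

G = -9.81  # gravity (m/s^2), acting on the y axis

def ball_pos(t, p0=(0.0, 2.0), v0=(2.0, 3.0)):
    """Ballistic ball position at time t (illustrative launch state)."""
    return (p0[0] + v0[0] * t, p0[1] + v0[1] * t + 0.5 * G * t * t)

def reach_margin(t, h0=(1.0, 1.0), vmax=2.0):
    """>= 0 when a hand starting at h0 with speed limit vmax can meet
    the ball by time t; this is the single catch constraint."""
    bx, by = ball_pos(t)
    dist = math.hypot(bx - h0[0], by - h0[1])
    return vmax * t - dist

def earliest_catch(t_max=2.0, steps=2000, iters=60):
    """Minimize the catch time subject to reachability: coarse scan for
    the first feasible instant, then bisect the feasibility boundary."""
    prev = 0.0
    for i in range(1, steps + 1):
        t = t_max * i / steps
        if reach_margin(t) >= 0.0:
            lo, hi = prev, t
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                lo, hi = (lo, mid) if reach_margin(mid) >= 0.0 else (mid, hi)
            return hi
        prev = t
    return None  # ball never reachable within the horizon

t_catch = earliest_catch()
```

    A real system would instead optimize over full joint trajectories with joint-limit and torque constraints in a nonlinear constrained solver; the one-dimensional feasibility-boundary search merely stands in for that machinery.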

    Master of Science

    Human motion capture has a wide variety of applications in the entertainment and medical industries. Actors using motion capture devices provide realistic motion inputs for cartoons, virtual reality environments, and computer and robot animation, resulting in tremendous time and cost savings. Medical applications include range-of-motion studies to diagnose injuries or identify insurance fraud, biomechanics studies of human performance and calculation of joint stresses, and ergonomics studies of humans in the workplace. There are common problems facing all methods of motion capture: how to attach the device to the individual's limbs, which sensors to use and how to use them, how to transmit data and convert it into a usable form, calibration of the device, data display, user comfort, and device reliability. Even when these problems are addressed, there are limitations in the kinematic model as well as human joint anomalies that make all methods imperfect. Currently, there are optical, magnetic, and exoskeletal devices for motion capture that vary widely in terms of performance, cost, and limitations. Considering the likely environment and performance needs of the Sarcos Research Corporation, the SenSuit™ was built as an exoskeletal device. Creation of the SenSuit™ involved overcoming three major hurdles: the soft tissue interface, accurate joint angle measurement, and sensor design. The soft tissue interface is the series of rigid plates that are placed on skeletal landmarks located near the surface of the user's skin. Through appropriate location of the plates, a consistent, stable fit to the skeleton was achieved for users, which enhanced joint angle data. Accurate joint angle measurements were achieved either by aligning sensor rotation centers with approximate joint rotation centers or by computationally transforming the outputs of three-degree-of-freedom sensor clusters located so as to reduce nonlinearities. A software routine allowed for quick, linear calibration for the individual user. Joint angle sensors were designed that were small, linear, robust, and resistant to wear and contaminants. The SenSuit™ has proven itself both comfortable and reliable. It has been thoroughly tested in real-world applications, including real-time driving of graphical and robotic figures, as well as the programming of various robotic figures.
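    A "quick, linear calibration" of a joint angle sensor can be sketched as fitting a gain/offset pair per joint from a few known reference poses. The routine, the sensor counts, and the poses below are illustrative assumptions, not Sarcos's actual procedure:

```python
def fit_linear(readings, angles_deg):
    """Ordinary least-squares fit of angle = gain * reading + offset."""
    n = len(readings)
    mx = sum(readings) / n
    my = sum(angles_deg) / n
    sxx = sum((x - mx) ** 2 for x in readings)
    sxy = sum((x - mx) * (y - my) for x, y in zip(readings, angles_deg))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Hypothetical calibration poses: raw sensor counts at 0, 45, and 90 degrees
gain, offset = fit_linear([512, 700, 888], [0.0, 45.0, 90.0])

# Convert a live reading to degrees with the fitted map
angle = gain * 794 + offset
```

    With a genuinely linear sensor, two or three poses suffice; the least-squares form additionally averages out reading noise when more poses are collected.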

    HAPTIC AND VISUAL SIMULATION OF BONE DISSECTION

    Marco Agus. In bone dissection virtual simulation, force restitution represents the key to realistically mimicking a patient-specific operating environment. The force is rendered using haptic devices controlled by parametrized mathematical models that represent the bone-burr contact. This dissertation presents and discusses a haptic simulation of a bone-cutting burr that is being developed as a component of a training system for temporal bone surgery. A physically based model was used to describe the burr-bone interaction, including haptic force evaluation, the bone erosion process, and the resulting debris. The model was experimentally validated and calibrated by employing a custom experimental set-up consisting of a force-controlled robot arm holding a high-speed rotating tool and a contact-force measuring apparatus. Psychophysical testing was also carried out to assess individual reaction to the haptic environment. The results suggest that the simulator is capable of rendering the basic material differences required for bone burring tasks. The current implementation, operating directly on a voxel discretization of patient-specific 3D CT and MR imaging data, is efficient enough to provide real-time haptic and visual feedback on a low-end multi-processing PC platform.
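    The voxel-based burr-bone interaction can be sketched at its simplest: per haptic tick, sum a repulsive force over bone voxels inside the burr sphere and reduce their density to model erosion. The sparse-dictionary grid, force gain, and erosion rate below are invented for illustration; the dissertation's actual model is physically based and experimentally calibrated.

```python
import itertools

def burr_step(bone, center, radius, k_force=1.0, erode=0.2):
    """One haptic tick. `bone` maps integer voxel coords to a density in
    [0, 1]; returns the (fx, fy, fz) force on the burr and erodes the
    voxels it touched in place."""
    fx = fy = fz = 0.0
    cx, cy, cz = center
    r = int(radius) + 1
    for dx, dy, dz in itertools.product(range(-r, r + 1), repeat=3):
        key = (cx + dx, cy + dy, cz + dz)
        d2 = dx * dx + dy * dy + dz * dz
        if d2 <= radius * radius and bone.get(key, 0.0) > 0.0:
            w = bone[key]
            # push the tool away from occupied material,
            # weighted by the voxel's remaining density
            fx -= k_force * w * dx
            fy -= k_force * w * dy
            fz -= k_force * w * dz
            bone[key] = max(0.0, w - erode)  # material removal
    return fx, fy, fz
```

    Run in a loop at the haptic rate, the same pass yields both the rendered force and the evolving bone geometry, which is the attraction of operating directly on the voxel discretization.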

    Techniques For Sensor-Integrated Robotic Systems: Raman Spectra Analysis, Image Guidance, And Kinematic Calibration

    Robotics and sensor technology have made impressive advancements over the years. There are now robotic systems that help perform surgeries or explore the surface of Mars, and there are sensors that detect trace amounts of explosives or identify diseased human tissue. The most powerful systems integrate robots and sensors, which are natural complements to each other. Sensors can provide information that might otherwise be unavailable due to indirect robotic manipulation (e.g., images of the target environment), and robots can provide suitably precise positioning of an analytical sensor. To have an effective sensor-integrated robotic system, multiple capabilities are needed in the areas of sensors, robotics, and techniques for robot/sensor integration. However, for many types of applications, there are shortcomings in the current technologies employed to provide these capabilities. For the analysis of complex sensor signals, there is a need for improved algorithms and open platforms that enable techniques to be shared. For the path planning and tracking of integrated sensors and the visualization of collected information, image guidance systems that support advanced analytical sensors would be very beneficial. For robotic placement of a sensor, easily usable calibration procedures and methods to overcome limited feedback could help improve the accuracy. To help address these issues, some novel systems and techniques were developed in this research. First, a software system was created to process, analyze, and classify data from a specific kind of sensor (a Raman spectrometer). The system is open and extensible, and it contains novel techniques for processing and analyzing the sensor data. Second, an image guidance system was made for use with a sensor-integrated robotic system (a Raman probe attached to a surgical system). 
The system supports tool tracking, sensor activation, real-time sensor data analysis, and presentation of the results in a 3D computer visualization of the environment. Third, a kinematic calibration technique was developed for serial manipulators. It requires no external measurement devices for calibration, addresses some limitations of existing techniques, and can significantly enhance the positional accuracy of a robot to improve sensor placement. The implemented techniques and systems were successfully evaluated using various data sets and conditions. Together, the contributions of this work provide important building blocks for an accurate robot with an integrated analytical sensor. This type of system would be a powerful tool for many future applications, such as a surgical robot that automatically scans for diseased tissue and assists the surgeon in the necessary treatment. Ultimately, this work is intended to foster the development of advanced sensor-integrated robotic systems.
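    A process-analyze-classify pipeline for spectra can be illustrated with a minimal stand-in: crude baseline removal and normalization, followed by a nearest-centroid match against labeled reference spectra. These particular algorithms are assumptions for illustration and do not reproduce the system's actual techniques.

```python
def preprocess(spectrum):
    """Crude baseline removal (subtract the minimum) plus
    unit-sum normalization, so spectra are comparable."""
    base = min(spectrum)
    shifted = [v - base for v in spectrum]
    total = sum(shifted) or 1.0  # guard against an all-flat spectrum
    return [v / total for v in shifted]

def classify(spectrum, references):
    """Label of the reference spectrum closest (squared distance,
    after preprocessing) to the measured one."""
    s = preprocess(spectrum)
    def dist(ref):
        r = preprocess(ref)
        return sum((a - b) ** 2 for a, b in zip(s, r))
    return min(references, key=lambda label: dist(references[label]))

# Hypothetical reference library: two 4-channel spectra with distinct peaks
refs = {"A": [0.0, 5.0, 0.0, 0.0], "B": [0.0, 0.0, 5.0, 0.0]}
label = classify([2.0, 7.0, 2.0, 2.0], refs)  # same peak as "A", offset baseline
```

    The baseline subtraction is what lets the offset measurement still match its reference; a real pipeline would use far more careful background and fluorescence correction.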

    Autonomous model building using vision and manipulation

    It is often the case that robotic systems require models, in order to successfully control themselves, and to interact with the world. Models take many forms and include kinematic models to plan motions, dynamics models to understand the interaction of forces, and models of 3D geometry to check for collisions, to name but a few. Traditionally, models are provided to the robotic system by the designers that build the system. However, for long-term autonomy it becomes important for the robot to be able to build and maintain models of itself, and of objects it might encounter. In this thesis, the argument for enabling robotic systems to autonomously build models is advanced and explored. The main contribution of this research is to show how a layered approach can be taken to building models. Thus a robot, starting with a limited amount of information, can autonomously build a number of models, including a kinematic model, which describes the robot’s body, and allows it to plan and perform future movements. Key to the incremental, autonomous approach is the use of exploratory actions. These are actions that the robot can perform in order to gain some more information, either about itself, or about an object with which it is interacting. A method is then presented whereby a robot, after being powered on, can home its joints using just vision, i.e. traditional methods such as absolute encoders, or limit switches are not required. The ability to interact with objects in order to extract information is one of the main advantages that a robotic system has over a purely passive system, when attempting to learn about or build models of objects. In light of this, the next contribution of this research is to look beyond the robot’s body and to present methods with which a robot can autonomously build models of objects in the world around it. 
The first class of objects examined is flat-pack cardboard boxes, a class of articulated objects with a number of interesting properties. It is shown how exploratory actions can be used to build a model of a flat-pack cardboard box and to locate any hinges the box may have. Specifically, it is shown how, when interacting with an object, a robot can combine haptic feedback from force sensors with visual feedback from a camera to get more information from an object than would be possible using just a single sensor modality. The final contribution of this research is a series of exploratory actions for a robotic text-reading system that allow text to be found and read from an object. The text-reading system highlights how models of objects can take many forms, from a representation of their physical extents to the text that is written on them.
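    A toy version of such a hinge-locating exploratory action (the geometry and numbers are invented, and the thesis's actual method is not reproduced here): if the robot pushes a box flap and visually tracks one point on it at three instants, the hinge position in the image plane is the circumcenter of the three observed positions, since the tracked point sweeps a circular arc about the hinge.

```python
import math

def circumcenter(p1, p2, p3):
    """Center of the circle through three tracked 2-D points, i.e. the
    hinge position (collinear/degenerate input is not handled)."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return ux, uy

# Three sightings of a flap corner as the flap swings about the point (1, 2)
hinge = circumcenter(
    (1 + math.cos(0.2), 2 + math.sin(0.2)),
    (1 + math.cos(0.7), 2 + math.sin(0.7)),
    (1 + math.cos(1.2), 2 + math.sin(1.2)),
)
```

    With noisy tracking one would fit a circle to many sightings in a least-squares sense instead; force feedback then confirms that the flap really is constrained to rotate about the estimated axis.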

    Haptic and visual simulation of bone dissection

    Doctoral thesis: Università degli Studi di Cagliari, Faculty of Engineering, Department of Mechanical Engineering, XV Doctoral Cycle in Mechanical Design. In bone dissection virtual simulation, force restitution represents the key to realistically mimicking a patient-specific operating environment. The force is rendered using haptic devices controlled by parametrized mathematical models that represent the bone-burr contact. This dissertation presents and discusses a haptic simulation of a bone-cutting burr that is being developed as a component of a training system for temporal bone surgery. A physically based model was used to describe the burr-bone interaction, including haptic force evaluation, the bone erosion process, and the resulting debris. The model was experimentally validated and calibrated by employing a custom experimental set-up consisting of a force-controlled robot arm holding a high-speed rotating tool and a contact-force measuring apparatus. Psychophysical testing was also carried out to assess individual reaction to the haptic environment. The results suggest that the simulator is capable of rendering the basic material differences required for bone burring tasks. The current implementation, operating directly on a voxel discretization of patient-specific 3D CT and MR imaging data, is efficient enough to provide real-time haptic and visual feedback on a low-end multi-processing PC platform.

    DESIGN AND CONSTRUCTION OF A FORCE-REFLECTING TELEOPERATION SYSTEM


    Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1

    The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and of job creation and preservation can partly be solved by the use of intelligent robotics, which is also required for human space exploration missions. Individual sessions addressed the nuclear industry, agile manufacturing, security and building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.

    Robotics 2010

    Without a doubt, robotics has made incredible progress over the last decades. The vision of developing, designing, and creating technical systems that help humans achieve hard and complex tasks has led to an incredible variety of solutions. Hardly any technical field exhibits more interdisciplinary interconnections than robotics. This stems from the highly complex challenges posed by robotic systems, especially the requirement of intelligent and autonomous operation. This book tries to give an insight into the evolutionary process taking place in robotics. It provides articles covering a wide range of this exciting area. The progress of technical challenges and concepts may illuminate the relationship between developments that seem completely different at first sight. Robotics remains an exciting scientific and engineering field. The community looks optimistically ahead and looks forward to future challenges and new developments.