
    Grasping and Assembling with Modular Robots

    A wide variety of problems, from manufacturing to disaster response and space exploration, can benefit from robotic systems that can firmly grasp objects or assemble structures, particularly in difficult, dangerous environments. In this thesis, we study two problems, robotic grasping and assembly, using a modular robotic approach that brings versatility and robustness to both. First, this thesis develops a theoretical framework for grasping objects with customized effectors that have curved contact surfaces, with applications to modular robots. We present a collection of grasps and cages that can effectively restrain the mobility of a wide range of objects, including polyhedra. Each grasp or cage is formed by at most three effectors, and a stable grasp is obtained by simple motion planning and control. Based on this theory, we create a robotic system comprising a modular manipulator equipped with customized end-effectors and a software suite for planning and control of the manipulator. Second, this thesis presents efficient assembly planning algorithms for constructing planar target structures collectively with a collection of homogeneous mobile modular robots. The algorithms are provably correct and handle arbitrary target structures, including those with internal holes. The resulting assembly plan supports parallel assembly and guarantees easy accessibility, in the sense that a robot never has to pass through a narrow gap while approaching its target position. Finally, we extend the algorithms to various symmetric patterns formed by a collection of congruent rectangles in the plane. The basic ideas in this thesis have broad applications to manufacturing (restraint), humanitarian missions (forming airfields on the high seas), and service robotics (grasping and manipulation).
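    As a rough, illustrative sketch of the assembly-ordering idea (not the thesis's actual algorithm, which additionally guarantees wide approach corridors and parallelism), the Python below orders the cells of a planar grid structure by greedy reverse disassembly, so that every module attaches to a connected partial structure while approaching from free space. The grid abstraction and all names are assumptions for illustration.

```python
from collections import deque

def neighbors(c):
    x, y = c
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def connected(cells):
    """True if the cell set is 4-connected (empty or singleton counts)."""
    if len(cells) <= 1:
        return True
    seen = {next(iter(cells))}
    queue = deque(seen)
    while queue:
        for n in neighbors(queue.popleft()):
            if n in cells and n not in seen:
                seen.add(n)
                queue.append(n)
    return seen == cells

def assembly_order(target):
    """Greedily peel off boundary cells whose removal keeps the structure
    connected, then reverse: each module then attaches to a connected
    partial structure and is approached from free space."""
    remaining = set(target)
    order = []
    while remaining:
        for c in sorted(remaining):
            touches_free = any(n not in remaining for n in neighbors(c))
            if touches_free and connected(remaining - {c}):
                order.append(c)
                remaining.remove(c)
                break
        else:
            raise ValueError("no removable boundary cell found")
    return list(reversed(order))

# A 3x3 square with an internal hole, the kind of target the planner handles.
target = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}
print(assembly_order(target))
```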

    Visual Dexterity: In-hand Dexterous Manipulation from Depth

    In-hand object reorientation is necessary for performing many dexterous manipulation tasks, such as tool use in unstructured environments that remain beyond the reach of current robots. Prior works built reorientation systems that assume one or more of the following restrictive conditions: reorienting only specific objects with simple shapes, a limited range of reorientation, slow or quasistatic manipulation, the need for specialized and costly sensor suites, simulation-only results, or other constraints that make the system infeasible for real-world deployment. We overcome these limitations and present a general object reorientation controller that is trained using reinforcement learning in simulation and evaluated in the real world. Our system uses readings from a single commodity depth camera to dynamically reorient complex objects by any amount in real time. The controller generalizes to novel objects not used during training. It succeeds in the most challenging test: reorienting objects in the air with a downward-facing hand that must counteract gravity during reorientation. The results demonstrate that policy transfer from simulation to the real world can be accomplished even for dynamic and contact-rich tasks. Lastly, our hardware uses only open-source components that cost less than five thousand dollars, making it possible to replicate the work and democratize future research in dexterous manipulation. Videos are available at: https://taochenshh.github.io/projects/visual-dexterity
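    The controller itself is learned, but its interface can be pictured with a minimal policy-network sketch: one depth image plus proprioception in, joint targets out. This is a hedged illustration in PyTorch; the layer sizes, joint count, and names are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DepthReorientPolicy(nn.Module):
    """Toy policy: one depth image + joint state -> normalised joint targets."""
    def __init__(self, n_joints=16):
        super().__init__()
        self.encoder = nn.Sequential(              # small conv stack over depth
            nn.Conv2d(1, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 + n_joints, 128), nn.ReLU(),
            nn.Linear(128, n_joints), nn.Tanh(),   # targets in [-1, 1]
        )

    def forward(self, depth, joints):
        z = self.encoder(depth)                    # depth: (B, 1, H, W)
        return self.head(torch.cat([z, joints], dim=-1))

policy = DepthReorientPolicy()
action = policy(torch.rand(1, 1, 96, 96), torch.zeros(1, 16))
print(action.shape)  # torch.Size([1, 16])
```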

    Modelling and Interactional Control of a Multi-fingered Robotic Hand for Grasping and Manipulation.

    In this thesis, the synthesis of a grasping and manipulation controller for the Barrett hand, an archetypal example of a multi-fingered robotic hand, is investigated in detail. This synthesis involves not only the dynamic modelling of the robotic hand but also the control of the joint and workspace dynamics, as well as the interaction of the hand with the object it is grasping and the environment it is operating in. Grasping and manipulation of an object by a robotic hand is always challenging due to uncertainties associated with the non-linearities of the robot dynamics, the unknown location and stiffness parameters of unstructured objects, and the unknown contact mechanics during the interaction between the hand’s fingers and the object. To address these challenges, the fundamental task is to establish a mathematical model of the robot hand, model the body dynamics of the object, and establish the contact mechanics between the hand and the object. A Lagrangian-based mathematical model of the Barrett hand is developed for controller implementation. A physical SimMechanics-based model of the Barrett hand is also developed in the MATLAB/Simulink environment. A computed torque controller and an adaptive sliding mode controller are designed for the hand, and their performance is assessed both in the joint space and in the workspace. Stability analysis of the controllers is carried out before developing the control laws. Higher-order sliding mode controllers are developed for position control in the presence of uncertainties; these controllers also enhance performance by reducing chattering of the control torques applied to the robot hand. A contact model is developed for the Barrett hand as its fingers grasp the object in the operating environment. The contact forces during simulated interaction of the fingers with the object are monitored for objects with different stiffness values. Position- and force-based impedance controllers are developed to optimise the contact force. To deal with the unknown stiffness of the environment, adaptation is implemented by identifying the impedance. An evolutionary algorithm is also used to estimate the desired impedance parameters of the dynamics of the coupled robot and compliant object. A Newton-Euler-based model is developed for the rigid object body. A grasp map and a hand Jacobian are defined for the Barrett hand grasping an object. A fixed contact model with friction is considered for the grasping and manipulation control. The compliant dynamics of the Barrett hand and object are developed, and the control problem is defined in terms of the contact force. An adaptive control framework is developed and implemented for different grasps and manipulation trajectories of the Barrett hand. The adaptive controller is developed in two stages: first, the unknown robot and object dynamics are estimated, and second, the contact force is computed from the estimated dynamics. The stability of the controllers is ensured by applying Lyapunov’s direct method.
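    The computed torque controller mentioned above follows a standard structure that can be sketched compactly. The placeholder dynamics below are assumptions for illustration; in the thesis, M, C and g come from the Lagrangian model of the hand.

```python
import numpy as np

def computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, g, Kp, Kd):
    """Computed-torque law: feedback-linearise the manipulator dynamics
    M(q) ddq + C(q, dq) dq + g(q) = tau, imposing the stable linear error
    dynamics e_ddot + Kd e_dot + Kp e = 0 on the tracking error e."""
    e, de = q_des - q, dq_des - dq
    v = ddq_des + Kd @ de + Kp @ e           # outer-loop acceleration command
    return M(q) @ v + C(q, dq) @ dq + g(q)   # inner-loop inverse dynamics

# Toy two-joint finger with placeholder dynamics.
M = lambda q: np.diag([0.05, 0.03])          # inertia matrix
C = lambda q, dq: np.zeros((2, 2))           # Coriolis/centrifugal terms
g = lambda q: np.zeros(2)                    # gravity torques
Kp, Kd = np.diag([100.0, 100.0]), np.diag([20.0, 20.0])
tau = computed_torque(np.zeros(2), np.zeros(2),
                      np.array([0.5, -0.2]), np.zeros(2), np.zeros(2),
                      M, C, g, Kp, Kd)
print(tau)  # torques driving the joints toward the desired angles
```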

    Scaled Autonomy for Networked Humanoids

    Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of the humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and the task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and with planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework, as sketched below. The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark for autonomous agents trained with a human supervisor. The kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work enabled five consecutive championships. Furthermore, the motion planning and user interfaces developed in this work were tested over the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment. Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step on the path to deploying humanoids in the real world, based on low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.
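    The sequential convex optimization machinery can be pictured on a one-waypoint toy: a nonconvex keep-out constraint is linearised about the current iterate, and the resulting convex subproblem is solved in closed form, repeatedly. This is a hedged sketch of the general technique, not the thesis's trajectory planner; all numbers are illustrative.

```python
import numpy as np

def scp_step(x, goal, obs, r):
    """One convexify-and-solve step: linearise ||x - obs|| >= r about x,
    then minimise ||x - goal||^2 over the resulting halfspace (a projection)."""
    d = x - obs
    a = d / np.linalg.norm(d)    # gradient of the distance constraint
    b = a @ obs + r              # linearised constraint: a @ x >= b
    xstar = goal.copy()          # unconstrained minimiser
    if a @ xstar < b:            # constraint violated: project onto boundary
        xstar += (b - a @ xstar) * a
    return xstar

x = np.array([2.0, 0.1])         # initial iterate on the far side of the disc
goal, obs, r = np.array([-2.0, 0.0]), np.array([0.0, 0.0]), 1.0
for _ in range(20):              # iterate to a fixed point
    x = scp_step(x, goal, obs, r)
print(x)                         # reaches the goal while respecting the keep-out
```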

    Manipulation Planning for Forceful Human-Robot-Collaboration

    This thesis addresses the problem of manipulation planning for forceful human-robot collaboration. Particularly, the focus is on the scenario where a human applies a sequence of changing external forces through forceful operations (e.g. cutting a circular piece off a board) on an object that is grasped by a cooperative robot. We present a range of planners that 1) enable the robot to stabilize and position the object under the human-applied forces by exploiting supports from both object-robot and object-environment contacts; 2) improve task efficiency by minimizing the need for configuration and grasp changes required by the changing external forces; and 3) improve human comfort during the forceful interaction by optimizing defined comfort criteria. We first focus on the instance of using only robotic grasps, where the robot must grasp and regrasp the object multiple times to keep it stable under the changing external forces. We introduce a planner that generates an efficient manipulation plan by intelligently deciding when the robot should change its grasp on the object as the human applies the forces, and by choosing subsequent grasps that minimize the number of regrasps required in the long term. The planner searches for such an efficient plan by first finding a minimal sequence of grasp configurations that can keep the object stable under the changing forces, and then generating connecting trajectories to switch between the planned configurations, i.e. planning regrasps. We perform the search for such a grasp (configuration) sequence by sampling stable configurations for the external forces, building an operation graph from these stable configurations, and then searching the operation graph to minimize the number of regrasps. We also solve the problem of bimanual regrasp planning under the assumption of no support surface, enabling the robot to regrasp an object in the air by finding intermediate configurations at which both bimanual and unimanual grasps can hold the object stable under gravity. We present a variety of experiments showing the performance of our planner, particularly in minimizing the number of regrasps for forceful manipulation tasks and in planning stable regrasps. We then explore the problem of using both object-environment and object-robot contacts, which enlarges the set of stable configurations and thus boosts the robot’s capability to stabilize the object under external forces. We present a planner that intelligently exploits the environment’s and the robot’s stabilization capabilities within a unified planning framework to search for a minimal number of stable contact configurations. A major computational bottleneck in this planner is the static stability analysis of a large number of candidate configurations. We introduce a containment relation between different contact configurations to efficiently prune the stability-checking process, present a set of real-robot and simulated experiments illustrating the effectiveness of the proposed framework, and give a detailed analysis of the containment relationship, particularly its effect on planning efficiency. We further present a planning algorithm that improves the cooperative robot behaviour with respect to human comfort during the forceful human-robot interaction. Particularly, we are interested in empowering the robot with the capability of grasping and positioning the object not only to ensure object stability against the human-applied forces, but also to improve human experience and comfort during the interaction. We address human comfort as the muscular activation level required to apply a desired external force, together with human spatial perception, i.e. the so-called peripersonal-space comfort during the interaction. We propose to maximize both comfort metrics by optimizing the robot and object configuration so that the human can perform a forceful operation comfortably. We present a set of human-robot drilling and cutting experiments which verify the efficiency of the proposed metrics in improving the overall comfort and HRI experience, without compromising force stability. In addition to the above planning work, we present a conic formulation that approximates the distribution of a forceful operation in the wrench space with a polyhedral cone, enabling the planner to efficiently assess the stability of a system configuration even in the presence of the force uncertainties inherent in human-applied forceful operations. We also develop a graphical user interface with which human users can easily specify various forceful tasks, i.e. sequences of forceful operations on selected objects, in an interactive manner. The user interface ties together human task specification, on-demand manipulation planning and robot-assisted fabrication. We present a set of human-robot experiments using the interface, demonstrating the feasibility of our system. In short, in this thesis we present a series of planners for object manipulation under changing external forces. We show that object contacts with the robot and the environment enable the robot to manipulate an object under external forces, and that making the most of these contacts can eliminate redundant changes during manipulation, e.g. regrasps, and thus improve task efficiency and smoothness. We also show the value of optimizing human comfort when planning forceful human-robot manipulation tasks. We believe the work presented here can be a key component in a human-robot collaboration framework.
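    The regrasp-minimization idea at the heart of the first planner can be distilled into a small greedy routine: keep the current grasp while it remains stable for each successive forceful operation, and regrasp only when no shared stable grasp survives. This is a hedged sketch of the segment-minimization principle only; the thesis's operation-graph planner also samples configurations and plans the connecting trajectories. The grasp labels are hypothetical.

```python
def plan_grasp_sequence(stable_grasps):
    """stable_grasps[i]: set of grasps keeping the object stable under the
    i-th forceful operation.  Extend each grasp over as many consecutive
    operations as possible; every new segment costs one (re)grasp.  This
    greedy rule is optimal for minimizing the number of segments."""
    plan, candidates = [], set()
    for ops in stable_grasps:
        if candidates & ops:
            candidates &= ops        # the current grasp choice still works
        else:
            if candidates:           # close the previous segment
                plan.append(candidates)
            candidates = set(ops)    # regrasp: open a new segment
    plan.append(candidates)
    return plan                      # one set of admissible grasps per segment

# Five hypothetical forceful operations, grasps labelled g1..g4.
ops = [{"g1", "g2"}, {"g2", "g3"}, {"g3"}, {"g3", "g4"}, {"g1"}]
print(plan_grasp_sequence(ops))      # [{'g2'}, {'g3'}, {'g1'}] -> two regrasps
```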

    Learning to grasp in unstructured environments with deep convolutional neural networks using a Baxter Research Robot

    Recent advancements in deep learning have accelerated the capabilities of robotic systems in terms of visual perception, object manipulation, automated navigation, and human-robot collaboration. The capability of a robotic system to manipulate objects in unstructured environments is becoming an increasingly necessary skill. Due to the dynamic nature of these environments, traditional methods that require expert human knowledge fail to adapt automatically. After reviewing the relevant literature, a method was proposed to utilise deep transfer-learning techniques to detect object grasps from coloured depth images. A grasp describes how a robotic end-effector can be arranged to securely grasp an object and successfully lift it without slippage. In this study, a ResNet-50 convolutional neural network (CNN) model is trained on the Cornell grasp dataset. Training completed within 30 hours on a workstation PC with GPU acceleration via an NVIDIA Titan X. The trained grasp-detection model was further evaluated with a Baxter Research Robot and a Microsoft Kinect v2, achieving a grasp-detection accuracy of 93.91% on a diverse set of novel objects. Physical grasping trials were conducted on a set of 8 different objects. The overall system achieves an average grasp success rate of 65.0% while performing grasp detection in under 25 milliseconds. Analysis of the results concluded that objects with reasonably straight edges and moderately pronounced heights above the table are most easily detected and grasped by the system.
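    The transfer-learning recipe described here, reusing pretrained ResNet-50 features and retraining only a new output head, looks roughly like the sketch below. This is a minimal illustration assuming a 5-parameter grasp-rectangle regression target; it is not the thesis's exact configuration or training pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse ImageNet ResNet-50 features; regress a grasp rectangle
# (x, y, angle, width, height) from a 3-channel (e.g. colourised depth) image.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                    # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 5)  # new trainable grasp head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.MSELoss()

images = torch.rand(4, 3, 224, 224)            # stand-in for a dataset batch
targets = torch.rand(4, 5)                     # stand-in grasp annotations
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(float(loss))
```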

    Robotics 2010

    Without a doubt, robotics has made incredible progress over the last decades. The vision of developing, designing and creating technical systems that help humans to achieve hard and complex tasks has led to an incredible variety of solutions. Few technical fields exhibit more interdisciplinary interconnections than robotics. This stems from the highly complex challenges posed by robotic systems, especially the requirement for intelligent and autonomous operation. This book tries to give an insight into the evolutionary process that takes place in robotics. It provides articles covering a wide range of this exciting area. The progress of technical challenges and concepts may illuminate the relationship between developments that seem completely different at first sight. Robotics remains an exciting scientific and engineering field. The community looks ahead optimistically and looks forward to future challenges and new developments.

    Emerging Trends in Mechatronics

    Mechatronics is a multidisciplinary branch of engineering combining the mechanical, electrical and electronic, control and automation, and computer engineering fields. The main research task of mechatronics is the design, control, and optimization of advanced devices, products, and hybrid systems utilizing concepts found in all of these fields. The purpose of this special issue is to help better understand how mechatronics will impact the practice and research of developing advanced techniques to model, control, and optimize complex systems. The special issue presents recent advances in mechatronics and related technologies. The selected topics give an overview of the state of the art and present new research results and prospects for the future development of the interdisciplinary field of mechatronic systems.

    A Robotic System for Learning Visually-Driven Grasp Planning (Dissertation Proposal)

    We use findings in machine learning, developmental psychology, and neurophysiology to guide a robotic learning system's level of representation, both for actions and for percepts. Visually-driven grasping is chosen as the experimental task since it has general applicability and has been extensively researched from several perspectives. An implementation of a robotic system with a gripper, compliant instrumented wrist, arm and vision is used to test these ideas. Several sensorimotor primitives (vision segmentation and manipulatory reflexes) are implemented in this system and may be thought of as the innate perceptual and motor abilities of the system. Applying empirical learning techniques to real situations raises important issues such as observation sparsity in high-dimensional spaces, arbitrary underlying functional forms of the reinforcement distribution, and robustness to noise in exemplars. The well-established technique of non-parametric projection pursuit regression (PPR) is used to accomplish reinforcement learning by searching for projections of high-dimensional data sets that capture task invariants. We also pursue the following problem: how can we use human expertise and insight into grasping to train a system to select both appropriate hand preshapes and approaches for a wide variety of objects, and then have it verify and refine its skills through trial and error? To accomplish this learning we propose a new class of Density Adaptive reinforcement learning algorithms. These algorithms use statistical tests to identify possibly interesting regions of the attribute space in which the dynamics of the task change. They automatically concentrate the building of high-resolution descriptions of the reinforcement in those areas, and build low-resolution representations in regions that are either not populated in the given task or are highly uniform in outcome. Additionally, the use of any learning process generally implies failures along the way; therefore, the mechanics of the untrained robotic system must be able to tolerate mistakes during learning without damaging itself. We address this through the use of an instrumented, compliant robot wrist that controls impact forces.
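    The density-adaptive idea, concentrating resolution where the reinforcement appears to change and staying coarse elsewhere, can be sketched with a simple recursive partition of a one-dimensional attribute space. The splitting test below is a plain Welch-style z statistic chosen for illustration; the proposal's actual statistical tests and representations are richer.

```python
import numpy as np

def adaptive_partition(x, r, lo, hi, min_n=8, thresh=2.0):
    """Recursively split [lo, hi) where the reinforcement r appears to differ
    across the midpoint; leave sparse or uniform regions at low resolution."""
    mask = (x >= lo) & (x < hi)
    xs, rs = x[mask], r[mask]
    mid = (lo + hi) / 2
    left, right = rs[xs < mid], rs[xs >= mid]
    if len(left) < min_n or len(right) < min_n:
        return [(lo, hi)]                  # too sparse: keep low resolution
    z = abs(left.mean() - right.mean()) / np.sqrt(
        left.var() / len(left) + right.var() / len(right) + 1e-12)
    if z < thresh:
        return [(lo, hi)]                  # outcomes look uniform: stop
    return (adaptive_partition(x, r, lo, mid, min_n, thresh)
            + adaptive_partition(x, r, mid, hi, min_n, thresh))

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 400)                                 # attribute samples
r = np.where(x < 0.3, 1.0, 0.0) + rng.normal(0, 0.1, 400)  # step in reward
print(adaptive_partition(x, r, 0.0, 1.0))                  # fine cells near 0.3
```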