
    Grasping and Assembling with Modular Robots

    A wide variety of problems, from manufacturing to disaster response and space exploration, can benefit from robotic systems that can firmly grasp objects or assemble structures, particularly in difficult, dangerous environments. In this thesis, we study two such problems, robotic grasping and assembly, through a modular robotic approach that offers versatility and robustness. First, this thesis develops a theoretical framework for grasping objects with customized effectors that have curved contact surfaces, with applications to modular robots. We present a collection of grasps and cages that can effectively restrain the mobility of a wide range of objects, including polyhedra. Each grasp or cage is formed by at most three effectors, and a stable grasp is obtained by simple motion planning and control. Based on this theory, we create a robotic system comprising a modular manipulator equipped with customized end-effectors and a software suite for planning and controlling the manipulator. Second, this thesis presents efficient assembly planning algorithms for constructing planar target structures collectively with a collection of homogeneous mobile modular robots. The algorithms are provably correct and handle arbitrary target structures, including those with internal holes. The resulting assembly plan supports parallel assembly and guarantees easy accessibility, in the sense that a robot never has to pass through a narrow gap while approaching its target position. Finally, we extend the algorithms to various symmetric patterns formed by congruent rectangles in the plane. The basic ideas in this thesis have broad applications to manufacturing (restraint), humanitarian missions (forming airfields on the high seas), and service robotics (grasping and manipulation).
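    The accessibility idea above can be illustrated with a toy planner: grow the structure module by module from a seed cell so that every new module attaches edge-adjacent to the existing partial structure. The grid model, the BFS ordering, and all names below are illustrative assumptions; the thesis algorithms additionally guarantee narrow-gap-free approach paths and handle internal holes.

```python
from collections import deque

def assembly_order(cells):
    """Toy build order for a planar target given as a set of (x, y) grid cells.

    Modules are added in BFS order from a seed cell, so every new module
    attaches edge-adjacent to the partial structure already built. This is
    only a sketch of the 'grow from a seed' idea, not the thesis algorithm.
    """
    cells = set(cells)
    seed = min(cells)                      # deterministic starting module
    order, seen = [], {seed}
    queue = deque([seed])
    while queue:
        x, y = queue.popleft()
        order.append((x, y))
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in cells and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return order

# Example: an L-shaped target of five modules
plan = assembly_order({(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)})
```

    Because the order is breadth-first, each module after the seed always has an already-placed neighbor, which is the minimal connectivity property a parallel assembly plan must preserve.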

    Whole-Hand Robotic Manipulation with Rolling, Sliding, and Caging

    Traditional manipulation planning and modeling rely on strong assumptions about contact. Specifically, it is common to assume that contacts are fixed and do not slide. This assumption ensures that objects are stably grasped during every step of the manipulation, to avoid ejection. However, it limits achievable manipulation to the feasible motion of the closed-loop kinematic chains formed by the object and fingers. It has been shown that relaxing contact constraints and allowing sliding can enhance dexterity, but in order to manipulate safely with shifting contacts, other safeguards must protect against ejection. “Caging manipulation,” in which the object is geometrically trapped by the fingers, can be employed to guarantee that an object never leaves the hand, regardless of constantly changing contact conditions. Mechanical compliance and underactuated joint coupling, with carefully chosen design parameters, can passively create a caging grasp, protecting against accidental ejection, while simultaneously manipulating with all parts of the hand. With passive ejection avoidance, hand control schemes can be made very simple while still accomplishing manipulation. In place of complex control, better design can improve manipulation capability through smart choices of parameters such as phalanx length, joint stiffness, joint coupling schemes, finger frictional properties, and actuator mode of operation. I will present an approach for modeling fully actuated and underactuated whole-hand manipulation with shifting contacts, show results demonstrating the relationship between design parameters and manipulation metrics, and show how this can produce highly dexterous manipulators.
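    As a minimal sketch of how design parameters shape an underactuated hand's behavior, the toy model below couples several joints to one tendon: pulley radii fix the torque ratios between joints, and torsion-spring stiffnesses set the equilibrium posture. The linear tendon model and all names are illustrative assumptions, not the modeling approach of the thesis.

```python
def joint_torques(tendon_force, pulley_radii):
    """Underactuated coupling sketch: one tendon drives several joints.

    Each joint i receives torque tau_i = r_i * f, so the pulley radii are
    the design parameters that fix the coupling ratios between joints
    (alongside joint stiffness and phalanx length in the abstract above).
    """
    return [r * tendon_force for r in pulley_radii]

def equilibrium_angles(tendon_force, pulley_radii, joint_stiffness):
    """Static equilibrium of torsion-spring joints: k_i * theta_i = r_i * f."""
    return [r * tendon_force / k
            for r, k in zip(pulley_radii, joint_stiffness)]

# A stiffer proximal joint (k = 0.4 Nm/rad) flexes less than the distal one.
angles = equilibrium_angles(2.0, [0.010, 0.007], [0.4, 0.2])
```

    Changing a single radius or stiffness reorders which joint closes first, which is the kind of parameter-to-behavior relationship the design studies above quantify.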

    Efficient caging planning capable of handling uncertainty using the configuration space of the target object and finger placements

    Degree type: Doctoral degree (course-based). University of Tokyo (東京大学)

    Dexterous grasping of novel objects from a single view

    In this thesis, a novel generative-evaluative method is proposed to solve the problem of dexterous grasping of novel objects from a single view. The generative model is learned from human demonstration, and the grasps it generates are used to train the evaluative model, a deep evaluative network trained in simulation. Two novel evaluative network architectures are proposed. The generative-evaluative method is tested on a real grasp dataset with 49 previously unseen, challenging objects, where it achieves a success rate of 78%, outperforming the purely generative method's success rate of 57%. The thesis provides insights into the strengths and weaknesses of the generative-evaluative method by comparing different deep network architectures.
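    The pipeline described above can be sketched as a generate-then-rank loop. The `generate` and `evaluate` functions below are placeholders for the demonstration-learned generative model and the simulation-trained evaluative network; everything in this sketch is an illustrative assumption, not the thesis implementation.

```python
import random

def generative_evaluative_grasp(view, generate, evaluate, n_candidates=20):
    """Sketch of a generative-evaluative pipeline.

    Sample grasp candidates from the generative model, score each one with
    the evaluative model, and execute the highest-ranked grasp.
    """
    candidates = [generate(view) for _ in range(n_candidates)]
    return max(candidates, key=lambda grasp: evaluate(view, grasp))

# Toy stand-ins: grasps are random 2-D wrist poses, and the "network"
# prefers poses close to the origin.
random.seed(0)
best = generative_evaluative_grasp(
    view=None,
    generate=lambda v: (random.uniform(-1, 1), random.uniform(-1, 1)),
    evaluate=lambda v, g: -(g[0] ** 2 + g[1] ** 2),
)
```

    The division of labor mirrors the abstract: the generator supplies plausible grasps cheaply, while the evaluator filters them using knowledge gained in simulation.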

    Learning-based robotic grasping: A review

    As personalization technology increasingly orchestrates individualized shopping and marketing experiences in industries such as logistics, fast-moving consumer goods, and food delivery, these sectors require flexible solutions that can automate the grasping of unknown or unseen objects without much modification or downtime. Most solutions on the market are based on traditional object recognition and are therefore not suitable for grasping unknown objects with varying shapes and textures. Adequate learning policies enable robotic grasping to accommodate high-mix, low-volume manufacturing scenarios. In this paper, we review recent developments in learning-based robotic grasping techniques from a corpus of over 150 papers. In addition to addressing the current achievements of researchers around the world, we point out the gaps and challenges in AI-enabled grasping that hinder robotization in the aforementioned industries. Beyond 3D object segmentation and learning-based grasping benchmarks, we have also performed a comprehensive market survey of tactile sensors and robot skin. Furthermore, we review the latest literature on how sensor feedback can be used to train a learning model to provide valid inputs for grasp stability. Finally, learning-based soft gripping is evaluated, as soft grippers can accommodate objects of various sizes and shapes and can even handle fragile objects. In general, robotic grasping can achieve higher flexibility and adaptability when equipped with learning algorithms.

    Learning deep representations for robotics applications

    In this thesis, two hierarchical learning representations are explored in computer vision tasks. First, a novel graph-theoretic method for statistical shape analysis, called Compositional Hierarchy of Parts (CHOP), is proposed. The method uses line-based features as the building blocks of its shape representation. A deep, multi-layer vocabulary is learned by recursively compressing this initial representation. The key contribution of this work is to formulate layer-wise learning as a frequent sub-graph discovery problem, solved using the Minimum Description Length (MDL) principle. The experiments show that CHOP exploits part shareability and data compression, and yields state-of-the-art shape retrieval performance on three benchmark datasets. In the second part of the thesis, a hybrid generative-evaluative method is used to solve the dexterous grasping problem. This approach combines a learned dexterous grasp generation model with two novel evaluative models based on Convolutional Neural Networks (CNNs). The data-efficient generative method learns from a human demonstrator. The evaluative models are trained in simulation, using the grasps proposed by the generative approach and depth images of the objects from a single view. On a real grasp dataset of 49 scenes with previously unseen objects, the proposed hybrid architecture outperforms the purely generative method, with a grasp success rate of 77.7% versus 57.1%. The thesis concludes by comparing the two families of deep architectures, compositional hierarchies and DNNs, providing insights into their strengths and weaknesses.

    Flexible Object Manipulation

    Flexible objects are a challenge to manipulate. Their motions are hard to predict, and their high number of degrees of freedom makes sensing, control, and planning difficult. Additionally, they present more complex friction and contact issues than rigid bodies, and they may stretch and compress. In this thesis, I explore two major types of flexible materials: cloth and string. For rigid bodies, one of the most basic problems in manipulation is the development of immobilizing grasps. The same problem exists for flexible objects. I have shown that a simple polygonal piece of cloth can be fully immobilized by grasping all convex vertices and no more than one third of the concave vertices. I also explored simple manipulation methods that use gravity to reduce the number of fingers necessary for grasping, and built a system for folding a T-shirt using a 4-DOF arm and a fixed-length iron bar that simulates two fingers. The main goal of the string manipulation work has been to tie knots without any sensing. I have developed single-piece fixtures capable of tying knots in fishing line, solder, and wire, along with a more complex track-based system for autonomously tying a knot in steel wire. I have also developed a series of fixtures that use compressed air to tie knots in string. Additionally, I have designed four-piece fixtures, which demonstrate a way to fully enclose a knot during the insertion process while guaranteeing that extraction will always succeed.
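    The cloth-immobilization bound stated above (grasp all convex vertices plus at most one third of the concave vertices) can be turned into a small vertex-counting sketch. The cross-product convexity test is standard geometry; treating "one third" as a ceiling is an interpretation, and the code is illustrative rather than the thesis construction.

```python
from math import ceil

def finger_bound(polygon):
    """Finger count sufficient to immobilize a polygonal cloth, per the
    stated result: every convex vertex plus at most ceil(concave / 3).

    `polygon` is a list of (x, y) vertices in counter-clockwise order; a
    vertex is convex when the cross product of its incident edges is positive.
    """
    n = len(polygon)
    convex = 0
    for i in range(n):
        ax, ay = polygon[i - 1]
        bx, by = polygon[i]
        cx, cy = polygon[(i + 1) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross > 0:
            convex += 1
    concave = n - convex
    return convex + ceil(concave / 3)

# An L-shaped hexagon (CCW): five convex vertices and one concave vertex.
bound = finger_bound([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)])
```

    For the L-shape the bound is six fingers: five for the convex corners plus one covering its single concave vertex.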

    Distributed framework for a multi-purpose household robotic arm

    Final degree project carried out in collaboration with the Institut de Robòtica i Informàtica Industrial (IRI). The concept of household robotic servants has been in our minds for ages, and domestic appliances are far more robotised than they used to be. At present, manufacturers are starting to introduce small, human-interactive household robots to the market. Any human-interactive device has safety, durability and simplicity constraints, which are especially strict when it comes to robots. Indeed, we are still far from a multi-purpose intelligent household robot, but research on human-interactive robots and artificial intelligence has evolved considerably, and demonstration prototypes are proof of what can be done. This project contributes to research in human-interactive robots, as the robotic arm and hand used are specially designed for human-interactive applications. The present study provides a distributed framework for arm and hand devices based on the YARP robotics protocol, using the WAM™ arm and the BarrettHand™, as well as a basic modular client application complemented with vision. Firstly, two device drivers and a network interface are designed and implemented to control the WAM™ arm and the BarrettHand™ over the network. The drivers allow abstract access to each device through three ports: a command requests port, a state requests port and an asynchronous replies port. Secondly, each driver is encapsulated by YARP devices publishing real-time monitoring feedback and motion control to the network through what is called a network wrapper. In particular, the network wrapper for the WAM™ arm and BarrettHand™ provides a state port, a command port, a Remote Procedure Call (RPC) port and an asynchronous notifications port. The state port provides the WAM™ position and orientation feedback at 50 Hz, which represents a maximum blindness of one centimetre.
    This first part of the project sets the foundations of a complete, distributed robot whose design enables processing and power payload to be shared by different workstations. Moreover, users are able to work with the robot remotely over Ethernet and wireless through a clear, understandable local interface within YARP. In addition to the distributed robotic framework, a client software framework with vision is also supplied. The client framework establishes a general software shell for further development and is organised into the basic, separate robotic branches: control, vision and planning. The vision module supports distributed image grabbing for mobile robotics, and shared memory for fixed, local vision. In order to incorporate environment interaction and robot autonomy into the planner, hand-eye transformation matrices have been obtained to perform object grasping and manipulation. The image processing is based on the OpenCV libraries and provides object recognition with Scale Invariant Feature Transform (SIFT) feature matching, the Hough transform and polygon approximation algorithms. Grasping and path planning use pre-defined grasps which take into account the size, shape and orientation of the target objects. The proof-of-concept applications feature a household robotic arm with the ability to tidy randomly distributed common kitchen objects to specified locations, with real-time robot monitoring and basic control. The device modularity introduced in this project, a philosophy of decoupling communication, local device access and the components themselves, was successful. Thanks to the abstract access and decoupling, the demonstration applications were easily deployed to test the arm's performance and its remote control and monitoring. Moreover, both resulting frameworks are arm-independent, and the design is currently being adopted by other projects' devices within the IRI.
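    The four-port network wrapper described above can be sketched with plain in-process queues standing in for YARP ports, which keeps the decoupling between the network interface and the device driver visible. The class and method names below are illustrative assumptions, not the project's actual API.

```python
import queue

class DeviceWrapper:
    """Minimal sketch of a network wrapper with state, command, RPC and
    asynchronous-notification channels, in the spirit of the ports above.
    Real YARP ports carry network traffic; here plain queues stand in.
    """
    def __init__(self, device):
        self.device = device
        self.state_port = queue.Queue()         # periodic feedback (e.g. 50 Hz pose)
        self.command_port = queue.Queue()       # motion commands
        self.notification_port = queue.Queue()  # asynchronous events

    def rpc(self, request):
        """Synchronous request/reply, as on the RPC port."""
        return self.device.handle(request)

    def step(self):
        """Drain pending commands, then publish fresh state."""
        while not self.command_port.empty():
            self.device.execute(self.command_port.get())
        self.state_port.put(self.device.read_pose())

class FakeArm:
    """Stand-in device driver (the real one wraps the WAM arm)."""
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)
    def read_pose(self):
        return self.pose
    def execute(self, cmd):
        self.pose = cmd
    def handle(self, request):
        return "ok" if request == "ping" else "unknown"

arm = DeviceWrapper(FakeArm())
arm.command_port.put((0.1, 0.2, 0.3))
arm.step()
latest = arm.state_port.get()
```

    Because clients only touch the wrapper's ports, the same interface works whether the device is local or remote, which is the arm-independence property the abstract highlights.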

    Capture and generalisation of close interaction with objects

    Robust manipulation capture and retargeting has been a longstanding goal in both animation and robotics. In this thesis I describe a new approach to capturing both the geometry and motion of interactions with objects, addressing occlusion through the use of magnetic systems and reconstructing the geometry with an RGB-D sensor alongside visual markers. This ‘interaction capture’ allows the scene to be described in terms of the spatial relationships between the character and the object, using novel topological representations such as the Electric Parameters, which parametrise the outer space of an object using properties of its surface. I describe the properties of these representations for motion generalisation and discuss how they can be applied to the problems of human-like motion generation and programming by demonstration. These generalised interactions are shown to be valid by retargeting grasping and manipulation to robots with dissimilar kinematics and morphology using only local, gradient-based planning.