
    Learning Contact-Rich Manipulation Skills with Guided Policy Search

    Autonomous learning of object manipulation skills can enable robots to acquire rich behavioral repertoires that scale to the variety of objects found in the real world. However, current motion skill learning methods typically restrict the behavior to a compact, low-dimensional representation, limiting its expressiveness and generality. In this paper, we extend a recently developed policy search method \cite{la-lnnpg-14} and use it to learn a range of dynamic manipulation behaviors with highly general policy representations, without using known models or example demonstrations. Our approach learns a set of trajectories for the desired motion skill by using iteratively refitted time-varying linear models, and then unifies these trajectories into a single control policy that can generalize to new situations. To enable this method to run on a real robot, we introduce several improvements that reduce the sample count and automate parameter selection. We show that our method can acquire fast, fluent behaviors after only minutes of interaction time, and can learn robust controllers for complex tasks, including putting together a toy airplane, stacking tight-fitting lego blocks, placing wooden rings onto tight-fitting pegs, inserting a shoe tree into a shoe, and screwing bottle caps onto bottles.
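    A minimal sketch of one ingredient mentioned above, fitting time-varying linear dynamics models to sampled rollouts by per-timestep least squares, follows. Array names, shapes, and the plain least-squares fit are illustrative assumptions, not the authors' implementation, which refits such models inside an iterative trajectory-optimization loop.

```python
# Sketch: fit x_{t+1} ~= A_t x_t + B_t u_t + c_t independently at each timestep
# from a batch of sampled trajectories. Shapes are illustrative assumptions.
import numpy as np

def fit_time_varying_linear_dynamics(X, U):
    """X: (N, T+1, dx) states from N rollouts; U: (N, T, du) actions.
    Returns a list of (A_t, B_t, c_t) for t = 0..T-1."""
    N, T1, dx = X.shape
    du = U.shape[2]
    dynamics = []
    for t in range(T1 - 1):
        # Regressors: current state, action, and a constant offset term.
        Z = np.hstack([X[:, t, :], U[:, t, :], np.ones((N, 1))])  # (N, dx+du+1)
        Y = X[:, t + 1, :]                                        # (N, dx)
        # Plain least squares; a full implementation would regularize and
        # could share statistics across timesteps via a prior.
        W, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        A_t, B_t, c_t = W[:dx].T, W[dx:dx + du].T, W[-1]
        dynamics.append((A_t, B_t, c_t))
    return dynamics
```

    In the full method these per-timestep fits would be refit after every batch of rollouts and consumed by a trajectory-optimization backward pass; the sketch covers only the regression step.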

    A General-Purpose Graphics Processing Unit (GPGPU)-Accelerated Robotic Controller Using a Low Power Mobile Platform

    Robotic controllers have to execute various complex independent tasks repeatedly. Massive processing power is required by motion controllers to compute the solutions of these computationally intensive algorithms. General-purpose graphics processing unit (GPGPU)-enabled mobile phones can be leveraged to accelerate these motion controllers. Embedded GPUs can replace several dedicated computing boards with a single powerful and less power-consuming GPU. In this paper, an inverse-kinematics-based numeric controller is proposed and realized using the GPGPU of a handheld mobile device. This work extends a desktop GPU-accelerated robotic controller presented at DAS'16, where a comparative analysis of different sequential and concurrent controllers is discussed. First, the inverse kinematics algorithm is realized sequentially on an Arduino Due microcontroller, and a field-programmable gate array (FPGA) is used for its parallel implementation. The execution speeds of these controllers are compared with two different GPGPU architectures (Nvidia Quadro K2200 and Nvidia Shield K1 Tablet), programmed with the Compute Unified Device Architecture (CUDA) computing language. Experimental data show that the proposed mobile-platform-based scheme outperforms the FPGA by a factor of 5 and achieves a 100x speedup over the Arduino-based sequential implementation.
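    The observation the abstract relies on is that per-target inverse-kinematics work is independent across targets and therefore maps naturally onto GPU threads. The hedged sketch below evaluates an analytic two-link planar IK over a whole batch of targets at once; on a GPU, each target would be handled by one thread. The planar arm and link lengths are illustrative assumptions, not the paper's manipulator.

```python
# Sketch: batch-evaluate closed-form 2-link planar inverse kinematics.
# On a GPGPU each target would map to one thread; here NumPy vectorization
# stands in for that data-parallel structure.
import numpy as np

def planar_2link_ik(targets, l1=0.4, l2=0.3):
    """targets: (N, 2) array of (x, y) goals. Returns (N, 2) joint angles."""
    x, y = targets[:, 0], targets[:, 1]
    d2 = x**2 + y**2
    # Elbow angle from the law of cosines, clipped for unreachable targets.
    cos_q2 = np.clip((d2 - l1**2 - l2**2) / (2 * l1 * l2), -1.0, 1.0)
    q2 = np.arccos(cos_q2)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return np.stack([q1, q2], axis=1)

# Example: solve IK for 10,000 targets in one vectorized call.
goals = np.random.uniform(-0.6, 0.6, size=(10_000, 2))
angles = planar_2link_ik(goals)
```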

    Automated sequence and motion planning for robotic spatial extrusion of 3D trusses

    While robotic spatial extrusion has demonstrated a new and efficient means to fabricate 3D truss structures at architectural scale, a major challenge remains in automatically planning the extrusion sequence and robotic motion for trusses with unconstrained topologies. This paper presents the first attempt in the field to rigorously formulate the extrusion sequence and motion planning (SAMP) problem, using a CSP encoding. Furthermore, this research proposes a new hierarchical planning framework to solve extrusion SAMP problems, which usually have a long planning horizon and 3D configuration complexity. By decoupling sequence and motion planning, the planning framework is able to efficiently solve for the extrusion sequence, end-effector poses, joint configurations, and transition trajectories for spatial trusses with nonstandard topologies. This paper also presents the first detailed computational data revealing the runtime bottleneck in solving SAMP problems, which provides insight and a comparison baseline for future algorithmic development. Together with the algorithmic results, this paper presents an open-source, modularized, and machine-agnostic software implementation called Choreo. To demonstrate the power of this algorithmic framework, three case studies, including real fabrication and simulation results, are presented. (Comment: 24 pages, 16 figures)
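    The sketch below illustrates only the sequence layer of such a decomposition: a backtracking search for an extrusion order in which every new element connects to already-built structure or the ground, with motion feasibility deferred to a lower layer. The connectivity rule and data layout are simplifying assumptions, not Choreo's actual constraint model.

```python
# Sketch: find a feasible extrusion order under a simple connectivity
# constraint (each element must touch built structure or a grounded node).
def plan_sequence(elements, grounded_nodes):
    """elements: dict id -> (node_a, node_b). Returns a print order or None."""
    built_nodes = set(grounded_nodes)
    order, remaining = [], set(elements)

    def backtrack():
        if not remaining:
            return True
        for e in sorted(remaining):
            a, b = elements[e]
            if a in built_nodes or b in built_nodes:   # connectivity constraint
                order.append(e)
                remaining.remove(e)
                added = {a, b} - built_nodes
                built_nodes.update(added)
                if backtrack():
                    return True
                # Undo the choice and try the next candidate element.
                built_nodes.difference_update(added)
                remaining.add(e)
                order.pop()
        return False

    return order if backtrack() else None
```

    In the full framework, each candidate element would additionally be checked for a collision-free end-effector pose and transition trajectory before being committed to the sequence.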

    Reducing the Barrier to Entry of Complex Robotic Software: a MoveIt! Case Study

    Developing robot-agnostic software frameworks involves synthesizing the disparate fields of robotic theory and software engineering while simultaneously accounting for a large variability in hardware designs and control paradigms. As the capabilities of robotic software frameworks increase, the setup difficulty and learning curve for new users also increase. If the entry barriers for configuring and using the software on robots are too high, even the most powerful frameworks are useless. A growing need exists in robotic software engineering to aid users in getting started with, and customizing, a software framework as necessary for particular robotic applications. In this paper, a case study is presented of the best practices found for lowering the barrier to entry in the MoveIt! framework, an open-source tool for mobile manipulation in ROS, that allows users to 1) quickly get basic motion planning functionality with minimal initial setup, 2) automate its configuration and optimization, and 3) easily customize its components. A graphical interface that assists the user in configuring MoveIt! is the cornerstone of our approach, coupled with the use of an existing standardized robot model for input, automatically generated robot-specific configuration files, and a plugin-based architecture for extensibility. These best practices are summarized into a set of barrier-to-entry design principles applicable to other robotic software. The approaches for lowering the entry barrier are evaluated by usage statistics and a user survey, and compared against our design objectives for their effectiveness to users.
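    As a hedged illustration of the "minimal initial setup" workflow described above, the snippet below uses the MoveIt! Python commander interface to plan and execute a motion once a robot-specific configuration package has been generated. The planning-group name "manipulator" and the named pose "home" are assumptions; both depend on the generated configuration.

```python
# Sketch: after the Setup Assistant has produced a configuration package,
# a few lines of moveit_commander suffice for basic motion planning.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("moveit_quickstart", anonymous=True)

# "manipulator" is an assumed planning-group name from the generated config.
group = moveit_commander.MoveGroupCommander("manipulator")
group.set_named_target("home")       # assumed named pose stored in the SRDF
success = group.go(wait=True)        # plan and execute in one call
group.stop()
moveit_commander.roscpp_shutdown()
```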

    Differentiable Robot Neural Distance Function for Adaptive Grasp Synthesis on a Unified Robotic Arm-Hand System

    Grasping is a fundamental skill for robots to interact with their environment. While grasp execution requires coordinated movement of the hand and arm to achieve a collision-free and secure grip, many grasp synthesis studies address arm and hand motion planning independently, leading to potentially unreachable grasps in practical settings. The challenge of determining integrated arm-hand configurations arises from its computational complexity and high-dimensional nature. We address this challenge by presenting a novel differentiable robot neural distance function. Our approach excels in capturing intricate geometry across various joint configurations while preserving differentiability. This representation proves instrumental in efficiently addressing downstream tasks with stringent contact constraints. Leveraging this, we introduce an adaptive grasp synthesis framework that exploits the full potential of the unified arm-hand system for diverse grasping tasks. Our neural joint-space distance function achieves an 84.7% error reduction compared to baseline methods. We validated our approach on a unified robotic arm-hand system consisting of a 7-DoF robot arm and a 16-DoF multi-fingered robotic hand. Results demonstrate that our approach empowers this high-DoF system to generate and execute various arm-hand grasp configurations that adapt to the size of the target objects while ensuring that whole-body movements are collision-free. (Comment: Under review)
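    A minimal sketch of the general idea of a differentiable joint-space distance function follows: a small network maps a joint configuration and a 3D query point to a distance value whose gradients can drive contact and collision terms in grasp optimization. The architecture, sizes, and (omitted) training are illustrative assumptions, not the paper's model.

```python
# Sketch: a learned, differentiable map (q, p) -> distance to the robot surface.
import torch
import torch.nn as nn

class NeuralDistanceFunction(nn.Module):
    def __init__(self, n_joints=23, hidden=256):   # 7-DoF arm + 16-DoF hand
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q, p):
        # q: (B, n_joints) joint configuration, p: (B, 3) query point.
        return self.net(torch.cat([q, p], dim=-1)).squeeze(-1)

ndf = NeuralDistanceFunction()
q = torch.zeros(1, 23, requires_grad=True)
p = torch.tensor([[0.3, 0.0, 0.5]])
dist = ndf(q, p)
dist.backward()   # d(distance)/d(q): usable as a differentiable constraint term
print(dist.item(), q.grad.shape)
```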

    A Rapidly Reconfigurable Robotics Workcell and Its Applications for Tissue Engineering

    This article describes the development of a robot system, built from component-based technology, that can be rapidly configured to perform a specific manufacturing task. The system is conceived with standard and interoperable components, including actuator modules, rigid link connectors, and tools, that can be assembled into robots with arbitrary geometry and degrees of freedom. Reconfigurable "plug-and-play" robot kinematic and dynamic modeling algorithms are developed; these algorithms are the basis for the control and simulation of reconfigurable robots. The concept of robot configuration optimization is introduced for the effective use of rapidly reconfigurable robots. Control and communication among the workcell components are facilitated by a workcell-wide TCP/IP network and device-level CAN-bus networks. Object-oriented simulation and visualization software for the reconfigurable robot is developed on Windows NT. Prototypes of robot systems configured to perform a 3D contour-following task and a positioning task are constructed and demonstrated. Applications of such systems for biomedical tissue scaffold fabrication are considered. (Singapore-MIT Alliance (SMA))
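    The sketch below illustrates the "plug-and-play" kinematic-modeling idea: each module contributes a fixed mounting transform plus a joint transform, and forward kinematics is composed by chaining whatever modules happen to be assembled. The module parameters are illustrative assumptions, not the workcell's actual module library.

```python
# Sketch: compose forward kinematics for an arbitrary chain of actuator modules.
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def translate(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def forward_kinematics(modules, joint_angles):
    """modules: list of fixed 4x4 mounting transforms, one per actuator module."""
    T = np.eye(4)
    for mount, q in zip(modules, joint_angles):
        T = T @ mount @ rot_z(q)   # fixed offset, then the module's revolute joint
    return T

# Example: a 3-module chain assembled from identical 0.25 m link connectors.
modules = [translate(0, 0, 0.25) for _ in range(3)]
print(forward_kinematics(modules, [0.0, np.pi / 4, -np.pi / 6]))
```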

    Automating Robot Planning Using Product and Manufacturing Information

    Advances in sensing, modeling, and control have made it possible to increase the accuracy of robots and enable them to perform in dynamic environments. Often, performance deficiencies are not evident until late in the development of the manufacturing process, which delays the start of production and may cause damage to parts that have already undergone costly manufacturing steps. The goal of this research is to determine whether a robot can meet manufacturing requirements, how to optimally plan robot activities, and how to monitor robot processes to track performance. To achieve this, representations of product and manufacturing information and of robot capabilities should be carried through the design, process planning, production, and analysis phases. Standards for the exchange of this information have been developed, such as ISO 10303 Part 242 for semantic product and manufacturing information and device kinematics, and the Robot Operating System Industrial specification for robot modeling, path planning, and execution. This paper surveys the relevant technologies and standards needed to enable automated deployment of robots in new application areas.

    Unified Modeling of Unconventional Modular and Reconfigurable Manipulation System

    Customization of manipulator configurations using modularity and reconfigurability is receiving much attention. Modules presented so far in the literature deal with conventional, standard configurations. This paper presents 3D-printable, lightweight, and unconventional modules, MOIRs' Mark-2, to develop any custom 'n'-Degrees-of-Freedom (DoF) serial manipulator, including configurations with non-parallel and non-perpendicular joints. These unconventional modular designs seek an easily adaptable solution for both modular assembly and software interfaces for automatic modeling and control. A strategy for assembling the modules and for automatic, unified modeling of modular and reconfigurable manipulators with unconventional parameters is proposed using the four proposed modular units. A reconfigurable software architecture is presented for the automatic generation of kinematic and dynamic models and configuration files, through which a designer can design, validate using visualization, plan, and execute the motion of the developed configuration as required. The framework is based on the open-source Robot Operating System (ROS) platform, which acts as a digital twin for the modular configurations. For the experimental demonstration, a 3D-printed modular library is developed and an unconventional configuration is assembled using the proposed modules, followed by automatic modeling and control, for a single cell of a vertical farm setup.
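    A hedged sketch of the automatic model-generation step follows: given an ordered list of module descriptions (joint axis and fixed offset), emit a URDF fragment that ROS tools can load for visualization, planning, and control. The tag layout and module fields are illustrative assumptions, not the MOIRs' Mark-2 format.

```python
# Sketch: generate a URDF fragment from an ordered list of module descriptions.
def module_to_urdf(i, axis, xyz):
    return f"""
  <link name="link_{i}"/>
  <joint name="joint_{i}" type="revolute">
    <parent link="link_{i - 1}"/>
    <child link="link_{i}"/>
    <origin xyz="{xyz[0]} {xyz[1]} {xyz[2]}" rpy="0 0 0"/>
    <axis xyz="{axis[0]} {axis[1]} {axis[2]}"/>
    <limit lower="-3.14" upper="3.14" effort="10" velocity="1.0"/>
  </joint>"""

def build_urdf(modules, name="modular_arm"):
    body = "".join(module_to_urdf(i + 1, m["axis"], m["xyz"])
                   for i, m in enumerate(modules))
    return f'<robot name="{name}">\n  <link name="link_0"/>{body}\n</robot>'

# Example: a 2-DoF chain whose second joint axis is deliberately non-perpendicular.
arm = [{"axis": (0, 0, 1), "xyz": (0, 0, 0.20)},
       {"axis": (0, 0.5, 0.866), "xyz": (0, 0, 0.25)}]
print(build_urdf(arm))
```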