Reducing the Barrier to Entry of Complex Robotic Software: a MoveIt! Case Study
Developing robot agnostic software frameworks involves synthesizing the
disparate fields of robotic theory and software engineering while
simultaneously accounting for a large variability in hardware designs and
control paradigms. As the capabilities of robotic software frameworks increase,
the setup difficulty and learning curve for new users also increase. If the
entry barriers for configuring and using the software on robots are too high,
even the most powerful of frameworks are useless. A growing need exists in
robotic software engineering to aid users in getting started with, and
customizing, the software framework as necessary for particular robotic
applications. In this paper a case study is presented for the best practices
found for lowering the barrier of entry in the MoveIt! framework, an
open-source tool for mobile manipulation in ROS, that allows users to 1)
quickly get basic motion planning functionality with minimal initial setup, 2)
automate its configuration and optimization, and 3) easily customize its
components. A graphical interface that assists the user in configuring MoveIt!
is the cornerstone of our approach, coupled with the use of an existing
standardized robot model for input, automatically generated robot-specific
configuration files, and a plugin-based architecture for extensibility. These
best practices are summarized into a set of barrier-to-entry design principles
applicable to other robotic software. The approaches for lowering the entry
barrier are evaluated through usage statistics and a user survey, and compared
against our design objectives to assess their effectiveness for users.
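The plugin-based architecture for extensibility that the abstract highlights can be sketched generically. The sketch below is illustrative only: the class, decorator, and registry names are invented for this example and are not MoveIt!'s actual plugin API (which is C++/pluginlib based).

```python
# Minimal sketch of a plugin registry, assuming a string-keyed lookup of
# planner implementations chosen at runtime (e.g. from a config file).
# All names here are hypothetical, not MoveIt! identifiers.
from abc import ABC, abstractmethod

class MotionPlannerPlugin(ABC):
    """Common interface every planner plugin must implement."""
    @abstractmethod
    def plan(self, start, goal):
        ...

_PLANNER_REGISTRY = {}

def register_planner(name):
    """Decorator that registers a planner class under a string key."""
    def wrapper(cls):
        _PLANNER_REGISTRY[name] = cls
        return cls
    return wrapper

@register_planner("straight_line")
class StraightLinePlanner(MotionPlannerPlugin):
    def plan(self, start, goal):
        # Trivial planner: linearly interpolate between configurations.
        steps = 5
        return [[s + (g - s) * t / steps for s, g in zip(start, goal)]
                for t in range(steps + 1)]

def load_planner(name):
    """Instantiate whichever planner the configuration names."""
    return _PLANNER_REGISTRY[name]()

path = load_planner("straight_line").plan([0.0, 0.0], [1.0, 2.0])
```

The point of the pattern is that new planners can be added without touching the core: a new module registers itself under a new key, and the framework discovers it by name.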
Learning to Navigate Cloth using Haptics
We present a controller that allows an arm-like manipulator to navigate
deformable cloth garments in simulation through the use of haptic information.
The main challenge of such a controller is to avoid getting tangled in, tearing
or punching through the deforming cloth. Our controller aggregates force
information from a number of haptic-sensing spheres all along the manipulator
for guidance. Based on haptic forces, each individual sphere updates its target
location, and the conflicts that arise between this set of desired positions
are resolved by solving an inverse kinematics problem with constraints.
Reinforcement learning is used to train the controller for a single
haptic-sensing sphere, where a training run is terminated (and thus penalized)
when large forces are detected due to contact between the sphere and a
simplified model of the cloth. In simulation, we demonstrate successful
navigation of a robotic arm through a variety of garments, including an
isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two
baseline controllers: one without haptics and another that was trained based on
large forces between the sphere and cloth, but without early termination.
Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A.
Related publications: http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
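The two-stage scheme the abstract describes — per-sphere target updates from haptic forces, then a single IK solve that arbitrates between them — can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the retreat rule replaces the learned per-sphere policy, and a damped-least-squares step replaces the constrained IK solve; all thresholds are made up.

```python
import numpy as np

def update_targets(positions, forces, step=0.01, force_limit=1.0):
    """Each haptic-sensing sphere nudges its target away from large
    contact forces (a hand-written stand-in for the learned policy)."""
    targets = positions.copy()
    for i, f in enumerate(forces):
        if np.linalg.norm(f) > force_limit:
            targets[i] -= step * f / np.linalg.norm(f)  # retreat along force
    return targets

def resolve_conflicts(jacobian, positions, targets, damping=1e-2):
    """Arbitrate between the spheres' desired positions with one
    damped-least-squares IK step over the stacked task error."""
    dx = (targets - positions).reshape(-1)   # stacked per-sphere errors
    J = jacobian                             # shape (3 * n_spheres, n_joints)
    JJt = J @ J.T + damping * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, dx)    # joint-velocity command
```

Because every sphere contributes rows to the same stacked system, a sphere in heavy contact pulls the shared joint solution away from the cloth even when other spheres would prefer to keep moving.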
Adaptive Tessellation CMAC
An adaptive tessellation variant of the CMAC architecture is introduced. Adaptive tessellation is an error-based scheme for distributing input representations. Simulations show that the new network outperforms the original CMAC at a variety of learning tasks, including learning the inverse kinematics of a two-link arm.
Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100); National Science Foundation (IRI-90-00530); Boston University Presidential Graduate Fellowship
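For context, a minimal original (fixed-tessellation) CMAC looks like the sketch below: several offset tilings each activate one tile, the active weights are summed, and LMS spreads the error over the active tiles. The paper's adaptive variant would additionally redistribute tile boundaries toward high-error regions; that part is omitted here, and all sizes and rates are illustrative.

```python
import numpy as np

class CMAC:
    """Minimal fixed-tessellation CMAC for a 1-D input on [0, 1)."""
    def __init__(self, n_tilings=8, n_tiles=16, lr=0.1):
        self.n_tilings, self.n_tiles, self.lr = n_tilings, n_tiles, lr
        self.weights = np.zeros((n_tilings, n_tiles))
        # Each tiling is shifted by a different fraction of a tile width,
        # so nearby inputs share most (but not all) active tiles.
        self.offsets = np.arange(n_tilings) / (n_tilings * n_tiles)

    def _active(self, x):
        """Index of the single active tile in each tiling."""
        idx = ((x + self.offsets) * self.n_tiles).astype(int)
        return np.minimum(idx, self.n_tiles - 1)

    def predict(self, x):
        idx = self._active(x)
        return self.weights[np.arange(self.n_tilings), idx].sum()

    def train(self, x, target):
        """LMS update: spread the error equally over the active tiles."""
        idx = self._active(x)
        err = target - self.predict(x)
        self.weights[np.arange(self.n_tilings), idx] += (
            self.lr * err / self.n_tilings)

net = CMAC()
for _ in range(50):
    net.train(0.3, 1.0)
```

The coarse-coded overlap is what gives CMAC local generalization; the adaptive-tessellation idea changes where the tile boundaries fall, not this lookup-and-sum mechanism.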
Asymmetric Dual-Arm Task Execution using an Extended Relative Jacobian
Coordinated dual-arm manipulation tasks can be broadly characterized as
possessing absolute and relative motion components. Relative motion tasks, in
particular, are inherently redundant in the way they can be distributed between
end-effectors. In this work, we analyse cooperative manipulation in terms of
the asymmetric resolution of relative motion tasks. We discuss how existing
approaches enable the asymmetric execution of a relative motion task, and show
how an asymmetric relative motion space can be defined. We leverage this result
to propose an extended relative Jacobian to model the cooperative system, which
allows a user to set a concrete degree of asymmetry in the task execution. This
is achieved without the need for prescribing an absolute motion target.
Instead, the absolute motion remains available as a functional redundancy to
the system. We illustrate the properties of our proposed Jacobian through
numerical simulations of a novel differential inverse kinematics algorithm.
Comment: Accepted for presentation at ISRR19. 16 pages.
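The construction can be sketched numerically. The standard relative Jacobian stacks the two arm Jacobians so that the relative task velocity is x_rel_dot = J_rel q_dot with q = [q1; q2]; the asymmetry-weighted variant below is one plausible illustrative form, not necessarily the paper's exact definition of the extended relative Jacobian.

```python
import numpy as np

def relative_jacobian(j1, j2):
    """Standard relative Jacobian for a dual-arm system: the relative
    end-effector velocity seen from arm 1's frame (frames simplified)."""
    return np.hstack([-j1, j2])

def extended_relative_jacobian(j1, j2, alpha):
    """Asymmetry-weighted variant (illustrative form): alpha in [0, 1]
    sets how the relative motion is distributed between the arms;
    alpha = 0.5 recovers the symmetric relative Jacobian."""
    return np.hstack([-2.0 * (1.0 - alpha) * j1, 2.0 * alpha * j2])

def dls_ik_step(J, task_err, damping=1e-3):
    """Differential IK: one damped-least-squares joint-velocity step."""
    JJt = J @ J.T + damping * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, task_err)
```

With alpha = 1 the columns belonging to arm 1 vanish, so the IK step leaves arm 1 untouched and arm 2 performs the whole relative task; intermediate values split the task between the arms, while the absolute motion is never commanded and stays available as redundancy.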
From virtual demonstration to real-world manipulation using LSTM and MDN
Robots assisting the disabled or elderly must perform complex manipulation
tasks and must adapt to the home environment and preferences of their user.
Learning from demonstration is a promising approach that would allow the
non-technical user to teach the robot different tasks. However, collecting
demonstrations in the home environment of a disabled user is time consuming,
disruptive to the comfort of the user, and presents safety challenges. It would
be desirable to perform the demonstrations in a virtual environment. In this
paper we describe a solution to the challenging problem of behavior transfer
from virtual demonstration to a physical robot. The virtual demonstrations are
used to train a deep neural network based controller, which uses a Long
Short-Term Memory (LSTM) recurrent neural network to generate trajectories. The
training process uses a Mixture Density Network (MDN) to calculate an error
signal suitable for the multimodal nature of demonstrations. The controller
learned in the virtual environment is transferred to a physical robot (a
Rethink Robotics Baxter). An off-the-shelf vision component is used to
substitute for geometric knowledge available in the simulation and an inverse
kinematics module is used to allow the Baxter to enact the trajectory. Our
experimental studies validate the three contributions of the paper: (1) the
controller learned from virtual demonstrations can be used to successfully
perform the manipulation tasks on a physical robot, (2) the LSTM+MDN
architectural choice outperforms other choices, such as the use of feedforward
networks and mean-squared error based training signals, and (3) including
imperfect demonstrations in the training set allows the controller to
learn how to correct its manipulation mistakes.
Automated pick-up of suturing needles for robotic surgical assistance
Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate
cancer that involves complete or nerve-sparing removal of the prostate tissue
that contains cancer. After removal, the bladder neck is subsequently sutured
directly to the urethra. The procedure is called urethrovesical anastomosis
and is one of the most dexterity demanding tasks during RALP. Two suturing
instruments and a pair of needles are used in combination to perform a running
stitch during urethrovesical anastomosis. While robotic instruments provide
enhanced dexterity to perform the anastomosis, it is still highly challenging
and difficult to learn. In this paper, we present a vision-guided needle
grasping method for automatically grasping the needle that has been inserted
into the patient prior to anastomosis. We aim to automatically grasp the
suturing needle in a position that avoids hand-offs and immediately enables the
start of suturing. The full grasping process can be broken down into: a needle
detection algorithm; an approach phase where the surgical tool moves closer to
the needle based on visual feedback; and a grasping phase through path planning
based on observed surgical practice. Our experimental results show examples of
successful autonomous grasping that has the potential to simplify the procedure
and decrease the operational time in RALP by automating a small component of
urethrovesical anastomosis.
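The three-phase decomposition described in the abstract (detect, approach under visual feedback, grasp via a planned motion) is naturally expressed as a small state machine. The sketch below is a schematic stand-in: poses are reduced to 3-D positions and the thresholds and gains are invented for illustration.

```python
from enum import Enum, auto

class Phase(Enum):
    DETECT = auto()
    APPROACH = auto()
    GRASP = auto()
    DONE = auto()

def grasp_pipeline_step(phase, needle_pose, tool_pose, approach_dist=0.005):
    """Advance the pipeline by one control tick; returns (phase, tool_pose)."""
    if phase is Phase.DETECT:
        # Stand-in for the vision-based needle detection algorithm:
        # proceed once a pose estimate is available.
        next_phase = Phase.APPROACH if needle_pose is not None else Phase.DETECT
        return next_phase, tool_pose
    if phase is Phase.APPROACH:
        # Visual servoing: move the tool a fraction of the remaining error.
        err = [n - t for n, t in zip(needle_pose, tool_pose)]
        dist = sum(e * e for e in err) ** 0.5
        if dist < approach_dist:
            return Phase.GRASP, tool_pose
        return Phase.APPROACH, [t + 0.5 * e for t, e in zip(tool_pose, err)]
    if phase is Phase.GRASP:
        # The planned grasp motion (from observed surgical practice)
        # would execute here; we simply mark completion.
        return Phase.DONE, needle_pose
    return Phase.DONE, tool_pose
```

Separating the phases this way keeps each perception or planning component swappable, which matches the abstract's breakdown of the full grasping process.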
Multiple configuration shell-core structured robotic manipulator with interchangeable mechatronic joints : a thesis presented in partial fulfilment of the requirements for the degree of Masters of Engineering in Mechatronics at Massey University, Turitea Campus, Palmerston North, New Zealand
With the increase of robotic technology utilised throughout industry, the need for skilled
labour in this area has also increased. As a result, education dealing with robotics has
grown at both the high-school and tertiary educational levels. Despite the range of
pedagogical robots currently on the market, there is little variety among systems
specifically modelled on the types of robotic manipulator arm popular for industrial
applications. Furthermore, a fixed-arm system can only serve as an educational
supplement for that specific configuration and therefore cannot demonstrate more than
one of the numerous industrial-type robotic arms.
The Shell-Core structured robotic manipulator concept has been proposed to improve the
quality and variety of available pedagogical robotic arm systems on the market. This is
achieved by the reconfigurable nature of the concept, which incorporates shell and core
structural units to make the construction of at least 5 mainstream industrial arms
possible. The platform will be suitable for, but not limited to, use within the
educational robotics industry at high-school and higher educational levels, and
may also appeal to hobbyists.
Later dubbed SMILE (Smart Manipulator with Interchangeable Links and Effectors), the
system utilises core units to provide either rotational or linear actuation in a single plane.
A variety of shell units are then implemented as the body of the robotic arm, serving as
appropriate offsets to achieve the required configuration. A prototype consisting of a
limited number of ‘building blocks’ was developed as a proof of concept and was found
capable of achieving several of the proposed configurations.
The outcome of this research is encouraging, with a Massey patent search confirming the
unique features of the proposed concept. The prototype system is an economical,
easy-to-implement, plug-and-play, multiple-configuration robotic manipulator suitable
for various applications.