CASSL: Curriculum Accelerated Self-Supervised Learning
Recent self-supervised learning approaches focus on using a few thousand data
points to learn policies for high-level, low-dimensional action spaces.
However, scaling this framework to high-dimensional control requires either
scaling up the data collection effort or using a clever sampling strategy for
training. We present a novel approach, Curriculum Accelerated Self-Supervised
Learning (CASSL), to train policies that map visual information to high-level,
higher-dimensional action spaces. CASSL orders the sampling of training data
based on control dimensions: learning and sampling focus on a few
control parameters before the others. The right curriculum for learning
is suggested by variance-based global sensitivity analysis of the control
space. We apply our CASSL framework to learning how to grasp with an adaptive,
underactuated multi-fingered gripper, a challenging system to control. Our
experimental results indicate that CASSL provides significant improvement and
generalization compared to baseline methods such as staged curriculum learning
(8% increase) and complete end-to-end learning with random exploration (14%
improvement), tested on a set of novel objects.
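The curriculum above is chosen by variance-based global sensitivity analysis: control dimensions whose first-order sensitivity index Var(E[f|x_i]) / Var(f) is largest are learned first. The abstract does not specify the estimator, so the sketch below is a minimal, hypothetical illustration of first-order index estimation by binning Monte Carlo samples; the `grasp_outcome` function and all parameters are invented for the example, not taken from CASSL:

```python
import numpy as np

def first_order_sobol(f, dim, n_samples=20000, n_bins=32, seed=0):
    """Estimate first-order sensitivity indices S_i = Var(E[f|x_i]) / Var(f)
    by binning Monte Carlo samples along each input dimension.
    Inputs are assumed uniform on [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, dim))
    y = f(x)
    total_var = y.var()
    indices = []
    for i in range(dim):
        bins = np.minimum((x[:, i] * n_bins).astype(int), n_bins - 1)
        cond_means = np.array([y[bins == b].mean() for b in range(n_bins)])
        counts = np.array([(bins == b).sum() for b in range(n_bins)])
        mu = np.average(cond_means, weights=counts)
        # variance of the per-bin conditional means approximates Var(E[f|x_i])
        var_cond = np.average((cond_means - mu) ** 2, weights=counts)
        indices.append(var_cond / total_var)
    return np.array(indices)

# Hypothetical 3-D control space: dimension 0 dominates the outcome,
# dimension 1 matters less, dimension 2 barely matters. A curriculum
# would then schedule sampling in the order 0, 1, 2.
def grasp_outcome(x):
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2]

s = first_order_sobol(grasp_outcome, dim=3)
curriculum_order = np.argsort(-s)  # most sensitive dimension first
```

The sorted indices give the ordering in which the control dimensions would be unlocked during self-supervised data collection.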
CabiNet: Scaling Neural Collision Detection for Object Rearrangement with Procedural Scene Generation
We address the important problem of generalizing robotic rearrangement to
clutter without any explicit object models. We first generate over 650K
cluttered scenes - orders of magnitude more than prior work - in diverse
everyday environments, such as cabinets and shelves. We render synthetic
partial point clouds from these scenes and use them to train our CabiNet model
architecture. CabiNet is a collision model that accepts object and scene point
clouds, captured from a single-view depth observation, and predicts collisions
for SE(3) object poses in the scene. Our representation has a fast inference
speed of 7 microseconds per query with nearly 20% higher performance than
baseline approaches in challenging environments. We use this collision model in
conjunction with a Model Predictive Path Integral (MPPI) planner to generate
collision-free trajectories for picking and placing in clutter. CabiNet also
predicts waypoints, computed from the scene's signed distance field (SDF), that
allow the robot to navigate tight spaces during rearrangement. This improves
rearrangement performance by nearly 35% compared to baselines. We
systematically evaluate our approach, procedurally generate simulated
experiments, and demonstrate that our approach directly transfers to the real
world, despite training exclusively in simulation. Robot experiment demos with
completely unknown scenes and objects can be found at
https://cabinet-object-rearrangement.github.i
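The MPPI planner mentioned above follows a standard recipe: sample noisy perturbations of a nominal control sequence, roll each one out through the dynamics, score it with a cost that includes the collision query, and take a softmin-weighted average. The sketch below is a generic, minimal MPPI update on a toy 2-D point robot; the hand-written obstacle penalty stands in for the learned CabiNet collision model, and all dynamics, costs, and parameters are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def mppi_step(state, nominal, dynamics, cost, n_rollouts=256,
              noise_std=0.2, temperature=1.0, seed=0):
    """One MPPI update: perturb the nominal control sequence, roll each
    perturbed sequence out through the dynamics, then return the
    softmin-weighted average of the perturbed sequences."""
    rng = np.random.default_rng(seed)
    horizon, u_dim = nominal.shape
    noise = rng.normal(0.0, noise_std, size=(n_rollouts, horizon, u_dim))
    controls = nominal[None] + noise
    costs = np.zeros(n_rollouts)
    for k in range(n_rollouts):
        x = state.copy()
        for t in range(horizon):
            x = dynamics(x, controls[k, t])
            costs[k] += cost(x)
    # lower cost -> exponentially higher weight
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    return nominal + np.einsum("k,kti->ti", weights, noise)

# Toy stand-in problem: a 2-D point robot must reach a goal while a large
# penalty (a stand-in for a learned collision query) keeps it out of a
# circular obstacle.
goal = np.array([1.0, 0.0])
obstacle_center, obstacle_radius = np.array([0.5, 0.0]), 0.2

def dynamics(x, u):
    return x + 0.05 * u  # simple integrator

def cost(x):
    c = np.linalg.norm(x - goal)
    if np.linalg.norm(x - obstacle_center) < obstacle_radius:
        c += 100.0  # "in collision" penalty
    return c

plan = np.zeros((20, 2))
for it in range(5):
    plan = mppi_step(np.zeros(2), plan, dynamics, cost, seed=it)
```

Each call refines the nominal sequence; in a receding-horizon loop the first control would be executed and the rest warm-start the next step.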
Quantification of LIP-mediated cell engulfment.
<p>A, Panel a shows the percent of Ad-LIP MDA-MB-468 cells that are double positive for GFP and CellTracker label. Panel b shows the percent of control Ad-GFP MDA-MB-468 cells that are double positive for GFP and CellTracker label. B, Quantification of the percent of GFP-positive cells that have engulfed uninfected CellTracker-labeled cells. Results are shown for 16 different experiments, with p < 0.0001 by paired t-test and Wilcoxon matched-pairs test. C, Quantification of the percent of GFP-positive cells that have engulfed uninfected CellTracker-labeled cells after treatment with 40 µM of the ROCK inhibitor Y-27632. Results are shown for 5 different experiments, with p < 0.0001 by ANOVA followed by the Student-Newman-Keuls multiple comparisons test.</p>
Learning Accurate Kinematic Control of Cable-Driven Surgical Robots Using Data Cleaning and Gaussian Process Regression
Abstract — Precise control of industrial automation systems with non-linear kinematics due to joint elasticity, variation in cable tensioning, or backlash is challenging, especially in systems that can only be controlled through an interface with an imprecise internal kinematic model. Cable-driven Robotic Surgical Assistants (RSAs) are one example of such an automation system, as they are designed for master-slave teleoperation. We consider the problem of learning a function to modify commands to the inaccurate control interface such that executing the modified command on the system results in a desired state. To achieve this, we must learn a mapping that accounts for the non-linearities in the kinematic chain that are not captured by the system's internal model. Gaussian Process Regression (GPR) is a data-driven technique that can estimate this non-linear correction in a task-specific region of state space, but it is sensitive to corruption of training examples due to partial occlusion or lighting changes. In this paper, we extend the use of GPR to learn a non-linear correction for cable-driven surgical robots by i) using velocity as a feature in the regression and ii) removing corrupted training observations based on rotation limits and the magnitude of velocity. We evaluate this approach on the Raven II Surgical Robot on the task of grasping foam "damaged tissue" fragments, using the PhaseSpace LED-based motion capture system to track the Raven end-effector. Our main result is a reduction in the norm of the mean position error from 2.6 cm to 0.2 cm and the norm of the mean angular error from 20.6 degrees to 2.8 degrees when correcting commands for a set of held-out trajectories. We also use the learned mapping to achieve a 3.8× speedup over past results on the task of autonomous surgical debridement. Further information on this research, including data, code, photos, and video, is available at