Combining Physical Simulators and Object-Based Networks for Control
Physics engines play an important role in robot planning and control;
however, many real-world control problems involve complex contact dynamics that
cannot be characterized analytically. Most physics engines therefore employ
approximations that lead to a loss in precision. In this paper, we propose a
hybrid dynamics model, simulator-augmented interaction networks (SAIN),
combining a physics engine with an object-based neural network for dynamics
modeling. Compared with existing models that are purely analytical or purely
data-driven, our hybrid model captures the dynamics of interacting objects in a
more accurate and data-efficient manner. Experiments both in simulation and on a
real robot suggest that it also leads to better performance when used in
complex control tasks. Finally, we show that our model generalizes to novel
environments with varying object shapes and materials.
Comment: ICRA 2019; Project page: http://sain.csail.mit.ed
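The hybrid idea described in this abstract, an analytic simulator whose prediction error is corrected by a learned model, can be sketched in miniature. The toy simulator, the unmodeled drag term, and the linear least-squares corrector below are illustrative stand-ins, not SAIN's actual physics engine or object-based interaction network:

```python
import numpy as np

# Toy "physics engine": frictionless kinematics. It ignores drag,
# which plays the role of the contact effects a real engine approximates away.
def simulator_step(state, dt=0.1):
    pos, vel = state
    return np.array([pos + vel * dt, vel])

# Ground-truth dynamics include a linear drag term the simulator lacks.
def true_step(state, dt=0.1, drag=0.5):
    pos, vel = state
    return np.array([pos + vel * dt, vel - drag * vel * dt])

# Fit a linear residual corrector: residual ~= state @ W.
# (SAIN learns this residual with an interaction network instead.)
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(200, 2))
residuals = np.array([true_step(s) - simulator_step(s) for s in states])
W, *_ = np.linalg.lstsq(states, residuals, rcond=None)

# Hybrid model: analytic prediction plus learned correction.
def hybrid_step(state):
    return simulator_step(state) + state @ W

s = np.array([0.0, 1.0])
err_sim = np.linalg.norm(simulator_step(s) - true_step(s))
err_hyb = np.linalg.norm(hybrid_step(s) - true_step(s))
```

Because the residual here is exactly linear in the state, the fitted corrector nearly eliminates the simulator's error (`err_hyb` is far below `err_sim`), which mirrors the paper's claim that the hybrid model is more accurate than the purely analytic one.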
Assistive robotics: research challenges and ethics education initiatives
Assistive robotics is a fast-growing field aimed at helping healthcare workers in hospitals, rehabilitation centers and nursing homes, as well as at empowering people with reduced mobility at home, so that they can autonomously carry out their activities of daily living. The need to function in dynamic human-centered environments poses new research challenges: robotic assistants need to have friendly interfaces, be highly adaptable and customizable, be compliant and intrinsically safe around people, and be able to handle deformable materials.
Besides technical challenges, assistive robotics also raises ethical challenges, which have led to the emergence of a new discipline: Roboethics. Several institutions are developing regulations and standards, and many ethics education initiatives include content on human-robot interaction and human dignity in assistive situations.
In this paper, the state of the art in assistive robotics is briefly reviewed, and educational materials from a university course on Ethics in Social Robotics and AI, focusing on the assistive context, are presented.
Peer Reviewed. Postprint (author's final draft)
TossingBot: Learning to Throw Arbitrary Objects with Residual Physics
We investigate whether a robot arm can learn to pick and throw arbitrary
objects into selected boxes quickly and accurately. Throwing has the potential
to increase the physical reachability and picking speed of a robot arm.
However, precisely throwing arbitrary objects in unstructured settings presents
many challenges: from acquiring reliable pre-throw conditions (e.g. initial
pose of object in manipulator) to handling varying object-centric properties
(e.g. mass distribution, friction, shape) and dynamics (e.g. aerodynamics). In
this work, we propose an end-to-end formulation that jointly learns to infer
control parameters for grasping and throwing motion primitives from visual
observations (images of arbitrary objects in a bin) through trial and error.
Within this formulation, we investigate the synergies between grasping and
throwing (i.e., learning grasps that enable more accurate throws) and between
simulation and deep learning (i.e., using deep networks to predict residuals on
top of control parameters predicted by a physics simulator). The resulting
system, TossingBot, is able to grasp and throw arbitrary objects into boxes
located outside its maximum reach range at 500+ mean picks per hour (600+
grasps per hour with 85% throwing accuracy); and generalizes to new objects and
target locations. Videos are available at https://tossingbot.cs.princeton.edu
Comment: Summary Video: https://youtu.be/f5Zn2Up2RjQ Project webpage:
https://tossingbot.cs.princeton.ed
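The "residual physics" idea in this abstract, where a simple analytic model proposes control parameters and a learned network adds a correction, can be sketched as follows. The ballistics formula is standard projectile motion; the residual function is a hypothetical stand-in for TossingBot's deep network, not the paper's actual model:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

# Analytic physics: release speed for a 45-degree throw to land at range d,
# from the projectile range formula R = v^2 / g (aerodynamics ignored,
# which is exactly what the learned residual must absorb).
def ballistic_release_speed(target_distance):
    return math.sqrt(G * target_distance)

# Hypothetical learned residual: a fixed per-distance correction standing in
# for the network's predicted velocity delta (e.g. compensating for drag).
def residual_correction(target_distance, drag_coeff=0.05):
    return drag_coeff * target_distance

# Final control parameter = physics prediction + learned residual.
def throw_speed(target_distance):
    return ballistic_release_speed(target_distance) + residual_correction(target_distance)
```

The design point is that the network only has to learn the small gap between idealized physics and reality, rather than the full throwing dynamics from scratch, which is what makes the approach data-efficient and able to generalize to new objects and target locations.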