Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally, for the approaches dealing with unknown
objects, the core part is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations.
Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics.
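To make the sample-and-rank pipeline concrete, here is a minimal sketch in Python. The candidate representation, the top-down approach prior, and the centroid-distance score are illustrative assumptions, not any particular method from the survey.

```python
# Minimal sketch of the sample-and-rank pipeline common to data-driven
# grasp synthesis. All names and the scoring heuristic are hypothetical
# illustrations, not a specific method from the survey.
from dataclasses import dataclass
import numpy as np

@dataclass
class GraspCandidate:
    position: np.ndarray   # 3D approach point on the object surface
    approach: np.ndarray   # unit approach direction of the gripper
    width: float           # gripper opening

def sample_candidates(point_cloud: np.ndarray, n: int = 100) -> list[GraspCandidate]:
    """Sample naive candidates from an (N, 3) point cloud."""
    rng = np.random.default_rng(0)
    idx = rng.choice(len(point_cloud), size=n)
    return [
        GraspCandidate(position=point_cloud[i],
                       approach=np.array([0.0, 0.0, -1.0]),  # top-down prior
                       width=0.05)
        for i in idx
    ]

def score_grasp(g: GraspCandidate, point_cloud: np.ndarray) -> float:
    """Stand-in for a learned ranking function: prefer grasps near the
    object's centroid, a crude proxy for stability."""
    centroid = point_cloud.mean(axis=0)
    return -float(np.linalg.norm(g.position - centroid))

def best_grasp(point_cloud: np.ndarray) -> GraspCandidate:
    candidates = sample_candidates(point_cloud)
    return max(candidates, key=lambda g: score_grasp(g, point_cloud))
```

In a real system the heuristic score would be replaced by a learned ranking model trained on grasp outcomes; the surrounding sampling-and-selection structure stays the same.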
Learning to Fly by Crashing
How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid
obstacles? One approach is to use a small dataset collected by human experts;
however, high-capacity learning algorithms tend to overfit when trained with
little data. An alternative is to use simulation. But the gap between
simulation and real world remains large especially for perception problems. The
reason most research avoids using large-scale real data is the fear of crashes!
In this paper, we propose to bite the bullet and collect a crash dataset
ourselves! We build a drone whose sole purpose is to crash into objects: it
samples naive trajectories and crashes into random objects. We crash our drone
11,500 times to create one of the biggest UAV crash datasets. This dataset
captures the different ways in which a UAV can crash. We use all this negative
flying data in conjunction with positive data sampled from the same
trajectories to learn a simple yet powerful policy for UAV navigation. We show
that this simple self-supervised model is quite effective in navigating the UAV
even in extremely cluttered environments with dynamic obstacles including
humans. For supplementary video see: https://youtu.be/u151hJaGKU
Vision based motion control for a humanoid head
This paper describes the design of a motion control algorithm for a humanoid robotic head, which consists of a neck with four degrees of freedom and two eyes (a stereo pair system) that tilt on a common axis and rotate sideways freely. The kinematic and dynamic properties of the head are analyzed and modeled using screw theory. The motion control algorithm is designed to receive, as input, the output of a vision processing algorithm and to exploit the redundancy of the system for the realization of the movements. This algorithm enables the head to focus on and follow a target, showing human-like motions. The performance of the control algorithm was first evaluated in a simulated environment; the algorithm was then experimentally applied to the real humanoid head.
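The redundancy exploitation described above is commonly realized through the Jacobian pseudoinverse with a null-space projection; the sketch below shows this generic scheme, assuming a task Jacobian J is available. It is not the paper's screw-theory-derived controller, and the posture gain is a placeholder.

```python
# Minimal sketch of redundancy resolution for a kinematically redundant
# head: track a camera-space task velocity via the Jacobian pseudoinverse
# and use the null space for a secondary posture objective. The Jacobian
# and gain are placeholders, not the model derived in the paper.
import numpy as np

def redundant_joint_velocities(J: np.ndarray, x_dot: np.ndarray,
                               q: np.ndarray, q_rest: np.ndarray,
                               k_posture: float = 1.0) -> np.ndarray:
    """J: (m, n) task Jacobian with n > m; x_dot: desired task velocity."""
    J_pinv = np.linalg.pinv(J)
    # Secondary objective: pull joints toward a comfortable rest posture.
    q_dot_secondary = -k_posture * (q - q_rest)
    # Project the secondary motion into the null space of the primary task,
    # so gaze tracking is unaffected by the posture motion.
    N = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ x_dot + N @ q_dot_secondary
```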
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
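As one concrete example of the passive techniques reviewed, a rectified stereo laparoscope pair can be reconstructed with off-the-shelf semi-global block matching. The sketch below uses OpenCV; the matcher parameters are generic defaults, not values tuned for endoscopic imagery, and the reprojection matrix Q is assumed to come from stereo calibration.

```python
# Illustrative stereo reconstruction of a tissue surface with OpenCV.
# Assumes a rectified grayscale stereo pair and a reprojection matrix Q
# from calibration; parameters are generic, not tuned for endoscopy.
import cv2
import numpy as np

def reconstruct_surface(left_gray: np.ndarray, right_gray: np.ndarray,
                        Q: np.ndarray) -> np.ndarray:
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,
                                    blockSize=5)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # (H, W, 3) in calib units
    mask = disparity > 0                               # keep valid matches only
    return points_3d[mask]
```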
FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation
FlightGoggles is a photorealistic sensor simulator for perception-driven
robotic vehicles. The key contributions of FlightGoggles are twofold. First,
FlightGoggles provides photorealistic exteroceptive sensor simulation using
graphics assets generated with photogrammetry. Second, it provides the ability
to combine (i) synthetic exteroceptive measurements generated in silico in real
time and (ii) vehicle dynamics and proprioceptive measurements generated in
motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of
simulating a virtual-reality environment around autonomous vehicle(s). While a
vehicle is in flight in the FlightGoggles virtual reality environment,
exteroceptive sensors are rendered synthetically in real time while all complex
extrinsic dynamics are generated organically through the natural interactions
of the vehicle. The FlightGoggles framework allows for researchers to
accelerate development by circumventing the need to estimate complex and
hard-to-model interactions such as aerodynamics, motor mechanics, battery
electrochemistry, and behavior of other agents. The ability to perform
vehicle-in-the-loop experiments with photorealistic exteroceptive sensor
simulation facilitates novel research directions involving, e.g., fast and
agile autonomous flight in obstacle-rich environments, safe human interaction,
and flexible sensor selection. FlightGoggles has been utilized as the main test
for selecting nine teams that will advance in the AlphaPilot autonomous drone
racing challenge. We survey approaches and results from the top AlphaPilot
teams, which may be of independent interest.
Comment: Initial version appeared at IROS 2019. Supplementary material can be
found at https://flightgoggles.mit.edu. Revision includes description of new
FlightGoggles features, such as a photogrammetric model of the MIT Stata
Center, new rendering settings, and a Python API.
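The vehicle-in-the-loop concept can be summarized as a closed loop in which real motion-capture pose drives synthetic rendering. The sketch below is purely conceptual: every object and method name (mocap, renderer, and so on) is a hypothetical stand-in, not the actual FlightGoggles interface.

```python
# Conceptual sketch of vehicle-in-the-loop simulation: real pose from a
# motion-capture system drives a photorealistic renderer that returns
# synthetic camera images. All objects and methods here are hypothetical
# stand-ins, not the actual FlightGoggles API.
import time

def vehicle_in_the_loop_step(mocap, renderer, perception, controller, vehicle):
    pose = mocap.get_pose()                  # real dynamics, measured externally
    image = renderer.render(pose)            # synthetic exteroceptive sensing
    command = controller(perception(image))  # perception-driven control
    vehicle.send_command(command)            # actuates the real vehicle

def run(mocap, renderer, perception, controller, vehicle, rate_hz=60.0):
    period = 1.0 / rate_hz
    while True:
        start = time.monotonic()
        vehicle_in_the_loop_step(mocap, renderer, perception, controller, vehicle)
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```

The appeal of this arrangement is that aerodynamics, motor mechanics, and battery behavior never have to be modeled: they are supplied by the physical vehicle, while only the camera images are synthetic.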
- …