42 research outputs found
A High-Rate, Heterogeneous Data Set from the DARPA Urban Challenge
This paper describes a data set collected by MIT’s autonomous vehicle Talos during the 2007 DARPA Urban Challenge. Data from a high-precision navigation system, five cameras, 12 SICK planar laser range scanners, and a Velodyne high-density laser range scanner were synchronized and logged to disk for 90 km of travel. In addition to documenting a number of large loop closures useful for developing mapping and localization algorithms, this data set also records the first robotic traffic jam and two autonomous vehicle collisions. It is our hope that this data set will be useful to the autonomous vehicle community, especially those developing robotic perception capabilities. United States. Defense Advanced Research Projects Agency (Urban Challenge, ARPA Order No. W369/00, Program Code DIRO, issued by DARPA/CMO under Contract No. HR0011-06-C-0149).
Continuous Humanoid Locomotion over Uneven Terrain using Stereo Fusion
Abstract — For humanoid robots to fulfill their mobility potential they must demonstrate reliable and efficient locomotion over rugged and irregular terrain. In this paper we present the perception and planning algorithms which have allowed a humanoid robot to use only passive stereo imagery (as opposed to actuating a laser range sensor) to safely plan footsteps to continuously walk over rough and uneven surfaces without stopping. The perception system continuously integrates stereo imagery to build a consistent 3D model of the terrain, which is then used by our footstep planner; the planner reasons about obstacle avoidance, kinematic reachability, and foot rotation through mixed-integer quadratic optimization to plan the required step positions. We illustrate that our stereo imagery fusion approach can measure the walking terrain with sufficient accuracy that it matches the quality of terrain estimates from LIDAR. To our knowledge this is the first such demonstration of the use of computer vision to carry out general purpose terrain estimation on a locomoting robot — and additionally to do so in continuous motion. A particular integration challenge was ensuring that these two computationally intensive systems operate with minimal latency (below 1 second) to allow re-planning while walking. The results of extensive experimentation and quantitative analysis are also presented. Our results indicate that a laser range sensor is not necessary to achieve locomotion in these challenging situations.
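The terrain-model integration described above can be sketched minimally as a 2D height map that blends successive stereo point clouds. This is a hypothetical illustration, not the paper's implementation; the grid resolution `CELL` and blending weight `ALPHA` are assumed values.

```python
import numpy as np

# Hypothetical sketch of height-map fusion: each new stereo point cloud
# is binned into a 2D grid and blended into the running terrain estimate
# with a simple exponential filter. CELL and ALPHA are assumed constants.

CELL = 0.05   # grid resolution in metres (assumed)
ALPHA = 0.3   # blending weight for new measurements (assumed)

def fuse_cloud(height_map, counts, points, origin):
    """Blend one point cloud (N x 3, map frame) into the height map."""
    ij = np.floor((points[:, :2] - origin) / CELL).astype(int)
    h, w = height_map.shape
    ok = (ij[:, 0] >= 0) & (ij[:, 0] < h) & (ij[:, 1] >= 0) & (ij[:, 1] < w)
    for (i, j), z in zip(ij[ok], points[ok, 2]):
        if counts[i, j] == 0:
            height_map[i, j] = z   # first observation of this cell
        else:
            height_map[i, j] = (1 - ALPHA) * height_map[i, j] + ALPHA * z
        counts[i, j] += 1
    return height_map, counts
```

A footstep planner would then query this grid for cell heights and slopes when scoring candidate step positions.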
An Architecture for Online Affordance-based Perception and Whole-body Planning
The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons we learned while developing our system on a highly compressed schedule.
Team MIT Urban Challenge Technical Report
This technical report describes Team MIT's approach to the DARPA Urban Challenge. We have developed a novel strategy for using many inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprised of an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. These innovations are being incorporated in two new robotic vehicles equipped for autonomous driving in urban environments, with extensive testing on a DARPA site visit course. Experimental results demonstrate all basic navigation and some basic traffic behaviors, including unoccupied autonomous driving, lane following using pure-pursuit control and our local frame perception strategy, obstacle avoidance using kino-dynamic RRT path planning, U-turns, and precedence evaluation amongst other cars at intersections using our situational interpreter. We are working to extend these approaches to advanced navigation and traffic scenarios.
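The pure-pursuit lane following mentioned above has a compact textbook form: steer along the circular arc that passes through the vehicle and a lookahead point on the path. The sketch below is that textbook geometry under a bicycle model, not Team MIT's actual controller; the `WHEELBASE` value is an assumption.

```python
import math

# Textbook pure-pursuit steering sketch (not Team MIT's implementation).
# The lookahead point (x, y) is given in the vehicle frame, x forward and
# y to the left; the steering angle follows from the arc through both points.

WHEELBASE = 2.9   # metres; assumed vehicle parameter

def pure_pursuit_steering(lookahead_x, lookahead_y):
    ld2 = lookahead_x**2 + lookahead_y**2    # squared lookahead distance
    curvature = 2.0 * lookahead_y / ld2      # arc through origin and goal
    return math.atan(WHEELBASE * curvature)  # bicycle-model steering angle
```

A point straight ahead yields zero steering; points to the left or right yield positive or negative wheel angles respectively.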
Robust Camera Pose Recovery Using Stochastic Geometry
The objective of three-dimensional (3-D) machine vision is to infer geometric properties (shape, dimensions) and photometric attributes (color, texture, reflectance) from a set of two-dimensional images. Such vision tasks rely on accurate camera calibration, that is, estimates of the camera’s intrinsic parameters, such as focal length, principal point, and radial lens distortion, and extrinsic parameters—orientation, position, and scale relative to a fixed frame of reference. This thesis introduces methods for automatic recovery of precise extrinsic camera pose among a large set of images, assuming that accurate intrinsic parameters and rough estimates of extrinsic parameters are available. Although the motivating application is metric 3-D reconstruction of urban environments from pose-annotated hemispherical imagery, few domain-specific restrictions are required or imposed. Orientation is recovered independently of position via the detection and optimal alignment o
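The camera model the abstract refers to can be made concrete with the standard pinhole projection: a world point passes through the extrinsics (rotation R, translation t) into the camera frame, then through the intrinsic matrix K onto the image plane. The numbers below are illustrative placeholders, not calibration values from the thesis.

```python
import numpy as np

# Standard pinhole camera model: extrinsics map world -> camera frame,
# intrinsics map camera frame -> homogeneous pixel coordinates.
# K, R, t below are illustrative, not the thesis's calibration.

def project(K, R, t, X_world):
    X_cam = R @ X_world + t          # extrinsics: world -> camera frame
    u, v, w = K @ X_cam              # intrinsics: camera -> image plane
    return np.array([u / w, v / w])  # perspective division -> pixels

K = np.array([[800.0,   0.0, 320.0],   # fx, skew, principal point cx
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # identity orientation (illustrative)
t = np.zeros(3)
```

Pose recovery, as in the thesis, is the inverse problem: given pixel observations and known K, estimate R and t for each camera.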
Robust camera pose recovery using stochastic geometry
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 177-187). By Matthew E. Antone. Ph.D.
Synthesis of navigable 3-D environments from human-augmented image data
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 95-96). By Matthew E. Antone. M.Eng.
Fully Automated Laser Range Calibration
We present a novel method for fully automated exterior calibration of a 2D scanning laser range sensor that attains accurate pose with respect to a fixed 3D reference frame. This task is crucial for applications that attempt to recover self-consistent 3D environment maps and produce accurately registered or fused sensor data. A key contribution of our approach lies in the design of a class of calibration target objects whose pose can be reliably recognized from a single observation (i.e. from one 2D range data stripe). Unlike other techniques, we do not require simultaneous camera views or motion of the sensor, making our approach simple, flexible and environment-independent. In this paper we illustrate the target geometry and derive the relationship between a single 2D range scan and the 3D sensor pose. We describe an algorithm for closed-form solution of the 6 DOF pose that minimizes an algebraic error metric, and an iterative refinement scheme that subsequently minimizes geometric error. Finally, we report performance and stability of our technique on synthetic and real data sets, and demonstrate accuracy within 1 degree of orientation and 3 cm of position in a realistic configuration.
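The closed-form pose step described above can be illustrated with a generic analogue: given 3D point correspondences, the least-squares rotation and translation have a closed-form solution via the Kabsch/SVD method. The paper's actual algebraic formulation over range-stripe observations differs; this sketch only conveys the flavour of solving pose in closed form before iterative geometric refinement.

```python
import numpy as np

# Generic closed-form rigid pose from 3D point correspondences
# (Kabsch/SVD), shown as an illustrative analogue of the paper's
# closed-form step -- not the paper's algorithm itself.

def fit_rigid_pose(P, Q):
    """Find R, t minimizing sum ||R @ P_i + t - Q_i||^2 (P, Q are N x 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard vs. reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp
```

An iterative refinement stage, as in the paper, would then minimize true geometric error starting from this closed-form estimate.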