12 research outputs found

    A High-Rate, Heterogeneous Data Set from the Darpa Urban Challenge

    This paper describes a data set collected by MIT’s autonomous vehicle Talos during the 2007 DARPA Urban Challenge. Data from a high-precision navigation system, five cameras, 12 SICK planar laser range scanners, and a Velodyne high-density laser range scanner were synchronized and logged to disk for 90 km of travel. In addition to documenting a number of large loop closures useful for developing mapping and localization algorithms, this data set also records the first robotic traffic jam and two autonomous vehicle collisions. It is our hope that this data set will be useful to the autonomous vehicle community, especially those developing robotic perception capabilities. United States. Defense Advanced Research Projects Agency (Urban Challenge, ARPA Order No. W369/00, Program Code DIRO, issued by DARPA/CMO under Contract No. HR0011-06-C-0149).
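
    As a rough illustration of the time synchronization this data set relies on, the sketch below merges per-sensor message streams into a single time-ordered stream, as a log player might. The channel names and the (timestamp, channel, payload) message layout are hypothetical and not the actual MIT log format.

```python
import heapq

def merge_streams(*streams):
    """Merge per-sensor message streams (each already sorted by timestamp)
    into one time-ordered stream, as a log player might do.
    Each message is a (timestamp_seconds, channel, payload) tuple."""
    yield from heapq.merge(*streams, key=lambda msg: msg[0])

# Hypothetical example: interleave camera and lidar messages by time.
camera = [(0.00, "CAMERA_0", "frame0"), (0.10, "CAMERA_0", "frame1")]
lidar  = [(0.02, "VELODYNE", "scan0"),  (0.07, "VELODYNE", "scan1")]

for t, channel, payload in merge_streams(camera, lidar):
    print(f"{t:.2f}s  {channel:10s} {payload}")
```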

    Qualitative Failure Analysis for a Small Quadrotor Unmanned Aircraft System

    Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/106490/1/AIAA2013-4761.pd

    Autonomous Driving – A Top-Down Approach (Autonomes Fahren – ein Top-Down-Ansatz)

    This paper presents a functional system architecture for an “autonomous vehicle” in the sense of a modular building block system. It is developed in a top-down approach based on the definition of the functional requirements for an autonomous vehicle and explicitly combines perception-based and localization-based approaches. Both the definition and the functional system architecture consider the aspects of operation by a human, mission accomplishment, map data, localization, environmental and self-perception, as well as cooperation. The functional system architecture is developed in the context of the research project “Stadtpilot” at the Technische Universität Braunschweig.
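
    To make the idea of a modular building block system concrete, here is a minimal Python sketch of that kind of decomposition, with perception-based and localization-based inputs feeding one planning cycle. The module names and interfaces are illustrative assumptions, not the Stadtpilot architecture itself.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Pose:
    x: float
    y: float
    heading: float

class PerceptionModule(Protocol):
    def detect_drivable_space(self, sensor_data) -> list: ...

class LocalizationModule(Protocol):
    def estimate_pose(self, sensor_data, map_data) -> Pose: ...

class MissionModule(Protocol):
    def next_goal(self, pose: Pose) -> Pose: ...

def plan_step(perception: PerceptionModule,
              localization: LocalizationModule,
              mission: MissionModule,
              sensor_data, map_data):
    """One cycle that combines perception-based (drivable space) and
    localization-based (pose in a map) information with the mission goal."""
    free_space = perception.detect_drivable_space(sensor_data)
    pose = localization.estimate_pose(sensor_data, map_data)
    goal = mission.next_goal(pose)
    return free_space, pose, goal

# Concrete building blocks would implement these protocols and be swapped
# in or out without changing plan_step.
```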

    Progress toward multi‐robot reconnaissance and the MAGIC 2010 competition

    Tasks like search‐and‐rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges, including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human‐robot interfaces. This paper describes our 14‐robot team, which won the MAGIC 2010 competition and was designed to perform urban reconnaissance missions. In the paper, we describe a variety of autonomous systems that require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, which is essential for autonomous planning and for giving humans situational awareness, required the development of fast loop‐closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. We describe technical contributions throughout our system that played a significant role in its performance. We also present results from our system both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain. © 2012 Wiley Periodicals, Inc. Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/93532/1/21426_ftp.pd
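
    To illustrate the flavor of decoupled centralized planning with myopic execution, here is a minimal Python sketch in which a central allocator assigns exploration goals and each robot simply drives to its assigned goal. The greedy nearest-frontier rule is a simplified stand-in for the competition system's allocator, used only for illustration.

```python
import math

def assign_tasks(robot_positions, frontier_points):
    """Greedy centralized allocation: each frontier point is assigned to the
    nearest currently-unassigned robot; robots then execute their assigned
    goal myopically. Simplified illustration, not the MAGIC 2010 allocator."""
    assignments = {}
    free_robots = dict(robot_positions)          # robot_id -> (x, y)
    for frontier in frontier_points:
        if not free_robots:
            break
        nearest = min(free_robots,
                      key=lambda rid: math.dist(free_robots[rid], frontier))
        assignments[nearest] = frontier
        del free_robots[nearest]
    return assignments

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
frontiers = [(2.0, 3.0), (9.0, 4.0)]
print(assign_tasks(robots, frontiers))
```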

    Modified System Design and Implementation of an Intelligent Assistive Robotic Manipulator

    This thesis presents three improvements to the current UCF MANUS system. The first improvement modifies the existing fine motion controller into a PI controller that has been optimized to prevent the object from leaving the view of the cameras used for visual servoing. This is achieved by adding a weight matrix to the proportional part of the controller that is constrained by an artificial ROI. When the feature points being used approach the boundaries of the ROI, the optimized controller weights are calculated using quadratic programming and added to the nominal proportional gain portion of the controller. The second improvement is a compensatory gross motion method designed to ensure that the desired object can be identified. If the object cannot be identified after the initial gross motion, the end-effector is moved to one of three different locations around the object until the object is identified or all possible positions are checked. This framework combines the Kanade-Lucas-Tomasi local tracking method with the ferns global detector/tracker to create a method that utilizes the strengths of both systems to overcome their inherent weaknesses. The last improvement is a particle-filter-based tracking algorithm that robustifies the visual servoing function of fine motion. This method performs better than the previously implemented global detector/tracker, allowing the tracker to successfully track the object in complex environments with non-ideal conditions.
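
    The sketch below illustrates the idea of weighting the proportional term so features are pushed back toward the ROI center as they approach its boundary. Note that the thesis computes these weights with quadratic programming; this simplified version uses a heuristic weight instead, and all names and gains are hypothetical.

```python
import numpy as np

def roi_weight(feature_px, roi_center, roi_half_size):
    """Heuristic per-axis weight that grows as a feature point nears the ROI
    boundary (the thesis obtains weights via quadratic programming; this is
    only an illustration of the intended effect)."""
    offset = np.abs(np.asarray(feature_px) - roi_center) / roi_half_size
    return 1.0 + 4.0 * np.clip(offset, 0.0, 1.0) ** 2   # weight >= 1

def pi_control(error, integral, dt, kp=0.5, ki=0.05, weight=1.0):
    """PI law on the image-space feature error; 'weight' scales only the
    proportional part, as in the weighted-proportional idea above."""
    integral = integral + error * dt
    command = kp * weight * error + ki * integral
    return command, integral

err = np.array([40.0, -10.0])                        # pixel error of a feature
w = roi_weight([600.0, 200.0],
               roi_center=np.array([320.0, 240.0]),
               roi_half_size=np.array([300.0, 220.0]))
cmd, integ = pi_control(err, integral=np.zeros(2), dt=0.03, weight=w)
print(cmd)
```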

    Decentralized path planning for multiple agents in complex environments using rapidly-exploring random trees

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 89-94). This thesis presents a novel approach to address the challenge of planning paths for real-world multi-agent systems operating in complex environments. The technique developed, the Decentralized Multi-Agent Rapidly-exploring Random Tree (DMA-RRT) algorithm, is an extension of the CL-RRT algorithm to the multi-agent case, retaining its ability to plan quickly even with complex constraints. Moreover, a merit-based token passing coordination strategy is presented as a core component of the DMA-RRT algorithm. This coordination strategy makes use of the tree of feasible trajectories grown in the CL-RRT algorithm to dynamically update the order in which agents plan. This reordering is based on a measure of each agent's incentive to replan and allows agents with a greater incentive to plan sooner, thus reducing the global cost and improving the team's overall performance. An extended version of the algorithm, Cooperative DMA-RRT, is also presented to introduce cooperation between agents during the path selection process. The paths generated are proven to satisfy inter-agent constraints, such as collision avoidance, and a set of simulation and experimental results verify the algorithm's performance. A small-scale rover is also presented as part of a practical test platform for the DMA-RRT algorithm. by Vishnu R. Desaraju. S.M.
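
    A minimal sketch of the merit-based token passing idea: each agent's "merit" is its expected cost improvement from replanning (estimated from its own tree of feasible trajectories), and the token goes to the agent with the largest merit. The data structure and cost fields are hypothetical simplifications, not the DMA-RRT implementation.

```python
def pass_token(agents):
    """Give the replanning token to the agent with the largest expected cost
    improvement from replanning (its 'merit'). 'agents' maps an agent id to a
    dict with 'current_cost' and 'best_candidate_cost' estimated from that
    agent's tree of feasible trajectories."""
    def merit(agent_id):
        a = agents[agent_id]
        return a["current_cost"] - a["best_candidate_cost"]
    return max(agents, key=merit)

agents = {
    "agent1": {"current_cost": 12.0, "best_candidate_cost": 11.5},
    "agent2": {"current_cost": 20.0, "best_candidate_cost": 14.0},  # biggest gain
    "agent3": {"current_cost":  9.0, "best_candidate_cost":  8.9},
}
print(pass_token(agents))   # -> "agent2" replans next
```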

    Experimental testbeds for real-time motion planning : implementation and lessons learned

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 103-107). A fundamental step in research on autonomous robotic systems is the actual development and testing of experimental platforms, to validate the system design and the effective integration of hardware and real-time software. The objective of this thesis is to report on the experimental implementation of platforms and testing environments for real-time motion planning. First, a robust planning and control system using the closed-loop prediction RRT approach was implemented on a robotic forklift. The system displayed robust performance in the execution of several tasks in an uncertain demonstration environment at Fort Belvoir, Virginia, in June 2009. Second, an economical testbed based on an infrared motion capture system was implemented for indoor experiments. Exploiting the advantages of a controlled indoor environment and the reliable navigation outputs of the motion capture system, different variations of the planning problem can be explored with accuracy, safety, and convenience. Additionally, a motion planning problem for a robotic vehicle whose dynamics depend on unknown parameters is introduced. Typically, motion planning problems in robotics assume perfect knowledge of the robot's dynamics, and both planner and controller are responsible only for their own parts of a hierarchical framework. A different approach is proposed here, in which the planner explicitly takes into account the uncertainties about the model parameters and generates plans that are safe for the whole uncertain parameter range. As the vehicle executes the generated plan, the parameter uncertainty is decreased based on the observed behavior, which gradually allows more efficient planning with smaller uncertainties. by Jeong hwan Jeon. S.M.
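
    The toy sketch below illustrates the plan-for-the-whole-range idea: plan conservatively over an interval of an unknown gain, then tighten the interval as observed behavior rules out values. The one-parameter acceleration model and all numbers are assumptions for illustration, not the thesis's vehicle model.

```python
def shrink_interval(interval, observed_accel, command, tolerance=0.05):
    """Tighten the feasible range of an unknown gain k, assuming a toy model
    accel = k * command. Values of k inconsistent with the observation
    (beyond 'tolerance') are discarded."""
    lo, hi = interval
    k_est = observed_accel / command
    return max(lo, k_est - tolerance), min(hi, k_est + tolerance)

def safe_speed_limit(interval, braking_distance_budget):
    """Plan for the worst case over the remaining interval: the smallest
    braking gain limits the safe speed (v = sqrt(2 * a_min * d))."""
    lo, _ = interval
    return (2.0 * lo * braking_distance_budget) ** 0.5

k_range = (0.4, 1.2)                       # initial uncertainty about the gain
print(safe_speed_limit(k_range, 10.0))     # conservative speed limit
k_range = shrink_interval(k_range, observed_accel=0.9, command=1.0)
print(k_range, safe_speed_limit(k_range, 10.0))  # less conservative after data
```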

    Reliable and safe autonomy for ground vehicles in unstructured environments

    This thesis is concerned with the algorithms and systems that are required to enable safe autonomous operation of an unmanned ground vehicle (UGV) in an unstructured and unknown environment; one in which there is no specific infrastructure to assist the vehicle autonomy and complete a priori information is not available. Under these conditions it is necessary for an autonomous system to perceive the surrounding environment, in order to perform safe and reliable control actions with respect to the context of the vehicle, its task and the world. Specifically, exteroceptive sensors measure physical properties of the world. This information is interpreted to extract a higher level perception, then mapped to provide a consistent spatial context. This map of perceived information forms an integral part of the autonomous UGV (AUGV) control system architecture; therefore any perception or mapping errors reduce the reliability and safety of the system. Currently, commercially viable autonomous systems achieve the requisite level of reliability and safety by using strong structure within their operational environment. This permits the use of powerful assumptions about the world, which greatly simplify the perception requirements. For example, in an urban context, things that look approximately like roads are roads. In an indoor environment, vertical structure must be avoided and everything else is traversable. By contrast, when this structure is not available, little can be assumed and the burden on perception is very large. In these cases, reliability and safety must currently be provided by a tightly integrated human supervisor. The major contribution of this thesis is to provide a holistic approach to identify and mitigate the primary sources of error in typical AUGV sensor feedback systems (comprising perception and mapping), to promote reliability and safety. This includes an analysis of the geometric and temporal errors that occur in the coordinate transformations that are required for mapping, and methods to minimise these errors in real systems. Interpretive errors are also studied and methods to mitigate them are presented. These methods combine information theoretic measures with multiple sensor modalities, to improve perceptive classification and provide sensor redundancy. The work in this thesis is implemented and tested on a real AUGV system, but the methods do not rely on any particular aspects of this vehicle. They are all generally and widely applicable. This thesis provides a firm base at a low level, from which continued research in autonomous reliability and safety at ever higher levels can be performed
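
    One common way to mitigate the temporal part of the transform error described above is to interpolate the vehicle pose to the exact timestamp of each sensor measurement before projecting it into the world frame. The planar pose format and the linear interpolation below are simplifying assumptions for illustration, not the thesis's method.

```python
import math

def interpolate_pose(pose_a, pose_b, t):
    """Linearly interpolate 2D poses (t_stamp, x, y, heading) to time t, so a
    range return is mapped with the pose at its own timestamp rather than the
    nearest navigation update."""
    ta, xa, ya, ha = pose_a
    tb, xb, yb, hb = pose_b
    alpha = (t - ta) / (tb - ta)
    dh = math.atan2(math.sin(hb - ha), math.cos(hb - ha))   # shortest angle
    return (xa + alpha * (xb - xa),
            ya + alpha * (yb - ya),
            ha + alpha * dh)

def range_to_world(pose, bearing, rng):
    """Project a planar range/bearing return into the world frame."""
    x, y, heading = pose
    return (x + rng * math.cos(heading + bearing),
            y + rng * math.sin(heading + bearing))

pose = interpolate_pose((0.00, 0.0, 0.0, 0.0), (0.10, 1.0, 0.0, 0.05), t=0.04)
print(range_to_world(pose, bearing=math.pi / 4, rng=10.0))
```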

    Lane estimation for autonomous vehicles using vision and LIDAR

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 109-114). Autonomous ground vehicles, or self-driving cars, require a high level of situational awareness in order to operate safely and efficiently in real-world conditions. A system able to quickly and reliably estimate the location of the roadway and its lanes based upon local sensor data would be a valuable asset both to fully autonomous vehicles and to driver assistance technologies. To be most useful, the system must accommodate a variety of roadways, a range of weather and lighting conditions, and highly dynamic scenes with other vehicles and moving objects. Lane estimation can be modeled as a curve estimation problem, where sensor data provides partial and noisy observations of curves. The number of curves to estimate may be initially unknown and many of the observations may be outliers and false detections (e.g., due to tree shadows or lens flare). The challenge is to detect lanes when and where they exist, and to update the lane estimates as new observations are received. This thesis describes algorithms for feature detection and curve estimation, as well as a novel curve representation that permits fast and efficient estimation while rejecting outliers. Locally observed road paint and curb features are fused together in a lane estimation framework that detects and estimates all nearby travel lanes. The system handles roads with complex geometries and makes no assumptions about the position and orientation of the vehicle with respect to the roadway. Early versions of these algorithms successfully guided a fully autonomous Land Rover LR3 through the 2007 DARPA Urban Challenge, a 90 km urban race course, at speeds up to 40 km/h amidst moving traffic. We evaluate these and subsequent versions with a ground truth dataset containing manually labeled lane geometries for every moment of vehicle travel in two large and diverse datasets that include more than 300,000 images and 44 km of roadway. The results illustrate the capabilities of our algorithms for robust lane estimation in the face of challenging conditions and unknown roadways. by Albert S. Huang. Ph.D.
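
    To illustrate curve estimation with outlier rejection in this setting, here is a generic RANSAC-style sketch that fits a quadratic lane-boundary curve to noisy road-paint detections while discarding false detections. It is not the thesis's curve representation; the quadratic model, tolerances, and synthetic detections are assumptions.

```python
import random
import numpy as np

def ransac_lane_fit(points, iterations=200, inlier_tol=0.2, min_inliers=8):
    """Fit y = c2*x^2 + c1*x + c0 to candidate lane points, tolerating
    outliers such as shadow or flare detections. Generic RANSAC sketch."""
    best_coeffs, best_count = None, 0
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        sample = pts[random.sample(range(len(pts)), 3)]          # minimal set
        coeffs = np.polyfit(sample[:, 0], sample[:, 1], deg=2)
        residuals = np.abs(np.polyval(coeffs, pts[:, 0]) - pts[:, 1])
        count = int(np.sum(residuals < inlier_tol))
        if count > best_count:
            best_coeffs, best_count = coeffs, count
    return best_coeffs if best_count >= min_inliers else None

# Hypothetical detections: a gently curving boundary plus a few outliers.
xs = np.linspace(0, 20, 30)
ys = 0.01 * xs**2 + 0.1 * xs + np.random.normal(0, 0.05, xs.size)
points = list(zip(xs, ys)) + [(5.0, 4.0), (12.0, -3.0)]
print(ransac_lane_fit(points))
```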

    Mapping of complex marine environments using an unmanned surface craft

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 185-199). Recent technology has combined accurate GPS localization with mapping to build 3D maps in a diverse range of terrestrial environments, but the mapping of marine environments lags behind. This is particularly true in shallow water and coastal areas with man-made structures such as bridges, piers, and marinas, which can pose formidable challenges to autonomous underwater vehicle (AUV) operations. In this thesis, we propose a new approach for mapping shallow-water marine environments, combining data from both above and below the water in a robust probabilistic state estimation framework. The ability to rapidly acquire detailed maps of these environments would have many applications, including surveillance, environmental monitoring, forensic search, and disaster recovery. Whereas most recent AUV mapping research has been limited to open waters, far from man-made surface structures, in our work we focus on complex shallow-water environments, such as rivers and harbors, where man-made structures block GPS signals and pose hazards to navigation. Our goal is to enable an autonomous surface craft to combine data from the heterogeneous environments above and below the water surface - as if the water were drained, and we had a complete integrated model of the marine environment, with full visibility. To tackle this problem, we propose a new framework for 3D SLAM in marine environments that combines data obtained concurrently from above and below the water in a robust probabilistic state estimation framework. Our work makes systems, algorithmic, and experimental contributions in perceptual robotics for the marine environment. We have created a novel Autonomous Surface Vehicle (ASV), equipped with substantial onboard computation and an extensive sensor suite that includes three SICK lidars, a Blueview MB2250 imaging sonar, a Doppler Velocity Log, and an integrated global positioning system/inertial measurement unit (GPS/IMU) device. The data from these sensors are processed in a hybrid metric/topological SLAM state estimation framework. A key challenge to mapping is extracting effective constraints from 3D lidar data despite GPS loss and reacquisition. This was achieved by developing a GPS trust engine that uses a semi-supervised learning classifier to ascertain the validity of GPS information for different segments of the vehicle trajectory. This eliminates the troublesome effects of multipath on the vehicle trajectory estimate, and provides cues for submap decomposition. Localization from lidar point clouds is performed using octrees combined with Iterative Closest Point (ICP) matching, which provides constraints between submaps both within and across different mapping sessions. Submap positions are optimized via least squares optimization of the graph of constraints, to achieve global alignment. The global vehicle trajectory is used for subsea sonar bathymetric map generation and for mesh reconstruction from lidar data for 3D visualization of above-water structures. We present experimental results in the vicinity of several structures spanning or along the Charles River between Boston and Cambridge, MA. The Harvard and Longfellow Bridges, three sailing pavilions, and a yacht club provide structures of interest, having both extensive superstructure and subsurface foundations. To quantitatively assess the mapping error, we compare against a georeferenced model of the Harvard Bridge using blueprints from the Library of Congress. Our results demonstrate the potential of this new approach to achieve robust and efficient model capture for complex shallow-water marine environments. Future work aims to incorporate autonomy for path planning of a region of interest while performing collision avoidance, to enable fully autonomous surveys that achieve full sensor coverage of a complete marine environment. by Jacques Chadwick Leedekerken. Ph.D.
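
    The sketch below illustrates the graph-of-constraints formulation behind the submap alignment step: relative-pose constraints (odometry-style links plus a loop closure) are solved in a least-squares sense. It is a toy one-dimensional version for clarity, not the thesis's full 3D hybrid metric/topological implementation; the constraint values are made up.

```python
import numpy as np

def optimize_submap_offsets(constraints, num_submaps):
    """Toy 1D submap alignment: each constraint (i, j, delta) says submap j
    should sit 'delta' ahead of submap i along one axis. Solve for all offsets
    in a least-squares sense, anchoring submap 0 at 0 to fix the gauge."""
    rows, rhs = [], []
    anchor = np.zeros(num_submaps)
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append(0.0)
    for i, j, delta in constraints:
        row = np.zeros(num_submaps)
        row[j], row[i] = 1.0, -1.0
        rows.append(row)
        rhs.append(delta)
    offsets, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return offsets

# Odometry-style chain plus one loop closure between submaps 0 and 3.
constraints = [(0, 1, 5.0), (1, 2, 5.2), (2, 3, 4.9), (0, 3, 15.5)]
print(optimize_submap_offsets(constraints, num_submaps=4))
```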