635 research outputs found

    Driving experience of an indirect vision cockpit

    Lane estimation for autonomous vehicles using vision and LIDAR

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 109-114).

    Autonomous ground vehicles, or self-driving cars, require a high level of situational awareness in order to operate safely and efficiently in real-world conditions. A system able to quickly and reliably estimate the location of the roadway and its lanes from local sensor data would be a valuable asset both to fully autonomous vehicles and to driver assistance technologies. To be most useful, the system must accommodate a variety of roadways, a range of weather and lighting conditions, and highly dynamic scenes with other vehicles and moving objects. Lane estimation can be modeled as a curve estimation problem, where sensor data provides partial and noisy observations of curves. The number of curves to estimate may be initially unknown, and many of the observations may be outliers and false detections (e.g., due to tree shadows or lens flare). The challenge is to detect lanes when and where they exist, and to update the lane estimates as new observations are received. This thesis describes algorithms for feature detection and curve estimation, as well as a novel curve representation that permits fast and efficient estimation while rejecting outliers. Locally observed road paint and curb features are fused together in a lane estimation framework that detects and estimates all nearby travel lanes. The system handles roads with complex geometries and makes no assumptions about the position and orientation of the vehicle with respect to the roadway. Early versions of these algorithms successfully guided a fully autonomous Land Rover LR3 through the 2007 DARPA Urban Challenge, a 90 km urban race course, at speeds up to 40 km/h amidst moving traffic. We evaluate these and subsequent versions with a ground truth dataset containing manually labeled lane geometries for every moment of vehicle travel in two large and diverse datasets that include more than 300,000 images and 44 km of roadway. The results illustrate the capabilities of our algorithms for robust lane estimation in the face of challenging conditions and unknown roadways.

    by Albert S. Huang, Ph.D.
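    The abstract frames lane finding as curve estimation from partial, noisy, outlier-laden observations. As a purely illustrative sketch (not the thesis's novel curve representation; the quadratic model, the 0.1 m inlier threshold, and all names here are hypothetical), the following RANSAC-style loop fits a lane-boundary curve to 2-D road-paint detections while rejecting outliers:

```python
import numpy as np

def fit_lane_ransac(points, n_iters=200, threshold=0.1):
    """Fit x = a*y^2 + b*y + c to noisy lane observations
    (an N x 2 array of [x, y] in metres), RANSAC-style."""
    xs, ys = points[:, 0], points[:, 1]
    best_inliers = np.zeros(len(points), dtype=bool)
    best_coeffs = None
    for _ in range(n_iters):
        # Three samples are the minimum that determine a quadratic.
        idx = np.random.choice(len(points), 3, replace=False)
        coeffs = np.polyfit(ys[idx], xs[idx], deg=2)
        # Count observations within `threshold` metres of the curve.
        residuals = np.abs(np.polyval(coeffs, ys) - xs)
        inliers = residuals < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_coeffs = inliers, coeffs
    if best_coeffs is not None and best_inliers.sum() >= 3:
        # Refit on all inliers for the final estimate.
        best_coeffs = np.polyfit(ys[best_inliers], xs[best_inliers], deg=2)
    return best_coeffs, best_inliers
```

    A full lane tracker would, as the abstract describes, also decide how many curves exist and update each estimate recursively as new observations arrive.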

    Multi-task near-field perception for autonomous driving using surround-view fisheye cameras

    The formation of eyes led to the big bang of evolution. The dynamics changed from a primitive organism waiting for food to come into contact with it, to food being sought out by visual sensors. The human eye is one of the most sophisticated developments of evolution, but it still has defects. Over millions of years, humans have evolved a biological perception algorithm capable of driving cars, operating machinery, piloting aircraft, and navigating ships. Automating these capabilities for computers is critical for various applications, including self-driving cars, augmented reality, and architectural surveying. Near-field visual perception in the context of self-driving cars covers the environment in a range of 0-10 meters with 360° coverage around the vehicle. It is a critical decision-making component in the development of safer automated driving. Recent advances in computer vision and deep learning, in conjunction with high-quality sensors such as cameras and LiDARs, have fueled mature visual perception solutions. Until now, far-field perception has been the primary focus. Another significant issue is the limited processing power available for developing real-time applications. Because of this bottleneck, there is frequently a trade-off between performance and run-time efficiency. We concentrate on the following issues in order to address them: 1) developing near-field perception algorithms with high performance and low computational complexity for various visual perception tasks, such as geometric and semantic tasks, using convolutional neural networks; 2) using multi-task learning to overcome computational bottlenecks by sharing initial convolutional layers between tasks and developing optimization strategies that balance the tasks.
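    The shared-encoder idea in point 2 can be made concrete with a small sketch. The model below is a hypothetical PyTorch illustration, not the thesis's architecture: one stack of initial convolutional layers feeds two task-specific heads, so the expensive early features are computed only once, and a fixed-weight loss sum stands in for the more elaborate task-balancing strategies the work develops.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared convolutional encoder with one lightweight head per task."""

    def __init__(self, n_seg_classes=10):
        super().__init__()
        # Shared initial convolutional layers (the costly part).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Example task heads: semantic segmentation and depth.
        self.seg_head = nn.Conv2d(64, n_seg_classes, 1)
        self.depth_head = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        feats = self.encoder(x)  # computed once, reused by every task
        return self.seg_head(feats), self.depth_head(feats)

def total_loss(seg_loss, depth_loss, w_seg=1.0, w_depth=0.5):
    # Naive fixed-weight balancing; the weights here are arbitrary.
    return w_seg * seg_loss + w_depth * depth_loss

seg_out, depth_out = MultiTaskNet()(torch.randn(1, 3, 64, 64))
```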

    Efficient resource allocation for automotive active vision systems

    Individual mobility on roads has a noticeable impact upon people's lives, including traffic accidents resulting in severe or even lethal injuries. Therefore the main goal when operating a vehicle is to safely participate in road traffic while minimising the adverse effects on our environment. This goal is pursued by road safety measures ranging from safety-oriented road design to driver assistance systems. The latter require exteroceptive sensors to acquire information about the vehicle's current environment. In this thesis an efficient resource allocation for automotive vision systems is proposed. The notion of allocating resources implies the presence of processes that observe the whole environment and that are able to efficiently direct attentive processes. Directing attention constitutes a decision-making process dependent upon the environment it operates in, the goal it pursues, and the sensor and computational resources it allocates. The sensor resources considered in this thesis are a subset of the multi-modal sensor system on a test vehicle provided by Audi AG, which is also used to evaluate our proposed resource allocation system. This thesis presents an original contribution in three respects. First, a system architecture designed to efficiently allocate both high-resolution sensor resources and computationally expensive processes based upon low-resolution sensor data is proposed. Second, a novel method to estimate 3-D range motion, efficient scan patterns for spin-image-based classifiers, and an evaluation of track-to-track fusion algorithms present contributions in the field of data processing methods. Third, a Pareto-efficient multi-objective resource allocation method is formalised, implemented, and evaluated using road traffic test sequences.
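    To illustrate the notion of Pareto efficiency behind the third contribution (a sketch only; the thesis's formalisation and actual objectives are not reproduced here), the function below filters a set of candidate allocations, keeping those that no other candidate beats in every objective. The example objectives, information gain versus negated compute cost, are invented for the demonstration.

```python
import numpy as np

def pareto_front(scores):
    """Return a boolean mask of Pareto-efficient rows of `scores`,
    an (n_candidates, n_objectives) array where higher is better."""
    n = scores.shape[0]
    efficient = np.ones(n, dtype=bool)
    for i in range(n):
        # Candidate j dominates i if it is at least as good in every
        # objective and strictly better in at least one.
        dominates = (np.all(scores >= scores[i], axis=1)
                     & np.any(scores > scores[i], axis=1))
        if dominates.any():
            efficient[i] = False
    return efficient

# Hypothetical candidates scored as [information gain, -compute cost]:
candidates = np.array([[0.9, -5.0], [0.7, -1.0], [0.6, -2.0]])
print(pareto_front(candidates))  # -> [ True  True False]
```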

    Fault-Tolerant Vision for Vehicle Guidance in Agriculture

    Ground Vehicle Platooning Control and Sensing in an Adversarial Environment

    The highways of the world are growing more congested. People are inherently bad drivers from a safety and system reliability perspective. Self-driving cars are one solution to this problem, as automation can remove human error and react consistently to unexpected events. Automated vehicles have been touted as a potential solution for improving highway utilization and increasing the safety of people on the roads. Automated vehicles have proven to be capable of interacting safely with human drivers, but the technology is still new. This means that there are points of failure that have not been discovered yet. The focus of this work is to provide a platform to evaluate the security and reliability of automated ground vehicles in an adversarial environment. An existing system was already in place, but it was limited to longitudinal control, relying on a steel cable to keep the vehicle on track. The upgraded platform was developed with computer vision to drive the vehicle around a track in order to facilitate an extended attack. Sensing and control methods for the platform are proposed to provide a baseline for the experimental platform. Vehicle control depends on extensive sensor systems to determine the vehicle position relative to its surroundings. A potential attack on a vehicle could be performed by jamming the sensors necessary to reliably control the vehicle. A method to extend the sensing utility of a camera is proposed as a countermeasure against a sensor jamming attack. A monocular camera can be used to determine the bearing to a target, and this work extends the sensor capabilities to estimate the distance to the target. This provides a redundant sensor if the standard distance sensor of a vehicle is compromised by a malicious agent. For a 320×200 pixel camera, the distance estimation is accurate between 0.5 and 3 m. One previously discovered vulnerability of automated highway systems is that vehicles can coordinate an attack to induce traffic jams and collisions. The effects of this attack on a vehicle system with mixed human and automated vehicles are analyzed. The insertion of human drivers into the system stabilizes the traffic jam at the cost of highway utilization.
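    The monocular range estimate described above can be illustrated with the standard pinhole-camera relation; this is a generic sketch under the assumption of a target of known physical height, not necessarily the estimator used in this work.

```python
def distance_from_pixel_height(focal_px, real_height_m, pixel_height):
    """Pinhole model: a target of known height subtends fewer pixels
    the farther away it is, so distance = f * H / h."""
    return focal_px * real_height_m / pixel_height

# Example with invented numbers: a 0.3 m tall target spanning 60 px
# under a 400 px focal length sits roughly 2 m away, inside the
# 0.5-3 m band where the abstract reports accurate estimates.
print(distance_from_pixel_height(400.0, 0.3, 60))  # -> 2.0
```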

    Robust ego-localization using monocular visual odometry