5 research outputs found

    Detecting, tracking, and warning of traffic threats to police stopped along the roadside

    Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 65-66).
    Despite years of research into improving the safety of police roadside stops, reckless drivers continue to injure or kill police personnel stopped on the roadside at an alarming rate. We have proposed to reduce this problem through a "divert and alert" approach: projecting lasers onto the road surface as virtual flares to divert incoming vehicles, and alerting officers of dangerous incoming vehicles early enough to take life-saving evasive action. This thesis describes the initial development of the Officer Alerting Mechanism (OAM), which uses cameras to detect and track incoming vehicles and calculates their real-world positions and trajectories. It presents a procedure for calibrating the camera software system with the laser, as well as a system that allows an officer to draw an arbitrary laser pattern on the screen that is then projected onto the road. Trajectories violating the "no-go" zone of the projected laser pattern are detected, and the officer is accordingly alerted to a potentially dangerous vehicle.
    by James Karraker. M. Eng.
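
    The alerting check described above — flagging tracked vehicles whose trajectories will enter the projected "no-go" zone — can be sketched as a constant-velocity prediction tested against a polygonal zone. This is only an illustration; the motion model, the polygon test, and all parameter values below are assumptions, not the OAM implementation from the thesis.

    # Illustrative sketch: predict a tracked vehicle forward under constant
    # velocity and flag it if the predicted path enters a polygonal "no-go" zone.
    # Names, the motion model, and the numbers are assumptions for illustration.

    def point_in_polygon(x, y, polygon):
        """Ray-casting test: is (x, y) inside the polygon given as (x, y) vertices?"""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    def violates_no_go(position, velocity, no_go_zone, horizon_s=3.0, dt=0.1):
        """Propagate a constant-velocity track forward and report no-go intrusions."""
        x, y = position
        vx, vy = velocity
        t = 0.0
        while t <= horizon_s:
            if point_in_polygon(x + vx * t, y + vy * t, no_go_zone):
                return True
            t += dt
        return False

    # Example: a vehicle 50 m away closing at 20 m/s toward a 10 m x 5 m zone.
    zone = [(0.0, -2.5), (10.0, -2.5), (10.0, 2.5), (0.0, 2.5)]
    print(violates_no_go(position=(50.0, 0.0), velocity=(-20.0, 0.0), no_go_zone=zone))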

    Kontextunterstützte Fahrstreifenschätzung - Szeneninterpretation durch Lernen der räumlichen Beziehungen zwischen Bildmerkmalen und Fahrstreifen (Context-Aided Lane Estimation: Scene Interpretation by Learning the Spatial Relations between Image Features and Lanes)

    Advanced driver assistance systems and automated driving are becoming more and more important in the market of personal mobility. By increasing traffic safety and allowing the driver to use the traveling time for other activities, the coming generation of intelligent vehicles creates a new definition of mobility for the future. To extend the limitations of systems for fully automated driving, research institutions and companies are trying to master more complex vehicle environments, such as urban traffic. While current approaches for camera-based vehicle environment perception use traditional image segmentation and object detection techniques, this work presents a big step towards comprehensive scene understanding. For this purpose, powerful machine learning methods are applied to learn the spatial relations between several types of features in the camera image and the vehicle trajectory. Registering these spatial relations for all features in a video frame yields a distribution map into which a lane model can be fitted. Additionally, the context of the current vehicle environment is determined by extracting global image features. Several possibilities for improving lane detection performance with this additional context information are analyzed, and a combination of global and local lane or lane-border detection methods is proposed. It is shown that many different types of features within the vehicle environment provide important information about the lane and that, by modeling the spatial relations between features and the trajectory of the vehicle, its lane can be detected. It is also shown that knowledge about the current scene context can be used to improve lane detection performance.
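
    The core mechanism summarized above — each detected feature casting votes for likely lane positions according to a learned spatial relation, with all votes accumulated into a distribution map into which a lane model is fitted — can be roughed out as follows. The feature types, offset distributions, grid resolution, and the trivial "lane model" are illustrative assumptions, not the trained models from the dissertation.

    # Illustrative sketch: features vote for the lane's lateral position according
    # to assumed "learned" offset distributions; the votes form a 1-D distribution
    # map whose peak is taken as the lane-centre estimate. All numbers are made up.
    import math

    # Assumed relation per feature type: mean lateral offset of the lane centre
    # relative to the feature (metres) and the spread of that relation.
    LEARNED_OFFSETS = {
        "curb":      (-1.8, 0.4),   # lane centre tends to lie 1.8 m left of a curb
        "guardrail": (-2.5, 0.6),
        "vehicle":   ( 0.0, 0.8),   # other vehicles tend to drive near the centre
    }

    def distribution_map(features, grid_min=-5.0, grid_max=5.0, step=0.1):
        """Accumulate Gaussian votes from all features into a lateral-offset map."""
        cells = int((grid_max - grid_min) / step) + 1
        votes = [0.0] * cells
        for ftype, lateral_pos in features:
            mean, sigma = LEARNED_OFFSETS[ftype]
            centre = lateral_pos + mean
            for i in range(cells):
                x = grid_min + i * step
                votes[i] += math.exp(-0.5 * ((x - centre) / sigma) ** 2)
        return votes

    def fit_lane_centre(votes, grid_min=-5.0, step=0.1):
        """Fit the simplest possible 'lane model': the mode of the distribution map."""
        best = max(range(len(votes)), key=lambda i: votes[i])
        return grid_min + best * step

    features = [("curb", 3.5), ("guardrail", 4.2), ("vehicle", 1.6)]
    votes = distribution_map(features)
    print(f"estimated lane centre: {fit_lane_centre(votes):+.1f} m")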

    Lane estimation for autonomous vehicles using vision and LIDAR

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student submitted PDF version of thesis. Includes bibliographical references (p. 109-114).
    Autonomous ground vehicles, or self-driving cars, require a high level of situational awareness in order to operate safely and efficiently in real-world conditions. A system able to quickly and reliably estimate the location of the roadway and its lanes based upon local sensor data would be a valuable asset both to fully autonomous vehicles and to driver assistance technologies. To be most useful, the system must accommodate a variety of roadways, a range of weather and lighting conditions, and highly dynamic scenes with other vehicles and moving objects. Lane estimation can be modeled as a curve estimation problem, where sensor data provides partial and noisy observations of curves. The number of curves to estimate may be initially unknown, and many of the observations may be outliers and false detections (e.g., due to tree shadows or lens flare). The challenge is to detect lanes when and where they exist, and to update the lane estimates as new observations are received. This thesis describes algorithms for feature detection and curve estimation, as well as a novel curve representation that permits fast and efficient estimation while rejecting outliers. Locally observed road paint and curb features are fused together in a lane estimation framework that detects and estimates all nearby travel lanes. The system handles roads with complex geometries and makes no assumptions about the position and orientation of the vehicle with respect to the roadway. Early versions of these algorithms successfully guided a fully autonomous Land Rover LR3 through the 2007 DARPA Urban Challenge, a 90 km urban race course, at speeds up to 40 km/h amidst moving traffic. We evaluate these and subsequent versions with a ground truth dataset containing manually labeled lane geometries for every moment of vehicle travel in two large and diverse datasets that include more than 300,000 images and 44 km of roadway. The results illustrate the capabilities of our algorithms for robust lane estimation in the face of challenging conditions and unknown roadways.
    by Albert S. Huang. Ph.D.
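
    The curve-estimation framing above — partial, noisy lane observations contaminated by outliers such as tree shadows or lens flare — can be illustrated with a small RANSAC-style fit. The quadratic lane model, thresholds, and synthetic data below are generic stand-ins, not the novel curve representation or estimator developed in the thesis.

    # Illustrative sketch: treat lane estimation as curve fitting to noisy, partial
    # observations with outliers, using a generic RANSAC-style quadratic fit
    # x = a*y^2 + b*y + c in road-plane coordinates.
    import random
    import numpy as np

    def ransac_lane(points, iters=200, tol=0.3):
        """Return the quadratic (a, b, c) supported by the most inlier detections."""
        pts = np.asarray(points)            # columns: lateral x, longitudinal y
        best_coeffs, best_count = None, 0
        rng = np.random.default_rng(0)
        for _ in range(iters):
            sample = pts[rng.choice(len(pts), size=3, replace=False)]
            coeffs = np.polyfit(sample[:, 1], sample[:, 0], 2)   # fit x as f(y)
            residuals = np.abs(np.polyval(coeffs, pts[:, 1]) - pts[:, 0])
            inliers = pts[residuals < tol]
            if len(inliers) > best_count:
                best_count = len(inliers)
                best_coeffs = np.polyfit(inliers[:, 1], inliers[:, 0], 2)
        return best_coeffs, best_count

    # Synthetic detections: a gently curving boundary plus a few false returns.
    random.seed(0)
    pts = [(0.002 * y * y + 1.5 + random.gauss(0, 0.05), y) for y in range(5, 40)]
    pts += [(random.uniform(-3.0, 6.0), random.uniform(5.0, 40.0)) for _ in range(8)]
    coeffs, inlier_count = ransac_lane(pts)
    print(coeffs, inlier_count)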

    Generación de Trayectorias de Curvatura Continua para el Seguimiento de Líneas basado en Visión Artificial (Generation of Continuous-Curvature Trajectories for Line Following Based on Computer Vision)

    Mathematical development and analysis of new techniques for the generation of continuous-curvature trajectories, applied to the problem of line following with bounded curvature and sharpness.
    Girbés Juan, V. (2010). Generación de Trayectorias de Curvatura Continua para el Seguimiento de Líneas basado en Visión Artificial. http://hdl.handle.net/10251/12881
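
    Continuous-curvature paths with bounded curvature and sharpness are commonly built from clothoid-like segments in which curvature changes linearly with arc length. The following is a rough numerical sketch under those generic assumptions; the parameter values are arbitrary and none of it is taken from the thesis itself.

    # Illustrative sketch: integrate a planar path whose curvature ramps up at a
    # bounded sharpness (dk/ds) towards a bounded maximum curvature, the
    # clothoid-style behaviour behind continuous-curvature line following.
    import math

    def continuous_curvature_path(length, k_max=0.2, sharpness=0.05, ds=0.01):
        """Return (x, y, heading) samples of a clothoid-then-arc segment."""
        x = y = heading = curvature = 0.0
        samples = [(x, y, heading)]
        s = 0.0
        while s < length:
            # Curvature grows at the bounded rate until it saturates at k_max,
            # so both curvature and its derivative stay within their limits.
            curvature = min(k_max, curvature + sharpness * ds)
            heading += curvature * ds
            x += math.cos(heading) * ds
            y += math.sin(heading) * ds
            s += ds
            samples.append((x, y, heading))
        return samples

    path = continuous_curvature_path(length=20.0)
    x_end, y_end, heading_end = path[-1]
    print(f"end point: ({x_end:.2f}, {y_end:.2f}), final heading: {heading_end:.2f} rad")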