Combining Stereo Disparity and Optical Flow for Basic Scene Flow
Scene flow is a description of real world motion in 3D that contains more
information than optical flow. Because of its complexity there exists no
applicable variant for real-time scene flow estimation in an automotive or
commercial vehicle context that is sufficiently robust and accurate. Therefore,
many applications estimate the 2D optical flow instead. In this paper, we
examine the combination of top-performing state-of-the-art optical flow and
stereo disparity algorithms in order to achieve a basic scene flow. On the
public KITTI Scene Flow Benchmark we demonstrate the reasonable accuracy of the
combination approach and show its computational speed. Comment: Commercial Vehicle Technology Symposium (CVTS), 201
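The combination described above can be sketched in a few lines: back-project a pixel using its stereo disparity at two time instants, and take the difference of the two 3D points that the optical flow puts in correspondence. The intrinsics and baseline below are illustrative KITTI-like values, not parameters from the paper.

```python
def scene_flow_point(u, v, d0, d1, flow_u, flow_v,
                     f=721.5, cx=609.6, cy=172.9, baseline=0.54):
    """Recover the 3D motion of one pixel from stereo disparity and
    optical flow (illustrative sketch; KITTI-like intrinsics assumed).

    (u, v): pixel at time t; d0: disparity at t; d1: disparity of the
    matched pixel at t+1; (flow_u, flow_v): optical flow from t to t+1.
    """
    def backproject(u, v, d):
        # Triangulate: depth Z = f * baseline / disparity.
        z = f * baseline / d
        x = (u - cx) * z / f
        y = (v - cy) * z / f
        return (x, y, z)

    p0 = backproject(u, v, d0)                    # 3D point at time t
    p1 = backproject(u + flow_u, v + flow_v, d1)  # 3D point at time t+1
    return tuple(b - a for a, b in zip(p0, p1))   # 3D displacement

# A point whose disparity halves between frames has moved away from the camera:
dx, dy, dz = scene_flow_point(640, 180, d0=40.0, d1=20.0, flow_u=0.0, flow_v=0.0)
```

Per-pixel application of this recipe over dense flow and disparity maps is what turns the two 2D estimates into a basic scene flow field.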
Portable and Scalable In-vehicle Laboratory Instrumentation for the Design of i-ADAS
According to the WHO (World Health Organization), worldwide deaths from injuries are projected to rise from 5.1 million in 1990 to 8.4 million in 2020, with traffic-related incidents as the major cause of this increase. Intelligent Advanced Driving Assistance Systems (i-ADAS) provide a number of solutions to these safety challenges. We developed a scalable in-vehicle mobile i-ADAS research platform for the purpose of traffic context analysis and behavioral prediction, designed for understanding fundamental issues in intelligent vehicles. We outline our approach and describe the in-vehicle instrumentation.
Vehicular Instrumentation and Data Processing for the Study of Driver Intent
The primary goal of this thesis is to provide the processed experimental data needed to determine whether driver intentionality and driving-related actions can be predicted from quantitative and qualitative analysis of driver behaviour. Towards this end, an instrumented experimental vehicle capable of recording several synchronized streams of data (the surroundings of the vehicle, the driver's gaze with head pose, and the vehicle state) in a naturalistic driving environment was designed and developed. Several driving data sequences were recorded with the instrumented vehicle in both urban and rural environments. These sequences were automatically annotated for relevant artifacts such as lanes, vehicles, and safely drivable areas within road lanes. A framework and the associated algorithms required for cross-calibrating the gaze tracking system with the world coordinate system of the outdoor stereo system were also designed and implemented, allowing the driver's gaze to be mapped onto the surrounding environment. This instrumentation is currently being used for the study of driver intent, geared towards the development of driver maneuver prediction models.
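Once the rigid transform between the gaze tracker and the stereo rig has been found by cross-calibration, mapping the gaze onto the environment reduces to applying that transform to the gaze ray. A minimal sketch follows; the rotation R and translation t are hypothetical placeholders, not the thesis's calibration results.

```python
def transform_gaze(gaze_dir, origin, R, t):
    """Map a gaze ray (origin + direction) from the gaze-tracker frame
    into the stereo-rig (world) frame via a rigid transform (R, t).
    R: 3x3 rotation as nested lists; t: translation in metres."""
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

    world_origin = [a + b for a, b in zip(matvec(R, origin), t)]
    world_dir = matvec(R, gaze_dir)  # directions rotate but do not translate
    return world_origin, world_dir

# Identity rotation; tracker mounted 1 m behind the stereo rig along z:
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
o, d = transform_gaze([0, 0, 1], [0, 0, 0], R, [0, 0, -1.0])
```

Intersecting the transformed ray with the stereo depth map then yields the world point the driver is looking at.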
Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems
The European research project DESERVE (DEvelopment platform for Safe and Efficient dRiVE, 2012-2015) had the aim of designing and developing a platform tool to cope with the continuously increasing complexity, and the simultaneous need to reduce cost, of future embedded Advanced Driver Assistance Systems (ADAS). For this purpose, the DESERVE platform profits from cross-domain software reuse, standardization of automotive software component interfaces, and easy but safety-compliant integration of heterogeneous modules. This enables the development of a new generation of ADAS applications that combine different functions, sensors, actuators, hardware platforms, and Human Machine Interfaces (HMI) in challenging ways. This book presents the results of the DESERVE project concerning the ADAS development platform, test case functions, and the validation and evaluation of different approaches. The reader is invited to complement the content of this book with the deliverables published during the DESERVE project. Technical topics discussed in this book include: modern ADAS development platforms; design space exploration; driving modelling; video-based and radar-based ADAS functions; HMI for ADAS; and vehicle-hardware-in-the-loop validation systems.
Sensor fusion in driving assistance systems
International Mention in the doctoral degree.
Life in developed and developing countries is highly dependent on road and
urban motor transport. This activity involves a high cost for its active and passive
users in terms of pollution and accidents, which are largely attributable to
the human factor. New developments in safety and driving assistance, called
Advanced Driving Assistance Systems (ADAS), are intended to improve safety
in transportation and, in the mid-term, to lead to autonomous driving.
ADAS, like human driving, rely on sensors that provide information about the
environment, and sensor reliability is as crucial for ADAS applications as
sensing abilities are for human driving. One way to improve sensor
reliability is Sensor Fusion: developing novel strategies for environment
modeling with the help of several sensors and obtaining enhanced information
from the combination of the available data.
The present thesis offers a novel solution for obstacle detection and
classification in automotive applications using sensor fusion with two
widely available sensors on the market: a visible-spectrum camera and a
laser scanner. Cameras and laser scanners are commonly used sensors in the
scientific literature, increasingly affordable and ready to be deployed in
real-world applications. The proposed solution provides detection and
classification of some obstacles commonly present on the road, such as
pedestrians and cyclists.
Novel approaches for detection and classification have been explored in this
thesis, from the classification of point-cloud clusters obtained from the
laser scanner, to domain adaptation techniques for synthetic dataset
creation, including intelligent cluster extraction and ground detection and
removal in point clouds.
Official Doctoral Programme in Electrical, Electronic and Automatic Engineering. President: Cristina Olaverri Monreal; Secretary: Arturo de la Escalera Hueso; Member: José Eugenio Naranjo Hernández.
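The last two steps mentioned (ground removal and cluster extraction from laser point clouds) can be illustrated with a deliberately simplified sketch: a fixed-height ground cut instead of a fitted plane, and greedy single-linkage clustering instead of the thesis's actual methods. All thresholds are made-up values.

```python
def remove_ground(points, ground_z=0.0, tol=0.15):
    """Drop points within `tol` metres of an assumed flat ground plane.
    A real pipeline would fit the plane (e.g. with RANSAC); a fixed
    height is used here purely for illustration."""
    return [p for p in points if p[2] > ground_z + tol]

def cluster(points, radius=0.5):
    """Greedy single-linkage clustering of (x, y, z) points: a point
    joins a cluster when it lies within `radius` of any member."""
    clusters = []
    for p in points:
        merged = None
        for c in clusters:
            if any(sum((a - b) ** 2 for a, b in zip(p, q)) < radius ** 2
                   for q in c):
                if merged is None:
                    c.append(p); merged = c
                else:                     # p links two clusters: merge them
                    merged.extend(c); c.clear()
        clusters = [c for c in clusters if c]
        if merged is None:
            clusters.append([p])
    return clusters

scan = [(0.0, 0.0, 0.05), (5.0, 1.0, 0.9), (5.2, 1.1, 1.2), (12.0, -2.0, 0.8)]
obstacles = remove_ground(scan)   # the ground return at z=0.05 is dropped
groups = cluster(obstacles)       # nearby pair forms one cluster, far point another
```

Each resulting cluster would then be passed to a classifier (camera-aided, in the thesis's fusion scheme) to decide whether it is a pedestrian, a cyclist, or something else.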
Supervised learning and inference of semantic information from road scene images
Extraordinary Doctoral Award of the UAH, academic year 2013-2014. Nowadays, vision sensors are employed in the automotive industry to integrate advanced functionalities that assist humans while driving. However, autonomous vehicles are a hot field of research in both academic and industrial sectors and entail a step beyond ADAS. In particular, several challenges arise from autonomous navigation in urban scenarios due to their complexity in terms of structure and dynamic participants (e.g. pedestrians, vehicles, vegetation, etc.). Hence, providing image-understanding capabilities to autonomous robotic platforms is an essential target, because cameras can capture the 3D scene as perceived by a human. In fact, given this need for 3D scene understanding, there is increasing interest in joint object and scene labeling in the form of geometric and semantic inference of the relevant entities contained in urban environments. In this regard, this Thesis tackles two challenges: 1) the prediction of road intersection geometry and 2) the detection and orientation estimation of cars, pedestrians and cyclists. Different features extracted from stereo images of the public KITTI urban dataset are employed. This Thesis proposes supervised learning of discriminative models that rely on strong machine learning techniques for mining visual features. For the first task, we use 2D occupancy grid maps built from stereo sequences captured by a moving vehicle in a mid-sized city. Based on these bird's-eye-view images, we propose a smart parameterization of the layout of straight roads and intersections of four roads. The dependencies between the proposed discrete random variables that define the layouts are represented with Probabilistic Graphical Models. The problem is then formulated as structured prediction, in which we employ Conditional Random Fields (CRF) for learning, and convex Belief Propagation (dcBP) and Branch and Bound (BB) for inference.
For the validation of the proposed methodology, a set of tests is carried out, based on real images and on synthetic images with varying levels of random noise. Regarding the object detection and orientation estimation challenge in road scenes, the goal of this Thesis is to compete in the international challenge known as the KITTI evaluation benchmark, which encourages researchers to push forward the current state of the art in visual recognition methods, particularized for 3D urban scene understanding. This Thesis proposes to modify the successful part-based object detector known as DPM in order to learn richer models from 2.5D data (color and disparity). Therefore, we revisit the DPM framework, which is based on HOG features and mixture models trained with a latent SVM formulation. This Thesis then performs a set of modifications on top of DPM: I) an extension to the DPM training pipeline that accounts for 3D-aware features; II) a detailed analysis of the supervised parameter learning; III) two additional approaches: "feature whitening" and "stereo consistency check". Additionally, a) we analyze the KITTI dataset and several subtleties regarding the evaluation protocol; b) a large set of cross-validated experiments shows the performance of our contributions; and c) our best-performing approach is publicly ranked on the KITTI website, being the first to report results with stereo data, yielding increased object detection precision (3%-6%) for the class 'car' and ranking first for the class 'cyclist'.
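The KITTI detection metric referenced above counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box exceeds a class-dependent threshold (0.7 for cars). A minimal sketch of that per-image precision computation follows; the boxes are illustrative, not dataset values.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision(detections, ground_truth, thresh=0.7):
    """Fraction of detections that match some ground-truth box with
    IoU >= thresh (0.7 is KITTI's overlap threshold for the 'car' class)."""
    hits = sum(1 for d in detections
               if any(iou(d, g) >= thresh for g in ground_truth))
    return hits / len(detections) if detections else 0.0

gt  = [(10, 10, 50, 50)]
det = [(12, 12, 52, 52), (100, 100, 140, 140)]
p = precision(det, gt)   # first box overlaps heavily, second misses entirely
```

The benchmark's full protocol additionally sweeps the detector's confidence threshold and averages precision over recall levels and difficulty regimes, but the overlap test above is its core building block.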