1,514 research outputs found

    Using Deep Neural Networks to Classify Symbolic Road Markings for Autonomous Vehicles

    To make autonomous cars as safe as feasible for all road users, it is essential to interpret as many sources of trustworthy information as possible. There has been substantial research into interpreting objects such as traffic lights and pedestrian information; however, less attention has been paid to symbolic road markings (SRMs). SRMs carry essential information that autonomous vehicles need to interpret, hence this case study presents a comprehensive model focused on classifying painted symbolic road markings using a region of interest (ROI) detector and a deep convolutional neural network (DCNN). This two-stage model has been trained and tested on an extensive public dataset. In the investigated pipeline, Hough lines are used to extract features from which the CNN model is trained and tested, and an ROI detector crops and segments the road lane to eliminate non-essential parts of the image. The investigated model is robust, achieving up to 92.96 percent accuracy at 26.07 and 40.1 frames per second (FPS) using ROI-scaled and raw images, respectively.
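
    The two-stage structure described above (ROI extraction followed by CNN classification) can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the Hough-line ROI heuristic, the network architecture, and all function names are assumptions.

```python
import cv2
import numpy as np
from tensorflow.keras import layers, models

def extract_lane_roi(image):
    """Crop the road-lane region using Canny edges and probabilistic Hough lines."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return image  # fall back to the full frame if no lane lines are found
    xs = np.concatenate([[l[0][0], l[0][2]] for l in lines])
    ys = np.concatenate([[l[0][1], l[0][3]] for l in lines])
    return image[ys.min():ys.max(), xs.min():xs.max()]

def build_srm_classifier(num_classes, input_shape=(64, 64, 1)):
    """Small CNN that classifies a cropped road-marking patch (hypothetical shape)."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
```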

    Lane Endpoint Detection and Position Accuracy Evaluation for Sensor Fusion-Based Vehicle Localization on Highways

    Landmark-based vehicle localization is a key component of both autonomous driving and advanced driver assistance systems (ADAS). Previously used landmarks on highways, such as lane markings, lack information on longitudinal position. To address this problem, lane endpoints can be used as landmarks. This paper proposes two essential components for using lane endpoints as landmarks: lane endpoint detection and an evaluation of its position accuracy. First, it proposes a method to efficiently detect lane endpoints using a monocular forward-looking camera, the most widely installed perception sensor. Lane endpoints are detected with a small amount of computation in the following steps: lane detection, lane endpoint candidate generation, and lane endpoint candidate verification. Second, it proposes a method to reliably measure the position accuracy of the lane endpoints detected from images taken while the camera is moving at high speed. A camera is installed with a mobile mapping system (MMS) in a vehicle, and the position accuracy of the lane endpoints detected by the camera is measured by comparing their positions with ground truths obtained by the MMS. In the experiments, the proposed methods were evaluated and compared with previous methods on a dataset acquired while driving 80 km of highway in both daytime and nighttime.
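
    A minimal sketch of the three detection steps (lane detection, endpoint candidate generation, candidate verification) is given below. It is not the paper's method: the Hough-based lane detector and the edge-density verification heuristic are stand-ins, and all names and thresholds are assumptions.

```python
import cv2
import numpy as np

def detect_lane_endpoints(frame, window=7, min_edge_ratio=0.05):
    """Illustrative three-step pipeline: lane detection, endpoint candidate
    generation, and a placeholder candidate-verification heuristic."""
    # Step 1: lane detection via edges + probabilistic Hough transform.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 60, 180)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=10)
    if segments is None:
        return []

    # Step 2: the extremities of each detected segment are endpoint candidates.
    candidates = []
    for x1, y1, x2, y2 in (s[0] for s in segments):
        candidates.extend([(x1, y1), (x2, y2)])

    # Step 3: placeholder verification -- keep candidates whose local patch
    # still contains some edge response (the paper uses a dedicated verifier).
    verified = []
    for x, y in candidates:
        patch = edges[max(0, y - window):y + window, max(0, x - window):x + window]
        if patch.size and (patch > 0).mean() > min_edge_ratio:
            verified.append((int(x), int(y)))
    return verified
```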

    Path planning and map monitoring for self-driving vehicles based on HD maps

    This work has been carried out within the context of the Techs4AgeCar project in the Robesafe research group, which focuses on the development of an autonomous driving vehicle. The work is part of two different layers of the project, the mapping and planning layers, since both are directly related. A global route planner has been developed based on previously generated offline high-definition maps (HD Maps). In addition, the map-generation stage that produces the maps later used by the planner has also been covered. Finally, a module capable of exploiting the information provided by the map has been developed, so that the relevant elements close to the vehicle that affect the route, such as lanes, intersections, and regulatory elements, are monitored.
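
    A global route planner over an HD map can be illustrated, under the assumption that the map is reduced to a lane-level graph, with a plain Dijkstra search. The lane ids, lengths, and function names below are hypothetical and are not taken from the project.

```python
import heapq

def plan_global_route(lane_graph, start_lane, goal_lane):
    """Dijkstra search over a lane-level graph extracted from an HD map.
    lane_graph maps a lane id to a list of (successor_lane_id, length_m) pairs."""
    queue = [(0.0, start_lane, [start_lane])]
    visited = set()
    while queue:
        cost, lane, path = heapq.heappop(queue)
        if lane == goal_lane:
            return cost, path
        if lane in visited:
            continue
        visited.add(lane)
        for succ, length in lane_graph.get(lane, []):
            if succ not in visited:
                heapq.heappush(queue, (cost + length, succ, path + [succ]))
    return float("inf"), []

# Hypothetical toy lane graph; ids and lengths are illustrative only.
lane_graph = {
    "lane_1": [("lane_2", 35.0), ("lane_3", 50.0)],
    "lane_2": [("lane_4", 20.0)],
    "lane_3": [("lane_4", 15.0)],
    "lane_4": [],
}
print(plan_global_route(lane_graph, "lane_1", "lane_4"))  # (55.0, ['lane_1', 'lane_2', 'lane_4'])
```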

    Autonomous driving support systems suited to a 4-wheel skid-steer robotic platform: perception, motion, and simulation

    Mobile robotics competitions play an important role in bringing science and engineering to the general public. They also provide a space dedicated to testing and comparing different strategies and approaches to the many challenges of mobile robotics. One of the formats that has attracted the most interest from the promoters of such initiatives and from the general public is the autonomous driving competition. Typically, autonomous driving competitions try to replicate an environment similar to a traditional road structure, in which autonomous systems must respond to a wide variety of challenges, from lane detection and interaction with the distinct elements of a typical road structure to trajectory planning and localization. The aim of this master's thesis is to document the process of designing a 4-wheel skid-steer mobile robotic platform and equipping it to carry out autonomous driving tasks in a structured environment, on a track that replicates a motorized roadway with basic signage and some obstacles. In parallel, the dissertation also makes a qualitative comparison between the simulation process and the transposition of the developed algorithm to a physical robotic platform, analysing the differences in performance and behavior.
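
    For context, a 4-wheel skid-steer platform is commonly commanded like a differential drive, with the two wheels on each side driven at the same speed. The sketch below illustrates only that mapping; the parameter values and function name are illustrative assumptions, not the platform described in the thesis.

```python
def skid_steer_wheel_speeds(v, omega, track_width, wheel_radius):
    """Map a body-frame command (v [m/s], omega [rad/s]) to left/right wheel
    angular speeds [rad/s], treating both wheels on each side as one unit."""
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left / wheel_radius, v_right / wheel_radius

# Example: 0.5 m/s forward while turning at 0.4 rad/s, 0.4 m track, 0.06 m wheels.
print(skid_steer_wheel_speeds(0.5, 0.4, 0.4, 0.06))  # (~7.0, ~9.67) rad/s
```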

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to manually program decision-making logic for every eventuality. While recent developments in data-driven solutions such as deep learning allow machines to learn effectively from large datasets, applications of these techniques within safety-critical systems such as driverless cars remain scarce.

    Autonomous vehicles need to be able to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road or highway environments and has largely discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable; only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility, specifically multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions.

    To facilitate the autonomous decisions necessary to derive safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are specific datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research activity. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It utilizes an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments, enabling an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which require large datasets to generalize to unfamiliar surroundings.

    The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence. This is imperative within intelligent mobility, where an autonomous vehicle should be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging (LiDAR) sensors. The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding 3-dimensional data are converted to a Fisher vector representation before being classified by a deep convolutional neural network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%; when compared to an alternative point-cloud classifier, PointNet [1], [2], the proposed framework outperformed it on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, constitutes the major contribution of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
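
    One simple way to realise uncertainty-aware fusion of camera and ultrasonic free-space estimates, as described above, is inverse-variance weighting over a discretised area in front of the vehicle. The sketch below is an illustrative assumption, not the thesis' actual mechanism; all names and values are hypothetical.

```python
import numpy as np

def fuse_free_space(camera_prob, camera_var, ultrasound_prob, ultrasound_var):
    """Inverse-variance weighting of per-cell free-space probabilities from a
    camera model and an ultrasonic model, over a grid or sector discretisation."""
    w_cam = 1.0 / np.maximum(camera_var, 1e-6)
    w_us = 1.0 / np.maximum(ultrasound_var, 1e-6)
    return (w_cam * camera_prob + w_us * ultrasound_prob) / (w_cam + w_us)

# Toy example: three sectors; the ultrasonic reading is more certain in sector 0.
cam_p = np.array([0.9, 0.6, 0.2]); cam_v = np.array([0.10, 0.05, 0.05])
us_p = np.array([0.3, 0.7, 0.2]);  us_v = np.array([0.02, 0.20, 0.20])
print(fuse_free_space(cam_p, cam_v, us_p, us_v))
```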

    Older Pedestrian Characteristics for Use in Highway Design

    DTFH61-91-C-00028
    The objective of this project was to develop traffic planning and engineering guidelines for the design of pedestrian facilities that are sensitive to the needs of older pedestrians. A detailed task analysis and literature review were conducted to identify the aspects of the pedestrian's tasks that are difficult for older persons, including motor, sensory, perceptual, cognitive, and behavioral factors. Several activities were undertaken to identify specific problems experienced by older pedestrians that could be addressed by changes in design standards and operational practices. These activities included analysis of accident exposure data, a survey of older pedestrians, focus group discussions, and a survey of practitioners. It was determined that older pedestrians experience difficulties at signalized intersections and often do not have sufficient time to cross. A field study was conducted to determine the walking speed, startup time, and stride length of older pedestrians; more than 7,000 pedestrians in 4 cities were observed to measure these parameters. Specific recommendations for changes to highway design and operational practices are described.
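
    The relevance of walking speed and startup time to signal timing can be seen from the basic clearance-time relation below; the numbers in the example are illustrative assumptions, not findings of the study.

```python
def pedestrian_clearance_time(crossing_width_m, walking_speed_mps, startup_time_s):
    """Minimum walk-interval time needed to clear a crossing:
    startup delay plus crossing width divided by walking speed."""
    return startup_time_s + crossing_width_m / walking_speed_mps

# Illustrative comparison: a 15 m crossing at 1.2 m/s versus a slower 0.9 m/s
# walker, both with a 2.5 s startup delay (values are assumptions).
print(pedestrian_clearance_time(15.0, 1.2, 2.5))  # 15.0 s
print(pedestrian_clearance_time(15.0, 0.9, 2.5))  # ~19.2 s
```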

    A comparison among deep learning techniques in an autonomous driving context

    Nowadays, artificial intelligence is one of the research fields receiving ever more attention. The growth in the computational power available to researchers and developers is reviving the potential that had been expressed theoretically at the dawn of artificial intelligence. Among all the fields of artificial intelligence, the one currently attracting the most interest is autonomous driving: many car manufacturers and the most renowned American colleges are investing ever more resources in this technology. Surveying and describing the broad spectrum of technologies available for autonomous driving is part of the comparison carried out in this work. The case study centres on a company that, starting from scratch, would like to develop an autonomous driving system without data, in a short time, and using only sensors built in-house. Starting from neural networks and classical algorithms, the survey extends to algorithms such as A3C in order to cover the full spectrum of possibilities. The selected technologies are compared in two experiments. The first is a pure computer-vision experiment using DeepTesla, comparing traditional computer-vision techniques, CNNs, and CNNs combined with LSTMs; the goal is to identify which algorithm performs best when processing images alone. The second is an experiment on CARLA, a simulator based on Unreal Engine, in which the results obtained in the simulated environment with CNNs combined with LSTMs are compared with those obtained with A3C; the goal is to understand whether these techniques can drive autonomously using the data provided by the simulator. The comparison aims to identify the critical issues and possible future improvements of each of the proposed algorithms, so as to find a feasible solution that yields good results in a short time.
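
    The CNN+LSTM configuration compared in both experiments can be sketched generically as a per-frame convolutional encoder followed by a recurrent layer that regresses the steering command. The layer sizes, input resolution, and names below are assumptions, not the thesis' exact model.

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(seq_len=10, height=66, width=200, channels=3):
    """Frames -> per-frame CNN features -> LSTM over time -> steering angle."""
    return models.Sequential([
        layers.Input(shape=(seq_len, height, width, channels)),
        layers.TimeDistributed(layers.Conv2D(24, 5, strides=2, activation="relu")),
        layers.TimeDistributed(layers.Conv2D(36, 5, strides=2, activation="relu")),
        layers.TimeDistributed(layers.Conv2D(48, 3, strides=2, activation="relu")),
        layers.TimeDistributed(layers.Flatten()),
        layers.LSTM(64),
        layers.Dense(1),  # predicted steering angle
    ])

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="mse")
```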