
    Is it Safe to Drive? An Overview of Factors, Challenges, and Datasets for Driveability Assessment in Autonomous Driving

    With recent advances in learning algorithms and hardware development, autonomous cars have shown promise when operating in structured environments under good driving conditions. However, for complex, cluttered and unseen environments with high uncertainty, autonomous driving systems still frequently demonstrate erroneous or unexpected behaviors that could lead to catastrophic outcomes. Autonomous vehicles should ideally adapt to driving conditions; while this can be achieved through multiple routes, it would be beneficial as a first step to be able to characterize driveability in some quantified form. To this end, this paper aims to create a framework for investigating different factors that can impact driveability. Also, one of the main mechanisms for adapting autonomous driving systems to any driving condition is the ability to learn and generalize from representative scenarios. The machine learning algorithms that currently do so learn predominantly in a supervised manner and consequently need sufficient data for robust and efficient learning. Therefore, we also perform a comparative overview of 45 public driving datasets that enable learning and publish this dataset index at https://sites.google.com/view/driveability-survey-datasets. Specifically, we categorize the datasets according to use cases, and highlight the datasets that capture complicated and hazardous driving conditions and can therefore be better used for training robust driving models. Furthermore, by discussing which driving scenarios are not covered by existing public datasets and which driveability factors need more investigation and data acquisition, this paper aims to encourage both targeted dataset collection and the proposal of novel driveability metrics that enhance the robustness of autonomous cars in adverse environments.

    Self-Driving Cars: A Survey

    We survey research on self-driving cars published in the literature, focusing on autonomous cars developed since the DARPA challenges, which are equipped with an autonomy system that can be categorized as SAE level 3 or higher. The architecture of the autonomy system of self-driving cars is typically organized into the perception system and the decision-making system. The perception system is generally divided into many subsystems responsible for tasks such as self-driving-car localization, static obstacle mapping, moving obstacle detection and tracking, road mapping, traffic signalization detection and recognition, among others. The decision-making system is commonly partitioned as well into many subsystems responsible for tasks such as route planning, path planning, behavior selection, motion planning, and control. In this survey, we present the typical architecture of the autonomy system of self-driving cars. We also review research on relevant methods for perception and decision making. Furthermore, we present a detailed description of the architecture of the autonomy system of the self-driving car developed at the Universidade Federal do Espírito Santo (UFES), named Intelligent Autonomous Robotics Automobile (IARA). Finally, we list prominent self-driving car research platforms developed by academia and technology companies, and reported in the media.
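
    To make the perception / decision-making decomposition above concrete, the following is a minimal Python sketch of how such subsystems could be organised as interfaces; the class and method names are illustrative assumptions, not the actual IARA software architecture.

```python
# Minimal sketch of the perception / decision-making split described in the
# survey. All names here are illustrative placeholders, not the IARA codebase.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class WorldModel:
    pose: Tuple[float, float, float] = (0.0, 0.0, 0.0)        # x, y, heading
    static_obstacles: List[Tuple[float, float]] = field(default_factory=list)
    moving_obstacles: List[Tuple[float, float]] = field(default_factory=list)
    traffic_signals: List[str] = field(default_factory=list)


class PerceptionSystem:
    """Localization, obstacle mapping/tracking, road mapping, signal recognition."""
    def update(self, sensor_frame) -> WorldModel:
        return WorldModel()                  # stub: real subsystems fill this in


class DecisionMakingSystem:
    """Route planning -> path planning -> behavior selection -> motion planning -> control."""
    def command(self, world: WorldModel, goal) -> Tuple[float, float]:
        return (0.0, 0.0)                    # stub: (steering, throttle)


# One cycle of the typical autonomy loop:
#   world = PerceptionSystem().update(sensor_frame)
#   steering, throttle = DecisionMakingSystem().command(world, goal)
```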

    Integrating Inter-vehicular Communication, Vehicle Localization, and a Digital Map for Cooperative Adaptive Cruise Control with Target Detection Loss

    Adaptive Cruise Control (ACC) is an Advanced Driver Assistance System (ADAS) that enables vehicle following with desired inter-vehicular distances. Cooperative Adaptive Cruise Control (CACC) is an upgraded form of ACC that utilizes additional inter-vehicular wireless communication to share vehicle states such as acceleration, enabling shorter-gap following. Both ACC and CACC rely on range sensors such as radar to obtain the actual inter-vehicular distance for gap-keeping control. The range sensor may lose detection of the target, the preceding vehicle, on curvy roads or steep hills due to its limited angle of view. Unfavourable weather conditions, target selection failures, or hardware issues may also result in target detection loss. During target detection loss, the vehicle following system usually falls back to Cruise Control (CC), wherein the follower vehicle maintains a constant speed. In this work, we propose an alternative way to obtain the inter-vehicular distance during target detection loss so that vehicle following can continue. The proposed algorithm integrates inter-vehicular communication, accurate vehicle localization, and a digital map with lane center information to approximate the inter-vehicular distance. In-lab robot following experiments demonstrated that the proposed algorithm provided desirable inter-vehicular distance approximation. Although the algorithm is intended for vehicle following applications, it can also be used for other scenarios that demand approximation of vehicles' relative distances. The work also showcases our in-lab development effort on robotic emulation of traffic for connected and automated vehicles.
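
    As a concrete illustration of the map-based gap approximation idea, the sketch below projects the leader's and follower's shared positions onto a lane-centre polyline and takes the difference of their arc lengths; the projection and interpolation details are assumptions for illustration, not the authors' exact algorithm.

```python
# Hedged sketch: approximate the inter-vehicular gap by projecting both
# vehicles' (communicated) positions onto a lane-centre polyline from the
# digital map and differencing their arc lengths. Details are assumptions.
import math
from typing import List, Tuple

Point = Tuple[float, float]


def cumulative_arclength(centerline: List[Point]) -> List[float]:
    s = [0.0]
    for (x0, y0), (x1, y1) in zip(centerline, centerline[1:]):
        s.append(s[-1] + math.hypot(x1 - x0, y1 - y0))
    return s


def arclength_at(centerline: List[Point], s: List[float], p: Point) -> float:
    """Project p onto the nearest centreline segment and return its arc length."""
    best_d2, best_s = float("inf"), 0.0
    for i, ((x0, y0), (x1, y1)) in enumerate(zip(centerline, centerline[1:])):
        dx, dy = x1 - x0, y1 - y0
        seg_len2 = dx * dx + dy * dy or 1e-9
        t = max(0.0, min(1.0, ((p[0] - x0) * dx + (p[1] - y0) * dy) / seg_len2))
        qx, qy = x0 + t * dx, y0 + t * dy
        d2 = (p[0] - qx) ** 2 + (p[1] - qy) ** 2
        if d2 < best_d2:
            best_d2, best_s = d2, s[i] + t * math.sqrt(seg_len2)
    return best_s


def inter_vehicle_gap(centerline: List[Point], follower: Point, leader: Point) -> float:
    """Gap along the lane centre between the follower and leader positions."""
    s = cumulative_arclength(centerline)
    return arclength_at(centerline, s, leader) - arclength_at(centerline, s, follower)


# e.g. inter_vehicle_gap([(0, 0), (50, 0), (100, 10)], (5, 1), (40, -1)) -> 35.0
```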

    A Methodological Review of Visual Road Recognition Procedures for Autonomous Driving Applications

    The current research interest in autonomous driving is growing at a rapid pace, attracting great investments from both the academic and corporate sectors. In order for vehicles to be fully autonomous, it is imperative that the driver assistance system is adept at road and lane keeping. In this paper, we present a methodological review of techniques with a focus on visual road detection and recognition. We adopt a pragmatic outlook in presenting this review, whereby the procedures of road recognition are emphasised with respect to their practical implementations. The contribution of this review hence covers the topic in two parts: the first part describes the methodological approach to conventional road detection, covering the algorithms and approaches involved in classifying and segregating roads from non-road regions; the second part focuses on recent state-of-the-art machine learning techniques applied to visual road recognition, with an emphasis on methods that incorporate convolutional neural networks and semantic segmentation. A subsequent overview of recent implementations in the commercial sector is also presented, along with some recent research works pertaining to road detection. Comment: 14 pages, 6 Figures, 2 Tables. Permission to reprint granted from original figure author
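
    As a toy illustration of the conventional "classify and segregate roads from non-road regions" stage discussed in the first part of the review, the sketch below uses colour thresholding and morphological clean-up; the specific HSV thresholds are arbitrary assumptions, and practical systems rely on richer features or the learned models covered in the second part.

```python
# Toy road / non-road classification by colour thresholding (illustrative only;
# the HSV bounds are assumptions, not values from any of the surveyed methods).
import cv2
import numpy as np


def naive_road_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask marking pixels that look like asphalt."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Assume road surfaces appear as low-saturation, mid-brightness pixels.
    mask = cv2.inRange(hsv, (0, 0, 40), (180, 60, 200))
    kernel = np.ones((15, 15), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # drop small blobs
    return mask
```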

    Road Segmentation Using CNN with GRU

    This paper presents an accurate and fast algorithm for road segmentation using a convolutional neural network (CNN) and gated recurrent units (GRU). For autonomous vehicles, road segmentation is a fundamental task that can provide the drivable area for path planning. Existing deep neural network based segmentation algorithms usually take a very deep encoder-decoder structure to fuse pixels, which requires heavy computation, large memory, and long processing time. To this end, a CNN-GRU network model is proposed and trained to perform road segmentation using data captured by the front camera of a vehicle. The GRU network handles long spatial sequences with lower computational complexity compared to the traditional encoder-decoder architecture. The proposed road detector is evaluated on the KITTI road benchmark and achieves high accuracy for road segmentation at real-time processing speed. Comment: submitted to IV201
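
    The sketch below is one plausible PyTorch realisation of the CNN + GRU idea, scanning rows of the CNN feature map with a recurrent unit instead of stacking a deep decoder; the layer sizes and the row-wise scan are assumptions for illustration, not the paper's exact network.

```python
# Hedged CNN + GRU road-segmentation sketch (layer sizes and the row-wise
# recurrent scan are assumptions, not the authors' exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CnnGruRoadSeg(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Conv2d(2 * hidden, 1, kernel_size=1)

    def forward(self, x):
        b, _, h, w = x.shape
        f = self.encoder(x)                                   # (B, 64, H/4, W/4)
        bb, c, fh, fw = f.shape
        rows = f.permute(0, 2, 3, 1).reshape(bb * fh, fw, c)  # each row -> sequence
        out, _ = self.gru(rows)                               # (B*H/4, W/4, 2*hidden)
        out = out.reshape(bb, fh, fw, -1).permute(0, 3, 1, 2)
        logits = self.head(out)                               # coarse road logits
        return F.interpolate(logits, size=(h, w), mode="bilinear",
                             align_corners=False)


# e.g. road_logits = CnnGruRoadSeg()(torch.randn(1, 3, 128, 256))  # (1, 1, 128, 256)
```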

    Vision-based Navigation of Autonomous Vehicle in Roadway Environments with Unexpected Hazards

    Vision-based navigation of autonomous vehicles primarily depends on Deep Neural Network (DNN) based systems in which the controller obtains input from sensors/detectors, such as cameras, and produces a vehicle control output, such as a steering wheel angle, to navigate the vehicle safely in a roadway traffic environment. Typically, these DNN-based systems of the autonomous vehicle are trained through supervised learning; however, recent studies show that a trained DNN-based system can be compromised by perturbations or adversarial inputs. Similarly, such perturbations can be introduced into the DNN-based systems of an autonomous vehicle by unexpected roadway hazards, such as debris and roadblocks. In this study, we first introduce a hazardous roadway environment (containing both intentional and unintentional roadway hazards) that can compromise the DNN-based navigational system of an autonomous vehicle and produce an incorrect steering wheel angle, which can cause crashes resulting in fatalities and injuries. Then, we develop a DNN-based autonomous vehicle driving system using object detection and semantic segmentation to mitigate the adverse effect of this type of hazardous environment, which helps the autonomous vehicle navigate safely around such hazards. We find that our developed DNN-based autonomous vehicle driving system, including hazardous object detection and semantic segmentation, improves the navigational ability of an autonomous vehicle to avoid a potential hazard by 21% compared to the traditional DNN-based autonomous vehicle driving system. Comment: 17 pages, 12 image
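
    The sketch below is a hypothetical illustration of how detection and segmentation outputs might be fused to adjust the steering command around an on-road hazard; the module interfaces, thresholds, and adjustment values are assumptions, not the system developed in the paper.

```python
# Hypothetical fusion of hazard detections with a drivable-area mask to adjust
# the steering command; all interfaces and thresholds are assumptions.
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class Hazard:
    bbox: Tuple[int, int, int, int]   # x1, y1, x2, y2 in image coordinates
    score: float


def safe_steering(base_angle: float,
                  hazards: List[Hazard],
                  drivable_mask: np.ndarray,   # HxW boolean mask from segmentation
                  image_width: int,
                  score_thresh: float = 0.5) -> Tuple[float, float]:
    """Return (steering_angle, speed_scale) after accounting for on-road hazards."""
    speed_scale = 1.0
    for hz in hazards:
        if hz.score < score_thresh:
            continue
        x1, y1, x2, y2 = hz.bbox
        # Only react if the detected hazard overlaps the drivable area.
        if drivable_mask[y1:y2, x1:x2].any():
            center = (x1 + x2) / 2
            # Steer away from the hazard's side of the image and slow down.
            base_angle += 0.1 if center < image_width / 2 else -0.1
            speed_scale = min(speed_scale, 0.5)
    return base_angle, speed_scale
```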

    Joint Attention in Driver-Pedestrian Interaction: from Theory to Practice

    Today, one of the major challenges that autonomous vehicles are facing is the ability to drive in urban environments. Such a task requires communication between autonomous vehicles and other road users in order to resolve various traffic ambiguities. The interaction between road users is a form of negotiation in which the parties involved have to share their attention regarding a common objective or goal (e.g. crossing an intersection), and coordinate their actions in order to accomplish it. In this literature review, we aim to address the interaction problem between pedestrians and drivers (or vehicles) from a joint attention point of view. More specifically, we discuss the theoretical background behind joint attention, its application to traffic interaction, and practical approaches to implementing joint attention for autonomous vehicles.

    Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward

    Connected and autonomous vehicles (CAVs) will form the backbone of future next-generation intelligent transportation systems (ITS), providing travel comfort and road safety along with a number of value-added services. Such a transformation, which will be fuelled by concomitant advances in machine learning (ML) and wireless communication technologies, will enable a future vehicular ecosystem that is better featured and more efficient. However, there are lurking security problems related to the use of ML in such a critical setting, where an incorrect ML decision may not only be a nuisance but can lead to loss of precious lives. In this paper, we present an in-depth overview of the various challenges associated with the application of ML in vehicular networks. In addition, we formulate the ML pipeline of CAVs and present various potential security issues associated with the adoption of ML methods. In particular, we focus on the perspective of adversarial ML attacks on CAVs and outline a solution to defend against adversarial attacks in multiple settings.
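
    To make the adversarial-input threat concrete, the following is a minimal sketch of the standard fast gradient sign method (FGSM) in PyTorch; FGSM is a generic textbook attack used here purely for illustration, not the specific attacks or defences analysed in the paper.

```python
# Minimal FGSM sketch: nudge the input in the gradient-sign direction that
# increases the classifier's loss. Generic illustration, not the paper's setup.
import torch
import torch.nn.functional as F


def fgsm_perturb(model: torch.nn.Module,
                 image: torch.Tensor,
                 label: torch.Tensor,
                 epsilon: float = 0.01) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` (values kept in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```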

    Robust Lane Detection from Continuous Driving Scenes Using Deep Neural Networks

    Lane detection in driving scenes is an important module for autonomous vehicles and advanced driver assistance systems. In recent years, many sophisticated lane detection methods have been proposed. However, most methods focus on detecting the lane from one single image, which often leads to unsatisfactory performance in extremely bad situations such as heavy shadow, severe mark degradation, serious vehicle occlusion, and so on. In fact, lanes are continuous line structures on the road. Consequently, a lane that cannot be accurately detected in the current frame may potentially be inferred by incorporating information from previous frames. To this end, we investigate lane detection using multiple frames of a continuous driving scene, and propose a hybrid deep architecture combining a convolutional neural network (CNN) and a recurrent neural network (RNN). Specifically, information from each frame is abstracted by a CNN block, and the CNN features of multiple continuous frames, holding the property of a time series, are then fed into the RNN block for feature learning and lane prediction. Extensive experiments on two large-scale datasets demonstrate that the proposed method outperforms the competing methods in lane detection, especially in handling difficult situations. Comment: IEEE Transactions on Vehicular Technology, 69(1), 202
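
    Below is a hedged PyTorch sketch of per-frame CNN features being fed into a recurrent unit over time, as described above; the plain GRU, the fully connected decoder, and all layer sizes are simplifications assumed for illustration rather than the paper's actual encoder-decoder with convolutional recurrence.

```python
# Hedged multi-frame lane-detection sketch: CNN features per frame, a GRU over
# the frame sequence, and a coarse mask decoder. Sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiFrameLaneNet(nn.Module):
    def __init__(self, feat: int = 128, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 16)),
            nn.Flatten(),
            nn.Linear(64 * 8 * 16, feat),
        )
        self.rnn = nn.GRU(feat, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, 32 * 64)      # coarse 32x64 lane mask

    def forward(self, clips: torch.Tensor) -> torch.Tensor:   # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # per-frame features
        _, h_last = self.rnn(feats)                            # summary over frames
        mask = self.decoder(h_last[-1]).view(b, 1, 32, 64)
        return torch.sigmoid(F.interpolate(mask, scale_factor=4,
                                           mode="bilinear", align_corners=False))


# e.g. lane_prob = MultiFrameLaneNet()(torch.randn(2, 5, 3, 128, 256))  # (2, 1, 128, 256)
```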

    Caroline: An Autonomously Driving Vehicle for Urban Environments

    The 2007 DARPA Urban Challenge afforded a golden opportunity for the Technische Universität Braunschweig to demonstrate its ability to develop an autonomously driving vehicle and compete with the world's best. After several stages of qualification, our team CarOLO qualified early for the DARPA Urban Challenge Final Event and was among only eleven teams, out of an initial 89 competitors, to compete in the final. We benefited from working together in a large group of experts, each contributing expertise in their own discipline, and from significant organisational, financial and technical support from local sponsors, which helped us become the best non-US team. In this report, we describe the 2007 DARPA Urban Challenge, our contribution "Caroline", the technology and algorithms, along with her performance in the DARPA Urban Challenge Final Event on November 3, 2007. Comment: 68 pages, 7 figure