On Offline Evaluation of Vision-based Driving Models
Autonomous driving models should ideally be evaluated by deploying them on a
fleet of physical vehicles in the real world. Unfortunately, this approach is
not practical for the vast majority of researchers. An attractive alternative
is to evaluate models offline, on a pre-collected validation dataset with
ground truth annotation. In this paper, we investigate the relation between
various online and offline metrics for evaluation of autonomous driving models.
We find that offline prediction error is not necessarily correlated with
driving quality, and two models with identical prediction error can differ
dramatically in their driving performance. We show that the correlation of
offline evaluation with driving quality can be significantly improved by
selecting an appropriate validation dataset and suitable offline metrics. The
supplementary video can be viewed at
https://www.youtube.com/watch?v=P8K8Z-iF0cY
Comment: Published at the ECCV 2018 conference
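A minimal sketch of the kind of analysis this paper motivates: checking how well an offline prediction metric tracks online driving quality across models. The metric choice (steering MAE), the per-model scores, and the success rates below are illustrative placeholders, not the paper's data.

```python
import numpy as np

def offline_mae(pred_steering: np.ndarray, gt_steering: np.ndarray) -> float:
    """Offline metric: mean absolute error of predicted vs. ground-truth steering."""
    return float(np.mean(np.abs(pred_steering - gt_steering)))

def metric_vs_driving_correlation(offline_scores, online_success_rates) -> float:
    """Pearson correlation between an offline metric and online success rates.

    A strongly negative value means lower offline error tracks better driving;
    a value near zero reproduces the paper's warning that prediction error
    alone may not predict driving quality.
    """
    return float(np.corrcoef(offline_scores, online_success_rates)[0, 1])

# Hypothetical study: five models with their offline MAEs and online success
# rates. Note the two models with identical MAE but very different driving.
maes = np.array([0.08, 0.09, 0.09, 0.12, 0.15])
success_rates = np.array([0.91, 0.62, 0.88, 0.70, 0.55])
print(metric_vs_driving_correlation(maes, success_rates))
```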
End-to-End Learning Utilizing Temporal Information for Vision-Based Autonomous Driving
End-to-end learning models trained with conditional imitation learning (CIL) have demonstrated their ability to drive autonomously in dynamic environments. The performance of such models is limited, however, as most of them fail to exploit the temporal information residing in a sequence of observations. In this work, we explore the use of temporal information with a recurrent network to improve driving performance. We propose a model that combines a deeper, pre-trained convolutional neural network, to better capture image features, with a long short-term memory network, to better exploit temporal information. Experimental results indicate that the proposed model outperforms state-of-the-art models on several tasks in the CARLA benchmark. In particular, on the most challenging task, navigation in dynamic environments, we achieve a 96% success rate under training conditions while other CIL-based models reach 82-92%, and 88% under new-town and new-weather conditions while other CIL-based models reach 42-90%. A subsequent ablation study shows that all the major features of the proposed model are essential for improving performance. We therefore believe that this work contributes significantly towards safe, efficient, clean autonomous driving for future smart cities.
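The architecture described above lends itself to a compact sketch. The PyTorch code below outlines one plausible reading of it, assuming a ResNet-34 backbone, a 256-unit LSTM, and four CIL command branches (follow, left, right, straight); the paper's actual backbone, layer sizes, and heads may differ.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CILRecurrentModel(nn.Module):
    def __init__(self, hidden_size: int = 256, num_commands: int = 4):
        super().__init__()
        backbone = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
        # Drop the classification head; keep the convolutional feature extractor.
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        # One control branch per high-level command, in the CIL style.
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden_size, 128), nn.ReLU(),
                           nn.Linear(128, 3))  # steer, throttle, brake
             for _ in range(num_commands)]
        )

    def forward(self, frames: torch.Tensor, command: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W); command: (batch,) integer command index.
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).flatten(1)  # (b*t, 512)
        out, _ = self.lstm(feats.view(b, t, -1))               # (b, t, hidden)
        last = out[:, -1]                                      # final timestep
        actions = torch.stack([branch(last) for branch in self.branches], dim=1)
        return actions[torch.arange(b), command]               # (b, 3)
```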
An Overview about Emerging Technologies of Autonomous Driving
Since DARPA launched the Grand Challenge in 2004 and the Urban Challenge in 2007, autonomous driving has been the most active field of AI applications. This paper gives an overview of the technical aspects of autonomous driving technologies and the open problems. We survey the major fields of self-driving systems, such as perception, mapping and localization, prediction, planning and control, simulation, V2X, and safety. In particular, we elaborate on all these issues within a data closed-loop framework, a popular platform for solving the long-tailed problems of autonomous driving.
Incremental Adversarial Domain Adaptation for Continually Changing Environments
Continuous appearance shifts such as changes in weather and lighting conditions can impact the performance of deployed machine learning models. While unsupervised domain adaptation aims to address this challenge, current approaches do not utilise the continuity of the occurring shifts. Many robotics applications in particular exhibit these conditions, and thus offer the potential to incrementally adapt a learnt model over minor shifts that accumulate into large differences over time. Our work presents an adversarial approach for lifelong, incremental domain adaptation which benefits from unsupervised alignment to a series of intermediate domains that successively diverge from the labelled source domain. We empirically demonstrate that our incremental approach improves handling of large appearance changes, e.g. day to night, on a traversable-path segmentation task compared with a direct, single-alignment-step approach. Furthermore, by approximating the feature distribution of the source domain with a generative adversarial network, the deployment module can be made fully independent of retaining potentially large amounts of source training data, for only a minor reduction in performance.
Comment: International Conference on Robotics and Automation (ICRA) 2018
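The PyTorch sketch below illustrates the incremental alignment loop this abstract describes: for each newly encountered domain, a discriminator learns to separate source-like features from current-target features while the encoder is updated to fool it, and source features come from a generator rather than stored source data. The module interfaces (encoder, discriminator, source_feature_gen) are placeholder assumptions, not the authors' code.

```python
from itertools import cycle, islice

import torch
import torch.nn as nn

def adapt_to_new_domain(encoder, discriminator, source_feature_gen,
                        target_loader, steps: int = 1000, lr: float = 1e-4):
    bce = nn.BCEWithLogitsLoss()
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr)
    opt_e = torch.optim.Adam(encoder.parameters(), lr=lr)
    for images in islice(cycle(target_loader), steps):
        n = images.size(0)
        with torch.no_grad():
            src_feats = source_feature_gen(n)  # GAN stand-in for retained source data
        tgt_feats = encoder(images)

        # 1) Discriminator: label source-like features 1, target features 0.
        d_loss = (bce(discriminator(src_feats), torch.ones(n, 1)) +
                  bce(discriminator(tgt_feats.detach()), torch.zeros(n, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # 2) Encoder: fool the discriminator so target features look source-like.
        e_loss = bce(discriminator(tgt_feats), torch.ones(n, 1))
        opt_e.zero_grad(); e_loss.backward(); opt_e.step()
    # The adapted encoder becomes the starting point for the next minor shift.
    return encoder
```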
Autonomy 2.0: The Quest for Economies of Scale
With the advancement of robotics and AI technologies in the past decade, we have now entered the age of autonomous machines. In this new age of information technology, autonomous machines such as service robots, autonomous drones, delivery robots, and autonomous vehicles, rather than humans, will provide services. In this article, by examining the technical challenges and economic impact of the digital economy, we argue that scalability is both highly necessary from a technical perspective and significantly advantageous from an economic perspective, and is thus the key for the autonomy industry to achieve its full potential. Nonetheless, the current development paradigm, dubbed Autonomy 1.0, scales with the number of engineers rather than with the amount of data or compute resources, preventing the autonomy industry from fully benefiting from economies of scale, especially the exponentially cheapening cost of compute and the explosion of available data. We further analyze the key scalability blockers and explain how a new development paradigm, dubbed Autonomy 2.0, can address these problems to greatly boost the autonomy industry.
3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection
Cameras are a crucial exteroceptive sensor for self-driving cars as they are
low-cost and small, provide appearance information about the environment, and
work in various weather conditions. They can be used for multiple purposes such
as visual navigation and obstacle detection. We can use a surround multi-camera
system to cover the full 360-degree field-of-view around the car. In this way,
we avoid blind spots which can otherwise lead to accidents. To minimize the
number of cameras needed for surround perception, we utilize fisheye cameras.
Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of multiple cameras rather than treating each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we
describe the camera calibration and subsequent processing pipeline for
multi-fisheye-camera systems developed as part of the V-Charge project. This
project seeks to enable automated valet parking for self-driving cars. Our
pipeline is able to precisely calibrate multi-camera systems, build sparse 3D
maps for visual navigation, visually localize the car with respect to these
maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
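One concrete point of adaptation the pipeline above mentions is the fisheye projection itself. The NumPy sketch below uses the equidistant fisheye model as a stand-in to show why standard pinhole pipelines need adapting: off-axis angles approaching 90 degrees still map to finite pixel radii, which is what eliminates blind spots with few cameras. The V-Charge calibration uses its own camera model, and the intrinsics here are made up.

```python
import numpy as np

def project_equidistant(point_cam: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Project a point in camera coordinates (x, y, z) to fisheye pixel coords.

    Pinhole: r = f * tan(theta), which diverges as theta approaches 90 degrees.
    Equidistant fisheye: r = f * theta, which keeps wide off-axis points at a
    finite pixel radius.
    """
    x, y, z = point_cam
    theta = np.arctan2(np.hypot(x, y), z)  # angle from the optical axis
    phi = np.arctan2(y, x)                 # azimuth around the axis
    return np.array([cx + fx * theta * np.cos(phi),
                     cy + fy * theta * np.sin(phi)])

# Hypothetical intrinsics; a point far off-axis still lands inside the image.
print(project_equidistant(np.array([1.0, 0.2, 0.3]), 320.0, 320.0, 640.0, 480.0))
```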
DeepIPCv2: LiDAR-powered Robust Environmental Perception and Navigational Control for Autonomous Vehicle
We present DeepIPCv2, an autonomous driving model that perceives the environment using a LiDAR sensor for more robust drivability, especially when driving under poor illumination conditions. DeepIPCv2 takes a set of LiDAR point clouds as its main perception input. As point clouds are unaffected by illumination changes, they provide a clear observation of the surroundings regardless of conditions. This results in better scene understanding and stable features from the perception module, which in turn support the controller module in properly estimating navigational controls. To evaluate its performance, we conduct several tests by deploying the model to predict a set of driving records and to perform real automated driving under three different conditions. We also conduct ablation and comparative studies with several recent models to justify its performance. Based on the experimental results, DeepIPCv2 shows robust performance by achieving the best drivability in all conditions. Code is available at https://github.com/oskarnatan/DeepIPCv
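As a rough illustration of how a LiDAR-first model like DeepIPCv2 might consume point clouds, the NumPy sketch below rasterizes a scan into a bird's-eye-view occupancy grid suitable for a convolutional perception module. The grid extents, resolution, and single occupancy channel are assumptions, not details from the paper.

```python
import numpy as np

def pointcloud_to_bev(points: np.ndarray, x_range=(-20.0, 20.0),
                      y_range=(-20.0, 20.0), cell: float = 0.2) -> np.ndarray:
    """points: (N, 3) array of LiDAR returns (x, y, z) in the vehicle frame."""
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((h, w), dtype=np.float32)
    # Keep only points inside the grid extents.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]
    cols = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    rows = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    grid[rows, cols] = 1.0  # occupancy; height/intensity channels could be added
    return grid

# Illumination never enters this representation, which is the robustness
# argument the abstract makes for LiDAR over cameras in poor lighting.
scan = np.random.uniform(-25, 25, size=(5000, 3))
print(pointcloud_to_bev(scan).shape, pointcloud_to_bev(scan).sum())
```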