Self-Driving Cars: A Survey
We survey research on self-driving cars published in the literature focusing
on autonomous cars developed since the DARPA challenges, which are equipped
with an autonomy system that can be categorized as SAE level 3 or higher. The
architecture of the autonomy system of self-driving cars is typically organized
into the perception system and the decision-making system. The perception
system is generally divided into many subsystems responsible for tasks such as
self-driving-car localization, static obstacle mapping, moving obstacle
detection and tracking, road mapping, traffic signalization detection and
recognition, among others. The decision-making system is commonly partitioned
as well into many subsystems responsible for tasks such as route planning, path
planning, behavior selection, motion planning, and control. In this survey, we
present the typical architecture of the autonomy system of self-driving cars.
We also review research on relevant methods for perception and decision making.
Furthermore, we present a detailed description of the architecture of the
autonomy system of the self-driving car developed at the Universidade Federal
do Esp\'irito Santo (UFES), named Intelligent Autonomous Robotics Automobile
(IARA). Finally, we list prominent self-driving car research platforms
developed by academia and technology companies, and reported in the media.
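The modular decomposition the survey describes (a perception system feeding a decision-making cascade of route planning, path planning, behavior selection, motion planning, and control) can be sketched as a minimal pipeline. All class, field, and function names below are illustrative assumptions, not the API of any actual system:

```python
# Minimal sketch of the typical self-driving autonomy architecture described
# above: a perception stage builds a world model, which a decision-making
# stage turns into a driving command. Names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    pose: tuple = (0.0, 0.0, 0.0)                 # localization: (x, y, heading)
    static_obstacles: list = field(default_factory=list)
    moving_obstacles: list = field(default_factory=list)
    traffic_signs: list = field(default_factory=list)

def perceive(sensor_frame: dict) -> WorldModel:
    # In a real system each field is filled by a dedicated subsystem
    # (localization, obstacle mapping, detection and tracking, ...).
    return WorldModel(
        pose=sensor_frame.get("gps", (0.0, 0.0, 0.0)),
        moving_obstacles=sensor_frame.get("tracks", []),
    )

def decide(world: WorldModel) -> dict:
    # Decision-making cascade: route -> path -> behavior -> motion -> control,
    # collapsed here into one toy behavior selection step.
    behavior = "stop" if world.moving_obstacles else "cruise"
    return {"behavior": behavior, "steer": 0.0,
            "throttle": 0.0 if behavior == "stop" else 0.3}

command = decide(perceive({"gps": (10.0, 5.0, 0.1), "tracks": ["pedestrian"]}))
print(command["behavior"])  # a tracked pedestrian triggers "stop"
```

The point of the decomposition is that each stage can be developed and validated in isolation, exchanging only the world model and command interfaces.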
Embedded Platforms for Computer Vision-based Advanced Driver Assistance Systems: a Survey
Computer Vision, either alone or combined with other technologies such as
radar or Lidar, is one of the key technologies used in Advanced Driver
Assistance Systems (ADAS). Its role in understanding and analysing the driving
scene is of great importance, as can be seen from the number of ADAS
applications that use this technology. However, porting a vision algorithm to
an embedded automotive system is still very challenging, as there must be a
trade-off between several design requisites. Furthermore, there is not a
standard implementation platform, so different alternatives have been proposed
by both the scientific community and the industry. This paper aims to review
the requisites and the different embedded implementation platforms that can be
used for Computer Vision-based ADAS, with a critical analysis and an outlook to
future trends.
Comment: 10 pages. To be published in ITS World Congress 201
Can we unify monocular detectors for autonomous driving by using the pixel-wise semantic segmentation of CNNs?
Autonomous driving is a challenging topic that requires complex solutions in
perception tasks such as recognition of road, lanes, traffic signs or lights,
vehicles and pedestrians. Through years of research, computer vision has grown
capable of tackling these tasks with monocular detectors that can provide
remarkable detection rates with relatively low processing times. However, the
recent appearance of Convolutional Neural Networks (CNNs) has revolutionized
the computer vision field and has made possible approaches to perform full
pixel-wise semantic segmentation in times close to real time (even on hardware
that can be carried on a vehicle). In this paper, we propose to use full image
segmentation as an approach to simplify and unify most of the detection tasks
required in the perception module of an autonomous vehicle, analyzing major
concerns such as computation time and detection performance.
Comment: Extended abstract presented in IV16-WS Deepdriving
(http://iv2016.berkeleyvision.org/)
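The unification idea above (one pixel-wise class map replacing several per-task monocular detectors) can be sketched in a few lines: slicing the segmentation output by class id yields a binary mask per class, which plays the role of that class's detector. The class ids and the tiny 4x4 "image" below are illustrative assumptions, not the paper's setup:

```python
# Sketch: one semantic segmentation map unifies several detection tasks.
# Each per-pixel class prediction (e.g. the argmax over CNN logits) is
# sliced into one binary mask per class of interest.
CLASSES = {0: "road", 1: "lane_marking", 2: "vehicle", 3: "pedestrian"}

# Per-pixel class predictions for a tiny 4x4 image (illustrative values).
seg_map = [
    [0, 0, 2, 2],
    [0, 1, 2, 2],
    [0, 1, 0, 3],
    [0, 0, 0, 3],
]

def class_mask(seg, class_id):
    """Binary mask for one class: the 'detector' for that class."""
    return [[int(p == class_id) for p in row] for row in seg]

def pixel_count(mask):
    return sum(sum(row) for row in mask)

vehicle_mask = class_mask(seg_map, 2)
print(CLASSES[2], pixel_count(vehicle_mask))  # vehicle 4
```

A single forward pass thus serves road, lane, vehicle, and pedestrian detection at once, which is the computational appeal the abstract argues for.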
Review on Computer Vision Techniques in Emergency Situations
In emergency situations, actions that save lives and limit the impact of
hazards are crucial. In order to act, situational awareness is needed to decide
what to do. Geolocalized photos and video of the situations as they evolve can
be crucial in better understanding them and making decisions faster. Cameras
are almost everywhere these days, either in terms of smartphones, installed
CCTV cameras, UAVs or others. However, this poses challenges in big data and
information overload. Moreover, most of the time there is no disaster at any
given location, so humans aiming to detect sudden situations may not be as
alert as needed at any point in time. Consequently, computer vision tools can
be an excellent decision support. The range of emergencies in which computer
vision tools have been considered or used is very wide, and there is a great
overlap across related emergency research. Researchers tend to focus on
state-of-the-art systems that cover the same emergency they are studying,
overlooking important research in other fields. In order to expose this overlap,
the survey is divided along four main axes: the types of emergencies that have
been studied in computer vision, the objective that the algorithms can address,
the type of hardware needed and the algorithms used. Therefore, this review
provides a broad overview of the progress of computer vision covering all sorts
of emergencies.
Comment: 25 pages
Ego-Lane Analysis System (ELAS): Dataset and Algorithms
Decreasing costs of vision sensors and advances in embedded hardware have
boosted lane-related research (detection, estimation, and tracking) in the past two
decades. The interest in this topic has increased even more with the demand for
advanced driver assistance systems (ADAS) and self-driving cars. Although
extensively studied independently, there is still a need for studies that propose
a combined solution for the multiple problems related to the ego-lane, such as
lane departure warning (LDW), lane change detection, lane marking type (LMT)
classification, road marking detection and classification, and detection of
the presence of adjacent lanes (i.e., the immediate left and right lanes). In this paper,
we propose a real-time Ego-Lane Analysis System (ELAS) capable of estimating
ego-lane position, classifying LMTs and road markings, performing LDW and
detecting lane change events. The proposed vision-based system works on a
temporal sequence of images. Lane marking features are extracted in perspective
and Inverse Perspective Mapping (IPM) images that are combined to increase
robustness. The final estimated lane is modeled as a spline using a combination
of methods (Hough lines with Kalman filter and spline with particle filter).
Based on the estimated lane, all other events are detected. To validate ELAS
and cover the lack of lane datasets in the literature, a new dataset with more
than 20 different scenes (in more than 15,000 frames) and considering a variety
of scenarios (urban road, highways, traffic, shadows, etc.) was created. The
dataset was manually annotated and made publicly available to enable evaluation
of several events that are of interest for the research community (i.e., lane
estimation, change, and centering; road markings; intersections; LMTs;
crosswalks and adjacent lanes). ELAS achieved high detection rates in all
real-world events and proved to be ready for real-time applications.
Comment: 13 pages, 17 figures,
github.com/rodrigoberriel/ego-lane-analysis-system, and published by Image
and Vision Computing (IMAVIS)
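The Kalman-filter stage mentioned above (Hough lines smoothed by a Kalman filter) can be illustrated with a minimal one-dimensional filter tracking the ego-lane's lateral offset. The constant-position model, the noise values, and the measurements below are illustrative assumptions, not ELAS's actual tuning:

```python
# A minimal 1-D Kalman filter smoothing noisy lateral-offset measurements,
# in the spirit of ELAS's Hough-lines-plus-Kalman lane estimation stage.
def kalman_1d(measurements, q=0.01, r=0.5):
    """Constant-position model: predict inflates the variance by process
    noise q; update blends in each measurement with measurement noise r."""
    x, p = measurements[0], 1.0        # initial state and variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update toward measurement z
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noisy lateral-offset measurements (meters from lane center), illustrative.
zs = [0.10, 0.30, -0.05, 0.15, 0.12]
est = kalman_1d(zs)
print(round(est[-1], 3))  # smoothed estimate, less jumpy than the raw zs
```

The same predict/update pattern extends to the full lane model; ELAS additionally uses a spline with a particle filter for the curved part of the lane.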
Composition and Application of Current Advanced Driving Assistance System: A Review
Due to the growing awareness of driving safety and the development of
sophisticated technologies, advanced driving assistance system (ADAS) has been
equipped in more and more vehicles with higher accuracy and lower price. The
latest progress in this field calls for a review that sums up the
conventional knowledge of ADAS, state-of-the-art research, and novel
real-world applications. Such a review helps newcomers in this field acquire
basic knowledge more easily and may inspire other researchers with potential
directions for future development.
This paper gives a general introduction to ADAS by analyzing its hardware
support and computational algorithms. Different types of perception sensors
are introduced in terms of their internal feature classifications,
installation positions, supported ADAS functions, and pros and cons.
Comparisons between the sensors are summarized and illustrated in terms of
their inherent characteristics and their specific uses in each ADAS function.
The current algorithms for ADAS
functions are also collected and briefly presented in this paper from both
traditional methods and novel ideas. Additionally, discussions about the
definition of ADAS from different institutes are reviewed in this paper, and
future directions for ADAS in China are introduced in particular.
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward
Connected and autonomous vehicles (CAVs) will form the backbone of future
next-generation intelligent transportation systems (ITS) providing travel
comfort, road safety, along with a number of value-added services. Such a
transformation---which will be fuelled by concomitant advances in technologies
for machine learning (ML) and wireless communications---will enable a future
vehicular ecosystem that is better featured and more efficient. However, there
are lurking security problems related to the use of ML in such a critical
setting where an incorrect ML decision may not only be a nuisance but can lead
to loss of precious lives. In this paper, we present an in-depth overview of
the various challenges associated with the application of ML in vehicular
networks. In addition, we formulate the ML pipeline of CAVs and present various
potential security issues associated with the adoption of ML methods. In
particular, we focus on the perspective of adversarial ML attacks on CAVs and
outline a solution to defend against adversarial attacks in multiple settings.
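A canonical example of the adversarial ML attacks the abstract refers to is the fast gradient sign method (FGSM), shown here against a toy linear classifier. The weights, input, and epsilon are illustrative assumptions; a real attack targets a deep perception network, not a hand-built linear model:

```python
# Sketch of the fast gradient sign method (FGSM) on a toy linear classifier:
# perturb the input by epsilon in the direction that most lowers the score.
# For a linear model, the gradient of the score w.r.t. the input is just w.
def sign(v):
    return (v > 0) - (v < 0)

def score(w, x, b):
    """Linear decision score: positive -> class 'obstacle'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm(w, x, b, eps):
    """x' = x - eps * sign(d score / d x) = x - eps * sign(w)."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [0.8, -0.5, 0.3], 0.1      # illustrative model parameters
x = [0.5, 0.2, 0.4]               # clean input, classified as 'obstacle'
x_adv = fgsm(w, x, b, eps=0.6)    # bounded perturbation, |x' - x| <= 0.6

print(score(w, x, b) > 0)         # True: clean input detected as obstacle
print(score(w, x_adv, b) > 0)     # False: the perturbation flips the decision
```

The safety concern in a CAV setting is exactly this flip: a bounded, hard-to-notice perturbation of the input changing a detection outcome.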
Computer Vision Systems in Road Vehicles: A Review
The number of road vehicles significantly increased in recent decades. This
trend accompanied a build-up of road infrastructure and development of various
control systems to increase road traffic safety, road capacity and travel
comfort. Significant development has been made in traffic safety, and today's
systems increasingly include cameras and computer vision methods. Cameras are
used as part of the road infrastructure or in vehicles. This paper gives a
review of computer vision systems in vehicles from the standpoint of traffic
engineering. Safety problems of road vehicles are presented, the current state
of the art in in-vehicle vision systems is described, and open problems and
future research directions are discussed.
Comment: Part of the Proceedings of the Croatian Computer Vision Workshop,
CCVW 2013, Year
Joint Attention in Driver-Pedestrian Interaction: from Theory to Practice
Today, one of the major challenges that autonomous vehicles are facing is the
ability to drive in urban environments. Such a task requires communication
between autonomous vehicles and other road users in order to resolve various
traffic ambiguities. The interaction between road users is a form of
negotiation in which the parties involved have to share their attention
regarding a common objective or a goal (e.g. crossing an intersection), and
coordinate their actions in order to accomplish it. In this literature review
we aim to address the interaction problem between pedestrians and drivers (or
vehicles) from the joint attention point of view. More specifically, we will
discuss the theoretical background behind joint attention, its application to
traffic interaction and practical approaches to implementing joint attention
for autonomous vehicles.
Deep Learning for Large-Scale Traffic-Sign Detection and Recognition
Automatic detection and recognition of traffic signs plays a crucial role in
the management of the traffic-sign inventory. It provides an accurate and
timely way to manage the traffic-sign inventory with minimal human effort. In
the computer vision community, the recognition and detection of traffic signs is a
well-researched problem. A vast majority of existing approaches perform well on
traffic signs needed for advanced driver-assistance and autonomous systems.
However, this represents a relatively small number of all traffic signs (around
50 categories out of several hundred) and performance on the remaining set of
traffic signs, which are required to eliminate the manual labor in traffic-sign
inventory management, remains an open question. In this paper, we address the
issue of detecting and recognizing a large number of traffic-sign categories
suitable for automating traffic-sign inventory management. We adopt a
convolutional neural network (CNN) approach, the Mask R-CNN, to address the
full pipeline of detection and recognition with automatic end-to-end learning.
We propose several improvements that are evaluated on the detection of traffic
signs and result in an improved overall performance. This approach is applied
to detection of 200 traffic-sign categories represented in our novel dataset.
Results are reported on highly challenging traffic-sign categories that have
not yet been considered in previous works. We provide comprehensive analysis of
the deep learning method for the detection of traffic signs with large
intra-category appearance variation and show below 3% error rates with the
proposed approach, which is sufficient for deployment in practical applications
of traffic-sign inventory management.
Comment: Accepted for publication in IEEE Transactions on Intelligent
Transportation Systems
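Detection error rates like the sub-3% figure above are typically computed by matching predicted boxes to ground-truth boxes via intersection-over-union (IoU) and counting the unmatched ground truths as misses. The boxes and the 0.5 threshold below are illustrative assumptions, not the paper's evaluation protocol:

```python
# Sketch of IoU-based detection evaluation: match predictions to ground
# truth by IoU, then report the fraction of ground-truth signs missed.
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def error_rate(gt, pred, thr=0.5):
    """Miss rate: fraction of ground-truth boxes with no prediction
    overlapping at IoU >= thr."""
    matched = sum(any(iou(g, p) >= thr for p in pred) for g in gt)
    return 1.0 - matched / len(gt)

# Illustrative ground-truth and predicted traffic-sign boxes.
gt = [(0, 0, 10, 10), (20, 20, 30, 30), (50, 50, 60, 60)]
pred = [(1, 1, 11, 11), (21, 19, 31, 29)]
print(error_rate(gt, pred))  # one of three signs missed
```

A full evaluation would also count false positives and verify the predicted class, but the IoU matching step above is the core of the metric.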