The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping
Many tasks performed by autonomous vehicles such as road marking detection,
object tracking, and path planning are simpler in bird's-eye view. Hence,
Inverse Perspective Mapping (IPM) is often applied to remove the perspective
effect from a vehicle's front-facing camera and to remap its images into a 2D
domain, resulting in a top-down view. However, this leads to unnatural blurring and stretching of objects at greater distances, owing to the resolution of the camera, which limits applicability. In this paper, we present an
adversarial learning approach for generating a significantly improved IPM from
a single camera image in real time. The generated bird's-eye-view images
contain sharper features (e.g. road markings) and a more homogeneous
illumination, while (dynamic) objects are automatically removed from the scene,
thus revealing the underlying road layout in an improved fashion. We
demonstrate our framework using real-world data from the Oxford RobotCar
Dataset and show that scene understanding tasks directly benefit from our
boosted IPM approach.
Comment: equal contribution of first two authors, 8 full pages, 6 figures, accepted at IV 201
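The classical IPM that the boosted approach improves on can be written as a single planar homography. Below is a minimal OpenCV sketch under a flat-road assumption; the point correspondences and image sizes are illustrative placeholders, not calibration values from the RobotCar setup.

```python
import cv2
import numpy as np

# Load a front-facing camera frame (path is illustrative).
frame = cv2.imread("front_camera.png")

# Four points on the road in the image (px) and their assumed positions
# in the output bird's-eye-view image (px). In practice these come from
# the camera's intrinsic/extrinsic calibration; here they are placeholders.
src = np.float32([[520, 460], [760, 460], [1180, 700], [100, 700]])
dst = np.float32([[300, 0], [500, 0], [500, 800], [300, 800]])

# Homography mapping the road plane into a top-down view.
H = cv2.getPerspectiveTransform(src, dst)

# Warp the frame; distant pixels get stretched, which is exactly the
# blurring artifact the boosted (learned) IPM aims to remove.
bev = cv2.warpPerspective(frame, H, (800, 800))
cv2.imwrite("bev.png", bev)
```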
An Overview about Emerging Technologies of Autonomous Driving
Since DARPA started the Grand Challenge in 2004 and the Urban Challenge in 2007, autonomous driving has been one of the most active fields of AI application. This paper gives an overview of the technical aspects of autonomous driving technologies and their open problems. We investigate the major fields of self-driving systems, such as perception, mapping and localization, prediction, planning and control, simulation, V2X, and safety. In particular, we elaborate on all these issues within a data closed-loop framework, a popular platform for solving the long-tailed problems of autonomous driving.
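The data closed loop mentioned above is, in essence, a collect-mine-label-retrain-deploy cycle. The sketch below is a hypothetical outline of such a loop; every name in it is a placeholder rather than an actual platform API.

```python
def data_closed_loop(fleet, model, labeler, trainer, iterations=3):
    """Hypothetical data closed loop for long-tailed driving cases."""
    for _ in range(iterations):
        logs = fleet.collect(model)                  # deploy and record drives
        corner_cases = [x for x in logs
                        if model.uncertainty(x) > 0.8]  # mine rare/hard cases
        labeled = labeler.annotate(corner_cases)     # auto/manual labeling
        model = trainer.finetune(model, labeled)     # retrain on rare cases
        fleet.deploy(model)                          # close the loop
    return model
```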
AI-based framework for automatically extracting high-low features from NDS data to understand driver behavior
Our ability to detect and characterize unsafe driving behaviors in naturalistic driving environments and to associate them with road crashes will be a significant step toward developing effective crash countermeasures. Researchers have not yet fully achieved this goal, owing to limitations that include, but are not limited to, the high cost of data collection and the manual processes required to extract information from naturalistic driving study (NDS) data. In light of these limitations, the primary objective of this study is to develop an artificial intelligence (AI) framework for automatically extracting high- to low-level features from NDS data to explain driver behavior using a low-cost data collection method. Three objectives were formulated to address the identified research gaps. First, the study develops a low-cost data acquisition system for gathering naturalistic driving data. Second, it develops a framework that automatically extracts high- to low-level features, such as vehicle density, turning movements, and lane changes, from the data collected by that system. Third, it extracts information from the NDS data to gain a better understanding of car-following and other driving behaviors and to inform traffic safety countermeasures.

The first objective is met with a multifunctional smartphone application for collecting NDS data. The app comprises three major modules: a front-end user interface module, a sensor module, and a backend module. The front end, which is the application's user interface, provides a streamlined view that exposes the application's key features via a tab bar controller, compartmentalizing the critical components into separate views. The backend module provides computational resources that accelerate front-end query responses and is powered by Google Firebase. The sensor module uses CoreMotion, CoreLocation, and AVKit. CoreMotion collects motion and environmental data from the onboard hardware of iOS devices, including accelerometers, gyroscopes, pedometers, magnetometers, and barometers; CoreLocation determines a device's altitude, orientation, and geographical location, as well as its position relative to a nearby iBeacon device; and AVKit provides a high-level interface for video playback.

To achieve the second objective, we formulated the problem as both a classification and a time-series segmentation problem. Most existing driver maneuver detection methods treat it as a pure classification problem, assuming a discretized input signal with known start and end locations for each event or segment. In practice, however, the vehicle telemetry data used for detecting driver maneuvers are continuous; a fully automated driver maneuver detection system must therefore solve both time-series segmentation and classification. The five stages of our proposed methodology are: 1) data preprocessing, 2) event segmentation, 3) machine learning classification, 4) heuristic classification, and 5) frame-by-frame video annotation.
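The segmentation-then-classification formulation can be illustrated with a small sketch: a simple energy threshold stands in for the Energy Maximization Algorithm, and a random forest stands in for one of the classifiers. The window size, threshold, features, and synthetic gyroscope trace below are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment_events(gyro_z, win=50, energy_thresh=0.05):
    """Split a continuous gyroscope trace into candidate maneuver events.

    A simple stand-in for the Energy Maximization Algorithm: samples whose
    windowed signal energy exceeds a threshold are merged into events.
    """
    energy = np.convolve(gyro_z ** 2, np.ones(win) / win, mode="same")
    active = energy > energy_thresh
    events, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(active)))
    return events

def event_features(gyro_z, events):
    """Per-event summary statistics used as classifier inputs."""
    return np.array([[gyro_z[s:e].mean(), gyro_z[s:e].std(),
                      gyro_z[s:e].max(), gyro_z[s:e].min()]
                     for s, e in events])

# Synthetic yaw-rate trace: quiet driving plus three injected maneuvers.
rng = np.random.default_rng(0)
gyro_z = rng.normal(0.0, 0.05, 10_000)
for start in (2_000, 5_000, 8_000):
    gyro_z[start:start + 200] += 0.8 * np.sin(np.linspace(0, np.pi, 200))

events = segment_events(gyro_z)
X = event_features(gyro_z, events)
y = rng.integers(0, 3, size=len(events))   # placeholder maneuver labels
clf = RandomForestClassifier(n_estimators=50).fit(X, y)
print(events, clf.predict(X))
```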
The results indicate that the gyroscope reading is an exceptional parameter for extracting driving events, as its accuracy was consistent across all four models developed. The Energy Maximization Algorithm's accuracy ranged from 56.80 percent for left lane changes to 85.20 percent for right lane changes and lane keeping. All four models achieved accuracies comparable to studies that used similar models: the 1D-CNN model had the highest accuracy (98.99 percent), followed by the LSTM model (97.75 percent), the RF model (97.71 percent), and the SVM model (97.65 percent). Continuous signal data were annotated to serve as ground truth, and the proposed method outperformed the fixed-time-window approach. The overall pipeline's accuracy was assessed by penalizing the F1 scores of the ML models with the EMA's duration score (see the short illustration below); the resulting pipeline accuracy ranged between 56.8 percent and 85.0 percent.

The ultimate goal of this study was to extract variables from naturalistic driving videos that facilitate an understanding of driver behavior in a naturalistic driving environment. Three sub-goals were established to achieve this. First, we developed a framework for extracting features relevant to understanding driver behavior in the natural environment. Second, using the extracted features, we analyzed the car-following behavior of various demographic groups. Third, using a machine learning algorithm, we modeled the acceleration of both the ego vehicle and the leading vehicle. The findings indicate that younger drivers are more likely to be aggressive, that drivers tend to accelerate when the distance to the vehicle in front of them is substantial, and that older drivers maintain a significantly larger following distance than younger drivers.

These results have several safety implications. First, analyzing the driving behavior of different demographic groups will enable safety engineers to develop the most effective crash countermeasures by improving their understanding of the driving styles of different demographic groups and the causes of collisions. Second, the models developed to predict the acceleration of both the ego vehicle and the leading vehicle provide enough information to explain the behavior of the ego driver.
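As a toy illustration of the penalization step, one could discount a classifier's F1 score by the segmentation stage's duration score. The multiplicative form and the numbers below are assumptions for illustration only, not the study's exact formula.

```python
# Hypothetical pipeline score: classifier F1 discounted by how much of each
# event's true duration the segmentation stage recovered (assumed to combine
# multiplicatively; values are illustrative, not the study's).
f1_cnn = 0.9899         # 1D-CNN classification F1
duration_score = 0.86   # assumed fraction of event duration recovered by EMA
pipeline_score = f1_cnn * duration_score
print(round(pipeline_score, 3))  # 0.851
```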
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller (introspective explanations) or merely informed by the neural controller's output (rationalizations). Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles. In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output; some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong and weak alignment. These explainable systems represent an externalization of tacit knowledge: the network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes such systems brittle and potentially unsafe in situations that do not match the training data. In Chapter 5, we propose to address this issue by augmenting the training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, in which we train an end-to-end vehicle controller that accepts advice and adapts both the way it attends to the scene (visual attention) and its control outputs (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
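One way to read the strong-alignment idea is as an explicit penalty that pulls the explanation model's attention map toward the controller's. The sketch below is a minimal PyTorch illustration under that assumption; the KL-divergence form, tensor shapes, and weighting are not the authors' exact formulation.

```python
import torch

def attention_alignment_loss(controller_attn, explainer_attn, eps=1e-8):
    """KL divergence between two spatial attention maps of shape (B, H, W).

    Illustrative "strong alignment" penalty: the explanation model is pushed
    to attend to the same image regions as the vehicle controller.
    """
    b = controller_attn.shape[0]
    p = controller_attn.reshape(b, -1) + eps
    q = explainer_attn.reshape(b, -1) + eps
    p = p / p.sum(dim=1, keepdim=True)   # normalize to distributions
    q = q / q.sum(dim=1, keepdim=True)
    return (p * (p.log() - q.log())).sum(dim=1).mean()

# Toy usage with random maps standing in for the two models' attention.
ctrl = torch.rand(4, 10, 20)
expl = torch.rand(4, 10, 20)
print(attention_alignment_loss(ctrl, expl).item())
```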
Advancements in 3D Lane Detection Using LiDAR Point Clouds: From Data Collection to Model Development
Advanced Driver-Assistance Systems (ADAS) have successfully integrated
learning-based techniques into vehicle perception and decision-making. However,
their application in 3D lane detection for effective driving environment
perception is hindered by the lack of comprehensive LiDAR datasets. The sparse
nature of LiDAR point cloud data prevents an efficient manual annotation
process. To solve this problem, we present LiSV-3DLane, a large-scale 3D lane
dataset that comprises 20k frames of surround-view LiDAR point clouds with
enriched semantic annotation. Unlike existing datasets confined to a frontal
perspective, LiSV-3DLane provides a full 360-degree spatial panorama around the
ego vehicle, capturing complex lane patterns in both urban and highway
environments. We leverage the geometric traits of lane lines and the intrinsic
spatial attributes of LiDAR data to design a simple yet effective automatic
annotation pipeline for generating finer lane labels. To propel future
research, we propose a novel LiDAR-based 3D lane detection model, LiLaDet,
incorporating the spatial geometry learning of the LiDAR point cloud into
Bird's Eye View (BEV) based lane identification. Experimental results indicate
that LiLaDet outperforms existing camera- and LiDAR-based approaches in the 3D
lane detection task on the K-Lane dataset and our LiSV-3DLane.
Comment: 7 pages, 6 figures
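BEV-based lane identification from LiDAR typically starts by rasterizing the point cloud onto a ground-plane grid. The sketch below shows such a projection; the grid resolution, ranges, and the choice of per-cell maximum intensity are assumptions for illustration, not LiLaDet's actual encoder.

```python
import numpy as np

def lidar_to_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0),
                 resolution=0.2):
    """Rasterize LiDAR points (N, 4: x, y, z, intensity) into a BEV grid.

    Each cell keeps the maximum intensity, a crude cue for painted lane
    markings, which are usually more reflective than asphalt.
    """
    h = int((y_range[1] - y_range[0]) / resolution)
    w = int((x_range[1] - x_range[0]) / resolution)
    bev = np.zeros((h, w), dtype=np.float32)

    # Keep only points inside the grid.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    cols = ((pts[:, 0] - x_range[0]) / resolution).astype(int)
    rows = ((pts[:, 1] - y_range[0]) / resolution).astype(int)
    np.maximum.at(bev, (rows, cols), pts[:, 3])
    return bev

# Toy surround-view cloud: 100k random points with random intensities.
cloud = np.random.uniform(-60, 60, size=(100_000, 4)).astype(np.float32)
cloud[:, 3] = np.random.uniform(0, 1, 100_000)
print(lidar_to_bev(cloud).shape)  # (500, 500)
```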
An intelligent modular real-time vision-based system for environment perception
A significant portion of driving hazards is caused by human error and disregard for local driving regulations; consequently, an intelligent assistance system can be beneficial. This paper proposes a novel vision-based
modular package to ensure drivers' safety by perceiving the environment. Each
module is designed based on accuracy and inference time to deliver real-time
performance. As a result, the proposed system can be implemented on a wide
range of vehicles with minimum hardware requirements. Our modular package
comprises four main sections: lane detection, object detection, segmentation,
and monocular depth estimation. Each section is accompanied by novel techniques that improve the accuracy of the other sections and of the system as a whole. Furthermore, a
GUI is developed to display perceived information to the driver. In addition to using public datasets such as BDD100K, we have collected and annotated a local dataset that we use to fine-tune and evaluate our system. We show
that the accuracy of our system is above 80% in all the sections. Our code and
data are available at
https://github.com/Pandas-Team/Autonomous-Vehicle-Environment-Perception
Comment: Accepted in NeurIPS 2022 Workshop on Machine Learning for Autonomous Driving
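A hedged sketch of how the modular design described above could be wired together, with each perception section behind a common interface; the class and method names are placeholders, not the repository's actual API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class PerceptionPipeline:
    """Minimal modular pipeline: each module maps a frame to its output."""
    modules: Dict[str, Callable[[Any], Any]] = field(default_factory=dict)

    def register(self, name, module):
        self.modules[name] = module

    def perceive(self, frame):
        # Run every registered module independently; a GUI layer can then
        # overlay lanes, boxes, masks, and depth on the same frame.
        return {name: module(frame) for name, module in self.modules.items()}

# Placeholder modules standing in for the four sections.
pipeline = PerceptionPipeline()
pipeline.register("lanes", lambda frame: "lane polylines")
pipeline.register("objects", lambda frame: "bounding boxes")
pipeline.register("segmentation", lambda frame: "class masks")
pipeline.register("depth", lambda frame: "per-pixel depth map")
print(pipeline.perceive(frame="camera image"))
```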
Lane detection in autonomous vehicles : A systematic review
One of the essential systems in autonomous vehicles for ensuring the safety of drivers and passengers is the Advanced Driver Assistance System (ADAS). Adaptive Cruise Control, Automatic Braking/Steer Away, Lane-Keeping System, Blind Spot Assist, Lane Departure Warning System, and Lane Detection are examples of ADAS. Lane detection provides the vehicle's intelligent system with information on the geometrical features of lane line structures, indicating the position of lane markings. This article reviews the methods employed for lane detection in autonomous vehicles. A systematic literature review (SLR) has been carried out to analyze the most suitable approaches to road lane detection for the benefit of the automation industry. One hundred and two publications from well-known databases were chosen for this review. Trends were identified after thoroughly examining the selected articles, published between 2018 and 2021, on the methods implemented for road lane detection. The selected literature used various methods, with the input dataset being one of two types: self-collected or acquired from an online public dataset. The methodologies include geometric modeling and other traditional methods, while AI-based approaches include deep learning and machine learning. The use of deep learning has been increasingly researched throughout the last four years. Some studies used stand-alone deep learning implementations for lane detection problems, while other research focuses on merging deep learning with other machine learning techniques and classical methodologies. Recent advancements indicate that attention mechanisms have become a popular strategy in combination with deep learning methods. The use of deep learning algorithms in conjunction with other techniques has shown promising outcomes. This research aims to provide a complete overview of the literature on lane detection methods, highlighting which approaches are currently being researched and the performance of existing state-of-the-art techniques. The paper also covers the equipment used to collect the datasets and the datasets used for network training, validation, and testing. This review provides a valuable foundation on lane detection techniques, challenges, and opportunities, and supports new research in this automation field. For further study, it is suggested that more effort be put into improving accuracy, increasing speed, and handling the more challenging extreme conditions of road lane detection.
DESIGNING AN INDUCTIVE SENSOR FOR ROAD TRAFFIC MONITORING SYSTEMS AND CONTROL
The purpose of this study is to design an inductive sensor that detects a vehicle on the road. The main objectives are to design an inductive sensor using enameled copper wire and to interface it with an electronic circuit. The analysis of the experiments will be the main part of this project. A demonstration will then be held to show the sensing process using a working model.

This sensor can change some tasks from manual to automatic. Examples of situations in which this sensor can be implemented are controlling a barrier automatically at the main gates on roads, monitoring traffic on a narrow curved portion of a road, and counting the number of vehicles passing a particular point per unit time.

At present, there are many sensors available on the market that use inductive sensing. Many methods can be used to detect the presence of a vehicle, and a complete inductive sensor circuit has also been developed. The results from these methods will assist in the future work of this project.
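The detection principle behind such a sensor is that a metal vehicle above the loop lowers the coil's inductance and shifts the frequency of an attached oscillator. The counting sketch below summarizes that idea; the baseline frequency, shift threshold, and simulated trace are illustrative assumptions, not measured values from this project.

```python
def count_vehicles(freq_samples, baseline_hz=85_000.0, shift_hz=400.0):
    """Count vehicles from an inductive-loop oscillator frequency trace.

    A passing vehicle reduces the loop's inductance, raising the oscillator
    frequency; each excursion above baseline + shift counts as one vehicle.
    """
    count, over_loop = 0, False
    for f in freq_samples:
        if not over_loop and f > baseline_hz + shift_hz:
            count += 1          # rising edge: vehicle entered the loop
            over_loop = True
        elif over_loop and f < baseline_hz + shift_hz / 2:
            over_loop = False   # hysteresis: vehicle has left the loop
    return count

# Simulated trace: two vehicles passing over the loop.
trace = ([85_000] * 20 + [85_600] * 10 +
         [85_000] * 20 + [85_700] * 10 + [85_000] * 5)
print(count_vehicles(trace))  # 2
```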