Parameterized Synthetic Image Data Set for Fisheye Lens
Based on different projection geometries, a fisheye image can be represented as a
parameterized non-rectilinear image. Deep neural networks (DNNs) are one
solution for extracting the parameters that describe fisheye image features.
However, training a reasonable DNN prediction model requires a large number of
images. In this paper, we propose to extend the scale of the training dataset
using parameterized synthetic images, which effectively boosts image diversity
and avoids the data-scale limitation. To simulate different viewing angles and
distances, we apply controllable parameterized projection transformations. The
reliability of the proposed method is validated by testing on images captured by
our fisheye camera. The synthetic dataset is the first that can be extended to a
large-scale labeled fisheye image dataset. It is accessible via:
http://www2.leuphana.de/misl/fisheye-data-set/.
Comment: 2018 5th International Conference on Information Science and Control Engineering
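The abstract does not specify which parameterized projection is used; as an illustration only, a common choice is the equidistant fisheye model r = f·θ. A minimal sketch, where the intrinsics `f`, `cx`, `cy` are hypothetical values and varying them (plus the camera pose) is what parameterizes synthetic viewpoints and distances:

```python
import numpy as np

def equidistant_project(points_3d, f=300.0, cx=320.0, cy=240.0):
    """Project 3D camera-frame points with an equidistant fisheye model.

    Image radius is r = f * theta, where theta is the angle of the ray
    from the optical axis. This is one common fisheye parameterization,
    assumed here for illustration.
    """
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)   # angle from optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # equidistant mapping
    u = cx + r * np.cos(phi)
    v = cy + r * np.sin(phi)
    return np.stack([u, v], axis=1)

# A point on the optical axis lands at the principal point.
print(equidistant_project(np.array([[0.0, 0.0, 1.0]])))  # -> [[320. 240.]]
```

Sampling `f` and the camera pose over ranges would generate the kind of parameterized synthetic variation the abstract describes.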
FisheyeMultiNet: Real-time Multi-task Learning Architecture for Surround-view Automated Parking System.
Automated parking is a low-speed manoeuvring scenario which is quite unstructured and complex, requiring full 360° near-field sensing around the vehicle. In this paper, we discuss the design and implementation of an automated parking system from the perspective of camera-based deep learning algorithms. We provide a holistic overview of an industrial system covering the embedded system, use cases, and the deep learning architecture. We demonstrate a real-time multi-task deep learning network called FisheyeMultiNet, which detects all the necessary objects for parking on a low-power embedded system. FisheyeMultiNet runs at 15 fps for 4 cameras and performs three tasks, namely object detection, semantic segmentation, and soiling detection. To encourage further research, we release a partial dataset of 5,000 images containing semantic segmentation and bounding-box detection ground truth via the WoodScape project [Yogamani et al., 2019].
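The shared-encoder, multi-head layout implied by the abstract (one backbone, three task heads) can be sketched generically. The layer sizes, random weights, and head output dimensions below are illustrative stand-ins, not the FisheyeMultiNet architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical weights standing in for a trained shared encoder and
# three task heads, mirroring the multi-task layout in the abstract.
W_enc = rng.standard_normal((64, 32))
heads = {
    "detection":    rng.standard_normal((32, 4)),   # e.g. box regression
    "segmentation": rng.standard_normal((32, 10)),  # e.g. class logits
    "soiling":      rng.standard_normal((32, 2)),   # e.g. clean vs. soiled
}

def multitask_forward(x):
    feat = relu(x @ W_enc)                  # shared computation, run once
    return {name: feat @ W for name, W in heads.items()}

outputs = multitask_forward(rng.standard_normal(64))
print({name: out.shape for name, out in outputs.items()})
```

The design point is that the (expensive) encoder runs once per frame and only the cheap heads are task-specific, which is what makes multi-task inference feasible on a low-power embedded system.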
Near-field Perception for Low-Speed Vehicle Automation using Surround-view Fisheye Cameras
Cameras are the primary sensor in automated driving systems. They provide high
information density and are optimal for detecting road infrastructure cues laid
out for human vision. Surround-view camera systems typically comprise four
fisheye cameras with 190°+ field of view, covering the entire 360° around the
vehicle and focused on near-field sensing. They are the principal sensors for
low-speed, high-accuracy, close-range sensing applications such as automated
parking, traffic jam assistance, and low-speed emergency braking. In this work,
we provide a detailed survey of such vision systems, set in the context of an
architecture that can be decomposed into four modular components, namely
Recognition, Reconstruction, Relocalization, and Reorganization. We jointly call
this the 4R Architecture. We discuss how each component accomplishes a specific
aspect and argue that they can be synergized to form a complete perception
system for low-speed automation. We support this argument by presenting results
from previous works and by presenting architecture proposals for such a system.
Qualitative results are presented in the video at https://youtu.be/ae8bCOF77uY.
Comment: Accepted for publication at IEEE Transactions on Intelligent Transportation Systems
ADD: An Automatic Desensitization Fisheye Dataset for Autonomous Driving
Autonomous driving systems require many images for analyzing the surrounding
environment. However, there is little protection for private information in
these captured images, such as pedestrian faces or vehicle license plates,
which has become a significant issue. In this paper, in response to the call
for data-security laws and regulations, and exploiting the large field of view
(FoV) of the fisheye camera, we build the first Autopilot Desensitization
Dataset, called ADD, and formulate the first deep-learning-based image
desensitization framework, to promote the study of
image desensitization in autonomous driving scenarios. The compiled dataset
consists of 650K images, including different face and vehicle license plate
information captured by the surround-view fisheye camera. It covers various
autonomous driving scenarios, including diverse facial characteristics and
license plate colors. Then, we propose an efficient multitask desensitization
network called DesCenterNet as a benchmark on the ADD dataset, which can
perform face and vehicle license plate detection and desensitization tasks.
Based on ADD, we further provide an evaluation criterion for desensitization
performance, and extensive comparison experiments verify the effectiveness and
superiority of our method on image desensitization.
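The desensitization step itself amounts to redacting detector-reported regions. The box format and the mean-intensity redaction below are assumptions for illustration; in the paper's pipeline the regions would come from the face and license-plate detector:

```python
import numpy as np

def desensitize(image, boxes):
    """Redact detected regions by replacing them with their mean intensity.

    `boxes` are hypothetical detector outputs as (x0, y0, x1, y1) pixel
    coordinates with exclusive upper bounds. Any irreversible fill
    (blur, pixelation, constant color) would serve the same purpose.
    """
    out = image.astype(float).copy()
    for x0, y0, x1, y1 in boxes:
        out[y0:y1, x0:x1] = out[y0:y1, x0:x1].mean()
    return out

img = np.arange(16.0).reshape(4, 4)
red = desensitize(img, [(0, 0, 2, 2)])     # redact the top-left 2x2 patch
print(red[0, 0], red[1, 1])                # -> 2.5 2.5 (the patch mean)
```

Mean-fill is used here because it is trivially verifiable; an evaluation criterion like the one the abstract mentions would additionally check that no residual identity information survives the redaction.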