Multi-task near-field perception for autonomous driving using surround-view fisheye cameras
The formation of eyes led to the big bang of evolution. The dynamics changed from a primitive organism waiting for food to come into contact with it, to food being actively sought out using visual sensors. The human eye is one of the most sophisticated developments of evolution, but it still has defects. Over millions of years, humans have evolved a biological perception algorithm capable of driving cars, operating machinery, piloting aircraft, and navigating ships. Automating these capabilities for computers is critical for various applications, including self-driving cars, augmented reality, and architectural surveying. Near-field visual perception in the context of self-driving cars perceives the environment in a range of 0-10 meters with 360° coverage around the vehicle. It is a critical decision-making component in the development of safer automated driving. Recent advances in computer vision and deep learning, in conjunction with high-quality sensors such as cameras and LiDARs, have fueled mature visual perception solutions. Until now, far-field perception has been the primary focus. Another significant issue is the limited processing power available for developing real-time applications. Because of this bottleneck, there is frequently a trade-off between performance and run-time efficiency.
To address these issues, we concentrate on the following: 1) developing near-field perception algorithms with high performance and low computational complexity for various geometric and semantic visual perception tasks using convolutional neural networks; and 2) using multi-task learning to overcome computational bottlenecks by sharing initial convolutional layers between tasks and developing optimization strategies that balance the tasks.
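A minimal sketch of the multi-task setup described above, assuming PyTorch: a small shared encoder (the "initial convolutional layers") feeds a geometric head and a semantic head, and a fixed-weight sum balances the task losses. The layer sizes, head designs, and the names MultiTaskNet and total_loss are illustrative assumptions, not the thesis architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """Initial convolutional layers shared by all tasks (illustrative sizes)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)

class MultiTaskNet(nn.Module):
    """Shared encoder feeding lightweight task-specific heads."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = SharedEncoder()
        self.depth_head = nn.Conv2d(64, 1, 1)           # geometric task
        self.seg_head = nn.Conv2d(64, num_classes, 1)   # semantic task

    def forward(self, x):
        f = self.encoder(x)                             # computed once, reused by both heads
        return {"depth": self.depth_head(f), "seg": self.seg_head(f)}

def total_loss(outputs, targets, w_depth=0.5, w_seg=0.5):
    """Fixed-weight task balancing; real systems often learn or schedule these weights."""
    depth_loss = F.l1_loss(outputs["depth"], targets["depth"])
    seg_loss = F.cross_entropy(outputs["seg"], targets["seg"])  # targets["seg"]: class indices
    return w_depth * depth_loss + w_seg * seg_loss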
An Online Learning System for Wireless Charging Alignment using Surround-view Fisheye Cameras
Electric Vehicles are increasingly common, with inductive chargepads being
considered a convenient and efficient means of charging electric vehicles.
However, drivers are typically poor at aligning the vehicle to the necessary
accuracy for efficient inductive charging, making the automated alignment of
the two charging plates desirable. In parallel to the electrification of the
vehicular fleet, automated parking systems that make use of surround-view
camera systems are becoming increasingly popular. In this work, we propose a
system based on the surround-view camera architecture to detect, localize, and
automatically align the vehicle with the inductive chargepad. The visual design
of the chargepads is not standardized and not necessarily known beforehand.
Therefore, a system that relies on offline training will fail in some
situations. Thus, we propose a self-supervised online learning method that
leverages the driver's actions when manually aligning the vehicle with the
chargepad and combine it with weak supervision from semantic segmentation and
depth to learn a classifier to auto-annotate the chargepad in the video for
further training. In this way, when faced with a previously unseen chargepad,
the driver only needs to manually align the vehicle a single time. As the
chargepad is flat on the ground, it is not easy to detect it from a distance.
Thus, we propose using a Visual SLAM pipeline to learn landmarks relative to
the chargepad to enable alignment from a greater range. We demonstrate the
working system on an automated vehicle as illustrated in the video at
https://youtu.be/_cLCmkW4UYo. To encourage further research, we will share a
chargepad dataset used in this work.
Comment: Accepted for publication at IEEE Transactions on Intelligent Transportation Systems
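The self-supervised online learning loop might be sketched, under assumptions, as below in PyTorch. The helper pseudo_label_chargepad, the 5 cm flatness tolerance, and a pixel-wise classifier are hypothetical names introduced for illustration; the paper's actual auto-annotation combines the driver's manual alignment with the segmentation and depth cues in its own way.

import torch
import torch.nn as nn

def pseudo_label_chargepad(road_mask, depth, pad_region):
    """Weak auto-annotation: keep pixels that the driver's final alignment places
    over the chargepad, lie on the road surface, and look flat in the depth map."""
    flat_ground = (depth - depth.median()).abs() < 0.05     # ~5 cm tolerance (assumed)
    return (pad_region & road_mask & flat_ground).float()   # (H, W) binary label

def online_update(classifier, optimizer, frame, label, steps=5):
    """A few gradient steps on the newly auto-annotated frame, run in the vehicle."""
    criterion = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        logits = classifier(frame.unsqueeze(0))[0, 0]       # (H, W) chargepad logits
        loss = criterion(logits, label)
        loss.backward()
        optimizer.step()
    return loss.item()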
LiDAR-BEVMTN: Real-Time LiDAR Bird's-Eye View Multi-Task Perception Network for Autonomous Driving
LiDAR is crucial for robust 3D scene perception in autonomous driving. LiDAR
perception has the largest body of literature after camera perception. However,
multi-task learning across tasks like detection, segmentation, and motion
estimation using LiDAR remains relatively unexplored, especially on
automotive-grade embedded platforms. We present a real-time multi-task
convolutional neural network for LiDAR-based object detection, semantics, and
motion segmentation. The unified architecture comprises a shared encoder and
task-specific decoders, enabling joint representation learning. We propose a
novel Semantic Weighting and Guidance (SWAG) module that selectively transfers
semantic features to improve object detection. Our heterogeneous training
scheme combines diverse datasets and exploits complementary cues between tasks.
The work provides the first embedded implementation unifying these key
perception tasks from LiDAR point clouds, achieving 3 ms latency on the embedded
NVIDIA Xavier platform. We achieve state-of-the-art results for two tasks,
semantic and motion segmentation, and close to state-of-the-art performance for
3D object detection. By maximizing hardware efficiency and leveraging
multi-task synergies, our method delivers an accurate and efficient solution
tailored for real-world automated driving deployment. Qualitative results can
be seen at https://youtu.be/H-hWRzv2lIY
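A guessed, minimal rendering of the semantic weighting and guidance idea in PyTorch: the semantic branch gates and enriches the detection features. The SemanticGuidance class and its sigmoid gating are assumptions for illustration; the published SWAG module is more involved.

import torch
import torch.nn as nn

class SemanticGuidance(nn.Module):
    """Selectively transfer semantic features into the detection branch.
    Both inputs are assumed to share the same spatial resolution."""
    def __init__(self, sem_ch, det_ch):
        super().__init__()
        self.project = nn.Conv2d(sem_ch, det_ch, kernel_size=1)
        self.gate = nn.Sequential(nn.Conv2d(sem_ch, det_ch, kernel_size=1), nn.Sigmoid())

    def forward(self, det_feat, sem_feat):
        # Weight detection features by semantic relevance, then add projected guidance.
        return det_feat * self.gate(sem_feat) + self.project(sem_feat)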
UnRectDepthNet: Self-Supervised Monocular Depth Estimation using a Generic Framework for Handling Common Camera Distortion Models
In classical computer vision, rectification is an integral part of multi-view
depth estimation. It typically includes epipolar rectification and lens
distortion correction. This process simplifies the depth estimation
significantly, and thus it has been adopted in CNN approaches. However,
rectification has several side effects, including a reduced field of view
(FOV), resampling distortion, and sensitivity to calibration errors. The
effects are particularly pronounced in case of significant distortion (e.g.,
wide-angle fisheye cameras). In this paper, we propose a generic scale-aware
self-supervised pipeline for estimating depth, Euclidean distance, and visual
odometry from unrectified monocular videos. On the unrectified KITTI dataset
with barrel distortion, we demonstrate precision comparable to that achieved on
the rectified KITTI dataset. The intuition is that the rectification step
can be implicitly absorbed within the CNN model, which learns the distortion
model without increasing complexity. Our approach does not suffer from a
reduced field of view and avoids computational costs for rectification at
inference time. To further illustrate the general applicability of the proposed
framework, we apply it to wide-angle fisheye cameras with 190°
horizontal field of view. The training framework UnRectDepthNet takes in the
camera distortion model as an argument and adapts projection and unprojection
functions accordingly. The proposed algorithm is evaluated further on the KITTI
rectified dataset, and we achieve state-of-the-art results that improve upon
our previous work FisheyeDistanceNet. Qualitative results on a distorted test
scene video sequence indicate excellent performance
https://youtu.be/K6pbx3bU4Ss.
Comment: Minor fixes added after IROS 2020 Camera ready submission. IROS 2020
presentation video - https://www.youtube.com/watch?v=3Br2KSWZRr
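One way to picture the "distortion model as an argument" idea, kept in Python/NumPy: the training code only calls a camera object's project() (and, analogously, unproject()), so swapping the camera object swaps the geometry without touching the network. The equidistant fisheye mapping r = f·θ used here is a simple stand-in, not the exact distortion model used in the paper.

import numpy as np

class Pinhole:
    """Standard perspective projection."""
    def __init__(self, fx, fy, cx, cy):
        self.fx, self.fy, self.cx, self.cy = fx, fy, cx, cy

    def project(self, X, Y, Z):
        return self.fx * X / Z + self.cx, self.fy * Y / Z + self.cy

class EquidistantFisheye:
    """Simple fisheye model: image radius = f * theta, theta measured from the optical axis."""
    def __init__(self, f, cx, cy):
        self.f, self.cx, self.cy = f, cx, cy

    def project(self, X, Y, Z):
        r = np.sqrt(X**2 + Y**2)
        theta = np.arctan2(r, Z)
        # Near r = 0 the scale f*theta/r tends to f/Z, so guard the division.
        scale = np.where(r > 1e-9, self.f * theta / np.maximum(r, 1e-9), self.f / Z)
        return X * scale + self.cx, Y * scale + self.cy

def reproject(camera, X, Y, Z):
    """The photometric loss only needs this interface; the camera model is a parameter."""
    return camera.project(X, Y, Z)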
FisheyeMODNet: Moving Object Detection on Surround-view Cameras for Autonomous Driving
Moving Object Detection (MOD) is an important task for achieving robust autonomous driving. An autonomous vehicle has to estimate collision risk with other interacting objects in the environment and calculate an optimal trajectory. Collision risk is typically higher for moving objects than static ones due to the need to estimate the future states and poses of the objects for decision making. This is particularly important for near-range objects around the vehicle, which are typically detected by a fisheye surround-view system that captures a 360° view of the scene. In this work, we propose a CNN architecture for moving object detection using fisheye images captured in an autonomous driving environment. As motion geometry is highly non-linear and unique for fisheye cameras, we will make an improved version of the current dataset public to encourage further research. To target embedded deployment, we design a lightweight encoder sharing weights across sequential images. The proposed network runs at 15 fps on an Nvidia Jetson TX2 embedded GPU with an accuracy of 40% IoU and 69.5% mIoU.
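A rough PyTorch sketch of an encoder shared (weight-tied) across two sequential fisheye frames for motion segmentation; the channel sizes and the SiameseMotionNet name are placeholders rather than the published FisheyeMODNet configuration.

import torch
import torch.nn as nn

class SiameseMotionNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(                 # one encoder, reused for both frames
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(                 # fuses both frames' features
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, frame_t, frame_t1):
        f_t, f_t1 = self.encoder(frame_t), self.encoder(frame_t1)   # shared weights
        return self.decoder(torch.cat([f_t, f_t1], dim=1))          # moving / static logits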