
    FisheyeMultiNet: Real-time Multi-task Learning Architecture for Surround-view Automated Parking System.

    Automated parking is a low-speed manoeuvring scenario that is quite unstructured and complex, requiring full 360° near-field sensing around the vehicle. In this paper, we discuss the design and implementation of an automated parking system from the perspective of camera-based deep learning algorithms. We provide a holistic overview of an industrial system covering the embedded platform, the use cases, and the deep learning architecture. We demonstrate a real-time multi-task deep learning network called FisheyeMultiNet, which detects all the objects necessary for parking on a low-power embedded system. FisheyeMultiNet runs at 15 fps for 4 cameras and performs three tasks: object detection, semantic segmentation, and soiling detection. To encourage further research, we release a partial dataset of 5,000 images containing semantic segmentation and bounding-box detection ground truth via the WoodScape project [Yogamani et al., 2019].
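The shared-encoder, three-head layout such a multi-task network implies can be sketched with toy linear layers. All names, layer sizes, and output dimensions below are illustrative assumptions, not FisheyeMultiNet's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

class MultiTaskNet:
    """Toy shared-encoder network with three task heads, mirroring the
    detection / segmentation / soiling split above (hypothetical sizes)."""

    def __init__(self, in_pixels=32 * 32, feat_dim=64, n_classes=4, n_boxes=8):
        self.w_enc = rng.standard_normal((in_pixels, feat_dim)) * 0.01    # shared backbone
        self.w_det = rng.standard_normal((feat_dim, n_boxes * 4)) * 0.01  # box-regressor head
        self.w_seg = rng.standard_normal((feat_dim, in_pixels * n_classes)) * 0.01  # per-pixel logits
        self.w_soil = rng.standard_normal((feat_dim, 2)) * 0.01           # clean-vs-soiled head
        self.n_classes = n_classes

    def forward(self, image):
        feats = image.reshape(-1) @ self.w_enc        # shared features, computed once per frame
        boxes = (feats @ self.w_det).reshape(-1, 4)   # detection: one (x, y, w, h) row per box
        seg = (feats @ self.w_seg).reshape(-1, self.n_classes)  # per-pixel class logits
        soiling = feats @ self.w_soil                 # two-way soiling classification logits
        return boxes, seg, soiling

net = MultiTaskNet()
boxes, seg, soiling = net.forward(np.zeros((32, 32)))
print(boxes.shape, seg.shape, soiling.shape)  # (8, 4) (1024, 4) (2,)
```

Computing the shared features once and fanning out to lightweight heads is what lets one network serve all three parking tasks within a fixed compute budget, the property the 15 fps figure depends on.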

    Near-field Perception for Low-Speed Vehicle Automation using Surround-view Fisheye Cameras

    Cameras are the primary sensor in automated driving systems. They provide high information density and are optimal for detecting road infrastructure cues laid out for human vision. Surround-view camera systems typically comprise four fisheye cameras with a 190°+ field of view, covering the entire 360° around the vehicle and focused on near-field sensing. They are the principal sensors for low-speed, high-accuracy, close-range sensing applications such as automated parking, traffic jam assistance, and low-speed emergency braking. In this work, we provide a detailed survey of such vision systems, setting up the survey in the context of an architecture that can be decomposed into four modular components, namely Recognition, Reconstruction, Relocalization, and Reorganization. We jointly call this the 4R Architecture. We discuss how each component accomplishes a specific aspect and provide a positional argument that they can be synergized to form a complete perception system for low-speed automation. We support this argument by presenting results from previous works and by presenting architecture proposals for such a system. Qualitative results are presented in the video at https://youtu.be/ae8bCOF77uY. Comment: Accepted for publication at IEEE Transactions on Intelligent Transportation Systems.
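The 4R decomposition can be pictured as a sequential pipeline in which each component enriches a shared frame state. This is only an illustrative assumption about how the four modules might compose; the stage bodies are placeholders, not the survey's actual algorithms:

```python
# Hypothetical sketch of the 4R Architecture as a pipeline of four stages,
# each reading and extending a shared per-frame dictionary.

def recognition(frame):
    frame["objects"] = ["pedestrian", "curb"]        # semantic understanding of the scene
    return frame

def reconstruction(frame):
    frame["depth_map"] = [[1.2, 3.4]]                # geometric structure from the cameras
    return frame

def relocalization(frame):
    frame["pose"] = (0.0, 0.0, 0.0)                  # ego position relative to a map
    return frame

def reorganization(frame):
    # Fuse the outputs of the other three components into one world model.
    frame["world_model"] = {k: frame[k] for k in ("objects", "depth_map", "pose")}
    return frame

def perceive(frame, stages=(recognition, reconstruction, relocalization, reorganization)):
    for stage in stages:
        frame = stage(frame)
    return frame

result = perceive({"image": None})
print(sorted(result["world_model"]))  # ['depth_map', 'objects', 'pose']
```

Keeping the four components behind a uniform interface is one way to realize the modularity the survey argues for: any stage can be swapped or upgraded without touching the others.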

    A Physical Test Artifact for Evaluating Edge Cases of Individual and Fused Automated Driving Perception Sensors

    With the advent of technologies to support autonomous vehicles (AVs), the number of AV models from a variety of companies and organizations has proliferated. With this increase in options comes the need to physically evaluate their perception systems, yet there is a lack of standard methods for doing so. A set of test artifacts can be used to compare the performance of perception systems, but the artifacts must be usable with different types of perception sensors and various sensor fusion systems. This thesis presents the development of an artifact that injects edge case scenarios into the environment through undetectable and detectable capabilities for light detection and ranging (LiDAR) and radar sensors. Once these artifact capabilities were validated through testing, an evaluation method was developed using this test artifact to compare colored point cloud datasets produced by two LiDAR-camera fusion systems and a stereo camera system. The proposed evaluation method ranks the sensor fusion systems through three metrics that describe the accuracy of their colored point cloud representation of both sides of the test artifact. The metrics measure the point cloud's ability to fill the correct amount of space the artifact occupies in the environment, the spread of the points within this coverage, and the variation in their color values. With this artifact, the evaluation method produced results consistent with prior observations, demonstrating the artifact's usefulness for physically comparing different sensor fusion systems.
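The three ranking metrics can be illustrated on a colored point cloud stored as an N×6 array (x, y, z, r, g, b). The voxel-based coverage, spread, and color-variation formulas below are plausible stand-ins under that assumption, not the thesis's exact definitions:

```python
import numpy as np

def evaluate_cloud(cloud, artifact_min, artifact_max, voxel=0.05):
    """Score a colored point cloud (N x 6: x, y, z, r, g, b) against the
    axis-aligned bounding box of a test artifact."""
    xyz, rgb = cloud[:, :3], cloud[:, 3:]
    inside = np.all((xyz >= artifact_min) & (xyz <= artifact_max), axis=1)
    pts = xyz[inside]
    # Metric 1 -- coverage: fraction of the artifact's voxels containing a point.
    dims = np.maximum(np.floor((artifact_max - artifact_min) / voxel).astype(int), 1)
    idx = np.minimum(np.floor((pts - artifact_min) / voxel).astype(int), dims - 1)
    coverage = len({tuple(i) for i in idx}) / dims.prod()
    # Metric 2 -- spread: mean per-axis standard deviation of in-bounds points.
    spread = float(pts.std(axis=0).mean()) if len(pts) else 0.0
    # Metric 3 -- color variation: mean per-channel std of the RGB values.
    color_var = float(rgb[inside].std(axis=0).mean()) if inside.any() else 0.0
    return coverage, spread, color_var

# Synthetic check: one gray point at the center of every voxel of a 0.1 m cube,
# so coverage is perfect and color variation is zero.
centers = (np.indices((2, 2, 2)).reshape(3, -1).T + 0.5) * 0.05
cloud = np.hstack([centers, np.full((8, 3), 0.5)])
cov, spread, cvar = evaluate_cloud(cloud, np.zeros(3), np.full(3, 0.1))
```

A higher coverage with lower spread and lower color variation on a uniformly colored face would rank a fusion system better under these stand-in definitions.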

    Advances in Automated Driving Systems

    Electrification, automation of vehicle control, digitalization and new mobility are the mega-trends in automotive engineering, and they are strongly connected. While many demonstrations of highly automated vehicles have been made worldwide, many challenges remain in bringing automated vehicles to market for private and commercial use. The main challenges are as follows: reliable machine perception; accepted standards for vehicle-type approval and homologation; verification and validation of functional safety, especially for SAE Level 3+ systems; legal and ethical implications; acceptance of vehicle automation by occupants and society; interaction between automated and human-controlled vehicles in mixed traffic; human–machine interaction and usability; manipulation, misuse and cyber-security; and the system costs of hardware, software and development efforts. This Special Issue was prepared in the years 2021 and 2022 and includes 15 papers with original research related to recent advances in the aforementioned challenges. The topics of this Special Issue cover: machine perception for SAE L3+ driving automation; trajectory planning and decision-making in complex traffic situations; X-by-Wire system components; verification and validation of SAE L3+ systems; misuse, manipulation and cybersecurity; human–machine interactions, driver monitoring and driver-intention recognition; road infrastructure measures for the introduction of SAE L3+ systems; and solutions for interactions between human- and machine-controlled vehicles in mixed traffic.