
    A review and perspective on optical phased array for automotive LiDAR

    This paper aims to review the state of the art of Light Detection and Ranging (LiDAR) sensors for automotive applications, and particularly for automated vehicles, focusing on recent advances in the field of integrated LiDAR and one of its key components: the Optical Phased Array (OPA). LiDAR still divides the automotive community, with several automotive companies investing in it and others dismissing it as a ‘useless appendix’. However, no single sensor technology currently available can robustly and completely support automated navigation. Therefore LiDAR, with its capability to map the vehicle surroundings in three dimensions (3D), is a strong candidate to support Automated Vehicles (AVs). This manuscript highlights current AV sensor challenges and analyses the strengths and weaknesses of the perception sensors currently deployed. It then discusses the main LiDAR technologies emerging in automotive, focusing on integrated LiDAR, the challenges associated with light beam steering on a chip, and the use of Optical Phased Arrays, and finally discusses the factors currently hindering the adoption of silicon photonics OPAs and their future research directions.
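
The beam steering that makes OPAs attractive for chip-scale LiDAR reduces to applying a linear phase gradient across the emitter array. A minimal sketch of that phase profile is shown below; all parameter values (emitter count, pitch, wavelength, steering angle) are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def opa_phase_profile(n_elem, pitch_m, wavelength_m, steer_deg):
    """Per-emitter phase (rad) that steers an OPA's main lobe to steer_deg.

    Emitter k needs phase k * 2*pi * d * sin(theta) / lambda,
    wrapped into [0, 2*pi).
    """
    dphi = 2 * np.pi * pitch_m * np.sin(np.radians(steer_deg)) / wavelength_m
    return np.mod(np.arange(n_elem) * dphi, 2 * np.pi)

# Example: 64 emitters at 2 um pitch, 1550 nm light, 10 degree steering angle
phases = opa_phase_profile(64, 2e-6, 1550e-9, 10.0)
```

Sweeping `steer_deg` (e.g. via thermo-optic phase shifters) scans the beam without any moving parts, which is the core appeal over mechanical LiDAR.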

    Automotive radar – A signal processing perspective on current technology and future systems

    IEEE Distinguished Microwave Lecturer.
    Radar systems are a key technology of modern vehicle safety and comfort systems. Without doubt, it will only be the symbiosis of radar, LiDAR and camera-based sensor systems that can enable advanced autonomous driving functions soon. Several next-generation car models are announced to have more than 10 radar sensors per vehicle, allowing for the generation of a radar-based 360° surround view necessary for advanced driver assistance as well as semi-autonomous operation. Hence the demand from the automotive industry for high-precision, multi-functional radar systems is higher than ever before, and the increased requirements on functionality and sensor capabilities lead to research and development activities in the field of automotive radar systems in both industry and academia. Current automotive radar technology is almost exclusively based on the principle of frequency-modulated continuous-wave (FMCW) radar, which has been well known for several decades. However, together with an increase of hardware capabilities such as higher carrier frequencies, modulation bandwidths and ramp slopes, as well as a scaling up of simultaneously utilized transmit and receive channels with independent modulation features, new degrees of freedom have been added to traditional FMCW radar system design and signal processing. The presentation will accordingly introduce the topic with a review of the fundamentals of radar and FMCW radar. After introducing the system architecture of traditional and modern automotive FMCW radar sensors, with insights into, e.g., the concepts of distributed or centralized processing and sensor data fusion, the presentation will dive into the details of fast-chirp FMCW processing, the modulation mode used by the vast majority of current automotive FMCW radar systems.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
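
The fast-chirp processing described above can be sketched in a few lines: the beat frequency within one chirp encodes range, and the phase progression across chirps encodes Doppler, so a 2D FFT yields a range-Doppler map. The parameters below (77 GHz carrier, 1 GHz sweep, 256 samples x 128 chirps) are plausible assumptions for illustration, not values from the lecture.

```python
import numpy as np

# Illustrative fast-chirp parameters: 77 GHz carrier, 1 GHz sweep,
# 256 samples per chirp, 128 chirps per frame
c, fc = 3e8, 77e9
B, T = 1e9, 50e-6
n_samp, n_chirp = 256, 128
fs = n_samp / T

# Simulated beat signal for one target at range R with radial velocity v:
# beat frequency 2*B*R/(c*T) along fast time, Doppler 2*v*fc/c across chirps
R_true, v_true = 30.0, 10.0
t = np.arange(n_samp) / fs
m = np.arange(n_chirp)
f_beat = 2 * B * R_true / (c * T)
f_dopp = 2 * v_true * fc / c
sig = np.exp(2j * np.pi * (f_beat * t[None, :] + f_dopp * T * m[:, None]))

# Range-Doppler map: FFT along fast time (range), FFT across chirps (Doppler)
rd = np.fft.fftshift(np.fft.fft2(sig), axes=0)
dopp_bin, rng_bin = np.unravel_index(np.abs(rd).argmax(), rd.shape)
R_est = rng_bin * c / (2 * B)                                    # bin = c/(2B)
v_est = (dopp_bin - n_chirp / 2) / (n_chirp * T) * c / (2 * fc)  # Doppler bin
```

With 1 GHz of bandwidth the range bin is c/(2B) = 15 cm, which is why the push toward wider modulation bandwidths mentioned above directly improves range resolution.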

    People tracking by cooperative fusion of RADAR and camera sensors

    Accurate 3D tracking of objects from a monocular camera poses challenges due to the loss of depth during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using average person height, a joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR Range-Azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a Particle Filter tracker. Depending on the association outcome, particles are updated using the associated detections (Tracking by Detection), or by sampling the raw likelihood itself (Tracking Before Detection). Utilizing the raw likelihood data has the advantage that lost targets continue to be tracked even if the camera or RADAR signal is below the detection threshold. We show that in single-target, uncluttered environments, the proposed method clearly outperforms camera-only tracking. Experiments in a real-world urban environment also confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
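
The Tracking-Before-Detection idea above, weighting particles directly by the raw Range-Azimuth likelihood instead of thresholded detections, can be sketched as follows. The likelihood grid, its widths, the particle count and the sensor geometry are all illustrative assumptions, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint likelihood on the RADAR Range-Azimuth grid, peaked at
# the (unknown to the tracker) true person position; widths are illustrative
ranges = np.linspace(1.0, 20.0, 96)                    # metres
azimuths = np.radians(np.linspace(-45.0, 45.0, 64))    # radians
true_r, true_az = 8.0, np.radians(10.0)
L = np.exp(-((ranges[:, None] - true_r) ** 2 / 2.0
             + (azimuths[None, :] - true_az) ** 2 / 0.02))

# Ground-plane particles (x forward, y left)
n = 2000
particles = rng.uniform([1.0, -8.0], [20.0, 8.0], size=(n, 2))
r = np.hypot(particles[:, 0], particles[:, 1])
az = np.arctan2(particles[:, 1], particles[:, 0])

# Tracking Before Detection: weight every particle by the raw likelihood at
# its (range, azimuth) cell, without thresholding into discrete detections
ri = np.clip(np.searchsorted(ranges, r), 0, len(ranges) - 1)
ai = np.clip(np.searchsorted(azimuths, az), 0, len(azimuths) - 1)
w = L[ri, ai]
w /= w.sum()
est = w @ particles    # weighted-mean position estimate on the ground plane
```

Because the weights come from the continuous likelihood surface, a target whose camera or RADAR response dips below a detection threshold still contributes mass near its true position, which is exactly the robustness the abstract claims.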

    Satellite Navigation for the Age of Autonomy

    Full text link
    Global Navigation Satellite Systems (GNSS) brought navigation to the masses. Coupled with smartphones, the blue dot in the palm of our hands has forever changed the way we interact with the world. Looking forward, cyber-physical systems such as self-driving cars and aerial mobility are pushing the limits of what localization technologies, including GNSS, can provide. This autonomous revolution requires a solution that supports safety-critical operation, centimeter positioning, and cyber-security for millions of users. To meet these demands, we propose a navigation service from Low Earth Orbiting (LEO) satellites which delivers precision in part through faster motion, higher-power signals for added robustness to interference, constellation autonomous integrity monitoring for integrity, and encryption / authentication for resistance to spoofing attacks. This paradigm is enabled by the 'New Space' movement, where highly capable satellites and components are now built on assembly lines and launch costs have decreased by more than tenfold. Such a ubiquitous positioning service enables a consistent and secure standard where trustworthy information can be validated and shared, extending the electronic horizon from sensor line of sight to an entire city. This enables the situational awareness needed for true safe operation to support autonomy at scale.
    Comment: 11 pages, 8 figures, 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS)
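
The "precision in part through faster motion" argument rests on basic orbital mechanics: a LEO satellite moves roughly twice as fast and completes an orbit in about an eighth of the time of a GPS satellite, so its geometry relative to a receiver changes much more quickly. A back-of-the-envelope check (the 550 km LEO altitude is an assumed Starlink-like shell, not a figure from the paper):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3 / s^2
R_EARTH = 6371e3      # mean Earth radius, m

def circular_orbit(alt_m):
    """Speed (m/s) and period (s) of a circular orbit at the given altitude."""
    r = R_EARTH + alt_m
    v = math.sqrt(MU / r)          # v = sqrt(mu / r)
    return v, 2 * math.pi * r / v  # period = circumference / speed

v_leo, T_leo = circular_orbit(550e3)     # assumed LEO shell: ~7.6 km/s, ~96 min
v_meo, T_meo = circular_orbit(20200e3)   # GPS MEO altitude: ~3.9 km/s, ~12 h
```

That rapid geometry change helps carrier-phase ambiguity resolution converge faster, which is one route to the centimeter-level positioning the abstract targets.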

    Aerial LiDAR-based 3D Object Detection And Tracking For Traffic Monitoring

    The proliferation of Light Detection and Ranging (LiDAR) technology in the automotive industry has quickly promoted its use in many emerging areas in smart cities and the internet-of-things. Compared to other sensors, like cameras and radars, LiDAR provides up to 64 scanning channels, wide vertical and horizontal fields of view, high precision, long detection range, and good performance under poor weather conditions. In this paper, we propose a novel aerial traffic monitoring solution based on LiDAR technology. By equipping unmanned aerial vehicles (UAVs) with a LiDAR sensor, we generate 3D point cloud data that can be used for object detection and tracking. Due to the unavailability of LiDAR data captured from the sky, we propose to use a 3D simulator. Then, we implement Point Voxel-RCNN (PV-RCNN) to perform road user detection (e.g., vehicles and pedestrians). Subsequently, we implement an Unscented Kalman Filter, which takes a 3D detected object as input and uses its information to predict the state of the 3D box before the next LiDAR scan is loaded. Finally, we update the measurement by using the new observation of the point cloud and correct the previous prediction's belief. The simulation results illustrate the performance gain (around 8%) achieved by our solution compared to other 3D point cloud solutions.
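
The predict-then-correct cycle described above can be sketched with a minimal Unscented Kalman Filter. The sigma-point formulation below is a generic textbook variant, and the demo's constant-velocity motion model, position-only measurements, and noise covariances are all illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np

def sigma_points(x, P, kappa):
    """2n+1 symmetric sigma points and weights for the state (x, P)."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)   # columns are sigma offsets
    pts = np.vstack([x[None, :], x + S.T, x - S.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def ukf_step(x, P, z, f, h, Q, R, kappa=1.0):
    """One UKF predict/update cycle: motion model f, measurement model h."""
    # Predict: push sigma points through the (possibly nonlinear) motion model
    pts, w = sigma_points(x, P, kappa)
    fp = np.array([f(p) for p in pts])
    x_pred = w @ fp
    d = fp - x_pred
    P_pred = np.einsum('k,ki,kj->ij', w, d, d) + Q
    # Update: push fresh sigma points through the measurement model
    pts, w = sigma_points(x_pred, P_pred, kappa)
    hp = np.array([h(p) for p in pts])
    z_pred = w @ hp
    dz, dx = hp - z_pred, pts - x_pred
    Pzz = np.einsum('k,ki,kj->ij', w, dz, dz) + R
    Pxz = np.einsum('k,ki,kj->ij', w, dx, dz)
    K = Pxz @ np.linalg.inv(Pzz)
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T

# Demo: constant-velocity model on the ground plane, position-only measurements
dt = 0.1
f = lambda s: np.array([s[0] + dt * s[2], s[1] + dt * s[3], s[2], s[3]])
h = lambda s: s[:2]
Q, R = 0.01 * np.eye(4), 0.01 * np.eye(2)
x, P = np.zeros(4), np.eye(4)
for k in range(1, 21):                       # target moves at (1.0, 0.5) m/s
    z = np.array([k * dt * 1.0, k * dt * 0.5])
    x, P = ukf_step(x, P, z, f, h, Q, R)
```

Between scans, only the predict half runs, carrying the 3D box forward; when the next point-cloud detection arrives, the update half corrects the belief, mirroring the cycle the abstract describes.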

    Practical classification of different moving targets using automotive radar and deep neural networks

    In this work, the authors present results for the classification of different classes of targets (car, single and multiple people, bicycle) using automotive radar data and different neural networks. A fast implementation of radar algorithms for detection, tracking, and micro-Doppler extraction is proposed in conjunction with the automotive radar transceiver TEF810X and microcontroller unit SR32R274 manufactured by NXP Semiconductors. Three different types of neural networks are considered, namely a classic convolutional network, a residual network, and a combination of convolutional and recurrent network, for different classification problems across the four classes of targets recorded. Considerable accuracy (close to 100% in some cases) and low latency of the radar pre-processing prior to classification (∼0.55 s to produce a 0.5 s long spectrogram) are demonstrated in this study, and possible shortcomings and outstanding issues are discussed.
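
The 0.5 s spectrograms fed to the networks above are short-time Fourier transforms of the radar's slow-time signal, where a target's moving limbs smear the main Doppler line into a characteristic micro-Doppler signature. A minimal sketch, with a simulated signal and STFT parameters that are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

# Illustrative slow-time signal: 0.5 s at a 1 kHz chirp rate, with a torso
# return at 60 Hz Doppler and a limb swinging at 2 Hz (micro-Doppler)
prf, dur = 1000, 0.5
t = np.arange(int(prf * dur)) / prf
torso = np.exp(2j * np.pi * 60 * t)
limb = 0.5 * np.exp(2j * np.pi * (60 * t
                                  + (20 / (4 * np.pi)) * np.sin(4 * np.pi * t)))
sig = torso + limb   # limb Doppler sweeps 60 +/- 20 Hz

# Spectrogram: 64-sample Hann-windowed STFT with a 16-sample hop
win, hop = 64, 16
frames = np.lib.stride_tricks.sliding_window_view(sig, win)[::hop]
spec = np.abs(np.fft.fftshift(np.fft.fft(frames * np.hanning(win), axis=1),
                              axes=1))
```

Each row of `spec` is one time slice of the Doppler spectrum; the oscillating sidebands around the torso line are what distinguish a pedestrian from, say, a car with no micro-motion, and images like this are what the convolutional networks classify.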