5,098 research outputs found

    Full waveform LiDAR for adverse weather conditions


    Impact of calibration on a stereoscopic-vision-based LiDAR

    Every year, 1.3 million people die in road accidents. Since the main cause is human error, autonomous driving is a promising path to reducing these numbers. An autonomous vehicle must be able to perceive its surroundings and therefore requires vision sensors. Of the many kinds of vision sensors available, the three main automotive ones are cameras, RADAR and LiDAR. LiDARs have the unique capability of capturing a high-resolution point cloud, enabling 3D object detection. However, current LiDAR technology is still immature and expensive, which makes it unattractive to the automotive market. We propose an alternative LiDAR concept – the LiDART – that generates a point cloud using only stereoscopic vision and dot projection. LiDART takes advantage of mass-produced components, such as a dot-pattern projector and a stereoscopic camera rig, thereby inherently overcoming the cost and maturity problems. Nonetheless, LiDART faces four key challenges: noise, correspondence, centroiding and calibration. This thesis focuses on the calibration aspects of LiDART and investigates the systematic error introduced by standard calibration techniques. The quality of stereoscopic calibration was assessed both experimentally and numerically. The experimental validation consisted of assembling a prototype and calibrating it with standard stereoscopic-vision calibration techniques; calibration quality was then assessed by estimating the distance to a target. For the numerical assessment, a simulation tool was developed to cross-validate most experimental results. The results show that standard calibration techniques introduce a considerable systematic error, reaching 30% of the true distance. However, the estimated error depends monotonically on distance.
Consequently, the systematic error can be significantly reduced if better calibration methods, specifically designed for the application at hand, are used in the future. (Master's in Electronic and Telecommunications Engineering.)
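The distance-from-disparity setup described in the abstract can be sketched with the standard rectified-stereo relation Z = f·B/d, together with how a calibration bias propagates into a systematic range error. The focal length, baseline, and 5% baseline bias below are illustrative assumptions, not values from the thesis:

```python
# Hypothetical sketch: distance from disparity in a rectified stereo rig,
# and how a calibration bias becomes a systematic range error.
# All numbers are illustrative, not taken from the work above.

def distance_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

f_true, B_true = 1000.0, 0.12        # assumed true focal length (px), baseline (m)
Z_true = 10.0                        # true target distance (m)
d = f_true * B_true / Z_true         # disparity the rig would observe (px)

# Suppose calibration overestimates the baseline by 5%.
B_cal = 1.05 * B_true
Z_est = distance_from_disparity(f_true, B_cal, d)

rel_error = (Z_est - Z_true) / Z_true
print(f"estimated {Z_est:.2f} m, relative error {rel_error:+.1%}")
```

Note that a purely multiplicative bias like this yields a relative error that is constant with distance; an error that grows monotonically with distance, as reported above, points to additional depth-dependent terms (e.g., a disparity offset from imperfect rectification).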

    Customized Co-Simulation Environment for Autonomous Driving Algorithm Development and Evaluation

    Increasing the implemented SAE level of autonomy in road vehicles requires extensive simulation and verification in a realistic simulation environment before proving-ground and public-road testing. The level of detail in the simulation environment helps ensure the safety of a real-world implementation and reduces algorithm development cost by allowing developers to complete most of the validation in simulation. Considering sensors like camera, LIDAR, radar, and V2X used in autonomous vehicles, it is essential to create a simulation environment that reproduces these sensors as realistically as possible. While sensor simulations are of crucial importance for perception algorithm development, the environment remains incomplete for simulating holistic AV operation unless it is complemented by a realistic vehicle dynamics model and traffic co-simulation. Therefore, this paper investigates existing simulation environments, identifies use-case scenarios, and creates a co-simulation environment that satisfies the simulation requirements for autonomous driving function development, using the Carla simulator (based on the Unreal game engine) for the environment, Sumo or Vissim for traffic co-simulation, CarSim or MATLAB/Simulink for vehicle dynamics co-simulation, and Autoware or user-defined routines for autonomous driving algorithm co-simulation. As a result of this work, a model-based vehicle dynamics simulation with realistic sensor and traffic simulation is presented. A sensor fusion methodology is implemented in the created simulation environment as a use-case scenario. The results of this work will be a valuable resource for researchers who need a comprehensive co-simulation environment to develop connected and autonomous driving algorithms.
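The core co-simulation idea — separate modules advancing in lockstep and exchanging state once per tick — can be illustrated with a toy fixed-step loop. The point-mass dynamics and clamped P-controller below are stand-ins for CarSim/Simulink and the driving algorithm under test, not the paper's actual models:

```python
# Toy fixed-step co-simulation loop (illustrative only; Carla, Sumo/Vissim,
# and CarSim/Simulink from the paper are not reproduced here).
# Each "module" advances by the same time step and exchanges state per tick.

def vehicle_dynamics_step(state, throttle, dt):
    """Point-mass longitudinal model standing in for the dynamics module."""
    x, v = state
    a = 3.0 * throttle - 0.1 * v          # crude drive force minus drag
    return (x + v * dt, v + a * dt)

def controller_step(v, v_target=15.0):
    """Stand-in for the autonomous-driving algorithm: clamped P-control."""
    return max(0.0, min(1.0, 0.2 * (v_target - v)))

def cosimulate(steps=2000, dt=0.01):
    state = (0.0, 0.0)                    # position (m), speed (m/s)
    for _ in range(steps):
        throttle = controller_step(state[1])               # algorithm module
        state = vehicle_dynamics_step(state, throttle, dt) # dynamics module
    return state

x, v = cosimulate()
print(f"after 20 s: position {x:.1f} m, speed {v:.2f} m/s")
```

A real co-simulation replaces each function with an external tool stepped over the same interval, which is why a common fixed step and a well-defined exchange point per tick matter.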

    An Adversarial Super-Resolution Remedy for Radar Design Trade-offs

    Radar is of vital importance in many fields, such as autonomous driving, safety and surveillance applications. However, it suffers from stringent constraints on its design parametrization, leading to multiple trade-offs. For example, the bandwidth in FMCW radars is inversely proportional to both the maximum unambiguous range and the range resolution. In this work, we introduce a new method for circumventing radar design trade-offs. We propose using recent advances in computer vision, more specifically generative adversarial networks (GANs), to enhance low-resolution radar acquisitions into higher-resolution counterparts while maintaining the advantages of the low-resolution parametrization. The capability of the proposed method was evaluated on the velocity-resolution and range-azimuth trade-offs in micro-Doppler signatures and FMCW uniform linear array (ULA) radars, respectively. Comment: Accepted at EUSIPCO 2019, 5 pages.
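The FMCW trade-off referred to above can be made concrete with the textbook relations: range resolution ΔR = c/2B shrinks with bandwidth, while for a fixed maximum beat frequency the chirp slope B/T caps the maximum range. All parameter values below are illustrative:

```python
# Textbook FMCW trade-off in numbers (illustrative parameters).
# Larger bandwidth -> finer range resolution, but (for a fixed ADC-limited
# beat frequency and chirp time) a smaller maximum range.

C = 3e8  # speed of light (m/s)

def range_resolution(bandwidth_hz):
    """delta_R = c / (2 * B)"""
    return C / (2.0 * bandwidth_hz)

def max_range(bandwidth_hz, chirp_time_s, f_beat_max_hz):
    """R_max = c * f_beat_max / (2 * slope), with slope = B / T_chirp."""
    slope = bandwidth_hz / chirp_time_s
    return C * f_beat_max_hz / (2.0 * slope)

T, f_adc = 50e-6, 10e6   # chirp duration, max beat frequency the ADC resolves
for B in (150e6, 600e6, 2.4e9):
    print(f"B={B/1e6:6.0f} MHz  dR={range_resolution(B)*100:6.2f} cm  "
          f"Rmax={max_range(B, T, f_adc):6.1f} m")
```

This is exactly the tension a learned super-resolution step tries to sidestep: acquire with the favourable (low-bandwidth) parametrization, then recover resolution in post-processing.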

    Static Background Removal in Vehicular Radar: Filtering in Azimuth-Elevation-Doppler Domain

    A significant challenge in autonomous driving systems lies in image understanding within complex environments, particularly dense traffic scenarios. An effective solution to this challenge involves removing the background or static objects from the scene, so as to enhance the detection of moving targets, a key component of improving overall system performance. In this paper, we present an efficient algorithm for background removal in automotive radar applications, specifically utilizing a frequency-modulated continuous wave (FMCW) radar. Our proposed algorithm follows a three-step approach, encompassing radar signal preprocessing, three-dimensional (3D) ego-motion estimation, and notch filter-based background removal in the azimuth-elevation-Doppler domain. To begin, we model the received signal of the FMCW multiple-input multiple-output (MIMO) radar and develop a signal processing framework for extracting four-dimensional (4D) point clouds. Subsequently, we introduce a robust 3D ego-motion estimation algorithm that accurately estimates radar ego-motion speed, accounting for Doppler ambiguity, by processing the point clouds. Additionally, our algorithm leverages the relationship between Doppler velocity, azimuth angle, elevation angle, and radar ego-motion speed to identify the spectrum belonging to background clutter. We then employ notch filters to effectively filter out the background clutter. The performance of our algorithm is evaluated using both simulated data and extensive experiments with real-world data. The results demonstrate its effectiveness in efficiently removing background clutter and enhancing perception within complex environments. By offering a fast and computationally efficient solution, our approach effectively addresses challenges posed by non-homogeneous environments and real-time processing requirements.
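The ego-motion step described above rests on the fact that a static scatterer's Doppler velocity is the projection of the ego velocity onto its line of sight. A minimal sketch under simplifying assumptions (pure forward motion, no Doppler ambiguity, noise-free static returns, synthetic data) follows; it illustrates the relationship the paper exploits, not the paper's actual algorithm:

```python
import numpy as np

# Hedged sketch (not the paper's algorithm): for a radar moving forward at
# speed v_ego, a *static* scatterer at azimuth phi and elevation theta shows
# Doppler velocity v_d = -v_ego * cos(phi) * cos(theta). Fit v_ego by least
# squares over the point cloud, then flag points consistent with the fit as
# background. Doppler ambiguity and the notch-filter stage are omitted.

rng = np.random.default_rng(0)
n = 200
az = rng.uniform(-1.0, 1.0, n)               # azimuth (rad)
el = rng.uniform(-0.2, 0.2, n)               # elevation (rad)
v_ego_true = 12.0

c = np.cos(az) * np.cos(el)                  # line-of-sight projection factor
v_d = -v_ego_true * c                        # noise-free static background
moving = rng.choice(n, 20, replace=False)
v_d[moving] += rng.uniform(3.0, 8.0, 20)     # inject 20 moving targets

# Pass 1: least-squares fit of v_d ~ -v_ego * c (biased by the movers).
v_ego_est = -(c @ v_d) / (c @ c)
inlier = np.abs(v_d + v_ego_est * c) < 1.0   # drop clear movers
# Pass 2: refit on likely-static points only.
v_ego_est = -(c[inlier] @ v_d[inlier]) / (c[inlier] @ c[inlier])

is_background = np.abs(v_d + v_ego_est * c) < 0.5   # m/s gate, illustrative
print(f"v_ego = {v_ego_est:.2f} m/s, background points: {is_background.sum()}")
```

In the paper this per-point classification is replaced by filtering the clutter ridge directly in the azimuth-elevation-Doppler spectrum with notch filters.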

    Motorcycles that see: Multifocal stereo vision sensor for advanced safety systems in tilting vehicles

    Advanced driver assistance systems (ADAS) have shown the potential to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for preventive safety applications in tilting vehicles. We identified two road-conflict situations in which automotive remote sensors installed in a tilting vehicle are likely to fail to identify critical obstacles. Accordingly, we set up two experiments in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications.

    Recent Advances in mmWave-Radar-Based Sensing, Its Applications, and Machine Learning Techniques: A Review

    Human gesture detection, obstacle detection, collision avoidance, parking aids, automotive driving, medical, meteorological, industrial, agriculture, defense, space, and other relevant fields have all benefited from recent advancements in mmWave radar sensor technology. A mmWave radar has several advantages that set it apart from other types of sensors: it can operate in bright, dazzling, or no-light conditions, allows better antenna miniaturization than other traditional radars, and offers better range resolution. Moreover, as more data sets have become available, there has been a significant increase in the potential for incorporating radar data into different machine learning methods for various applications. This review focuses on key performance metrics in mmWave-radar-based sensing, detailed applications, and machine learning techniques used with mmWave radar for a variety of tasks. The article starts with a discussion of the various working bands of mmWave radars, then moves on to the types of mmWave radars and their key specifications, mmWave radar data interpretation, applications in various domains, and, finally, machine learning algorithms applied to radar data. Our review serves as a practical reference for beginners developing mmWave-radar-based applications using machine learning techniques.
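As a concrete taste of the radar data interpretation the review covers, the most basic FMCW processing step — a range FFT on the beat signal, whose peak frequency maps to target range via R = c·f_beat·T/(2B) — can be sketched as follows. All parameters are illustrative:

```python
import numpy as np

# Illustrative FMCW range-FFT sketch (made-up parameters, single static
# target, no noise): the beat signal's spectral peak encodes target range.

C = 3e8                                      # speed of light (m/s)
B, T, fs, n = 1e9, 40e-6, 12.8e6, 512        # bandwidth, chirp time, ADC rate, samples
slope = B / T                                # chirp slope (Hz/s)

R_true = 30.0                                # target at 30 m
f_beat = 2 * slope * R_true / C              # beat frequency for that range
t = np.arange(n) / fs
sig = np.cos(2 * np.pi * f_beat * t)         # idealized beat signal

spectrum = np.abs(np.fft.rfft(sig * np.hanning(n)))
peak_bin = int(np.argmax(spectrum))
f_est = peak_bin * fs / n                    # frequency of the peak bin
R_est = C * f_est * T / (2 * B)              # back to range
print(f"estimated range: {R_est:.2f} m")
```

Each FFT bin here spans c/(2B) = 15 cm of range, which is the familiar bandwidth-limited range resolution; the ML techniques surveyed in the review typically consume spectra like this (or range-Doppler/micro-Doppler maps built from them) as input features.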