1,561 research outputs found

    LiDAR-based Weather Detection: Automotive LiDAR Sensors in Adverse Weather Conditions

    Technological improvements are increasing the degree of vehicle automation. The natural next step is to support the driver where support is wanted most: in bad weather. Weather affects every sensor used to perceive the environment, so it is essential to account for and mitigate these effects. This dissertation focuses on the emerging technology of automotive Light Detection and Ranging (LiDAR) sensors and contributes to the development of autonomous vehicles capable of driving under a variety of weather conditions. Its foundation is the first LiDAR point cloud dataset focused on adverse weather, which contains point-wise annotated weather information and was recorded under controlled weather conditions. This dataset is extended by a novel weather augmentation that generates realistic weather effects. A novel approach for classifying the weather state and the first CNN-based denoising algorithm are developed, yielding accurate weather-state prediction and improved point cloud quality. Controlled environments under different weather conditions enable the evaluation of these approaches and provide valuable insights for automated and autonomous driving.
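    The weather augmentation and point-wise weather labels described above can be illustrated with a toy sketch in Python. This is not the dissertation's model: the exponential attenuation, the scatter probability, and all function and parameter names below are illustrative assumptions.

```python
import numpy as np

def augment_fog(points, intensity, fog_density=0.05, max_scatter_range=20.0, rng=None):
    """Toy fog augmentation for an (N, 3) LiDAR point cloud.

    Each return is scattered with a probability that grows with range and
    fog density; scattered returns are moved to a random near-range point
    along the same ray, mimicking droplet backscatter. Surviving returns
    get two-way attenuated intensity. Per-point labels (0 = clear,
    1 = fog) mimic point-wise weather annotations.
    """
    rng = np.random.default_rng(rng)
    ranges = np.linalg.norm(points, axis=1)
    p_scatter = 1.0 - np.exp(-fog_density * ranges)   # assumed scatter model
    scattered = rng.random(len(points)) < p_scatter
    out_pts = points.copy()
    out_int = intensity * np.exp(-2.0 * fog_density * ranges)  # two-way loss
    unit = points / np.maximum(ranges[:, None], 1e-9)
    new_r = rng.uniform(1.0, max_scatter_range, size=len(points))
    out_pts[scattered] = unit[scattered] * new_r[scattered, None]
    return out_pts, out_int, scattered.astype(np.int64)
```

    Applied to a clear-weather scan, this yields a corrupted cloud plus per-point labels that a weather classifier or denoiser could be trained against.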

    Robo3D: Towards Robust and Reliable 3D Perception against Corruptions

    The robustness of 3D perception systems under natural corruptions from environments and sensors is pivotal for safety-critical applications. Existing large-scale 3D perception datasets often contain data that have been meticulously cleaned. Such configurations, however, cannot reflect the reliability of perception models during the deployment stage. In this work, we present Robo3D, the first comprehensive benchmark for probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios against natural corruptions that occur in real-world environments. Specifically, we consider eight corruption types stemming from adverse weather conditions, external disturbances, and internal sensor failure. We uncover that, although promising results have been progressively achieved on standard benchmarks, state-of-the-art 3D perception models are at risk of being vulnerable to corruptions. We draw key observations on the use of data representations, augmentation schemes, and training strategies that can severely affect the model's performance. To pursue better robustness, we propose a density-insensitive training framework along with a simple, flexible voxelization strategy to enhance model resiliency. We hope our benchmark and approach can inspire future research in designing more robust and reliable 3D perception models. Our robustness benchmark suite is publicly available. Comment: 33 pages, 26 figures, 26 tables; code at https://github.com/ldkong1205/Robo3D; project page at https://ldkong.com/Robo3
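    As a rough illustration of what such corruption types look like in code — not Robo3D's actual implementation; the functions and parameters below are assumptions — two simple point cloud corruptions might be:

```python
import numpy as np

def corrupt_dropout(points, drop_ratio=0.3, rng=None):
    """Randomly drop points, simulating incomplete echoes or beam failure."""
    rng = np.random.default_rng(rng)
    keep = rng.random(len(points)) >= drop_ratio
    return points[keep]

def corrupt_jitter(points, sigma=0.05, rng=None):
    """Perturb coordinates with Gaussian noise, simulating ranging error."""
    rng = np.random.default_rng(rng)
    return points + rng.normal(0.0, sigma, size=points.shape)
```

    A benchmark of this kind would sweep severity levels (here, drop_ratio and sigma) and re-evaluate a trained model at each level.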

    Living in a Material World: Learning Material Properties from Full-Waveform Flash Lidar Data for Semantic Segmentation

    Advances in lidar technology have made the collection of 3D point clouds fast and easy. While most lidar sensors return per-point intensity (or reflectance) values along with range measurements, flash lidar sensors are able to provide information about the shape of the return pulse. The shape of the return waveform is affected by many factors, including the distance that the light pulse travels and the angle of incidence with a surface. Importantly, the shape of the return waveform also depends on the material properties of the reflecting surface. In this paper, we investigate whether the material type or class can be determined from the full-waveform response. First, as a proof of concept, we demonstrate that the extra information about material class, if known accurately, can improve performance on scene understanding tasks such as semantic segmentation. Next, we learn two different full-waveform material classifiers: a random forest classifier and a temporal convolutional neural network (TCN) classifier. We find that, in some cases, material types can be distinguished, and that the TCN generally performs better across a wider range of materials. However, factors such as angle of incidence, material colour, and material similarity may hinder overall performance. Comment: In Proceedings of the Conference on Robots and Vision (CRV'23), Montreal, Canada, Jun. 6-8, 202
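    The building block behind a TCN classifier — a dilated causal 1-D convolution over waveform samples — can be sketched in plain NumPy. This is an illustrative sketch of the operation, not the paper's classifier; the names and shapes are assumptions.

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Dilated causal 1-D convolution, the core TCN operation.

    x: (T,) waveform samples; w: (K,) kernel. Output y[t] depends only on
    x[t], x[t - d], ..., x[t - (K - 1) * d], so later time steps see
    earlier parts of the return pulse but never the future.
    """
    T, K = len(x), len(w)
    pad = (K - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # zero-pad the past
    y = np.zeros(T)
    for t in range(T):
        taps = xp[t + pad - np.arange(K) * dilation]
        y[t] = np.dot(w, taps)
    return y
```

    A full TCN stacks several such layers with growing dilation (1, 2, 4, ...) plus a classification head, whereas a random forest would consume hand-crafted features of the same waveform.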

    3D People Surveillance on Range Data Sequences of a Rotating Lidar

    In this paper, we propose an approach to real-time 3D people surveillance, with probabilistic foreground modeling, multiple-person tracking, and on-line re-identification. Our principal aim is to demonstrate the capabilities of a special range sensor, the rotating multi-beam (RMB) Lidar, as a possible future surveillance camera. We present methodological contributions on two key issues. First, we introduce a hybrid 2D-3D method for robust foreground-background classification of the recorded RMB-Lidar point clouds, eliminating spurious effects resulting from the quantization error of the discretized view angle, non-linear position corrections of sensor calibration, and background flickering, in particular due to the motion of vegetation. Second, we propose a real-time method for moving pedestrian detection and tracking in RMB-Lidar sequences of dense surveillance scenarios, with short- and long-term object assignment. We introduce a novel person re-identification algorithm based solely on the Lidar measurements, utilizing in parallel the range and the intensity channels of the sensor, which provide biometric features. Quantitative evaluation is performed on seven outdoor Lidar sequences containing various multi-target scenarios that display challenging outdoor conditions with low point density and multiple occlusions.
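    A minimal per-cell version of background modelling on an RMB-Lidar range image — a strong simplification of the paper's probabilistic hybrid 2D-3D approach; the median model and the tolerance parameter are assumptions — could look like:

```python
import numpy as np

def background_model(range_frames):
    """Per-(beam, angle-bin) background distance: median over frames."""
    return np.median(range_frames, axis=0)

def foreground_mask(range_frame, bg, tol=0.3):
    """A cell is foreground when its return is clearly closer than the
    background; invalid (zero-range) cells stay background, which
    suppresses flickering from missing returns."""
    valid = range_frame > 0
    return valid & (range_frame < bg - tol)
```

    Pedestrian detection and tracking would then operate on the 3D points selected by this mask.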

    Temporal behavior and processing of the LiDAR signal in fog

    Interest in LiDAR imaging systems has recently increased in outdoor ground-based applications related to computer vision, in fields like autonomous vehicles. However, for the technology to fully settle, there are still obstacles related to outdoor performance, with use in adverse weather conditions being one of the most challenging. When working in bad weather, the data shown in point clouds are unreliable and their temporal behavior is unknown. We have designed, constructed, and tested a scanning pulsed LiDAR imaging system with outstanding characteristics related to optoelectronic modifications, in particular including digitization capabilities for each of the pulses. The system performance was tested in a macro-scale fog chamber and, using the collected data, two relevant phenomena were identified: the backscattering signal of light that first interacts with the medium, and false-positive points that appear due to the scattering properties of the medium. Digitization of the complete signal can be used to develop algorithms that identify and remove them. Our contribution concerns the digitization, analysis, and characterization of the acquired signal when steering toward a target under foggy conditions, as well as the proposal of different strategies to improve point clouds generated in these conditions. This work was supported by the Spanish Ministry of Science and Innovation (MICINN) under the project PID2020-119484RB-I00. The first author gratefully acknowledges the Universitat Politècnica de Catalunya and Banco Santander for the financial support of her predoctoral research grant.
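    One strategy the pulse digitization enables — separating near-range fog backscatter from the true target return — can be sketched as a last-significant-peak picker. This is a simplified heuristic, not the authors' algorithm; the threshold and timing parameters are assumptions.

```python
def last_peak_range(waveform, dt_ns, threshold):
    """Return the range (m) of the last above-threshold local maximum.

    In fog, the first strong peak is often backscatter from droplets near
    the sensor, while the true target tends to produce the last
    significant peak. Returns None if no sample clears the threshold.
    """
    c_m_per_ns = 0.299792458  # speed of light in metres per nanosecond
    idx = None
    for t in range(1, len(waveform) - 1):
        if (waveform[t] >= threshold
                and waveform[t] >= waveform[t - 1]
                and waveform[t] >= waveform[t + 1]):
            idx = t
    if idx is None:
        return None
    return 0.5 * c_m_per_ns * idx * dt_ns  # halve the two-way time of flight
```

    A first-peak picker applied to the same waveform would instead report the spurious near-range backscatter point.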

    Study of Seasonal change and Water Stress Condition in Plant Leaf Using Polarimetric Lidar Measurement

    The study of vegetation is of great importance to the improvement of agriculture and forest management. Although there have been various attempts to characterize vegetation using remote sensing techniques, polarimetric lidar is a novel remote sensing tool that has shown potential in vegetation remote sensing. In this thesis, a near-infrared polarimetric lidar at 1064 nm was used to investigate the effects of seasonal change and water stress on plant leaves. Two variables, time and water content, affected the measured leaf laser depolarization ratio. The first study focused on the maple tree to determine how seasonal change affected the maple leaf depolarization. A seasonal trend line was obtained and revealed an overall downward trend over time. In the second study, leaves from maple, lemon, and rubber trees were investigated to study the effect of water stress on the depolarization ratio. It was found that the leaf depolarization ratio increased with higher water content and decreased with lower water content. In addition, leaf samples were collected in the morning, afternoon, and evening to study diurnal change. Statistical analysis suggested that the depolarization ratio did not change significantly at different times of day, indicating that seasonal change had a greater effect on depolarization than diurnal change. This study demonstrates that the near-infrared polarimetric lidar system is able to remotely characterize internal vegetation conditions that may not be visible to the human eye. Furthermore, the lidar system has the potential to differentiate plant species based on the depolarization ratio. In conclusion, the 1064-nm polarimetric lidar system is an effective and sufficiently sensitive tool for active remote sensing.
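    The measured quantity here, the laser depolarization ratio, is the cross-polarized return power divided by the co-polarized return power; a minimal sketch (function and variable names are assumptions):

```python
import numpy as np

def depolarization_ratio(co_pol, cross_pol):
    """delta = I_perp / I_par: the fraction of return power that has been
    depolarized, e.g. by multiple scattering inside the leaf."""
    co_pol = np.asarray(co_pol, dtype=float)
    cross_pol = np.asarray(cross_pol, dtype=float)
    return cross_pol / np.maximum(co_pol, 1e-12)  # guard divide-by-zero
```

    Per the thesis findings, a well-watered leaf measured with the same system would show a larger delta than a water-stressed one.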

    An End Mill Monitoring System Based on Thermal Images and Point Clouds Using CNN YOLO

    Master's thesis — Seoul National University Graduate School, Department of Mechanical Engineering, College of Engineering, August 2022. Advisor: Sung-Hoon Ahn. As the adoption of smart-factory systems in manufacturing becomes inevitable, autonomous monitoring systems for machining have become widespread. Among the various approaches to autonomous monitoring, vision-based monitoring is the most sought-after. Such systems use vision sensors integrated with detection models developed through deep learning. However, they are strongly affected by optical conditions, such as ambient lighting or reflective materials, which critically degrades monitoring performance. Instead of vision sensors, LiDAR, which provides a depth map by emitting infrared (IR) light directly at the object and measuring its return time, can serve as a complementary method. This study presents a LiDAR (Light Detection and Ranging)-based end mill state monitoring system that combines the strengths of vision- and LiDAR-based detection. The system uses point cloud and IR intensity data acquired by the LiDAR, with an object detection algorithm developed through deep learning engaged during the detection stage. The point cloud data are used to detect the end mill and determine its length, while the IR intensity is used to detect wear or breakage present on the end mill. A convolutional neural network-based You Only Look Once (YOLO) algorithm is selected as the object detection algorithm for real-time monitoring. In addition, the quality of the point cloud is improved using a data pre-processing method. Finally, it is verified that the end mill state can be monitored with high accuracy in an actual machining environment.
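    The data pre-processing mentioned above — densifying the sparse cloud by accumulating frames and then removing noisy points — can be sketched as follows. The statistical-outlier criterion and its parameters are illustrative assumptions, not the thesis's exact method.

```python
import numpy as np

def accumulate_frames(frames):
    """Stack several LiDAR frames to densify a sparse point cloud."""
    return np.vstack(frames)

def remove_outliers(points, k=4, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours is far above the cloud-wide average.
    Brute-force pairwise distances; fine for small clouds."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, 1:k + 1]          # skip self-distance 0
    mean_d = knn.mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```

    The cleaned, accumulated cloud would then be rendered for the YOLO detector.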

    Cosys-AirSim: A Real-Time Simulation Framework Expanded for Complex Industrial Applications

    Within academia and industry, there has been a need for expansive simulation frameworks that include model-based simulation of sensors, mobile vehicles, and the environment around them. To this end, the modular, real-time, and open-source AirSim framework has been a popular community-built system that fulfills some of those needs. However, the framework required additional systems to serve complex industrial applications, including designing and testing new sensor modalities, Simultaneous Localization And Mapping (SLAM), autonomous navigation algorithms, and transfer learning with machine learning models. In this work, we discuss the modifications and additions to our open-source version of the AirSim simulation framework, including new sensor modalities, vehicle types, and methods to procedurally generate realistic environments with changeable objects. Furthermore, we show the various applications and use cases the framework can serve. Comment: Accepted at Annual Modeling and Simulation Conference, ANNSIM 202