
    Enhancing UAV Classification with Synthetic Data: GMM LiDAR Simulator for Aerial Surveillance Applications

    The proliferation of Unmanned Aerial Vehicles (UAVs) has raised safety concerns due to the potential threats resulting from their misuse or malicious intent. Because of their compact size, high-resolution surveillance systems such as LiDAR sensors are necessary to exert effective control over the airspace. Given the large volume of data that these technologies generate, efficient Deep Learning (DL) algorithms are needed to make their real-time implementation feasible. However, the training of DL models requires extensive and diverse datasets, which in certain scenarios may not be available. Therefore, this work introduces a novel method based on Gaussian Mixture Models (GMMs) for simulating realistic synthetic point clouds of UAVs. This simulator is calibrated using experimental data and makes it possible to probabilistically replicate the intricacies of sensor ray propagation, thereby addressing the limitations of current Ray Tracing (RT) simulators such as Gazebo or CARLA. In this study, we perform a quantitative analysis of the point cloud quality of the GMM simulator, comparing it with the results obtained using an RT approach. Additionally, we evaluate the effectiveness of both methods in training object classifiers. Results demonstrate the GMM simulator's potential for creating realistic synthetic databases.
    Agencia Estatal de Investigación | Ref. PID2021-125060OB-100
    Agencia Estatal de Investigación | Ref. TED2021-129757B-C3
    Ministerio de Universidades | Ref. FPU21/0117
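    The core idea above (fit a mixture of Gaussians to calibrated sensor returns, then sample synthetic point clouds from it) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the calibration cloud here is randomly generated, and the component count and covariance type are free choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical calibration cloud: (x, y, z) LiDAR returns measured on a
# real UAV, here faked as two Gaussian blobs (body + one rotor arm).
rng = np.random.default_rng(0)
calibration_cloud = np.vstack([
    rng.normal([0.0, 0.0, 0.0], 0.05, size=(200, 3)),  # body returns
    rng.normal([0.3, 0.3, 0.05], 0.02, size=(50, 3)),  # rotor-arm returns
])

# Fit a GMM to the calibration returns; n_components is an assumption.
gmm = GaussianMixture(n_components=4, covariance_type="full",
                      random_state=0).fit(calibration_cloud)

# Draw a synthetic point cloud by sampling the fitted mixture.
synthetic_cloud, _ = gmm.sample(n_samples=500)
print(synthetic_cloud.shape)  # (500, 3)
```

    Sampling the fitted density rather than ray-tracing a CAD model is what lets the simulator reproduce the statistical scatter of real sensor returns.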

    Metrics for Specification, Validation, and Uncertainty Prediction for Credibility in Simulation of Active Perception Sensor Systems

    The immense effort required for the safety validation of an automated driving system of SAE level 3 or higher is known not to be feasible by real test drives alone. Therefore, simulation is key, even for limited operational design domains, for the homologation of automated driving functions. Consequently, all simulation models used as tools for this purpose must be qualified beforehand. For this, in addition to their verification and validation, uncertainty quantification (VV&UQ) and prediction for the application domain are required for the credibility of the simulation model. To enable such VV&UQ, a particularly developed lidar sensor system simulation is utilized to present new metrics that can be used holistically to demonstrate model credibility and maturity for simulation models of active perception sensor systems. The holistic process towards model credibility starts with the formulation of the requirements for the models. In this context, the threshold values of the metrics as acceptance criteria are quantifiable by the relevance analysis of the cause-effect chains prevailing in different scenarios, and should intuitively be in the same unit as the simulated quantity for this purpose. These relationships can be inferred via the presented aligned methods "Perception Sensor Collaborative Effect and Cause Tree" (PerCollECT) and "Cause, Effect, and Phenomenon Relevance Analysis" (CEPRA).

    For sample validation, each experiment must be accompanied by reference measurements, as these then serve as simulation input. Since the reference data collection is subject to epistemic as well as aleatory uncertainty, both of which are propagated through the simulation in the form of input data variation, each experiment leads to several slightly different simulation results. In the simulation of measured signals and data over time considered here, this combination of uncertainties is best expressed as superimposed cumulative distribution functions. The metric must therefore be able to handle such so-called p-boxes as a result of the large set of simulations. In the present work, the area validation metric (AVM) is selected by a detailed analysis as the best of the metrics already in use, and is extended to fulfill all the requirements. This results in the corrected AVM (CAVM), which quantifies the model scattering error with respect to the real scatter. Finally, the double validation metric (DVM) is elaborated as a double-vector of the former metric with the estimate for the model bias. The novel metric is exemplarily applied to the empirical cumulative distribution functions of lidar measurements and the p-boxes from their re-simulations. In this regard, aleatory and epistemic uncertainties are taken into account for the first time and the novel metrics are successfully established. The quantification of the uncertainties and error prediction of a sensor model based on the sample validation is also demonstrated for the first time.
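    The basic AVM idea referenced above (the area between an experimental ECDF and the p-box spanned by an ensemble of simulation ECDFs) can be sketched as follows. This is a minimal illustration under the simplest reading of the AVM, not the paper's CAVM or DVM; the function name and the piecewise-constant integration are assumptions.

```python
import numpy as np

def ecdf(samples, grid):
    """Empirical CDF of `samples` evaluated at each point of `grid`."""
    s = np.sort(samples)
    return np.searchsorted(s, grid, side="right") / len(s)

def area_validation_metric(data, sims):
    """Area between the experimental ECDF and the p-box envelope of an
    ensemble of simulation ECDFs (minimal AVM sketch, not the CAVM/DVM).

    data: 1-D array of measured values.
    sims: list of 1-D arrays, one per simulation run (input variations).
    """
    grid = np.sort(np.concatenate([data] + list(sims)))
    f_data = ecdf(data, grid)
    f_sims = np.vstack([ecdf(s, grid) for s in sims])
    lower, upper = f_sims.min(axis=0), f_sims.max(axis=0)  # p-box bounds
    # Vertical distance from the data ECDF to the p-box; zero inside it.
    outside = np.maximum(f_data - upper, 0) + np.maximum(lower - f_data, 0)
    # Integrate over x, treating the ECDFs as piecewise constant.
    return float(np.sum(outside[:-1] * np.diff(grid)))

# A pure shift between measurement and simulation yields an AVM equal
# to the shift; identical distributions yield zero.
print(area_validation_metric(np.array([0.0, 1.0, 2.0]),
                             [np.array([10.0, 11.0, 12.0])]))  # 10.0
```

    For a pure location shift the AVM reduces to the mean offset, which is why the metric is attractive as an acceptance criterion: it carries the same unit as the simulated quantity.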

    Label Efficient 3D Scene Understanding

    3D scene understanding models are becoming increasingly integrated into modern society. With applications ranging from autonomous driving, Augmented Reality, Virtual Reality, robotics and mapping, the demand for well-behaved models is rapidly increasing. A key requirement for training modern 3D models is high-quality manually labelled training data. Collecting training data is often the time and monetary bottleneck, limiting the size of datasets. As modern data-driven neural networks require very large datasets to achieve good generalisation, finding alternative strategies to manual labelling is sought after by many industries. In this thesis, we present a comprehensive study on achieving 3D scene understanding with fewer labels. Specifically, we evaluate four approaches: existing data, synthetic data, weakly-supervised and self-supervised. Existing data looks at the potential of using readily available national mapping data as coarse labels for training a building segmentation model. We further introduce an energy-based active contour snake algorithm to improve label quality by utilising co-registered LiDAR data. This is attractive as, whilst the models may still require manual labels, these labels already exist. Synthetic data also exploits already existing data which was not originally designed for training neural networks. We demonstrate a pipeline for generating a synthetic Mobile Laser Scanner dataset. We experimentally evaluate whether such a synthetic dataset can be used to pre-train smaller real-world datasets, increasing generalisation with less data. A weakly-supervised approach is presented which allows for competitive performance on challenging real-world benchmark 3D scene understanding datasets with up to 95% less data. We propose a novel learning approach where the loss function is learnt. Our key insight is that the loss function is a local function and can therefore be trained with less data on a simpler task. Once trained, our loss function can be used to train a 3D object detector using only unlabelled scenes. Our method is both flexible and very scalable, even performing well across datasets. Finally, we propose a method which only requires a single geometric representation of each object class as supervision for 3D monocular object detection. We discuss why typical L2-like losses do not work for 3D object detection when using differentiable renderer-based optimisation. We show that the undesirable local minima that L2-like losses fall into can be avoided with the inclusion of a Generative Adversarial Network-like loss. We achieve state-of-the-art performance on the challenging 6DoF LineMOD dataset, without any scene-level labels.