
    AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training

    Monocular 3D object detection plays a pivotal role in autonomous driving, and numerous deep learning-based methods have made significant breakthroughs in this area. Despite the advancements in detection accuracy and efficiency, these models tend to fail when faced with adversarial attacks, rendering them ineffective. Therefore, bolstering the adversarial robustness of 3D detection models has become a crucial issue that demands immediate attention and innovative solutions. To mitigate this issue, we propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D. Specifically, we first design an adversarial attack that iteratively degrades the 2D and 3D perception capabilities of 3D object detection models (IDP), which serves as the foundation for our subsequent defense mechanism. In response to this attack, we propose an uncertainty-based residual learning method for adversarial training. Our adversarial training approach capitalizes on this inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks. We conducted extensive experiments on the KITTI 3D dataset, demonstrating that DART3D surpasses direct adversarial training (the most popular approach) under attacks, improving 3D object detection AP_{R40} of the car category by 4.415%, 4.112%, and 3.195% for the Easy, Moderate, and Hard settings, respectively.
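    The sketch below is a rough illustration of the kind of pipeline the abstract describes: a PGD-style attack that iteratively degrades a combined 2D and 3D detection loss, followed by one adversarial training step on the perturbed images. It is not the paper's IDP attack or its uncertainty-based residual learning; the toy detector, the L1 losses, and all hyperparameters are assumptions made purely for illustration.

# Illustrative sketch only: a generic iterative attack degrading both 2D and 3D
# detection losses, then one adversarial training step. The real IDP attack and
# DART3D's uncertainty-based residual learning are not reproduced here.
import torch
import torch.nn as nn

class ToyMono3DDetector(nn.Module):
    """Stand-in for a monocular 3D detector with 2D and 3D regression heads."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head_2d = nn.Linear(8, 4)   # e.g. 2D box parameters
        self.head_3d = nn.Linear(8, 7)   # e.g. 3D box (x, y, z, h, w, l, yaw)

    def forward(self, x):
        feat = self.backbone(x)
        return self.head_2d(feat), self.head_3d(feat)

def iterative_attack(model, images, tgt_2d, tgt_3d, eps=4/255, alpha=1/255, steps=5):
    """PGD-like loop that iteratively worsens the 2D and 3D perception losses."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        pred_2d, pred_3d = model(adv)
        loss = (nn.functional.l1_loss(pred_2d, tgt_2d)
                + nn.functional.l1_loss(pred_3d, tgt_3d))
        grad = torch.autograd.grad(loss, adv)[0]
        adv = (adv + alpha * grad.sign()).detach()       # ascend the detection loss
        adv = images + (adv - images).clamp(-eps, eps)   # project into the eps-ball
        adv = adv.clamp(0, 1)
    return adv

model = ToyMono3DDetector()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
imgs = torch.rand(2, 3, 64, 64)
tgt_2d, tgt_3d = torch.rand(2, 4), torch.rand(2, 7)

adv_imgs = iterative_attack(model, imgs, tgt_2d, tgt_3d)
pred_2d, pred_3d = model(adv_imgs)
loss = (nn.functional.l1_loss(pred_2d, tgt_2d)
        + nn.functional.l1_loss(pred_3d, tgt_3d))
opt.zero_grad(); loss.backward(); opt.step()   # one adversarial training update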

    Predictive World Models from Real-World Partial Observations

    Cognitive scientists believe adaptable intelligent agents like humans perform reasoning through learned causal mental simulations of agents and environments. The problem of learning such simulations is called predictive world modeling. Recently, reinforcement learning (RL) agents leveraging world models have achieved SOTA performance in game environments. However, understanding how to apply the world modeling approach in complex real-world environments relevant to mobile robots remains an open question. In this paper, we present a framework for learning a probabilistic predictive world model for real-world road environments. We implement the model using a hierarchical VAE (HVAE) capable of predicting a diverse set of fully observed plausible worlds from accumulated sensor observations. While prior HVAE methods require complete states as ground truth for learning, we present a novel sequential training method to allow HVAEs to learn to predict complete states from partially observed states only. We experimentally demonstrate accurate spatial structure prediction of deterministic regions achieving 96.21 IoU, and close the gap to perfect prediction by 62% for stochastic regions using the best prediction. By extending HVAEs to cases where complete ground truth states do not exist, we facilitate continual learning of spatial prediction as a step towards realizing explainable and comprehensive predictive world models for real-world mobile robotics applications. Code is available at https://github.com/robin-karlsson0/predictive-world-models
    Comment: Accepted for IEEE MOST 202
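    As a loose illustration of learning to predict complete states from partial observations, the sketch below trains a plain (non-hierarchical) VAE on partially observed occupancy grids with a reconstruction loss restricted to observed cells. The paper's hierarchical VAE and its sequential training scheme are considerably more involved; the grid size, mask format, and network layers here are assumptions.

# Illustrative sketch only: a small VAE supervised exclusively on observed cells
# of a partially observed occupancy grid (1 = observed, 0 = unknown). This is not
# the paper's HVAE or its sequential training method.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent)
        self.logvar = nn.Linear(128, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                 nn.Linear(128, 32 * 32))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        logits = self.dec(z).view(-1, 1, 32, 32)
        return logits, mu, logvar

def masked_elbo(logits, target, mask, mu, logvar):
    """Reconstruction loss restricted to observed cells, plus the KL term."""
    rec = nn.functional.binary_cross_entropy_with_logits(logits, target, reduction="none")
    rec = (rec * mask).sum() / mask.sum().clamp(min=1.0)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
grid = (torch.rand(4, 1, 32, 32) > 0.5).float()   # toy occupancy grids
mask = (torch.rand(4, 1, 32, 32) > 0.3).float()   # observation mask
partial = grid * mask                             # only partial states are available

logits, mu, logvar = model(partial)
loss = masked_elbo(logits, partial, mask, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()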

    Monocular Depth Prediction in Photogrammetric Applications

    Abstract. Despite the recent success of learning-based monocular depth estimation algorithms and the release of large-scale datasets for training, these methods are limited to depth map prediction and still struggle to yield reliable results in 3D space without additional scene cues. Indeed, although state-of-the-art approaches produce quality depth maps, they generally fail to recover the 3D structure of the scene robustly. This work explores supervised CNN architectures for monocular depth estimation and evaluates their potential in 3D reconstruction. Since most available datasets for training are not designed toward this goal and are limited to specific indoor scenarios, a new metric-scale, large-scale synthetic benchmark (ArchDepth) is introduced that renders near-real-world outdoor scenes. An encoder-decoder architecture is used for training, and the generalization of the approach is evaluated via depth inference on unseen views in synthetic and real-world scenarios. The depth map predictions are also projected into 3D space using a separate module. Results are qualitatively and quantitatively evaluated and compared with state-of-the-art algorithms for single-image 3D scene recovery.
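    The 3D projection step mentioned at the end of the abstract is typically a pinhole-model back-projection of the predicted depth map into a point cloud. The sketch below shows that standard operation only; it is not the authors' module, and the depth values and camera intrinsics are assumed for illustration.

# Illustrative sketch only: back-projecting a depth map to a 3D point cloud with
# the pinhole camera model. Intrinsics and the random "predicted" depth are assumptions.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Lift each pixel (u, v) with depth d to (X, Y, Z) in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy example: a 480x640 depth map (metres) and plausible intrinsics.
depth = np.random.uniform(1.0, 30.0, size=(480, 640))
points = depth_to_points(depth, fx=718.0, fy=718.0, cx=320.0, cy=240.0)
print(points.shape)  # (307200, 3)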