
    FEW-SHOT PHOTOGRAMMETRY: A COMPARISON BETWEEN NERF AND MVS-SFM FOR THE DOCUMENTATION OF CULTURAL HERITAGE

    3D documentation for the Digital Cultural Heritage (DCH) domain is a field that is becoming increasingly interdisciplinary, breaking down boundaries that have long separated experts from different domains. In the past, there have been ambiguous claims of ownership over skills, methodologies, and expertise in the heritage sciences. This study aims to contribute to the dialogue between these disciplines by presenting a novel approach for the 3D documentation of an ancient statue. The method combines TLS acquisition with an MVS pipeline using images from a DJI Mavic 2 drone. Additionally, the study compares the accuracy and final products of the Deep Points (DP) and Neural Radiance Fields (NeRF) methods, using the TLS acquisition as validation ground truth. First, a TLS acquisition of the ancient statue was performed with a Faro Focus 2 scanner. Next, a multi-view stereo (MVS) pipeline was applied to 2D images captured by the drone from a distance of approximately 1 meter around the statue. Finally, the same images, after the image set was reduced by 90%, were used to train and run the NeRF network. The main contribution of this paper is to improve our understanding of this method and to compare the accuracy and final products of the two approaches, DP and NeRF, by exploiting the TLS acquisition as the validation ground truth. Results show that the NeRF approach outperforms DP in terms of accuracy and produces a more realistic final product. This paper has important implications for the field of CH preservation, as it offers a new and effective method for generating 3D models of ancient statues. This technology can help document and preserve important cultural artifacts for future generations, while also providing new insights into the history and culture of different civilizations. Overall, the results of this study demonstrate the potential of combining TLS and NeRF for generating accurate and realistic 3D models of ancient statues.
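The abstract reports accuracy against a TLS ground truth without naming the metric; a common choice is the mean cloud-to-cloud nearest-neighbour distance. A minimal pure-Python sketch with toy data (brute force; this is an illustration, not the paper's evaluation code):

```python
import math

def cloud_to_cloud_error(reconstructed, ground_truth):
    """Mean nearest-neighbour distance from each reconstructed point
    to the ground-truth cloud (brute force; fine for small clouds)."""
    total = 0.0
    for p in reconstructed:
        total += min(math.dist(p, q) for q in ground_truth)
    return total / len(reconstructed)

# Toy example: a reconstruction shifted 0.05 m along x
truth = [(x / 10.0, 0.0, 0.0) for x in range(10)]
recon = [(x + 0.05, y, z) for (x, y, z) in truth]
err = cloud_to_cloud_error(recon, truth)   # mean error of 0.05 m
```

For real million-point clouds, a KD-tree accelerated search (as offered by tools such as CloudCompare or Open3D) replaces the brute-force inner loop.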

    A Survey on Modelling of Automotive Radar Sensors for Virtual Test and Validation of Automated Driving

    Radar sensors were among the first perception sensors used for automated driving. Although several other technologies such as lidar, camera, and ultrasonic sensors are available, radar sensors have maintained, and will continue to maintain, their importance due to their reliability in adverse weather conditions. Virtual methods are being developed for the verification and validation of automated driving functions to reduce the time and cost of testing. Because modelling high-frequency wave propagation as well as signal processing and perception algorithms is complex, sensor models that seek a high degree of accuracy are challenging to build. Therefore, a variety of modelling approaches have been presented in the last two decades. This paper comprehensively summarises the heterogeneous state of the art in radar sensor modelling. Instead of the technology-oriented classification introduced in previous review articles, we present a classification of how these models can be used in vehicle development, based on the V-model originating from software development. Sensor models are divided into operational, functional, technical, and individual models. The application and usability of these models along the development process are summarised in a comprehensive tabular overview, which is intended to support future research and development at the vehicle level and will be continuously updated.

    Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

    One of the most challenging problems in the domain of autonomous aerial vehicles is designing a robust real-time obstacle detection and avoidance system. The problem is especially complex for micro and small aerial vehicles because of their Size, Weight and Power (SWaP) constraints, which make a lightweight sensor (i.e., a digital camera) the best choice compared with other sensors such as laser or radar. For real-time applications, many works rely on stereo cameras in order to obtain a 3D model of the obstacles or to estimate their depth. Instead, this paper proposes a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During the Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the change in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles and then extracts the obstacles that are likely to be getting close to the UAV. Second, by comparing the area ratio of the obstacle with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated in real indoor and outdoor flights, and the obtained results show its accuracy compared with other related works. Research supported by the Spanish Government through the Cicyt project ADAS ROAD-EYE (TRA2013-48314-C3-1-R).
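The expansion-ratio cue described in the abstract can be sketched with a toy 2D example. The hull construction, the threshold value, and the function names below are illustrative assumptions, not the authors' implementation:

```python
def convex_hull(points):
    """Monotone-chain convex hull of 2D points, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    """Shoelace area of the hull around tracked feature points."""
    h = convex_hull(points)
    return 0.5 * abs(sum(h[i][0]*h[(i+1) % len(h)][1]
                         - h[(i+1) % len(h)][0]*h[i][1]
                         for i in range(len(h))))

def approaching(prev_pts, curr_pts, ratio_threshold=1.2):
    """Flag a potential collision when the feature-point hull expands
    faster than the threshold between consecutive frames."""
    return hull_area(curr_pts) / hull_area(prev_pts) > ratio_threshold

frame1 = [(0, 0), (4, 0), (4, 3), (0, 3)]            # hull area 12
frame2 = [(x * 1.5, y * 1.5) for x, y in frame1]     # area grows 2.25x
alarm = approaching(frame1, frame2)                  # True: obstacle looms
```

A real system would track many feature points per obstacle and smooth the ratio over several frames before committing to an avoidance maneuver.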

    Air Force Institute of Technology Research Report 2020

    This Research Report presents the FY20 research statistics and contributions of the Graduate School of Engineering and Management (EN) at AFIT. AFIT research interests and faculty expertise cover a broad spectrum of technical areas related to USAF needs, as reflected by the range of topics addressed in the faculty and student publications listed in this report. In most cases, the research work reported herein is directly sponsored by one or more USAF or DOD agencies. AFIT welcomes the opportunity to conduct research on additional topics of interest to the USAF, DOD, and other federal organizations when adequate manpower and financial resources are available and/or provided by a sponsor. In addition, AFIT provides research collaboration and technology transfer benefits to the public through Cooperative Research and Development Agreements (CRADAs). Interested individuals may discuss ideas for new research collaborations, potential CRADAs, or research proposals with individual faculty using the contact information in this document.

    Point Normal Orientation and Surface Reconstruction by Incorporating Isovalue Constraints to Poisson Equation

    Oriented normals are a common prerequisite for many geometric algorithms based on point clouds, such as Poisson surface reconstruction. However, obtaining a consistent orientation is not trivial. In this work, we bridge orientation and reconstruction in implicit space and propose a novel approach to orienting point clouds by incorporating isovalue constraints into the Poisson equation. When a well-oriented point cloud is fed into a reconstruction approach, the indicator-function values of the sample points should be close to the isovalue. Based on this observation and the Poisson equation, we propose an optimization formulation that combines isovalue constraints with local consistency requirements for normals. We optimize normals and implicit functions simultaneously and solve for a globally consistent orientation. Owing to the sparsity of the linear system, an average laptop can run our method within a reasonable time. Experiments show that our method achieves high performance on non-uniform and noisy data and can manage varying sampling densities, artifacts, multiple connected components, and nested surfaces.
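For contrast with the paper's global isovalue-constrained solve, the classic local-consistency heuristic it improves upon can be sketched as greedy orientation propagation. This toy pure-Python version (the neighbour count `k` and the seed choice are illustrative assumptions) flips each normal to agree with an already-oriented neighbour:

```python
import math

def orient_by_propagation(points, normals, k=1):
    """Greedy local orientation propagation: flip each normal to agree
    with an already-oriented neighbour. This is the classic heuristic;
    the paper instead solves a global isovalue-constrained system."""
    n = len(points)
    oriented = [None] * n
    oriented[0] = normals[0]          # seed: trust the first normal
    visited = {0}
    frontier = [0]
    while frontier:
        i = frontier.pop()
        # nearest unvisited neighbours of point i (brute force, toy scale)
        nearest = sorted((math.dist(points[i], points[j]), j)
                         for j in range(n) if j not in visited)
        for _, j in nearest[:k]:
            dot = sum(a * b for a, b in zip(oriented[i], normals[j]))
            oriented[j] = normals[j] if dot >= 0 else tuple(-c for c in normals[j])
            visited.add(j)
            frontier.append(j)
    return oriented

# Toy example: 8 points on a unit circle; odd-indexed normals start
# flipped inward, and propagation restores a consistent outward field
pts = [(math.cos(i * math.pi / 4), math.sin(i * math.pi / 4)) for i in range(8)]
raw = [p if i % 2 == 0 else (-p[0], -p[1]) for i, p in enumerate(pts)]
fixed = orient_by_propagation(pts, raw)
```

The weakness the paper targets is visible even here: the greedy walk depends on a good seed and well-behaved neighbourhoods, whereas a global solve distributes the orientation decision over the whole cloud.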

    ReSimAD: Zero-Shot 3D Domain Transfer for Autonomous Driving with Source Reconstruction and Target Simulation

    Domain shifts such as sensor-type changes and geographical variations are prevalent in Autonomous Driving (AD), posing a challenge because an AD model relying on previous-domain knowledge can hardly be deployed directly in a new domain without additional cost. In this paper, we provide a new perspective and approach for alleviating domain shifts by proposing a Reconstruction-Simulation-Perception (ReSimAD) scheme. Specifically, the implicit reconstruction process is based on knowledge from the previous (old) domain and aims to convert domain-related knowledge into domain-invariant representations, e.g., 3D scene-level meshes. The point-cloud simulation process for multiple new domains is then conditioned on the reconstructed 3D meshes, from which target-domain-like simulation samples can be obtained, thus reducing the cost of collecting and annotating new-domain data for the subsequent perception process. For the experiments, we consider different cross-domain situations such as Waymo-to-KITTI, Waymo-to-nuScenes, and Waymo-to-ONCE to verify zero-shot target-domain perception using ReSimAD. Results demonstrate that our method is beneficial for boosting domain generalization ability and is even promising for 3D pre-training. Comment: Code and simulated points are available at https://github.com/PJLab-ADG/3DTrans#resima
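The simulation stage, which generates target-domain-like points from reconstructed geometry, can be pictured in 2D as ray casting against scene surfaces. A toy sketch under that simplification (segments standing in for meshes; this is not the paper's pipeline):

```python
import math

def ray_segment_hit(origin, direction, a, b):
    """Distance along the ray to segment a-b, or None if it misses."""
    ox, oy = origin
    dx, dy = direction
    ex, ey = b[0] - a[0], b[1] - a[1]
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                 # ray parallel to the segment
        return None
    t = ((a[0] - ox) * ey - (a[1] - oy) * ex) / denom
    s = ((a[0] - ox) * dy - (a[1] - oy) * dx) / denom
    return t if t >= 0 and 0 <= s <= 1 else None

def simulate_scan(segments, n_beams=8):
    """Sweep n_beams rays from the origin over the scene geometry and
    keep the nearest hit per beam, like a 2D LiDAR on a mesh."""
    points = []
    for i in range(n_beams):
        ang = 2 * math.pi * i / n_beams
        d = (math.cos(ang), math.sin(ang))
        hits = [t for seg in segments
                if (t := ray_segment_hit((0.0, 0.0), d, *seg)) is not None]
        if hits:
            t = min(hits)
            points.append((t * d[0], t * d[1]))
    return points

# Toy "mesh": a single wall at x = 2 spanning y in [-5, 5]
wall = [((2.0, -5.0), (2.0, 5.0))]
scan = simulate_scan(wall)   # three beams hit the wall
```

The real system does this in 3D with dense beam patterns matched to the target sensor, which is what makes the simulated samples target-domain-like.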