
    MFM-Net: Unpaired Shape Completion Network with Multi-stage Feature Matching

    Unpaired 3D object completion aims to predict a complete 3D shape from an incomplete input without knowing the correspondence between complete and incomplete shapes during training. To build the correspondence between the two data modalities, previous methods usually apply adversarial training to match the global shape features extracted by the encoder. However, this ignores the correspondence between the multi-scale geometric information embedded in the pyramidal hierarchy of the decoder, which makes previous methods struggle to generate high-quality complete shapes. To address this problem, we propose a novel unpaired shape completion network, named MFM-Net, using multi-stage feature matching, which decomposes the learning of geometric correspondence into multiple stages throughout the hierarchical generation process in the point cloud decoder. Specifically, MFM-Net adopts a dual-path architecture to establish multiple feature matching channels in different layers of the decoder, which are then combined with adversarial learning to merge the distributions of features from the complete and incomplete modalities. In addition, a refinement step is applied to enhance the details. As a result, MFM-Net uses a more comprehensive understanding to establish the geometric correspondence between complete and incomplete shapes from a local-to-global perspective, which enables more detailed geometric inference for generating high-quality complete shapes. We conduct comprehensive experiments on several datasets, and the results show that our method outperforms previous unpaired point cloud completion methods by a large margin.
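    As a rough illustration of the multi-stage matching idea described above (not the authors' implementation; the module names, stage dimensions, and LSGAN-style objective are assumptions), the following PyTorch sketch attaches a small discriminator to each decoder stage and sums an adversarial feature-matching loss over the hierarchy:

```python
# Illustrative sketch: per-stage discriminators align decoder features from the
# incomplete-shape path with those from the complete-shape path.
import torch
import torch.nn as nn

class StageDiscriminator(nn.Module):
    """Judges whether a stage-level feature comes from the complete-shape path."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.LeakyReLU(0.2),
                                 nn.Linear(128, 1))

    def forward(self, f):                      # f: (batch, dim)
        return self.net(f)

class MultiStageMatcher(nn.Module):
    """Adversarial matching applied at every decoder stage (hypothetical dims)."""
    def __init__(self, stage_dims=(512, 256, 128)):
        super().__init__()
        self.discs = nn.ModuleList([StageDiscriminator(d) for d in stage_dims])

    def matching_loss(self, feats_incomplete, feats_complete):
        # Least-squares GAN objective per stage, summed over the hierarchy.
        g_loss, d_loss = 0.0, 0.0
        for disc, f_in, f_co in zip(self.discs, feats_incomplete, feats_complete):
            d_loss = d_loss + ((disc(f_co.detach()) - 1) ** 2).mean() \
                            + (disc(f_in.detach()) ** 2).mean()
            g_loss = g_loss + ((disc(f_in) - 1) ** 2).mean()
        return g_loss, d_loss

# Dummy usage with random per-stage features from both paths:
matcher = MultiStageMatcher()
f_in = [torch.randn(4, d) for d in (512, 256, 128)]
f_co = [torch.randn(4, d) for d in (512, 256, 128)]
g_loss, d_loss = matcher.matching_loss(f_in, f_co)
```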

    Editorial on Special Issue “Techniques and Applications of UAV-Based Photogrammetric 3D Mapping”

    Recently, 3D mapping has begun to play an increasingly important role in photogrammetric applications [...]

    A Novel OpenMVS-Based Texture Reconstruction Method Based on the Fully Automatic Plane Segmentation for 3D Mesh Models

    The Markov Random Field (MRF) energy function constructed by existing OpenMVS-based 3D texture reconstruction algorithms considers only the image labels of adjacent triangle faces for the smoothness term and ignores the planar-structure information of the model. As a result, the generated texture charts have too many fragments, leading to serious local miscuts and color discontinuity between texture charts. This paper fully utilizes the planar structure information of the mesh model and the visual information of the 3D triangle faces on the images, and proposes an improved, faster, and higher-quality texture chart generation method based on the texture chart generation algorithm of OpenMVS. The methodology of the proposed approach is as follows: (1) The visual quality of each triangle face on its different visible images is scored using the visual information of the triangle face on each image of the mesh model. (2) A fully automatic Variational Shape Approximation (VSA) plane segmentation algorithm is used to segment the blocked 3D mesh models. The proposed fully automatic VSA-based plane segmentation algorithm is suitable for multi-threaded parallel processing, which removes the VSA framework's need to manually set the number of planes and overcomes its low computational efficiency on large scene models. (3) The visual quality of each triangle face on its different visible images is used as the data term, and the image labels of adjacent triangles together with the plane segmentation result are used as the smoothness term to construct the MRF energy function. (4) An image label is assigned to each triangle by minimizing the energy function. A texture chart is generated by clustering topologically adjacent triangle faces with the same image label, and the jagged boundaries of the texture charts are smoothed. Three sets of data of different types were used for quantitative and qualitative evaluation. Compared with the original OpenMVS texture chart generation method, the experiments show that the proposed approach significantly reduces the number of texture charts, significantly alleviates miscuts and color differences between texture charts, and greatly boosts the efficiency of the VSA plane segmentation algorithm and OpenMVS texture reconstruction.
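    To make the energy construction in steps (3) and (4) concrete, here is a minimal sketch of an MRF energy of the assumed form (the penalty weights and the exact way the plane segmentation enters the smoothness term are assumptions, not the paper's exact formulation); the actual label assignment would be obtained by minimizing it with a standard MRF solver:

```python
# Illustrative sketch: data term from per-face visual quality, smoothness term
# from face adjacency weighted by the VSA plane segmentation.
import numpy as np

def mrf_energy(labels, data_cost, adjacency, plane_id,
               lam=1.0, same_plane_penalty=4.0, diff_plane_penalty=1.0):
    """labels[f]       : image label assigned to triangle face f
       data_cost[f, l] : 1 - visual quality score of face f on image l (2D array)
       adjacency       : list of (f, g) pairs of adjacent faces
       plane_id[f]     : plane segment of face f from the VSA segmentation"""
    energy = sum(data_cost[f, labels[f]] for f in range(len(labels)))
    for f, g in adjacency:
        if labels[f] != labels[g]:
            # Cutting a chart inside one segmented plane is penalized more heavily,
            # discouraging fragmented charts and local miscuts.
            w = same_plane_penalty if plane_id[f] == plane_id[g] else diff_plane_penalty
            energy += lam * w
    return energy

# Tiny usage: two adjacent faces on the same plane, two candidate images.
data_cost = np.array([[0.1, 0.6],
                      [0.5, 0.2]])
print(mrf_energy([0, 1], data_cost, [(0, 1)], [0, 0]))   # 0.1 + 0.2 + 4.0
```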

    3D Mesh Pre-Processing Method Based on Feature Point Classification and Anisotropic Vertex Denoising Considering Scene Structure Characteristics

    3D mesh denoising plays an important role in 3D model pre-processing and repair. A fundamental challenge in the mesh denoising process is to accurately separate features from the noise and to preserve and restore the scene structure features of the model. In this paper, we propose a novel feature-preserving mesh denoising method based on robust guidance normal estimation, accurate feature point extraction, and an anisotropic vertex denoising strategy. The methodology of the proposed approach is as follows: (1) A dual weight function that takes the angle characteristics into account is used to estimate the guidance normals of the surface, which improves the reliability of the joint bilateral filtering algorithm and avoids losing corner structures. (2) The filtered facet normals are used to classify the feature points based on the normal voting tensor (NVT) method, which raises the accuracy and integrity of feature classification for the noisy model. (3) An anisotropic vertex update strategy is used in triangular mesh denoising: non-feature points are updated with isotropic neighborhood normals, which effectively prevents sharp edges from being smoothed away, while feature points are updated based on local geometric constraints, which preserves and restores features while avoiding sharp pseudo features. Detailed quantitative and qualitative analyses conducted on synthetic and real data show that our method can remove the noise of various mesh models and retain or restore the edge and corner features of the models without generating pseudo features.
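    The first stage of the pipeline, guided joint bilateral filtering of face normals, can be sketched as below (a generic formulation; the dual weight function and parameter values used in the paper are not reproduced here, so the spatial/range kernels and sigmas are assumptions):

```python
# Illustrative sketch: filter each face normal using its guidance normal and the
# normals of adjacent faces, weighted by spatial distance and guidance similarity.
import numpy as np

def bilateral_filter_normals(normals, guidance, centroids, neighbors,
                             sigma_s=1.0, sigma_r=0.35):
    """normals, guidance : (N, 3) unit face normals and guidance normals
       centroids         : (N, 3) face centroids
       neighbors[i]      : indices of faces adjacent to face i"""
    filtered = np.empty_like(normals)
    for i in range(len(normals)):
        acc = np.zeros(3)
        for j in neighbors[i]:
            w_s = np.exp(-np.sum((centroids[i] - centroids[j]) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-np.sum((guidance[i] - guidance[j]) ** 2) / (2 * sigma_r ** 2))
            acc += w_s * w_r * normals[j]
        filtered[i] = acc / (np.linalg.norm(acc) + 1e-12)   # re-normalize
    return filtered
```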

    Novel Adaptive Laser Scanning Method for Point Clouds of Free-Form Objects

    Laser scanners are widely used to collect the coordinates, also known as point clouds, of three-dimensional free-form objects. To create a solid model from a given point cloud and transfer the data from the model, feature-based optimization of the point cloud is required to minimize the number of points in the cloud. To solve this problem, existing methods mainly extract significant points based on local surface variation at a predefined level. However, comprehensively describing an object's geometric information using a predefined level is difficult, since an object usually has multiple levels of detail. Therefore, we propose a simplification method based on a multi-level strategy that adaptively determines the optimal level of points. For each level, significant points are extracted from the point cloud based on point importance, measured by both local surface variation and the distribution of neighboring significant points. Furthermore, the degradation of perceptual quality at each level is evaluated by the adjusted mesh structural distortion measurement to select the optimal level. Experiments are performed to evaluate the effectiveness and applicability of the proposed method, demonstrating a reliable solution for optimizing the adaptive laser scanning of point clouds of free-form objects.
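    The local surface variation ingredient of the point importance measure can be sketched with the standard covariance-eigenvalue formulation shown below (the neighborhood size and the exact combination with the neighboring-significant-point term used in the paper are assumptions):

```python
# Illustrative sketch: local surface variation per point,
# sigma = lambda_min / (lambda_0 + lambda_1 + lambda_2) over a k-neighborhood.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=16):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)            # k nearest neighbors per point
    sigma = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)            # 3x3 neighborhood covariance
        eigvals = np.sort(np.linalg.eigvalsh(cov))
        sigma[i] = eigvals[0] / (eigvals.sum() + 1e-12)
    return sigma                                # larger values = sharper local geometry
```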

    A Multi-View Dense Point Cloud Generation Algorithm Based on Low-Altitude Remote Sensing Images

    This paper presents a novel multi-view dense point cloud generation algorithm based on low-altitude remote sensing images. The proposed method is designed to be especially effective in enhancing the density of point clouds generated by Multi-View Stereo (MVS) algorithms. To overcome the limitations of MVS and dense matching algorithms, an expanded patch is set up for each point in the point cloud. Then, patch-based Multiphoto Geometrically Constrained Matching (MPGC) is employed to optimize the points on the patch based on least-squares adjustment, the space geometry relationship, and the epipolar line constraint. The major advantages of this approach are twofold: (1) compared with the MVS method, the proposed algorithm can achieve denser three-dimensional (3D) point cloud data; and (2) compared with the epipolar-based dense matching method, the proposed method utilizes redundant measurements to weaken the influence of occlusion and noise on matching results. Comparison studies and experimental results have validated the accuracy of the proposed algorithm in low-altitude remote sensing image dense point cloud generation.
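    The role of redundant multi-view measurements can be illustrated with a simple normalized cross-correlation consistency check over an expanded patch (this is only a stand-in for the MPGC least-squares adjustment itself; the threshold and aggregation are assumptions):

```python
# Illustrative sketch: score a patch's photometric consistency across the views
# in which it is visible, ignoring views that look occluded or too noisy.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def patch_consistency(ref_patch, other_patches, min_score=0.6):
    """ref_patch     : intensity patch from the reference image
       other_patches : the same patch reprojected into the other visible images"""
    scores = [ncc(ref_patch, p) for p in other_patches]
    good = [s for s in scores if s > min_score]     # redundant, unoccluded views
    return float(np.mean(good)) if good else -1.0
```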

    An Effective Image Denoising Method for UAV Images via Improved Generative Adversarial Networks

    Unmanned aerial vehicles (UAVs) are an inexpensive platform for collecting remote sensing images, but UAV images suffer from content loss caused by noise. To solve the noise problem of UAV images, this paper introduces a novel deep neural network method based on generative adversarial learning that learns the mapping between noisy and clean images. In our approach, a perceptual reconstruction loss is used to establish a loss equation that continuously optimizes a min-max game-theoretic model to obtain better UAV image denoising results. The denoised images generated by the proposed method have clearer ground-object edges and more detailed ground-object textures. In addition to the traditional comparison method, denoised UAV images and the corresponding original clean UAV images were used to perform image matching based on local features. A classification experiment on the denoised images was also conducted to compare the denoising results of UAV images with those of other methods. The proposed method achieved better results in these comparison experiments.
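    The combined objective can be sketched as an adversarial term plus a perceptual reconstruction term on deep features, as below (a small stand-in feature extractor is used so the snippet runs without pretrained weights; in practice a pretrained network such as VGG would normally supply the perceptual features, and the loss weighting is an assumption):

```python
# Illustrative sketch: generator loss = perceptual reconstruction loss + weighted
# adversarial loss from a discriminator on the denoised image.
import torch
import torch.nn as nn

# Stand-in feature extractor for the perceptual loss (a pretrained network would
# normally be used instead).
feature_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

def generator_loss(disc, denoised, clean, adv_weight=1e-3):
    perceptual = nn.functional.mse_loss(feature_net(denoised), feature_net(clean))
    logits = disc(denoised)
    adversarial = nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))            # try to fool the discriminator
    return perceptual + adv_weight * adversarial

# Dummy usage with a trivial discriminator and random images:
disc = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
denoised, clean = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
loss = generator_loss(disc, denoised, clean)
```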

    An Efficient Probabilistic Registration Based on Shape Descriptor for Heritage Field Inspection

    Heritage documentation is implemented by digitally recording historical artifacts for the conservation and protection of these cultural heritage objects. As efficient spatial data acquisition tools, laser scanners have been widely used to collect highly accurate three-dimensional (3D) point clouds without damaging the original structure or the environment. To ensure the integrity and quality of the collected data, field inspection (i.e., on-the-spot checking of data quality) should be carried out to determine the need for additional measurements (i.e., extra laser scanning for areas with quality issues such as missing data or quality degradation). To facilitate inspection of all collected point clouds, and especially to check quality issues in the overlaps between adjacent scans, all scans should be registered together. Thus, a point cloud registration method that can register scans quickly and robustly is required. To fulfill this aim, this study proposes an efficient probabilistic registration for free-form cultural heritage objects that integrates the proposed principal direction descriptor and curve constraints. We develop a novel shape descriptor based on a local frame of principal directions. Within the frame, density and distance feature images are generated to describe the shape of the local surface. We then embed the descriptor into a probabilistic framework to reject ambiguous matches. Spatial curves are integrated as constraints to delimit the solution space. Finally, a multi-view registration is used to refine the position and orientation of each scan for the field inspection. Comprehensive experiments show that the proposed method performs well in terms of rotation error, translation error, robustness, and runtime, and outperforms some commonly used approaches.
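    The descriptor's local frame and one of its feature images can be sketched as follows (the frame construction via neighborhood principal axes and the binning parameters are assumptions about the general idea, not the paper's exact definition):

```python
# Illustrative sketch: build a local frame from the principal directions of a
# point's neighborhood, then bin the neighbors into a normalized density image.
import numpy as np

def local_frame(neighbors):
    """Principal axes of the neighborhood as a 3x3 matrix (columns = axes)."""
    centered = neighbors - neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt.T

def density_image(point, neighbors, bins=8, radius=1.0):
    frame = local_frame(neighbors)
    local = (neighbors - point) @ frame             # coordinates in the local frame
    hist, _, _ = np.histogram2d(local[:, 0], local[:, 1], bins=bins,
                                range=[[-radius, radius], [-radius, radius]])
    return hist / max(hist.sum(), 1.0)              # normalized density feature image
```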

    Epipolar Rectification with Minimum Perspective Distortion for Oblique Images

    Epipolar rectification is of great importance for 3D modeling using UAV (Unmanned Aerial Vehicle) images; however, existing methods seldom consider the perspective distortion relative to surface planes. Therefore, an algorithm for the rectification of oblique images is proposed and implemented in detail. The basic principle is to minimize the rectified images' perspective distortion relative to the reference planes. First, this minimization problem is formulated as a cost function constructed from the tangent value of the angle deformation; second, the method provides a great deal of flexibility in using different reference planes, such as roofs and the façades of buildings, to generate rectified images. Furthermore, a reasonable scale is acquired according to the dihedral angle between the rectified image plane and the original image plane. The low-quality regions of oblique images are cropped out according to the distortion size. Experimental results revealed that the proposed rectification method improves matching precision (semi-global dense matching). The matching precision is increased by about 30% for roofs but by only 1% for façades when the façades are not parallel to the baseline. In another experiment, in which the selected façades are parallel to the baseline, the matching precision for façades improves greatly, by an average of 22%. This fully demonstrates that eliminating perspective distortion in rectified images can significantly improve the accuracy of dense matching.
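    One way to read the "tangent value of angle deformation" cost is sketched below: a right angle drawn on the reference plane is mapped through a candidate rectifying homography, and the tangent of its deviation from 90 degrees is the per-sample cost to be minimized (the homography H and this particular construction are assumptions for illustration, not the paper's exact cost function):

```python
# Illustrative sketch: tangent of the angle deformation of a reference-plane
# right angle under a rectifying homography H.
import numpy as np

def apply_h(H, p):
    q = H @ np.append(p, 1.0)                       # homogeneous transform
    return q[:2] / q[2]

def angle_deformation_cost(H, corner, dir_u, dir_v, eps=1e-2):
    """corner, dir_u, dir_v: a point on the reference plane (image coordinates)
       and two orthogonal in-plane directions."""
    c = apply_h(H, corner)
    u = apply_h(H, corner + eps * dir_u) - c
    v = apply_h(H, corner + eps * dir_v) - c
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    deform = abs(np.pi / 2 - np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    return np.tan(deform)

# The identity homography leaves the right angle intact, so the cost is ~0:
print(angle_deformation_cost(np.eye(3), np.array([10.0, 20.0]),
                             np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```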