
    High Resolution Surface Reconstruction of Cultural Heritage Objects Using Shape from Polarization Method

    Nowadays, three-dimensional reconstruction is used in various fields such as computer vision, computer graphics, mixed reality and digital twins. The three-dimensional reconstruction of cultural heritage objects is one of the most important applications in this area and is usually accomplished by close-range photogrammetry. The problem is that the images are often noisy, and dense image matching has significant limitations in reconstructing the geometric details of cultural heritage objects in practice. Displaying high-level detail in three-dimensional models, especially of cultural heritage objects, is therefore a serious challenge in this field. In this paper, the shape from polarization method is investigated: a passive method that avoids the drawbacks of active methods. In this method, the resolution of the depth maps can be dramatically increased using information obtained from polarized light by rotating a linear polarizing filter in front of a digital camera. From these polarized images, the surface details of the object can be reconstructed locally with high accuracy. The fusion of the polarization and photogrammetric methods is an appropriate solution for achieving high-resolution three-dimensional reconstruction. The surface reconstruction was assessed visually and quantitatively. The evaluations showed that the proposed method reconstructs surface details in the three-dimensional model significantly better than the photogrammetric method, with 10 times higher depth resolution.
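
    A minimal sketch of the polarization analysis this method builds on, assuming four intensity images captured at polarizer angles of 0, 45, 90 and 135 degrees (the function and variable names are illustrative, not the paper's code):

        import numpy as np

        def polarization_maps(i0, i45, i90, i135):
            """Stokes parameters -> degree and angle of linear polarization."""
            s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
            s1 = i0 - i90                       # 0/90 degree difference
            s2 = i45 - i135                     # 45/135 degree difference
            dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
            aolp = 0.5 * np.arctan2(s2, s1)
            return dolp, aolp

    The angle of polarization constrains the azimuth of the surface normal and the degree of polarization its zenith angle, which is what allows surface detail to be recovered locally and fused with the photogrammetric model.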

    Geometric calibration of the full spherical panoramic Ricoh Theta camera

    A novel calibration process is proposed for the RICOH THETA, a full-view fisheye camera with numerous applications as a low-cost sensor in disciplines such as photogrammetry, robotics and machine vision. Ricoh developed this camera in 2014; it consists of two lenses and is able to capture the whole surrounding environment in one shot. In this research, each lens is calibrated separately, and the interior and relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a designed calibration network using the central and side images captured by the two lenses. The calibration network is treated as a free distortion grid and applied to the measured control points in image space as correction terms by means of bilinear interpolation. After these corrections, the image coordinates are transformed to a unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. Afterwards, the IOPs and exterior orientation parameters (EOPs) of each lens are determined separately through a statistical bundle adjustment based on the collinearity condition equations. The ROPs of the two lenses are then computed from the two sets of EOPs. Our experiments show that by applying a 3×3 free distortion grid, image measurement residuals on the unit sphere diminish from 1.5 to 0.25 degrees.
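
    A minimal sketch of the bilinear interpolation step described above, assuming a coarse grid of correction terms spanning the image (the grid contents and names are hypothetical):

        import numpy as np

        def grid_correction(x, y, grid, width, height):
            """Bilinearly interpolate a (rows x cols) correction grid at (x, y)."""
            rows, cols = grid.shape
            gx = x / width * (cols - 1)    # point position in grid units
            gy = y / height * (rows - 1)
            c0, r0 = int(np.floor(gx)), int(np.floor(gy))
            c1, r1 = min(c0 + 1, cols - 1), min(r0 + 1, rows - 1)
            fx, fy = gx - c0, gy - r0
            return ((1 - fx) * (1 - fy) * grid[r0, c0]
                    + fx * (1 - fy) * grid[r0, c1]
                    + (1 - fx) * fy * grid[r1, c0]
                    + fx * fy * grid[r1, c1])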

    Using pixel-based and object-based methods to classify urban hyperspectral features

    Object-based image analysis methods have been developed recently and have become a very active research topic in the remote sensing community, mainly because researchers have begun to study the spatial structures within the data, whereas pixel-based methods use only its spectral content. To evaluate the applicability of object-based image analysis methods for extracting land-cover information from hyperspectral data, a comprehensive comparative analysis was performed. In this study, six supervised classification methods were selected from the pixel-based category: maximum likelihood (ML), Fisher linear likelihood (FLL), support vector machine (SVM), binary encoding (BE), spectral angle mapper (SAM) and spectral information divergence (SID). The classifiers were applied to several features extracted from the original spectral bands in order to avoid the Hughes phenomenon and to obtain a sufficient number of training samples. Three supervised and four unsupervised feature extraction methods were used. Pixel-based classification was conducted in the first step of the proposed algorithm, and the effective feature number (EFN) was then obtained. Image objects were thereafter created using the fractal net evolution approach (FNEA), the segmentation method implemented in the eCognition software, and several experiments were carried out to find the best segmentation parameters. The classification accuracy of these objects was compared with that of the pixel-based methods. In these experiments, the Pavia University Campus hyperspectral dataset was used, collected by the ROSIS sensor over an urban area in Italy. The results reveal that for any combination of feature extraction and classification methods, the object-based methods performed better than the pixel-based ones. Furthermore, statistical analysis of the results shows that, on average, the object-based methods improve classification accuracy by almost 8 percent.
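
    As a concrete instance of the pixel-based category, here is a minimal sketch of the spectral angle mapper: each pixel is assigned to the class whose reference spectrum subtends the smallest angle with it (array shapes and names are illustrative):

        import numpy as np

        def sam_classify(pixels, references):
            """pixels: (n, bands); references: (classes, bands) mean spectra."""
            p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
            r = references / np.linalg.norm(references, axis=1, keepdims=True)
            angles = np.arccos(np.clip(p @ r.T, -1.0, 1.0))  # (n, classes)
            return np.argmin(angles, axis=1)                 # smallest angle wins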

    DEVELOPMENT OF A VOXEL BASED LOCAL PLANE FITTING FOR MULTI-SCALE REGISTRATION OF SEQUENTIAL MLS POINT CLOUDS

    The Mobile Laser Scanner (MLS) is one of the most accurate and fastest data acquisition systems for mapping indoor and outdoor environments. To use this system in indoor environments, where GNSS data cannot be captured, Simultaneous Localization and Mapping (SLAM) is used. Most SLAM research has used probabilistic approaches to determine the sensor position and create a map, which, owing to their uncertainty, leads to drift error in the final result. In addition, most SLAM methods give little weight to geometry and mapping concepts. This research aims to solve the SLAM problem by incorporating adjustment concepts from mapping and the geometrical principles of the environment, and proposes an algorithm for reducing drift. For this purpose, a model-based registration is suggested: through voxelization, corresponding points fall in the same voxel, and the registration is performed using a plane model. Two registration methods are proposed, a pyramid method and a simple one. The results show that the simple registration algorithm is more efficient than the pyramid method when the distance between sequential scans is not large; otherwise, the pyramid registration is used. In the evaluation on simulated data, the pyramid and simple methods achieved 96.9% and 97.6% accuracy, respectively. The final test compares the proposed method with a SLAM method and the ICP algorithm, which are described further.
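
    A minimal sketch of the voxel-based local plane fitting at the core of this registration, assuming an (n, 3) point array; the plane normal is taken as the singular vector of the smallest singular value (names and voxel size are illustrative):

        import numpy as np
        from collections import defaultdict

        def voxel_planes(points, voxel_size):
            """Returns {voxel index: (centroid, unit normal)} per occupied voxel."""
            voxels = defaultdict(list)
            for p in points:
                voxels[tuple(np.floor(p / voxel_size).astype(int))].append(p)
            planes = {}
            for idx, pts in voxels.items():
                pts = np.asarray(pts)
                if len(pts) < 3:
                    continue  # a plane needs at least three points
                centroid = pts.mean(axis=0)
                _, _, vt = np.linalg.svd(pts - centroid)
                planes[idx] = (centroid, vt[-1])
            return planes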

    AUTOMATIC EXTRACTION OF CONTROL POINTS FROM 3D LIDAR MOBILE MAPPING AND UAV IMAGERY FOR AERIAL TRIANGULATION

    Installing targets and measuring them as ground control points (GCPs) are time-consuming and costly tasks in a UAV photogrammetry project. This research aims to extract GCPs automatically from 3D LiDAR mobile mapping system (L-MMS) measurements and UAV imagery in order to perform aerial triangulation in a UAV photogrammetric network. The L-MMS acquires 3D point clouds of an urban environment, including the floors and facades of buildings, with an accuracy of a few centimetres. Integrating UAV imagery as complementary information reduces the acquisition time and increases the level of automation in a production line, yielding higher-quality measurements and more diverse products. This research hypothesises that the spatial accuracy of the L-MMS is higher than that of the UAV photogrammetric point clouds. Tie points are extracted from the UAV imagery with the well-known SIFT method and then matched, and the structure from motion (SfM) algorithm is applied to estimate the 3D object coordinates of the matched tie points. Rigid registration is carried out between the point clouds obtained from the L-MMS and from SfM. For each tie point in the SfM point cloud, its neighbouring points are selected from the L-MMS point cloud, a plane is fitted to them, and the tie point is projected onto the plane; this is how the LiDAR-based control points (LCPs) are calculated. The re-projection errors of the analyses carried out on a test dataset of the Glian area in Iran show an accuracy of half a pixel, corresponding to a range accuracy of a few centimetres. Finally, survey operations are significantly sped up while the spatial accuracy of the extracted LCPs is improved.
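
    A minimal sketch of the LCP computation just described: a plane is fitted to the L-MMS neighbours of an SfM tie point and the tie point is projected orthogonally onto it (function name and shapes are illustrative):

        import numpy as np

        def lidar_control_point(tie_point, neighbours):
            """tie_point: (3,); neighbours: (k, 3) nearby L-MMS points."""
            centroid = neighbours.mean(axis=0)
            _, _, vt = np.linalg.svd(neighbours - centroid)
            normal = vt[-1]                                # plane normal via SVD
            offset = np.dot(tie_point - centroid, normal)  # signed plane distance
            return tie_point - offset * normal             # orthogonal projection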

    SURFACE NORMAL RECONSTRUCTION USING POLARIZATION-UNET

    Today, three-dimensional reconstruction of objects has many applications in various fields, so choosing a suitable method for high-resolution three-dimensional reconstruction is an important issue, and displaying high-level detail in three-dimensional models is a serious challenge in this field. Until now, active methods have been used for high-resolution three-dimensional reconstruction, but they require a light source close to the object. Shape from polarization (SfP) is one of the best solutions for high-resolution three-dimensional reconstruction of objects: it is a passive method and does not have the drawbacks of active methods. The changes in the polarization of light reflected from an object can be analyzed using a polarization camera, or by placing a polarizing filter in front of a digital camera and rotating the filter. Using this information, the surface normals can be reconstructed with high accuracy, which leads to a local reconstruction of the surface details. In this paper, an end-to-end deep learning approach is presented to produce the surface normals of objects. A benchmark dataset is used to train the neural network and evaluate the results. The results are evaluated quantitatively and qualitatively against other methods and under different lighting conditions, using the mean angular error (MAE). The evaluations showed that the proposed method can accurately reconstruct the surface normals of objects with the lowest MAE, equal to 18.06 degrees over the whole dataset, compared with previous physics-based methods, which range between 41.44 and 49.03 degrees.
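
    A minimal sketch of the evaluation metric used above, the mean angular error between predicted and ground-truth normals (the function name is illustrative):

        import numpy as np

        def mean_angular_error(pred, gt):
            """pred, gt: (n, 3) surface normal arrays. Returns MAE in degrees."""
            pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
            gt = gt / np.linalg.norm(gt, axis=1, keepdims=True)
            cos = np.clip(np.sum(pred * gt, axis=1), -1.0, 1.0)
            return np.degrees(np.arccos(cos)).mean()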

    Range Camera Self-Calibration Based on Integrated Bundle Adjustment via Joint Setup with a 2D Digital Camera

    Time-of-flight cameras based on Photonic Mixer Device (PMD) technology are capable of measuring distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via a joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and exterior orientation parameters of the camera. The calibration approach is based on a photogrammetric bundle adjustment of observation equations originating from the collinearity condition and a range error model. Adding a digital camera to the calibration process overcomes the range camera's limitations of small field of view and low pixel resolution. The tests were performed on a dataset captured by a PMD[vision]-O3 camera from a multi-resolution test field of high-contrast targets. In an independent accuracy assessment, an average improvement of 83% in the RMS of the range error and 72% in the RMS of the coordinate residuals was realized over basic calibration. Our proposed calibration method also achieved 25% and 36% improvements in the RMS of the range error and the coordinate residuals, respectively, over integrated calibration of the single PMD camera.
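
    The abstract does not spell out the range error model, so the following is only a sketch of a typical ToF systematic range correction of this kind, with a constant offset, a linear scale term and a periodic "wiggling" term; every term and coefficient here is an assumption, not the paper's model:

        import numpy as np

        def correct_range(r, d0, d1, d2, d3, wavelength):
            """Subtract an assumed systematic error model from measured ranges r."""
            error = (d0 + d1 * r                                # offset and scale
                     + d2 * np.sin(4 * np.pi * r / wavelength)  # periodic wiggling
                     + d3 * np.cos(4 * np.pi * r / wavelength))
            return r - error

    In a self-calibration of the kind described above, coefficients like these would be estimated jointly with the orientation parameters inside the bundle adjustment.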

    Pareto optimality solution of the multi-objective photogrammetric resection-intersection problem

    Reconstruction of architectural structures from photographs has recently seen intensive effort in computer vision research, achieved through the solution of nonlinear least squares (NLS) problems to obtain accurate structure and motion estimates. In photogrammetry, NLS contributes to the determination of 3-dimensional (3D) terrain models from photographs. The traditional NLS approach to the resection-intersection problem, based on an implicit formulation, suffers from the lack of any provision for weighting the involved variables. On the other hand, an explicit formulation expresses the objectives to be minimized in different forms, resulting in different parameter estimates at non-zero residuals. Sometimes these objectives conflict in a Pareto sense: a small change in the parameters increases one objective and decreases the other, as is often the case in multi-objective problems. Such is often the case with error-in-all-variables (EIV) models, e.g., in the resection-intersection problem, where such a change in the parameters can be caused by errors in both the image and the reference coordinates. This study proposes the Pareto optimal approach as a possible improvement to the solution of the resection-intersection problem: it provides simultaneous estimation of the coordinates and orientation parameters of the cameras in a two- or multi-station camera system on the basis of a properly weighted multi-objective function. This objective represents the weighted sum of the squares of the direct explicit differences between the measured and computed ground coordinates as well as the image coordinates. The effectiveness of the proposed method is demonstrated on two camera calibration problems, where the internal and external orientation parameters are estimated on the basis of the collinearity equations, employing the data of a Manhattan-type test field as well as the data of an outdoor, real-case experiment. In addition, an architectural reconstruction of the Merton College court in Oxford (UK) via estimation of camera matrices is also presented. Although these two problems differ, the first considering the error reduction of the image and spatial coordinates and the second the precision of the space coordinates, Pareto optimality can handle both in a general and flexible way.
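
    A minimal sketch of the weighted-sum scalarization this approach rests on, with hypothetical residual functions standing in for the image-space and object-space objectives; sweeping the weight w over (0, 1) traces candidate Pareto-optimal solutions:

        import numpy as np
        from scipy.optimize import least_squares

        def weighted_residuals(params, w, image_residuals, ground_residuals):
            """Stack the two objectives so their weighted squares are minimized."""
            return np.concatenate((np.sqrt(w) * image_residuals(params),
                                   np.sqrt(1.0 - w) * ground_residuals(params)))

        def pareto_scan(x0, image_residuals, ground_residuals, weights):
            """Solve the scalarized problem for each weight in the sweep."""
            return [least_squares(weighted_residuals, x0,
                                  args=(w, image_residuals, ground_residuals)).x
                    for w in weights]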