
    Geometric calibration of full spherical panoramic ricoh-theta camera

    A novel calibration process is proposed for the RICOH-THETA, a full-view fisheye camera with numerous applications as a low-cost sensor in disciplines such as photogrammetry, robotics, and machine vision. Ricoh developed this camera in 2014; it consists of two lenses and captures the whole surrounding environment in one shot. In this research, each lens is calibrated separately, and the interior and relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a designed calibration network using the central and side images captured by the two lenses. The calibration network is treated as a free distortion grid and applied to the measured control points in image space as correction terms by means of bilinear interpolation. After these corrections, the image coordinates are transformed to the unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. The IOPs and EOPs of each lens are then determined separately through a statistical bundle adjustment based on collinearity condition equations, and the ROPs of the two lenses are computed from the two sets of EOPs. Our experiments show that applying a 3×3 free distortion grid reduces the image measurement residuals on the unit sphere from 1.5 to 0.25 degrees.
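
    As an illustration of the two correction steps described above, the following minimal Python sketch (not the authors' code) applies a bilinearly interpolated 3×3 free distortion grid to an image measurement and then maps the corrected coordinate to the unit sphere; the equidistant fisheye model, focal length, and all variable names are assumptions.

        import numpy as np

        def bilinear_correction(grid, u, v, img_w, img_h):
            """Interpolate a (du, dv) correction from a 3x3 free distortion grid.
            grid: (3, 3, 2) array of corrections at the grid nodes (assumed layout)."""
            gx = u / img_w * (grid.shape[1] - 1)      # position in grid units
            gy = v / img_h * (grid.shape[0] - 1)
            i = min(int(gy), grid.shape[0] - 2)       # lower-left grid cell
            j = min(int(gx), grid.shape[1] - 2)
            fy, fx = gy - i, gx - j
            corr = ((1 - fx) * (1 - fy) * grid[i, j]
                    + fx * (1 - fy) * grid[i, j + 1]
                    + (1 - fx) * fy * grid[i + 1, j]
                    + fx * fy * grid[i + 1, j + 1])
            return u + corr[0], v + corr[1]

        def to_unit_sphere(u, v, cx, cy, f):
            """Equidistant fisheye model (an assumption): the radial image
            distance maps linearly to the polar angle on the unit sphere."""
            du, dv = u - cx, v - cy
            theta = np.hypot(du, dv) / f              # polar angle (rad)
            phi = np.arctan2(dv, du)                  # azimuth
            return np.array([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(theta)])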

    Using pixel-based and object-based methods to classify urban hyperspectral features

    Object-based image analysis methods have been developed recently and have become a very active research topic in the remote sensing community, mainly because researchers have begun to exploit the spatial structures within the data, whereas pixel-based methods use only its spectral content. To evaluate the applicability of object-based image analysis methods for land-cover information extraction from hyperspectral data, a comprehensive comparative analysis was performed. In this study, six supervised classification methods were selected from the pixel-based category: maximum likelihood (ML), Fisher linear likelihood (FLL), support vector machine (SVM), binary encoding (BE), spectral angle mapper (SAM), and spectral information divergence (SID). The classifiers were applied to several features extracted from the original spectral bands in order to avoid the Hughes phenomenon and to obtain a sufficient number of training samples; three supervised and four unsupervised feature extraction methods were used. Pixel-based classification was conducted in the first step of the proposed algorithm, and the effective feature number (EFN) was obtained. Image objects were then created using the fractal net evolution approach (FNEA), the segmentation method implemented in the eCognition software, and several experiments were carried out to find the best segmentation parameters. The classification accuracy of these objects was compared with that of the pixel-based methods. In these experiments, the Pavia University Campus hyperspectral dataset, collected by the ROSIS sensor over an urban area in Italy, was used. The results reveal that, for any combination of feature extraction and classification methods, the object-based methods performed better than the pixel-based ones. Furthermore, the statistical analysis of the results shows an average improvement of almost 8 percent in classification accuracy when the object-based methods are used.
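
    The pixel-based stage can be sketched as follows: unsupervised feature extraction (PCA, one plausible choice among the unsupervised options) followed by one of the six listed classifiers (SVM here), using scikit-learn. Array names and shapes are illustrative assumptions; the object-based FNEA stage runs in eCognition and is not reproduced.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC

        def pixel_based_svm(cube, train_mask, train_labels, n_features=10):
            """cube: (rows, cols, bands) hyperspectral image.
            train_mask: boolean (rows, cols) selecting training pixels.
            train_labels: class id per selected training pixel."""
            rows, cols, bands = cube.shape
            pixels = cube.reshape(-1, bands)
            # Reduce dimensionality first to mitigate the Hughes phenomenon.
            feats = PCA(n_components=n_features).fit_transform(pixels)
            clf = SVC(kernel="rbf").fit(feats[train_mask.ravel()], train_labels)
            return clf.predict(feats).reshape(rows, cols)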

    Range Camera Self-Calibration Based on Integrated Bundle Adjustment via Joint Setup with a 2D Digital Camera

    Time-of-flight cameras based on Photonic Mixer Device (PMD) technology are capable of measuring distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via a joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and exterior orientation parameters of the camera. The calibration approach is based on a photogrammetric bundle adjustment of observation equations originating from the collinearity condition and a range error model. Adding a digital camera to the calibration process overcomes the limitations imposed by the small field of view and low pixel resolution of the range camera. The tests are performed on a dataset captured by a PMD[vision]-O3 camera from a multi-resolution test field of high-contrast targets. An average improvement of 83% in the RMS of the range error and 72% in the RMS of the coordinate residuals, over that achieved with basic calibration, was realized in an independent accuracy assessment. Our proposed calibration method also achieved 25% and 36% improvements in the RMS of the range error and the coordinate residuals, respectively, over those obtained by integrated calibration of the single PMD camera.
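
    For intuition, the sketch below shows the kind of systematic range error model that such a self-calibration estimates jointly with the orientation parameters. The constant/linear/cyclic form and the modulation wavelength are common choices for ToF cameras and are assumptions here, not necessarily the paper's exact model.

        import numpy as np
        from scipy.optimize import least_squares

        def range_correction(rho, params, wavelength=7.5):
            """Corrected range for a raw measurement rho (metres).
            d0: rangefinder offset, d1: scale, (d2, d3): cyclic error terms
            tied to the assumed modulation wavelength of the camera."""
            d0, d1, d2, d3 = params
            return (rho + d0 + d1 * rho
                    + d2 * np.sin(4 * np.pi * rho / wavelength)
                    + d3 * np.cos(4 * np.pi * rho / wavelength))

        def residuals(params, rho_obs, rho_true):
            return range_correction(rho_obs, params) - rho_true

        # Fit the error parameters against reference ranges to known targets:
        # sol = least_squares(residuals, x0=np.zeros(4), args=(rho_obs, rho_true))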

    Imagery Network Fine Registration by Reference Point Cloud Data Based on the Tie Points and Planes

    Cameras and laser scanners are complementary tools for 2D/3D information generation. Systematic and random errors cause misalignment between multi-sensor imagery and point cloud data. In this paper, a novel feature-based approach is proposed for fine registration of imagery and point clouds. Tie points and their two neighboring pixels are matched in the overlapping images and intersected in object space to create a differential tie plane. A preprocessing step is applied to the corresponding tie points, and non-robust ones are removed. Initial coarse Exterior Orientation Parameters (EOPs), Interior Orientation Parameters (IOPs), and Additional Parameters (APs) are used to transform the tie plane points into object space. The nearest points of the point cloud to the transformed tie plane points are then estimated and used to calculate the Directional Vectors (DVs) of the differential planes. As a constraint equation alongside the collinearity equation, each object-space tie point is forced to lie on the point cloud differential plane. Two experimental datasets, one indoor and one outdoor, are used to assess the proposed approach. The results show errors of about 2.5 pixels on checkpoints, demonstrating the robustness and practicality of the proposed approach.
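
    The geometric core can be sketched as follows: the nearest point-cloud points to the three transformed tie plane points define the directional vectors of the differential plane, and the constraint drives the tie point's distance from that plane to zero. The function name and the use of SciPy's KD-tree are illustrative assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def plane_constraint_residual(tie_pts_obj, cloud):
            """tie_pts_obj: (3, 3) array -- a tie point and its two neighbour
            pixels intersected into object space. Returns the signed distance
            of the tie point from the local differential plane."""
            tree = cKDTree(cloud)
            _, idx = tree.query(tie_pts_obj)   # nearest cloud point per tie point
            p0, p1, p2 = cloud[idx]
            # Directional vectors of the differential plane and its normal.
            n = np.cross(p1 - p0, p2 - p0)
            n /= np.linalg.norm(n)
            return float(n @ (tie_pts_obj[0] - p0))  # driven to zero in adjustment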

    Extended Hybrid Region Growing Segmentation of Point Clouds with Different Resolution from Dense Aerial Image Matching

    In recent years, 3D city reconstruction has been one of the active research topics in photogrammetry. The goal of this work is to improve and extend region-growing-based segmentation in the X-Y-Z image, a form of 3D structured data, in combination with the spectral information of RGB and grayscale images, in order to extract building roofs, streets, and vegetation. To process the 3D point clouds, hybrid segmentation is carried out in both object space and image space. Our experiments on two case studies verify that updating the plane parameters and using robust least squares plane fitting improve the results of building extraction, especially in the case of low-accuracy point clouds. In addition, region growing in image space showed that the grayscale image is more flexible than the RGB image and yields more realistic building roofs.
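
    A toy version of the object-space step might look like the sketch below: neighbours join a region while they stay within a distance threshold of the region's plane, and the plane parameters are updated as the region grows. This sketch uses plain least-squares (SVD) plane fitting rather than the paper's robust variant, and the threshold and 4-neighbourhood are assumptions.

        import numpy as np
        from collections import deque

        def grow_region(xyz, seed, dist_thresh=0.05):
            """xyz: (rows, cols, 3) structured point cloud; seed: (row, col)."""
            rows, cols, _ = xyz.shape
            region, queue, pts = {seed}, deque([seed]), [xyz[seed]]
            while queue:
                r, c = queue.popleft()
                if len(pts) >= 3:
                    # Update the plane parameters as the region grows: the SVD
                    # of the centred points gives the least-squares plane normal.
                    P = np.asarray(pts)
                    centroid = P.mean(axis=0)
                    normal = np.linalg.svd(P - centroid)[2][-1]
                else:
                    centroid, normal = xyz[seed], None
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nb = (r + dr, c + dc)
                    if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                            and nb not in region
                            and (normal is None
                                 or abs(normal @ (xyz[nb] - centroid)) < dist_thresh)):
                        region.add(nb)
                        queue.append(nb)
                        pts.append(xyz[nb])
            return region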

    A New Tie Plane-Based Method for Fine Registration of Imagery and Point Cloud Dataset

    Today, both point cloud and imagery datasets are processed for mapping purposes. The precise fusion of the two datasets is a major issue that leads to the fine registration problem. This article proposes a fine registration method based on a novel tie plane concept. Our solution assumes that the laser scanner point cloud is much more accurate than the interior and exterior geometry of the image network; in effect, the inaccurate image network is registered to the accurate point cloud data. To do this, tie points are extracted from the images. The fine registration then begins by filtering out unstable tie points as a preprocessing phase. Subsequently, tie planes are reconstructed around the remaining tie points by photogrammetric space intersection and locally fitted to the point cloud data via both normal and directional vectors. Afterward, a novel combined bundle adjustment is developed based on the conventional tie point equations and the new tie plane constraints, refining the interior and exterior orientation parameters. To evaluate our solution, experiments are conducted on both indoor and outdoor datasets. The results show a registration error below 1.6 pixels for both datasets, indicating an average accuracy improvement of roughly 23% to 40% over existing methods.
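
    Schematically, the combined adjustment stacks the conventional collinearity residuals with the tie plane constraint residuals and minimises them jointly. In the sketch below, project is a hypothetical collinearity projection function, and only the stacking pattern is meant to be indicative, not the paper's actual parameterisation.

        import numpy as np
        from scipy.optimize import least_squares

        def combined_residuals(params, image_obs, plane_constraints, project):
            """params: flattened IOPs/EOPs being refined.
            image_obs: list of (measured_xy, object_point) tie point observations.
            plane_constraints: list of (normal, point_on_plane, tie_point_xyz).
            project: hypothetical collinearity projection, xy = project(params, X)."""
            res = []
            for xy, X in image_obs:
                res.extend(xy - project(params, X))   # conventional tie point terms
            for n, p0, X in plane_constraints:
                res.append(n @ (X - p0))              # tie plane constraint terms
            return np.asarray(res)

        # sol = least_squares(combined_residuals, x0,
        #                     args=(image_obs, plane_constraints, project))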

    Evacuation planning using multiobjective evolutionary optimization approach

    In an emergency, evacuation is conducted to move people from a dangerous place to a safer one, and it usually needs to be done in a hurry. Evacuation plans must be prepared in advance to enable a good response in an emergency. A central challenge in developing an evacuation plan is determining the distribution of evacuees into the safe areas, that is, deciding where and by which road each evacuee should go. To achieve this, several objective functions must be considered and satisfied simultaneously, even though they may conflict with each other. This paper addresses the use of multiobjective evolutionary algorithms (MOEA) and geographical information systems (GIS) for evacuation planning. It proposes a three-step approach and explains that the last step, the distribution of evacuees into the safe areas, is a spatial multiobjective optimization problem (MOP), because the objective functions and the data required for solving the problem have a spatial component. To solve the MOP, two objective functions are defined, different algorithms are investigated, and the most suitable one is selected. Finally, in the context of a case study project and based on the proposed approach and algorithm, evacuation planning is conducted in a GIS environment and the results are tested. This paper is based on an ongoing research project in Iran. Keywords: disaster management, evacuation planning, MOEA, NSGA-II, GIS.
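
    The selection principle behind NSGA-II can be illustrated with a toy evacuee-distribution model: candidate zone-to-shelter assignments are scored on two assumed objectives (total travel and shelter overload), and the Pareto-optimal subset is kept. The data model and objective functions are illustrative, not the project's actual formulation.

        import numpy as np

        def objectives(assignment, dist, pop, capacity):
            """assignment: shelter index per zone; dist: (zones, shelters) travel
            distances; pop: evacuees per zone; capacity: shelter capacities."""
            travel = (dist[np.arange(len(assignment)), assignment] * pop).sum()
            load = np.bincount(assignment, weights=pop, minlength=len(capacity))
            overload = np.clip(load - capacity, 0, None).sum()
            return travel, overload

        def pareto_front(scores):
            """Indices of non-dominated solutions (both objectives minimised)."""
            return [i for i, si in enumerate(scores)
                    if not any(sj[0] <= si[0] and sj[1] <= si[1] and sj != si
                               for j, sj in enumerate(scores) if j != i)]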

    Deep Learning-Based Change Detection Method for Environmental Change Monitoring Using Sentinel-2 Datasets

    Change detection (CD) is an essential tool for accurately understanding land surface changes from Earth observation data and is extremely important for detecting the interactions between social and natural occurrences in geoscience. Binary change detection aims to separate changed from unchanged areas, and improving the quality of the binary CD map is an important issue in remote sensing. In this paper, a supervised deep learning (DL)-based change detection method is proposed to generate an accurate change map. Owing to its good performance and great potential in pattern recognition and nonlinear problem modeling, DL is becoming a popular way to solve the CD problem with multitemporal remote sensing imagery. The purpose of using DL algorithms, and especially convolutional neural networks (CNNs), is to classify environmental change into change and no-change classes. The Onera Satellite Change Detection (OSCD) datasets were used to evaluate the proposed method. Experimental results on this real dataset show the effectiveness of the proposed algorithm: the overall accuracy of the change map exceeds 95% and the kappa coefficient is close to one.
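
    A minimal early-fusion variant of such a network can be sketched in PyTorch: the two Sentinel-2 acquisitions are stacked along the channel axis and a small fully convolutional network predicts a change logit per pixel. The architecture is illustrative and much smaller than what the paper evaluates on OSCD.

        import torch
        import torch.nn as nn

        class ChangeNet(nn.Module):
            def __init__(self, bands=13):            # 13 Sentinel-2 bands per date
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(2 * bands, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 1),             # change logit per pixel
                )

            def forward(self, t1, t2):
                # Early fusion: concatenate the two dates along channels.
                return self.net(torch.cat([t1, t2], dim=1))

        # Training step with binary cross-entropy on (batch, bands, H, W) pairs
        # and a (batch, 1, H, W) binary change mask:
        # model = ChangeNet()
        # loss = nn.BCEWithLogitsLoss()(model(t1, t2), change_mask.float())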