420 research outputs found

    Detection of inconsistencies in geospatial data with geostatistics

    Almost every researcher has encountered observations that “drift” from the rest of the sample, suggesting some inconsistency. This paper proposes a new method for detecting inconsistent data in continuous geospatial datasets, based on Geostatistics and independent of the generating cause (measurement and execution errors, or the data's inherent variability). Geostatistics was chosen for its desirable properties, such as avoiding systematic errors. A new detection method matters because several existing methods applied to geospatial data rest on theoretical assumptions that are rarely satisfied in practice. Likewise, the choice of dataset reflects the importance of LiDAR (Light Detection and Ranging) technology in the production of Digital Elevation Models (DEMs). With the new methodology it was possible to detect and map discrepant data. Compared with a widely used detection method, the boxplot, the importance and usefulness of the new method was confirmed, since the boxplot flagged no data as discrepant. The proposed method identified, on average, 1.2% of the data as possible regionalized lower outliers and, on average, 1.4% as possible regionalized upper outliers, relative to the dataset used in the study.
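As a rough sketch of the idea (not the paper's actual geostatistical estimator), the following Python function flags observations that deviate strongly from their spatial neighbourhood; the function name, the neighbourhood radius, and the 3-sigma cutoff are illustrative assumptions:

```python
import numpy as np

def flag_spatial_outliers(coords, values, radius=1.5, k=3.0):
    """Flag observations that deviate strongly from their spatial neighbours.

    A simplified stand-in for a geostatistical check: for each point, the
    value is compared to the mean of values within `radius`; the point is
    flagged when the standardized difference exceeds `k`.
    """
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    flags = np.zeros(len(values), dtype=bool)
    for i in range(len(values)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        neigh = (d > 0) & (d <= radius)      # exclude the point itself
        if neigh.sum() < 3:                  # too few neighbours to judge
            continue
        mu = values[neigh].mean()
        sd = values[neigh].std(ddof=1)
        if sd > 0 and abs(values[i] - mu) > k * sd:
            flags[i] = True
    return flags
```

On a smooth surface with one spike, only the spike should be flagged; a plain boxplot on the same values may miss such regionalized outliers because it ignores location.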

    A robust hierarchical clustering for georeferenced data

    The detection of spatially contiguous clusters is a relevant task in geostatistics, since nearby observations tend to have more similar features than distant ones. Spatially compact groups also make clustering results easier to interpret in terms of the detected subregions. In this paper, we propose a robust metric approach to neutralize the effect of possible outliers: an exponential transformation of a dissimilarity measure between each pair of locations, based on a non-parametric kernel estimator of the direct and cross variograms (Fouedjio, 2016) and on a different bandwidth identification, suitable for agglomerative hierarchical clustering techniques applied to data indexed by geographical coordinates. Simulation results are very promising, showing very good performance of the proposed metric relative to the baseline ones. Finally, the new clustering approach is applied to two real-world data sets, both giving locations and topsoil heavy metal concentrations.
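The bounded dissimilarity can be sketched as follows; this is a simplified stand-in (the variogram-cloud measure and the function names are assumptions, not Fouedjio's kernel estimator), showing only how the exponential transform keeps outlier-driven dissimilarities from dominating the linkage:

```python
import numpy as np

def variogram_cloud_dissimilarity(values):
    """Naive dissimilarity between observations: half the squared
    difference, i.e. each pair's variogram-cloud contribution (a crude
    stand-in for a kernel variogram-based measure)."""
    v = np.asarray(values, dtype=float)
    return 0.5 * (v[:, None] - v[None, :]) ** 2

def robust_transform(D, bandwidth):
    """Exponential transform mapping dissimilarities into [0, 1):
    outlying pairs saturate toward 1 instead of dominating the
    agglomerative linkage."""
    return 1.0 - np.exp(-np.asarray(D, dtype=float) / bandwidth)
```

The transformed matrix can then be fed to any agglomerative hierarchical clustering routine; the point of the transform is that the ratio between an outlier-driven dissimilarity and an ordinary one shrinks.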

    Development of a variogram approach to spatial outlier detection using a supplemental digital elevation model dataset

    When developing a groundwater model, the quality of the dataset should first be evaluated. Spatial outliers can lead to predictions that are not representative of actual conditions. To isolate misrepresentative points, a method is presented that examines the experimental variogram of a groundwater elevation dataset. To define a threshold variance between pairs of groundwater elevation measurements, ground elevation values from a digital elevation model (DEM) are used to determine the maximum reasonable variance expected to occur on the experimental variogram. To determine appropriate DEM parameters, a separate study observed the characteristic behavior of gradient calculations for DEMs of varying resolution and extent. The method is applied first to a synthetic dataset and then to a monitoring well network at Fort Leonard Wood, Missouri. Results of the analysis show that all points targeted as spatial outliers in the case study are justified for removal. This approach can readily be incorporated into the development of a regional groundwater model by kriging. The strengths of this method are that it incorporates supplemental DEM data, building on the concept that the groundwater surface is a smoothed version of the topographic surface, and that it takes advantage of every point-pair relationship, comparing both neighboring and distant pairs. --Abstract, page iv
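A minimal sketch of the thresholding idea, under the assumption that the DEM's maximum pairwise semivariance bounds what the smoother groundwater surface should exhibit (function names and the scaling factor `k` are illustrative):

```python
import numpy as np

def pairwise_semivariance(values):
    """Variogram-cloud entries: half the squared difference for every
    pair of measurements."""
    v = np.asarray(values, dtype=float)
    return 0.5 * (v[:, None] - v[None, :]) ** 2

def flag_by_dem_threshold(gw_levels, dem_elevations, k=1.0):
    """Count, per groundwater observation, how many point pairs exceed
    a DEM-derived semivariance threshold. Since the water table is
    assumed to be a smoothed version of the topography, its variogram
    cloud should not exceed the terrain's; points joining many
    exceeding pairs are outlier candidates."""
    gamma_gw = pairwise_semivariance(gw_levels)
    threshold = k * pairwise_semivariance(dem_elevations).max()
    bad_pairs = gamma_gw > threshold
    return bad_pairs.sum(axis=1)   # exceedance count per observation
```

A real implementation would restrict the comparison to lag bins of the experimental variogram rather than the raw cloud; the sketch keeps only the threshold logic.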

    Optimal Surface Fitting of Point Clouds Using Local Refinement

    This open access book provides insights into the novel Locally Refined B-spline (LR B-spline) surface format, which is suited to representing terrain and seabed data in a compact way. It provides an alternative to the well-known raster and triangulated surface representations. An LR B-spline surface has an overall smooth behavior and allows the modeling of local details with only limited growth in data volume. In regions where many data points belong to the same smooth area, LR B-splines allow a very lean representation of the shape by locally adapting the resolution of the spline space to the size and local shape variations of the region. The iterative method can be modified to improve the accuracy in particular domains of a point cloud. Statistical information criteria can help determine the optimal threshold, the number of iterations to perform, and some parameters of the underlying mathematical functions (degree of the splines, parameter representation). The resulting surfaces are well suited to analysis and to computing secondary information such as contour curves and minimum and maximum points. Deformation analysis is also a potential application of fitting point clouds with LR B-splines.
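The fit-measure-refine loop described above can be illustrated with a much simpler 1-D analogue; this is not the LR B-spline algorithm itself but a sketch of local refinement, splitting only the cells whose residuals exceed a tolerance:

```python
import numpy as np

def adaptive_fit(x, y, tol=0.05, max_iter=8):
    """Iterative coarse-to-fine fitting in 1-D: fit a least-squares line
    per cell, then split only the cells whose maximum residual exceeds
    `tol`. Smooth regions keep a lean representation; detail appears
    only where the data demand it."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cells = [(x.min(), x.max())]
    for _ in range(max_iter):
        new_cells, done = [], True
        for a, b in cells:
            m = (x >= a) & (x <= b)
            if m.sum() < 3:                       # too few points: keep cell
                new_cells.append((a, b))
                continue
            coef = np.polyfit(x[m], y[m], 1)      # local least-squares line
            resid = np.abs(np.polyval(coef, x[m]) - y[m])
            if resid.max() > tol:                 # refine locally
                mid = 0.5 * (a + b)
                new_cells += [(a, mid), (mid, b)]
                done = False
            else:
                new_cells.append((a, b))
        cells = new_cells
        if done:
            break
    return cells
```

For data with one kink, the loop refines once around the kink and stops, mirroring how LR B-splines concentrate degrees of freedom where the shape varies.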

    Optimal Surface Fitting of Point Clouds Using Local Refinement : Application to GIS Data

    This open access book provides insights into the novel Locally Refined B-spline (LR B-spline) surface format, which is suited to representing terrain and seabed data in a compact way. It provides an alternative to the well-known raster and triangulated surface representations. An LR B-spline surface has an overall smooth behavior and allows the modeling of local details with only limited growth in data volume. In regions where many data points belong to the same smooth area, LR B-splines allow a very lean representation of the shape by locally adapting the resolution of the spline space to the size and local shape variations of the region. The iterative method can be modified to improve the accuracy in particular domains of a point cloud. Statistical information criteria can help determine the optimal threshold, the number of iterations to perform, and some parameters of the underlying mathematical functions (degree of the splines, parameter representation). The resulting surfaces are well suited to analysis and to computing secondary information such as contour curves and minimum and maximum points. Deformation analysis is also a potential application of fitting point clouds with LR B-splines.

    Semi-automated Generation of High-accuracy Digital Terrain Models along Roads Using Mobile Laser Scanning Data

    Transportation agencies in many countries require high-accuracy (2-20 cm) digital terrain models (DTMs) along roads for various transportation-related applications. Compared to traditional ground surveys and aerial photogrammetry, mobile laser scanning (MLS) has great potential for rapid acquisition of high-density, high-accuracy three-dimensional (3D) point clouds covering roadways. Such MLS point clouds can be used to generate high-accuracy DTMs in a cost-effective fashion. However, the large volume, mixed density, and irregular distribution of MLS points, as well as the complexity of the roadway environment, make DTM generation a very challenging task. In addition, most available software packages were originally developed for handling airborne laser scanning (ALS) point clouds and cannot be directly used to process MLS point clouds. Therefore, methods and software tools to automatically generate DTMs along roads are urgently needed by transportation users. This thesis presents an applicable workflow to generate DTMs from MLS point clouds. The strategy is divided into two main parts: removing non-ground points and interpolating ground points into gridded DTMs. First, a voxel-based upward-growing algorithm was developed to effectively and accurately remove non-ground points. Then, through a comparative study of four interpolation algorithms, namely Inverse Distance Weighted (IDW), Nearest Neighbour, Linear, and Natural Neighbour, the IDW algorithm was selected to generate the gridded DTMs owing to its higher accuracy and computational efficiency. The results demonstrated that the voxel-based upward-growing algorithm is suitable for areas without steep terrain features; its average overall accuracy, correctness, and completeness were 0.975, 0.980, and 0.986, respectively, and in some cases the overall accuracy exceeded 0.990.
The results also demonstrated that the semi-automated DTM generation method developed in this thesis was able to create DTMs with a centimetre-level grid size and 10 cm vertical accuracy from the MLS point clouds.
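The IDW interpolation step chosen above is straightforward to sketch; this minimal version (function name and parameters are illustrative) returns the known elevation when a grid node coincides with a sample and a distance-weighted mean otherwise:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighted interpolation of scattered ground
    points onto query locations (e.g. DTM grid nodes). Weights fall
    off as 1/d**power, so nearby samples dominate."""
    P = np.asarray(xy_known, dtype=float)
    z = np.asarray(z_known, dtype=float)
    Q = np.asarray(xy_query, dtype=float)
    out = np.empty(len(Q))
    for i, q in enumerate(Q):
        d = np.linalg.norm(P - q, axis=1)
        j = d.argmin()
        if d[j] < eps:               # query coincides with a sample
            out[i] = z[j]
            continue
        w = 1.0 / d ** power
        out[i] = (w * z).sum() / w.sum()
    return out
```

Production DTM pipelines typically restrict the weighting to a search radius or k nearest neighbours for speed; the sketch uses all samples for clarity.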

    Robust Geographically and Temporally Weighted Regression Using S-estimator in Criminal Case in East Java Province

    Geographically Weighted Regression (GWR) is a model for data with spatially varying relationships. Geographically and Temporally Weighted Regression (GTWR) extends the GWR model to data that vary in both space and time. Parameter estimation in the GTWR model uses the weighted least squares method, which is very sensitive to outliers. Outliers bias the parameter estimates, so they must be handled by robust GTWR (RGTWR). In this research, the S-estimator was used to handle outliers and estimate an RGTWR model. Both GTWR and RGTWR were used to model the crime rate in East Java for 2011-2015. The crime rate is the response variable, and the percentage of poor people, population density, and the human development index are the explanatory variables. The best model in this research is the RGTWR using the S-estimator, with a coefficient of determination of 98.2%, an RMSE of 33.941, and an MAD of 4.994.
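A sketch of the two non-robust building blocks, assuming Gaussian spatial and temporal kernels (the S-estimation step that robustifies the solve is omitted; function names and bandwidths are illustrative):

```python
import numpy as np

def gtwr_weights(coords, times, target_xy, target_t, hs, ht):
    """Gaussian spatiotemporal kernel weights for a GTWR fit at one
    regression point: observations close in both space and time get
    the most weight. hs and ht are spatial/temporal bandwidths."""
    ds2 = ((np.asarray(coords, dtype=float) - target_xy) ** 2).sum(axis=1)
    dt2 = (np.asarray(times, dtype=float) - target_t) ** 2
    return np.exp(-(ds2 / hs**2 + dt2 / ht**2))

def weighted_ls(X, y, w):
    """Weighted least squares at one regression point:
    beta = (X'WX)^{-1} X'Wy."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)
```

GTWR repeats this solve at every location-time pair; the RGTWR variant replaces the plain WLS solve with an S-estimator so that outlying observations cannot bias the local coefficients.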

    Adaptive Methods for Point Cloud and Mesh Processing

    Point clouds and 3D meshes are widely used in applications ranging from games to virtual reality to autonomous vehicles. This dissertation proposes several approaches for noise removal and calibration of noisy point cloud data, along with 3D mesh sharpening methods. Order statistic filters have proven very successful in image processing and other domains. Several variations of order statistic filters originally proposed for image processing are extended here to point cloud filtering, and a new adaptive vector median filter is proposed for removing noise and outliers from point cloud data. The major contributions of this research lie in four aspects: 1) four order statistic algorithms are extended, and one adaptive filtering method is proposed, for noisy point clouds, with improved results such as the preservation of significant features; these methods are applied to standard models as well as synthetic models and real scenes; 2) the proposed method is accelerated on multicore processors using the Microsoft Parallel Patterns Library; 3) a new method for aerial LiDAR data filtering is proposed, with the objective of enabling automatic extraction of ground points from aerial LiDAR data with minimal human intervention; and 4) a novel method for mesh color sharpening using the discrete Laplace-Beltrami operator is proposed. Median and order statistics-based filters are widely used in signal and image processing because they easily remove outlier noise while preserving important features. This dissertation demonstrates a wide range of results with the median filter, vector median filter, fuzzy vector median filter, adaptive mean, adaptive median, and adaptive vector median filter on point cloud data.
The experiments show that large-scale noise is removed while important features of the point cloud are preserved within reasonable computation time. Quantitative criteria (e.g., complexity, Hausdorff distance, and root mean squared error (RMSE)) as well as qualitative criteria (e.g., the perceived visual quality of the processed point cloud) are employed to assess the performance of the filters in various cases corrupted by different noise models. The adaptive vector median filter is further optimized for denoising and ground filtering of aerial LiDAR point clouds, and is accelerated on multi-core CPUs using the Microsoft Parallel Patterns Library. In addition, this dissertation presents a new method for mesh color sharpening using the discrete Laplace-Beltrami operator, an approximation of second-order derivatives on irregular 3D meshes. The one-ring neighborhood is used to compute the operator, and the color of each vertex is updated by adding the Laplace-Beltrami operator of the vertex color, weighted by a factor, to its original value. Different discretizations of the Laplace-Beltrami operator have been proposed for the geometric processing of 3D meshes; this work applies several of them to sharpening 3D mesh colors and compares their performance. Experimental results demonstrate the effectiveness of the proposed algorithms.
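The basic (non-adaptive) vector median filter underlying the dissertation's variants can be sketched as follows; the neighbourhood size `k` and the function names are illustrative assumptions:

```python
import numpy as np

def vector_median(points):
    """Vector median of a set of 3-D points: the input point that
    minimizes the sum of Euclidean distances to all the others. Unlike
    a componentwise median, the result is always one of the inputs."""
    P = np.asarray(points, dtype=float)
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    return P[D.sum(axis=1).argmin()]

def vmf_denoise(points, k=5):
    """Vector median filter for point clouds: replace each point by the
    vector median of its k nearest neighbours (itself included), which
    snaps outliers back onto the local structure."""
    P = np.asarray(points, dtype=float)
    out = np.empty_like(P)
    for i in range(len(P)):
        d = np.linalg.norm(P - P[i], axis=1)
        nn = np.argsort(d)[:k]          # k nearest neighbours (incl. self)
        out[i] = vector_median(P[nn])
    return out
```

The adaptive variants in the dissertation vary the neighbourhood and the filtering rule per point; this sketch shows only the fixed-k core that they build on.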