
    Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)

    Debris flows are among the most hazardous phenomena in mountain areas. To cope with debris flow hazard, it is common to delineate the risk-prone areas through routing models. The most important input to debris flow routing models is the topographic data, usually in the form of Digital Elevation Models (DEMs). The quality of a DEM depends on the accuracy, density, and spatial distribution of the sampled points; on the characteristics of the surface; and on the applied gridding methodology. The choice of the interpolation method therefore affects the realistic representation of the channel and fan morphology, and thus potentially the outcomes of debris flow routing modeling. In this paper, we first investigate the performance of common interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor, Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging) in building DEMs of the complex topography of a debris flow channel located in the Venetian Dolomites (North-eastern Italian Alps), using small-footprint full-waveform Light Detection And Ranging (LiDAR) data. The investigation combines statistical analysis of vertical accuracy, algorithm robustness, spatial clustering of vertical errors, and a multi-criteria shape reliability assessment. We then examine the influence of the tested interpolation algorithms on the performance of a Geographic Information System (GIS)-based cell model for simulating stony debris flow routing. In detail, we investigate both the correlation between the DEM height uncertainty resulting from the gridding procedure and the uncertainty in the corresponding simulated erosion/deposition depths, and the effect of the interpolation algorithms on simulated areas, erosion and deposition volumes, solid-liquid discharges, and channel morphology after the event. The comparison among the tested interpolation methods highlights that the ANUDEM and ordinary kriging algorithms are not suitable for building DEMs of complex topography. Conversely, linear triangulation, the natural neighbor algorithm, and the thin-plate spline plus tension and completely regularized spline functions ensure the best trade-off between accuracy and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on debris flow routing modeling reveals that the choice of the interpolation algorithm does not significantly affect the model outcomes.
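
    As a rough illustration of this kind of comparison, the sketch below grids synthetic scattered elevations (not the paper's LiDAR data) with three of the tested methods and scores vertical accuracy as RMSE on held-out checkpoints. scipy's LinearNDInterpolator stands in for linear triangulation, NearestNDInterpolator for nearest neighbor, and RBFInterpolator with a thin-plate-spline kernel for one of the Radial Basis Functions; the terrain function, point density, and hold-out fraction are all illustrative assumptions.

```python
# A minimal sketch, not the paper's workflow: compare three gridding methods
# on synthetic scattered elevation points and score vertical accuracy.
import numpy as np
from scipy.interpolate import (LinearNDInterpolator, NearestNDInterpolator,
                               RBFInterpolator)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, size=(2000, 2))           # sampled (x, y) locations [m]
z = 40 * np.sin(pts[:, 0] / 150) + 0.05 * pts[:, 1]  # illustrative terrain heights [m]

# Hold out 20% of the points as independent checkpoints for vertical accuracy.
test = rng.random(len(z)) < 0.2
train_pts, train_z = pts[~test], z[~test]
check_pts, check_z = pts[test], z[test]

methods = {
    "linear triangulation": LinearNDInterpolator(train_pts, train_z),
    "nearest neighbor": NearestNDInterpolator(train_pts, train_z),
    "thin-plate spline RBF": RBFInterpolator(train_pts, train_z,
                                             kernel="thin_plate_spline",
                                             neighbors=50),
}
for name, interp in methods.items():
    pred = interp(check_pts)
    rmse = np.sqrt(np.nanmean((pred - check_z) ** 2))  # NaN outside convex hull
    print(f"{name}: checkpoint RMSE = {rmse:.3f} m")
```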

    SPATIO-TEMPORAL DYNAMICS OF SHORT-TERM TRAFFIC

    Short-term traffic forecasting and missing data imputation can benefit from the use of neighboring traffic information, in addition to temporal data alone. However, little attention has been given to quantifying the effect of upstream and downstream traffic on the traffic at the current location, and knowledge about the temporal and spatial propagation of traffic is still limited in the literature. To fill this gap, this dissertation focuses on revealing the spatio-temporal correlations between neighboring traffic in order to develop reliable algorithms for short-term traffic forecasting and data imputation based on the spatio-temporal dynamics of traffic. In the first part of the dissertation, the spatio-temporal relationships of speed series from consecutive segments were studied under different traffic conditions. The analysis shows that traffic speeds of consecutive segments are highly correlated. While downstream traffic tends to replicate the upstream condition under light traffic, it may also affect the upstream condition during congestion and build-up situations. These effects were statistically quantified, and an algorithm was proposed for properly choosing the “best” (most correlated) neighbor(s) for traffic prediction or imputation purposes. In the second part of the dissertation, a spatio-temporal kriging (ST-Kriging) model that determines the most desirable extent of spatial and temporal traffic data from neighboring locations was developed for short-term traffic forecasting. The new ST-Kriging model outperforms all benchmark models under various traffic conditions. In the final part of the dissertation, a spatio-temporal data imputation approach was proposed and its performance evaluated under scenarios with different data-missing rates. Compared with previous methods, the new imputation technique offers better flexibility and stable imputation accuracy.
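
    The neighbor-selection idea in the first part can be sketched as a lagged cross-correlation screen. The code below is a minimal illustration with hypothetical speed series (not the dissertation's data or exact algorithm): an upstream series is constructed to lead the current segment by three intervals, and the lag maximizing the correlation recovers that propagation delay. Ranking several candidate neighbors by their best-lag correlation would then pick the “best” neighbor.

```python
# A minimal sketch of lag-aware neighbor screening with hypothetical speed data.
import numpy as np

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t - lag); positive lag means y leads x."""
    if lag > 0:
        a, b = x[lag:], y[:-lag]
    elif lag < 0:
        a, b = x[:lag], y[-lag:]
    else:
        a, b = x, y
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
upstream = 60 + 8 * rng.standard_normal(500)   # upstream detector speeds [mph]
current = np.empty(500)                        # study-segment speeds [mph]
current[3:] = upstream[:-3] + 2 * rng.standard_normal(497)  # arrives 3 steps later
current[:3] = upstream[:3]

best_lag = max(range(-6, 7), key=lambda k: lagged_corr(current, upstream, k))
print(best_lag, round(lagged_corr(current, upstream, best_lag), 3))  # -> 3, ~0.97
```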

    Hierarchical Bayesian auto-regressive models for large space time data with applications to ozone concentration modelling

    Increasingly large volumes of space-time data are collected everywhere by mobile computing applications, and in many of these cases temporal data are obtained by registering events, for example telecommunication or web traffic data. Having both the spatial and temporal dimensions adds substantial complexity to data analysis and inference tasks. The computational complexity increases rapidly for fitting Bayesian hierarchical models, as such a task involves repeated inversion of large matrices. The primary focus of this paper is on developing space-time auto-regressive models under the hierarchical Bayesian setup. To handle large data sets, a recently developed Gaussian predictive process approximation method (Banerjee et al. [1]) is extended to include auto-regressive terms of latent space-time processes. Specifically, a space-time auto-regressive process, supported on a smaller set of knot locations, is spatially interpolated to approximate the original space-time process. The resulting model is specified within a hierarchical Bayesian framework, and Markov chain Monte Carlo techniques are used for inference. The proposed model is applied to the analysis of daily maximum 8-hour average ground-level ozone concentration data from 1997 to 2006 from a large study region in the eastern United States. The developed methods allow accurate spatial prediction of a temporally aggregated ozone summary, known as the primary ozone standard, along with its uncertainty, at any unmonitored location during the study period. Trends in spatial patterns of many features of the posterior predictive distribution of the primary standard, such as the probability of non-compliance with respect to the standard, are obtained and illustrated.
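
    A minimal sketch of the knot-based predictive process idea follows, assuming a one-dimensional spatial domain, an exponential covariance, and an AR(1) evolution at the knots (all illustrative choices, not the paper's exact specification). The point is structural: only the small knot covariance matrix is ever inverted, and the latent space-time process at all sites is obtained by kriging-style interpolation of the knot-level process.

```python
# A minimal sketch of a knot-based predictive process with AR(1) dynamics.
import numpy as np

rng = np.random.default_rng(2)
sites = np.linspace(0, 100, 400)       # all monitoring/prediction locations
knots = np.linspace(0, 100, 25)        # much smaller knot set
phi, rho, T = 0.1, 0.7, 50             # spatial decay, AR(1) coefficient, time steps

def expcov(a, b):
    """Exponential covariance between two sets of 1-D locations."""
    return np.exp(-phi * np.abs(a[:, None] - b[None, :]))

C_kk = expcov(knots, knots)            # 25 x 25: the only matrix ever inverted
A = expcov(sites, knots) @ np.linalg.inv(C_kk)   # fixed interpolation weights
L = np.linalg.cholesky(C_kk + 1e-10 * np.eye(len(knots)))

w = L @ rng.standard_normal(len(knots))          # knot-level field at t = 0
fields = []
for _ in range(T):
    # AR(1) evolution at the knots, scaled to keep the marginal variance fixed.
    w = rho * w + np.sqrt(1 - rho**2) * (L @ rng.standard_normal(len(knots)))
    fields.append(A @ w)               # predictive-process surface at all sites
fields = np.asarray(fields)            # shape (T, n_sites)
print(fields.shape)
```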

    Statistical and Machine Learning Models for Remote Sensing Data Mining - Recent Advancements

    This book is a reprint of the Special Issue entitled "Statistical and Machine Learning Models for Remote Sensing Data Mining - Recent Advancements" that was published in Remote Sensing, MDPI. It provides insights into both core technical challenges and selected critical applications of satellite remote sensing image analytics.

    Analysis of tomographic images

    Wavelet-based simulation of geological variables

    This thesis introduces a number of conditional simulation algorithms using wavelet bases. These make use of two orthogonal wavelet bases, the Haar and the Db2 bases. Firstly, two single-level algorithms are introduced: HSIM, with the Haar basis, and DB2SIM, with the Db2 basis. HSIM reproduces the histogram and semivariogram model of isotropic samples but not the semivariogram model of anisotropic samples. DB2SIM reproduces the histogram and semivariogram model in both the isotropic and anisotropic cases but, because of the conditioning method employed, is not as computationally efficient as we would wish. Because of the limitations of HSIM and DB2SIM, two multi-level wavelet-based conditional simulation algorithms, PWSIM and DWSIM, have then been developed. In PWSIM, the conditional realisations are obtained by post-processing non-conditional realisations, generated via an available non-conditional simulation algorithm, using kriging. In DWSIM, the data are conditioned directly via properties of the discrete wavelet transform. Because of this conditioning method, DWSIM is faster than PWSIM. The performance of PWSIM and DWSIM with respect to the Haar and Db2 wavelet bases is assessed via the local and global accuracy of the results. Both quantitative and visual assessments indicate that, for both wavelet bases, the realisations obtained via PWSIM have more variability than those obtained via DWSIM. If the Haar basis is used, PWSIM and DWSIM perform equally well; if the Db2 basis is used, PWSIM performs much better than DWSIM. For both PWSIM and DWSIM, the use of the Db2 basis rather than the Haar basis increases the computational effort without producing a comparable increase in algorithm performance: in PWSIM the Db2 basis slightly improves performance, while in DWSIM it decreases performance. A performance comparison between DWSIM using the Haar basis and the commonly used conditional simulation algorithm SGSIM shows that DWSIM produces results at least as good as those obtained by SGSIM, but with less computational effort. The computational advantage of DWSIM over SGSIM is especially pronounced when a large number of realisations are simulated. In addition, the result obtained via DWSIM does not depend on user-defined parameters, as is the case in both SGSIM and PWSIM. The final result is a (Haar) wavelet-based conditional simulation algorithm, DWSIM, that performs well in both the isotropic and anisotropic cases and, particularly when simulating a large number of realisations, is much faster than the standard algorithm in common use.
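
    A minimal sketch of the building block these algorithms share, assuming PyWavelets is available: a multi-level orthogonal 2-D wavelet decomposition of a field under the Haar and Db2 bases, with exact reconstruction. A conditional simulator in the spirit of DWSIM would manipulate these coefficients before inverting the transform; that conditioning step is not reproduced here.

```python
# A minimal sketch of multi-level orthogonal wavelet analysis with PyWavelets.
import numpy as np
import pywt

rng = np.random.default_rng(3)
field = rng.standard_normal((64, 64))  # stand-in for a non-conditional realisation

for basis in ("haar", "db2"):
    coeffs = pywt.wavedec2(field, basis, level=3)  # multi-level 2-D DWT
    # coeffs[0] is the coarse approximation; coeffs[1:] hold detail sub-bands
    # (horizontal, vertical, diagonal) at each level.
    recon = pywt.waverec2(coeffs, basis)           # inverse transform
    err = np.max(np.abs(recon[:64, :64] - field))  # exact up to float round-off
    print(f"{basis}: approximation {coeffs[0].shape}, max recon error {err:.1e}")
```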

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques. This research is focused on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors – including laser range scanners, video cameras, and pose estimation hardware – on a mobile platform for the quick acquisition of 3D models of real world environments. The data acquired by such systems are extremely noisy, often with significant details being on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry, while removing the effects of noise on the overall model. The developed algorithm can be useful for a variety of digitized 3D models, not just those produced by mobile scanning systems. The challenges faced in this study were the automatic processing needs of the enhancement algorithm, and the need to fill a gap in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool. The new technologies featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The totality of the research has been applied towards detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.
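
    As a loose illustration of detail-preserving denoising (not the dissertation's decomposition tool), the sketch below applies a bilateral filter to a synthetic range image: spatially close samples are averaged only when their depths are similar, so a sharp geometric step survives while additive noise is suppressed. The window radius and the two kernel widths are illustrative parameters.

```python
# A minimal sketch of edge-preserving (bilateral) denoising of a range image.
import numpy as np

def bilateral(depth, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Average nearby pixels, down-weighting those with dissimilar depths."""
    h, w = depth.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # closeness weights
    pad = np.pad(depth, radius, mode="edge")
    out = np.empty_like(depth)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            similar = np.exp(-(win - depth[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * similar
            out[i, j] = np.sum(weights * win) / np.sum(weights)
    return out

rng = np.random.default_rng(4)
clean = np.where(np.arange(128) < 64, 1.0, 1.3)[None, :].repeat(128, axis=0)
noisy = clean + 0.05 * rng.standard_normal((128, 128))      # sharp step + noise
denoised = bilateral(noisy)
print(np.std(noisy - clean), np.std(denoised - clean))      # noise drops, step kept
```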

    Examining trade-offs between social, psychological, and energy potential of urban form

    Urban planners are often challenged with the task of developing design solutions which must meet multiple, and often contradictory, criteria. In this paper, we investigated the trade-offs between the social, psychological, and energy potential of the fundamental elements of urban form: the street network and the building massing. Since formal methods to evaluate urban form from the psychological and social point of view are not readily available, we developed a methodological framework to quantify these criteria as the first contribution of this paper. To evaluate the psychological potential, we conducted a three-tiered empirical study starting from real-world environments and then abstracting them to virtual environments. In each context, the implicit (physiological) and explicit (subjective) responses of pedestrians were measured. To quantify the social potential, we developed a street network centrality-based measure of social accessibility. For the energy potential, we created an energy model to analyze the impact of pure geometric form on the energy demand of the building stock. The second contribution of this work is a method to identify distinct clusters of urban form and, for each, explore the trade-offs between the selected design criteria. We applied this method to two case studies, identifying nine types of urban form and their respective potential trade-offs, which are directly applicable to the assessment of strategic decisions regarding urban form during the early planning stages.
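
    The social-accessibility ingredient can be illustrated with a standard centrality computation. The sketch below, assuming a toy grid-shaped street network rather than the paper's case-study networks, uses networkx closeness centrality as a stand-in for the street network centrality-based measure described above.

```python
# A minimal sketch of a centrality-based accessibility score on a toy network.
import networkx as nx

G = nx.grid_2d_graph(8, 8)               # hypothetical orthogonal street network
centrality = nx.closeness_centrality(G)  # higher = shorter paths to everywhere
node = max(centrality, key=centrality.get)
print(node, round(centrality[node], 3))  # the most "socially accessible" node
```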

    Improving the applicability of radar rainfall estimates for urban pluvial flood modelling and forecasting

    This work explores the possibility of improving the applicability of radar rainfall estimates (whose accuracy is generally insufficient) to the verification and operation of urban storm-water drainage models by employing a number of local gauge-based radar rainfall adjustment techniques. The adjustment techniques tested in this work include a simple mean-field bias (MFB) adjustment, as well as a more complex Bayesian radar-raingauge data merging method which aims at better preserving the spatial structure of rainfall fields. In addition, a novel technique (namely, local singularity analysis) is introduced and shown to improve the Bayesian method by better capturing and reproducing storm patterns and peaks. Two urban catchments were used as case studies in this work: the Cranbrook catchment (9 km²) in North-East London, and the Portobello catchment (53 km²) in the East of Edinburgh. In the former, the potential benefits of gauge-based adjusted radar rainfall estimates in an operational context were analysed, whereas in the latter the potential benefits of adjusted estimates for model verification purposes were explored. Different rainfall inputs, including raingauge, original radar, and the aforementioned merged estimates, were fed into the urban drainage models of the two catchments, and the hydraulic outputs were compared against available flow and depth records. On the whole, the tested adjustment techniques were shown to improve the applicability of radar rainfall estimates to urban hydrological applications, with the Bayesian-based methods, in particular the singularity-sensitive one, providing more realistic and accurate rainfall fields, which resulted in better reproduction of the urban drainage system’s dynamics. Further testing is still necessary in order to better assess the benefits of these adjustment methods, identify their shortcomings, and improve them accordingly.
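
    The simplest of the tested techniques, mean-field bias adjustment, reduces to a one-line correction. The sketch below uses a synthetic radar field and hypothetical gauge pixels: the whole field is scaled by the ratio of gauge totals to collocated radar totals, which recovers the imposed bias factor in this toy setup.

```python
# A minimal sketch of mean-field bias (MFB) adjustment with synthetic data.
import numpy as np

rng = np.random.default_rng(5)
radar = rng.gamma(2.0, 1.5, size=(100, 100))         # radar rainfall field [mm/h]
gauge_ij = [(10, 12), (40, 77), (65, 30), (88, 55)]  # hypothetical gauge pixels
true_bias = 1.4                                      # radar underestimates by 1.4x
gauges = np.array([true_bias * radar[i, j] for i, j in gauge_ij])

radar_at_gauges = np.array([radar[i, j] for i, j in gauge_ij])
mfb = gauges.sum() / radar_at_gauges.sum()           # one multiplicative factor
adjusted = mfb * radar                               # bias-corrected field
print(f"MFB factor: {mfb:.2f}")                      # ~1.40 in this toy setup
```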