Permutation and Grouping Methods for Sharpening Gaussian Process Approximations
Vecchia's approximate likelihood for Gaussian process parameters depends on
how the observations are ordered, which can be viewed as a deficiency because
the exact likelihood is permutation-invariant. This article takes the
alternative standpoint that the ordering of the observations can be tuned to
sharpen the approximations. Advantageously chosen orderings can drastically
improve the approximations, and in fact, completely random orderings often
produce far more accurate approximations than default coordinate-based
orderings do. In addition to the permutation results, automatic methods for
grouping calculations of components of the approximation are introduced; these
simultaneously improve the quality of the approximation and reduce its
computational burden. In common settings, reordering combined with
grouping reduces Kullback-Leibler divergence from the target model by a factor
of 80 and computation time by a factor of 2 compared to ungrouped
approximations with default ordering. The claims are supported by theory and
numerical results with comparisons to other approximations, including tapered
covariances and stochastic partial differential equation approximations.
Computational details are provided, including efficiently finding the orderings
and ordered nearest neighbors, and profiling out linear mean parameters and
using the approximations for prediction and conditional simulation. An
application to space-time satellite data is presented.
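The ordering dependence described above can be made concrete with a small sketch. The snippet below is an illustrative, generic Vecchia construction, not the article's implementation; the exponential covariance, its range parameter, the number of observations, and the neighbor count m are all assumptions chosen for the example. Each observation conditions only on its m nearest neighbors among the previously ordered points, so changing the permutation changes the approximate log-likelihood.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)

# Illustrative setup: n observations of a zero-mean GP with exponential covariance.
n = 80
locs = rng.uniform(0.0, 1.0, size=(n, 2))

def expcov(A, B, range_par=0.3):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-d / range_par)

K = expcov(locs, locs) + 1e-8 * np.eye(n)
y = rng.multivariate_normal(np.zeros(n), K)
ll_exact = multivariate_normal(mean=np.zeros(n), cov=K).logpdf(y)

def vecchia_loglik(y, locs, order, m):
    """Vecchia approximation: observation i conditions only on its m nearest
    neighbors among the previously ordered points."""
    y_o, locs_o = y[order], locs[order]
    ll = norm.logpdf(y_o[0], scale=np.sqrt(expcov(locs_o[:1], locs_o[:1])[0, 0]))
    for i in range(1, len(y_o)):
        d = np.linalg.norm(locs_o[:i] - locs_o[i], axis=1)
        nn = np.argsort(d)[:m]                      # ordered nearest neighbors
        Knn = expcov(locs_o[nn], locs_o[nn]) + 1e-8 * np.eye(len(nn))
        k = expcov(locs_o[i:i + 1], locs_o[nn])[0]
        w = np.linalg.solve(Knn, k)                 # kriging weights
        mu = w @ y_o[nn]
        var = expcov(locs_o[i:i + 1], locs_o[i:i + 1])[0, 0] - w @ k
        ll += norm.logpdf(y_o[i], loc=mu, scale=np.sqrt(var))
    return ll

coord_order = np.argsort(locs[:, 0])   # default coordinate-based ordering
rand_order = rng.permutation(n)        # completely random ordering
err_coord = abs(vecchia_loglik(y, locs, coord_order, m=10) - ll_exact)
err_rand = abs(vecchia_loglik(y, locs, rand_order, m=10) - ll_exact)
```

With m equal to the full set of preceding points, the decomposition recovers the exact likelihood; reducing m trades accuracy for speed, and the ordering controls how much accuracy is lost.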
High-Dimensional Bayesian Geostatistics
With the growing capabilities of Geographic Information Systems (GIS) and
user-friendly software, statisticians today routinely encounter geographically
referenced data containing observations from a large number of spatial
locations and time points. Over the last decade, hierarchical spatiotemporal
process models have become widely deployed statistical tools for researchers to
better understand the complex nature of spatial and temporal variability.
However, fitting hierarchical spatiotemporal models often involves expensive
matrix computations with complexity increasing in cubic order for the number of
spatial locations and temporal points. This renders such models infeasible for
large data sets. This article offers a focused review of two methods for
constructing well-defined, highly scalable spatiotemporal stochastic processes.
Both these processes can be used as "priors" for spatiotemporal random fields.
The first approach constructs a low-rank process operating on a
lower-dimensional subspace. The second approach constructs a Nearest-Neighbor
Gaussian Process (NNGP) that ensures sparse precision matrices for its finite
realizations. Both processes can be exploited as a scalable prior embedded
within a rich hierarchical modeling framework to deliver full Bayesian
inference. These approaches can be described as model-based solutions for big
spatiotemporal datasets. The models ensure that the algorithmic complexity is
~n floating point operations (flops), where n is the number of spatial
locations (per iteration). We compare these methods and provide some insight
into their methodological underpinnings.
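The sparse-precision idea behind the NNGP can be sketched in a few lines. The snippet below is a generic illustration, not the reviewed authors' software: it builds the precision matrix Q = (I - A)^T D^{-1} (I - A), where row i of A holds kriging weights on the m nearest previously ordered neighbors and D[i] is the conditional variance; the covariance function and all sizes are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
locs = rng.uniform(0.0, 1.0, size=(n, 2))

def expcov(A, B, range_par=0.3):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-d / range_par)

def nngp_precision(locs, m):
    """Build Q = (I - A)^T D^{-1} (I - A). Row i of A carries kriging weights
    on the m nearest previously ordered neighbors; D[i] is the corresponding
    conditional variance. At most m nonzeros per row of A keeps Q sparse."""
    n = len(locs)
    A = np.zeros((n, n))
    D = np.zeros(n)
    D[0] = expcov(locs[:1], locs[:1])[0, 0]
    for i in range(1, n):
        d = np.linalg.norm(locs[:i] - locs[i], axis=1)
        nn = np.argsort(d)[:m]
        Knn = expcov(locs[nn], locs[nn])
        k = expcov(locs[i:i + 1], locs[nn])[0]
        w = np.linalg.solve(Knn, k)
        A[i, nn] = w
        D[i] = expcov(locs[i:i + 1], locs[i:i + 1])[0, 0] - w @ k
    IA = np.eye(n) - A
    return IA.T @ np.diag(1.0 / D) @ IA

Q_full = nngp_precision(locs, m=n)   # full conditioning: exact GP precision
Q_nngp = nngp_precision(locs, m=10)  # sparse NNGP precision
```

With full conditioning this factorization reproduces the exact GP precision; truncating to m neighbors yields the well-defined sparse approximation that the ~n flops-per-iteration claim rests on.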
Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)
Debris flows are among the most hazardous phenomena in mountain areas. To cope
with debris flow hazard, it is common to delineate the risk-prone areas through
routing models. The most important input to debris flow routing models is the
topographic data, usually in the form of Digital Elevation Models (DEMs). The quality
of DEMs depends on the accuracy, density, and spatial distribution of the sampled
points; on the characteristics of the surface; and on the applied gridding methodology.
Therefore, the choice of the interpolation method affects the realistic representation
of the channel and fan morphology, and thus potentially the debris flow routing
modeling outcomes. In this paper, we initially investigate the performance of common
interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor,
Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging)
in building DEMs with the complex topography of a debris flow channel located
in the Venetian Dolomites (North-eastern Italian Alps), by using small footprint full-
waveform Light Detection And Ranging (LiDAR) data. The investigation is carried
out through a combination of statistical analysis of vertical accuracy, algorithm
robustness, and spatial clustering of vertical errors, and multi-criteria shape reliability
assessment. After that, we examine the influence of the tested interpolation algorithms
on the performance of a Geographic Information System (GIS)-based cell model for
simulating stony debris flow routing. In detail, we investigate both the correlation
between the DEM height uncertainty resulting from the gridding procedure and
that of the corresponding simulated erosion/deposition depths, and the effect of
interpolation algorithms on simulated areas, erosion and deposition volumes, solid-liquid
discharges, and channel morphology after the event. The comparison among the tested
interpolation methods highlights that the ANUDEM and ordinary kriging algorithms
are not suitable for building DEMs with complex topography. Conversely, the linear
triangulation, the natural neighbor algorithm, and the thin-plate spline plus tension and completely regularized spline functions ensure the best trade-off between accuracy
and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on
debris flow routing modeling reveals that the choice of the interpolation algorithm does
not significantly affect the model outcomes.
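The vertical-accuracy comparison driving this ranking can be sketched with leave-one-out cross-validation. The snippet below is a toy illustration, not the paper's workflow: it compares only two of the tested gridding methods (Inverse Distance to a Power and linear triangulation) on a synthetic smooth surface standing in for LiDAR elevations; the surface, point count, power, and neighbor count are all assumptions.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, size=(200, 2))             # scattered sample points
z = np.sin(2.0 * pts[:, 0]) * np.cos(2.0 * pts[:, 1])  # synthetic smooth terrain

def idw(xy, p, v, power=2.0, k=12):
    """Inverse Distance to a Power, using the k nearest sample points."""
    d = np.linalg.norm(p - xy, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nn], 1e-12) ** power
    return float(w @ v[nn] / w.sum())

def lin_tri(xy, p, v):
    """Linear triangulation (Delaunay), nearest-neighbor fallback outside the hull."""
    est = LinearNDInterpolator(p, v)(xy[None, :])[0]
    if not np.isfinite(est):
        est = NearestNDInterpolator(p, v)(xy[None, :])[0]
    return float(est)

def loo_rmse(method):
    """Leave-one-out vertical accuracy: predict each sampled elevation from
    the remaining points and summarize the errors as an RMSE."""
    idx = np.arange(len(pts))
    errs = [method(pts[i], pts[idx != i], z[idx != i]) - z[i] for i in idx]
    return float(np.sqrt(np.mean(np.square(errs))))

rmse_idw = loo_rmse(idw)
rmse_tri = loo_rmse(lin_tri)
```

Ranking methods by such an RMSE, together with the spatial clustering of the errors, is the kind of statistical accuracy assessment the paper combines with shape-reliability criteria.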
A nonlinearities inverse distance weighting spatial interpolation approach applied to the surface electromyography signal
Spatial interpolation of a surface electromyography (sEMG) signal from a set of signals recorded by a multi-electrode array is a challenge in biomedical signal processing. Increasing the effective electrode density would be useful for detecting skeletal-muscle motor units at grid positions where no electrode is present. This paper applies two spatial interpolation methods for this estimation: inverse distance weighted (IDW) and Kriging. Furthermore, a new technique is proposed using a modified nonlinear formula based on IDW. A set of sEMG signals recorded with a noninvasive multi-electrode grid from subjects of different sexes, ages, and muscle types was studied while the muscles were under regular tension activity. A goodness-of-fit measure (R2) is used to evaluate the proposed technique. The interpolated signals are compared with the actual signals; the goodness-of-fit value is almost 99%, with a processing time of about 100 ms. The proposed technique thus achieves high accuracy and close matching of spatially interpolated signals to actual signals compared with the IDW and Kriging techniques.
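The baseline estimation task can be sketched as follows. This is a toy illustration of plain IDW interpolation on an electrode grid, not the paper's modified nonlinear formula: the 8x8 grid, the synthetic "sEMG-like" signals (a shared 40 Hz source scaled by a smooth spatial field), and the held-out channel are all assumptions made for the example, with R2 computed as in the abstract's evaluation.

```python
import numpy as np

# Hypothetical 8x8 electrode grid; each channel records a shared 40 Hz
# source scaled by a smooth spatial amplitude field.
grid = np.array([(r, c) for r in range(8) for c in range(8)], dtype=float)
t = np.linspace(0.0, 1.0, 500)
amp = np.exp(-0.1 * ((grid[:, 0] - 3.5) ** 2 + (grid[:, 1] - 3.5) ** 2))
signals = amp[:, None] * np.sin(2 * np.pi * 40 * t)[None, :]

def idw_estimate(target, pts, sigs, power=2.0, k=8):
    """Estimate the signal at `target` as an inverse-distance-weighted
    combination of the k nearest recorded channels."""
    d = np.linalg.norm(pts - target, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / d[nn] ** power
    return (w[:, None] * sigs[nn]).sum(axis=0) / w.sum()

missing = 27                                  # pretend this electrode is absent
mask = np.arange(len(grid)) != missing
est = idw_estimate(grid[missing], grid[mask], signals[mask])

# Goodness of fit (R^2) of the interpolated signal against the held-out channel.
ss_res = np.sum((signals[missing] - est) ** 2)
ss_tot = np.sum((signals[missing] - signals[missing].mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

The paper's contribution replaces the fixed power-law weighting here with a modified nonlinear formula; the evaluation loop (hold out a channel, interpolate, score with R2) stays the same.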
BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction
The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn even more attention, not just from the field of computer science but also from a variety of scientific fields. However, various challenges persist surrounding the formulation of a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., of failing to mark true edges), accuracy, and a consistent response to a single edge. Moreover, it should be pointed out that most of the work in the area of feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where the needs constantly change, we must equally change the way we think.
In this digital world, where the use of images for a variety of purposes continues to increase, researchers who are serious about addressing the aforementioned limitations must be able to think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a signal into its two-dimensional (2D) bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is hence dubbed BEMDEC, indicating its ability to detect edges, corners, and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria, and boundary adjustment made the realization of this multi-feature detector possible. A further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
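The sifting process at the heart of BEMD is easiest to see in one dimension. The snippet below is a simplified 1D analogue, not the dissertation's 2D method: it extracts the first intrinsic mode function of a signal by repeatedly subtracting the mean of cubic-spline extrema envelopes, with an assumed Cauchy-type stopping criterion; BEMD replaces the spline envelopes with 2D surface envelope estimation plus the boundary adjustments described above.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_first_imf(x, t, max_iter=30, tol=0.05):
    """Extract the first intrinsic mode function of a 1D signal by sifting:
    interpolate upper/lower extrema envelopes, subtract their mean, and
    repeat until the stopping criterion is met."""
    h = x.copy()
    for _ in range(max_iter):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if len(mx) < 4 or len(mn) < 4:
            break                      # too few extrema: h is a residual trend
        upper = CubicSpline(t[mx], h[mx])(t)
        lower = CubicSpline(t[mn], h[mn])(t)
        mean_env = 0.5 * (upper + lower)
        h_new = h - mean_env
        sd = np.sum((h - h_new) ** 2) / np.sum(h ** 2)  # stopping criterion
        h = h_new
        if sd < tol:
            break
    return h

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 3 * t)  # fast + slow modes
imf1 = sift_first_imf(x, t)   # should isolate the fast oscillation
```

Subtracting each extracted IMF and sifting the remainder yields the full decomposition; in the 2D case the resulting BIMFs are what BEMDEC scans for edges, corners, and curves.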