Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)
Debris flows are among the most hazardous phenomena in mountain areas. To cope
with debris flow hazard, it is common to delineate the risk-prone areas through
routing models. The most important input to debris flow routing models is the
topographic data, usually in the form of Digital Elevation Models (DEMs). The quality
of DEMs depends on the accuracy, density, and spatial distribution of the sampled
points; on the characteristics of the surface; and on the applied gridding methodology.
Therefore, the choice of the interpolation method affects the realistic representation
of the channel and fan morphology, and thus potentially the debris flow routing
modeling outcomes. In this paper, we initially investigate the performance of common
interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor,
Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging)
in building DEMs of the complex topography of a debris flow channel located
in the Venetian Dolomites (North-eastern Italian Alps), using small-footprint full-waveform
Light Detection And Ranging (LiDAR) data. The investigation combines
statistical analysis of vertical accuracy, algorithm robustness, spatial clustering of
vertical errors, and a multi-criteria shape reliability assessment. We then examine the
influence of the tested interpolation algorithms on the performance of a Geographic
Information System (GIS)-based cell model for simulating stony debris flow routing.
In detail, we investigate both the correlation between the DEM height uncertainty
resulting from the gridding procedure and the uncertainty in the corresponding
simulated erosion/deposition depths, and the effect of the interpolation algorithms on
simulated areas, erosion and deposition volumes, solid-liquid discharges, and channel
morphology after the event. The comparison among the tested interpolation methods
highlights that the ANUDEM and ordinary kriging algorithms are not suitable for
building DEMs of complex topography. Conversely, linear triangulation, the natural
neighbor algorithm, and the thin-plate spline with tension and completely regularized
spline functions ensure the best trade-off between accuracy and shape reliability.
Nevertheless, the evaluation of the effects of gridding techniques on debris flow
routing modeling reveals that the choice of the interpolation algorithm does not
significantly affect the model outcomes.
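To make the gridding comparison concrete, here is a minimal sketch of one of the simplest methods tested, Inverse Distance to a Power (IDW), in Python. The sample points, grid resolution, and power value are hypothetical illustrations, not data from the study:

```python
import numpy as np

def idw_grid(x, y, z, xi, yi, power=2.0):
    """Inverse Distance to a Power: each grid node receives a
    distance-weighted average of the scattered sample heights."""
    Xi, Yi = np.meshgrid(xi, yi)
    # distance from every grid node to every sample point
    d = np.hypot(Xi[..., None] - x, Yi[..., None] - y)
    d = np.maximum(d, 1e-12)          # guard against division by zero
    w = d ** (-power)
    return (w * z).sum(axis=-1) / w.sum(axis=-1)

# hypothetical scattered elevation samples on a planar slope
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)
z = 0.5 * x + 0.2 * y
dem = idw_grid(x, y, z, np.linspace(0, 100, 21), np.linspace(0, 100, 21))
```

Because the weights are positive and normalized, IDW estimates always stay within the range of the sampled heights, which is one reason such averaging methods can smooth away sharp features like channel incision on complex topography.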
Spatial characteristics of thunderstorm rainfall fields and their relation to runoff
The main aim of this study was to assess the ability of simple geometric measures of thunderstorm rainfall to explain the runoff response of a watershed. Calculation of storm geometric properties (e.g. areal coverage of the storm, areal coverage of its high-intensity portion, position of the storm centroid and the movement of the storm centroid in time) requires spatial information on rainfall. Generally, however, the rainfall data consist of rainfall depth values over an unevenly spaced network of raingauges. For this study, rainfall depth values were available for 91 raingauges in a watershed of about 148 km². To decide which interpolation method should be used for obtaining uniformly gridded data, a small study was undertaken comparing cross-validation statistics and computed geometric parameters for two interpolation methods (kriging and multiquadric), each used to estimate precipitation over a uniform 100 m × 100 m grid. The cross-validation results from the two methods were generally similar, and neither method consistently outperformed the other. In view of these results, the multiquadric interpolation method was used for the rest of the study. Several geometric measures were then computed from the interpolated surfaces for about 300 storm events occurring in a 17-year period. The correlation of these computed measures with basin runoff was then examined in an attempt to assess their relative importance in the basin runoff response. It was observed that the majority of the storms covered the entire watershed; therefore, the areal coverage of the storm was not a good indicator of the amount of runoff produced. The areal coverage of the storm core (10-min intensity greater than 25 mm/h), however, was found to be a much better predictor of runoff volume and peak rate.
The most important variable in runoff production was found to be the volume of the storm core. It was also observed that the position of the storm core relative to the watershed outlet becomes more important as the catchment size increases, with storms positioned in the central portion of the watershed producing more runoff than those positioned near the outlet or near the head of the watershed. This observation indicates the importance of the interaction of catchment size and shape with the spatial storm structure in runoff generation. Antecedent channel wetness was found to be of some importance in explaining runoff for the largest of the three watersheds studied, but antecedent watershed wetness did not appreciably contribute to runoff explanation. © 2002 Elsevier Science B.V. All rights reserved.
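As a rough illustration of the multiquadric method chosen in this study, the following sketch implements basic multiquadric radial-basis-function interpolation of scattered gauge values: solve a linear system for weights centered on the gauges, then evaluate the resulting surface anywhere. The gauge network, rainfall field, and shape parameter `eps` are synthetic assumptions, not the study's data:

```python
import numpy as np

def multiquadric_interp(pts, vals, query, eps=1.0):
    """Multiquadric RBF interpolation with kernel phi(r) = sqrt(r^2 + eps^2)."""
    phi = lambda r: np.sqrt(r**2 + eps**2)
    # pairwise distances between gauge locations
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    w = np.linalg.solve(phi(d), vals)              # RBF weights
    # distances from query points to the gauges
    dq = np.linalg.norm(query[:, None, :] - pts[None, :, :], axis=-1)
    return phi(dq) @ w

# synthetic rain-gauge network sampling a smooth depth field
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, (30, 2))
vals = 20.0 + 3.0 * np.sin(pts[:, 0]) + 2.0 * np.cos(pts[:, 1])
depths = multiquadric_interp(pts, vals, pts)       # exact at the gauges
```

Unlike kriging, this form needs no fitted variogram, and it reproduces the gauge values exactly, which suits its use here for producing uniformly gridded depth surfaces.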
The Missing Link: Bayesian Detection and Measurement of Intermediate-Mass Black-Hole Binaries
We perform Bayesian analysis of gravitational-wave signals from non-spinning,
intermediate-mass black-hole binaries (IMBHBs) with observed total mass $M$ and
mass ratio 1–4 using advanced LIGO and Virgo detectors. We employ
inspiral-merger-ringdown waveform models based on the effective-one-body
formalism and include subleading modes of radiation beyond the leading
mode. The presence of subleading modes increases signal power for inclined
binaries and allows for improved accuracy and precision in measurements of the
masses, as well as breaking of extrinsic parameter degeneracies. For low total
masses, the observed chirp mass $\mathcal{M} = \eta^{3/5} M$ ($\eta$ being the
symmetric mass ratio) is better measured. In contrast, as increasing power
comes from merger and ringdown, we find that the total mass $M$
has better relative precision than $\mathcal{M}$. Indeed, at high $M$, the signal
resembles a burst, and the measurement thus extracts the dominant frequency of the
signal, which depends on $M$. Depending on the binary's inclination, at moderate
signal-to-noise ratio (SNR), uncertainties in $M$ can be as large as
~20–25% while uncertainties in $\mathcal{M}$ are ~50–60% in binaries with unequal
masses (those numbers differ in more symmetric binaries).
Although large, those uncertainties will establish the existence of IMBHs. Our
results show that gravitational-wave observations can offer a unique tool to
observe and understand the formation, evolution and demographics of IMBHs,
which are difficult to observe in the electromagnetic window. (abridged)

Comment: 17 pages, 9 figures, 2 tables; updated to reflect published version.
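The chirp mass referenced above, $\mathcal{M} = \eta^{3/5} M$ with symmetric mass ratio $\eta = m_1 m_2 / (m_1 + m_2)^2$, is easy to compute directly; the component masses below are illustrative, not values from the paper:

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = eta**(3/5) * M, with eta = m1*m2/(m1+m2)**2."""
    M = m1 + m2
    eta = m1 * m2 / M**2
    return eta ** 0.6 * M

# illustrative equal-mass IMBHB: eta = 0.25, so M_c = 0.25**0.6 * 200
mc = chirp_mass(100.0, 100.0)   # ~87.06 (solar masses)
```

For equal masses the chirp mass is always about 0.435 times the total mass, which is why measuring one tightly constrains the other in near-symmetric binaries.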
Void Scaling and Void Profiles in CDM Models
An analysis of voids using cosmological N-body simulations of cold dark
matter models is presented. It employs a robust void statistic that was
recently applied to discriminate between data from the Las Campanas Redshift
Survey (LCRS) and different cosmological models. Here we extend the analysis to 3D and
show that typical void sizes D in the simulated galaxy samples obey a linear
scaling relation with the mean galaxy separation lambda: D = D_0 + nu*lambda. It
has the same slope nu as in 2D, but with lower absolute void sizes. The scaling
relation is able to discriminate between different cosmologies. For the
best-fitting standard LCDM model, the slope of the scaling relation for voids in the dark
matter halos is too steep compared to the LCRS, with too-small void sizes
for well-sampled data sets. The scaling relation of voids for dark matter halos
with increasing mass thresholds is even steeper than that for samples of
galaxy-mass halos where we sparse-sample the data. This shows the stronger
clustering of more massive halos. Further, we find a correlation of the void
size with its central and environmental average density. While there is little
sign of evolution in samples of small DM halos with v_circ ~ 90 km/s,
voids in halos with circular velocity over 200 km/s are larger at redshift z = 3
due to the smaller halo number density. The flow of dark matter from the
underdense to overdense regions in an early established network of large-scale
structure is also imprinted in the evolution of the density profiles, with a
relative density decrease in void centers of 0.18 per redshift unit between z = 3
and z = 0.

Comment: 12 pages, 9 eps figures, submitted to MNRAS.
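The scaling relation D = D_0 + nu*lambda above is a straight-line fit, so the slope nu and intercept D_0 can be recovered from (separation, void size) pairs by ordinary least squares; a minimal sketch with synthetic numbers (not values from the simulations):

```python
import numpy as np

# synthetic (mean separation, void size) pairs obeying D = D_0 + nu*lambda
rng = np.random.default_rng(2)
lam = np.linspace(2.0, 10.0, 40)        # mean galaxy separation
D0_true, nu_true = 1.5, 0.8
D = D0_true + nu_true * lam + rng.normal(0.0, 0.05, lam.size)

# least-squares fit of the linear scaling relation
nu_fit, D0_fit = np.polyfit(lam, D, 1)  # slope, intercept
```

Comparing the fitted slope across galaxy samples with different mean separations is what lets the statistic discriminate between cosmologies.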
Is there evidence for additional neutrino species from cosmology?
It has been suggested that recent cosmological and flavor-oscillation data
favor the existence of additional neutrino species beyond the three predicted
by the Standard Model of particle physics. We apply Bayesian model selection to
determine whether there is indeed any evidence from current cosmological
datasets for the standard cosmological model to be extended to include
additional neutrino flavors. The datasets employed include cosmic microwave
background temperature, polarization and lensing power spectra, and
measurements of the baryon acoustic oscillation scale and the Hubble constant.
We also consider other extensions to the standard neutrino model, such as
massive neutrinos, and possible degeneracies with other cosmological
parameters. The Bayesian evidence indicates that current cosmological data do
not require any non-standard neutrino properties.

Comment: 17 pages, 7 figures. v3: replaced with version published in JCAP
(typo fixes, including Figure 1 units).
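Bayesian model selection as applied above compares marginal likelihoods (evidences), which automatically penalize models with extra free parameters. As a toy sketch unrelated to the cosmological datasets, compare a fixed "fair coin" model against one with a free bias and a uniform prior; even mildly skewed data need not favor the extra parameter:

```python
from math import exp, lgamma

def evidence_fair(n):
    """Evidence of the fixed model: p = 0.5 for all n flips, so Z0 = 0.5**n."""
    return 0.5 ** n

def evidence_biased(n, k):
    """Evidence of the free-bias model with uniform prior on p:
    Z1 = integral_0^1 p**k (1-p)**(n-k) dp = k!(n-k)!/(n+1)!"""
    return exp(lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2))

n, k = 100, 52                      # 52 heads in 100 flips
bayes_factor = evidence_biased(n, k) / evidence_fair(n)
# bayes_factor < 1: the data do not require the extra parameter
```

The same Occam penalty is what drives the abstract's conclusion: unless the data demand it, the evidence prefers the simpler three-neutrino model.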