Interpolating point spread function anisotropy
Planned wide-field weak lensing surveys are expected to reduce the
statistical errors on the shear field to unprecedented levels. In contrast,
systematic errors like those induced by the convolution with the point spread
function (PSF) will not benefit from that scaling effect and will require very
accurate modeling and correction. While numerous methods have been devised to
carry out the PSF correction itself, modeling of the PSF shape and its spatial
variations across the instrument field of view has, so far, attracted much less
attention. This step is nevertheless crucial because the PSF is only known at
star positions while the correction has to be performed at any position on the
sky. A reliable interpolation scheme is therefore mandatory and a popular
approach has been to use low-order bivariate polynomials. In the present paper,
we evaluate four other classical spatial interpolation methods based on splines
(B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and
ordinary Kriging (OK). These methods are tested on the Star-challenge part of
the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and
are compared with the classical polynomial fitting (Polyfit). We also test all
our interpolation methods independently of the way the PSF is modeled, by
interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are
known exactly at star positions). We find in that case RBF to be the clear
winner, closely followed by the other local methods, IDW and OK. The global
methods, Polyfit and B-splines, are largely behind, especially in fields with
(ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all
interpolators reach a variance of the PSF systematics below the upper bound
expected for future space-based surveys, with the local interpolators
performing better than the global ones.
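Two of the local schemes compared above, inverse distance weighting and radial basis functions, can be sketched on a toy PSF ellipticity field. This is an illustrative sketch only: the star positions and the smooth field below are synthetic assumptions, not GREAT10 data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical star catalogue: positions and one PSF shape parameter
# (e.g. an ellipticity component) measured at each star.
rng = np.random.default_rng(0)
stars = rng.uniform(0, 1, size=(200, 2))                # star positions in the field
e1 = np.sin(3 * stars[:, 0]) * np.cos(2 * stars[:, 1])  # toy smooth PSF field

# Radial basis function interpolation (exact at the star positions)
rbf = RBFInterpolator(stars, e1, kernel='thin_plate_spline')

# Inverse distance weighting, written out explicitly
def idw(points, values, query, power=2.0, eps=1e-12):
    d = np.linalg.norm(points[None, :, :] - query[:, None, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Interpolate the PSF parameter at positions where only galaxies are seen
galaxies = rng.uniform(0, 1, size=(50, 2))
e1_rbf = rbf(galaxies)
e1_idw = idw(stars, e1, galaxies)
```

Both methods reproduce the measured values at the star positions themselves, which is the minimal sanity check for any PSF interpolator.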
On Point Spread Function modelling: towards optimal interpolation
Point Spread Function (PSF) modeling is a central part of any astronomy data
analysis relying on measuring the shapes of objects. It is especially crucial
for weak gravitational lensing, in order to beat down systematics and allow one
to reach the full potential of weak lensing in measuring dark energy. A PSF
modeling pipeline is made of two main steps: the first one is to assess its
shape on stars, and the second is to interpolate it at any desired position
(usually galaxies). We focus on the second part, and compare different
interpolation schemes, including polynomial interpolation, radial basis
functions, Delaunay triangulation and Kriging. For that purpose, we develop
simulations of PSF fields, in which stars are built from a set of basis
functions defined from a Principal Components Analysis of a real ground-based
image. We find that Kriging gives the most reliable interpolation,
significantly better than the traditionally used polynomial interpolation. We
also note that although a Kriging interpolation on individual images is enough
to control systematics at the level necessary for current weak lensing surveys,
more elaborate techniques will have to be developed to reach future ambitious
surveys' requirements. Comment: Accepted for publication in MNRAS
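A minimal ordinary Kriging sketch follows, assuming a simple exponential variogram; the variogram model and its range are illustrative choices, not those fitted in the paper.

```python
import numpy as np

def ordinary_kriging(points, values, query,
                     variogram=lambda h: 1.0 - np.exp(-h / 0.2)):
    """Predict at `query` from samples; exact at the sample points."""
    n = len(points)
    H = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(H)           # semivariances between sample pairs
    A[n, :n] = A[:n, n] = 1.0          # Lagrange row/column: weights sum to 1
    A[n, n] = 0.0
    out = []
    for q in np.atleast_2d(query):
        b = np.append(variogram(np.linalg.norm(points - q, axis=1)), 1.0)
        w = np.linalg.solve(A, b)[:n]  # kriging weights for this query point
        out.append(w @ values)
    return np.array(out)
```

The unit-sum constraint makes the predictor unbiased: a constant field is reproduced exactly everywhere, and sampled values are recovered exactly at the sample positions.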
Application of DInSAR-GPS optimization for derivation of fine-scale surface motion maps of Southern California
A method based on random field theory and Gibbs-Markov random fields equivalence within a Bayesian statistical framework is used to derive 3-D surface motion maps from sparse global positioning system (GPS) measurements and differential interferometric synthetic aperture radar (DInSAR) interferogram in the southern California region. The minimization of the Gibbs energy function is performed analytically, which is possible in the case when neighboring pixels are considered independent. The problem is well posed and the solution is unique and stable and not biased by the continuity condition. The technique produces a 3-D field containing estimates of surface motion on the spatial scale of the DInSAR image, over a given time period, complete with error estimates. Significant improvement in the accuracy of the vertical component and moderate improvement in the accuracy of the horizontal components of velocity are achieved in comparison with the GPS data alone. The method can be expanded to account for other available data sets, such as additional interferograms, lidar, or leveling data, in order to achieve even higher accuracy.
An Ensemble Approach to Space-Time Interpolation
There has been much excitement and activity in recent years related to the relatively sudden availability of earth-related data and the computational capabilities to visualize and analyze these data. Despite the increased ability to collect and store large volumes of data, few individual data sets exist that provide both the requisite spatial and temporal observational frequency for many urban and/or regional-scale applications. The motivating view of this paper, however, is that the relative temporal richness of one data set can be leveraged with the relative spatial richness of another to fill in the gaps. We also note that any single interpolation technique has advantages and disadvantages. Particularly when focusing on the spatial or on the temporal dimension, this means that different techniques are more appropriate than others for specific types of data. We therefore propose a space- time interpolation approach whereby two interpolation methods – one for the temporal and one for the spatial dimension – are used in tandem in order to maximize the quality of the result. We call our ensemble approach the Space-Time Interpolation Environment (STIE). The primary steps within this environment include a spatial interpolator, a time-step processor, and a calibration step that enforces phenomenon-related behavioral constraints. The specific interpolation techniques used within the STIE can be chosen on the basis of suitability for the data and application at hand. In the current paper, we describe STIE conceptually including the structure of the data inputs and output, details of the primary steps (the STIE processors), and the mechanism for coordinating the data and the processors. We then describe a case study focusing on urban land cover in Phoenix, Arizona. Our empirical results show that STIE was effective as a space-time interpolator for urban land cover with an accuracy of 85.2% and furthermore that it was more effective than a single technique.
Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)
Debris flows are among the most hazardous phenomena in mountain areas. To cope
with debris flow hazard, it is common to delineate the risk-prone areas through
routing models. The most important input to debris flow routing models are the
topographic data, usually in the form of Digital Elevation Models (DEMs). The quality
of DEMs depends on the accuracy, density, and spatial distribution of the sampled
points; on the characteristics of the surface; and on the applied gridding methodology.
Therefore, the choice of the interpolation method affects the realistic representation
of the channel and fan morphology, and thus potentially the debris flow routing
modeling outcomes. In this paper, we initially investigate the performance of common
interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor,
Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging)
in building DEMs with the complex topography of a debris flow channel located
in the Venetian Dolomites (North-eastern Italian Alps), by using small footprint full-
waveform Light Detection And Ranging (LiDAR) data. The investigation is carried
out through a combination of statistical analysis of vertical accuracy, algorithm
robustness, and spatial clustering of vertical errors, and multi-criteria shape reliability
assessment. After that, we examine the influence of the tested interpolation algorithms
on the performance of a Geographic Information System (GIS)-based cell model for
simulating stony debris flow routing. In detail, we investigate both the correlation
between the uncertainty in the DEM heights resulting from the gridding procedure
and that in the corresponding simulated erosion/deposition depths, and the effect of
interpolation algorithms on simulated areas, erosion and deposition volumes, solid-liquid
discharges, and channel morphology after the event. The comparison among the tested
interpolation methods highlights that the ANUDEM and ordinary kriging algorithms
are not suitable for building DEMs with complex topography. Conversely, the linear
triangulation, the natural neighbor algorithm, and the thin-plate spline plus tension and completely regularized spline functions ensure the best trade-off between accuracy
and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on
debris flow routing modeling reveals that the choice of the interpolation algorithm does
not significantly affect the model outcomes.
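Two of the tested gridding methods, linear triangulation (TIN) and nearest neighbor, can be sketched with SciPy's `griddata` on synthetic scattered elevation points. The toy terrain below is an assumption for illustration, not the LiDAR data from the Rovina di Cancia basin.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical LiDAR ground points: scattered (x, y) positions with elevations z
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(500, 2))
z = 0.05 * pts[:, 0] + 5.0 * np.sin(pts[:, 1] / 15.0)   # toy terrain surface

# Regular DEM grid to interpolate onto (kept inside the data extent)
gx, gy = np.meshgrid(np.linspace(5, 95, 50), np.linspace(5, 95, 50))

# Two of the gridding methods compared in the paper
dem_tin = griddata(pts, z, (gx, gy), method='linear')    # linear triangulation
dem_nn = griddata(pts, z, (gx, gy), method='nearest')    # nearest neighbor
```

Note that linear triangulation returns NaN outside the convex hull of the samples, while nearest neighbor always returns one of the sampled elevations; comparing such behaviors is part of what the accuracy and shape-reliability assessment above quantifies.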
Optimal interpolation of satellite and ground data for irradiance nowcasting at city scales
We use a Bayesian method, optimal interpolation, to improve satellite-derived irradiance estimates at city scales using ground sensor data. Optimal interpolation requires error covariances in the satellite estimates and ground data, which define how information from the sensor locations is distributed across a large area. We describe three methods to choose such covariances, including a covariance parameterization that depends on the relative cloudiness between locations. Results are computed with ground data from 22 sensors over a 75×80 km area centered on Tucson, AZ, using two satellite-derived irradiance models. The improvements in standard error metrics for both satellite models indicate that our approach is applicable to additional satellite-derived irradiance models. We also show that optimal interpolation can nearly eliminate mean bias error and improve the root mean squared error by 50%.
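The optimal interpolation update has a compact linear-algebra form, x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b). Below is a minimal sketch with an assumed Gaussian distance-decay background covariance; the length scale, variances, grid, and sensor readings are all illustrative, not the Tucson configuration.

```python
import numpy as np

# Background: satellite-derived irradiance on a coarse grid (flat, for clarity)
gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
grid = np.column_stack([gx.ravel(), gy.ravel()])
x_b = np.full(len(grid), 600.0)            # W/m^2 background estimate

# Two hypothetical ground sensors and their readings
sensors = np.array([[0.2, 0.3], [0.7, 0.6]])
y = np.array([650.0, 560.0])
y_b = np.full(2, 600.0)                    # background evaluated at the sensors

def gauss_cov(p, q, sigma2=100.0, L=0.3):
    # Assumed background error covariance: Gaussian decay with distance
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    return sigma2 * np.exp(-(d / L) ** 2)

BHt = gauss_cov(grid, sensors)             # B H^T : grid-to-sensor covariances
HBHt = gauss_cov(sensors, sensors)         # H B H^T : sensor-to-sensor covariances
R = 4.0 * np.eye(2)                        # sensor error covariance (assumed)

# Optimal interpolation analysis step
K = BHt @ np.linalg.inv(HBHt + R)          # gain
x_a = x_b + K @ (y - y_b)                  # corrected irradiance field
```

Grid cells near a sensor are pulled strongly toward that sensor's reading, while distant cells keep the satellite background, which is exactly how the covariance choice spreads sensor information across the area.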