
    Interpolating point spread function anisotropy

    Planned wide-field weak lensing surveys are expected to reduce the statistical errors on the shear field to unprecedented levels. In contrast, systematic errors like those induced by the convolution with the point spread function (PSF) will not benefit from that scaling effect and will require very accurate modeling and correction. While numerous methods have been devised to carry out the PSF correction itself, modeling of the PSF shape and its spatial variations across the instrument field of view has, so far, attracted much less attention. This step is nevertheless crucial because the PSF is only known at star positions while the correction has to be performed at any position on the sky. A reliable interpolation scheme is therefore mandatory and a popular approach has been to use low-order bivariate polynomials. In the present paper, we evaluate four other classical spatial interpolation methods based on splines (B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and ordinary Kriging (OK). These methods are tested on the Star-challenge part of the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and are compared with the classical polynomial fitting (Polyfit). We also test all our interpolation methods independently of the way the PSF is modeled, by interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are known exactly at star positions). We find in that case RBF to be the clear winner, closely followed by the other local methods, IDW and OK. The global methods, Polyfit and B-splines, are largely behind, especially in fields with (ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all interpolators reach a variance on PSF systematics $\sigma_{sys}^2$ better than the $1\times10^{-7}$ upper bound expected by future space-based surveys, with the local interpolators performing better than the global ones.
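    As a minimal sketch of the kind of comparison described above (not the GREAT10 pipeline itself), the following Python snippet interpolates one PSF ellipticity component measured at star positions onto arbitrary positions, contrasting a global low-order polynomial fit with a local radial basis function interpolator; the toy PSF pattern, field geometry and kernel choice are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)

    # Toy PSF ellipticity pattern over a unit field (assumed, for illustration only).
    def true_e1(xy):
        x, y = xy[:, 0], xy[:, 1]
        return 0.02 * np.sin(3 * x) * np.cos(2 * y) + 0.01 * x * y

    stars = rng.uniform(0, 1, size=(500, 2))      # star positions (PSF known here)
    galaxies = rng.uniform(0, 1, size=(2000, 2))  # positions where the PSF is needed
    e1_stars = true_e1(stars) + rng.normal(0, 1e-3, len(stars))  # noisy measurements

    # Global method: low-order bivariate polynomial fit by least squares (Polyfit-like).
    def poly_design(xy, deg=3):
        x, y = xy[:, 0], xy[:, 1]
        return np.column_stack([x**i * y**j
                                for i in range(deg + 1)
                                for j in range(deg + 1 - i)])

    coeffs, *_ = np.linalg.lstsq(poly_design(stars), e1_stars, rcond=None)
    e1_poly = poly_design(galaxies) @ coeffs

    # Local method: thin-plate-spline RBF with mild smoothing.
    rbf = RBFInterpolator(stars, e1_stars, kernel="thin_plate_spline", smoothing=1e-6)
    e1_rbf = rbf(galaxies)

    truth = true_e1(galaxies)
    print("polynomial RMS error:", np.sqrt(np.mean((e1_poly - truth) ** 2)))
    print("RBF        RMS error:", np.sqrt(np.mean((e1_rbf - truth) ** 2)))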

    Accurate and robust image superresolution by neural processing of local image representations

    Image superresolution involves the processing of an image sequence to generate a still image with higher resolution. Classical approaches, such as Bayesian MAP methods, require iterative minimization procedures with high computational costs. Recently, the authors proposed a method to tackle this problem based on the use of a hybrid MLP-PNN architecture. In this paper, we present a novel superresolution method, based on an evolution of this concept, to incorporate the use of local image models. A neural processing stage receives as input the value of model coefficients on local windows. The data dimensionality is first reduced by application of PCA. An MLP, trained on synthetic sequences with various amounts of noise, estimates the high-resolution image data. The effect of varying the dimension of the network input space is examined, showing a complex, structured behavior. Quantitative results are presented showing the accuracy and robustness of the proposed method.
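    The following sketch illustrates only the general pipeline implied above (PCA-compressed local windows feeding a neural regressor), not the authors' hybrid MLP-PNN architecture: it uses scikit-learn on a made-up 1D toy problem, and the window size, network shape and noise level are all illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    # Synthetic training data: smooth 1D "scenes", observed blurred and decimated.
    def make_pairs(n_scenes=2000, width=16):
        hi = np.cumsum(rng.normal(size=(n_scenes, width)), axis=1)  # smooth signals
        lo = 0.5 * (hi[:, ::2] + hi[:, 1::2])                       # blur + decimate
        lo += rng.normal(0, 0.05, lo.shape)                         # observation noise
        return lo, hi[:, width // 2]                                # window -> central HR sample

    X, y = make_pairs()

    # Reduce the window dimensionality, then regress the high-resolution value.
    pca = PCA(n_components=5).fit(X)
    mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    mlp.fit(pca.transform(X), y)

    X_test, y_test = make_pairs(n_scenes=500)
    pred = mlp.predict(pca.transform(X_test))
    print("test RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))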

    Accuracy of areal interpolation methods for count data

    The combination of several socio-economic databases originating from different administrative sources, collected on several different partitions of a geographic zone of interest into administrative units, induces the so-called areal interpolation problem. This problem is that of allocating the data from a set of source spatial units to a set of target spatial units. A particular case of that problem is the re-allocation to a single target partition which is a regular grid. At the European level, for example, the EU directive 'INSPIRE', or INfrastructure for SPatial InfoRmation, encourages the states to provide socio-economic data on a common grid to facilitate economic studies across states. In the literature, there are three main types of such techniques: proportional weighting schemes, smoothing techniques and regression-based interpolation. We propose a stochastic model based on Poisson point patterns to study the statistical accuracy of these techniques for regular grid targets in the case of count data. The error depends on the nature of the target variable and its correlation with the auxiliary variable. For simplicity, we restrict attention to proportional weighting schemes and Poisson regression-based methods. Our conclusion is that no technique always dominates.
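    Of the technique families mentioned, proportional weighting is the simplest to make concrete. The sketch below allocates counts from source zones to grid cells in proportion to the shared area (implicitly assuming uniform density within each source zone); the zones, grid and counts are made up for illustration, and in practice the intersection areas would come from a GIS overlay of the two partitions.

    import numpy as np

    # Intersection areas between 3 source zones (rows) and 4 target grid cells (columns).
    inter_area = np.array([
        [2.0, 1.0, 0.0, 0.0],
        [0.0, 1.5, 1.5, 0.0],
        [0.0, 0.0, 1.0, 3.0],
    ])
    source_counts = np.array([300, 120, 80])   # counts observed on the source zones

    # Proportional weighting: split each source count among target cells by shared area.
    weights = inter_area / inter_area.sum(axis=1, keepdims=True)
    target_counts = source_counts @ weights

    print(target_counts)        # estimated counts on the grid cells
    print(target_counts.sum())  # total count is preserved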

    Quality measures for soil surveys by lognormal kriging

    If we know the variogram of a random variable then we can compute the prediction error variances (kriging variances) for kriged estimates of the variable at unsampled sites from sampling grids of different design and density. In this way the kriging variance is a useful pre-survey measure of the quality of statistical predictions, which can be used to design sampling schemes to achieve target quality requirements at minimal cost. However, many soil properties are lognormally distributed, and must be transformed to logarithms before geostatistical analysis. The predicted values on the log scale are then back-transformed. It is possible to compute the prediction error variance for a prediction by this lognormal kriging procedure. However, it does not depend only on the variogram of the variable and the sampling configuration, but also on the conditional mean of the prediction. We therefore cannot use the kriging variance directly as a pre-survey measure of quality for geostatistical surveys of lognormal variables. In this paper we present an alternative. First, we show how the limits of a prediction interval for a variable predicted by lognormal kriging can be expressed as dimensionless quantities, proportions of the unknown median of the conditional distribution. This scaled prediction interval can be used as a pre-survey quality measure since it depends only on the sampling configuration and the variogram of the log-transformed variable. Second, we show how a similar scaled prediction interval can be computed for the median value of a lognormal variable across a block, in the case of block kriging. This approach is then illustrated using variograms of lognormally distributed data on concentration of elements in the soils of a part of eastern England.
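    The key point above is that, on the log scale, the ordinary kriging variance depends only on the variogram and the sampling configuration, so the back-transformed 95% limits can be written as median * exp(+/-1.96 s), i.e. as dimensionless factors available before any survey data are collected. The sketch below computes such scaled interval factors for one prediction point on a square sampling grid; the exponential variogram parameters and grid spacing are illustrative assumptions, not values from the paper.

    import numpy as np

    def gamma(h, nugget=0.1, sill=0.4, rng_par=50.0):
        """Exponential variogram of the log-transformed variable (illustrative parameters)."""
        return np.where(h > 0, nugget + sill * (1.0 - np.exp(-h / rng_par)), 0.0)

    spacing = 20.0                                     # grid spacing in metres
    gx, gy = np.meshgrid(np.arange(5) * spacing, np.arange(5) * spacing)
    samples = np.column_stack([gx.ravel(), gy.ravel()])
    x0 = np.array([50.0, 50.0])                        # prediction point between grid nodes

    # Ordinary kriging system in variogram form: [Gamma 1; 1' 0][lambda; mu] = [gamma0; 1].
    n = len(samples)
    H = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=2)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(H)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(gamma(np.linalg.norm(samples - x0, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)

    ok_var = sol[:n] @ b[:n] + sol[n]                  # kriging variance on the log scale
    s = np.sqrt(ok_var)

    # 95% prediction interval for the original variable, as proportions of the
    # unknown conditional median: [median*exp(-1.96*s), median*exp(+1.96*s)].
    print("scaled 95% interval factors:", np.exp(-1.96 * s), np.exp(1.96 * s))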

    Estimation of Sparse MIMO Channels with Common Support

    We consider the problem of estimating sparse communication channels in the MIMO context. In small to medium bandwidth communications, as in the current standards for OFDM and CDMA communication systems (with bandwidths up to 20 MHz), such channels are individually sparse and at the same time share a common support set. Since the underlying physical channels are inherently continuous-time, we propose a parametric sparse estimation technique based on finite rate of innovation (FRI) principles. Parametric estimation is especially relevant to MIMO communications, as it allows for robust estimation and a concise description of the channels. The core of the algorithm is a generalization of conventional spectral estimation methods to multiple input signals with common support. We show the application of our technique to channel estimation in OFDM (uniform/contiguous DFT pilots) and CDMA downlink (Walsh-Hadamard coded schemes). In the presence of additive white Gaussian noise, theoretical lower bounds on the estimation of sparse common support (SCS) channel parameters in Rayleigh fading conditions are derived. Finally, an analytical spatial channel model is derived, and simulations on this model in the OFDM setting show that the symbol error rate (SER) is reduced by a factor of 2 (at 0 dB SNR) to 5 (at high SNR) compared with standard non-parametric methods, e.g. lowpass interpolation. (Comment: 12 pages, 7 figures. Submitted to IEEE Transactions on Communications.)
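    The heart of such a parametric approach is a spectral-estimation (annihilating filter, Prony-type) step that recovers tap delays from a small number of pilot tones. The sketch below shows that step for a single noiseless channel only; the DFT size, number of taps, delays and gains are invented for illustration, and the paper's multi-channel common-support generalization, noise handling and gain estimation are omitted.

    import numpy as np

    N = 64                                           # DFT size (contiguous pilot tones assumed)
    K = 3                                            # number of channel taps
    delays = np.array([3.2, 11.7, 25.4])             # fractional tap delays, in samples
    gains = np.array([1.0, 0.6 + 0.3j, -0.4j])       # complex tap gains

    # Frequency-domain pilot samples: X[m] = sum_k gains[k] * exp(-2j*pi*m*delays[k]/N).
    m = np.arange(2 * K + 1)                         # 2K+1 consecutive tones suffice for K taps
    X = (gains[None, :] * np.exp(-2j * np.pi * np.outer(m, delays) / N)).sum(axis=1)

    # Annihilating filter h (length K+1): sum_l h[l] * X[m-l] = 0 for all valid m.
    T = np.array([[X[K + i - l] for l in range(K + 1)] for i in range(K + 1)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()                                # null-space vector of the annihilation system

    # Roots of h lie at exp(-2j*pi*tau/N); their angles give the delays.
    u = np.roots(h)
    tau_hat = np.sort((-np.angle(u) * N / (2 * np.pi)) % N)
    print("recovered delays:", np.round(tau_hat, 3))  # ~ [3.2, 11.7, 25.4]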