
    Interpolating point spread function anisotropy

    Planned wide-field weak lensing surveys are expected to reduce the statistical errors on the shear field to unprecedented levels. In contrast, systematic errors like those induced by the convolution with the point spread function (PSF) will not benefit from that scaling effect and will require very accurate modeling and correction. While numerous methods have been devised to carry out the PSF correction itself, modeling of the PSF shape and its spatial variations across the instrument field of view has, so far, attracted much less attention. This step is nevertheless crucial because the PSF is only known at star positions while the correction has to be performed at any position on the sky. A reliable interpolation scheme is therefore mandatory and a popular approach has been to use low-order bivariate polynomials. In the present paper, we evaluate four other classical spatial interpolation methods based on splines (B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and ordinary Kriging (OK). These methods are tested on the Star-challenge part of the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and are compared with the classical polynomial fitting (Polyfit). We also test all our interpolation methods independently of the way the PSF is modeled, by interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are known exactly at star positions). We find in that case RBF to be the clear winner, closely followed by the other local methods, IDW and OK. The global methods, Polyfit and B-splines, are largely behind, especially in fields with (ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all interpolators reach a variance on PSF systematics $\sigma_{sys}^2$ better than the $1\times10^{-7}$ upper bound expected by future space-based surveys, with the local interpolators performing better than the global ones.
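    As an illustration of the interpolation problem described above, the sketch below (a toy example, not the paper's code) uses synthetic star positions and a made-up PSF ellipticity pattern to compare a global low-order bivariate polynomial fit with a local radial basis function interpolator; scipy's RBFInterpolator stands in for the RBF scheme, and all names and values are placeholders.

```python
# A minimal sketch (not the paper's code) of spatially interpolating one PSF
# ellipticity component from star positions to arbitrary field positions,
# comparing a global low-order polynomial fit with a local RBF interpolator.
# Star/galaxy positions and ellipticities here are synthetic placeholders.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
stars_xy = rng.uniform(0.0, 1.0, size=(500, 2))                 # star positions (field units)
e1_stars = 0.02 * np.sin(6 * stars_xy[:, 0]) * stars_xy[:, 1]   # toy PSF e1 pattern
galaxies_xy = rng.uniform(0.0, 1.0, size=(2000, 2))             # positions needing the PSF

# Global method: 2nd-order bivariate polynomial ("Polyfit" in the abstract).
def poly_design(xy, order=2):
    x, y = xy[:, 0], xy[:, 1]
    cols = [x**i * y**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(poly_design(stars_xy), e1_stars, rcond=None)
e1_poly = poly_design(galaxies_xy) @ coeffs

# Local method: thin-plate-spline RBF interpolation through the star values.
rbf = RBFInterpolator(stars_xy, e1_stars, kernel='thin_plate_spline')
e1_rbf = rbf(galaxies_xy)

print("polynomial vs RBF prediction spread:", np.std(e1_poly - e1_rbf))
```

    The same stars-to-galaxies evaluation pattern would apply to IDW or Kriging; only the interpolator object changes.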

    Multiperiodicity, modulations and flip-flops in variable star light curves I. Carrier fit method

    The light curves of variable stars are commonly described using simple trigonometric models, which make use of the assumption that the model parameters are constant in time. This assumption, however, is often violated, and consequently, time series models with components that vary slowly in time are of great interest. In this paper we introduce a class of data analysis and visualization methods which can be applied in many different contexts of variable star research, for example spotted stars, variables showing the Blazhko effect, and the spin-down of rapid rotators. The methods proposed are exploratory, and can be of significant aid when performing a more thorough data analysis and interpretation with a more conventional method. Our methods are based on a straightforward decomposition of the input time series into a fast "clocking" periodicity and smooth modulating curves. The fast frequency, referred to as the carrier frequency, can be obtained from earlier observations (for instance, in the case of photometric data the period can be obtained from independently measured radial velocities), postulated using some simple physical principles (Keplerian rotation laws in accretion disks), or estimated from the data as a certain mean frequency. The smooth modulating curves are described by trigonometric polynomials or splines. The data approximation procedures are based on standard computational packages implementing simple or constrained least-squares fit-type algorithms.
    Comment: 14 pages, 23 figures, submitted to Astronomy and Astrophysics
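    A minimal sketch of a carrier-fit-style decomposition is given below; it is an assumption-laden toy example, not the authors' implementation. A synthetic light curve is modelled as a fast carrier at a known frequency f0 whose cosine and sine components are multiplied by slowly varying trigonometric polynomials, and all coefficients are obtained in a single linear least-squares fit.

```python
# A minimal sketch (assumptions, not the paper's implementation) of a carrier-fit
# style decomposition: a fast "carrier" frequency f0 modulated by slowly varying
# curves, all fitted together by linear least squares. The light curve here is
# synthetic; f0 would normally come from prior knowledge or a mean frequency.
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 800))             # observation times (days)
f0 = 0.5                                               # carrier frequency (1/day)
f_mod = 0.01                                           # slow modulation frequency
y = ((1.0 + 0.3 * np.sin(2 * np.pi * f_mod * t)) *     # amplitude-modulated carrier
     np.cos(2 * np.pi * f0 * t) + 0.05 * rng.normal(size=t.size))

def slow_basis(t, f_mod, order=1):
    """Low-order trigonometric polynomial in the slow frequency."""
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols += [np.cos(2 * np.pi * k * f_mod * t),
                 np.sin(2 * np.pi * k * f_mod * t)]
    return np.column_stack(cols)

S = slow_basis(t, f_mod)                               # smooth modulating curves
carrier_c = np.cos(2 * np.pi * f0 * t)[:, None] * S    # modulated cosine carrier
carrier_s = np.sin(2 * np.pi * f0 * t)[:, None] * S    # modulated sine carrier
X = np.hstack([S, carrier_c, carrier_s])               # full linear model

beta, *_ = np.linalg.lstsq(X, y, rcond=None)           # simple least-squares fit
y_fit = X @ beta
print("rms residual:", np.sqrt(np.mean((y - y_fit) ** 2)))
```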

    Parametrization and penalties in spline models with an application to survival analysis

    In this paper we show how a simple parametrization, built from the definition of cubic splines, can aid in the implementation and interpretation of penalized spline models, whatever configuration of knots we choose to use. We call this the value-first derivative parametrization. We perform Bayesian inference by exploring the natural link between quadratic penalties and Gaussian priors. However, a full Bayesian analysis seems feasible only for some penalty functionals. Alternatives include empirical Bayes methods involving model-selection-type criteria. The proposed methodology is illustrated by an application to survival analysis where the usual Cox model is extended to allow for time-varying regression coefficients.
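    The sketch below is a loose illustration, not the paper's method: it parametrizes a piecewise cubic curve by its values and first derivatives at a set of knots (a cubic Hermite form, simpler than the constrained cubic-spline parametrization the abstract refers to) and applies a quadratic penalty to the derivative coefficients, which acts like a Gaussian prior in a ridge-type estimate. All data and penalty choices are invented for the example.

```python
# A minimal sketch (a simplification, not the paper's parametrization) of a
# penalized fit where the curve is parametrized by its values and first
# derivatives at the knots (cubic Hermite form), with a quadratic penalty on the
# derivatives playing the role of a Gaussian prior (ridge-type estimate).
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=x.size)

knots = np.linspace(0.0, 1.0, 8)

def hermite_design(x, knots):
    """Design matrix: columns for knot values, then knot first derivatives."""
    K = len(knots)
    X = np.zeros((len(x), 2 * K))
    idx = np.clip(np.searchsorted(knots, x) - 1, 0, K - 2)   # interval index
    h = knots[idx + 1] - knots[idx]
    u = (x - knots[idx]) / h
    rows = np.arange(len(x))
    X[rows, idx]         = 2*u**3 - 3*u**2 + 1        # value at left knot
    X[rows, idx + 1]     = -2*u**3 + 3*u**2           # value at right knot
    X[rows, K + idx]     = (u**3 - 2*u**2 + u) * h    # derivative at left knot
    X[rows, K + idx + 1] = (u**3 - u**2) * h          # derivative at right knot
    return X

X = hermite_design(x, knots)
K = len(knots)
lam = 1e-2
P = np.zeros((2 * K, 2 * K))
P[K:, K:] = np.eye(K)                                 # quadratic penalty on derivatives
beta = np.linalg.solve(X.T @ X + lam * P, X.T @ y)    # posterior mean under Gaussian prior
print("fitted knot values:", np.round(beta[:K], 3))
```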