
    Validation of nonlinear PCA

    Linear principal component analysis (PCA) can be extended to a nonlinear PCA by using artificial neural networks. But the benefit of curved components requires a careful control of the model complexity. Moreover, standard techniques for model selection, including cross-validation and more generally the use of an independent test set, fail when applied to nonlinear PCA because of its inherent unsupervised characteristics. This paper presents a new approach for validating the complexity of nonlinear PCA models by using the error in missing data estimation as a criterion for model selection. It is motivated by the idea that only the model of optimal complexity is able to predict missing values with the highest accuracy. While standard test set validation usually favours over-fitted nonlinear PCA models, the proposed model validation approach correctly selects the optimal model complexity. Comment: 12 pages, 5 figures
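    The validation idea above lends itself to a compact sketch: artificially hide a fraction of the data entries, fit models of increasing complexity, and pick the complexity that reconstructs the hidden entries best. The sketch below is not the paper's implementation; it uses linear PCA of varying rank as a stand-in for nonlinear PCA, and the mask fraction, iteration count, and toy data are assumptions made purely for illustration.

```python
# Minimal sketch: select model complexity by missing-data estimation error.
# Linear PCA of varying rank stands in here for the nonlinear PCA model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # toy data matrix (samples x variables)
mask = rng.random(X.shape) < 0.1      # hold out ~10% of entries as "missing"
X_obs = np.where(mask, np.nan, X)

def missing_data_error(X_obs, mask, X_true, n_components, n_iter=20):
    """Impute missing entries iteratively with a rank-n_components model and
    return the RMSE on the held-out entries."""
    X_fill = np.where(mask, np.nanmean(X_obs, axis=0), X_obs)
    for _ in range(n_iter):
        pca = PCA(n_components=n_components)
        X_hat = pca.inverse_transform(pca.fit_transform(X_fill))
        X_fill[mask] = X_hat[mask]    # update only the missing entries
    return np.sqrt(np.mean((X_fill[mask] - X_true[mask]) ** 2))

# The complexity with the lowest missing-data error is the one selected.
errors = {k: missing_data_error(X_obs, mask, X, k) for k in range(1, 6)}
best_k = min(errors, key=errors.get)
```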

    Fast Point Spread Function Modeling with Deep Learning

    Modeling the Point Spread Function (PSF) of wide-field surveys is vital for many astrophysical applications and cosmological probes including weak gravitational lensing. The PSF smears the image of any recorded object and therefore needs to be taken into account when inferring properties of galaxies from astronomical images. In the case of cosmic shear, the PSF is one of the dominant sources of systematic errors and must be treated carefully to avoid biases in cosmological parameters. Recently, forward modeling approaches to calibrate shear measurements within the Monte-Carlo Control Loops (MCCL) framework have been developed. These methods typically require simulating a large number of wide-field images; the simulations therefore need to be very fast yet have realistic properties in key features such as the PSF pattern. Hence, such forward modeling approaches require a very flexible PSF model, which is quick to evaluate and whose parameters can be estimated reliably from survey data. We present a PSF model that meets these requirements, based on a fast deep-learning method to estimate its free parameters. We demonstrate our approach on publicly available SDSS data. We extract the most important features of the SDSS sample via principal component analysis. Next, we construct our model based on perturbations of a fixed base profile, ensuring that it captures these features. We then train a Convolutional Neural Network to estimate the free parameters of the model from noisy images of the PSF. This allows us to render a model image of each star, which we compare to the SDSS stars to evaluate the performance of our method. We find that our approach is able to accurately reproduce the SDSS PSF at the pixel level, which, due to the speed of both the model evaluation and the parameter estimation, offers good prospects for incorporating our method into the MCCL framework. Comment: 25 pages, 8 figures, 1 table
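    As a rough illustration of the parameter-estimation step described above, the sketch below regresses the free parameters of a PSF model from noisy star postage stamps with a small convolutional network. The architecture, stamp size, number of parameters, and training data are placeholder assumptions, not the configuration used in the paper.

```python
# Illustrative sketch (PyTorch): a CNN regressing PSF-model parameters from
# noisy postage stamps. Sizes, depths and parameter count are assumptions.
import torch
import torch.nn as nn

N_PARAMS = 8      # assumed number of free PSF-model parameters
STAMP = 32        # assumed postage-stamp size in pixels

class PSFParamNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (STAMP // 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, N_PARAMS),
        )

    def forward(self, x):             # x: (batch, 1, STAMP, STAMP)
        return self.head(self.features(x))

# One illustrative training step: minimise the MSE between predicted and true
# parameters on simulated stamps, then apply the trained network to real stars.
net = PSFParamNet()
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
stamps = torch.randn(16, 1, STAMP, STAMP)     # stand-in batch of noisy stamps
true_params = torch.randn(16, N_PARAMS)       # stand-in ground-truth parameters
loss = nn.functional.mse_loss(net(stamps), true_params)
loss.backward()
optimiser.step()
```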

    Pre-Merger Localization of Gravitational-Wave Standard Sirens With LISA I: Harmonic Mode Decomposition

    The continuous improvement in localization errors (sky position and distance) in real time as LISA observes the gradual inspiral of a supermassive black hole (SMBH) binary can be of great help in identifying any prompt electromagnetic counterpart associated with the merger. We develop a new method, based on a Fourier decomposition of the time-dependent, LISA-modulated gravitational-wave signal, to study this intricate problem. The method is faster than standard Monte Carlo simulations by orders of magnitude. By surveying the parameter space of potential LISA sources, we find that counterparts to SMBH binary mergers with total mass M~10^5-10^7 M_Sun and redshifts z<~3 can be localized to within the field of view of astronomical instruments (~deg^2) typically hours to weeks prior to coalescence. This will allow targeted searches for variable electromagnetic counterparts as the merger proceeds, as well as monitoring of the most energetic coalescence phase. A rich set of astrophysical and cosmological applications would emerge from the identification of electromagnetic counterparts to these gravitational-wave standard sirens. Comment: 29 pages, 12 figures, version accepted by Phys Rev
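    The core numerical idea, decomposing the slowly LISA-modulated signal into harmonics of the annual orbital frequency, can be caricatured in a few lines. The sketch below uses a toy carrier and a toy antenna-pattern modulation; it is not the paper's localization pipeline (which combines the harmonic coefficients with a Fisher-matrix error analysis), and every numerical value is an assumption for illustration only.

```python
# Toy sketch: project a LISA-like amplitude-modulated signal onto harmonics of
# the annual orbital frequency. Carrier, modulation and amplitudes are made up.
import numpy as np

YEAR = 3.156e7                                   # roughly one year in seconds
t = np.linspace(0.0, YEAR, 2**16, endpoint=False)
f_gw, f_orb = 1e-3, 1.0 / YEAR                   # toy GW and orbital frequencies

carrier = np.cos(2 * np.pi * f_gw * t)                          # stand-in waveform
modulation = 1.0 + 0.5 * np.cos(2 * np.pi * f_orb * t + 0.3)    # toy antenna pattern
signal = modulation * carrier

# Crude demodulation followed by projection onto exp(-2*pi*i*n*f_orb*t): the
# fast 2*f_gw term averages out, leaving the orbital-harmonic coefficients that
# carry the sky-position-dependent modulation.
envelope = 2.0 * signal * carrier
harmonics = {n: np.mean(envelope * np.exp(-2j * np.pi * n * f_orb * t))
             for n in range(-2, 3)}
```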

    Smoothing dynamic positron emission tomography time courses using functional principal components

    A functional smoothing approach to the analysis of PET time course data is presented. By borrowing information across space and accounting for this pooling through the use of a nonparametric covariate adjustment, it is possible to smooth the PET time course data, thus reducing the noise. A new model for functional data analysis, the Multiplicative Nonparametric Random Effects Model, is introduced to more accurately account for the variation in the data. A locally adaptive bandwidth choice helps to determine the correct amount of smoothing at each time point. This preprocessing step to smooth the data then allows subsequent analysis by methods such as spectral analysis to be substantially improved in terms of mean squared error.
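    A generic version of the locally adaptive smoothing step can be sketched with a Nadaraya-Watson kernel smoother whose bandwidth at each time point is set by the distance to the k-th nearest neighbour. This is only a stand-in under assumed toy data; it is not the paper's Multiplicative Nonparametric Random Effects Model, which additionally pools information across space.

```python
# Sketch: kernel smoothing of one noisy time course with a locally adaptive
# (k-nearest-neighbour) bandwidth. Generic smoother, toy data, illustrative k.
import numpy as np

def adaptive_kernel_smooth(t, y, k=10):
    """Nadaraya-Watson smoother whose bandwidth at each point is the distance
    to its k-th nearest neighbour in time."""
    y_smooth = np.empty_like(y, dtype=float)
    for i, t0 in enumerate(t):
        d = np.abs(t - t0)
        h = np.sort(d)[k]                    # local bandwidth
        w = np.exp(-0.5 * (d / h) ** 2)      # Gaussian kernel weights
        y_smooth[i] = np.sum(w * y) / np.sum(w)
    return y_smooth

t = np.linspace(0.0, 60.0, 120)              # toy sampling grid (minutes)
y = t * np.exp(-t / 20.0) + np.random.default_rng(1).normal(scale=0.5, size=t.size)
y_hat = adaptive_kernel_smooth(t, y, k=10)
```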

    Analysing musical performance through functional data analysis: rhythmic structure in Schumann's Träumerei

    Functional data analysis (FDA) is a relatively new branch of statistics devoted to describing and modelling data that are complete functions. Many relevant aspects of musical performance and perception can be understood and quantified as dynamic processes evolving as functions of time. In this paper, we show that FDA is a statistical methodology well suited for research into the field of quantitative musical performance analysis. To demonstrate this suitability, we consider tempo data for 28 performances of Schumann's Träumerei and analyse them by means of functional principal component analysis (one of the most powerful descriptive tools included in FDA). Specifically, we investigate the commonalities and differences between different performances regarding (expressive) timing, and we cluster similar performances together. We conclude that musical data considered as functional data reveal performance structures that might otherwise go unnoticed.
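    The FPCA-plus-clustering workflow described above can be caricatured on a discretised grid: centre the tempo curves, extract leading functional principal components, and cluster performances in the space of their scores. The data below are synthetic stand-ins (the study analysed 28 recorded performances); the component and cluster counts are arbitrary assumptions.

```python
# Sketch: discretised functional PCA of tempo curves on a common onset grid,
# followed by clustering of performances in FPC-score space. Toy data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_perf, n_onsets = 28, 100
base_tempo = 60.0 + 10.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_onsets))
tempo_curves = base_tempo + rng.normal(scale=3.0, size=(n_perf, n_onsets))

# Centre the curves and apply PCA across the onset grid (discretised FPCA).
fpca = PCA(n_components=3)
scores = fpca.fit_transform(tempo_curves - tempo_curves.mean(axis=0))

# Group performances with similar expressive-timing profiles.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
```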

    On Point Spread Function modelling: towards optimal interpolation

    Point Spread Function (PSF) modeling is a central part of any astronomy data analysis relying on measuring the shapes of objects. It is especially crucial for weak gravitational lensing, in order to beat down systematics and allow one to reach the full potential of weak lensing in measuring dark energy. A PSF modeling pipeline is made of two main steps: the first one is to assess its shape on stars, and the second is to interpolate it at any desired position (usually galaxies). We focus on the second part, and compare different interpolation schemes, including polynomial interpolation, radial basis functions, Delaunay triangulation and Kriging. For that purpose, we develop simulations of PSF fields, in which stars are built from a set of basis functions defined from a Principal Components Analysis of a real ground-based image. We find that Kriging gives the most reliable interpolation, significantly better than the traditionally used polynomial interpolation. We also note that although a Kriging interpolation on individual images is enough to control systematics at the level necessary for current weak lensing surveys, more elaborate techniques will have to be developed to reach future ambitious surveys' requirements. Comment: Accepted for publication in MNRAS
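    The interpolation step the paper compares can be sketched with Gaussian-process regression, which is the standard statistical formulation of Kriging: fit the PSF summary measured at star positions and predict it at galaxy positions. The PSF field, kernel, and noise level below are toy assumptions, not the simulations used in the paper.

```python
# Sketch: Kriging-style interpolation of one PSF ellipticity component from
# star positions to galaxy positions via Gaussian-process regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
star_xy = rng.uniform(0.0, 1.0, size=(200, 2))                       # star positions
psf_e1 = np.sin(3.0 * star_xy[:, 0]) * np.cos(2.0 * star_xy[:, 1])   # toy PSF field
psf_e1 += rng.normal(scale=0.01, size=psf_e1.size)                   # measurement noise

kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(star_xy, psf_e1)

galaxy_xy = rng.uniform(0.0, 1.0, size=(500, 2))      # positions needing the PSF
e1_interp, e1_std = gp.predict(galaxy_xy, return_std=True)
```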