
    Reconstructing Galaxy Spectral Energy Distributions from Broadband Photometry

    We present a novel approach to photometric redshifts that merges the advantages of both template-fitting and empirical-fitting algorithms without their disadvantages. The technique derives a set of templates, describing the spectral energy distributions of galaxies, from a catalog with both multicolor photometry and spectroscopic redshifts; the algorithm essentially uses the shapes of the templates as the fitting parameters. Using simulated multicolor data, we show that even with a small training set of galaxies we can robustly reconstruct the underlying spectral energy distributions, even in the presence of substantial errors in the photometric observations. We apply these techniques to the multicolor and spectroscopic observations of the Hubble Deep Field, building a set of template spectra that reproduce the observed galaxy colors to better than 10%. Finally, we demonstrate that these improved spectral energy distributions lead to a photometric-redshift relation for the Hubble Deep Field that is more accurate than standard template-based approaches.
    Comment: 23 pages, 8 figures, LaTeX AASTeX, accepted for publication in A
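The core idea, deriving template shapes from broadband photometry plus spectroscopic redshifts, can be sketched as a linear inverse problem. The following toy setup (the grids, top-hat filters, and noise levels are illustrative assumptions, not the paper's actual data) treats each broadband flux as an average over redshifted rest-frame template bins and recovers the template by least squares:

```python
import numpy as np

# Toy sketch: each broadband flux of a galaxy at known spectroscopic
# redshift is a linear average over rest-frame template bins, so the
# template shape can be recovered from many galaxies by least squares.

rng = np.random.default_rng(0)

rest_wave = np.linspace(1000.0, 9000.0, 40)        # rest-frame grid (Angstrom)
true_template = 1.0 + np.sin(rest_wave / 1500.0)   # toy "true" SED shape

# Top-hat filters in the observed frame (centres in Angstrom).
filter_centres = np.array([3500.0, 4500.0, 6000.0, 8000.0])
filter_width = 1000.0

def design_row(z, centre):
    """Weights mapping rest-frame template bins to one broadband flux."""
    obs_wave = rest_wave * (1.0 + z)               # redshifted bin positions
    w = (np.abs(obs_wave - centre) < filter_width / 2.0).astype(float)
    return w / w.sum() if w.sum() > 0 else w

# Simulated training set: spectroscopic redshifts plus noisy photometry.
redshifts = rng.uniform(0.0, 1.0, size=60)
A = np.vstack([design_row(z, c) for z in redshifts for c in filter_centres])
flux = A @ true_template + rng.normal(0.0, 0.01, size=A.shape[0])

# Least-squares reconstruction of the template from photometry alone.
recovered, *_ = np.linalg.lstsq(A, flux, rcond=None)

# The recovered template reproduces the observed photometry to the noise level.
rms = float(np.sqrt(np.mean((A @ recovered - flux) ** 2)))
```

Rest-frame bins never covered by any filter at any redshift remain unconstrained (the minimum-norm solution sets them to zero), which is why a range of redshifts in the training set matters.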

    Decomposing data sets into skewness modes

    We derive the nonlinear equations satisfied by the coefficients of linear combinations that maximize their skewness when their variance is constrained to take a specific value. To solve these nonlinear equations numerically, we develop a gradient-type flow that preserves the constraint. In combination with the Karhunen-Lo\`eve decomposition this leads to a set of orthogonal modes with maximal skewness. For illustration we apply these techniques to atmospheric data; in this case the maximal-skewness modes correspond to strongly localized atmospheric flows. We also show how these ideas can be extended, for example to maximal-flatness modes.
    Comment: Submitted for publication, 12 pages, 4 figures
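A discrete stand-in for the constrained gradient flow can be written in a few lines: ascend the gradient of the third moment of a linear combination, re-imposing the unit-variance constraint after each step. The data, step size, and heuristic initialization below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Sketch of a constrained gradient ascent for a maximal-skewness mode:
# maximize E[(Xw)^3] over w subject to var(Xw) = 1.

rng = np.random.default_rng(1)
n = 20000

# Column 0 is strongly skewed (centered exponential, skewness ~ 2);
# the other columns are symmetric Gaussians with zero skewness.
X = np.column_stack([
    rng.exponential(1.0, n) - 1.0,
    rng.normal(0.0, 1.0, n),
    rng.normal(0.0, 1.0, n),
])
X -= X.mean(axis=0)

def skewness(y):
    return np.mean(y ** 3) / np.mean(y ** 2) ** 1.5

def max_skew_direction(X, steps=300, lr=0.1):
    # heuristic start (an illustrative choice): per-column third moments
    w = (X ** 3).mean(axis=0)
    w = w / np.std(X @ w)
    for _ in range(steps):
        y = X @ w
        grad = 3.0 * (X.T @ (y ** 2)) / len(y)   # gradient of E[(Xw)^3]
        w = w + lr * grad
        w = w / np.std(X @ w)                    # re-impose var(Xw) = 1
    return w

w = max_skew_direction(X)
s = skewness(X @ w)   # approaches the skewness of the exponential column
```

The flow concentrates all weight on the skewed column, since any admixture of a symmetric component dilutes the third moment without helping the variance constraint.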

    A Robust Classification of Galaxy Spectra: Dealing with Noisy and Incomplete Data

    Over the next few years, new spectroscopic surveys (from the optical surveys of the Sloan Digital Sky Survey and the 2 degree Field survey through to space-based ultraviolet satellites such as GALEX) will provide the opportunity, and the challenge, of understanding how galaxies of different spectral type evolve with redshift. Techniques have been developed to classify galaxies based on their continuum and line spectra. Some of the most promising of these use the Karhunen-Loeve transform (or Principal Component Analysis) to separate galaxies into distinct classes. Their limitation has been the assumption that the spectral coverage and quality of the spectra are constant for all galaxies within a given sample. In this paper we develop a general formalism that accounts for missing data within the observed spectra (such as the removal of sky lines, or the sampling of different intrinsic rest-wavelength ranges due to the redshift of a galaxy). We demonstrate that by correcting for these gaps we can recover an almost redshift-independent classification scheme. From this classification we can derive an optimal interpolation that reconstructs the underlying galaxy spectral energy distributions in the regions of missing data. This provides a simple and effective mechanism for building galaxy spectral energy distributions directly from data that may be noisy, incomplete, or drawn from a number of different sources.
    Comment: 20 pages, 8 figures. Accepted for publication in A
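The gap-correction idea can be sketched with a simplified version of the optimal interpolation: fit the eigenbasis coefficients using only the observed pixels, then evaluate the expansion across the gap. The toy spectra and mask below are illustrative assumptions:

```python
import numpy as np

# Sketch: repair a masked region of a spectrum by fitting eigenbasis
# coefficients to the observed pixels only, then using the expansion
# to predict the missing pixels.

rng = np.random.default_rng(2)
n_wave, n_gal = 100, 300
wave = np.linspace(0.0, 1.0, n_wave)

# Toy spectra: random mixtures of two smooth shapes plus noise.
basis_true = np.vstack([np.sin(2 * np.pi * wave), np.cos(4 * np.pi * wave)])
coeffs = rng.normal(size=(n_gal, 2))
spectra = coeffs @ basis_true + 0.01 * rng.normal(size=(n_gal, n_wave))

# Eigenbasis from the sample (PCA via SVD of mean-subtracted spectra).
mean_spec = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)
eigvecs = Vt[:2]                          # keep two principal components

def reconstruct(spec, mask):
    """Fill masked pixels by least-squares fit to the observed pixels."""
    E = eigvecs[:, mask].T                # (n_observed, n_components)
    a, *_ = np.linalg.lstsq(E, spec[mask] - mean_spec[mask], rcond=None)
    return mean_spec + a @ eigvecs

# Mask a contiguous "sky line" region of one spectrum and repair it.
spec = spectra[0]
mask = np.ones(n_wave, dtype=bool)
mask[40:55] = False
repaired = reconstruct(spec, mask)
gap_err = float(np.abs(repaired[~mask] - spec[~mask]).max())
```

Because the coefficients are constrained by every observed pixel, the reconstruction across the gap is accurate to roughly the noise level as long as the eigenbasis spans the true spectral variations.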

    Principal manifolds and graphs in practice: from molecular biology to dynamical systems

    We present several applications of non-linear data modeling using principal manifolds and principal graphs constructed with the metaphor of elasticity (the elastic principal graph approach). These approaches generalize Kohonen's self-organizing maps, a class of artificial neural networks. Through several examples we show the advantages of non-linear objects for data approximation in comparison with linear ones. We propose four numerical criteria for comparing linear and non-linear mappings of datasets into lower-dimensional spaces. The examples are drawn from comparative political science, from the analysis of high-throughput data in molecular biology, and from the analysis of dynamical systems.
    Comment: 12 pages, 9 figures

    Spectral Templates from Multicolor Redshift Surveys

    Understanding how the physical properties of galaxies (e.g. their spectral type or age) evolve as a function of redshift relies on having an accurate representation of galaxy spectral energy distributions. While it has been known for some time that galaxy spectra can be reconstructed from a handful of orthogonal basis templates, the underlying basis is poorly constrained. The limiting factor has been the lack of large samples of galaxies (covering a wide range in spectral type) with high signal-to-noise spectrophotometric observations. To alleviate this problem, we introduce a new technique for reconstructing galaxy spectral energy distributions directly from samples of galaxies with broadband photometric data and spectroscopic redshifts. Exploiting the statistical approach of the Karhunen-Loeve expansion, our iterative training procedure progressively improves the eigenbasis so that it agrees better with the photometry. We demonstrate the utility of this approach by applying the improved spectral energy distributions to the estimation of photometric redshifts for the HDF sample of galaxies. We find that within a small number of iterations the dispersion of the photometric-redshift estimator (a comparison between predicted and measured redshifts) can decrease by up to a factor of 2.
    Comment: 25 pages, 9 figures, LaTeX AASTeX, accepted for publication in A
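The iterative training loop can be sketched as an alternating least-squares procedure (a simplified stand-in for the paper's eigenbasis repair; the filter-response matrix, grids, and noise are illustrative assumptions): fit per-galaxy coefficients in the current basis, then update the basis to better reproduce the photometry, and repeat.

```python
import numpy as np

# Sketch of iterative basis training against photometry: alternate
# between fitting galaxy coefficients and refitting the basis.  Bands
# are modeled as block averages of a wavelength grid.

rng = np.random.default_rng(3)
n_wave, n_band, n_gal, n_comp = 60, 6, 200, 2

# Response matrix R: each band averages a block of 10 wavelength pixels.
R = np.zeros((n_band, n_wave))
for b in range(n_band):
    R[b, 10 * b:10 * (b + 1)] = 0.1

wave = np.linspace(0.0, 1.0, n_wave)
true_basis = np.vstack([np.ones(n_wave), np.sin(2 * np.pi * wave)])
coeffs_true = rng.normal(size=(n_gal, n_comp))
photometry = (coeffs_true @ true_basis @ R.T
              + 0.01 * rng.normal(size=(n_gal, n_band)))

# Start from a poor random basis and iterate.
basis = rng.normal(size=(n_comp, n_wave))
for _ in range(20):
    # (a) fit each galaxy's coefficients in the current basis
    P = basis @ R.T                               # basis seen through the bands
    coeffs = photometry @ np.linalg.pinv(P)       # (n_gal, n_comp)
    # (b) update the basis to best reproduce the photometry
    basis_b, *_ = np.linalg.lstsq(coeffs, photometry, rcond=None)
    basis = basis_b @ np.linalg.pinv(R.T)         # lift back to the wavelength grid

rms = float(np.sqrt(np.mean((coeffs @ basis @ R.T - photometry) ** 2)))
```

After a handful of iterations the basis reproduces the photometry down to the noise level, mirroring the abstract's observation that a small number of iterations suffices.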

    An extension of Wiener integration with the use of operator theory

    Using the tensor product of Hilbert spaces and a diagonalization procedure from operator theory, we derive an approximation formula for a general class of stochastic integrals. We further establish a generalized Fourier expansion for these stochastic integrals. Our extension circumvents some of the limitations of the more widely used stochastic integral due to Wiener and Ito, i.e., stochastic integration with respect to Brownian motion. Finally, we discuss the connection between the two approaches, as well as a priori estimates and applications.
    Comment: 13 pages
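A concrete instance of the kind of series approximation being generalized is the classical Karhunen-Loeve (Fourier) expansion of Brownian motion on [0, 1], which turns the Wiener integral of a deterministic integrand into a sum of independent Gaussians weighted by deterministic Fourier coefficients. The integrand and truncation below are illustrative choices:

```python
import numpy as np

# Sketch: approximate I = \int_0^1 f(t) dB_t by the truncated series
#   I \approx sum_k Z_k * sqrt(2) * \int_0^1 f(t) cos(omega_k t) dt,
# with omega_k = (k - 1/2) * pi and Z_k i.i.d. N(0, 1).  The Ito isometry
# gives Var(I) = \int_0^1 f(t)^2 dt, which we check by Monte Carlo.

rng = np.random.default_rng(4)

f = lambda t: t                               # integrand; here Var(I) = 1/3
n_terms, n_samples, n_quad = 200, 50000, 4000

t = (np.arange(n_quad) + 0.5) / n_quad        # midpoint quadrature grid
omega = (np.arange(1, n_terms + 1) - 0.5) * np.pi

# deterministic coefficients c_k = sqrt(2) * \int_0^1 f(t) cos(omega_k t) dt
c = np.array([np.sqrt(2.0) * np.mean(f(t) * np.cos(w * t)) for w in omega])

Z = rng.normal(size=(n_samples, n_terms))     # one row per Monte Carlo sample
I = Z @ c                                     # samples of the Wiener integral

var_mc = float(I.var())                       # close to 1/3 by the Ito isometry
```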

    On dimension reduction in Gaussian filters

    A priori dimension reduction is a widely adopted technique for reducing the computational complexity of stationary inverse problems. In this setting, the solution of an inverse problem is parameterized by a low-dimensional basis that is often obtained from the truncated Karhunen-Loeve expansion of the prior distribution. For high-dimensional inverse problems equipped with smoothing priors, this technique can lead to drastic reductions in parameter dimension and significant computational savings. In this paper, we extend the concept of a priori dimension reduction to non-stationary inverse problems, in which the goal is to sequentially infer the state of a dynamical system. Our approach proceeds in an offline-online fashion. We first identify a low-dimensional subspace in the state space before solving the inverse problem (the offline phase), using either the method of "snapshots" or regularized covariance estimation. This subspace is then used to reduce the computational complexity of various filtering algorithms (including the Kalman filter, extended Kalman filter, and ensemble Kalman filter) within a novel subspace-constrained Bayesian prediction-and-update procedure (the online phase). We demonstrate the performance of this dimension reduction approach on several numerical examples. In some test cases, it reduces the dimensionality of the original problem by orders of magnitude and yields up to two orders of magnitude in computational savings.
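The online phase can be sketched for the linear-Gaussian case: project the state onto a fixed basis U, so x is approximated by U z, and run an ordinary Kalman filter in the reduced coordinates z. In this toy model the subspace is known exactly by construction (rather than estimated from snapshots or a regularized covariance, as in the paper), and the dynamics, noise levels, and observation pattern are illustrative assumptions:

```python
import numpy as np

# Sketch of a subspace-constrained Kalman filter: the full n-dimensional
# state x evolves inside an r-dimensional subspace span(U), and all
# filter algebra is carried out in the reduced coordinates z (x = U z).

rng = np.random.default_rng(5)
n, r, T = 50, 5, 60                            # full dim, reduced dim, steps

U, _ = np.linalg.qr(rng.normal(size=(n, r)))   # offline basis (orthonormal)
Ad, _ = np.linalg.qr(rng.normal(size=(r, r)))  # rotation in the subspace
Ad = 0.9 * Ad                                  # contracting reduced dynamics
A = U @ Ad @ U.T                               # full-state dynamics, rank r

H = np.eye(n)[::5]                             # observe every 5th component
Hr = H @ U                                     # reduced observation operator
Qr = 1e-4 * np.eye(r)                          # reduced process noise
Rn = 1e-2 * np.eye(H.shape[0])                 # observation noise

x_true = U @ rng.normal(size=r)                # truth starts in the subspace
z_est, P = np.zeros(r), np.eye(r)
errs = []
for _ in range(T):
    # simulate the truth and a noisy observation
    x_true = A @ x_true + U @ rng.normal(scale=1e-2, size=r)
    y = H @ x_true + rng.normal(scale=1e-1, size=H.shape[0])
    # predict in reduced coordinates (r x r algebra instead of n x n)
    z_est = Ad @ z_est
    P = Ad @ P @ Ad.T + Qr
    # update in reduced coordinates
    S = Hr @ P @ Hr.T + Rn
    K = P @ Hr.T @ np.linalg.inv(S)
    z_est = z_est + K @ (y - Hr @ z_est)
    P = (np.eye(r) - K @ Hr) @ P
    errs.append(np.linalg.norm(U @ z_est - x_true))

late_err = float(np.mean(errs[-10:]))
```

All covariance matrices are r x r rather than n x n, which is the source of the computational savings the abstract describes.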

    A multifrequency analysis of radio variability of blazars

    We have carried out a multifrequency analysis of the radio variability of blazars, exploiting the data obtained during the extensive monitoring programs carried out at the University of Michigan Radio Astronomy Observatory (UMRAO; 4.8, 8, and 14.5 GHz) and at the Metsahovi Radio Observatory (22 and 37 GHz). Two different techniques detect, in the Metsahovi light curves, evidence of periodicity at both frequencies for 5 sources (0224+671, 0945+408, 1226+023, 2200+420, and 2251+158). For the last three sources, consistent periods are also found at the three UMRAO frequencies, and the Scargle (1982) method yields an extremely low false-alarm probability. On the other hand, the 22 and 37 GHz periodicities of 0224+671 and 0945+408 (which were less extensively monitored at Metsahovi and for which we obtain a significant false-alarm probability) are not confirmed by the UMRAO database, where there are instead indications of ill-defined periods about a factor of two longer. We have also investigated the variability index, the structure function, and the distribution of intensity variations of the most extensively monitored sources. We find a statistically significant difference in the distribution of the variability index for BL Lac objects compared to flat-spectrum radio quasars (FSRQs), in the sense that the former are more variable. For both populations the variability index steadily increases with increasing frequency. The distribution of intensity variations also broadens with increasing frequency, approaching a log-normal shape at the highest frequencies. Finally, we find that variability enhances the high-frequency counts of extragalactic radio sources at bright flux densities, such as those of the WMAP and Planck surveys, by 20-30%.
    Comment: A&A accepted. 12 pages, 16 figures
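The Scargle (1982) periodogram used for the false-alarm estimates can be implemented directly for unevenly sampled light curves. The data below are synthetic (not UMRAO or Metsahovi measurements), and the false-alarm probability uses Scargle's approximation with M independent frequencies:

```python
import numpy as np

# Sketch of a Lomb-Scargle periodicity search on an unevenly sampled,
# synthetic "light curve" with a known 5-year period.

rng = np.random.default_rng(6)

t = np.sort(rng.uniform(0.0, 25.0, 300))       # observation epochs (years)
true_period = 5.0
y = np.sin(2 * np.pi * t / true_period) + 0.5 * rng.normal(size=len(t))
y = y - y.mean()

def lomb_scargle(t, y, freqs):
    """Normalised Lomb-Scargle power P(omega) following Scargle (1982)."""
    var = y.var()
    power = []
    for f in freqs:
        w = 2 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        p = (np.sum(y * c) ** 2 / np.sum(c ** 2)
             + np.sum(y * s) ** 2 / np.sum(s ** 2)) / (2 * var)
        power.append(p)
    return np.array(power)

freqs = np.linspace(0.02, 1.0, 500)            # trial frequencies (cycles/yr)
power = lomb_scargle(t, y, freqs)
best_period = 1.0 / freqs[np.argmax(power)]

# Scargle's false-alarm probability for the highest peak,
# assuming M independent trial frequencies (an approximation).
z, M = power.max(), len(freqs)
fap = 1.0 - (1.0 - np.exp(-z)) ** M
```

A strong peak at a genuine period drives the false-alarm probability toward zero, while noise-only light curves yield peaks with fap of order unity.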