13 research outputs found

    CNN photometric redshifts in the SDSS at r ≤ 20

    Full text link
    We release photometric redshifts, reaching z ~ 0.7, for ~14M galaxies at r ≤ 20 in the 11,500 deg² of the SDSS north and south galactic caps. These estimates were inferred from a convolutional neural network (CNN) trained on ugriz stamp images of galaxies labelled with a spectroscopic redshift from the SDSS, GAMA and BOSS surveys. Representative training sets of ~370k galaxies were constructed from the much larger combined spectroscopic data to limit biases, particularly those arising from the over-representation of Luminous Red Galaxies. The CNN outputs a redshift classification that offers all the benefits of a well-behaved PDF, with a width efficiently signaling unreliable estimates due to poor photometry or stellar sources. The dispersion, mean bias and rate of catastrophic failures of the median point estimate are of order σ_MAD = 0.014, 0.0015, and η(|Δz_norm| > 0.05) = 4% on a representative test sample at r < 19.8, outperforming currently published estimates. The distributions in narrow intervals of magnitude of the redshifts inferred for the photometric sample are in good agreement with the results of tomographic analyses. The inferred redshifts also match the photometric redshifts of the redMaPPer galaxy clusters for the probable cluster members. The CNN input and output are available at: https://deepdip.iap.fr/treyer+2023. Comment: Submitted to MNRAS
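    The quoted point-estimate metrics can be reproduced from a spectroscopic test sample using the conventional photo-z definitions (Δz_norm = (z_phot − z_spec)/(1 + z_spec); σ_MAD is the normalised median absolute deviation; η is the fraction with |Δz_norm| > 0.05). A minimal sketch assuming these standard conventions, not code released by the authors:

    ```python
    import numpy as np

    def photoz_metrics(z_phot, z_spec, eta_cut=0.05):
        """Standard photo-z point-estimate metrics (conventional
        definitions assumed, not taken from the paper's code)."""
        z_phot, z_spec = np.asarray(z_phot), np.asarray(z_spec)
        dz = (z_phot - z_spec) / (1.0 + z_spec)          # normalised residual
        bias = dz.mean()                                 # mean bias <Δz_norm>
        # MAD scaled by 1.4826 so it matches sigma for Gaussian residuals
        sigma_mad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
        eta = np.mean(np.abs(dz) > eta_cut)              # catastrophic-failure rate
        return bias, sigma_mad, eta
    ```

    Evaluated on a representative spectroscopic test set, these three numbers correspond to the σ_MAD, mean bias and η values quoted in the abstract.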

    Multimodality for improved CNN photometric redshifts

    No full text
    Photometric redshift estimation plays a crucial role in modern cosmological surveys for studying the universe's large-scale structures and the evolution of galaxies. Deep learning has emerged as a powerful method to produce accurate photometric redshift estimates from multiband images of galaxies. Here, we introduce a multimodal approach consisting of the parallel processing of several subsets of prior image bands, the outputs of which are then merged for further processing through a convolutional neural network (CNN). We evaluate the performance of our method using three surveys: the Sloan Digital Sky Survey (SDSS), the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS), and the Hyper Suprime-Cam (HSC). By improving the model's ability to capture information embedded in the correlation between different bands, our technique surpasses state-of-the-art photometric redshift precision. We find that the positive gain does not depend on the specific architecture of the CNN and that it increases with the number of photometric filters available.
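    The multimodal scheme described above — parallel branches over subsets of the photometric bands, merged before a shared classification head — can be illustrated with a toy numpy sketch. The band subsets, feature widths and number of redshift bins below are illustrative assumptions with untrained random weights; the abstract does not specify the actual architecture:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def branch(x, w):
        """One modality branch: 1x1 'convolution' (linear mix of that
        subset's bands at every pixel) + ReLU + global average pooling."""
        h = np.maximum(np.einsum('bhwc,cf->bhwf', x, w), 0.0)
        return h.mean(axis=(1, 2))                    # (batch, features)

    def multimodal_photoz(images, subsets, n_bins=10):
        """images: (batch, H, W, 5) ugriz stamps; subsets: groups of band
        indices processed in parallel, then merged into one redshift
        classification (a PDF over n_bins redshift bins)."""
        feats = []
        for idx in subsets:
            w = rng.normal(size=(len(idx), 8))        # untrained toy weights
            feats.append(branch(images[..., idx], w))
        merged = np.concatenate(feats, axis=1)        # merge parallel branches
        w_out = rng.normal(size=(merged.shape[1], n_bins))
        logits = merged @ w_out
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        return p / p.sum(axis=1, keepdims=True)       # per-galaxy redshift PDF

    imgs = rng.normal(size=(4, 16, 16, 5))            # 4 fake ugriz stamps
    pdf = multimodal_photoz(imgs, subsets=[[0, 1, 2], [2, 3, 4]])
    ```

    The point of the design is that each branch can specialise on the correlations within its own band subset before the merged representation is processed further.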
