35 research outputs found

    Unsupervised feature-learning for galaxy SEDs with denoising autoencoders

    With the increasing number of deep multi-wavelength galaxy surveys, the spectral energy distribution (SED) of galaxies has become an invaluable tool for studying the formation of their structures and their evolution. In this context, standard analysis relies on simple spectro-photometric selection criteria based on a few SED colors. While this fully supervised classification has already yielded clear achievements, it is not optimal for extracting the relevant information from the data. In this article, we propose to employ recent advances in machine learning, and more precisely in feature learning, to derive a data-driven diagram. We show that the proposed approach based on denoising autoencoders recovers the bi-modality in the galaxy population in an unsupervised manner, without using any prior knowledge of galaxy SED classification. This technique is compared to principal component analysis (PCA) and to standard color/color representations. In addition, preliminary results illustrate that it captures extra physically meaningful information, such as redshift dependence, galaxy mass evolution, and variation in the specific star formation rate (sSFR). PCA also yields an unsupervised representation with physical properties, such as mass and sSFR, although this representation separates out other characteristics (bimodality, redshift evolution) less well than denoising autoencoders.
    Comment: 11 pages and 15 figures. To be published in A&A.
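
    For readers unfamiliar with the technique, a minimal sketch of a denoising autoencoder of the kind described, here applied to vectors of multi-band photometry, might look as follows (PyTorch; the layer sizes, noise level, and data are illustrative assumptions, not the paper's configuration):

        import torch
        import torch.nn as nn

        class DenoisingAutoencoder(nn.Module):
            """Minimal denoising autoencoder: corrupt the input, reconstruct the clean version."""
            def __init__(self, n_bands=10, n_latent=2):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(n_bands, 32), nn.ReLU(),
                    nn.Linear(32, n_latent),          # low-dimensional "diagram" coordinates
                )
                self.decoder = nn.Sequential(
                    nn.Linear(n_latent, 32), nn.ReLU(),
                    nn.Linear(32, n_bands),
                )

            def forward(self, x, noise_std=0.1):
                x_noisy = x + noise_std * torch.randn_like(x)  # corrupt the SED vector
                z = self.encoder(x_noisy)
                return self.decoder(z), z

        # Training loop sketch: reconstruct the *clean* SED from the corrupted one.
        model = DenoisingAutoencoder()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        seds = torch.randn(256, 10)  # stand-in for normalized multi-band photometry
        for _ in range(100):
            recon, z = model(seds)
            loss = nn.functional.mse_loss(recon, seds)
            opt.zero_grad(); loss.backward(); opt.step()
        # After training, the 2-D codes `z` play the role of the data-driven diagram.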

    Convergent ADMM Plug and Play PET Image Reconstruction

    In this work, we investigate hybrid PET reconstruction algorithms that couple a model-based variational reconstruction with a separately learnt deep neural network (DNN) operator in an ADMM Plug-and-Play framework. Following recent results in optimization, fixed-point convergence of the scheme can be achieved by enforcing an additional constraint on the network parameters during learning. We propose such an ADMM algorithm and show, in a realistic [18F]-FDG synthetic brain exam, that the proposed scheme indeed converges experimentally to a meaningful fixed point. When the proposed constraint is not enforced during learning of the DNN, the ADMM algorithm was observed experimentally not to converge.
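
    As a rough illustration of the Plug-and-Play ADMM structure (not the paper's exact algorithm: the PET data term is Poisson, whereas this sketch uses a quadratic stand-in with a closed-form proximal step, and `denoiser` is a placeholder for the separately trained network):

        import numpy as np

        def denoiser(v):
            """Placeholder for the learnt DNN operator; a mild smoothing as a stand-in."""
            return 0.9 * v + 0.1 * v.mean()

        def pnp_admm(y, A, rho=1.0, n_iter=50):
            """Plug-and-Play ADMM with a quadratic data term ||Ax - y||^2 / 2 as a
            stand-in for the Poisson PET log-likelihood (whose prox needs an inner solver)."""
            n = A.shape[1]
            x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
            AtA, Aty = A.T @ A, A.T @ y
            H = np.linalg.inv(AtA + rho * np.eye(n))   # prox of the quadratic data term
            for _ in range(n_iter):
                x = H @ (Aty + rho * (z - u))          # x-update: data-fidelity prox
                z = denoiser(x + u)                    # z-update: network replaces a prior's prox
                u = u + x - z                          # dual update
            return x

        # Tiny usage example with a random system matrix (illustrative only):
        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 20)); x_true = rng.random(20)
        x_rec = pnp_admm(A @ x_true, A)

    Fixed-point convergence hinges on properties of the denoiser, e.g. a nonexpansiveness-type constraint enforced on the network weights during training, which is the role of the additional learning constraint described above.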

    GREAT3 results I: systematic errors in shear estimation and the impact of real galaxy morphology

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ∼1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
    Comment: 32 pages + 15 pages of technical appendices; 28 figures; submitted to MNRAS; latest version has minor updates in presentation of 4 figures, no changes in content or conclusion.
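
    The linear bias model referenced in the closing sentence is conventionally written as follows (the standard parametrization from the shear-calibration literature; the paper's notation may differ slightly):

        \hat{g}_i = (1 + m_i)\, g_i^{\mathrm{true}} + c_i, \qquad c_i = \alpha\, e_i^{\mathrm{PSF}},

    where \hat{g}_i is the estimated shear component, m_i and c_i are the multiplicative and additive biases, and the additive term scales linearly with the PSF ellipticity e_i^{\mathrm{PSF}}, which is the dependence the abstract describes.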

    Exploitation of spatial and temporal correlations in positron emission tomography [Exploitation de corrélations spatiales et temporelles en tomographie par émission de positrons]

    In this thesis we propose, implement, and evaluate algorithms that improve spatial resolution in reconstructed images and reduce data noise in positron emission tomography (PET) imaging. These algorithms were developed for a high-resolution tomograph (HRRT) and applied to brain imaging, but can be used for other tomographs or studies. We first developed an iterative reconstruction algorithm including an experimentally measured, stationary and isotropic resolution model in image space. We evaluated the impact of such a resolution model in Monte Carlo simulations, physical phantom experiments, and two clinical studies by comparing our algorithm with a reference reconstruction algorithm. This study suggests that biases due to partial volume effects are reduced, in particular in the clinical studies, and that better spatial and temporal correlations are obtained at the voxel level. However, other methods are needed to further reduce data noise. We then proposed a maximum a posteriori denoising algorithm for dynamic data that temporally denoises either raw data (sinograms) or reconstructed images. The prior models the wavelet-basis coefficients of the underlying noise-free signals (images or sinograms). We compared this technique with a reference denoising method on replicated simulations, which illustrates the potential benefits of our sinogram-denoising approach.
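
    A minimal sketch of wavelet-domain MAP denoising in the spirit of the second contribution (our illustration, assuming Gaussian noise and a Laplacian prior on wavelet coefficients, which makes the MAP estimate a soft threshold; the thesis's actual prior and estimator may differ):

        import numpy as np
        import pywt

        def wavelet_map_denoise(signal, wavelet="db4", level=3, noise_std=1.0, prior_scale=1.0):
            """MAP estimate under i.i.d. Gaussian noise and a Laplacian prior on the
            wavelet coefficients; the MAP solution is soft thresholding with
            threshold noise_std**2 / prior_scale."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            thr = noise_std ** 2 / prior_scale
            denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(denoised, wavelet)

        # Example: denoise one time-activity curve (e.g. a row of a dynamic sinogram).
        t = np.linspace(0, 60, 256)
        clean = np.exp(-t / 20.0)
        noisy = clean + 0.05 * np.random.randn(t.size)
        recovered = wavelet_map_denoise(noisy, noise_std=0.05, prior_scale=0.5)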

    On the local geometry of the PET reconstruction problem

    Intrinsic uncertainties related to the ill-posedness of the PET image reconstruction inverse problem are investigated in this work. These uncertainties can lead to instabilities in the reconstructed images, in particular when deep learning approaches are employed. We propose a framework, based on Bayesian hypothesis testing, that defines various distinguishability measures between PET images, and we conduct a local analysis of such a measure in a neighborhood of a reference image. We test this approach numerically in a synthetic experiment simulating a Biograph TruePoint TrueV acquisition, using a 3D brain PET phantom derived from a [18F]-FDG exam with 100 extracted anatomo-functional regions. Our analysis highlights the key factors impacting the detectability of variations in a region of interest and exhibits concrete examples of directions along which variations may not be detectable and instabilities could appear in PET image reconstruction. In addition, it allows us to quantify the role played by the injected radiotracer dose, and to define thresholds below which clinically significant variations cannot be detected.
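
    One natural way to instantiate such a distinguishability measure under the Poisson noise model of PET (our gloss; the paper's exact Bayesian test statistic may differ) is the Kullback-Leibler divergence between the expected sinograms of two images x_1 and x_2 under the system matrix A:

        D(x_1 \| x_2) = \sum_i \left[ (A x_1)_i \log \frac{(A x_1)_i}{(A x_2)_i} - (A x_1)_i + (A x_2)_i \right].

    Directions \delta for which D(x \| x + \delta) stays below the statistical resolution of the data are precisely those along which variations cannot be detected.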

    Learning with fixed point condition for convergent PnP PET reconstruction

    This work explores plug-and-play (PnP) algorithms for PET reconstruction, combining deep learning with model-based variational methods. We aim to integrate classical convolutional architectures into algorithms such as ADMM and Forward-Backward (FB) while ensuring convergence and maintaining fixed-point control. We focus on the scenario where only high- and low-count reconstructed PET images are available for training our networks. Experimental results demonstrate that the proposed methods consistently reach fixed points with high likelihood and low mean squared error, showcasing the potential of convergent plug-and-play techniques for PET reconstruction.
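
    The generic Forward-Backward PnP iteration alluded to here reads (a standard formulation from the PnP convergence literature, not necessarily the paper's exact statement):

        x^{k+1} = D_\theta\left( x^k - \gamma \nabla f(x^k) \right),

    where f is the negative Poisson log-likelihood of the PET data, \gamma is a step size, and D_\theta is the trained network used in place of a proximal operator. If training constrains D_\theta so that the composed operator is contractive (or averaged) for a suitable \gamma, the iterates converge to a fixed point x^\star = D_\theta(x^\star - \gamma \nabla f(x^\star)); this is the fixed-point control the abstract refers to.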

    On the local geometry of the PET reconstruction problem

    Intrinsic uncertainties related to the ill-posedness of the PET reconstruction problem are investigated in this work. These uncertainties can lead to artifacts in the reconstructed images, in particular when deep learning approaches are employed. We propose a framework that defines a distinguishability measure between PET images, and we carry out a local analysis of such a measure in a neighborhood of a reference image. This approach is tested numerically in a synthetic experiment using a 3D brain PET phantom derived from a measured [18F]-FDG exam with 100 extracted anatomo-functional regions. Our analysis highlights the key factors impacting the detectability of variations and exhibits concrete examples of clinically meaningful directions along which variations may not be detectable. In addition, we quantitatively analyze the role played by the injected radiotracer dose and show that low-dose scenarios are particularly prone to reconstruction artifacts.