
    Horseshoe regularization for wavelet-based lensing inversion

    Gravitational lensing, a phenomenon in astronomy, occurs when the gravitational field of a massive object, such as a galaxy or a black hole, bends the path of light from a distant object behind it. This bending results in a distortion or magnification of the distant object's image, often seen as arcs or rings surrounding the foreground object. The Starlet wavelet transform offers a robust approach to representing galaxy images sparsely. This technique breaks down an image into wavelet coefficients at various scales and orientations, effectively capturing both large-scale structures and fine details. The horseshoe prior has emerged as a highly effective Bayesian technique for promoting sparsity and regularization in statistical modeling. It aggressively shrinks negligible values while preserving important features, making it particularly useful in situations where the reconstruction of an original image from limited noisy observations is inherently challenging. The main objective of this thesis is to apply sparse regularization techniques, particularly the horseshoe prior, to reconstruct the background source galaxy from gravitationally lensed images. In doing so, it tackles the challenging inverse problem of reconstructing lensed galaxy images. Our proposed methodology applies the horseshoe prior to the wavelet coefficients of lensed galaxy images. By exploiting the sparsity of the wavelet representation and the noise-suppressing behavior of the horseshoe prior, we achieve well-regularized reconstructions that reduce noise and artifacts while preserving structural details. Experiments conducted on simulated lensed galaxy images demonstrate lower mean squared error and higher structural similarity with the horseshoe prior compared to alternative methods, validating its efficacy as a sparse modeling technique.
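
    The core computation is a coefficient-wise shrinkage in the wavelet domain. The sketch below is a simplified illustration rather than the thesis implementation: it uses PyWavelets' decimated db2 transform as a stand-in for the starlet transform, treats the noise level sigma and the global scale tau as known, evaluates the horseshoe posterior mean of each detail coefficient by numerical integration over the half-Cauchy local scale, and ignores the lensing operator itself (only the shrinkage step is shown).

```python
# Hedged sketch: horseshoe-style shrinkage of wavelet coefficients of a noisy image.
# Assumptions (not from the thesis): db2 wavelet instead of the starlet transform,
# known noise level `sigma` and global scale `tau`, posterior mean by quadrature.
import numpy as np
import pywt

def horseshoe_posterior_mean(y, sigma=1.0, tau=0.1, n_grid=200):
    """E[theta | y] for y ~ N(theta, sigma^2), theta ~ N(0, tau^2 lam^2),
    lam ~ half-Cauchy(0, 1), via the substitution lam = tan(u), u in (0, pi/2)."""
    u = np.linspace(1e-4, np.pi / 2 - 1e-4, n_grid)
    lam = np.tan(u)                                   # half-Cauchy prior becomes uniform in u
    var = sigma ** 2 + (tau * lam) ** 2               # marginal variance of y given lam
    y = np.asarray(y, dtype=float)[..., None]
    w = np.exp(-0.5 * y ** 2 / var) / np.sqrt(var)    # likelihood weight p(y | lam)
    shrink = (tau * lam) ** 2 / var                   # conditional shrinkage factor
    return (w * shrink).sum(-1) / w.sum(-1) * y[..., 0]

def denoise_image(img, sigma, tau=0.05, wavelet="db2", levels=4):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    out = [coeffs[0]]                                 # keep the coarse approximation as-is
    for detail in coeffs[1:]:
        out.append(tuple(horseshoe_posterior_mean(d, sigma, tau) for d in detail))
    rec = pywt.waverec2(out, wavelet)
    return rec[:img.shape[0], :img.shape[1]]          # crop any padding from the transform

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[24:40, 24:40] = 1.0   # toy "source"
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    restored = denoise_image(noisy, sigma=0.2)
    print("MSE noisy:   ", np.mean((noisy - clean) ** 2))
    print("MSE restored:", np.mean((restored - clean) ** 2))
```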

    Exploring the Components of the Universe Through Higher-Order Weak Lensing Statistics

    Our current cosmological model, backed by a large body of evidence from a variety of different cosmological probes (for example, see [1, 2]), describes a Universe composed of around 5% normal baryonic matter, 22% cold dark matter, and 73% dark energy. While many cosmologists accept this so-called concordance cosmology (the ΛCDM cosmological model) as accurate, very little is known about the nature and properties of the dark components of the Universe. Studies of the cosmic microwave background (CMB), combined with other observational evidence from big bang nucleosynthesis, indicate that dark matter is non-baryonic. This supports measurements on galaxy and cluster scales, which found evidence for a large proportion of dark matter. This dark matter appears to be cold and collisionless, apparent only through its gravitational effects.

    LOFAR Sparse Image Reconstruction

    Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased-array interferometer with multiple antennas distributed across Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims. Recent papers have established a clear link between the discrete nature of radio interferometry measurements and the "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Accounting for the instrumental direction-dependent effects (DDE) of a real instrument, we developed and validated a new method based on this framework. Methods. We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) on simulated and real LOFAR data. Results. We show that i) sparse reconstruction performs as well as CLEAN in recovering the flux of point sources; ii) it performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) it provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions. Sparse recovery gives correct photometry on high-dynamic-range and wide-field images and improved, more realistic structures of extended sources (in simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle the DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and the SKA.
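
    For readers unfamiliar with the compressed-sensing formulation, the toy sketch below illustrates the basic mechanism rather than the LOFAR imager itself: the "visibilities" are a random subset of the 2-D Fourier components of a point-source sky, and the image is recovered with ISTA, alternating a gradient step on the data fidelity with a soft-thresholding proximal step that promotes sparsity. The Fourier mask, threshold, and iteration count are illustrative assumptions, and no direction-dependent effects are modelled.

```python
# Hedged sketch: ISTA reconstruction from masked Fourier "visibilities".
# This is a toy compressed-sensing demo, not the LOFAR pipeline; the mask
# density, threshold `lam`, and iteration count are illustrative.
import numpy as np

def ista_reconstruct(vis, mask, lam=0.02, n_iter=300):
    """Approximately solve min_x 0.5*||M F x - vis||^2 + lam*||x||_1.
    F is the unitary 2-D FFT and M the 0/1 sampling mask, so the operator
    norm is at most 1 and a unit step size is safe."""
    x = np.zeros(mask.shape)
    for _ in range(n_iter):
        resid = mask * np.fft.fft2(x, norm="ortho") - vis      # residual in Fourier space
        x = x - np.fft.ifft2(mask * resid, norm="ortho").real  # gradient step (step size 1)
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)      # soft-thresholding prox
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sky = np.zeros((128, 128))
    idx = rng.integers(0, 128, size=(20, 2))                   # 20 point sources
    sky[idx[:, 0], idx[:, 1]] = rng.uniform(0.5, 2.0, size=20)
    mask = rng.random((128, 128)) < 0.3                        # 30% Fourier coverage
    vis = mask * np.fft.fft2(sky, norm="ortho")                # noiseless "visibilities"
    rec = ista_reconstruct(vis, mask)
    print("max absolute error:", np.abs(rec - sky).max())
```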

    A comparative analysis of denoising algorithms for extragalactic imaging surveys

    We present a comprehensive analysis of the performance of noise-reduction ("denoising") algorithms to determine whether they provide advantages in source detection on extragalactic survey images. The methods under analysis are Perona-Malik filtering, the bilateral filter, Total Variation denoising, structure-texture image decomposition, non-local means, wavelets, and block-matching. We tested the algorithms on simulated images of extragalactic fields with resolution and depth typical of the Hubble, Spitzer, and Euclid Space Telescopes, and of ground-based instruments. After choosing the best internal parameter configuration for each method, we assess their performance as a function of resolution, background level, and image type, also testing their ability to preserve the objects' fluxes and shapes. We analyze, in terms of completeness and purity, the catalogs extracted after applying denoising algorithms to a simulated Euclid Wide Survey VIS image and to real H160 (HST) and K-band (HAWK-I) observations of the CANDELS GOODS-South field. Denoising algorithms often outperform the standard approach of filtering with the Point Spread Function (PSF) of the image. Applying structure-texture image decomposition, Perona-Malik filtering, the Total Variation method by Chambolle, or bilateral filtering to the Euclid-VIS image, we obtain catalogs that are both purer and more complete by 0.2 magnitudes than those based on the standard approach. The same result is achieved with the structure-texture image decomposition algorithm applied to the H160 image. The advantage of denoising techniques over PSF filtering increases with increasing depth. Moreover, these techniques better preserve the shapes of the detected objects than PSF smoothing does. Denoising algorithms provide significant improvements in the detection of faint objects and enhance the scientific return of current and future extragalactic surveys.
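
    The sketch below illustrates this style of comparison, assuming scikit-image's implementations of several of the denoisers discussed above (Total Variation by Chambolle, bilateral filtering, non-local means, wavelet shrinkage) and a Gaussian filter as a stand-in for PSF smoothing. The synthetic image, noise level, and per-method parameters are illustrative assumptions rather than the paper's tuned configurations, and the scores are pixel-level metrics rather than catalog completeness and purity.

```python
# Hedged sketch: score several denoisers against a known clean image.
# Parameters and the synthetic scene are illustrative, not the paper's setup.
import numpy as np
from skimage.filters import gaussian
from skimage.metrics import mean_squared_error, structural_similarity
from skimage.restoration import (denoise_bilateral, denoise_nl_means,
                                 denoise_tv_chambolle, denoise_wavelet)

rng = np.random.default_rng(2)
yy, xx = np.mgrid[:128, :128]
clean = np.zeros((128, 128))
for _ in range(15):                                       # faint Gaussian "galaxies"
    cy, cx = rng.uniform(10, 118, size=2)
    clean += rng.uniform(0.2, 1.0) * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
noisy = np.clip(clean + 0.1 * rng.standard_normal(clean.shape), 0, None)  # keep non-negative

methods = {
    "PSF-like Gaussian smoothing": lambda im: gaussian(im, sigma=1.5),
    "Total Variation (Chambolle)": lambda im: denoise_tv_chambolle(im, weight=0.1),
    "Bilateral": lambda im: denoise_bilateral(im, sigma_color=0.1, sigma_spatial=3),
    "Non-local means": lambda im: denoise_nl_means(im, h=0.08, patch_size=5, patch_distance=6),
    "Wavelet shrinkage": lambda im: denoise_wavelet(im, sigma=0.1),
}
data_range = clean.max() - clean.min()
for name, fn in methods.items():
    out = fn(noisy)
    print(f"{name:28s} MSE={mean_squared_error(clean, out):.4f} "
          f"SSIM={structural_similarity(clean, out, data_range=data_range):.3f}")
```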

    Interactive volumetric segmentation for textile micro-tomography data using wavelets and nonlocal means

    This work addresses the segmentation of volumetric images of woven carbon fiber textiles from micro-tomography data. We propose a semi-supervised algorithm to classify carbon fibers that requires sparse input rather than completely labeled images. The main contributions are: (a) the design of effective discriminative classifiers for three-dimensional textile samples, trained on wavelet features, for segmentation; (b) the coupling of this step with nonlocal means as a simple, efficient alternative to the Potts model; and (c) a demonstration that the trained classifier can be reused on diverse samples containing similar content. We evaluate our work by curating test sets of voxels in the absence of a complete ground-truth mask. The algorithm obtains an average F1 score of 0.95 on the test sets and an average F1 score of 0.93 on new samples. We conclude with a discussion of failure cases and propose future directions for the analysis of spatiotemporal high-resolution micro-tomography images.
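
    The sketch below is a simplified two-dimensional illustration of such a pipeline, not the authors' three-dimensional implementation: per-pixel features from a stationary wavelet transform feed a random-forest classifier trained on a handful of labelled pixels, and the resulting probability map is regularised with nonlocal means before thresholding, standing in for the Potts-model alternative mentioned above. The toy image, feature set, classifier, and parameters are all assumptions made for illustration.

```python
# Hedged 2-D sketch of semi-supervised fibre segmentation: wavelet features,
# a classifier trained on sparse labels, and nonlocal-means regularisation.
# All names and parameters here are illustrative assumptions.
import numpy as np
import pywt
from skimage.restoration import denoise_nl_means
from sklearn.ensemble import RandomForestClassifier

def wavelet_features(img, wavelet="haar", levels=3):
    """Per-pixel features from an undecimated (stationary) wavelet transform."""
    coeffs = pywt.swt2(img, wavelet, level=levels)
    feats = [img]
    for cA, (cH, cV, cD) in coeffs:
        feats.extend([cA, np.abs(cH), np.abs(cV), np.abs(cD)])
    return np.stack(feats, axis=-1)                    # shape (H, W, n_features)

rng = np.random.default_rng(3)
img = np.zeros((64, 64))                               # toy "fibres": bright vertical stripes
img[:, 10:14] = img[:, 30:34] = img[:, 50:54] = 1.0
img += 0.3 * rng.standard_normal(img.shape)

labels = np.full(img.shape, -1)                        # -1 = unlabelled
labels[:, 11] = labels[:, 31] = 1                      # a few columns marked "fibre"
labels[:, 0] = labels[:, 20] = 0                       # a few columns marked "background"

X = wavelet_features(img)
train = labels >= 0
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[train], labels[train])
prob = clf.predict_proba(X.reshape(-1, X.shape[-1]))[:, 1].reshape(img.shape)

# Nonlocal-means smoothing of the probability map as a cheap spatial regulariser.
prob_smooth = denoise_nl_means(prob, h=0.1, patch_size=3, patch_distance=5)
segmentation = prob_smooth > 0.5
print("fibre fraction:", segmentation.mean())
```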

    Multiscale vision model for event detection and reconstruction in two-photon imaging data

    Reliable detection of calcium waves in multiphoton imaging data is challenging because of the low signal-to-noise ratio and the unpredictability of the time and location of these spontaneous events. This paper describes our approach to calcium wave detection and reconstruction in two-photon calcium imaging data, based on a modified multiscale vision model: an object detection framework that thresholds wavelet coefficients, builds hierarchical trees of significant coefficients, and then performs nonlinear iterative partial object reconstruction. The framework is discussed in the context of detecting and reconstructing intercellular glial calcium waves. We extend the framework with a different decomposition algorithm and iterative reconstruction of the detected objects. Comparison with several popular state-of-the-art image denoising methods shows that the multiscale vision model performs similarly in denoising but provides a better segmentation of the image into meaningful objects, whereas the other methods need to be combined with dedicated thresholding and segmentation utilities.
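
    A condensed sketch of the central mechanism, significance thresholding of multiscale coefficients followed by labelling of the connected significant support, is given below. It uses differences of Gaussian smoothings as a stand-in for the à trous wavelet transform and omits the interscale trees and iterative reconstruction described above; the scales and threshold are illustrative.

```python
# Hedged sketch of multiscale object detection: build detail planes, keep
# coefficients above k*sigma per scale, and label connected significant regions.
# Differences of Gaussians stand in for the a trous transform used in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter, label

def multiscale_detect(img, n_scales=4, k=3.0):
    planes, smoothed = [], img.astype(float)
    for j in range(n_scales):
        coarser = gaussian_filter(smoothed, sigma=2.0 ** j)
        planes.append(smoothed - coarser)              # detail plane at scale j
        smoothed = coarser
    support = np.zeros(img.shape, dtype=bool)
    for plane in planes:
        mad = np.median(np.abs(plane - np.median(plane)))
        sigma = 1.4826 * mad                           # robust per-scale noise estimate
        support |= plane > k * sigma                   # significant coefficients
    return label(support)                              # (labelled objects, object count)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    yy, xx = np.mgrid[:128, :128]
    clean = sum(np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 18.0)
                for cy, cx in [(30, 40), (80, 90), (100, 20)])
    noisy = clean + 0.05 * rng.standard_normal(clean.shape)
    objects, n_objects = multiscale_detect(noisy)
    print("objects detected:", n_objects)
```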