10 research outputs found
Developing a victorious strategy to the second strong gravitational lensing data challenge
Strong lensing is a powerful probe of the matter distribution in galaxies and clusters and a relevant tool for cosmography. Analyses of strong gravitational lenses with deep learning have become a popular approach due to these astronomical objects' rarity and image complexity. Next-generation surveys will provide more opportunities to derive science from these objects and an increasing data volume to be analysed. However, finding strong lenses is challenging, as their number densities are orders of magnitude below those of galaxies. Therefore, specific strong lensing search algorithms are required to discover the highest number of systems possible with high purity and a low false alarm rate. The need for better algorithms has prompted the development of an open community data science competition named the strong gravitational lensing challenge (SGLC). This work presents the deep learning strategies and methodology used to design the highest-scoring algorithm in the second SGLC (II SGLC). We discuss the approach used for this data set, the choice of a suitable architecture, particularly the use of a network with two branches to work with images at different resolutions, and its optimization. We also discuss the detectability limit, the lessons learned, and prospects for defining a tailor-made architecture for a given survey in contrast to a general one. Finally, we release the models and discuss the best choice to easily adapt the model to a data set representing a survey with a different instrument. This work helps to take a step towards efficient, adaptable, and accurate analyses of strong lenses with deep learning frameworks.
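The two-branch design mentioned above (one branch per image resolution, fused before classification) can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption, not the competition model: the pooling-based "feature extractor", the cutout sizes, the concatenation fusion, and the sigmoid readout are placeholders standing in for convolutional branches.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(img, pool):
    """Toy per-branch feature extractor: average-pool the image, then flatten.
    (A stand-in for a convolutional branch; pool sizes are illustrative.)"""
    h, w = img.shape
    img = img[: h - h % pool, : w - w % pool]
    feats = img.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
    return feats.ravel()

# Hypothetical inputs: a 64x64 high-resolution and a 32x32 low-resolution cutout
hi_res = rng.standard_normal((64, 64))
lo_res = rng.standard_normal((32, 32))

# Each branch reduces its input to the same 8x8 grid; features are then fused
feat = np.concatenate([branch(hi_res, 8), branch(lo_res, 4)])  # 64 + 64 = 128

# Placeholder linear head with a sigmoid "lens probability" output
w_out = rng.standard_normal(feat.size)
score = 1.0 / (1.0 + np.exp(-feat @ w_out))
```

The design choice the abstract alludes to is that each resolution gets its own feature extractor, so neither input has to be resampled to match the other before fusion.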
The probability of galaxy-galaxy strong lensing events in hydrodynamical simulations of galaxy clusters
Meneghetti et al. (2020) recently reported an excess of galaxy-galaxy strong lensing (GGSL) in galaxy clusters compared to expectations from the LCDM cosmological model. Theoretical estimates of the GGSL probability are based on the analysis of numerical hydrodynamical simulations in the LCDM cosmology. We quantify the impact of the numerical resolution and AGN feedback scheme adopted in cosmological simulations on the predicted GGSL probability and determine whether varying these simulation properties can alleviate the gap with observations. We repeat the analysis of Meneghetti et al. (2020) on cluster-size halos simulated with different mass and force resolutions and implementing several independent AGN feedback schemes. We find that improving the mass resolution by factors of ten and twenty-five, while using the same galaxy formation model that includes AGN feedback, does not affect the GGSL probability. We find similar results regarding the choice of gravitational softening. On the contrary, adopting an AGN feedback scheme that is less efficient at suppressing gas cooling and star formation leads to an increase in the GGSL probability by a factor of three to six. However, we notice that such simulations form overly massive subhalos whose contribution to the lensing cross-section would be significant, while their Einstein radii are too large to be consistent with the observations. The primary contributors to the observed GGSL cross-sections are subhalos with smaller masses that are compact enough to become critical for lensing. The population with these required characteristics appears to be absent in simulations.

Comment: 13 pages, 11 figures. Submitted for publication in Astronomy and Astrophysics.
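The argument above hinges on whether subhalos are compact enough to be critical for lensing, for which the Einstein radius sets the characteristic scale. As a hedged sketch, the point-mass Einstein radius is theta_E = sqrt(4GM/c^2 * D_ls/(D_l D_s)); the mass and distances below are illustrative placeholders, not values from the paper.

```python
import math

# Physical constants (SI)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m s^-1
M_SUN = 1.989e30   # solar mass, kg
MPC = 3.086e22     # megaparsec, m

def einstein_radius_arcsec(m_solar, d_l_mpc, d_s_mpc, d_ls_mpc):
    """Point-mass Einstein radius in arcseconds.

    theta_E = sqrt(4 G M / c^2 * D_ls / (D_l * D_s)), angular diameter
    distances to the lens (D_l), source (D_s), and lens-to-source (D_ls).
    """
    theta_rad = math.sqrt(
        4 * G * m_solar * M_SUN * (d_ls_mpc * MPC)
        / (c**2 * (d_l_mpc * MPC) * (d_s_mpc * MPC))
    )
    return theta_rad * (180.0 / math.pi) * 3600.0  # radians -> arcsec

# Illustrative only: a 1e11 M_sun subhalo with lens/source distances of order a Gpc
theta_e = einstein_radius_arcsec(1e11, 1000.0, 2000.0, 1200.0)
```

For galaxy-scale masses this comes out at sub-arcsecond scales, which is why the compactness of the mass within that radius, not just the total subhalo mass, controls whether a subhalo becomes critical for lensing.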
Cosmology from EoR/Cosmic dawn with the SKA
SKA Phase 1 will build upon early detections of the EoR by precursor instruments, such as MWA, PAPER, and LOFAR, and planned instruments, such as HERA, to make the first high signal-to-noise measurements of fluctuations in the 21 cm brightness temperature from both reionization and the cosmic dawn. This will allow both imaging and statistical maps of the 21 cm signal at redshifts z = 6-27 and constrain the underlying cosmology and evolution of the density field. This era includes nearly 60% of the (in principle) observable volume of the Universe and many more linear modes than the CMB, presenting an opportunity for SKA to usher in a new level of precision cosmology. This optimistic picture is complicated by the need to understand and remove the effect of astrophysics, so that systematics rather than statistics will limit constraints. This chapter describes the cosmological, as opposed to astrophysical, information available to SKA Phase 1. Key areas for discussion include: cosmological parameter constraints using 21 cm fluctuations as a tracer of the density field; lensing of the 21 cm signal; constraints on heating via exotic physics such as decaying or annihilating dark matter; the impact of fundamental physics such as non-Gaussianity or warm dark matter on the source population; and constraints on the bulk flows arising from the decoupling of baryons and photons at z = 1000. The chapter explores the path to separating cosmology from 'gastrophysics', for example via velocity space distortions and separation in redshift. We discuss new opportunities for extracting cosmology made possible by the sensitivity of SKA1 and explore the advances achievable with SKA2.
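The redshift range discussed above (EoR and cosmic dawn, roughly z of 6 to 27) maps directly onto observing frequency through the redshifted 21 cm line, nu_obs = nu_rest / (1 + z). A minimal sketch (the helper name is ours):

```python
# Rest-frame frequency of the neutral-hydrogen 21 cm hyperfine line, in MHz
NU_21CM_MHZ = 1420.405751768

def observed_freq_mhz(z):
    """Observed frequency of the redshifted 21 cm line: nu_obs = nu_rest / (1 + z)."""
    return NU_21CM_MHZ / (1.0 + z)

# The EoR/cosmic-dawn redshift range corresponds to low radio frequencies
print(round(observed_freq_mhz(6), 1), round(observed_freq_mhz(27), 1))
# -> 202.9 50.7
```

This is why the high-redshift end of the signal lands at tens of MHz, deep in the low-frequency band that SKA-Low is designed to cover.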