At the crossroads between photometry and time-domain astronomy, light curves
are invaluable data objects for studying distant events and sources of light even when
they cannot be spatially resolved. In particular, the field of exoplanet science has
benefited tremendously from acquired stellar light curves, used to detect and characterise
most of the distant worlds we know today. Yet their analysis is challenged
by the astrophysical and instrumental noise often diluting the signals of interest. For
instance, the detection of shallow dips caused by transiting exoplanets in stellar light
curves typically requires a precision of the order of 1 ppm to 100 ppm in units of
stellar flux, and their very study depends directly on our capacity to correct for
instrumental and stellar trends.
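To give a sense of scale for these ppm-level depths, the fractional flux drop of a transit is roughly the squared radius ratio of planet to star. A minimal sketch for an Earth-Sun analogue (the radii are standard reference values; the script itself is illustrative and not part of the methods presented here):

```python
# Approximate transit depth of an Earth-sized planet crossing a Sun-like star.
# Depth ~ (R_planet / R_star)^2, i.e. the fraction of the stellar disc blocked.
R_EARTH_KM = 6_371.0    # mean Earth radius
R_SUN_KM = 695_700.0    # nominal solar radius

depth = (R_EARTH_KM / R_SUN_KM) ** 2     # fractional flux drop
depth_ppm = depth * 1e6                  # express in parts per million
print(f"{depth_ppm:.0f} ppm")            # ~84 ppm
```

A drop of order 80 ppm sits well within the 1-100 ppm precision range quoted above, which is why uncorrected instrumental or stellar trends of comparable amplitude can easily mask or mimic such a signal.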
The increasing number of light curves acquired by space- and ground-based
telescopes (of the order of billions) opens up the possibility of replacing individual,
parametric, hard-coded processing algorithms with global, efficient, automated
ones. Fortunately, the field of deep learning is also progressing fast, revolutionising
time-series problems and applications. This reinforces the incentive to develop
data-driven approaches hand-in-hand with existing scientific models and expertise.
With the study of exoplanetary transits in focus, I developed automated approaches to learn and correct for the time-correlated noise in and across light curves.
In particular, I present (i) a deep recurrent model trained via a forecasting objective
to detrend individual transit light curves (e.g. from the Spitzer space telescope); (ii)
a Transformer-based model leveraging whole datasets of light curves
(e.g. from large transit surveys) to learn the trend via a masked objective; (iii) a
hybrid and flexible framework to combine neural networks with transit physics