
    PACOME: Optimal multi-epoch combination of direct imaging observations for joint exoplanet detection and orbit estimation

    Exoplanet detection and characterization via direct imaging require high contrast and high angular resolution. Meeting these requirements typically calls for (i) cutting-edge instrumental facilities, (ii) optimized differential imaging to introduce diversity in the signals of the sought-for objects, and (iii) dedicated processing algorithms to further suppress the residual stellar leakage. Substantial effort has gone into designing more efficient post-processing algorithms, but their performance remains limited at short angular separations because each epoch of observations is processed individually, which restricts the available diversity. We propose a new algorithm that combines several observations of the same star, accounting for the Keplerian orbital motion of the sought-for sources across epochs in order to co-add their weak signals constructively. The proposed algorithm, PACOME, integrates an exploration of the plausible orbits within a statistical detection and estimation formalism. It extends to multi-epoch combination the maximum likelihood framework of PACO, a mono-epoch post-processing algorithm. We derive a reliable multi-epoch detection criterion, interpretable both in terms of probability of detection and of false alarm. We tested the proposed algorithm on several datasets obtained with the VLT/SPHERE instrument using IRDIS and IFS. By resorting to injections of synthetic exoplanets, we show that PACOME is able to detect sources that remain undetectable in mono-epoch frameworks. The gain in detection sensitivity scales up to the square root of the number of epochs. We also applied PACOME to a set of observations of HR 8799, a star hosting four known exoplanets, which are detected with very high signal-to-noise ratios. In addition, its implementation is efficient, fast, and fully automated. Comment: Accepted for publication in A&A
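    The square-root-of-the-number-of-epochs gain quoted above follows from an inverse-variance co-addition of per-epoch flux estimates taken along a candidate orbit. The sketch below (plain NumPy with purely illustrative numbers, not the PACOME implementation) shows a source whose per-epoch signal-to-noise ratio of about 2 rises to roughly 2*sqrt(10) ≈ 6 once ten epochs are combined.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-epoch measurements: at each epoch a mono-epoch algorithm
# (e.g. PACO) returns an estimated source flux and its variance at the image
# position predicted by a candidate orbit.  All numbers are illustrative.
n_epochs = 10
true_flux = 1.0
sigma = 0.5                                  # per-epoch noise std: per-epoch S/N ~ 2
fluxes = true_flux + sigma * rng.standard_normal(n_epochs)
variances = np.full(n_epochs, sigma**2)

# Inverse-variance (maximum-likelihood under Gaussian noise) combination of the
# per-epoch estimates along the candidate orbit.
weights = 1.0 / variances
flux_combined = np.sum(weights * fluxes) / np.sum(weights)
snr_combined = flux_combined * np.sqrt(np.sum(weights))

print("per-epoch S/N:", np.round(fluxes / np.sqrt(variances), 2))
print("combined S/N : %.2f (expected gain ~ sqrt(%d) = %.2f)"
      % (snr_combined, n_epochs, np.sqrt(n_epochs)))
```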

    Exoplanet imaging data challenge: benchmarking the various image processing methods for exoplanet detection

    The Exoplanet Imaging Data Challenge is a community-wide effort meant to offer a platform for a fair and common comparison of image processing methods designed for exoplanet direct detection. For this purpose, it gathers on a dedicated repository (Zenodo) data from several high-contrast ground-based instruments worldwide, in which we injected synthetic planetary signals. The data challenge is hosted on the CodaLab competition platform, where participants can upload their results. The specifications of the data challenge are published on our website https://exoplanet-imaging-challenge.github.io/. The first phase, launched on the 1st of September 2019 and closed on the 1st of October 2020, consisted of detecting point sources in two types of dataset common in the field of high-contrast imaging: data taken in pupil-tracking mode at one wavelength (subchallenge 1, also referred to as ADI) and multispectral data taken in pupil-tracking mode (subchallenge 2, also referred to as ADI+mSDI). In this paper, we describe the approach, organisational lessons learnt and current limitations of the data challenge, as well as preliminary results of the participants’ submissions for this first phase. In the future, we plan to provide permanent access to the standard library of data sets and metrics, in order to guide the validation and support the publication of innovative image processing algorithms dedicated to high-contrast imaging of planetary systems.
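    For illustration only, the sketch below shows one generic way in which a submitted detection map can be scored against known injection positions (recovered injections versus false alarms above a signal-to-noise threshold). It is a hypothetical example written for this summary, not the challenge's official metric.

```python
import numpy as np

rng = np.random.default_rng(4)

def score_detection_map(snr_map, injected_xy, threshold=5.0, match_radius=2.0):
    """Count recovered injections and false alarms above a S/N threshold."""
    inj = np.asarray(injected_xy, dtype=float)             # (n, 2) array of (x, y)
    peaks = np.argwhere(snr_map > threshold)[:, ::-1]      # candidate detections as (x, y)
    if peaks.size == 0:
        return 0, 0
    # Distance between every candidate detection and every injected position.
    d = np.hypot(peaks[:, None, 0] - inj[None, :, 0],
                 peaks[:, None, 1] - inj[None, :, 1])
    recovered = int(np.sum(d.min(axis=0) <= match_radius))     # injections with a nearby peak
    false_alarms = int(np.sum(d.min(axis=1) > match_radius))   # peaks far from any injection
    return recovered, false_alarms

# Toy detection map: Gaussian background plus two bright injected point sources.
snr_map = rng.standard_normal((128, 128))
injections = [(40, 60), (90, 30)]
for x, y in injections:
    snr_map[y, x] += 8.0
print(score_detection_map(snr_map, injections))             # typically (2, 0)
```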

    Combining statistical learning with deep learning for improved exoplanet detection and characterization


    Détection et caractérisation d'objets à partir de signaux faibles dans des images : applications en astronomie et microscopie

    Detecting and characterizing objects in images in the low signal-to-noise ratio regime is a critical issue in many areas such as astronomy or microscopy. In astronomy, the detection of exoplanets and their characterization by direct imaging from the Earth are very active research topics. A target star and its close environment (hosting potential exoplanets) are observed on short exposures. In microscopy, in-line holography is a cost-effective method for characterizing microscopic objects. Based on the recording of a hologram, it allows digital focusing in any plane of the imaged 3-D volume. In these two fields, the object detection problem is made difficult by the low contrast between the objects and the nonstationary background of the recorded images. In this thesis, we propose an unsupervised exoplanet detection and characterization algorithm based on the statistical modeling of background fluctuations. The method, based on a modeling of the statistical distribution of patches, captures their spatial covariances. It outperforms state-of-the-art techniques on several datasets from the European high-contrast imager SPHERE operating at the Very Large Telescope. It produces statistically grounded and spatially stationary detection maps in which detections can be performed at a constant probability of false alarm. It also produces photometrically unbiased spectral energy distributions of the detected sources. The use of a statistical model of the data leads to reliable photometric and astrometric accuracies. This methodological framework can be adapted to the detection of spatially extended patterns in a strong structured background, such as the diffraction patterns encountered in holographic microscopy. We also propose robust approaches based on weighting strategies to reduce the influence of the numerous outliers present in real data. We show on holographic videos that the proposed weighting approach achieves a bias/variance tradeoff. In astronomy, the robustness improves the performance of our detection method, in particular at close separations where the stellar residuals dominate. Our algorithms are also adapted to benefit from the possible spectral diversity of the data, which improves the detection and characterization performance. All the algorithms developed are unsupervised: weighting and/or regularization parameters are estimated in a data-driven fashion. Beyond the applications in astronomy and microscopy, the signal processing methodologies introduced are general and could be applied to other detection and estimation problems.
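    As an illustration of the patch-based detection framework summarized above, here is a minimal sketch (synthetic data only, with a Gaussian blob standing in for the instrumental off-axis PSF; none of it is the actual PACO code): the local background statistics (mean and a shrinkage-regularized patch covariance) are learned from the data, and a whitened matched filter yields a statistic that is approximately standard normal under the background-only hypothesis, so a fixed threshold corresponds to an approximately constant probability of false alarm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setting: at one location we observe T temporal frames of a small
# k x k patch dominated by spatially correlated stellar leakage.
T, k = 200, 5
p = k * k

# Synthetic correlated background patches (a stand-in for real SPHERE data).
A = rng.standard_normal((p, p))
cov_true = A @ A.T / p + np.eye(p)
patches = rng.multivariate_normal(np.zeros(p), cov_true, size=T)

# Known source pattern within the patch (the off-axis PSF); here a simple
# Gaussian blob used as a placeholder.
y, x = np.mgrid[:k, :k] - k // 2
h = np.exp(-(x**2 + y**2) / 2.0).ravel()

# Local background statistics: empirical mean and a shrinkage-regularized
# covariance (a common remedy when T is not much larger than p).
m = patches.mean(axis=0)
S = np.cov(patches, rowvar=False)
rho = 0.1                                    # shrinkage weight, fixed by hand here
C = (1 - rho) * S + rho * np.trace(S) / p * np.eye(p)
C_inv = np.linalg.inv(C)

def detection_snr(observed_patch):
    """Whitened matched-filter S/N of the pattern h in one patch."""
    r = observed_patch - m
    return (h @ C_inv @ r) / np.sqrt(h @ C_inv @ h)

# Under the background-only hypothesis the statistic is approximately N(0, 1),
# so a fixed threshold gives an (approximately) constant false-alarm probability.
print("S/N on a background patch   :", round(detection_snr(patches[-1]), 2))
print("S/N with an injected source :", round(detection_snr(patches[-1] + 5.0 * h), 2))
```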

    Optimizing phase object reconstruction using an in-line digital holographic microscope and a reconstruction based on a Lorenz-Mie model

    Among the various configurations that may be used in digital holography, the original in-line “Gabor” configuration is the simplest setup, with a single beam. It requires sparsity of the sample, but it is free of any beam-separation device and its associated drawbacks. This option is particularly well suited when cost, compact design, or stability are important, and this configuration is also easier to adapt to a traditional microscope. Finally, from the metrological point of view, this configuration, combined with parametric inverse reconstructions using Lorenz-Mie theory, has proven to enable highly accurate estimation of spherical particle parameters (3D location, radius and refractive index) with sub-micron accuracy. Experimental parameters such as the defocus distance, the choice of the objective, or the coherence of the source have a strong influence on the accuracy of the estimation. They are often studied experimentally on specific setups. We previously demonstrated the benefit of using statistical signal processing tools such as the Cramér-Rao lower bounds to predict the best theoretical accuracy reachable for opaque objects. This accuracy depends on the image/hologram formation model, the noise model and the signal-to-noise ratio in the holograms. In a co-design framework, we propose here to investigate the influence of experimental parameters on the estimation of the radius and refractive index of micrometer-sized transparent spherical objects. In this context, we use Lorenz-Mie theory to simulate spherical-object holograms, to compute Cramér-Rao lower bounds, and to numerically reconstruct the object parameters using an inverse problem approach. These theoretical studies are then used to challenge our digital holographic microscopy setup and to draw conclusions about accuracy, limitations and possible enhancements.
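    For additive white Gaussian noise, the Cramér-Rao analysis mentioned above reduces to inverting a Fisher information matrix built from the Jacobian of the hologram model with respect to the parameters. The sketch below uses a crude toy intensity model in place of Lorenz-Mie theory (the model, parameter names and values are placeholders), but the bound computation itself is generic.

```python
import numpy as np

# Minimal sketch of a Cramér-Rao lower bound computation for a parametric
# hologram model under additive white Gaussian noise.  The model below is a
# crude toy pattern (a damped radial chirp), NOT the Lorenz-Mie model used in
# the paper; parameters and numbers are placeholders.

def hologram_model(theta, grid):
    """Toy intensity pattern parameterized by (x0, y0, z, r)."""
    x0, y0, z, r = theta
    X, Y = grid
    rho2 = (X - x0) ** 2 + (Y - y0) ** 2
    return 1.0 + r * np.cos(np.pi * rho2 / z) * np.exp(-rho2 / (50.0 * z))

def crlb(theta, grid, sigma, eps=1e-6):
    """CRLB = diag(F^-1) with Fisher matrix F = J^T J / sigma^2 (white Gaussian noise)."""
    theta = np.asarray(theta, dtype=float)
    g0 = hologram_model(theta, grid).ravel()
    J = np.empty((g0.size, theta.size))
    for i in range(theta.size):                  # forward-difference Jacobian
        tp = theta.copy()
        tp[i] += eps * max(1.0, abs(theta[i]))
        J[:, i] = (hologram_model(tp, grid).ravel() - g0) / (tp[i] - theta[i])
    fisher = J.T @ J / sigma**2
    return np.sqrt(np.diag(np.linalg.inv(fisher)))   # lower bound on each parameter's std

n = 256
grid = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float))
theta = (128.0, 128.0, 40.0, 0.3)                # (x0, y0, defocus-like, contrast)
print("CRLB (std) on x0, y0, z, r:", np.round(crlb(theta, grid, sigma=0.01), 5))
```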

    Caractérisation robuste d'objets à partir de vidéos de microscopie sans lentille

    Lensless microscopy, also known as in-line digital holography, is a 3D quantitative imaging method used in various fields including microfluidics and biomedical imaging. To estimate the size and 3D location of microscopic objects in holograms, maximum likelihood methods have been shown to outperform traditional approaches based on 3D image reconstruction followed by 3D image analysis. However, the presence of objects other than the object of interest may bias maximum likelihood estimates. Using experimental videos of holograms, we show that replacing the maximum likelihood estimator with a robust estimation procedure reduces this bias. We propose a criterion based on the intersection of confidence intervals in order to automatically set the level that distinguishes between inliers and outliers, and we show that this criterion achieves a bias/variance trade-off. We also show that joint analysis of a sequence of holograms using the robust procedure further improves estimation accuracy.
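    A minimal one-dimensional sketch of the robust-estimation idea (illustrative data, not the paper's estimator): a Huber-type iteratively reweighted estimate down-weights large residuals and removes most of the bias that outliers introduce in the plain maximum-likelihood (mean) estimate. Here the inlier/outlier level c is fixed by hand, whereas the paper selects it automatically with the intersection-of-confidence-intervals criterion.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: estimate a single parameter (a mean intensity) from
# measurements contaminated by outliers, e.g. signals from other objects.
n = 300
data = 2.0 + 0.1 * rng.standard_normal(n)        # inliers around the true value 2.0
data[:30] += 1.5                                 # 10% of outliers biased upwards

def robust_mean(x, c=2.0, n_iter=20):
    """Huber-type iteratively reweighted estimate; c sets the inlier/outlier level."""
    mu = np.median(x)                            # robust starting point
    for _ in range(n_iter):
        scale = 1.4826 * np.median(np.abs(x - mu))    # MAD-based scale estimate
        r = np.abs(x - mu) / max(scale, 1e-12)
        w = np.minimum(1.0, c / np.maximum(r, 1e-12)) # down-weight large residuals
        mu = np.sum(w * x) / np.sum(w)
    return mu

print("maximum-likelihood (mean) estimate: %.3f" % data.mean())   # biased by the outliers
print("robust estimate                   : %.3f" % robust_mean(data))
```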

    Optimal multi-epoch combination of direct imaging observations for improved exoplanet detection

    Exoplanet detection by direct imaging remains one of the most challenging fields of modern astronomy. The signal of the star can prevent the detection of orbiting companions in single datasets, but combining information from several observations helps improve the detection limits. We propose a new algorithm named PACOME, based on PACO’s approach, which optimally combines multi-epoch datasets in a maximum likelihood sense and improves the detection sensitivity to potential exoplanets by taking into account their orbital motions. The efficiency of the algorithm is tested on the well-known exoplanetary system 51 Eridani.

    PACO ASDI: an algorithm for exoplanet detection and characterization in direct imaging with integral field spectrographs

    Context. Exoplanet detection and characterization by direct imaging both rely on sophisticated instruments (adaptive optics and coronagraph) and adequate data processing methods. Angular and spectral differential imaging (ASDI) combines observations at different times and a range of wavelengths in order to separate the residual signal from the host star and the signal of interest corresponding to off-axis sources. Aims. Very high contrast detection is only possible with an accurate modeling of those two components, in particular of the background due to stellar leakage from the host star masked out by the coronagraph. Beyond the detection of point-like sources in the field of view, it is also essential to characterize the detection in terms of statistical significance and astrometry and to estimate the source spectrum. Methods. We extend our recent method PACO, based on local learning of patch covariances, in order to capture the spectral and temporal fluctuations of background structures. From this statistical modeling, we build a detection algorithm and a spectrum estimation method: PACO ASDI. The modeling of spectral correlations proves useful both in reducing detection artifacts and in obtaining accurate statistical guarantees (detection thresholds and photometry confidence intervals). Results. An analysis of several ASDI datasets from the VLT/SPHERE-IFS instrument shows that PACO ASDI produces very clean detection maps, for which setting a detection threshold is statistically reliable. Compared to other algorithms routinely used to exploit the scientific results of SPHERE-IFS, sensitivity is improved and many false detections can be avoided. Spectrally smoothed spectra are also produced by PACO ASDI. The analysis of datasets with injected fake planets validates the recovered spectra and the computed confidence intervals. Conclusions. PACO ASDI is a high-contrast processing algorithm accounting for the spatio-spectral correlations of the data to produce statistically grounded detection maps and reliable spectral estimations. Point-source detection and photometric and astrometric characterization are fully automated.
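    A minimal sketch of why modeling spectral correlations matters (synthetic data only; names, dimensions and numbers are illustrative assumptions, not PACO ASDI itself): a shrinkage-regularized spectral covariance of the flux estimates yields both per-channel confidence intervals and a correlation-aware detection statistic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative setting: at a candidate location, each of T frames provides a
# flux estimate in each of L spectral channels; the background residuals are
# correlated across channels (speckle chromaticity).
L, T = 39, 80
true_spectrum = np.linspace(1.0, 0.3, L)         # hypothetical source spectrum

# Spectrally correlated residuals (a stand-in for stellar leakage).
corr = 0.9 ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
noise = rng.multivariate_normal(np.zeros(L), 0.25 * corr, size=T)
samples = true_spectrum + noise                  # shape (T, L)

# Estimated spectrum and covariance of that estimate; shrinkage keeps the
# covariance well conditioned when T is only moderately larger than L.
spec_hat = samples.mean(axis=0)
S = np.cov(samples, rowvar=False) / T            # covariance of the mean
rho = 0.1
C = (1 - rho) * S + rho * np.trace(S) / L * np.eye(L)

# Per-channel confidence intervals and a correlation-aware detection statistic
# (approximately chi-square with L degrees of freedom under background only).
ci_half_width = 1.96 * np.sqrt(np.diag(C))
stat_src = spec_hat @ np.linalg.solve(C, spec_hat)
stat_bg = noise.mean(axis=0) @ np.linalg.solve(C, noise.mean(axis=0))
print("channel 0 flux: %.2f +/- %.2f" % (spec_hat[0], ci_half_width[0]))
print("whitened statistic: %.1f with source, %.1f background only" % (stat_src, stat_bg))
```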

    Augmenter la limite de détection des exoplanètes par combinaison optimale d'observations multi-époques en imagerie directe

    The detection of exoplanets by direct imaging is one of the greatest challenges of modern astronomy. The intensity of the stellar signal can prevent the detection of orbiting exoplanets in individual datasets, but combining information from observations at several epochs makes it possible to lower the detection limits. We propose a new algorithm, named PACOME, based on the PACO approach, which optimally combines multi-epoch datasets in the maximum likelihood sense and improves the detection sensitivity to potential exoplanets by taking their orbital motions into account. The efficiency of the algorithm is tested on the HR 8799 exoplanetary system.

    Fusion de données par filtrage adapté pour la détection d'exoplanètes en imagerie directe

    Detecting exoplanets by direct imaging is an extremely difficult task because of the very high contrast between these sources and their host star. The PACOME algorithm combines observations of the same extrasolar system taken at different epochs, taking into account the Keplerian motion of the sources it seeks to detect, and reaches a so far unequaled sensitivity. In this paper, we extend the formalism of this method to a matched-filtering approach in order to derive an optimal criterion interpretable in terms of signal-to-noise ratio. We illustrate the efficiency of the method on realistic simulations of sources injected into real data.