16 research outputs found

    Reduction of supernova light curves by vector Gaussian processes

    No full text
    Bolometric light curves play an important role in understanding the underlying physics of various astrophysical phenomena, as they allow for comprehensive modeling of the event and enable comparison between different objects. However, constructing these curves often requires approximation and extrapolation from multicolor photometric observations. In this study, we introduce vector Gaussian processes as a new method for the reduction of supernova light curves. This method enables us to approximate vector functions, even with inhomogeneous time-series data, while accounting for the correlation between light curves in different passbands. We applied this methodology to a sample of 29 superluminous supernovae (SLSNe) assembled from the Open Supernova Catalog. Their multicolor light curves were approximated using vector Gaussian processes. Subsequently, under the black-body assumption for the SLSN spectra at each moment in time, we reconstructed the bolometric light curves. The vector Gaussian processes developed in this work are accessible via the Python library gp-multistate-kernel on GitHub. Our approach provides an efficient tool for analyzing light curve data, opening new possibilities for astrophysical research.
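    The core idea above can be sketched in a few lines. This is not the gp-multistate-kernel API; it is a minimal illustration of a two-band Gaussian process with an intrinsic-coregionalization kernel, where a band-correlation matrix B multiplies a shared squared-exponential time kernel, so observations in one band inform predictions in the other. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def vector_gp_predict(t_obs, band_obs, y_obs, t_new, band_new,
                      B, length=10.0, noise=0.1):
    """Posterior mean of a multi-band GP with kernel
    K((t, b), (t', b')) = B[b, b'] * exp(-(t - t')^2 / (2 * length^2)).
    band_obs / band_new are integer band indices into B."""
    def kern(t1, b1, t2, b2):
        rbf = np.exp(-0.5 * (t1[:, None] - t2[None, :])**2 / length**2)
        return B[np.ix_(b1, b2)] * rbf

    # Gram matrix of the observations, with per-point noise variance
    K = kern(t_obs, band_obs, t_obs, band_obs) + noise**2 * np.eye(len(t_obs))
    # Cross-covariance between prediction points and observations
    Ks = kern(t_new, band_new, t_obs, band_obs)
    return Ks @ np.linalg.solve(K, y_obs)

# Band 0 is well sampled; band 1 has a single point. B couples them,
# so band-0 data shapes the band-1 prediction.
t = np.array([0.0, 5.0, 10.0, 15.0])
bands = np.array([0, 0, 0, 1])
y = np.array([1.0, 2.0, 1.5, 1.8])
B = np.array([[1.0, 0.9],
              [0.9, 1.0]])
pred = vector_gp_predict(t, bands, y, np.array([5.0]), np.array([0]), B)
```

    The same machinery extends to any number of passbands by enlarging B; fitting B and the length scale to data (as a real implementation must) is omitted here.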

    How We Can Account for Type Ia Supernova Environment in Cosmological Analysis

    No full text
    Among the other types of supernovae, Type Ia supernovae (SNe Ia) show less luminosity dispersion at maximum light and higher optical luminosities. These properties allow them to be used as cosmological distance indicators, which led to the discovery of the accelerating expansion of the Universe. However, even after the luminosity correction for stretch and color parameters ("standardization"), there is a remaining dispersion on the Hubble diagram of ~0.11 mag. This dispersion can be due to SN environmental effects: progenitor age, chemical composition, and surrounding dust. In this work we study the impact of SN galactocentric distance (376 Pantheon SNe Ia) and host-galaxy morphology (275 Pantheon SNe Ia) on the light curve parameters. We confirm that the stretch parameter depends on galactocentric distance and host morphology, but there is no significant correlation for the color. In the epoch of large transient surveys such as the Vera Rubin Observatory's Legacy Survey of Space and Time, a study of environment and other possible sources of systematic uncertainties in the cosmological analysis is of high priority.
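    A dependence like the stretch vs. galactocentric-distance one can be tested with a standard correlation coefficient and its p-value. The numbers below are synthetic stand-ins with a weak built-in trend, not the Pantheon sample; only the testing pattern is the point.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic illustration: projected galactocentric distances (kpc)
# and SALT2-like stretch values with a small injected slope.
distance = rng.uniform(0.5, 20.0, size=300)
stretch = 0.02 * distance + rng.normal(0.0, 0.9, size=300)

r, p = stats.pearsonr(distance, stretch)
significant = p < 0.05  # conventional threshold
```

    For non-linear or rank-order trends, `scipy.stats.spearmanr` is the usual drop-in alternative.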

    Machine learning techniques for analysis of photometric data from the Open Supernova catalog

    No full text
    The next generation of astronomical surveys will revolutionize our understanding of the Universe, raising unprecedented data challenges in the process. One of them is the impossibility of relying on human scanning for the identification of unusual/unpredicted astrophysical objects. Moreover, given that most of the available data will be in the form of photometric observations, such characterization cannot rely on the existence of high resolution spectroscopic observations. The goal of this project is to detect anomalies in the Open Supernova Catalog (http://sne.space/) with the use of machine learning. We will develop a pipeline where human expertise and modern machine learning techniques can complement each other. Using supernovae as a case study, our proposal is divided into two parts: the first develops a strategy and pipeline in which anomalous objects are identified, and the second submits such anomalous objects to careful individual analysis. The strategy requires an initial data set for which spectroscopy is available for training purposes, but can be applied to a much larger data set for which we only have photometric observations. This project represents an effective strategy to guarantee we shall not overlook exciting new science hidden in the data we fought so hard to acquire.
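    The anomaly-detection step of such a pipeline is typically framed as flagging objects whose feature vectors are isolated from the bulk. A minimal sketch with scikit-learn's IsolationForest, on a hypothetical feature table (the feature names and planted outliers are invented for illustration, not taken from the catalog):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical feature table: one row per object, columns are
# light-curve features (e.g. amplitude, rise time, color).
features = rng.normal(0.0, 1.0, size=(500, 3))
features[:5] += 6.0  # plant a few obvious outliers

clf = IsolationForest(random_state=0).fit(features)
labels = clf.predict(features)           # -1 = anomaly, +1 = normal
anomalies = np.flatnonzero(labels == -1)  # candidates for expert review
```

    The flagged indices then feed the second phase: individual inspection by a human expert, which is where genuine discoveries are separated from data artifacts.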

    Enabling the discovery of fast transients: A kilonova science module for the Fink broker

    No full text
    We describe the fast transient classification algorithm at the center of the kilonova (KN) science module currently implemented in the Fink broker and report classification results based on simulated catalogs and real data from the ZTF alert stream. We used noiseless, homogeneously sampled simulations to construct a basis of principal components (PCs). All light curves from a more realistic ZTF simulation were written as a linear combination of this basis. The corresponding coefficients were used as features in training a random forest classifier. The same method was applied to long (>30 days) and medium (<30 days) light curves. The latter aimed to simulate the data situation found within the ZTF alert stream. Classification based on long light curves achieved 73.87% precision and 82.19% recall. Medium baseline analysis resulted in 69.30% precision and 69.74% recall, thus confirming the robustness of precision results when limited to 30 days of observations. In both cases, dwarf flares and point Type Ia supernovae were the most frequent contaminants. The final trained model was integrated into the Fink broker and has been distributing fast transients, tagged as KN_candidates, to the astronomical community, especially through the GRANDMA collaboration. We showed that features specifically designed to grasp different light curve behaviors provide enough information to separate fast (KN-like) from slow (non-KN-like) evolving events. This module represents one crucial link in an intricate chain of infrastructure elements for multi-messenger astronomy which is currently being put in place by the Fink broker team in preparation for the arrival of data from the Vera Rubin Observatory Legacy Survey of Space and Time.
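    The PC-basis-plus-random-forest pattern described above can be sketched end to end on toy data. The light curve shapes, timescales, and class labels below are invented for illustration (fast "KN-like" vs. slow "SN-like" decays), not the Fink training set:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
t = np.linspace(0.0, 30.0, 60)  # homogeneously sampled grid, days

def curve(tau):
    # Toy light curve: fast rise, exponential decay with timescale tau
    return (1.0 - np.exp(-t)) * np.exp(-t / tau)

# Fast ("KN-like", tau ~ 3 d) vs slow ("SN-like", tau ~ 20 d) events
X = np.array([curve(rng.uniform(2, 4)) for _ in range(100)] +
             [curve(rng.uniform(15, 25)) for _ in range(100)])
y = np.array([1] * 100 + [0] * 100)

pca = PCA(n_components=3).fit(X)   # basis of principal components
coeff = pca.transform(X)           # projection coefficients as features
clf = RandomForestClassifier(random_state=0).fit(coeff, y)
acc = clf.score(coeff, y)
```

    In a broker setting the projection coefficients, not the raw fluxes, are the classifier inputs, which keeps the feature space small and robust to sampling differences.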

    SNAD transient miner: Finding missed transient events in ZTF DR4 using k-D trees

    No full text
    We report the automatic detection of 11 transients (7 possible supernovae and 4 active galactic nuclei candidates) within the Zwicky Transient Facility fourth data release (ZTF DR4), all of them observed in 2018 and absent from public catalogs. Among these, three were not part of the ZTF alert stream. Our transient mining strategy employs 41 physically motivated features extracted from both real light curves and four simulated light curve models (SN Ia, SN II, TDE, SLSN-I). These features are input to a k-D tree algorithm, from which we calculate the 15 nearest neighbors. After pre-processing and selection cuts, our dataset contained approximately a million objects, among which we visually inspected the 105 closest neighbors from seven of our brightest, most well-sampled simulations, comprising 89 unique ZTF DR4 sources. Our result illustrates the potential of coherently incorporating domain knowledge and automatic learning algorithms, which is one of the guiding principles directing the SNAD team. It also demonstrates that ZTF data releases are a suitable testing ground for data mining algorithms aiming to prepare for the next generation of astronomical data.
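    The nearest-neighbor query at the heart of this strategy can be sketched with SciPy's k-D tree. The feature vectors and the template below are random placeholders (and the dataset is shrunk from a million objects to a thousand); only the query pattern mirrors the description above:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
# Placeholder feature vectors: one row per real light curve
real = rng.normal(0.0, 1.0, size=(1000, 5))
# Feature vector of one simulated template (e.g. an SN Ia model)
template = np.zeros(5)

tree = cKDTree(real)                      # build once over the real data
dist, idx = tree.query(template, k=15)    # its 15 nearest real neighbors
```

    Repeating the query for each simulated model and pooling the returned indices yields the candidate list that is then visually inspected.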

    Rainbow: a colorful approach on multi-passband light curve estimation

    No full text
    We present Rainbow, a physically motivated framework which enables simultaneous multi-band light curve fitting. It allows the user to construct a 2-dimensional continuous surface across wavelength and time, even in situations where the number of observations in each filter is significantly limited. Assuming the electromagnetic radiation emission from the transient can be approximated by a black body, we combined an expected temperature evolution and a parametric function describing its bolometric light curve. These three ingredients allow the information available in one passband to guide the reconstruction in the others, thus enabling a proper use of multi-survey data. We demonstrate the effectiveness of our method by applying it to simulated data from the Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC) as well as real data from the Young Supernova Experiment (YSE DR1). We evaluate the quality of the estimated light curves according to three different tests: goodness of fit, time of peak prediction, and ability to transfer information to machine learning (ML) based classifiers. Results confirm that Rainbow leads to equivalent (SN II) or up to 75% better (SN Ibc) goodness of fit when compared to the Monochromatic approach. Similarly, accuracy when using Rainbow best-fit values as a parameter space in multi-class ML classification improves for all classes in our sample. An efficient implementation of Rainbow has been publicly released as part of the light curve package at https://github.com/light-curve/light-curve. Our approach enables straightforward light curve estimation for objects with observations in multiple filters and from multiple experiments. It is particularly well suited for situations where light curve sampling is sparse.
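    The three ingredients named above (black-body spectral shape, temperature evolution, parametric bolometric term) can be combined into a toy flux surface F(t, lambda). This is a sketch of the idea, not the released light-curve implementation; the specific functional forms (Bazin-like bolometric term, sigmoid cooling) and all parameter values are assumptions for illustration:

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # cgs: erg*s, cm/s, erg/K

def planck(wave_cm, temp):
    """Black-body spectral radiance B_lambda in cgs units."""
    x = H * C / (wave_cm * KB * temp)
    return 2.0 * H * C**2 / wave_cm**5 / np.expm1(x)

def rainbow_surface(t, wave_cm, t0=0.0, t_rise=5.0, t_fall=20.0,
                    T_hot=12000.0, T_cold=5000.0, t_T=10.0):
    """Toy Rainbow-style surface: a Bazin-like bolometric term times a
    black-body spectral shape with a sigmoid temperature evolution."""
    # Bolometric term: rising sigmoid times exponential decline
    bol = np.exp(-(t - t0) / t_fall) / (1.0 + np.exp(-(t - t0) / t_rise))
    # Temperature cools from T_hot to T_cold around t0
    temp = T_cold + (T_hot - T_cold) / (1.0 + np.exp((t - t0) / t_T))
    # Normalize the spectral shape by total black-body emission
    sigma = 5.670e-5  # Stefan-Boltzmann constant, cgs
    return bol * np.pi * planck(wave_cm, temp) / (sigma * temp**4)

# Evaluate the surface at any epoch and wavelength, e.g. a g-band
# effective wavelength of ~4770 Angstrom = 4.77e-5 cm:
flux = rainbow_surface(np.array([0.0, 30.0]), 4.77e-5)
```

    In the actual framework the free parameters of such a surface are fit jointly to all passbands at once, which is what lets a well-sampled band constrain a sparse one.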