518 research outputs found

    Evaluating the effect of stellar multiplicity on the PSF of space-based weak lensing surveys

    Full text link
    The next generation of space-based telescopes used for weak lensing surveys will require exquisite point spread function (PSF) determination. Previously negligible effects may become important in the reconstruction of the PSF, in part because of the improved spatial resolution. In this paper, we show that unresolved multiple star systems can affect the ellipticity and size of the PSF, and that this effect is not cancelled even when many stars are used in the reconstruction process. We estimate the error in the reconstruction of the PSF due to binaries in the star sample, both analytically and with image simulations, for different PSFs and stellar populations. The simulations support our analytical finding that the error on the size of the PSF is a function of the distribution of multiple stars and of the intrinsic PSF size, i.e. the size that would be measured if all stars were single. Similarly, the modification of each of the complex ellipticity components (e1, e2) depends on the distribution of multiple stars and on the intrinsic complex ellipticity. Using image simulations, we also show that the predicted error in the PSF shape is a theoretical limit that can be reached only if a large number of stars (up to thousands) is used together to build the PSF at any desired spatial position. With fewer stars, the PSF reconstruction is worse. Finally, we compute the effect of binarity for different stellar magnitudes and show that bright stars alter the PSF size and ellipticity more than faint stars. This may affect the design of PSF calibration strategies and the choice of the related calibration fields. Comment: 10 pages, 6 figures, accepted in A&A
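    As a toy illustration of the effect described above, the sketch below (an illustrative assumption, not the paper's actual model) computes the second-moment size and complex ellipticity of a round Gaussian PSF blended with an unresolved binary. The two point sources add a flux-weighted position covariance to the moment tensor, which biases both the size and the ellipticity.

```python
import numpy as np

def blended_psf_moments(r0_sq, flux_ratio, sep, angle):
    """Second moments of a round Gaussian PSF (per-axis variance r0_sq)
    blended with an unresolved binary.
    flux_ratio: secondary/primary flux; sep: separation; angle: position angle."""
    f = flux_ratio / (1.0 + flux_ratio)   # flux fraction carried by the secondary
    dx, dy = sep * np.cos(angle), sep * np.sin(angle)
    w = f * (1.0 - f)                     # flux-weighted variance of the two positions
    qxx = r0_sq + w * dx * dx
    qyy = r0_sq + w * dy * dy
    qxy = w * dx * dy
    size_sq = qxx + qyy                   # trace-based size proxy
    e1 = (qxx - qyy) / (qxx + qyy)        # complex ellipticity components
    e2 = 2.0 * qxy / (qxx + qyy)
    return size_sq, e1, e2
```

For a single star (flux_ratio = 0) the blend reduces to the intrinsic PSF; an equal-flux binary maximises the bias, consistent with the abstract's point that bright companions matter most.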

    Stellar classification from single-band imaging using machine learning

    Full text link
    Information on the spectral types of stars is of great interest in view of the exploitation of space-based imaging surveys. In this article, we investigate the classification of stars into spectral types using only the shape of their diffraction pattern in a single broad-band image. We propose a supervised machine learning approach to this endeavour, based on principal component analysis (PCA) for dimensionality reduction, followed by artificial neural networks (ANNs) estimating the spectral type. Our analysis is performed with image simulations mimicking the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) in the F606W and F814W bands, as well as the Euclid VIS imager. We first demonstrate this classification in a simple context, assuming perfect knowledge of the point spread function (PSF) model and the possibility of accurately generating mock training data for the machine learning. We then analyse its performance in a fully data-driven situation, in which the training would be performed with a limited subset of bright stars from a survey, and an unknown PSF with spatial variations across the detector. We use simulations of main-sequence stars with flat distributions in spectral type and in signal-to-noise ratio, and classify these stars into 13 spectral subclasses, from O5 to M5. Under these conditions, the algorithm achieves a high success rate both for Euclid and HST images, with typical errors of half a spectral class. Although more detailed simulations would be needed to assess the performance of the algorithm on a specific survey, this shows that stellar classification from single-band images is indeed possible. Comment: 10 pages, 9 figures, 2 tables, accepted in A&A
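    A minimal sketch of the PCA + ANN pipeline the abstract describes, using scikit-learn on mock data. The templates, stamp sizes, noise level, and hyperparameters are illustrative assumptions on my part, not those used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_classes = 13                  # spectral subclasses O5 ... M5
n_per_class, n_pix = 40, 64     # toy star stamps, flattened to 64 pixels
# mock "diffraction patterns": one noisy template per spectral subclass
templates = rng.normal(size=(n_classes, n_pix))
X = np.repeat(templates, n_per_class, axis=0) + 0.3 * rng.normal(
    size=(n_classes * n_per_class, n_pix))
y = np.repeat(np.arange(n_classes), n_per_class)

# dimensionality reduction, then a small neural network classifier
model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=0))
model.fit(X, y)
acc = model.score(X, y)
```

On real survey data the training set would instead come from bright stars with known spectral types, as the abstract's data-driven scenario describes.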

    Deep Convolutional Neural Networks as strong gravitational lens detectors

    Full text link
    Future large-scale surveys with high-resolution imaging will provide us with a few 10^5 new strong galaxy-scale lenses. These strong lensing systems, however, will be contained in data volumes beyond the capacity of human experts to visually classify in an unbiased way. We present a new strong gravitational lens finder based on convolutional neural networks (CNNs). The method was applied to the Strong Lensing Challenge organised by the Bologna Lens Factory, where it achieved first and third place on the space-based and ground-based data sets, respectively. The goal was a fully automated lens finder for ground-based and space-based surveys that minimises human inspection. We compare the results of our CNN architecture and three new variations ("invariant", "views" and "residual") on the simulated data of the challenge. Each method was trained separately 5 times on 17 000 simulated images, cross-validated using 3 000 images, and then applied to a 100 000-image test set. We used two different metrics for evaluation: the area under the receiver operating characteristic curve (AUC) score and the recall with no false positives (Recall_0FP). For ground-based data our best method achieved an AUC score of 0.977 and a Recall_0FP of 0.50. For space-based data our best method achieved an AUC score of 0.940 and a Recall_0FP of 0.32. On space-based data, adding dihedral invariance to the CNN architecture diminished the overall score but achieved a higher no-contamination recall. We found that committees of 5 CNNs produce the best recall at zero contamination and consistently score a better AUC than a single CNN, and that every variation of our CNN lens finder achieves AUC scores within 6% of 1. Comment: 9 pages, accepted to A&A
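    The two evaluation metrics can be computed as follows. This is a generic sketch on toy scores; the `recall_at_zero_fp` helper is an illustrative name of my own, not code from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def recall_at_zero_fp(y_true, scores):
    """Recall achievable with a threshold above every negative's score,
    i.e. the fraction of true lenses recovered with zero false positives."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    thresh = scores[y_true == 0].max()        # strictest (highest-scoring) negative
    return float(np.mean(scores[y_true == 1] > thresh))

# toy labels (1 = lens) and classifier scores
y = np.array([0, 0, 0, 1, 1, 1, 1])
s = np.array([0.1, 0.2, 0.6, 0.5, 0.7, 0.8, 0.9])
auc = roc_auc_score(y, s)
r0 = recall_at_zero_fp(y, s)
```

AUC rewards good ranking overall, while Recall_0FP measures only the purest operating point, which is why the abstract's dihedral-invariant variant can improve one metric while lowering the other.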

    Whole-class discussion in mathematics using an Interactive Whiteboard (Tableau Blanc Interactif)

    Get PDF
    This research project aims to measure the impact that the Interactive Whiteboard (TBI) can have on pupils during whole-class discussion phases in mathematics. This impact can differ greatly depending on the topic covered and on how the TBI is used. I therefore investigate how pupils' learning is affected by testing different ways of conducting a whole-class discussion with the TBI. This learning is assessed and compared with a so-called reference class, then analysed in order to identify the negative and positive aspects of the TBI. Based on these results, it is possible to measure the effectiveness of the Interactive Whiteboard and to observe which way of using it proves optimal

    COSMOGRAIL: the COSmological MOnitoring of GRAvItational Lenses XV. Assessing the achievability and precision of time-delay measurements

    Full text link
    COSMOGRAIL is a long-term photometric monitoring programme of gravitationally lensed QSOs aimed at implementing Refsdal's time-delay method to measure cosmological parameters, in particular H0. Given long and well-sampled light curves of strongly lensed QSOs, time-delay measurements require numerical techniques whose quality must be assessed. To this end, and also in view of future monitoring programmes or surveys such as the LSST, a blind signal-processing competition named Time Delay Challenge 1 (TDC1) was held in 2014. The aim of the present paper, which is based on the simulated light curves from the TDC1, is twofold. First, we test the performance of the time-delay measurement techniques currently used in COSMOGRAIL. Second, we analyse the quantity and quality of the harvest of time delays obtained from the TDC1 simulations. To achieve these goals, we first identify time delays through a careful inspection of the light curves via a dedicated visual interface. Our measurement algorithms can then be applied to the data in an automated way. We show that our techniques have no significant biases and yield adequate uncertainty estimates, resulting in reduced chi^2 values between 0.5 and 1.0. We provide estimates of the number and precision of time-delay measurements that can be expected from future time-delay monitoring campaigns as a function of the photometric signal-to-noise ratio and of the true time delay. We make our blind measurements on the TDC1 data publicly available. Comment: 11 pages, 8 figures, published in Astronomy & Astrophysics
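    A minimal sketch of a brute-force time-delay search on synthetic light curves. The two-sinusoid "light curve" and the simple grid search are illustrative assumptions on my part, not COSMOGRAIL's actual curve-shifting techniques.

```python
import numpy as np

def curve(t):
    # toy QSO light curve: two sinusoids standing in for real variability
    return np.sin(0.2 * t) + 0.5 * np.sin(0.07 * t)

def estimate_delay(t, a, b, delays):
    """Grid search: evaluate curve b at t + d for each trial delay d and
    keep the d with the smallest mean squared mismatch against curve a."""
    mask = t <= t.max() - delays.max()   # avoid extrapolating past the data
    chi2 = [np.mean((a[mask] - np.interp(t[mask] + d, t, b)) ** 2)
            for d in delays]
    return delays[int(np.argmin(chi2))]

t = np.linspace(0.0, 100.0, 400)
true_delay = 6.0
img_a = curve(t)
img_b = curve(t - true_delay)            # image B lags image A by 6 days
delays = np.arange(0.0, 12.25, 0.25)
best = estimate_delay(t, img_a, img_b, delays)
```

Real monitoring data add microlensing variability, irregular sampling, and season gaps, which is exactly why the paper stresses assessing measurement techniques on realistic simulations such as the TDC1 curves.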

    Toxic and drug-induced peripheral neuropathies: updates on causes, mechanisms and management.

    Get PDF
    PURPOSE OF REVIEW: This review discusses publications highlighting current research on toxic, chemotherapy-induced peripheral neuropathies (CIPNs), and drug-induced peripheral neuropathies (DIPNs). RECENT FINDINGS: The emphasis in clinical studies is on the early detection and grading of peripheral neuropathies, whereas recent studies in animal models have given insights into molecular mechanisms, with the discovery of novel neuronal, axonal, and Schwann cell targets. Some substances trigger inflammatory changes in the peripheral nerves. Pharmacogenetic studies are underway to identify genes that may help to predict individuals at higher risk of developing DIPNs. Several papers have been published on chemoprotectants; however, to date, this approach has not been shown effective in clinical trials. SUMMARY: Both length-dependent and non-length-dependent neuropathies are encountered, including small-fiber involvement. The introduction of new diagnostic techniques, such as excitability studies, skin laser Doppler flowmetry, and pharmacogenetics, holds promise for early detection and for elucidating underlying mechanisms. New approaches to improve function and quality of life in CIPN patients are discussed. Apart from developing less neurotoxic anticancer therapies, there is still hope of identifying chemoprotective agents (erythropoietin and substances involved in the endocannabinoid system are promising) able to prevent or correct painful CIPNs