    Automated reliability assessment for spectroscopic redshift measurements

    We present a new approach to automate the spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function (PDF). We propose to rephrase the spectroscopic redshift estimation into a Bayesian framework, in order to incorporate all sources of information and uncertainties related to the redshift estimation process, and produce a redshift posterior PDF that will be the starting point for ML algorithms to provide an automated assessment of a redshift reliability. As a use case, public data from the VIMOS VLT Deep Survey is exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification to describe different types of redshift PDFs, but, due to the subjective definition of these flags, we soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions, unlabelled data from preliminary mock simulations for the Euclid space mission are projected into this mapping to predict their redshift reliability labels. Comment: Submitted on 02 June 2017 (v1). Revised on 08 September 2017 (v2). Latest version 28 September 2017 (this version, v3).
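
    The authors' pipeline is not reproduced here, but the unsupervised step they describe, partitioning redshift posterior PDFs into clusters of similar shape, can be sketched as follows. The choice of descriptors (number of peaks, secondary-to-primary peak ratio, dispersion, entropy) and the use of k-means are illustrative assumptions, not the paper's actual feature set or algorithm.

```python
# Minimal sketch: cluster simple shape descriptors of redshift posterior PDFs.
# Descriptors and algorithm choice are illustrative assumptions only.
import numpy as np
from scipy.signal import find_peaks
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def pdf_descriptors(z_grid, pdf):
    """Summarise one redshift posterior P(z) with a few shape descriptors."""
    pdf = pdf / np.trapz(pdf, z_grid)                        # normalise the PDF
    peaks, props = find_peaks(pdf, height=0.05 * pdf.max())  # significant modes
    heights = np.sort(props["peak_heights"])[::-1]
    second_to_first = heights[1] / heights[0] if len(heights) > 1 else 0.0
    mean_z = np.trapz(z_grid * pdf, z_grid)
    sigma_z = np.sqrt(np.trapz((z_grid - mean_z) ** 2 * pdf, z_grid))
    entropy = -np.trapz(pdf * np.log(pdf + 1e-12), z_grid)
    return [len(peaks), second_to_first, sigma_z, entropy]

def cluster_pdfs(z_grid, pdfs, n_clusters=5):
    """Partition a set of posterior PDFs into clusters of similar shape."""
    X = np.array([pdf_descriptors(z_grid, p) for p in pdfs])
    X = StandardScaler().fit_transform(X)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
```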

    Photometric Redshifts with Surface Brightness Priors

    We use galaxy surface brightness as prior information to improve photometric redshift (photo-z) estimation. We apply our template-based photo-z method to imaging data from the ground-based VVDS survey and the space-based GOODS field from HST, and use spectroscopic redshifts to test our photometric redshifts for different galaxy types and redshifts. We find that the surface brightness prior eliminates a large fraction of outliers by lifting the degeneracy between the Lyman and 4000 Angstrom breaks. Bias and scatter are improved by about a factor of 2 with the prior for both the ground and space data. Ongoing and planned surveys from the ground and space will benefit, provided that care is taken in measurements of galaxy sizes and in the application of the prior. We discuss the image quality and signal-to-noise requirements that enable the surface brightness prior to be successfully applied. Comment: 15 pages, 13 figures, matches published version.
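
    As a minimal sketch of how a surface-brightness prior can be folded into template-based photo-z estimation, one can multiply the colour likelihood on a redshift grid by a prior built from the observed surface brightness. The Gaussian form and width of the prior below are illustrative assumptions, not the paper's calibrated prior.

```python
# Minimal sketch: template-fitting likelihood times a surface-brightness prior.
# The prior shape and sigma_mu are assumptions for illustration only.
import numpy as np

def photoz_posterior(z_grid, chi2_of_z, mu_obs, mu_expected_of_z, sigma_mu=0.5):
    """P(z | colours, surface brightness), normalised on z_grid.

    chi2_of_z        : chi^2(z) from fitting templates to the photometry
    mu_obs           : observed surface brightness of the galaxy
    mu_expected_of_z : model surface brightness expected at each redshift
    """
    like_colours = np.exp(-0.5 * chi2_of_z)
    prior_sb = np.exp(-0.5 * ((mu_obs - mu_expected_of_z) / sigma_mu) ** 2)
    post = like_colours * prior_sb
    return post / np.trapz(post, z_grid)
```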

    The noise of cluster mass reconstructions from a source redshift distribution

    The parameter-free reconstruction of the surface-mass density of clusters of galaxies is one of the principal applications of weak gravitational lensing. From the observable ellipticities of images of background galaxies, the tidal gravitational field (shear) of the mass distribution is estimated, and the corresponding surface mass density is constructed. The noise of the resulting mass map is investigated here, generalizing previous work which included mainly the noise due to the intrinsic galaxy ellipticities. Whereas this dominates the noise budget if the lens is very weak, other sources of noise become important, or even dominant, in the medium-strong lensing regime close to the center of clusters. In particular, shot noise due to a Poisson distribution of galaxy images, and increased shot noise owing to the correlation of galaxies in angular position and redshift, can yield significantly larger levels of noise than that from the intrinsic ellipticities only. We estimate the contributions from these various effects for two widely used smoothing operations, showing that one of them effectively removes the Poisson and correlation noises related to angular positions of galaxies. Noise sources due to the spread in redshift of galaxies are still present in the optimized estimator and are shown to be relevant in many cases. We show how (even approximate) redshift information can be profitably used to reduce the noise in the mass map. The dependence of the various noise terms on the relevant parameters (lens redshift, strength, smoothing length, redshift distribution of background galaxies) is explicitly calculated and simple estimates are provided. Comment: 18 pages, A&A in press.
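
    For orientation, the intrinsic-ellipticity term that previous work focused on is commonly estimated, for a convergence map smoothed with a Gaussian of angular scale θ_s, a background galaxy density n, and an intrinsic ellipticity dispersion σ_ε, by the expression below. This is a commonly quoted approximation, not a result taken from this paper; the Poisson, clustering and redshift-spread terms discussed above add to it.

```latex
\sigma_\kappa^2 \;\simeq\; \frac{\sigma_\epsilon^2}{4\pi\, n\, \theta_{\mathrm{s}}^2}
```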

    Estimating photometric redshifts with artificial neural networks

    A new approach to estimating photometric redshifts - using Artificial Neural Networks (ANNs) - is investigated. Unlike the standard template-fitting photometric redshift technique, a large spectroscopically-identified training set is required but, where one is available, ANNs produce photometric redshift accuracies at least as good as and often better than the template-fitting method. The Bayesian priors on the underlying redshift distribution are automatically taken into account. Furthermore, inputs other than galaxy colours - such as morphology, angular size and surface brightness - may be easily incorporated, and their utility assessed. Different ANN architectures are tested on a semi-analytic model galaxy catalogue and the results are compared with the template-fitting method. Finally the method is tested on a sample of ~ 20000 galaxies from the Sloan Digital Sky Survey. The r.m.s. redshift error in the range z < 0.35 is ~ 0.021. Comment: Submitted to MNRAS, 9 pages, 9 figures, substantial improvements to paper structure.
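
    A minimal sketch of such an estimator, assuming a spectroscopic training set of magnitudes (or colours) and redshifts, is given below; the network architecture and the use of scikit-learn rather than the authors' own code are illustrative assumptions.

```python
# Minimal sketch: train a small neural network on a spectroscopic sample,
# then predict redshifts from photometry alone.  Architecture is illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_photoz_ann(mags_train, z_spec_train, hidden=(10, 10)):
    """mags_train: (N, n_bands) magnitudes/colours; z_spec_train: (N,) redshifts."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=0),
    )
    model.fit(mags_train, z_spec_train)
    return model

# Usage: z_photo = train_photoz_ann(mags_train, z_train).predict(mags_test),
# then quote e.g. the r.m.s. of (z_photo - z_spec) on the test set.
```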

    Multiphoton characterization and live cell imaging using fluorescent adenine analogue 2CNqA

    Fluorescent nucleobase analogues (FBAs) are established tools for studying oligonucleotide structure, dynamics and interactions, and have recently also emerged as an attractive option for labeling RNA-based therapeutics. A recognized drawback of FBAs, however, is that they typically require excitation in the UV region, which for imaging in biological samples may have disadvantages related to phototoxicity, tissue penetration, and out-of-focus photobleaching. Multiphoton excitation has the potential to alleviate these issues and therefore, in this work, we characterize the multiphoton absorption properties and detectability of the highly fluorescent quadracyclic adenine analogue 2CNqA as a ribonucleotide monomer as well as incorporated, at one or two positions, into a 16mer antisense oligonucleotide (ASO). We found that 2CNqA has a two-photon absorption cross section that, among FBAs, is exceptionally high, with values of σ2PA(700 nm) = 5.8 GM, 6.8 GM, and 13 GM for the monomer, single-, and double-labelled oligonucleotide, respectively. Using fluorescence correlation spectroscopy, we show that 2CNqA has a high 2P brightness as the monomer and when incorporated into the ASO, comparing favorably to other FBAs. We furthermore demonstrate the usefulness of the 2P imaging mode for improving detectability of 2CNqA-labelled ASOs in live cells.
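
    For context, a two-photon (2P) brightness is commonly quoted as the product of the 2P absorption cross section and the fluorescence quantum yield. The cross sections below are taken from the abstract; the quantum yield is a placeholder value for illustration only, not a number measured in this work.

```python
# Illustrative arithmetic: 2P brightness ~ sigma_2PA * Phi_F (in GM).
# Cross sections are from the abstract; Phi_F is a placeholder assumption.
cross_sections_GM = {"monomer": 5.8, "single-labelled ASO": 6.8, "double-labelled ASO": 13.0}
phi_f = 0.1  # hypothetical fluorescence quantum yield, for illustration only

for label, sigma_2pa in cross_sections_GM.items():
    print(f"{label}: 2P brightness ~ {sigma_2pa * phi_f:.2f} GM")
```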

    PESSTO monitoring of SN 2012hn: further heterogeneity among faint type I supernovae

    We present optical and infrared monitoring data of SN 2012hn collected by the Public ESO Spectroscopic Survey for Transient Objects (PESSTO). We show that SN 2012hn has a faint peak magnitude (MR ~ -15.7) and shows no hydrogen and no clear evidence for helium in its spectral evolution. Instead, we detect prominent Ca II lines at all epochs, which relates this transient to previously described 'Ca-rich' or 'gap' transients. However, the photospheric spectra (from -3 to +32 d with respect to peak) of SN 2012hn show a series of absorption lines which are unique, and a red continuum that is likely intrinsic rather than due to extinction. Lines of Ti II and Cr II are visible. This may be a temperature effect, which could also explain the red photospheric colour. A nebular spectrum at +150 d shows prominent Ca II, O I, C I and possibly Mg I lines which appear similar in strength to those displayed by core-collapse SNe. To add to the puzzle, SN 2012hn is located at a projected distance of 6 kpc from an E/S0 host and is not close to any obvious star-forming region. Overall SN 2012hn resembles a group of faint H-poor SNe that have been discovered recently and for which a convincing and consistent physical explanation is still missing. They all appear to explode preferentially in remote locations offset from a massive host galaxy with deep limits on any dwarf host galaxies, favouring old progenitor systems. SN 2012hn adds heterogeneity to this sample of objects. We discuss potential explosion channels including He-shell detonations and double detonations of white dwarfs as well as peculiar core-collapse SNe. Comment: 14 pages, 14 figures, accepted to MNRAS on 14/10/201

    Search for new resonant states in 10C and 11C and their impact on the cosmological lithium problem

    The observed primordial 7Li abundance in metal-poor halo stars is found to be lower than its Big-Bang nucleosynthesis (BBN) calculated value by a factor of approximately three. Some recent works suggested the possibility that this discrepancy originates from missing resonant reactions which would destroy 7Be, the parent of 7Li. The most promising candidate resonances which were found include a possibly missed 1- or 2- narrow state around 15 MeV in the compound nucleus 10C formed by 7Be+3He and a state close to 7.8 MeV in the compound nucleus 11C formed by 7Be+4He. In this work, we studied the high excitation energy region of 10C and the low excitation energy region in 11C via the reactions 10B(3He,t)10C and 11B(3He,t)11C, respectively, at the incident energy of 35 MeV. Our results for 10C do not support 7Be+3He as a possible solution for the 7Li problem. Concerning the 11C results, the data show no new resonances in the excitation energy region of interest, and this excludes the 7Be+4He reaction channel as an explanation for the 7Li deficit. Comment: Accepted for publication in Phys. Rev. C (Rapid Communication).

    A field-theoretic approach to the Wiener Sausage

    The Wiener Sausage, the volume traced out by a sphere attached to a Brownian particle, is a classical problem in statistics and mathematical physics. Initially motivated by a range of field-theoretic, technical questions, we present a single loop renormalised perturbation theory of a stochastic process closely related to the Wiener Sausage, which, however, proves to be exact for the exponents and some amplitudes. The field-theoretic approach is particularly elegant and very enjoyable to see at work on such a classic problem. While we recover a number of known, classical results, the field-theoretic techniques deployed provide a particularly versatile framework, which allows easy calculation with different boundary conditions even of higher momenta and more complicated correlation functions. At the same time, we provide a highly instructive, non-trivial example for some of the technical particularities of the field-theoretic description of stochastic processes, such as excluded volume, lack of translational invariance and immobile particles. The aim of the present work is not to improve upon the well-established results for the Wiener Sausage, but to provide a field-theoretic approach to it, in order to gain a better understanding of the field-theoretic obstacles to overcome. Comment: 45 pages, 3 figures, Springer style.
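
    As an illustration of the object under study (not of the field-theoretic calculation itself), the sausage volume can be estimated by brute-force Monte Carlo: sample a discretised Brownian path, then probe its bounding box with uniform random points and count those lying within the sphere radius of the path. The step count, radius and probe density below are arbitrary illustrative choices.

```python
# Monte-Carlo sketch of the Wiener sausage: the volume swept out by a ball
# of radius a dragged along a Brownian path.  Parameters are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def wiener_sausage_volume(n_steps=20000, dt=1e-3, a=0.1, n_probe=200000, dim=3, seed=0):
    rng = np.random.default_rng(seed)
    # Brownian path by cumulative sum of Gaussian increments.
    path = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_steps, dim)), axis=0)
    lo, hi = path.min(axis=0) - a, path.max(axis=0) + a
    box_volume = np.prod(hi - lo)
    # Uniform probe points in the bounding box; keep those within a of the path.
    probes = rng.uniform(lo, hi, size=(n_probe, dim))
    dist, _ = cKDTree(path).query(probes)
    return box_volume * np.mean(dist <= a)

print(wiener_sausage_volume())
```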

    Optical dropout galaxies lensed by the cluster A2667

    We investigate the nature and the physical properties of z, Y and J-dropout galaxies selected behind the lensing cluster A2667. This field is part of our project aimed at identifying z~7-10 candidates accessible to spectroscopic studies, based on deep photometry with ESO/VLT HAWK-I and FORS2 (zYJH and Ks-band images, AB(3 sigma)~26-27) on a sample of lensing clusters extracted from our multi-wavelength combined surveys with SPITZER, HST, and Herschel. In this paper we focus on the complete Y and J-dropout sample, as well as the bright z-dropouts fulfilling the selection criteria by Capak et al. (2011). Ten candidates are selected within the common field of ~33 arcmin2 (effective area once corrected for contamination and lensing dilution). All of them are detected in H and Ks bands in addition to J and/or IRAC 3.6/4.5, with H(AB)~23.4 to 25.2, and have modest magnification factors. Although best-fit photometric redshifts place all these candidates at high-z, the contamination by low-z interlopers is estimated at the 50-75% level based on previous studies and the comparison with the blank-field WIRCAM Ultra-Deep Survey (WUDS). The same result is obtained when photometric redshifts include a luminosity prior, allowing us to remove half of the original sample as likely z~1.7-3 interlopers with young stellar populations and strong extinction. Two additional sources among the remaining sample could be identified at low-z based on a detection at 24 microns and on the HST z_850 band. These low-z interlopers are not well described by current templates given the large break, and cannot be easily identified based solely on optical and near-IR photometry. Given the estimated dust extinction and high SFRs, some of them could be also detected in the IR or sub-mm bands. After correction for likely contaminants, the observed counts at z>7.5 seem to be in agreement with an evolving LF. (abridged) Comment: 18 pages, 11 figures. Accepted for publication in A&A.
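
    For readers unfamiliar with dropout selection, a generic Lyman-break colour cut looks like the sketch below: a strong break across adjacent bands, a flat or blue continuum redward of the break, and no detection in bluer optical bands. The bands and thresholds here are placeholders and are not the Capak et al. (2011) criteria applied in the paper.

```python
# Sketch of a generic dropout (Lyman-break) colour selection.  Bands and
# thresholds are placeholders, NOT the criteria used in the paper.
import numpy as np

def z_dropout_candidates(z_mag, Y_mag, J_mag, detected_optical):
    """Boolean mask of z-dropout candidates from broad-band magnitudes."""
    break_colour = z_mag - Y_mag   # strong break across the Lyman limit
    red_slope = Y_mag - J_mag      # flat/blue continuum redward of the break
    return (break_colour > 1.0) & (red_slope < 0.5) & (~detected_optical)
```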

    Variability in efficiency of particulate organic carbon export: A model study

    The flux of organic carbon from the surface ocean to mesopelagic depths is a key component of the global carbon cycle and is ultimately derived from primary production (PP) by phytoplankton. Only a small fraction of the organic carbon produced by PP is exported from the upper ocean; this fraction is referred to as the export efficiency (herein the e-ratio). Limited observations of the e-ratio are available and there is thus considerable interest in using remotely-sensed parameters such as sea surface temperature to extrapolate local estimates to global annual export flux. Currently, there are large discrepancies between export estimates derived in this way; one possible explanation is spatial or temporal sampling bias in the observations. Here we examine global patterns in the spatial and seasonal variability in the e-ratio and the subsequent effect on export estimates using a high-resolution global biogeochemical model. NEMO-MEDUSA represents export as separate slow- and fast-sinking detrital material, whose remineralisation is respectively temperature dependent and a function of ballasting minerals. We find that both temperature and the fraction of export carried by slow-sinking particles are factors in determining the e-ratio, suggesting that current empirical algorithms for the e-ratio that only consider temperature are overly simple. We quantify the temporal lag between PP and export, which is greatest in regions of strong variability in PP where seasonal decoupling can result in large e-ratio variability. Extrapolating global export estimates from instantaneous measurements of the e-ratio is strongly affected by seasonal variability, and can result in errors in estimated export of up to ±60%.
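
    A minimal sketch of the e-ratio and of the sampling bias discussed above, using a synthetic seasonal cycle rather than NEMO-MEDUSA output: the annual e-ratio is fixed by construction, yet an instantaneous measurement can deviate strongly from it when export lags primary production.

```python
# Minimal sketch of the export efficiency (e-ratio) and of the bias introduced
# by a temporal lag between primary production (PP) and export.  The synthetic
# seasonal cycles and the 30-day lag are illustrative assumptions.
import numpy as np

days = np.arange(365)
pp = 1.0 + 0.8 * np.sin(2 * np.pi * (days - 80) / 365)  # assumed PP seasonal cycle
lag = 30                                                 # assumed lag (days) between PP and export
export = 0.15 * np.roll(pp, lag)                         # export lags and dilutes PP

annual_e_ratio = export.sum() / pp.sum()
instantaneous_e_ratio = export / pp                      # what a snapshot cruise would measure

print(f"annual e-ratio: {annual_e_ratio:.3f}")
print(f"instantaneous e-ratio range: {instantaneous_e_ratio.min():.3f} "
      f"to {instantaneous_e_ratio.max():.3f}")
```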