95 research outputs found
Photometry of supernovae in an image series: methods and application to the Supernova Legacy Survey (SNLS)
We present a technique to measure light curves of time-variable point sources
on a spatially structured background from imaging data. The technique was
developed to measure light curves of SNLS supernovae in order to infer their
distances. This photometry technique performs simultaneous PSF photometry at
the same sky position on an image series. We describe two implementations of
the method: one that resamples images before measuring fluxes and one that
does not. In both instances, we sketch the key algorithms involved and present
the validation using semi-artificial sources introduced in real images in order
to assess the accuracy of the supernova flux measurements relative to that of
surrounding stars. We describe the methods required to anchor these PSF fluxes
to calibrated aperture catalogs, in order to derive SN magnitudes. We find a
marginally significant bias of 2 mmag for the after-resampling method, and no
bias at the mmag level for the non-resampling method. Given surrounding star
magnitudes, we determine the systematic uncertainty of SN magnitudes to be less
than 1.5 mmag, which represents about one third of the current photometric
calibration uncertainty affecting SN measurements. The SN photometry delivers
several by-products: bright star PSF flux measurements, which have a
repeatability of about 0.6%, matching that of aperture measurements; we measure relative
astrometric positions with a noise floor of 2.4 mas for a single-image bright
star measurement; we show that in all bands of the MegaCam instrument, stars
exhibit a profile that broadens linearly with flux, by about 0.5% over the
whole brightness range. Comment: Accepted for publication in A&A. 20 pages
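The simultaneous fit described above can be illustrated with a minimal sketch: per-epoch point-source fluxes and a static background are fitted jointly to a stack of image stamps, with the PSF and the source position held fixed and SN-free reference epochs anchoring the background. All sizes, fluxes, and the Gaussian PSF below are invented for illustration; this is not the SNLS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, nepochs = 25, 6                  # 5x5 pixel stamp, 6 epochs
x = np.arange(5) - 2.0
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / 2.0).ravel()
psf /= psf.sum()                       # unit-flux Gaussian PSF at a fixed sky position

true_flux = np.array([0.0, 0.0, 50.0, 120.0, 90.0, 30.0])  # first two epochs SN-free
background = 10.0 * rng.random(npix)                       # static galaxy + sky per pixel

# Simulated stamps: d[i, p] = f_i * psf[p] + b[p] + noise
data = true_flux[:, None] * psf[None, :] + background[None, :]
data += rng.normal(0.0, 0.1, data.shape)

# Simultaneous linear fit: unknowns are one flux per "SN on" epoch
# plus one static background value per pixel, solved in a single system.
on = true_flux > 0                     # in practice: epochs known to contain the SN
nf = int(on.sum())
A = np.zeros((nepochs * npix, nf + npix))
for k, i in enumerate(np.flatnonzero(on)):
    A[i * npix:(i + 1) * npix, k] = psf
A[:, nf:] = np.tile(np.eye(npix), (nepochs, 1))
theta, *_ = np.linalg.lstsq(A, data.ravel(), rcond=None)
fluxes = theta[:nf]                    # recovered fluxes, close to [50, 120, 90, 30]
```

Because the background pixels are fitted jointly with the fluxes rather than subtracted beforehand, the flux uncertainties correctly include the background uncertainty, which is one motivation for the simultaneous approach.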
The DICE calibration project: design, characterization, and first results
We describe the design, operation, and first results of a photometric
calibration project, called DICE (Direct Illumination Calibration Experiment),
aiming at achieving precise instrumental calibration of optical telescopes. The
heart of DICE is an illumination device composed of 24 narrow-spectrum,
high-intensity, light-emitting diodes (LED) chosen to cover the
ultraviolet-to-near-infrared spectral range. It implements a point-like source
placed at a finite distance from the telescope entrance pupil, yielding a flat
field illumination that covers the entire field of view of the imager. The
purpose of this system is to perform a lightweight routine monitoring of the
imager passbands with a precision better than 5 per-mil on the relative
passband normalisations and about 3 Å on the filter cutoff positions. The
light source is calibrated on a spectrophotometric bench. As our fundamental
metrology standard, we use a photodiode calibrated at NIST. The radiant
intensity of each beam is mapped, and spectra are measured for each LED. All
measurements are conducted at temperatures ranging from 0°C to 25°C
in order to study the temperature dependence of the system. The photometric and
spectroscopic measurements are combined into a model that predicts the spectral
intensity of the source as a function of temperature. We find that the
calibration beams are stable once the slight temperature dependence of the LED
emission properties is taken into account. We show
that the spectral intensity of the source can be characterised with a precision
of 3 Å in wavelength. In flux, we reach an accuracy of about 0.2-0.5%,
depending on how we understand the off-diagonal terms of the error budget
affecting the calibration of the NIST photodiode. With a routine 60-min
calibration program, the apparatus is able to constrain the passbands at the
targeted precision levels. Comment: 25 pages, 27 figures, accepted for publication in A&A
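The combination of photometric and spectroscopic bench measurements into a temperature-dependent model can be sketched very simply: spectra taken at a few temperatures are reduced to a per-wavelength-bin linear model in T, which then predicts the spectrum at any operating temperature. The LED line shape, drift rates, and grids below are invented for illustration; the actual DICE model is more detailed.

```python
import numpy as np

# Hypothetical bench data: one LED's spectrum measured at three temperatures.
temps = np.array([0.0, 10.0, 25.0])              # deg C, matching the range in the text
wave = np.linspace(350.0, 1100.0, 400)           # nm, UV to near-IR

def bench_spectrum(t):
    # Toy LED line: centre drifts ~0.1 nm/K, amplitude drops ~0.2%/K (invented numbers)
    centre = 500.0 + 0.1 * t
    return (1.0 - 0.002 * t) * np.exp(-0.5 * ((wave - centre) / 10.0) ** 2)

spectra = np.array([bench_spectrum(t) for t in temps])   # shape (ntemps, nbins)

# Per-bin linear model in temperature: S(lambda, T) ~ a(lambda) + b(lambda) * T
slope, intercept = np.polyfit(temps, spectra, 1)         # each of shape (nbins,)

def predict(t):
    """Predicted spectral intensity at operating temperature t (deg C)."""
    return slope * t + intercept
```

A linear model per wavelength bin is only an approximation when the line centre itself drifts, but over a modest temperature range and a broad line it stays accurate to the percent level, which is the spirit of the "slight temperature dependence" noted in the abstract.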
A Comparison of Algorithms for the Construction of SZ Cluster Catalogues
We evaluate the construction methodology of an all-sky catalogue of galaxy
clusters detected through the Sunyaev-Zel'dovich (SZ) effect. We perform an
extensive comparison of twelve algorithms applied to the same detailed
simulations of the millimeter and submillimeter sky based on a Planck-like
case. We present the results of this "SZ Challenge" in terms of catalogue
completeness, purity, astrometric and photometric reconstruction. Our results
provide a comparison of a representative sample of SZ detection algorithms and
highlight important issues in their application. In our case study, we show
that the exact expected number of clusters remains uncertain (about a thousand
cluster candidates at |b| > 20 deg with 90% purity) and that it depends on the
SZ model, on the detailed sky simulations, and on the algorithmic
implementation of the detection methods. We also estimate the astrometric precision of the
cluster candidates, which is found to be of the order of ~2 arcmin on average,
and the photometric uncertainty, of order ~30%, depending on flux. Comment: Accepted
for publication in A&A: 14 pages, 7 figures. Detailed figures added in Appendix
Iterative destriping and photometric calibration for Planck-HFI, polarized, multi-detector map-making
We present an iterative scheme designed to recover calibrated I, Q, and U
maps from Planck-HFI data using the orbital dipole due to the satellite motion
with respect to the Solar System frame. It combines map reconstruction, based
on a destriping technique, with an absolute calibration algorithm.
We evaluate systematic and statistical uncertainties incurred during both these
steps with the help of realistic, Planck-like simulations containing CMB,
foreground components and instrumental noise, and assess the accuracy of the
sky map reconstruction by considering the maps of the residuals and their
spectra. In particular, we discuss destriping residuals for polarization
sensitive detectors similar to those of Planck-HFI under different noise
hypotheses and show that these residuals are negligible (for intensity maps) or
smaller than the white noise level (for Q and U Stokes maps), for l > 50. We
also demonstrate that the combined level of residuals of this scheme remains
comparable to those of the destriping-only case except at very low l where
residuals from the calibration appear. For all the considered noise hypotheses,
the relative calibration precision is on the order of a few 10e-4, with a
systematic bias of the same order of magnitude. Comment: 18 pages, 21 figures. Matches published version
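Stripped of polarization, calibration, and realistic scanning, a destriping step of the kind described reduces to alternately binning a sky map and re-estimating low-frequency noise offsets. The toy below models the correlated noise as one constant per data chunk, which is the simplest destriping baseline; all sizes and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
npix, nchunks, per_chunk = 50, 20, 200
pix = rng.integers(0, npix, nchunks * per_chunk)   # pointing: pixel hit by each sample
sky = rng.normal(0.0, 1.0, npix)                   # underlying sky map (toy)
offsets = rng.normal(0.0, 5.0, nchunks)            # low-frequency noise: one offset per chunk
tod = sky[pix] + np.repeat(offsets, per_chunk) \
    + rng.normal(0.0, 0.1, nchunks * per_chunk)    # time-ordered data

# Destriping: alternate between binning a map and re-estimating the chunk offsets
a = np.zeros(nchunks)
hits = np.bincount(pix, minlength=npix)
for _ in range(50):
    m = np.bincount(pix, weights=tod - np.repeat(a, per_chunk), minlength=npix) / hits
    a = (tod - m[pix]).reshape(nchunks, per_chunk).mean(axis=1)
    a -= a.mean()          # the global offset is degenerate with the map level
m = np.bincount(pix, weights=tod - np.repeat(a, per_chunk), minlength=npix) / hits
```

The recovered map agrees with the input sky up to an overall level (the degeneracy noted in the comment) and the residuals are driven by the white noise, which mirrors the behaviour reported in the abstract: destriping residuals below the white noise level.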
Maximum likelihood, parametric component separation and CMB B-mode detection in suborbital experiments
We investigate the performance of the parametric Maximum Likelihood component
separation method in the context of the CMB B-mode signal detection and its
characterization by small-scale CMB suborbital experiments. We consider
high-resolution (FWHM=8') balloon-borne and ground-based observatories mapping
low dust-contrast sky areas of 400 and 1000 square degrees, in three frequency
channels, 150, 250, 410 GHz, and 90, 150, 220 GHz, with sensitivity of order 1
to 10 micro-K per beam-size pixel. These are chosen to be representative of
some of the proposed, next-generation, bolometric experiments. We study the
residual foreground contributions left in the recovered CMB maps in the pixel
and harmonic domain and discuss their impact on a determination of the
tensor-to-scalar ratio, r. In particular, we find that the residuals derived
from the simulated data of the considered balloon-borne observatories are
sufficiently low not to be relevant for the B-mode science. However, the
ground-based observatories are in need of some external information to permit
satisfactory cleaning. We find that if such information is indeed available in
the latter case, both the ground-based and balloon-borne experiments can detect
values of r as low as ~0.04 at the 95% confidence level. The contribution of
the foreground residuals to these limits is then found to be subdominant, and
the limits are driven by the statistical uncertainty due to the CMB, including
E-to-B leakage, and by noise. We emphasize that reaching such levels will require a
sufficient control of the level of systematic effects present in the data. Comment: 18 pages, 12 figures, 6 tables
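The parametric maximum-likelihood method can be sketched as a profile ("spectral") likelihood: for each trial foreground spectral index, the component amplitudes are solved analytically by generalized least squares, and the index minimizing the residual chi-square is kept. The channel set matches the 90/150/220 GHz ground-based configuration quoted above; the noise levels, the flat CMB scaling, and the single power-law foreground are simplifying assumptions.

```python
import numpy as np

freqs = np.array([90.0, 150.0, 220.0])        # GHz, as for the ground-based case above
nu0 = 150.0
sigma = np.array([1.0, 1.0, 2.0])             # per-channel noise (uK, hypothetical)

def mixing(beta):
    # Columns: CMB (taken flat in these units, a simplification) and a dust power law
    return np.column_stack([np.ones_like(freqs), (freqs / nu0) ** beta])

rng = np.random.default_rng(2)
beta_true, npix = 1.6, 200
amps = np.vstack([rng.normal(0.0, 3.0, npix),        # CMB amplitude per pixel
                  rng.normal(10.0, 3.0, npix)])      # dust amplitude per pixel
D = mixing(beta_true) @ amps + rng.normal(0.0, sigma[:, None], (3, npix))

def chi2(beta):
    """Profile chi-square: amplitudes solved by GLS at fixed spectral index."""
    A = mixing(beta) / sigma[:, None]
    S, *_ = np.linalg.lstsq(A, D / sigma[:, None], rcond=None)
    R = D / sigma[:, None] - A @ S
    return float(np.sum(R * R))

betas = np.linspace(0.5, 3.0, 501)
beta_hat = betas[np.argmin([chi2(b) for b in betas])]   # recovered index, near 1.6
```

Fitting one spectral index jointly over many pixels while leaving the amplitudes free per pixel is what makes the problem tractable; the residual foreground left in the recovered CMB amplitudes is then set by how well the index is constrained.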
An Efficient Approach to Obtaining Large Numbers of Distant Supernova Host Galaxy Redshifts
We use the wide-field capabilities of the 2dF fibre positioner and the
AAOmega spectrograph on the Anglo-Australian Telescope (AAT) to obtain
redshifts of galaxies that hosted supernovae during the first three years of
the Supernova Legacy Survey (SNLS). With exposure times ranging from 10 to 60
ksec per galaxy, we were able to obtain redshifts for 400 host galaxies in two
SNLS fields, thereby substantially increasing the total number of SNLS
supernovae with host galaxy redshifts. The median redshift of the galaxies in
our sample that hosted photometrically classified Type Ia supernovae (SNe Ia)
is 0.77, which is 25% higher than the median redshift of spectroscopically
confirmed SNe Ia in the three-year sample of the SNLS. Our results demonstrate
that one can use wide-field fibre-fed multi-object spectrographs on 4m
telescopes to efficiently obtain redshifts for large numbers of supernova host
galaxies over the large areas of sky that will be covered by future
high-redshift supernova surveys, such as the Dark Energy Survey. Comment: 22 pages, 4 figures, accepted for publication in PAS
The pre-launch Planck Sky Model: a model of sky emission at submillimetre to centimetre wavelengths
We present the Planck Sky Model (PSM), a parametric model for the generation
of all-sky, few arcminute resolution maps of sky emission at submillimetre to
centimetre wavelengths, in both intensity and polarisation. Several options are
implemented to model the cosmic microwave background, Galactic diffuse emission
(synchrotron, free-free, thermal and spinning dust, CO lines), Galactic H-II
regions, extragalactic radio sources, dusty galaxies, and thermal and kinetic
Sunyaev-Zeldovich signals from clusters of galaxies. Each component is
simulated by means of educated interpolations/extrapolations of data sets
available at the time of the launch of the Planck mission, complemented by
state-of-the-art models of the emission. Distinctive features of the
simulations are: spatially varying spectral properties of synchrotron and dust;
different spectral parameters for each point source; modeling of the clustering
properties of extragalactic sources and of the power spectrum of fluctuations
in the cosmic infrared background. The PSM enables the production of random
realizations of the sky emission, constrained to match observational data
within their uncertainties, and is implemented in a software package that is
regularly updated with incoming information from observations. The model is
expected to serve as a useful tool for optimizing planned microwave and
sub-millimetre surveys and to test data processing and analysis pipelines. It
is, in particular, used for the development and validation of data analysis
pipelines within the Planck collaboration. A version of the software that can
be used for simulating the observations for a variety of experiments is made
available on a dedicated website. Comment: 35 pages, 31 figures
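Greatly simplified, the component-summation logic of such a sky model is a set of templates at a reference frequency, scaled to the requested frequency with spatially varying spectral laws. The templates, indices, and units below are invented; the actual PSM uses physical emission laws and data-constrained templates for each component.

```python
import numpy as np

rng = np.random.default_rng(3)
npix = 12 * 16**2                     # HEALPix-style pixel count for nside=16 (illustrative)

# Toy component templates at a 100 GHz reference frequency (arbitrary units)
cmb = rng.normal(0.0, 70.0, npix)     # frequency-independent in thermodynamic units
sync = np.abs(rng.normal(20.0, 5.0, npix))
dust = np.abs(rng.normal(10.0, 3.0, npix))

# Spatially varying spectral indices, a distinctive feature noted in the abstract
beta_sync = rng.normal(-3.0, 0.1, npix)
beta_dust = rng.normal(1.6, 0.1, npix)

def sky(nu_ghz, nu0=100.0):
    """Total emission at nu_ghz: sum of components scaled by per-pixel power laws."""
    return (cmb
            + sync * (nu_ghz / nu0) ** beta_sync
            + dust * (nu_ghz / nu0) ** beta_dust)

maps = {nu: sky(nu) for nu in (30.0, 100.0, 353.0)}
```

Evaluating the same templates at each instrument frequency is what lets such a model feed end-to-end pipeline tests: the pipeline's component separation can be checked against the known inputs.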
- …