Mapping and simulating systematics due to spatially-varying observing conditions in DES Science Verification data
Spatially-varying depth and characteristics of observing conditions, such as seeing, airmass, or sky background, are major sources of systematic uncertainties in modern galaxy survey analyses, in particular in deep multi-epoch surveys. We present a framework to extract and project these sources of systematics onto the sky, and apply it to the Dark Energy Survey (DES) to map the observing conditions of the Science Verification (SV) data. The resulting distributions and maps of sources of systematics are used in several analyses of DES SV to perform detailed null tests with the data, and also to incorporate systematics in survey simulations. We illustrate the complementarity of these two approaches by comparing the SV data with the BCC-UFig, a synthetic sky catalogue generated by forward-modelling of the DES SV images. We analyse the BCC-UFig simulation to construct galaxy samples mimicking those used in SV galaxy clustering studies. We show that the spatially-varying survey depth imprinted in the observed galaxy densities and the redshift distributions of the SV data are successfully reproduced by the simulation and well-captured by the maps of observing conditions. The combined use of the maps, the SV data and the BCC-UFig simulation allows us to quantify the impact of spatial systematics on the redshift distributions inferred using photometric redshifts. We conclude that spatial systematics in the SV data are mainly due to seeing fluctuations and are under control in current clustering and weak lensing analyses. The framework presented here is relevant to all multi-epoch surveys, and will be essential for exploiting future surveys such as the Large Synoptic Survey Telescope (LSST), which will require detailed null tests and realistic end-to-end image simulations to correctly interpret the deep, high-cadence observations of the sky.
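The null tests mentioned above check whether the observed galaxy density tracks a mapped observing condition. A minimal sketch of that idea (a toy analogue, not the DES pipeline; all data and the function name are hypothetical) bins equal-area sky pixels by their mean seeing and compares each bin's mean galaxy density to the global mean:

```python
# Toy density-vs-systematic null test: pixels carry a mean-seeing value
# and a galaxy count; a flat ratio across seeing bins indicates no
# seeing-induced density modulation.
from statistics import mean

def null_test(seeing, counts, nbins=4):
    """Return per-bin mean galaxy density relative to the global mean.

    seeing -- per-pixel mean seeing (arcsec), one value per sky pixel
    counts -- per-pixel galaxy counts (equal-area pixels assumed)
    """
    lo, hi = min(seeing), max(seeing)
    width = (hi - lo) / nbins or 1.0
    bins = [[] for _ in range(nbins)]
    for s, n in zip(seeing, counts):
        i = min(int((s - lo) / width), nbins - 1)  # clamp max value into last bin
        bins[i].append(n)
    global_mean = mean(counts)
    return [mean(b) / global_mean if b else None for b in bins]

# Invented numbers with a depth-like trend: worse seeing, fewer detections.
seeing = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
counts = [120, 118, 110, 105, 95, 90, 82, 80]
ratios = null_test(seeing, counts, nbins=4)
```

A declining ratio from the best- to the worst-seeing bin, as in this toy input, is the signature of a depth systematic that the survey simulation would then need to reproduce.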
Rodent models of focal cerebral ischemia: procedural pitfalls and translational problems
Rodent models of focal cerebral ischemia are essential tools in experimental stroke research. They have added tremendously to our understanding of injury mechanisms in stroke and have helped to identify potential therapeutic targets. A plethora of substances, however, in particular an overwhelming number of putative neuroprotective agents, have been shown to be effective in preclinical stroke research, but have failed in clinical trials. Many factors may have contributed to this failure of translation from bench to bedside. Often, deficits in the quality of experimental stroke research seem to be involved. In this article, we review the most common rodent models of focal cerebral ischemia - middle cerebral artery occlusion, photothrombosis, and embolic stroke models - with their respective advantages and problems, and we address the issue of quality in preclinical stroke modeling as well as potential reasons for translational failure.
Population Structure of the Bacterial Pathogen Xylella fastidiosa among Street Trees in Washington D.C.
Funding for Open Access provided by the UMD Libraries Open Access Publishing Fund.

Bacterial leaf scorch, associated with the bacterial pathogen Xylella fastidiosa, is a widely established and problematic disease of landscape ornamentals in Washington D.C. A multilocus sequence typing analysis was performed using 10 housekeeping loci for X. fastidiosa strains in order to better understand the epidemiology of leaf scorch disease in this municipal environment. Samples were collected from 7 different tree species located throughout the District of Columbia, consisting of 101 samples of symptomatic and asymptomatic foliage from 84 different trees. Five strains of the bacterium were identified. Consistent with prior data, these strains were host specific: one strain was associated with members of the red oak family, one with American elm, one with American sycamore, and two with mulberry. Strains found in asymptomatic foliage were the same as those from symptomatic foliage on individual trees. Cross transmission of the strains was not observed at sites with multiple species of infected trees within an approx. 25 m radius of one another. The X. fastidiosa strain specificity observed for each genus of tree suggests a highly specialized host-pathogen relationship.
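Multilocus sequence typing reduces each isolate to a tuple of allele numbers, one per housekeeping locus; isolates with identical tuples share a sequence type (ST). A minimal sketch of that bookkeeping, and of the host-specificity check described above (toy data with 3 loci instead of the study's 10; all names and allele numbers are invented):

```python
# Toy MLST: assign sequence types from allelic profiles, then verify
# that no ST appears on more than one host taxon.

def assign_sequence_types(profiles):
    """Map each isolate ID to an ST number; identical allele tuples share an ST."""
    st_of_profile, result = {}, {}
    for isolate, alleles in profiles.items():
        key = tuple(alleles)
        if key not in st_of_profile:
            st_of_profile[key] = len(st_of_profile) + 1
        result[isolate] = st_of_profile[key]
    return result

def hosts_per_st(sts, host_of):
    """Return {ST: set of host taxa} to test for cross-host strains."""
    out = {}
    for isolate, st in sts.items():
        out.setdefault(st, set()).add(host_of[isolate])
    return out

# Hypothetical allelic profiles (3 loci each) and host assignments.
profiles = {
    "oak-1":  [1, 4, 2],
    "oak-2":  [1, 4, 2],
    "elm-1":  [2, 1, 3],
    "mulb-1": [3, 2, 1],
    "mulb-2": [3, 5, 1],
}
host_of = {"oak-1": "oak", "oak-2": "oak", "elm-1": "elm",
           "mulb-1": "mulberry", "mulb-2": "mulberry"}
sts = assign_sequence_types(profiles)
spread = hosts_per_st(sts, host_of)
host_specific = all(len(hosts) == 1 for hosts in spread.values())
```

In this toy input the two oak isolates collapse to one ST, mulberry carries two distinct STs, and every ST maps to a single host, mirroring the host-specific pattern reported in the study.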
Improvement of the energy resolution via an optimized digital signal processing in GERDA Phase I
Involvement of p38 MAPK pathway in benzo(a)pyrene-induced human hepatoma cell migration and invasion
RedMaGiC: Selecting luminous red galaxies from the DES Science Verification data
We introduce redMaGiC, an automated algorithm for selecting luminous red galaxies (LRGs). The algorithm was specifically developed to minimize photometric redshift uncertainties in photometric large-scale structure studies. redMaGiC achieves this by self-training the colour cuts necessary to produce a luminosity-thresholded LRG sample of constant comoving density. We demonstrate that redMaGiC photo-zs are very nearly as accurate as the best machine learning-based methods, yet they require minimal spectroscopic training, do not suffer from extrapolation biases, and are very nearly Gaussian. We apply our algorithm to Dark Energy Survey (DES) Science Verification (SV) data to produce a redMaGiC catalogue sampling the redshift range z ∈ [0.2, 0.8]. Our fiducial sample has a comoving space density of 10⁻³ (h⁻¹ Mpc)⁻³, and a median photo-z bias (z_spec − z_photo) and scatter (σ_z/(1 + z)) of 0.005 and 0.017, respectively. The corresponding 5σ outlier fraction is 1.4 per cent. We also test our algorithm with Sloan Digital Sky Survey Data Release 8 and Stripe 82 data, and discuss how spectroscopic training can be used to control photo-z biases at the 0.1 per cent level.
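The constant-comoving-density requirement can be sketched with a toy analogue (not the redMaGiC code, which self-trains colour cuts; here the selection is reduced to a per-redshift-bin luminosity threshold, and galaxies, bin volumes, and the function name are all invented):

```python
# Toy constant-density selection: in each redshift bin, choose the
# luminosity threshold that keeps exactly target_density * volume of
# the brightest galaxies.

def luminosity_cuts(galaxies, z_edges, volumes, target_density):
    """Return one luminosity threshold per redshift bin.

    galaxies       -- iterable of (redshift, luminosity) pairs
    z_edges        -- bin edges, e.g. [0.2, 0.5, 0.8]
    volumes        -- comoving volume of each bin (assumed precomputed)
    target_density -- desired comoving number density
    """
    cuts = []
    for (zlo, zhi), vol in zip(zip(z_edges, z_edges[1:]), volumes):
        lums = sorted((l for z, l in galaxies if zlo <= z < zhi), reverse=True)
        n_keep = int(target_density * vol)
        # If the bin is too shallow to reach the target density, keep all.
        cuts.append(lums[n_keep - 1] if 0 < n_keep <= len(lums) else min(lums))
    return cuts

# Invented (redshift, luminosity) pairs in arbitrary luminosity units.
galaxies = [(0.30, 5), (0.35, 4), (0.40, 3), (0.45, 2),
            (0.60, 9), (0.60, 8), (0.65, 7), (0.70, 6), (0.75, 5), (0.79, 1)]
cuts = luminosity_cuts(galaxies, [0.2, 0.5, 0.8], [10.0, 20.0], 0.2)
```

The deeper bin ends up with a brighter threshold, which is the behaviour that makes a luminosity-thresholded, constant-density sample well suited to clustering measurements across redshift.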