Two-pulse rapid remote surface contamination measurement.
This project demonstrated the feasibility of a 'pump-probe' optical detection method for standoff sensing of chemicals on surfaces. Such a measurement uses two optical pulses: one to remove the analyte (or a fragment of it) from the surface and a second to sense the removed material. As a particular example, this project targeted photofragmentation laser-induced fluorescence (PF-LIF) to detect surface deposits of low-volatility chemical warfare agents (LVAs). Feasibility was demonstrated for four agent surrogates on eight realistic surfaces. Its sensitivity was established for measurements on concrete and aluminum. Extrapolations were made to demonstrate relevance to the needs of outside users. Several aspects of the surface PF-LIF physical mechanism were investigated and compared to those of vapor-phase measurements. The use of PF-LIF as a rapid screening tool to 'cue' more specific sensors was recommended. Its sensitivity was compared to that of Raman spectroscopy, which is both a potential 'confirmer' of PF-LIF 'hits' and a competing screening technology.
Eye safe short range standoff aerosol cloud finder.
Because many solid objects, both stationary and mobile, will be present in an indoor environment, the design of an indoor aerosol cloud finding lidar (light detection and ranging) instrument presents a number of challenges. The cloud finder must be able to discriminate between these solid objects and aerosol clouds as small as 1 meter in depth in order to probe suspect clouds. While a near-IR (~1.5 µm) laser is desirable for eye safety, aerosol scattering cross sections are significantly lower in the near-IR than at visible or UV wavelengths. The receiver must deal with a large dynamic range, since the backscatter from solid objects will be orders of magnitude larger than from aerosol clouds. Fast electronics with significant noise contributions will be required to obtain the necessary temporal resolution. We have developed a laboratory instrument to detect aerosol clouds in the presence of solid objects. In parallel, we have developed a lidar performance model for performing trade studies. Careful attention was paid to component details so that results obtained in this study could be applied towards the development of a practical instrument. The amplitude and temporal shape of the signal return are analyzed for discrimination of aerosol clouds in an indoor environment. We have assessed the feasibility and performance of candidate approaches for a fieldable instrument. With the near-IR PMT and a 1.5-µm laser source providing 20-µJ pulses, we estimate a bio-aerosol detection limit of 3000 particles/l.
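The dynamic-range problem described above can be quantified with the single-scatter lidar equation: received energy falls as 1/R^2 and scales with the volume backscatter coefficient of the range bin. The reflectivity, aperture, and efficiency numbers below are illustrative assumptions, not values from the study.

```python
import math

def lidar_return(beta, R, A=0.01, dR=0.15, eff=0.5, E=20e-6):
    """Single-scatter lidar equation (extinction neglected): energy received
    from a range bin of depth dR [m] at range R [m], for volume backscatter
    coefficient beta [1/(m sr)]. A: aperture area [m^2], eff: optical
    efficiency, E: pulse energy [J]. All defaults are illustrative."""
    return E * eff * beta * dR * A / R**2

# Illustrative backscatter values (assumptions, not from the report):
aerosol = lidar_return(beta=1e-6, R=20.0)            # dilute aerosol cloud
wall = lidar_return(beta=0.1 / math.pi, R=20.0)      # diffuse hard target in one bin

print(f"aerosol return:     {aerosol:.3e} J")
print(f"hard-target return: {wall:.3e} J")
print(f"dynamic range:      {wall / aerosol:.1e}")
```

Even with these rough numbers, the hard-target return exceeds the aerosol return by more than four orders of magnitude, which is what drives the receiver dynamic-range requirement.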
Cosmological parameters from large scale structure - geometric versus shape information
The matter power spectrum as derived from large scale structure (LSS) surveys
contains two important and distinct pieces of information: an overall smooth
shape and the imprint of baryon acoustic oscillations (BAO). We investigate the
separate impact of these two types of information on cosmological parameter
estimation, and show that for the simplest cosmological models, the broad-band
shape information currently contained in the SDSS DR7 halo power spectrum (HPS)
is by far superseded by geometric information derived from the baryonic
features. An immediate corollary is that contrary to popular beliefs, the upper
limit on the neutrino mass m_\nu presently derived from LSS combined with
cosmic microwave background (CMB) data does not in fact arise from the possible
small-scale power suppression due to neutrino free-streaming, if we limit the
model framework to minimal LambdaCDM+m_\nu. However, in more complicated
models, such as those extended with extra light degrees of freedom and a dark
energy equation of state parameter w differing from -1, shape information
becomes crucial for the resolution of parameter degeneracies. This conclusion
will remain true even when data from the Planck surveyor become available. In
the course of our analysis, we introduce a new dewiggling procedure that allows
us to extend consistently the use of the SDSS HPS to models with an arbitrary
sound horizon at decoupling. All the cases considered here are compatible with
the conservative 95%-bounds \sum m_\nu < 1.16 eV, N_eff = 4.8 \pm 2.0. Comment: 18 pages, 4 figures; v2: references added, matches published version
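The broadband/BAO split at the heart of this analysis can be illustrated with a toy dewiggling step: fit a smooth function to the spectrum and divide it out, leaving only the oscillatory part. This is a minimal stand-in, not the paper's actual procedure; the toy spectrum, BAO amplitude, and polynomial degree are arbitrary choices.

```python
import numpy as np

def split_wiggles(k, P, deg=7):
    """Split a power spectrum into a smooth broadband part and an oscillatory
    residual by fitting a low-order polynomial in log-log space. A crude
    stand-in for the dewiggling step described in the abstract."""
    logk, logP = np.log(k), np.log(P)
    coeffs = np.polyfit(logk, logP, deg)
    P_smooth = np.exp(np.polyval(coeffs, logk))
    return P_smooth, P / P_smooth  # broadband shape, wiggle-only ratio

# Toy spectrum: a power law times a damped BAO-like oscillation
k = np.logspace(-2, 0, 400)  # wavenumber in h/Mpc
P = k**-1.5 * (1 + 0.05 * np.sin(k * 105.0) * np.exp(-k / 0.3))
P_smooth, wiggles = split_wiggles(k, P)
```

The `wiggles` array oscillates around unity; geometric (BAO) information lives there, while the `P_smooth` factor carries the broadband shape information the abstract contrasts it with.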
Stellar Content from high resolution galactic spectra via Maximum A Posteriori
This paper describes STECMAP (STEllar Content via Maximum A Posteriori), a
flexible, non-parametric inversion method for the interpretation of the
integrated light spectra of galaxies, based on synthetic spectra of single
stellar populations (SSPs). We focus on the recovery of a galaxy's star
formation history and stellar age-metallicity relation. We use the high
resolution SSPs produced by PEGASE-HR to quantify the informational content of
the wavelength range 4000 - 6800 Angstroms.
A detailed investigation of the properties of the corresponding simplified
linear problem is performed using singular value decomposition. It turns out to
be a powerful tool for explaining and predicting the behaviour of the
inversion. We provide means of quantifying the fundamental limitations of the
problem considering the intrinsic properties of the SSPs in the spectral range
of interest, as well as the noise in these models and in the data.
We performed a systematic simulation campaign and found that, when the time
elapsed between two bursts of star formation is larger than 0.8 dex, the
properties of each episode can be constrained with a precision of 0.04 dex in
age and 0.02 dex in metallicity from high quality data (R=10 000,
signal-to-noise ratio SNR=100 per pixel), not taking model errors into account.
The described methods and error estimates will be useful in the design and in
the analysis of extragalactic spectroscopic surveys. Comment: 31 pages, 23 figures, accepted for publication in MNRAS
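The singular value decomposition analysis described above can be sketched with a truncated-SVD inversion of a toy linear problem; the basis matrix below is a random stand-in for the PEGASE-HR SSP spectra, not the actual STECMAP implementation.

```python
import numpy as np

def truncated_svd_inversion(A, b, k):
    """Solve the linear problem A x ~= b keeping only the k largest singular
    values -- a simple regularization of the kind used to study the
    conditioning of spectral-inversion problems."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Toy example: a mix of 3 'SSP spectra' (columns); recover the weights
rng = np.random.default_rng(0)
A = rng.random((200, 3))              # 3 basis spectra at 200 wavelengths
x_true = np.array([0.5, 0.3, 0.2])    # true population weights
b = A @ x_true + 1e-4 * rng.standard_normal(200)  # noisy observed spectrum
x_rec = truncated_svd_inversion(A, b, k=3)
```

Dropping small singular values (reducing `k`) trades bias for noise suppression, which is exactly the trade-off the SVD analysis quantifies for the real, much larger SSP basis.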
Fluorescence measurements for evaluating the application of multivariate analysis techniques to optically thick environments.
Laser-induced fluorescence measurements of cuvette-contained laser dye mixtures are made for evaluation of multivariate analysis techniques in optically thick environments. Nine mixtures of Coumarin 500 and Rhodamine 610 are analyzed, as well as the pure dyes. For each sample, the cuvette is positioned on a two-axis translation stage to allow interrogation at different spatial locations, permitting examination of both primary (absorption of the laser light) and secondary (absorption of the fluorescence) inner filter effects. In addition to these expected inner filter effects, we find evidence that a portion of the absorbed fluorescence is re-emitted. A total of 688 spectra are acquired for the evaluation of multivariate analysis approaches to account for nonlinear effects.
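The primary and secondary inner filter effects mentioned above follow from Beer-Lambert attenuation of the pump light and of the emitted fluorescence; a toy model with purely illustrative absorption coefficients shows why the signal becomes nonlinear in concentration:

```python
import numpy as np

def fluorescence_signal(c, x, eps_ex=1.0, eps_em=0.2, L=1.0):
    """Toy inner-filter model for a cuvette of path length L: excitation is
    attenuated over depth x (primary effect), and the emitted fluorescence is
    re-absorbed over the remaining path (secondary effect). The absorption
    coefficients are illustrative, not fitted dye values."""
    primary = np.exp(-eps_ex * c * x)          # Beer-Lambert loss of pump light
    secondary = np.exp(-eps_em * c * (L - x))  # re-absorption of fluorescence
    return c * primary * secondary             # emission ~ local concentration

c = np.linspace(0.01, 5.0, 100)
front = fluorescence_signal(c, x=0.1)  # probe volume near the entrance window
back = fluorescence_signal(c, x=0.9)   # probe volume deep in the cuvette
```

At high concentration the deep probe position is strongly suppressed relative to the front one, and the signal is no longer proportional to concentration, which is the nonlinearity the multivariate analysis must accommodate.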
Impact of baryons on the cluster mass function and cosmological parameter determination
Recent results by the Planck collaboration have shown that cosmological
parameters derived from the cosmic microwave background anisotropies and
cluster number counts are in tension, with the latter preferring lower values
of the matter density parameter, \Omega_m, and power spectrum
amplitude, \sigma_8. Motivated by this, we investigate the extent to which
the tension may be ameliorated once the effect of baryonic depletion on the
cluster mass function is taken into account. We use the large-volume Millennium
Gas simulations in our study, including one where the gas is pre-heated at high
redshift and one where the gas is heated by stars and active galactic nuclei
(in the latter, the self-gravity of the baryons and radiative cooling are
omitted). In both cases, the cluster baryon fractions are in reasonably good
agreement with the data at low redshift, showing significant depletion of
baryons with respect to the cosmic mean. As a result, it is found that the
cluster abundance in these simulations is around 15 per cent lower than the
commonly adopted fit to dark matter simulations by Tinker et al. (2008) over the
mass range probed. Ignoring this effect
produces a significant artificial shift in cosmological parameters which can be
expressed as a bias in the recovered parameters at the median redshift of the
cluster sample for the feedback model. While this shift is not sufficient to
fully explain the discrepancy, it is clear that such an effect cannot be
ignored in future precision measurements of cosmological parameters with
clusters. Finally, we outline a simple, model-independent procedure that
attempts to correct for the effect of baryonic depletion and show that it works
if the baryon-dark matter back-reaction is negligible. Comment: 10 pages, 5 figures, accepted by MNRAS
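The size of the abundance shift quoted above can be checked with a back-of-the-envelope calculation: for a locally power-law mass function, a small depletion of every halo mass translates directly into a count deficit above a mass threshold. The slope and depletion fraction below are illustrative assumptions, not values from the paper.

```python
def abundance_shift(delta_m, alpha=3.0):
    """Fractional change in cluster counts above a fixed mass threshold when
    every halo mass is shifted by delta_m = dM/M, assuming a local power-law
    cumulative mass function n(>M) ~ M^-alpha. alpha=3 is an illustrative
    high-mass slope, not a fitted value."""
    return (1.0 + delta_m) ** alpha - 1.0

# A ~5% mass depletion from missing baryons (illustrative number) gives a
# count deficit of roughly 14%, comparable to the ~15% quoted in the abstract.
print(abundance_shift(-0.05))
```

The steepness of the mass function at cluster scales is what amplifies a few-percent mass shift into a double-digit change in counts, and hence into a bias in inferred cosmological parameters.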
The Atacama Cosmology Telescope: Data Characterization and Map Making
We present a description of the data reduction and mapmaking pipeline used
for the 2008 observing season of the Atacama Cosmology Telescope (ACT). The
data presented here at 148 GHz represent 12% of the 90 TB collected by ACT from
2007 to 2010. In 2008 we observed for 136 days, producing a total of 1423 hours
of data (11 TB for the 148 GHz band only), with a daily average of 10.5 hours
of observation. From these, 1085 hours were devoted to an 850 deg^2 stripe (11.2
hours by 9.1 deg) centered on a declination of -52.7 deg, while 175 hours were
devoted to a 280 deg^2 stripe (4.5 hours by 4.8 deg) centered at the celestial
equator. We discuss sources of statistical and systematic noise, calibration,
telescope pointing, and data selection. Out of 1260 survey hours and 1024
detectors per array, 816 hours and 593 effective detectors remain after data
selection for this frequency band, yielding a 38% survey efficiency. The total
sensitivity in 2008, determined from the noise level between 5 Hz and 20 Hz in
the time-ordered data stream (TOD), is 32 micro-Kelvin sqrt{s} in CMB units.
Atmospheric brightness fluctuations constitute the main contaminant in the data
and dominate the detector noise covariance at low frequencies in the TOD. The
maps were made by solving the least-squares problem using the Preconditioned
Conjugate Gradient method, incorporating the details of the detector and noise
correlations. Cross-correlation with WMAP sky maps, as well as analysis from
simulations, reveal that our maps are unbiased at multipoles ell > 300. This
paper accompanies the public release of the 148 GHz southern stripe maps from
2008. The techniques described here will be applied to future maps and data
releases. Comment: 20 pages, 18 figures, 6 tables, an ACT Collaboration paper
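The mapmaking step described above amounts to solving the normal equations of a least-squares problem with the Preconditioned Conjugate Gradient method. A minimal sketch follows, with a small dense symmetric positive definite matrix standing in for the pointing and noise operators (an assumption for illustration; the real pipeline never forms the matrix explicitly):

```python
import numpy as np

def pcg(A, b, Minv, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient for A x = b, A symmetric positive
    definite. A and Minv are callables, so a real pipeline can apply pointing
    and noise operators matrix-free."""
    x = np.zeros_like(b)
    r = b - A(x)          # initial residual
    z = Minv(r)           # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy normal equations: a dense SPD stand-in with a diagonal preconditioner
rng = np.random.default_rng(1)
M_dense = rng.random((50, 50))
A_dense = M_dense @ M_dense.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = pcg(lambda v: A_dense @ v, b, lambda r: r / np.diag(A_dense))
```

A diagonal (Jacobi) preconditioner is the simplest choice; the gain from preconditioning grows as the detector and noise correlations make the true system poorly conditioned.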
Forecasting ground-based sensitivity to the Rayleigh scattering of the CMB in the presence of astrophysical foregrounds
The Rayleigh scattering of cosmic microwave background (CMB) photons off the
neutral hydrogen produced during recombination effectively creates an
additional scattering surface after recombination that encodes new cosmological
information, including the expansion and ionization history of the universe. A
first detection of Rayleigh scattering is a tantalizing target for
next-generation CMB experiments. We have developed a Rayleigh scattering
forecasting pipeline that includes instrumental effects, atmospheric noise, and
astrophysical foregrounds (e.g., Galactic dust, cosmic infrared background, or
CIB, and the thermal Sunyaev-Zel'dovich effect). We forecast the Rayleigh
scattering detection significance for several upcoming ground-based
experiments, including SPT-3G+, Simons Observatory, CCAT-prime, and CMB-S4, and
examine the limitations from atmospheric and astrophysical foregrounds as well
as potential mitigation strategies. When combined with Planck data, we estimate
that the ground-based experiments will detect Rayleigh scattering with a
significance between 1.6 and 3.7, primarily limited by atmospheric noise and
the CIB. Comment: 19 pages, 7 figures (v2: additional author added)
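When the measurement bins are independent, a total detection significance of the kind quoted above follows from combining the per-bin signal-to-noise in quadrature; a minimal sketch with purely illustrative numbers:

```python
import numpy as np

def detection_significance(signal, sigma):
    """Total signal-to-noise of a template measured in independent bins,
    combined in quadrature: S/N = sqrt(sum_i (s_i / sigma_i)^2)."""
    return np.sqrt(np.sum((signal / sigma) ** 2))

# Illustrative stand-ins for a per-multipole-bin Rayleigh signal and errors
ell = np.arange(1, 11)
signal = 1.0 / ell                         # toy signal template
sigma = np.ones_like(ell, dtype=float)     # toy per-bin uncertainty
print(detection_significance(signal, sigma))
```

In a real forecast the per-bin uncertainties would come from instrument noise, the atmosphere, and foreground residuals after cleaning, which is why the abstract identifies atmospheric noise and the CIB as the limiting terms.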
Cosmological parameters constraints from galaxy cluster mass function measurements in combination with other cosmological data
We present the cosmological parameters constraints obtained from the
combination of galaxy cluster mass function measurements (Vikhlinin et al.,
2009a,b) with new cosmological data obtained during the last three years: updated
measurements of cosmic microwave background anisotropy with Wilkinson Microwave
Anisotropy Probe (WMAP) observatory, and at smaller angular scales with South
Pole Telescope (SPT), new Hubble constant measurements, baryon acoustic
oscillations and supernovae Type Ia observations.
New constraints on total neutrino mass and effective number of neutrino
species are obtained. In models with a free number of massive neutrino
species, these constraints are notably weaker, and all the cosmological data
considered are consistent with a non-zero total neutrino mass \Sigma m_\nu
\approx 0.4 eV and a larger-than-standard effective number of neutrino species,
N_eff \approx 4. These constraints are compared to the results of neutrino
oscillations searches at short baselines.
The updated dark energy equation of state parameters constraints are
presented. We show that, taking into account systematic uncertainties, current
cluster mass function data provide similarly powerful constraints on dark
energy equation of state, as compared to the constraints from supernovae Type
Ia observations. Comment: Accepted for publication in Astronomy Letters
Constraining the expansion rate of the Universe using low-redshift ellipticals as cosmic chronometers
We present a new methodology to determine the expansion history of the
Universe analyzing the spectral properties of early type galaxies (ETG). We
found that for these galaxies the 4000\AA break is a spectral feature that
correlates with the relative ages of ETGs. In this paper we describe the
method, explore its robustness using theoretical synthetic stellar population
models, and apply it using an SDSS sample of 14 000 ETGs. Our motivation
to look for a new technique has been to minimise the dependence of the cosmic
chronometer method on systematic errors. In particular, as a test of our
method, we derive the value of the Hubble constant, with statistical and
systematic uncertainties quoted at 68% confidence; it is not only fully
compatible with the value derived from the Hubble Key Project, but also has a comparable error
budget. Using the SDSS, we also derive, assuming w=constant, a value for the
dark energy equation of state parameter, again with statistical and systematic
uncertainties. Given that the SDSS ETG sample only reaches low redshift, this
result shows the potential of the method. In future papers we will present
results using the high-redshift universe, to yield a determination of H(z) up
to higher redshift. Comment: 25 pages, 17 figures, accepted by JCAP
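The cosmic-chronometer method underlying this paper rests on the relation H(z) = -1/(1+z) dz/dt, where dt is the differential age of passively evolving galaxies between two nearby redshifts. A minimal finite-difference sketch (the ages and redshifts below are illustrative, not the paper's measurements):

```python
def hubble_from_ages(z1, z2, t1_gyr, t2_gyr):
    """Cosmic-chronometer estimate of H at the midpoint of (z1, z2), from the
    mean stellar ages t1, t2 [Gyr] of passively evolving galaxies in the two
    redshift bins. Returns H in km/s/Mpc (1 Gyr^-1 ~ 977.79 km/s/Mpc)."""
    z_mid = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / (t2_gyr - t1_gyr)   # finite-difference dz/dt [1/Gyr]
    H_inv_gyr = -dz_dt / (1.0 + z_mid)      # H(z) = -1/(1+z) dz/dt
    return H_inv_gyr * 977.79

# Illustrative inputs: galaxies at higher z are younger, so dt < 0 for dz > 0
print(hubble_from_ages(0.10, 0.15, t1_gyr=11.0, t2_gyr=10.35))
```

Because only the age *difference* between bins enters, any systematic that shifts all ages equally (e.g., an offset in the stellar population models) largely cancels, which is the robustness argument the abstract makes for the 4000 Å break diagnostic.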