Design and operation of an autosampler controlled flow-injection preconcentration system for lead determination by flame atomic absorption spectrometry
Flow-injection manifolds are described that allow the preconcentration
of lead for flame atomic absorption determinations, using
columns contained within the sample loop of an injection valve. An
interface was designed that allowed the valves and pump in the system to be
controlled by an autosampler, enabling precise timing of the preconcentration
and elution steps. The effects of sample
flow rate, buffer pH and buffer type for preconcentration and
eluent concentration and flow rate were investigated in order to
obtain optimum performance of the system. A 50-times improvement
in detection limits over conventional sample introduction was
obtained for a sample volume of approximately 12 ml, preconcentrated for
150 s. Injecting the eluent, rather than using a continuously flowing eluent
stream, allowed this reagent to be conserved.
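As a purely illustrative consistency check on the figures quoted above (not part of the original work), the implied sample loading rate, and the eluate volume under the crude assumption that the gain scales with the volume ratio, follow from simple arithmetic:

    # Illustrative arithmetic implied by the abstract (the 50-fold figure is the
    # reported detection-limit improvement, not something derived here).
    sample_volume_ml = 12.0            # sample loaded onto the column
    preconc_time_s = 150.0
    loading_rate = sample_volume_ml / (preconc_time_s / 60.0)
    print(f"implied sample loading rate ~ {loading_rate:.1f} mL/min")   # ~4.8 mL/min

    improvement = 50.0                 # reported gain over direct aspiration
    # If the gain scaled purely with the volume ratio (a crude assumption), the
    # eluted plug would need to be roughly:
    print(f"eluate volume under that assumption ~ {sample_volume_ml / improvement:.2f} mL")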
Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy
The analysis of gravitational wave data involves many model selection
problems. The most important example is the detection problem of selecting
between the data being consistent with instrument noise alone, or instrument
noise and a gravitational wave signal. The analysis of data from ground-based
gravitational wave detectors is mostly conducted using classical statistics,
and methods such as the Neyman-Pearson criteria are used for model selection.
Future space-based detectors, such as the Laser Interferometer Space
Antenna (LISA), are expected to produce rich data streams containing the
signals from many millions of sources. Determining the number of sources that
are resolvable, and the most appropriate description of each source poses a
challenging model selection problem that may best be addressed in a Bayesian
framework. An important class of LISA sources is the millions of low-mass
binary systems within our own galaxy, tens of thousands of which will be
detectable. Not only is the number of sources unknown, but so is the number
of parameters required to model the waveforms. For example, a significant
subset of the resolvable galactic binaries will exhibit orbital frequency
evolution, while a smaller number will have measurable eccentricity. In the
Bayesian approach to model selection one needs to compute the Bayes factor
between competing models. Here we explore various methods for computing Bayes
factors in the context of determining which galactic binaries have measurable
frequency evolution. The methods explored include a Reverse Jump Markov Chain
Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz-Bayes
Information Criterion (BIC), and the Laplace approximation to the model
evidence. We find good agreement between all of the approaches.
Comment: 11 pages, 6 figures
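To illustrate two of the cheaper approximations named above, the sketch below compares a BIC-based estimate and a Savage-Dickey ratio for a toy nested-model question ("is an extra slope parameter needed?"), using a simple linear model in place of LISA waveforms; every name and number in it is an assumption made for the example, not taken from the paper.

    # Toy comparison of two cheap Bayes-factor approximations (illustration only).
    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 200, 1.0
    t = np.linspace(0.0, 1.0, n)
    y = 2.0 + 1.0 * t + rng.normal(0.0, sigma, n)   # data with a real "slope"

    # Maximum-likelihood fits (Gaussian noise, sigma known; constants cancel below)
    a0 = y.mean()                                   # model 0: slope fixed to zero
    X = np.column_stack([np.ones(n), t])            # model 1: intercept and slope free
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    logL0 = -0.5 * np.sum((y - a0) ** 2) / sigma**2
    logL1 = -0.5 * np.sum((y - X @ coef) ** 2) / sigma**2

    # Schwarz/BIC approximation: ln BF_10 ~ (BIC_0 - BIC_1) / 2
    bic0 = 1 * np.log(n) - 2 * logL0
    bic1 = 2 * np.log(n) - 2 * logL1
    lnBF10_bic = 0.5 * (bic0 - bic1)

    # Savage-Dickey ratio for the nested slope parameter b, prior b ~ N(0, tau^2).
    # The marginal posterior of b is treated as Gaussian (lstsq estimate and its
    # variance), a shortcut adequate for this toy model.
    tau = 2.0
    cov = sigma**2 * np.linalg.inv(X.T @ X)
    b_hat, var_b = coef[1], cov[1, 1]
    post_var = 1.0 / (1.0 / var_b + 1.0 / tau**2)
    post_mean = post_var * b_hat / var_b
    prior_at_0 = 1.0 / np.sqrt(2 * np.pi * tau**2)
    post_at_0 = np.exp(-0.5 * post_mean**2 / post_var) / np.sqrt(2 * np.pi * post_var)
    lnBF10_sd = np.log(prior_at_0 / post_at_0)      # BF_10 = prior / posterior at b = 0

    print(f"ln BF_10 (BIC):           {lnBF10_bic:.2f}")
    print(f"ln BF_10 (Savage-Dickey): {lnBF10_sd:.2f}")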
Theory of Spike Spiral Waves in a Reaction-Diffusion System
We have discovered a new type of spiral wave solution in reaction-diffusion
systems, the spike spiral wave, which differs significantly from the spiral
waves observed in FitzHugh-Nagumo-type models. We present an asymptotic theory
of these waves in the Gray-Scott model. We derive the kinematic relations
describing the shape of this spiral and find the dependence of its main
parameters on the control parameters. The theory does not rely on the specific
features of the Gray-Scott model and thus is expected to be applicable to a
broad range of reaction-diffusion systems.
Comment: 4 pages (REVTeX), 2 figures (postscript), submitted to Phys. Rev.
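For orientation only, a minimal explicit finite-difference sketch of the Gray-Scott model (the system the asymptotic theory addresses) is given below; the grid, time step, and parameter values are illustrative guesses, not the paper's, and the paper's analysis is asymptotic rather than numerical.

    # Minimal explicit finite-difference sketch of the Gray-Scott model
    # (illustrative parameters; not the paper's asymptotic construction).
    import numpy as np

    def laplacian(a):
        # 5-point stencil with periodic boundaries
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

    n, dx, dt = 256, 1.0, 1.0
    Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060        # assumed pattern-forming regime
    u = np.ones((n, n))
    v = np.zeros((n, n))
    s = slice(n // 2 - 10, n // 2 + 10)            # localized seed perturbation
    u[s, s], v[s, s] = 0.50, 0.25

    for _ in range(5000):
        uvv = u * v * v
        u += dt * (Du * laplacian(u) / dx**2 - uvv + F * (1.0 - u))
        v += dt * (Dv * laplacian(v) / dx**2 + uvv - (F + k) * v)

    print("u range:", u.min(), u.max())            # inspect or plot v to see the pattern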
Validation and Calibration of Models for Reaction-Diffusion Systems
Space and time scales are not independent in diffusion. In fact, numerical
simulations show that different patterns are obtained when the space and time
steps (Δx and Δt) are varied independently. On the other hand, anisotropy
effects due to the symmetries of the discretization lattice prevent the
quantitative calibration of models. We introduce a new class of explicit
difference methods for the numerical integration of diffusion and
reaction-diffusion equations, in which the dependence on space and time scales
occurs naturally. Numerical solutions approach the exact solution of the
continuous diffusion equation for finite Δx and Δt, provided a parameter
combining the space and time steps assumes a fixed constant value, with the
algorithm parametrized by an odd positive integer. The error between the
solutions of the discrete and the continuous equations goes to zero as the
discretization is refined, and the values of this parameter are dimension
independent. With these new integration methods, anisotropy effects resulting
from the finite differences are minimized, defining a standard for validation
and calibration of numerical solutions of diffusion and reaction-diffusion
equations. Comparison between numerical and analytical solutions of
reaction-diffusion equations gives small global discretization errors in the
sup norm. Circular patterns of travelling waves have a maximum relative random
deviation from spherical symmetry of the order of 0.2%, and the fluctuations
around the mean circular wave front have a small standard deviation.
Comment: 33 pages, 8 figures, to appear in Int. J. Bifurcation and Chaos
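In the spirit of the calibration test described above, the sketch below refines a standard explicit (FTCS) diffusion scheme while holding the combination D*dt/dx^2 fixed and measures the sup-norm error against the analytic Gaussian solution; this uses the ordinary explicit method, not the paper's new class of schemes, and all settings are assumptions for illustration.

    # Convergence check with the standard explicit (FTCS) scheme: refine dx while
    # holding lam = D*dt/dx**2 fixed and compare with the analytic Gaussian in the
    # sup norm.
    import numpy as np

    def ftcs_error(nx, lam=0.25, D=1.0, t0=0.02, t_end=0.1, L=6.0):
        dx = L / nx
        dt = lam * dx**2 / D
        x = (np.arange(nx) + 0.5) * dx - L / 2
        gauss = lambda t: np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
        u = gauss(t0)                                                # narrow Gaussian start
        steps = int(round((t_end - t0) / dt))
        for _ in range(steps):
            u = u + lam * (np.roll(u, 1) + np.roll(u, -1) - 2 * u)   # periodic ends
        return np.max(np.abs(u - gauss(t0 + steps * dt)))            # sup-norm error

    for nx in (64, 128, 256, 512):
        print(nx, ftcs_error(nx))                  # error should shrink roughly as dx^2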
Detection of weak gravitational lensing distortions of distant galaxies by cosmic dark matter at large scales
Most of the matter in the universe is not luminous and can be observed
directly only through its gravitational effect. An emerging technique called
weak gravitational lensing uses background galaxies to reveal the foreground
dark matter distribution on large scales. Light from very distant galaxies
travels to us through many intervening overdensities which gravitationally
distort their apparent shapes. The observed ellipticity pattern of these
distant galaxies thus encodes information about the large-scale structure of
the universe, but attempts to measure this effect have been inconclusive due to
systematic errors. We report the first detection of this "cosmic shear" using
145,000 background galaxies to reveal the dark matter distribution on angular
scales up to half a degree in three separate lines of sight. The observed
angular dependence of this effect is consistent with that predicted by two
leading cosmological models, providing new and independent support for these
models.
Comment: 18 pages, 5 figures. To appear in Nature. (This replacement fixes TeX
errors and typos.)
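The basic statistic behind such a measurement is the ellipticity two-point correlation; a schematic estimator on a mock catalogue is sketched below (this is not the authors' pipeline, and the positions, shapes, and binning are invented for illustration). The xi_+ correlation is frame-independent, so the simple e1*e1' + e2*e2' form used here is adequate for a sketch.

    # Schematic xi_+(theta) estimator on a mock galaxy catalogue.
    import numpy as np

    rng = np.random.default_rng(1)
    ngal = 1000
    ra, dec = rng.uniform(0, 0.5, ngal), rng.uniform(0, 0.5, ngal)   # degrees
    e1, e2 = rng.normal(0, 0.3, (2, ngal))                           # mock ellipticities

    # Flat-sky angular separations (small field), in degrees
    dra = (ra[:, None] - ra[None, :]) * np.cos(np.deg2rad(dec.mean()))
    ddec = dec[:, None] - dec[None, :]
    theta = np.hypot(dra, ddec)

    i, j = np.triu_indices(ngal, k=1)
    sep = theta[i, j]
    corr = e1[i] * e1[j] + e2[i] * e2[j]

    bins = np.linspace(0.01, 0.5, 11)                # out to half a degree
    which = np.digitize(sep, bins)
    for b in range(1, len(bins)):
        sel = which == b
        if sel.any():
            print(f"{bins[b-1]:.2f}-{bins[b]:.2f} deg: xi_+ = {corr[sel].mean():+.2e}")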
Synoptic Sky Surveys and the Diffuse Supernova Neutrino Background: Removing Astrophysical Uncertainties and Revealing Invisible Supernovae
The cumulative (anti)neutrino production from all core-collapse supernovae
within our cosmic horizon gives rise to the diffuse supernova neutrino
background (DSNB), which is on the verge of detectability. The observed flux
depends on supernova physics, but also on the cosmic history of supernova
explosions; currently, the cosmic supernova rate introduces a substantial
(+/-40%) uncertainty, largely through its absolute normalization. However, a
new class of wide-field, repeated-scan (synoptic) optical sky surveys is coming
online, and will map the sky in the time domain with unprecedented depth,
completeness, and dynamic range. We show that these surveys will obtain the
cosmic supernova rate by direct counting, in an unbiased way and with high
statistics, and thus will allow for precise predictions of the DSNB. Upcoming
sky surveys will substantially reduce the uncertainties in the DSNB source
history to an anticipated +/-5%, dominated by systematics, so that the
observed high-energy flux will test supernova neutrino physics. The
portion of the universe (z < 1) accessible to upcoming sky surveys includes the
progenitors of a large fraction (~ 87%) of the expected 10-26 MeV DSNB event
rate. We show that precision determination of the (optically detected) cosmic
supernova history will also make the DSNB into a strong probe of an extra flux
of neutrinos from optically invisible supernovae, which may be unseen either
due to unexpected large dust obscuration in host galaxies, or because some
core-collapse events proceed directly to black hole formation and fail to give
an optical outburst.
Comment: 11 pages, 6 figures
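The dependence of the DSNB on the cosmic supernova history enters through a line-of-sight integral; the sketch below evaluates one common form of that integral with assumed placeholder values for the rate normalization and evolution, the emitted spectrum, and the cosmology, none of which are taken from the paper.

    # Back-of-the-envelope DSNB flux integral with assumed placeholder inputs.
    import numpy as np

    H0 = 70.0 * 1.0e5 / 3.086e24             # s^-1  (70 km/s/Mpc, assumed)
    Om, OL = 0.3, 0.7                        # assumed flat LCDM
    c_cm = 3.0e10                            # cm/s

    # Assumed comoving core-collapse rate: ~1.25e-4 / Mpc^3 / yr at z=0, rising as (1+z)^3.4
    R0 = 1.25e-4 / (3.086e24) ** 3 / 3.15e7  # SN / cm^3 / s
    R_SN = lambda z: R0 * (1.0 + z) ** 3.4   # rough form, adequate for z < 1

    # Assumed time-integrated anti-nu_e spectrum per SN: ~5e52 erg, <E> = 3T = 15 MeV
    T = 5.0                                  # MeV
    E_tot = 5e52 / 1.602e-6                  # erg -> MeV
    norm = E_tot / (6.0 * T**4)              # so that Int E * phi(E) dE = E_tot
    phi = lambda E: norm * E**2 * np.exp(-E / T)   # nu per MeV per SN

    def dphi_dE(E, zmax=1.0, nz=400):
        """One common form of the line-of-sight integral:
        dPhi/dE = c * Int R_SN(z) * phi[E(1+z)] * (1+z) * |dt/dz| dz."""
        z = np.linspace(0.0, zmax, nz)
        dtdz = 1.0 / (H0 * (1.0 + z) * np.sqrt(Om * (1.0 + z) ** 3 + OL))
        integrand = R_SN(z) * phi(E * (1.0 + z)) * (1.0 + z) * dtdz
        return c_cm * integrand.sum() * (z[1] - z[0])   # nu / (cm^2 s MeV)

    for E in (10.0, 18.0, 26.0):
        print(f"E = {E:4.1f} MeV : dPhi/dE ~ {dphi_dE(E):.2e} /cm^2/s/MeV")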
Towards Precision LSST Weak-Lensing Measurement - I: Impacts of Atmospheric Turbulence and Optical Aberration
The weak-lensing science of the LSST project drives the need to carefully
model and separate the instrumental artifacts from the intrinsic lensing
signal. The dominant source of systematics for all ground-based telescopes
is the spatial correlation of the PSF, modulated by both atmospheric turbulence
and optical aberrations. In this paper, we present a full field-of-view (FOV)
simulation of the
LSST images by modeling both the atmosphere and the telescope optics with the
most current data for the telescope specifications and the environment. To
simulate the effects of atmospheric turbulence, we generated six-layer phase
screens with the parameters estimated from the on-site measurements. For the
optics, we combined the ray-tracing tool ZEMAX and our simulated focal plane
data to introduce realistic aberrations and focal plane height fluctuations.
Although this expected flatness deviation for LSST is small compared with that
of other existing cameras, the fast f-ratio of the LSST optics makes this focal
plane flatness variation and the resulting PSF discontinuities across the CCD
boundaries significant challenges in our removal of the systematics. We resolve
this complication by performing PCA CCD-by-CCD, and interpolating the basis
functions using conventional polynomials. We demonstrate that this PSF
correction scheme reduces the residual PSF ellipticity correlation below 10^-7
over cosmologically interesting scales. From a null test using HST/UDF
galaxy images without input shear, we verify that the amplitude of the galaxy
ellipticity correlation function, after the PSF correction, is consistent with
the shot noise set by the finite number of objects. Therefore, we conclude that
the current optical design and specification for the accuracy in the focal
plane assembly are sufficient to enable the control of the PSF systematics
required for weak-lensing science with the LSST.
Comment: Accepted to PASP. High-resolution version is available at
http://dls.physics.ucdavis.edu/~mkjee/LSST_weak_lensing_simulation.pd
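A reduced illustration of the PCA idea described above, on a single mock CCD: stack per-exposure PSF-ellipticity maps, extract principal components, and reconstruct a new exposure from sparse star samples. The paper additionally interpolates the basis functions with conventional polynomials; the sketch below simply fits amplitudes of the PCA basis, and all data are synthetic.

    # Single-CCD sketch of PCA-based PSF-pattern modeling (synthetic data).
    import numpy as np

    rng = np.random.default_rng(2)
    n_exp, ny, nx = 50, 20, 20
    yy, xx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")

    # Mock per-exposure PSF-ellipticity maps: smooth spatial modes + noise
    modes = np.stack([xx, yy, xx * yy, xx**2 - yy**2])
    amps = rng.normal(0.0, 0.02, (n_exp, len(modes)))
    maps = np.tensordot(amps, modes, axes=1) + rng.normal(0.0, 0.002, (n_exp, ny, nx))

    # PCA over exposures (the paper does this CCD-by-CCD; one CCD shown here)
    flat = maps.reshape(n_exp, -1)
    mean = flat.mean(axis=0)
    U, S, Vt = np.linalg.svd(flat - mean, full_matrices=False)
    basis = Vt[:3].reshape(3, ny, nx)                 # leading spatial basis functions

    # Reconstruct one exposure's PSF pattern from sparse "star" samples
    truth = maps[0]
    stars = rng.integers(0, ny * nx, 60)              # pixel indices of mock stars
    A = Vt[:3][:, stars].T                            # basis evaluated at star positions
    coef, *_ = np.linalg.lstsq(A, (truth.ravel() - mean)[stars], rcond=None)
    model = mean.reshape(ny, nx) + np.tensordot(coef, basis, axes=1)
    print("rms residual of the PSF-pattern model:", np.std(model - truth))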
Galaxy Peculiar Velocities From Large-Scale Supernova Surveys as a Dark Energy Probe
Upcoming imaging surveys such as the Large Synoptic Survey Telescope will
repeatedly scan large areas of sky and have the potential to yield
million-supernova catalogs. Type Ia supernovae are excellent standard candles
and will provide distance measures that suffice to detect mean pairwise
velocities of their host galaxies. We show that when combining these distance
measures with photometric redshifts for either the supernovae or their host
galaxies, the mean pairwise velocities of the host galaxies will provide a dark
energy probe which is competitive with other widely discussed methods. Adding
information from this test to type Ia supernova photometric luminosity
distances from the same experiment, plus the cosmic microwave background power
spectrum from the Planck satellite, improves the Dark Energy Task Force Figure
of Merit by a factor of 1.8. Pairwise velocity measurements require no
additional observational effort beyond that required to perform the traditional
supernova luminosity distance test, but may provide complementary constraints
on dark energy parameters and the nature of gravity. Incorporating additional
spectroscopic redshift follow-up observations could provide important dark
energy constraints from pairwise velocities alone. Mean pairwise velocities are
much less sensitive to systematic redshift errors than the luminosity distance
test or weak lensing techniques, and also are only mildly affected by
systematic evolution of supernova luminosity.
Comment: 18 pages; 4 figures; 4 tables; replaced to match the accepted version
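The underlying statistic is the mean pairwise velocity v12(r), the average of (v_i - v_j) projected onto the pair separation over all pairs at separation r; the sketch below evaluates that definition on mock data (random velocities, so the result is consistent with zero, whereas real gravitational infall gives negative v12 on small scales).

    # Mean pairwise velocity v12(r) on mock data.
    import numpy as np

    rng = np.random.default_rng(3)
    ngal = 1500
    pos = rng.uniform(0.0, 200.0, (ngal, 3))      # Mpc/h, mock positions
    vel = rng.normal(0.0, 300.0, (ngal, 3))       # km/s, mock peculiar velocities

    i, j = np.triu_indices(ngal, k=1)
    dr = pos[i] - pos[j]
    r = np.linalg.norm(dr, axis=1)
    rhat = dr / r[:, None]
    vpair = np.einsum("pk,pk->p", vel[i] - vel[j], rhat)   # (v_i - v_j) . rhat_ij

    bins = np.linspace(0.0, 50.0, 11)             # Mpc/h separation bins
    which = np.digitize(r, bins)
    for b in range(1, len(bins)):
        sel = which == b
        if sel.any():
            print(f"{bins[b-1]:5.1f}-{bins[b]:5.1f} Mpc/h: v12 = {vpair[sel].mean():+7.1f} km/s")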