Dark energy and curvature from a future baryonic acoustic oscillation survey using the Lyman-alpha forest
We explore the requirements for a Lyman-alpha forest (LyaF) survey designed
to measure the angular diameter distance and Hubble parameter at 2~<z~<4 using
the standard ruler provided by baryonic acoustic oscillations (BAO). The goal
would be to obtain a high enough density of sources to probe the
three-dimensional density field on the scale of the BAO feature. A
percent-level measurement in this redshift range can almost double the Dark
Energy Task Force Figure of Merit, relative to the case with only a similar
precision measurement at z~1, if the Universe is not assumed to be flat. This
improvement is greater than the one obtained by doubling the size of the z~1
survey, with Planck and a weak SDSS-like z=0.3 BAO measurement assumed in each
case. Galaxy BAO surveys at z~1 may be able to make an effective LyaF
measurement simultaneously at minimal added cost, because the required number
density of quasars is relatively small. We discuss the constraining power as a
function of area, magnitude limit (density of quasars), resolution, and
signal-to-noise of the spectra. For example, a survey covering 2000 sq. deg.
and achieving S/N=1.8 per Ang. at g=23 (~40 quasars per sq. deg.) with an
R~>250 spectrograph is sufficient to measure both the radial and transverse
oscillation scales to 1.4% from the LyaF (or better, if fainter magnitudes and
possibly Lyman-break galaxies can be used). At fixed integration time and in
the sky-noise-dominated limit, a wider, noisier survey is generally more
efficient; the only fundamental upper limit on noise being the need to identify
a quasar and find a redshift. Because the LyaF is much closer to linear and
generally better understood than galaxies, systematic errors are even less
likely to be a problem.
Comment: 18 pages including 6 figures, submitted to PR
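The standard-ruler geometry behind this survey design can be sketched numerically: the BAO feature of comoving size r_s subtends an angle r_s/D_M(z) transversely and a redshift extent r_s H(z)/c radially. The snippet below is an illustrative sketch under an assumed flat LCDM model; the fiducial H0, Omega_m, and r_s values are our placeholders, not the paper's fiducials.

```python
import math

def bao_observables(z, H0=70.0, Om=0.3, r_s=150.0, n=10000):
    """Illustrative BAO standard-ruler observables in flat LCDM.

    H0 in km/s/Mpc, sound horizon r_s in Mpc (assumed fiducial values).
    Returns (transverse BAO angle in degrees, radial BAO extent in redshift).
    """
    c = 299792.458  # speed of light, km/s
    E = lambda zp: math.sqrt(Om * (1 + zp) ** 3 + (1 - Om))
    # Comoving distance D_C = (c/H0) * integral of dz'/E(z'), trapezoidal rule.
    dz = z / n
    integral = sum(0.5 * (1 / E(i * dz) + 1 / E((i + 1) * dz)) * dz
                   for i in range(n))
    D_C = (c / H0) * integral          # comoving distance (= D_M when flat), Mpc
    theta = math.degrees(r_s / D_C)    # transverse scale: angle on the sky
    delta_z = r_s * H0 * E(z) / c      # radial scale: extent in redshift
    return theta, delta_z
```

Measuring both observables at the same redshift is what constrains D_A(z) and H(z) separately, which is why the abstract quotes radial and transverse errors individually.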
E-QED: Electrical Bug Localization During Post-Silicon Validation Enabled by Quick Error Detection and Formal Methods
During post-silicon validation, manufactured integrated circuits are
extensively tested in actual system environments to detect design bugs. Bug
localization involves identification of a bug trace (a sequence of inputs that
activates and detects the bug) and a hardware design block where the bug is
located. Existing bug localization practices during post-silicon validation are
mostly manual and ad hoc, and, hence, extremely expensive and time consuming.
This is particularly true for subtle electrical bugs caused by unexpected
interactions between a design and its electrical state. We present E-QED, a new
approach that automatically localizes electrical bugs during post-silicon
validation. Our results on the OpenSPARC T2, an open-source
500-million-transistor multicore chip design, demonstrate the effectiveness and
practicality of E-QED: starting with a failed post-silicon test, in a few hours
(9 hours on average) we can automatically narrow the location of the bug to
(the fan-in logic cone of) a handful of candidate flip-flops (18 flip-flops on
average for a design with ~1 million flip-flops) and also obtain the
corresponding bug trace. The area impact of E-QED is ~2.5%. In contrast,
determining this same information might take weeks (or even months) of mostly
manual work using traditional approaches.
Expected Large Synoptic Survey Telescope (LSST) Yield of Eclipsing Binary Stars
In this paper we estimate the Large Synoptic Survey Telescope (LSST) yield of
eclipsing binary stars, which will survey ~20,000 square degrees of the
southern sky during the period of 10 years in 6 photometric passbands to r ~
24.5. We generate a set of 10,000 eclipsing binary light curves sampled to the
LSST time cadence across the whole sky, with added noise as a function of
apparent magnitude. This set is passed to the Analysis of Variance (AoV) period
finder to assess the recoverability rate for the periods, and the successfully
phased light curves are passed to the artificial intelligence-based pipeline
EBAI to assess the recoverability rate in terms of the eclipsing binaries'
physical and geometric parameters. We find that, out of ~24 million eclipsing
binaries observed by LSST with S/N>10 in mission life-time, ~28% or 6.7 million
can be fully characterized by the pipeline. Of those, ~25% or 1.7 million will
be double-lined binaries, a true treasure trove for stellar astrophysics.
Comment: 19 pages, 7 figures. Accepted to AJ, to appear in issue 142:2 (Aug
2011)
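The period-recovery step described above can be illustrated with a simplified analysis-of-variance statistic: fold the light curve at each trial period, bin it in phase, and prefer periods that maximize the ratio of between-bin to within-bin variance. This is a minimal sketch in the spirit of the AoV method, not the actual LSST/EBAI pipeline code.

```python
def aov_statistic(times, mags, period, nbins=10):
    """AoV-style statistic for one trial period: ratio of between-bin
    to within-bin variance of the phase-folded light curve.  Larger
    values indicate a better period (simplified sketch)."""
    phases = [(t / period) % 1.0 for t in times]
    bins = [[] for _ in range(nbins)]
    for ph, m in zip(phases, mags):
        bins[min(int(ph * nbins), nbins - 1)].append(m)
    mean = sum(mags) / len(mags)
    s_between = sum(len(b) * (sum(b) / len(b) - mean) ** 2
                    for b in bins if b)
    s_within = sum((m - sum(b) / len(b)) ** 2
                   for b in bins if b for m in b)
    k = sum(1 for b in bins if b)   # occupied bins
    n = len(mags)
    return (s_between / (k - 1)) / (s_within / (n - k) + 1e-12)

def best_period(times, mags, trial_periods):
    """Scan a grid of trial periods and return the one that maximizes
    the AoV statistic."""
    return max(trial_periods, key=lambda p: aov_statistic(times, mags, p))
```

On an irregularly sampled noiseless sinusoid this recovers the input period; the abstract's recoverability rates come from applying this kind of search to realistically noisy, LSST-cadence light curves.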
Precise Null Pointer Analysis Through Global Value Numbering
Precise analysis of pointer information plays an important role in many
static analysis techniques and tools today. The precision, however, must be
balanced against the scalability of the analysis. This paper focusses on
improving the precision of standard context and flow insensitive alias analysis
algorithms at a low scalability cost. In particular, we present a
semantics-preserving program transformation that drastically improves the
precision of existing analyses when deciding if a pointer can alias NULL. Our
program transformation is based on Global Value Numbering, a scheme inspired
from compiler optimizations literature. It allows even a flow-insensitive
analysis to make use of branch conditions such as checking if a pointer is NULL
and gain precision. We perform experiments on real-world code to measure the
overhead in performing the transformation and the improvement in the precision
of the analysis. We show that the precision improves from 86.56% to 98.05%,
while the overhead is insignificant.
Comment: 17 pages, 1 section in Appendix
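The core idea of value numbering, which the transformation builds on, can be sketched in a few lines: syntactically different expressions that compute the same value receive the same number, so a NULL-check on one pointer also establishes non-nullness of its equals. This toy implementation over three-address statements is our illustration of the general technique, not the paper's actual algorithm.

```python
def value_number(stmts):
    """Toy global value numbering over three-address statements
    (dst, op, arg1, arg2).  Returns a map from variable to value
    number; variables with equal numbers provably hold equal values,
    so a branch condition such as `p != NULL` can be credited to every
    pointer sharing p's value number, even by a flow-insensitive
    analysis."""
    table = {}    # (op, vn_arg1, vn_arg2) -> value number
    env = {}      # variable -> value number
    counter = [0]

    def vn_of(x):
        if x not in env:
            counter[0] += 1
            env[x] = counter[0]   # fresh number for an unknown input
        return env[x]

    for dst, op, a1, a2 in stmts:
        key = (op, vn_of(a1), vn_of(a2))
        if key not in table:
            counter[0] += 1
            table[key] = counter[0]
        env[dst] = table[key]
    return env
```

For example, `p = base + off` and `q = base + off` get one value number, while `r = base + one` gets another; the paper's transformation exploits exactly this kind of equivalence to propagate NULL-check information.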
The Energy Conserving Particle-in-Cell Method
A new Particle-in-Cell (PIC) method that conserves energy exactly is
presented. The particle equations of motion and Maxwell's equations are
differenced implicitly in time by the midpoint rule and solved concurrently by
a Jacobian-free Newton Krylov (JFNK) solver. Several tests show that the finite
grid instability is eliminated in energy conserving PIC simulations, and the
method correctly describes the two-stream and Weibel instabilities, conserving
exactly the total energy. The computational time of the energy conserving PIC
method increases linearly with the number of particles, and it is rather
insensitive to the number of grid points and time step. The kinetic enslavement
technique can be effectively used to reduce the problem matrix size and the
number of JFNK solver iterations.
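Why the implicit midpoint rule conserves energy exactly can be seen on the simplest possible case, a harmonic oscillator (a stand-in for the coupled particle-field system; unit mass and frequency are our assumptions). For this linear system the implicit equations admit a closed-form solve, whereas the full PIC method uses JFNK iterations.

```python
def midpoint_step(x, v, dt):
    """One implicit-midpoint step for the harmonic oscillator
    x' = v, v' = -x.  The midpoint equations
        x1 = x + (dt/2)*(v + v1),  v1 = v - (dt/2)*(x + x1)
    are solved in closed form.  Illustrative sketch only; a PIC code
    would solve the analogous nonlinear system with a JFNK solver."""
    a = dt / 2.0
    v1 = ((1 - a * a) * v - 2 * a * x) / (1 + a * a)
    x1 = x + a * (v + v1)
    return x1, v1

def energy(x, v):
    """Total energy, a quadratic invariant of the flow."""
    return 0.5 * (x * x + v * v)
```

The midpoint update is a Cayley transform of a skew-symmetric matrix, hence orthogonal, so the quadratic energy is preserved to round-off at any time step; this is the discrete analogue of the exact conservation the abstract reports.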
Recommended from our members
Visualisation of Origins, Destinations and Flows with OD Maps
We present a new technique for the visual exploration of origins (O) and destinations (D) arranged in geographic space. Previous attempts to map the flows between origins and destinations have suffered from problems of occlusion, usually requiring some form of generalisation, such as aggregation or flow density estimation, before they can be visualised. This can lead to loss of detail or the introduction of arbitrary artefacts in the visual representation. Here, we propose mapping OD vectors as cells rather than lines, comparable with the process of constructing OD matrices, but unlike the OD matrix, we preserve the spatial layout of all origin and destination locations by constructing a gridded two-level spatial treemap. The result is a set of spatially ordered small multiples upon which any arbitrary geographic data may be projected. Using a hash grid spatial data structure, we explore the characteristics of the technique through a software prototype that allows interactive query and visualisation of 10^5-10^6 simulated and recorded OD vectors. The technique is illustrated using US county to county migration and commuting statistics.
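The two-level layout described above can be sketched as a nesting rule: the outer grid cell is chosen by the origin's location and the inner cell within it by the destination's, so each OD pair maps to a unique pixel while both endpoints keep their spatial ordering. This minimal sketch assumes a uniform n-by-n grid; the function and parameter names are ours, not the prototype's.

```python
def od_map_cell(origin, dest, bounds, n):
    """Place one OD vector in a two-level OD map grid.

    bounds = (x0, y0, x1, y1) is the study region; n is the grid
    resolution per level.  The big (outer) cell is the origin's grid
    cell; the small (inner) cell nested inside it is the destination's
    cell, giving an n^2-by-n^2 pixel grid of spatially ordered small
    multiples.  Illustrative sketch of the gridded two-level layout.
    """
    x0, y0, x1, y1 = bounds

    def cell(pt):
        cx = min(int((pt[0] - x0) / (x1 - x0) * n), n - 1)
        cy = min(int((pt[1] - y0) / (y1 - y0) * n), n - 1)
        return cx, cy

    (ocx, ocy), (dcx, dcy) = cell(origin), cell(dest)
    # Pixel coordinates: outer cell index scaled by n plus inner index.
    return ocx * n + dcx, ocy * n + dcy
```

Because every OD pair has its own pixel, many vectors can be drawn without the occlusion that line-based flow maps suffer from; aggregation becomes a per-pixel count rather than a preprocessing step.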
Measuring 14 elemental abundances with R=1,800 LAMOST spectra
The LAMOST survey has acquired low-resolution spectra (R=1,800) for 5 million
stars across the Milky Way, far more than any current stellar survey at a
corresponding or higher spectral resolution. It is often assumed that only very
few elemental abundances can be measured from such low-resolution spectra,
limiting their utility for Galactic archaeology studies. However, Ting et al.
(2017) used ab initio models to argue that low-resolution spectra should enable
precision measurements of many elemental abundances, at least in theory. Here
we verify this claim in practice by measuring the relative abundances of 14
elements from LAMOST spectra with a precision of 0.1 dex for objects
with S/N > 30 (per pixel). We employ a spectral modeling
method in which a data-driven model is combined with priors that the model
gradient spectra should resemble ab initio spectral models. This approach
assures that the data-driven abundance determinations draw on physically
sensible features in the spectrum in their predictions and do not just exploit
astrophysical correlations among abundances. Our analysis is constrained to the
number of elemental abundances measured in the APOGEE survey, which is the
source of the training labels. Obtaining high quality/resolution spectra for a
subset of LAMOST stars to measure more elemental abundances as training labels
and then applying this method to the full LAMOST catalog will provide a sample
with more than 20 elemental abundances that is an order of magnitude larger
than current high-resolution surveys, substantially increasing the sample size
for Galactic archaeology.
Comment: 6 pages, 3 figures, ApJ (accepted for publication 2017 October 9)
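The regularization idea, a data-driven fit whose gradient is pulled toward an ab initio prior, can be illustrated in its simplest one-parameter form: penalize deviation of the fitted slope from a physically motivated prior value. This closed-form toy is our sketch of the general idea, not the actual LAMOST/APOGEE pipeline.

```python
def fit_with_gradient_prior(x, y, w_prior, lam):
    """Fit a one-parameter linear model y ~ w*x by minimizing
        sum_i (y_i - w * x_i)**2 + lam * (w - w_prior)**2,
    where w_prior plays the role of an ab initio gradient and lam
    sets the prior's strength.  Solved in closed form (set the
    derivative with respect to w to zero)."""
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return (sxy + lam * w_prior) / (sxx + lam)
```

As lam goes to 0 the fit is purely data-driven; as lam grows, the slope is pulled to the ab initio prior. In the full problem the same trade-off keeps the model's gradient spectra anchored to physically sensible spectral features rather than astrophysical correlations among abundances.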