A Foundation of Programming a Multi-Tape Quantum Turing machine
The notion of quantum Turing machines is a basis of quantum complexity
theory. We discuss a general model of multi-tape, multi-head quantum Turing machines with multiple final states that also allow tape heads to stay still.
The Geometry of Niggli Reduction I: The Boundary Polytopes of the Niggli Cone
Correct identification of the Bravais lattice of a crystal is an important
step in structure solution. Niggli reduction is a commonly used technique. We
investigate the boundary polytopes of the Niggli-reduced cone in the
six-dimensional space G6 by algebraic analysis and organized random probing of
regions near 1- through 8-fold boundary polytope intersections. We limit
consideration of boundary polytopes to those avoiding the mathematically
interesting but crystallographically impossible cases of zero-length cell edges. Combinations of boundary polytopes without a valid intersection in the closure of the Niggli cone, with an intersection that would force a cell edge to zero, or without neighboring probe points are eliminated. In total, 216 boundary polytopes are found: 15 5-D boundary polytopes of the full G6 Niggli cone; 53 4-D boundary polytopes resulting from intersections of pairs of the 15 5-D boundary polytopes; 79 3-D boundary polytopes resulting from 2-fold, 3-fold, and 4-fold intersections of the 15 5-D boundary polytopes; 55 2-D boundary polytopes resulting from 2-fold, 3-fold, 4-fold, and higher intersections of the 15 5-D boundary polytopes; and 14 1-D boundary polytopes resulting from 3-fold and higher intersections of the 15 5-D boundary polytopes. All primitive lattice types can
be represented as combinations of the 15 5-D boundary polytopes. All
non-primitive lattice types can be represented as combinations of the 15 5-D
boundary polytopes and of the 7 special-position subspaces of the 5-D boundary
polytopes. This study provides a new, simpler and arguably more intuitive basis
set for the classification of lattice characters and helps to illuminate some
of the complexities in Bravais lattice identification. The classification is
intended to help in organizing database searches and in understanding which
lattice symmetries are "close" to a given experimentally determined cell.
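For orientation, the sketch below (a minimal illustration, not code from the paper) builds the G6 vector on which this kind of analysis is based, using the common convention g = (a^2, b^2, c^2, 2bc cos alpha, 2ac cos beta, 2ab cos gamma); the function name and sample cell are hypothetical.

```python
import math

def g6_from_cell(a, b, c, alpha_deg, beta_deg, gamma_deg):
    """Return the G6 representation of a unit cell.

    Common convention: g = (a^2, b^2, c^2,
    2*b*c*cos(alpha), 2*a*c*cos(beta), 2*a*b*cos(gamma)).
    """
    alpha, beta, gamma = (math.radians(x) for x in (alpha_deg, beta_deg, gamma_deg))
    return (a * a, b * b, c * c,
            2.0 * b * c * math.cos(alpha),
            2.0 * a * c * math.cos(beta),
            2.0 * a * b * math.cos(gamma))

# Hypothetical orthorhombic cell; the three angle terms vanish.
print(g6_from_cell(3.0, 4.0, 5.0, 90.0, 90.0, 90.0))  # (9.0, 16.0, 25.0, 0.0, 0.0, 0.0)
```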
The Geometry of Niggli Reduction II: BGAOL -- Embedding Niggli Reduction
Niggli reduction can be viewed as a series of operations in a six-dimensional
space derived from the metric tensor. An implicit embedding of the space of
Niggli-reduced cells in a higher dimensional space to facilitate calculation of
distances between cells is described. This distance metric is used to create a
program, BGAOL, for Bravais lattice determination. Results from BGAOL are
compared to the results from other metric-based Bravais lattice determination
algorithms.
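As a naive point of comparison only (the embedding described above is designed specifically to do better than this), one could measure the distance between two reduced cells as the Euclidean distance between their G6 vectors; the function and sample values below are hypothetical.

```python
import math

def naive_g6_distance(g6_a, g6_b):
    """Euclidean distance between two G6 vectors.

    This baseline ignores the Niggli-cone boundary transformations that
    an embedding is meant to handle, so nearly identical cells whose
    reduced forms land on opposite sides of a boundary can appear far apart.
    """
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(g6_a, g6_b)))

# Two hypothetical, nearly tetragonal cells in G6 form.
cell_1 = (16.0, 16.0, 25.0, 0.0, 0.0, 0.0)
cell_2 = (16.0, 16.1, 25.2, 0.1, 0.0, 0.0)
print(naive_g6_distance(cell_1, cell_2))
```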
Upper Energy Limit of Heavy Baryon Chiral Perturbation Theory in Neutral Pion Photoproduction
With the availability of the new data on neutral pion photoproduction from the proton from the A2 and CB-TAPS Collaborations at Mainz, it is mandatory to revisit
Heavy Baryon Chiral Perturbation Theory (HBChPT) and address the extraction of
the partial waves as well as other issues such as the value of the low-energy
constants, the energy range where the calculation provides a good agreement
with the data and the impact of unitarity. We find that, within the current
experimental status, HBChPT with the fitted LECs gives a good agreement with
the existing neutral pion photoproduction data up to 170 MeV and that
imposing unitarity does not improve this picture. Above this energy the data
call for further improvement in the theory such as the explicit inclusion of
the Δ(1232). We also find that data and multipoles can be well described
up to 185 MeV with Taylor expansions in the partial waves up to first
order in pion energy.
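Schematically, the first-order Taylor expansion referred to in the last sentence has the form below; the expansion point (taken here to be the production threshold) and the coefficients are illustrative, not the values fitted in the paper.

```latex
% Generic first-order expansion of a partial-wave amplitude M_\ell in the
% pion energy \omega_\pi; a_\ell and b_\ell are free parameters fitted to data.
M_\ell(\omega_\pi) \simeq a_\ell + b_\ell \left( \omega_\pi - \omega_\pi^{\mathrm{thr}} \right)
```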
ERTS-1 analysis in the Monterey Bay Area, using digital tapes
There are no author-identified significant results in this report.
Photometric Redshift Biases from Galaxy Evolution
Proposed cosmological surveys will make use of photometric redshifts of galaxies that are significantly fainter than those in any existing complete spectroscopic redshift survey available to train the photo-z methods. We investigate the
photo-z biases that result from known differences between the faint and bright
populations: a rise in AGN activity toward higher redshift, and a metallicity
difference between intrinsically luminous and faint early-type galaxies. We
find that even very small mismatches between the mean photometric target population and
the training set can induce photo-z biases large enough to corrupt derived
cosmological parameters significantly. A metallicity shift of ~0.003 dex in an
old population, or contamination of any galaxy spectrum with ~0.2% AGN flux, is
sufficient to induce a 10^-3 bias in photo-z. These results highlight the
danger in extrapolating the behavior of bright galaxies to a fainter
population, and the desirability of a spectroscopic training set that spans all
of the characteristics of the photo-z targets, i.e., extending to the 25th-magnitude or fainter galaxies that will be used in future surveys.
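A minimal sketch of the kind of bias statistic at stake, assuming the common convention of normalizing the photo-z offset by 1 + z; the data in the example are synthetic placeholders, not survey data.

```python
import numpy as np

def photoz_bias(z_phot, z_spec):
    """Mean normalized photo-z offset <(z_phot - z_spec) / (1 + z_spec)>.

    A coherent offset at roughly the 1e-3 level in a statistic like this
    is the scale quoted above as already damaging for derived cosmology.
    """
    z_phot = np.asarray(z_phot, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)
    return np.mean((z_phot - z_spec) / (1.0 + z_spec))

# Hypothetical training/target mismatch: a tiny coherent shift in z_phot.
rng = np.random.default_rng(0)
z_true = rng.uniform(0.2, 1.5, size=100_000)
z_est = z_true + 0.002 * (1.0 + z_true)  # coherent 2e-3 offset
print(photoz_bias(z_est, z_true))        # ~2e-3
```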
Analytic design of spaceborne axial injection cross-field amplifiers Final report
S band crossed-field amplifier suitable for satellite television relay system
The suitability of various spacecraft for future space applications missions
The Space Applications Advisory Committee (SAAC) of NASA's Advisory Council was asked by the Associate Administrator for Space Science and Applications to consider the most suitable future means for accomplishing space application missions. To comply with this request, SAAC formed a Task Force whose report is contained in this document. In its considerations, the Task Force looked into the suitability of likely future spacecraft options for supporting various types of application mission payloads. These options encompass a permanent manned space station, the Space Shuttle operating in a sortie mode, unmanned platforms that integrate a wide variety of instruments or other devices, and smaller free fliers that accommodate at most a few functions. The Task Force also recognized that the various elements could be combined to form a larger space infrastructure. This report summarizes the results obtained by the Task Force. It describes the approach utilized, the findings and their analysis, and the conclusions.
Reconstructing the direction of reactor antineutrinos via electron scattering in Gd-doped water Cherenkov detectors
The potential of elastic antineutrino-electron scattering in a Gd-doped water
Cherenkov detector to determine the direction of a nuclear reactor antineutrino
flux was investigated using the recently proposed WATCHMAN antineutrino
experiment as a baseline model. The expected scattering rate was determined
assuming a 13-km standoff from a 3.758-GWt light water nuclear reactor and the
detector response was modeled using a Geant4-based simulation package.
Background was estimated via independent simulations and by scaling published
measurements from similar detectors. Background contributions were estimated
for solar neutrinos, misidentified reactor-based inverse beta decay
interactions, cosmogenic radionuclides, water-borne radon, and gamma rays from
the photomultiplier tubes (PMTs), detector walls, and surrounding rock. We show
that with the use of low-background PMTs and sufficient fiducialization,
water-borne radon and cosmogenic radionuclides pose the largest threats to
sensitivity. Directional sensitivity was then analyzed as a function of radon
contamination, detector depth, and detector size. The results provide a list of
experimental conditions that, if satisfied in practice, would enable
antineutrino directional reconstruction at 3σ significance in large
Gd-doped water Cherenkov detectors with greater than 10-km standoff from a
nuclear reactor.
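As a rough illustration of the kind of directional statistic involved (not the analysis actually used in the paper), the sketch below sums the reconstructed electron unit vectors, whose forward peaking carries the reactor direction, and compares the length of the resultant to the random-walk expectation for an isotropic background; all inputs are hypothetical.

```python
import numpy as np

def direction_significance(unit_vectors):
    """Toy direction estimate and crude significance from electron unit vectors.

    Under isotropy each component of the summed resultant has variance N/3,
    so |resultant| / sqrt(N/3) gives a rough (not rigorous) significance.
    This is only a toy statistic, not the paper's reconstruction method.
    """
    v = np.asarray(unit_vectors, dtype=float)
    resultant = v.sum(axis=0)
    direction = resultant / np.linalg.norm(resultant)
    significance = np.linalg.norm(resultant) / np.sqrt(len(v) / 3.0)
    return direction, significance

# Hypothetical toy sample: isotropic background plus a small forward-scattered
# excess along +z (the assumed reactor direction).
rng = np.random.default_rng(1)
bkg = rng.normal(size=(5000, 3))
bkg /= np.linalg.norm(bkg, axis=1, keepdims=True)
sig = np.tile([0.0, 0.0, 1.0], (150, 1)) + 0.3 * rng.normal(size=(150, 3))
sig /= np.linalg.norm(sig, axis=1, keepdims=True)
direction, significance = direction_significance(np.vstack([bkg, sig]))
print(direction, significance)
```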
Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs
The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced-order control design methodology for high-order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.