CORROSION OF IRON-BASE ALLOYS VERSUS ALTERNATE MATERIALS IN GEOTHERMAL BRINES (Interim Report - Period Ending October 1977)
This geothermal corrosion program aims to determine why geothermal brines are so corrosive to economical iron-base alloys. The program involves tests of many materials in high-pressure equipment in which a wide variety of brine chemistries can be studied. The validity of these lab tests is checked by field tests in actual geothermal brine. A series of 30 refreshed autoclave tests and one field test have been completed to define how various chemical components in geothermal brines affect uniform corrosion of 35 materials.
Structured optical receivers to attain superadditive capacity and the Holevo limit
When classical information is sent over a quantum channel, attaining the
ultimate limit to channel capacity requires the receiver to make joint
measurements over long codeword blocks. For a pure-state channel, we construct
a receiver that can attain the ultimate capacity by applying a single-shot
unitary transformation on the received quantum codeword followed by
simultaneous (but separable) projective measurements on the
single-modulation-symbol state spaces. We study the ultimate limits of
photon-information-efficient communications on a lossy bosonic channel. Based
on our general results for the pure-state quantum channel, we show some of the
first concrete examples of codes and structured joint-detection optical
receivers that can achieve fundamentally higher (superadditive) channel
capacity than conventional receivers that detect each modulation symbol
individually. Comment: 4 pages, 4 figures
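The capacity gap that joint detection closes can be made concrete with a standard textbook calculation (an illustration, not the paper's receiver construction): for a binary coherent-state (BPSK) pure-state channel, the Holevo limit follows from the entropy of the average state, while symbol-by-symbol homodyne detection yields only the Shannon capacity of a binary symmetric channel.

```python
import math

def h2(p):
    """Binary Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def holevo_bpsk(n_bar):
    """Holevo limit (bits/symbol) for equiprobable coherent-state BPSK
    {|alpha>, |-alpha>} with mean photon number n_bar = |alpha|^2.
    For two pure states the average state has eigenvalues (1 +/- s)/2,
    where s = |<alpha|-alpha>| = exp(-2 n_bar)."""
    s = math.exp(-2.0 * n_bar)
    return h2((1.0 - s) / 2.0)

def homodyne_bpsk(n_bar):
    """Shannon capacity of symbol-by-symbol homodyne detection of BPSK:
    a binary symmetric channel whose flip probability is the Gaussian
    tail probability 0.5 * erfc(sqrt(2 * n_bar))."""
    q = 0.5 * math.erfc(math.sqrt(2.0 * n_bar))
    return 1.0 - h2(q)
```

At low mean photon number (e.g. `n_bar = 0.1`) the Holevo limit clearly exceeds the single-symbol homodyne capacity; that gap is the superadditive advantage a joint-detection receiver can recover.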
Measuring the effective complexity of cosmological models
We introduce a statistical measure of the effective model complexity, called
the Bayesian complexity. We demonstrate that the Bayesian complexity can be
used to assess how many effective parameters a set of data can support and that
it is a useful complement to the model likelihood (the evidence) in model
selection questions. We apply this approach to recent measurements of cosmic
microwave background anisotropies combined with the Hubble Space Telescope
measurement of the Hubble parameter. Using mildly non-informative priors, we
show how the 3-year WMAP data improves on the first-year data by being able to
measure both the spectral index and the reionization epoch at the same time. We
also find that a non-zero curvature is strongly disfavored. We conclude that
although current data could constrain at least seven effective parameters, only
six of them are required in a scheme based on the Lambda-CDM concordance
cosmology. Comment: 9 pages, 4 figures; revised version accepted for publication in PRD,
updated with WMAP3 results
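The Bayesian complexity used here is commonly defined as C_b = <chi^2>_posterior - chi^2(theta_hat), the posterior-averaged chi-square minus its value at a point estimate; it estimates how many parameters the data actually constrain. The following is a minimal sketch of that estimator on a hypothetical Gaussian toy model (not the WMAP analysis), where the answer is known to equal the number of well-measured parameters:

```python
import numpy as np

def bayesian_complexity(chi2_samples, chi2_at_estimate):
    """Bayesian complexity C_b = <chi^2>_posterior - chi^2(theta_hat):
    an estimate of the number of effective parameters the data support.
    chi2_samples    : chi^2 = -2 ln(likelihood) at posterior samples
    chi2_at_estimate: chi^2 at a point estimate (e.g. the posterior mean)"""
    return float(np.mean(chi2_samples)) - chi2_at_estimate

# Toy check (hypothetical Gaussian model): with d well-measured parameters
# and unit-variance Gaussian posteriors centred on the data, C_b -> d.
rng = np.random.default_rng(0)
d = 6
samples = rng.normal(0.0, 1.0, size=(100_000, d))  # posterior draws
chi2 = np.sum(samples**2, axis=1)                  # chi^2 relative to the data
cb = bayesian_complexity(chi2, chi2_at_estimate=0.0)
```

In this toy setting `cb` comes out close to 6: each well-constrained parameter contributes one unit to the posterior-mean chi-square, mirroring the paper's conclusion that the data support about six effective parameters.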
Generalized Hurst exponent and multifractal function of original and translated texts mapped into frequency and length time series
A nonlinear dynamics approach can be used in order to quantify complexity in
written texts. As a first step, a one-dimensional system is examined: two
written texts by one author (Lewis Carroll), together with one translation
into an artificial language (Esperanto), are mapped into time series. Their
corresponding shuffled versions are used to obtain a "base line". Two
different one-dimensional time series are used here: (i) one based
on word lengths (LTS), (ii) the other on word frequencies (FTS). It is shown
that the generalized Hurst exponent and the derived curves
of the original and translated texts show marked differences. The original
"texts" are far from giving a parabolic function, in contrast to
the shuffled texts. Moreover, the Esperanto text has more extreme values. This
suggests that written texts behave like cascade models, with multiscale,
time-asymmetric features. A discussion of the difference and complementarity of
mapping into an LTS or FTS is presented. The FTS curves are more
open than the LTS ones. Comment: preprint for PRE; 2 columns; 10 pages; 6
multi-figures; 3 Tables; 70 references
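One common way to estimate a generalized Hurst exponent H(q) from such a series is the q-th order structure-function method; the sketch below is that generic estimator (the lag range, series length, and random-walk demo are illustrative assumptions, not the paper's text data):

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Estimate the generalized Hurst exponent H(q) of a 1-D series x from
    the scaling of q-th order structure functions:
        K_q(tau) = <|x[t+tau] - x[t]|^q>  ~  tau^(q * H(q)).
    Returns the log-log slope of K_q vs tau, divided by q."""
    x = np.asarray(x, dtype=float)
    taus = np.asarray(list(taus))
    kq = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
    slope = np.polyfit(np.log(taus), np.log(kq), 1)[0]
    return slope / q

# Demo on a synthetic random walk, for which H(2) should be near 0.5:
rng = np.random.default_rng(42)
walk = np.cumsum(rng.normal(size=20000))
h2_walk = generalized_hurst(walk, q=2)
```

For a monofractal signal H(q) is flat in q; the marked q-dependence reported for the original and translated texts is what signals multifractal, cascade-like structure.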
Fracture toughness of brittle materials determined with chevron notch specimens
The use of chevron-notch specimens for determining the plane strain fracture toughness (K sub Ic) of brittle materials is discussed. Three chevron-notch specimens were investigated: short bar, short rod, and four-point-bend. The dimensionless stress intensity coefficient used in computing K sub Ic is derived for the short bar specimen from the superposition of ligament-dependent and ligament-independent solutions for the straight-through crack, and also from experimental compliance calibrations. Coefficients for the four-point-bend specimen were developed by the same superposition procedure, and with additional refinement using the slice model of Bluhm. Short rod specimen stress intensity coefficients were determined only by experimental compliance calibration. Performance of the three chevron-notch specimens and their stress intensity factor relations were evaluated by tests on hot-pressed silicon nitride and sintered aluminum oxide. Results obtained with the short bar and the four-point-bend specimens on silicon nitride are in good agreement and relatively free of specimen geometry and size effects within the range investigated. Results on aluminum oxide were affected by specimen size and chevron-notch geometry, believed to be due to a rising crack-growth-resistance curve for the material. Only the results for the short bar specimen are presented in detail.
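For each geometry, the toughness evaluation reduces to one relation of the standard chevron-notch form K_Ic = Y*_min P_max / (B sqrt(W)); the minimum dimensionless coefficient Y*_min is precisely what the compliance calibrations and superposition analyses supply. A minimal sketch, with hypothetical numbers (not the report's data):

```python
import math

def k_ic_chevron(p_max, b, w, y_star_min):
    """Plane-strain fracture toughness from a chevron-notch test:
        K_Ic = Y*_min * P_max / (B * sqrt(W))
    p_max      : maximum load [N]
    b          : specimen thickness [m]
    w          : specimen width (per the geometry convention) [m]
    y_star_min : minimum dimensionless stress intensity coefficient for
                 the geometry (from compliance calibration or the
                 superposition analysis described above)
    Returns K_Ic in Pa*sqrt(m)."""
    return y_star_min * p_max / (b * math.sqrt(w))

# Hypothetical example values (NOT measurements from the report):
k = k_ic_chevron(p_max=100.0, b=0.005, w=0.01, y_star_min=25.0)
# 25 * 100 / (0.005 * 0.1) = 5.0e6 Pa*sqrt(m), i.e. 5 MPa*sqrt(m)
```

No load-crack-length measurement is needed because the chevron forces the evaluation at the minimum of Y*, which is what makes these specimens attractive for brittle materials.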
Quantum Analogue Computing
We briefly review what a quantum computer is, what it promises to do for us,
and why it is so hard to build one. Among the first applications anticipated to
bear fruit is quantum simulation of quantum systems. While most quantum
computation is an extension of classical digital computation, quantum
simulation differs fundamentally in how the data is encoded in the quantum
computer. To perform a quantum simulation, the Hilbert space of the system to
be simulated is mapped directly onto the Hilbert space of the (logical) qubits
in the quantum computer. This type of direct correspondence is how data is
encoded in a classical analogue computer. There is no binary encoding, and
increasing precision becomes exponentially costly: an extra bit of precision
doubles the size of the computer. This has important consequences for both the
precision and error correction requirements of quantum simulation, and
significant open questions remain about its practicality. It also means that
the quantum version of analogue computers, continuous-variable quantum
computers (CVQC), becomes an equally efficient architecture for quantum
simulation. Lessons from past use of classical analogue computers can help us
to build better quantum simulators in future. Comment: 10 pages, to appear in the Visions 2010 issue of Phil. Trans. Roy. Soc.
Improving Detectors Using Entangling Quantum Copiers
We present a detection scheme which, using imperfect detectors and imperfect
quantum copying machines (which entangle the copies), allows one to extract
more information from an incoming signal than with the imperfect detectors
alone. Comment: 4 pages, 2 figures, REVTeX, to be published in Phys. Rev.