The 1600 CE Huaynaputina eruption as a possible trigger for persistent cooling in the North Atlantic region
Paleoclimate reconstructions have identified a period of exceptional summer and winter cooling in the North Atlantic region following the eruption of the tropical volcano Huaynaputina (Peru) in 1600 CE. A previous study based on numerical climate simulations has indicated a potential mechanism for the persistent cooling in a slowdown of the North Atlantic subpolar gyre (SPG) and consequent ocean-atmosphere feedbacks. To examine whether this mechanism could have been triggered by the Huaynaputina eruption, this study compares the simulations used in the previous study, both with and without volcanic forcing and the SPG shift, to reconstructions from annual proxies in natural archives and historical written records, as well as to contemporary historical observations of relevant climate and environmental conditions. These reconstructions and observations demonstrate patterns of cooling and sea-ice expansion consistent with, but not indicative of, an eruption trigger for the proposed SPG slowdown mechanism. The results point to possible improvements in future model-data comparison studies utilizing historical written records. Moreover, we consider historical societal impacts and adaptations associated with the reconstructed climatic and environmental anomalies.
Minimum Decision Cost for Quantum Ensembles
For a given ensemble of independent and identically prepared particles,
we calculate the binary decision costs of different strategies for measurement
of polarised spin 1/2 particles. The result proves that, for any given values
of the prior probabilities and any number of constituent particles, the cost
for a combined measurement is always less than or equal to that for any
combination of separate measurements upon sub-ensembles. The Bayes cost, which
is the cost associated with the optimal strategy (i.e., a combined measurement),
is obtained in a simple closed form.
Comment: 11 pages, uses RevTeX
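
The closed-form Bayes cost itself is not reproduced in this listing, but the standard result it builds on, the Helstrom bound for binary discrimination of two density operators with given priors, is straightforward to evaluate numerically. The sketch below computes the minimum error probability for a combined measurement on N identically prepared spin-1/2 particles polarised along two different directions; the specific states, priors, and cost assignment are illustrative assumptions, not the paper's exact setup.

    import numpy as np
    from functools import reduce

    def spin_half_state(theta):
        """Pure spin-1/2 density matrix polarised at angle theta in the x-z plane."""
        psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        return np.outer(psi, psi.conj())

    def helstrom_error(rho1, rho2, p1):
        """Minimum error probability for discriminating rho1 (prior p1) from rho2 (prior 1-p1)."""
        gamma = p1 * rho1 - (1 - p1) * rho2
        trace_norm = np.sum(np.abs(np.linalg.eigvalsh(gamma)))
        return 0.5 * (1 - trace_norm)

    def ensemble_error(rho1, rho2, p1, n):
        """Error for a combined measurement on n identically prepared particles."""
        r1 = reduce(np.kron, [rho1] * n)
        r2 = reduce(np.kron, [rho2] * n)
        return helstrom_error(r1, r2, p1)

    if __name__ == "__main__":
        rho_a = spin_half_state(0.0)          # spin up along z (assumed state)
        rho_b = spin_half_state(np.pi / 3)    # tilted by 60 degrees (assumed state)
        for n in (1, 2, 4):
            print(n, ensemble_error(rho_a, rho_b, p1=0.5, n=n))

With equal priors, the error decreases as N grows, illustrating the advantage of treating the ensemble as a whole rather than measuring sub-ensembles separately.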
Error estimates for solid-state density-functional theory predictions: an overview by means of the ground-state elemental crystals
Predictions of observable properties by density-functional theory (DFT)
calculations are increasingly used as data in experimental condensed-matter
physics and materials engineering. These predictions are used to
analyze recent measurements, or to plan future experiments. Increasingly more
experimental scientists in these fields therefore face the natural question:
what is the expected error for such an ab initio prediction? Information and
experience about this question are scattered over two decades of literature. The
present review aims to summarize and quantify this implicit knowledge. This
leads to a practical protocol that allows any scientist - experimental or
theoretical - to determine justifiable error estimates for many basic property
predictions, without having to perform additional DFT calculations. A central
role is played by a large and diverse test set of crystalline solids,
containing all ground-state elemental crystals (except most lanthanides). For
several properties of each crystal, the difference between DFT results and
experimental values is assessed. We discuss trends in these deviations and
review explanations suggested in the literature. A prerequisite for such an
error analysis is that different implementations of the same first-principles
formalism provide the same predictions. Therefore, the reproducibility of
predictions across several mainstream methods and codes is discussed too. A
quality factor Delta expresses the spread in predictions from two distinct DFT
implementations by a single number. To compare the PAW method to the highly
accurate APW+lo approach, a code assessment of VASP and GPAW with respect to
WIEN2k yields Delta values of 1.9 and 3.3 meV/atom, respectively. These
differences are an order of magnitude smaller than the typical difference with
experiment, and therefore predictions by APW+lo and PAW are for practical
purposes identical.
Comment: 27 pages, 20 figures, supplementary material available (v5 contains
updated supplementary material)
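
As a rough illustration of how a Delta-style quality factor can be evaluated, the sketch below compares two equation-of-state curves E(V), standing in for two DFT implementations, by the root-mean-square energy difference over a volume window around equilibrium. The Birch-Murnaghan form and the specific parameters are assumptions for illustration; the published Delta protocol should be consulted for the exact definition and reference data.

    import numpy as np

    def birch_murnaghan(volume, e0, v0, b0, b0_prime):
        """Birch-Murnaghan equation of state E(V)."""
        eta = (v0 / volume) ** (2.0 / 3.0)
        return e0 + 9.0 * v0 * b0 / 16.0 * (
            (eta - 1.0) ** 3 * b0_prime + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
        )

    def delta_factor(params_a, params_b, v0, window=0.06, npoints=1001):
        """RMS energy difference per atom between two E(V) fits over +/- window around v0."""
        volumes = np.linspace(v0 * (1.0 - window), v0 * (1.0 + window), npoints)
        diff = birch_murnaghan(volumes, *params_a) - birch_murnaghan(volumes, *params_b)
        return np.sqrt(np.trapz(diff ** 2, volumes) / (volumes[-1] - volumes[0]))

    if __name__ == "__main__":
        # Hypothetical fits for the same crystal from two codes:
        # (E0 in eV/atom, V0 in Angstrom^3/atom, B0 in eV/Angstrom^3, B0').
        code_a = (0.000, 20.00, 0.70, 4.5)
        code_b = (0.002, 20.05, 0.71, 4.4)
        print("Delta ~ %.2f meV/atom" % (1000.0 * delta_factor(code_a, code_b, v0=20.0)))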
Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves
Kepler provides light curves of 156,000 stars with unprecedented precision.
However, the raw data as they come from the spacecraft contain significant
systematic and stochastic errors. These errors, which include discontinuities,
systematic trends, and outliers, obscure the astrophysical signals in the light
curves. To correct these errors is the task of the Presearch Data Conditioning
(PDC) module of the Kepler data analysis pipeline. The original version of PDC
in Kepler did not meet the extremely high performance requirements for the
detection of minuscule planet transits or highly accurate analysis of stellar
activity and rotation. One particular deficiency was that astrophysical
features were often removed as a side effect of error removal. In this
paper we introduce the completely new and significantly improved version of PDC
which was implemented in Kepler SOC 8.0. This new PDC version, which utilizes a
Bayesian approach for removal of systematics, reliably corrects errors in the
light curves while at the same time preserving planet transits and other
astrophysically interesting signals. We describe the architecture and the
algorithms of this new PDC module, show typical errors encountered in Kepler
data, and illustrate the corrections using real light curve examples.
Comment: Submitted to PASP. Also see companion paper "Kepler Presearch Data
Conditioning II - A Bayesian Approach to Systematic Error Correction" by Jeff
C. Smith et al.
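
The abstract describes a Bayesian removal of systematics, in which basis vectors capturing shared instrumental trends are fitted to each light curve under a prior that keeps the fit from absorbing real astrophysical signal. The sketch below is a heavily simplified stand-in: a maximum a posteriori (MAP) fit of a few basis vectors with a Gaussian prior on the coefficients, equivalent to ridge regression. The basis vectors, prior width, and synthetic light curve are illustrative assumptions, not the actual Kepler SOC 8.0 implementation.

    import numpy as np

    def map_cotrend(flux, basis, prior_mean, prior_sigma, noise_sigma):
        """MAP fit of cotrending basis vectors with an independent Gaussian prior on each coefficient.

        Maximises  -|flux - basis @ c|^2 / (2 noise_sigma^2) - |c - prior_mean|^2 / (2 prior_sigma^2),
        which reduces to a regularised normal equation.
        """
        lam = (noise_sigma / prior_sigma) ** 2
        a = basis.T @ basis + lam * np.eye(basis.shape[1])
        b = basis.T @ flux + lam * prior_mean
        coeffs = np.linalg.solve(a, b)
        return flux - basis @ coeffs, coeffs

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 90.0, 4000)                      # days
        trend = 1e-3 * t + 5e-4 * np.sin(2 * np.pi * t / 30)  # shared systematic (assumed)
        transit = -2e-4 * ((t % 10) < 0.2)                    # astrophysical signal to preserve
        flux = trend + transit + 5e-5 * rng.standard_normal(t.size)

        basis = np.column_stack([t / t.max(), np.sin(2 * np.pi * t / 30)])  # toy basis vectors
        corrected, coeffs = map_cotrend(flux, basis, prior_mean=np.zeros(2),
                                        prior_sigma=1e-3, noise_sigma=5e-5)
        print("fitted coefficients:", coeffs)

The prior width controls the trade-off the abstract alludes to: a very wide prior reverts to a plain least-squares fit that can remove real variability, while a narrow prior leaves systematics behind.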
Majority Dynamics and Aggregation of Information in Social Networks
Consider n individuals who, by popular vote, choose among q >= 2
alternatives, one of which is "better" than the others. Assume that each
individual votes independently at random, and that the probability of voting
for the better alternative is larger than the probability of voting for any
other. It follows from the law of large numbers that a plurality vote among the
n individuals would result in the correct outcome, with probability approaching
one exponentially quickly as n tends to infinity. Our interest in this paper is
in a variant of the process above where, after forming their initial opinions,
the voters update their decisions based on some interaction with their
neighbors in a social network. Our main example is "majority dynamics", in
which each voter adopts the most popular opinion among its friends. The
interaction repeats for some number of rounds and is then followed by a
population-wide plurality vote.
The question we tackle is that of "efficient aggregation of information": in
which cases is the better alternative chosen with probability approaching one
as n tends to infinity? Conversely, for which sequences of growing graphs does
aggregation fail, so that the wrong alternative gets chosen with probability
bounded away from zero? We construct a family of examples in which interaction
prevents efficient aggregation of information, and give a condition on the
social network which ensures that aggregation occurs. For the case of majority
dynamics we also investigate the question of unanimity in the limit. In
particular, if the voters' social network is an expander graph, we show that if
the initial population is sufficiently biased towards a particular alternative
then that alternative will eventually become the unanimous preference of the
entire population.
Comment: 22 pages
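
The voting-then-interaction process described above is easy to simulate directly. The sketch below runs majority dynamics on a random graph: each node starts with an independently drawn opinion biased toward alternative 0, repeatedly adopts the most common opinion among its neighbours, and a final plurality vote is taken. The graph model, bias, and number of rounds are illustrative choices, not those analysed in the paper.

    import random
    from collections import Counter

    def majority_dynamics(adj, opinions, rounds):
        """Synchronously update each node to the most popular opinion among its neighbours."""
        for _ in range(rounds):
            new = []
            for v, neigh in enumerate(adj):
                votes = Counter(opinions[u] for u in neigh) if neigh else Counter([opinions[v]])
                new.append(votes.most_common(1)[0][0])
            opinions = new
        return opinions

    def run_once(n, edge_prob, bias, q=2, rounds=5):
        """One experiment: biased initial vote, majority dynamics, then plurality vote."""
        adj = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < edge_prob:
                    adj[i].append(j)
                    adj[j].append(i)
        # Alternative 0 is the "better" one and is chosen with probability `bias`.
        opinions = [0 if random.random() < bias else random.randrange(1, q) for _ in range(n)]
        final = majority_dynamics(adj, opinions, rounds)
        return Counter(final).most_common(1)[0][0]

    if __name__ == "__main__":
        random.seed(1)
        wins = sum(run_once(n=500, edge_prob=0.02, bias=0.55) == 0 for _ in range(20))
        print("better alternative won", wins, "out of 20 runs")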
Bose-Einstein Condensation in a Harmonic Potential
We examine several features of Bose-Einstein condensation (BEC) in an
external harmonic potential well. In the thermodynamic limit, there is a phase
transition to a spatial Bose-Einstein condensed state for dimension D greater
than or equal to 2. The thermodynamic limit requires maintaining constant
average density by weakening the potential while increasing the particle number
N to infinity, while of course in real experiments the potential is fixed and N
stays finite. For such finite ideal harmonic systems we show that a BEC still
occurs, although without a true phase transition, below a certain
"pseudo-critical" temperature, even for D=1. We study the momentum-space
condensate fraction and find that it vanishes as 1/N^(1/2) in any number of
dimensions in the thermodynamic limit. In D less than or equal to 2 the lack of
a momentum condensation is in accord with the Hohenberg theorem, but must be
reconciled with the existence of a spatial BEC in D=2. For finite systems we
derive the N-dependence of the spatial and momentum condensate fractions and
the transition temperatures, features that may be experimentally testable. We
show that the N-dependence of the 2D ideal-gas transition temperature for a
finite system cannot persist in the interacting case because it violates a
theorem due to Chester, Penrose, and Onsager.
Comment: 34 pages, LaTeX, 6 Postscript figures, Submitted to Jour. Low Temp.
Phys.
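
The finite-N behaviour mentioned above can be explored numerically for the ideal gas: fix N bosons in a D-dimensional isotropic harmonic trap, solve the grand canonical number equation for the chemical potential at each temperature, and read off the ground-state occupation. The sketch below does this with a simple bisection; the level cutoff and example parameters are assumptions for illustration, and the method is the textbook grand canonical treatment rather than anything specific to this paper.

    import numpy as np
    from math import comb

    def level_degeneracy(n, dim):
        """Degeneracy of the n-th level of a dim-dimensional isotropic harmonic oscillator."""
        return comb(n + dim - 1, dim - 1)

    def total_number(mu, t, dim, nmax=500):
        """Grand canonical particle number; energies in units of hbar*omega, zero-point dropped."""
        n = np.arange(nmax)
        degeneracy = np.array([level_degeneracy(int(k), dim) for k in n])
        return np.sum(degeneracy / (np.exp((n - mu) / t) - 1.0))

    def condensate_fraction(n_particles, t, dim):
        """Solve total_number(mu) = N by bisection on mu < 0, then return N0/N."""
        lo, hi = -50.0 * t - 50.0, -1e-12
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if total_number(mid, t, dim) < n_particles:
                lo = mid
            else:
                hi = mid
        mu = 0.5 * (lo + hi)
        n0 = 1.0 / (np.exp(-mu / t) - 1.0)
        return n0 / n_particles

    if __name__ == "__main__":
        # N = 1000, D = 3; temperatures in units of hbar*omega/k_B (assumed parameters).
        for temp in (5.0, 8.0, 10.0, 12.0):
            print(temp, condensate_fraction(1000, temp, dim=3))

Scanning the temperature shows the smooth crossover rather than a sharp transition that the abstract describes for finite systems.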
The cytoplasm of living cells: A functional mixture of thousands of components
Inside every living cell is the cytoplasm: a fluid mixture of thousands of
different macromolecules, predominantly proteins. This mixture is where most of
the biochemistry occurs that enables living cells to function, and it is
perhaps the most complex liquid on earth. Here we take an inventory of what is
actually in this mixture. Recent genome-sequencing work has given us for the
first time at least some information on all of these thousands of components.
Having done so, we consider two physical phenomena in the cytoplasm: diffusion
and possible phase separation. Diffusion is slower in the highly crowded
cytoplasm than in dilute solution. Reasonable estimates of this slowdown can be
obtained and their consequences explored; for example, monomer-dimer equilibria
are established approximately twenty times slower than in a dilute solution.
Phase separation appears not to be a problem in all but exceptional cells,
despite the high density and hence strong protein-protein interactions. We
suggest that this may be partially a byproduct of the evolution of other
properties, and partially a result of the huge number of components present.
Comment: 11 pages, 1 figure, 1 table
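
To make the diffusion-slowdown argument concrete, a minimal back-of-the-envelope sketch is given below: it evaluates the Smoluchowski diffusion-limited association rate, k = 4*pi*(D_A + D_B)*R, in dilute solution and with the diffusion coefficients reduced by a crowding factor, and reports the corresponding association timescale. The slowdown factor is borrowed from the twenty-fold figure quoted in the abstract and applied directly to D for illustration; the protein radius and concentration are likewise assumptions, not the paper's own estimates.

    import math

    AVOGADRO = 6.022e23

    def smoluchowski_rate(d_a, d_b, contact_radius):
        """Diffusion-limited association rate constant k = 4*pi*(D_A + D_B)*R, in m^3/s."""
        return 4.0 * math.pi * (d_a + d_b) * contact_radius

    def association_time(rate_m3_per_s, conc_molar):
        """Mean time for one monomer to find a partner at the given molar concentration."""
        partners_per_m3 = conc_molar * 1e3 * AVOGADRO
        return 1.0 / (rate_m3_per_s * partners_per_m3)

    if __name__ == "__main__":
        d_dilute = 1.0e-10        # m^2/s, typical small protein in water (assumed)
        radius = 5.0e-9           # m, contact radius (assumed)
        conc = 1.0e-5             # M, partner concentration (assumed)
        slowdown = 20.0           # illustrative crowding slowdown applied to D

        k_dilute = smoluchowski_rate(d_dilute, d_dilute, radius)
        k_crowded = smoluchowski_rate(d_dilute / slowdown, d_dilute / slowdown, radius)
        print("dilute association time:  %.2e s" % association_time(k_dilute, conc))
        print("crowded association time: %.2e s" % association_time(k_crowded, conc))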
Pair creation: back-reactions and damping
We solve the quantum Vlasov equation for fermions and bosons, incorporating
spontaneous pair creation in the presence of back-reactions and collisions.
Pair creation is initiated by an external impulse field and the source term is
non-Markovian. A simultaneous solution of Maxwell's equation in the presence of
feedback yields an internal current and electric field that exhibit plasma
oscillations with a period tau_pl. Allowing for collisions, these oscillations
are damped on a time-scale, tau_r, determined by the collision frequency.
Plasma oscillations cannot affect the early stages of the formation of a
quark-gluon plasma unless tau_r >> tau_pl and tau_pl approx. 1/Lambda_QCD
approx. 1 fm/c.
Comment: 16 pages, 6 figures, REVTEX, epsfig.sty
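
The interplay of back-reaction and collisional damping described above can be caricatured with a far simpler relaxation-time model than the non-Markovian quantum Vlasov treatment used in the paper: a current driven by the internal electric field, damped on a collision timescale tau_r, and fed back into Maxwell's equation. The sketch below integrates this toy system and exhibits damped plasma oscillations; the units, parameters, and the drastic simplification of the source term are all assumptions for illustration.

    import numpy as np

    def damped_plasma_oscillation(omega_pl, tau_r, e0, t_max, dt=1e-3):
        """Toy back-reaction model:
            dj/dt = omega_pl^2 * E - j / tau_r   (field drives current, collisions damp it)
            dE/dt = -j                           (current feeds back into the field)
        integrated with a simple semi-implicit explicit scheme."""
        steps = int(t_max / dt)
        t = np.arange(steps) * dt
        e_field = np.empty(steps)
        current = np.empty(steps)
        e_field[0], current[0] = e0, 0.0
        for i in range(1, steps):
            current[i] = current[i - 1] + dt * (omega_pl ** 2 * e_field[i - 1]
                                                - current[i - 1] / tau_r)
            e_field[i] = e_field[i - 1] - dt * current[i]
        return t, e_field, current

    if __name__ == "__main__":
        # Oscillation period tau_pl = 2*pi/omega_pl; damping is set by the collision time tau_r.
        t, e_field, _ = damped_plasma_oscillation(omega_pl=2 * np.pi, tau_r=2.0,
                                                  e0=1.0, t_max=10.0)
        for i in range(0, t.size, 1000):
            print("t = %.1f  E = %+.3f" % (t[i], e_field[i]))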
The agency of liminality: army wives in the DR Congo and the tactical reversal of militarization
The inherently unstable boundaries between military and civilian worlds have emerged as a main object of study within the field of critical military studies. This article sheds light on the (re)production of these boundaries by attending to a group that rarely features in the debates on the military/civilian divide: army wives in a ‘non-Northern’ context, more specifically the Democratic Republic of the Congo (DRC). Drawing upon the ‘analytical toolbox’ of governmentality, we explore how civilian and military positionalities are called upon, articulated, and subverted in the governing and self-governing of Congolese army wives. We show the decisive importance of these wives’ civilian–military ‘in-betweenness’ both in efforts to govern them and in their exercise of agency, in particular the ways in which they ‘tactically reverse’ militarization. The article also demonstrates the dispersed nature of the governing arrangements surrounding army wives, highlighting the vital role of ‘the civilian’ as well as the ‘agency of those being militarized’ within processes of militarization. By foregrounding the relevance of studying Congolese army wives and their militarization with an analytical toolbox often reserved for so-called ‘advanced militaries/societies’, and by revealing numerous similarities between the Congolese and ‘Northern’ contexts, the article also sets out to counter the Euro/US-centrism and ‘theoretical discrimination’ that mark present-day (critical) military studies.