The Erpenbeck high frequency instability theorem for ZND detonations
The rigorous study of spectral stability for strong detonations was begun by
J.J. Erpenbeck in [Er1]. Working with the Zel'dovich-von Neumann-D\"oring (ZND)
model, which assumes a finite reaction rate but ignores effects like viscosity
corresponding to second order derivatives, he used a normal mode analysis to
define a stability function V(\tau,\eps) whose zeros
correspond to multidimensional perturbations of a steady detonation profile
that grow exponentially in time. Later, in a remarkable paper [Er3], he provided
strong evidence, by a combination of formal and rigorous arguments, that for
certain classes of steady ZND profiles, unstable zeros of V exist for
perturbations of sufficiently large transverse wavenumber \eps, even when the
von Neumann shock, regarded as a gas dynamical shock, is uniformly stable in
the sense defined (nearly twenty years later) by Majda. In spite of a great
deal of later numerical work devoted to computing the zeros of V(\tau,\eps),
the paper [Er3] remains the only work we know of that presents a detailed
and convincing theoretical argument for detecting them.
The analysis in [Er3] points the way toward, but does not constitute, a
mathematical proof that such unstable zeros exist. In this paper we identify
the mathematical issues left unresolved in [Er3] and provide proofs, together
with certain simplifications and extensions, of the main conclusions about
stability and instability of detonations contained in that paper.
The main mathematical problem, and our principal focus here, is to determine
the precise asymptotic behavior as \eps\to \infty of solutions to a linear
system of ODEs depending on \eps and a complex frequency \tau as
parameters, with turning points on the spatial half-line.
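Schematically (in our notation; the paper sets up the precise system), the normal mode analysis behind V(\tau,\eps) has the following shape: a perturbation with complex growth rate \tau and transverse wavenumber \eps,

```latex
\[
  \delta u(x,y,t) \;=\; e^{\tau t + i\eps y}\, w(x),
\]
reduces the linearized ZND equations to a linear system of ODEs in the
variable $x$ normal to the front,
\[
  \frac{dw}{dx} \;=\; G(x;\tau,\eps)\, w,
\]
and $V(\tau,\eps)$ vanishes exactly when this system admits a solution that
decays in $x$ and is compatible with the linearized jump conditions at the
von Neumann shock; a zero with $\Re\tau>0$ is an exponentially growing mode.
```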
Comment on "Theory and computer simulation for the equation of state of additive hard-disk fluid mixtures"
A flaw in the comparison between two different theoretical equations of state
for a binary mixture of additive hard disks and Monte Carlo results, as
recently reported in C. Barrio and J. R. Solana, Phys. Rev. E 63, 011201
(2001), is pointed out. It is found that both proposals, which require the
equation of state of the single-component system as input, lead to comparable
accuracy, but the one advocated by us [A. Santos, S. B. Yuste, and M. L\'{o}pez
de Haro, Mol. Phys. 96, 1 (1999)] is simpler and complies with the exact limit
in which the small disks are point particles.

Comment: 4 pages, including 1 figure
Homogeneous shear flow of a hard-sphere fluid: Analytic solutions
Recently, a solution for collision-free trajectories in an N-particle thermostatted hard-sphere system undergoing homogeneous shear (the so-called "Sllod" equations of motion) led to a kinetic theory of dilute hard-sphere gases under shear. However, a solution for collisions, necessary for a complete theory at higher densities, has been missing. We present an analytic solution to this problem, which provides surprising insights into the mechanical aspects of thermostatting a system in an external field. The equivalence of the constant-temperature and constant-energy ensembles in the thermodynamic limit in equilibrium, the conditions governing heat exchange with the environment (entropy creation and reduction), and the condition for the appearance of the artificial string phase all follow from our solution.
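For orientation, the Sllod equations of motion with a Gaussian isokinetic thermostat take the following standard form (shear rate $\gamma$, flow in $x$, gradient in $y$; the paper's hard-sphere treatment adds the collision rules on top of this):

```latex
\[
  \dot{\mathbf{q}}_i = \frac{\mathbf{p}_i}{m} + \gamma\, y_i\, \hat{\mathbf{x}},
  \qquad
  \dot{\mathbf{p}}_i = \mathbf{F}_i - \gamma\, p_{yi}\, \hat{\mathbf{x}} - \alpha\, \mathbf{p}_i,
\]
with the thermostat multiplier $\alpha$ chosen so that the peculiar kinetic
energy $\sum_i \mathbf{p}_i^2/2m$ is a constant of the motion:
\[
  \alpha \;=\;
  \frac{\sum_i \left( \mathbf{F}_i \cdot \mathbf{p}_i
        - \gamma\, p_{xi}\, p_{yi} \right)}{\sum_i \mathbf{p}_i^2}.
\]
```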
Population Monte Carlo algorithms
We give a cross-disciplinary survey of ``population'' Monte Carlo algorithms.
In these algorithms, a set of ``walkers'' or ``particles'' is used as a
representation of a high-dimensional vector. The computation is carried out by
a random walk and split/deletion of these objects. The algorithms have been
developed in various fields of physics and the statistical sciences under many
different names -- ``quantum Monte Carlo'', ``transfer-matrix Monte Carlo'',
``Monte Carlo filter (particle filter)'', ``sequential Monte Carlo'',
``PERM'', etc. Here we discuss them in a coherent framework. We also touch on
related algorithms -- genetic algorithms and annealed importance sampling.

Comment: Title is changed (Population-based Monte Carlo -> Population Monte
Carlo). A number of small but important corrections and additions. References
are also added. Original version was read at the 2000 Workshop on
Information-Based Induction Sciences (July 17-18, 2000, Syuzenji, Shizuoka,
Japan). No figures
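As an illustration of the walker/split-deletion mechanism described above, here is a minimal self-contained sketch; the random-walk dynamics and the weight factor exp(-0.1|x|) are invented for the example, and real applications substitute their own propagation rule and importance weights:

```python
import math
import random

def population_mc(n_walkers=1000, n_steps=20, seed=0):
    """Toy population Monte Carlo.

    A set of walkers represents a distribution over states; each sweep
    alternates propagation, reweighting, and split/deletion (here plain
    multinomial resampling at fixed population size).
    """
    rng = random.Random(seed)
    walkers = [0] * n_walkers          # all walkers start at the origin
    weights = [1.0] * n_walkers
    for _ in range(n_steps):
        # propagation: one random-walk step per walker
        walkers = [x + rng.choice((-1, 1)) for x in walkers]
        # reweighting: bias the population toward the origin
        weights = [w * math.exp(-0.1 * abs(x)) for w, x in zip(weights, walkers)]
        # split/deletion: copy high-weight walkers, drop low-weight ones,
        # keeping the population size fixed, then equalize the weights
        walkers = rng.choices(walkers, weights=weights, k=n_walkers)
        weights = [sum(weights) / n_walkers] * n_walkers
    mean_abs = sum(abs(x) for x in walkers) / n_walkers
    return walkers, mean_abs

walkers, mean_abs = population_mc()
```

Multinomial resampling is the simplest split/deletion rule; the surveyed algorithms differ mainly in when and how this rebalancing step is triggered.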
Transport properties of dense fluid argon
Using molecular dynamics simulations, we calculate the transport properties of
realistically modeled fluid argon at elevated pressures and temperatures. In
this context we provide a critique of some newer theoretical predictions for
the diffusion coefficients of liquids and a discussion of the relevance of
Enskog theory in two different adaptations: the modified Enskog theory (MET)
and the effective diameter Enskog theory. We also analyze a number of
experimental data for the thermal conductivity of monatomic and small diatomic
dense fluids.

Comment: 8 pages, 6 figures
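Diffusion coefficients of the kind discussed above are conventionally extracted from simulation trajectories through the Einstein relation D = lim <|r(t)-r(0)|^2>/(2dt). A minimal sketch, tested here on a synthetic Brownian trajectory with known D rather than on the argon model of the paper:

```python
import numpy as np

def diffusion_from_msd(traj, dt):
    """Estimate D from the Einstein relation, given a trajectory array of
    shape (n_frames, n_particles, d) of unwrapped coordinates."""
    n_frames, n_particles, d = traj.shape
    disp = traj - traj[0]                        # displacement from frame 0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)   # mean-square displacement
    t = np.arange(n_frames) * dt
    # linear fit of MSD vs t over the second half (skip short-time regime)
    half = n_frames // 2
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / (2 * d)

# synthetic 3-D Brownian trajectory with known diffusion coefficient
rng = np.random.default_rng(0)
D_true, dt, n_frames, n_particles = 0.5, 0.01, 2000, 200
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt),
                   size=(n_frames - 1, n_particles, 3))
traj = np.concatenate([np.zeros((1, n_particles, 3)),
                       np.cumsum(steps, axis=0)])
D_est = diffusion_from_msd(traj, dt)
```

For a real MD run the same routine applies once coordinates are unwrapped across periodic boundaries; averaging over multiple time origins reduces the statistical error further.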
A model for the atomic-scale structure of a dense, nonequilibrium fluid: the homogeneous cooling state of granular fluids
It is shown that the equilibrium Generalized Mean Spherical Model of fluid
structure may be extended to nonequilibrium states, with the equation-of-state
information used in equilibrium replaced by an exact condition on the two-body
distribution function. The model is applied to the homogeneous cooling state of
granular fluids and, upon comparison with molecular dynamics simulations, is
found to provide an accurate picture of the pair distribution function.

Comment: 29 pages, 11 figures. Revision corrects formatting of the figures
Structure and Dynamics of Liquid Iron under Earth's Core Conditions
First-principles molecular dynamics simulations based on density-functional
theory and the projector augmented wave (PAW) technique have been used to study
the structural and dynamical properties of liquid iron under Earth's core
conditions. As evidence for the accuracy of the techniques, we present PAW
results for a range of solid-state properties of low- and high-pressure iron,
and compare them with experimental values and the results of other
first-principles calculations. In the liquid-state simulations, we devote
particular effort to the study of finite-size effects, Brillouin-zone sampling
and other sources of technical error. Results for the radial distribution
function, the diffusion coefficient and the shear viscosity are presented for a
wide range of thermodynamic states relevant to the Earth's core. Throughout
this range, liquid iron is a close-packed simple liquid with a diffusion
coefficient and viscosity similar to those of typical simple liquids under
ambient conditions.

Comment: 13 pages, 8 figures
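The radial distribution function reported above can be estimated from any stored configuration. A minimal sketch using the minimum-image convention in a cubic box; an ideal-gas (uniformly random) configuration stands in for the simulation data, so g(r) should come out close to 1:

```python
import numpy as np

def radial_distribution(pos, box, n_bins=50):
    """Radial distribution function g(r) for one configuration of
    particles in a cubic periodic box of side `box` (r up to box/2)."""
    n = len(pos)
    r_max = box / 2.0
    # minimum-image convention for pair separations
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= box * np.round(diff / box)
    dist = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]
    counts, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    # normalise each shell by the ideal-gas expectation
    rho = n / box ** 3
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal_pairs = rho * shell_vol * n / 2.0
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / ideal_pairs

# sanity check on an ideal-gas configuration, where g(r) ~ 1
rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
r, g = radial_distribution(pos, box=10.0)
```

In production one averages the histogram over many configurations; the single-snapshot version above is only meant to show the normalisation.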
A Quantum-mechanical Approach for Constrained Macromolecular Chains
Many approaches to three-dimensional constrained macromolecular chains at
thermal equilibrium, at about room temperature, are based upon constrained
Classical Hamiltonian Dynamics (cCHDa). Quantum-mechanical approaches (QMa)
have also been treated by different researchers for decades. QMa address a
fundamental issue (constraints versus the uncertainty principle) and are
versatile: they also yield classical descriptions (which may not coincide with
those from cCHDa, although they may agree for certain relevant quantities).
Open issues include whether QMa have enough practical consequences that differ
from and/or improve on those from cCHDa. We shall treat cCHDa briefly and deal
with QMa, by outlining old approaches and focusing on recent ones.

Comment: Expands review published in The European Physical Journal (Special
Topics) Vol. 200, pp. 225-258 (2011)
Graph Neural Networks for low-energy event classification & reconstruction in IceCube
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in the analysis of data from IceCube. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, it is possible to represent IceCube events as point cloud graphs and use a Graph Neural Network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction and interaction vertex. Based on simulation, we provide a comparison in the 1 GeV–100 GeV energy range to the current state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed background rate, compared to current IceCube methods. Alternatively, the GNN offers a reduction of the background (i.e. false positive) rate by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%–20% compared to current maximum likelihood techniques in the energy range of 1 GeV–30 GeV. The GNN, when run on a GPU, is capable of processing IceCube events at a rate nearly double the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.

Peer Reviewed
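The point-cloud-graph representation can be illustrated with a toy, NumPy-only message-passing layer. The k-nearest-neighbour graph, the (x, y, z, time, charge) feature layout, and the layer form below are illustrative stand-ins, not the IceCube GNN architecture:

```python
import numpy as np

def knn_graph(pos, k=3):
    """Directed k-nearest-neighbour adjacency matrix for a point cloud."""
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-edges
    nbrs = np.argsort(d2, axis=1)[:, :k]
    adj = np.zeros((len(pos), len(pos)), dtype=bool)
    adj[np.repeat(np.arange(len(pos)), k), nbrs.ravel()] = True
    return adj

def message_passing(features, adj, w_self, w_nbr):
    """One GNN layer: combine each node's features with the mean of its
    neighbours' features, then apply a ReLU nonlinearity."""
    deg = adj.sum(1, keepdims=True).clip(min=1)
    nbr_mean = (adj @ features) / deg
    return np.maximum(features @ w_self + nbr_mean @ w_nbr, 0.0)

# a toy "event": 10 sensor hits with features (x, y, z, time, charge)
rng = np.random.default_rng(1)
hits = rng.normal(size=(10, 5))
adj = knn_graph(hits[:, :3], k=3)         # graph built on positions only
w_self = rng.normal(size=(5, 8))
w_nbr = rng.normal(size=(5, 8))
node_emb = message_passing(hits, adj, w_self, w_nbr)
event_emb = node_emb.mean(0)              # permutation-invariant summary
```

Because the graph is built per event, the same layer handles any number of hits and any detector geometry, which is the practical motivation for the point-cloud approach.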
A muon-track reconstruction exploiting stochastic losses for large-scale Cherenkov detectors
IceCube is a cubic-kilometer Cherenkov telescope operating at the South Pole. The main goal of IceCube is the detection of astrophysical neutrinos and the identification of their sources. High-energy muon neutrinos are observed via the secondary muons produced in charged-current interactions with nuclei in the ice. Currently, the best-performing muon track directional reconstruction is based on a maximum likelihood method using the arrival time distribution of Cherenkov photons registered by the experiment's photomultipliers. A known systematic shortcoming of the prevailing method is the assumption of a continuous energy loss along the muon track. However, at energies >1 TeV the light yield from muons is dominated by stochastic showers. This paper discusses a generalized ansatz where the expected arrival time distribution is parametrized by a stochastic muon energy loss pattern. This more realistic parametrization of the loss profile leads to an improvement of the muon angular resolution of up to 20% for through-going tracks and up to a factor of 2 for starting tracks over existing algorithms. Additionally, the procedure to estimate the directional reconstruction uncertainty has been improved to be more robust against numerical errors.
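A toy version of the segmented-loss idea: fit a non-negative per-segment energy-loss pattern to sensor light yields by projected gradient descent. The 2-D geometry and 1/r^2 light model are invented for illustration; the paper's method works with full Cherenkov arrival-time likelihoods, not this least-squares stand-in:

```python
import numpy as np

def fit_losses(observed, response, n_iter=2000):
    """Least-squares fit of non-negative per-segment losses so that
    response @ losses reproduces the observed light yields, via
    projected gradient descent onto the constraint losses >= 0."""
    # step size below 1/L (L = largest eigenvalue of R^T R) guarantees
    # monotone convergence of projected gradient descent
    step = 1.0 / np.linalg.norm(response.T @ response, 2)
    losses = np.zeros(response.shape[1])
    for _ in range(n_iter):
        grad = response.T @ (response @ losses - observed)
        losses = np.maximum(losses - step * grad, 0.0)
    return losses

# toy 2-D geometry: 8 sensors around a horizontal track split into 5
# segments; response[i, j] is the light a unit loss in segment j
# deposits on sensor i (a made-up 1/r^2 law, ignoring ice optics)
rng = np.random.default_rng(2)
sensors = rng.uniform(0.0, 10.0, size=(8, 2))
segments = np.column_stack([np.linspace(1.0, 9.0, 5), np.full(5, 5.0)])
r2 = ((sensors[:, None, :] - segments[None, :, :]) ** 2).sum(-1)
response = 1.0 / r2
true_losses = np.array([0.0, 3.0, 0.0, 1.0, 0.0])  # stochastic pattern
observed = response @ true_losses
fitted = fit_losses(observed, response)
```

The point of the exercise is the parametrization: replacing a single continuous-loss amplitude with a per-segment pattern lets the fit localize the stochastic showers that dominate the light yield above ~1 TeV.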