Soliton-dynamical approach to a noisy Ginzburg-Landau model
We present a dynamical description and analysis of non-equilibrium
transitions in the noisy Ginzburg-Landau equation based on a canonical phase
space formulation. The transition pathways are characterized by nucleation and
subsequent propagation of domain walls or solitons. We also evaluate the
Arrhenius factor in terms of an associated action and find good agreement with
recent numerical optimization studies.
Comment: 4 pages (revtex4), 3 figures (eps)
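As a rough illustration of the dynamics discussed above, the following Python sketch integrates a one-dimensional noisy Ginzburg-Landau field with a small symmetry-breaking term, so that the escape from the metastable state proceeds by nucleation and propagation of domain walls. This is not the paper's canonical phase-space formulation; the form of the equation, the field h, and all parameter values are illustrative assumptions.

import numpy as np

# Euler-Maruyama integration of d(phi)/dt = d^2(phi)/dx^2 + phi - phi^3 + h + noise
L, N = 50.0, 256
dx, dt = L / N, 0.01          # dt < dx**2/2 for stability of the explicit scheme
h, Delta = 0.3, 0.25          # symmetry-breaking field and noise strength (assumed)
rng = np.random.default_rng(0)
phi = np.full(N, -0.8)        # start near the metastable minimum

for step in range(20000):
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    noise = np.sqrt(2.0 * Delta * dt / dx) * rng.standard_normal(N)
    phi += dt * (lap + phi - phi**3 + h) + noise

# droplets of the stable state nucleate and their walls sweep through the system
switched = np.mean(phi > 0)
walls = np.count_nonzero(np.diff(np.sign(phi)))
print(f"fraction switched to the stable state: {switched:.2f}, domain walls: {walls}")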
NVU dynamics. III. Simulating molecules at constant potential energy
This is the final paper in a series that introduces geodesic molecular
dynamics at constant potential energy. This dynamics is entitled NVU dynamics
in analogy to standard energy-conserving Newtonian NVE dynamics. In the first
two papers [Ingebrigtsen et al., J. Chem. Phys. 135, 104101 (2011); ibid,
104102 (2011)], a numerical algorithm for simulating geodesic motion of atomic
systems was developed and tested against standard algorithms. The conclusion
was that the NVU algorithm has the same desirable properties as the Verlet
algorithm for Newtonian NVE dynamics, i.e., it is time-reversible and
symplectic. Additionally, it was concluded that NVU dynamics becomes equivalent
to NVE dynamics in the thermodynamic limit. In this paper, the NVU algorithm
for atomic systems is extended to be able to simulate geodesic motion of
molecules at constant potential energy. We derive an algorithm for simulating
rigid bonds and test this algorithm on three different systems: an asymmetric
dumbbell model, Lewis-Wahnstrom OTP, and rigid SPC/E water. The rigid bonds
introduce additional constraints beyond that of constant potential energy for
atomic systems. The rigid-bond NVU algorithm conserves potential energy, bond
lengths, and step length for indefinitely long runs. The quantities probed in
simulations give results identical to those of Nose-Hoover NVT dynamics. Since
Nose-Hoover NVT dynamics is known to give results equivalent to those of NVE
dynamics, the latter results show that NVU dynamics becomes equivalent to NVE
dynamics in the thermodynamic limit for molecular systems as well.
Comment: 14 pages, 12 figures
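The toy sketch below illustrates only the defining constraint of NVU dynamics, motion restricted to a constant-potential-energy hypersurface, on an assumed two-dimensional double-well surface; it is not the NVU algorithm of Ingebrigtsen et al., and the potential and step sizes are arbitrary assumptions. Each step moves a fixed length tangent to the level set U(R) = U0 and then Newton-corrects back onto it.

import numpy as np

def U_and_grad(R):
    x, y = R
    U = (x**2 - 1.0)**2 + 2.0 * y**2               # simple double-well surface (assumed)
    return U, np.array([4.0 * x * (x**2 - 1.0), 4.0 * y])

def project(R, U0, iters=20):
    for _ in range(iters):                          # 1D Newton correction along grad U
        U, g = U_and_grad(R)
        R = R - (U - U0) * g / np.dot(g, g)
    return R

R = np.array([1.3, 0.2])
U0, _ = U_and_grad(R)
for _ in range(1000):
    _, g = U_and_grad(R)
    t = np.array([-g[1], g[0]])                     # tangent to the level set
    R = project(R + 0.02 * t / np.linalg.norm(t), U0)
print("potential-energy drift after 1000 steps:", U_and_grad(R)[0] - U0)

In two dimensions the level set is a curve, so fixed-length steps along it trace it trivially; the papers' algorithm instead generates geodesics on the constant-U hypersurface in the full configuration space of an atomic or molecular system.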
On the work distribution for the adiabatic compression of a dilute classical gas
We consider the adiabatic and quasi-static compression of a dilute classical
gas, confined in a piston and initially equilibrated with a heat bath. We find
that the work performed during this process is described statistically by a
gamma distribution. We use this result to show that the model satisfies the
non-equilibrium work and fluctuation theorems, but not the
fluctuation-dissipation relation. We discuss the rare but dominant realizations
that contribute most to the exponential average of the work, and relate our
results to potentially universal work distributions.
Comment: 4 pages
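A quick numerical check of the non-equilibrium work theorem for gamma-distributed work, with illustrative parameters (the shape k, scale theta, and inverse temperature beta below are assumptions, not values from the paper): the gamma moment generating function gives <exp(-beta*W)> = (1 + beta*theta)^(-k) exactly, and the Monte Carlo estimate of the same average is carried largely by rare realizations with atypically small W.

import numpy as np

beta, k, theta = 1.0, 20.0, 0.5                      # assumed parameters
rng = np.random.default_rng(0)
W = rng.gamma(shape=k, scale=theta, size=1_000_000)  # gamma-distributed work samples

dF_exact = (k / beta) * np.log(1.0 + beta * theta)   # from the gamma MGF
dF_mc = -np.log(np.mean(np.exp(-beta * W))) / beta   # Jarzynski estimator
print(f"Delta F (analytic)    : {dF_exact:.4f}")
print(f"Delta F (Monte Carlo) : {dF_mc:.4f}")
print(f"mean work <W>         : {W.mean():.4f}")     # exceeds Delta F, as required

# how much of the exponential average comes from the lowest 1% of work values
w_sorted = np.sort(W)
weights = np.exp(-beta * w_sorted)
frac = weights[: len(W) // 100].sum() / weights.sum()
print(f"share of <exp(-beta*W)> carried by the lowest 1% of W: {frac:.2%}")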
Spectral dependence of purely-Kerr driven filamentation in air and argon
Based on numerical simulations, we show that the higher-order nonlinear
indices of air and argon have a dominant contribution to both focusing and
defocusing in the self-guiding of ultrashort laser pulses over most of the
spectrum. Plasma generation and filamentation are therefore decoupled. As a
consequence, ultraviolet wavelengths may not be optimal for applications that
require maximizing ionization.
Comment: 14 pages, 4 figures (14 panels)
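For context, the "higher-order nonlinear indices" refer to the intensity expansion of the refractive index; the schematic form below uses standard notation and is not taken from the paper:

\[
n(I) \;=\; n_0 + n_2 I + n_4 I^2 + n_6 I^3 + n_8 I^4 + \dots
\]

The coefficients n_4, n_6, ... depend on the gas and on the wavelength, and their net contribution at high intensity can either focus or defocus the pulse; it is this balance, rather than plasma defocusing, that the simulations probe across the spectrum.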
Microcanonical quantum fluctuation theorems
Previously derived expressions for the characteristic function of work
performed on a quantum system by a classical external force are generalized to
arbitrary initial states of the considered system and to Hamiltonians with
degenerate spectra. In the particular case of microcanonical initial states
explicit expressions for the characteristic function and the corresponding
probability density of work are formulated. Their classical limit as well as
their relations to the respective canonical expressions are discussed. A
fluctuation theorem is derived that expresses the ratio of probabilities of
work for a process and its time reversal to the ratio of densities of states of
the microcanonical equilibrium systems with corresponding initial and final
Hamiltonians. From this Crooks-type fluctuation theorem a relation between
entropies of different systems can be derived which does not involve the time
reversed process. This entropy-from-work theorem provides an experimentally
accessible way to measure entropies.
Comment: revised and extended version
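A schematic statement of the Crooks-type relation described above, in assumed notation (the precise symbols and conventions of the paper may differ):

\[
\frac{p_E(w)}{\tilde{p}_{E+w}(-w)} \;=\; \frac{\omega_f(E+w)}{\omega_i(E)} ,
\]

where p_E(w) is the probability density of work w for the forward process started from a microcanonical state of energy E of the initial Hamiltonian, \tilde{p} is the corresponding density for the time-reversed process, and \omega_i, \omega_f are the densities of states of the initial and final Hamiltonians. Rearranged, the densities of states, and hence the microcanonical entropies S = k_B ln \omega, become accessible from measured work statistics; the abstract notes that a further relation can be derived that avoids the time-reversed process altogether.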
Stochastic learning in a neural network with adapting synapses
We consider a neural network with adapting synapses whose dynamics can be
analytically computed. The model is made of N neurons, each of which is
connected to K input neurons chosen at random in the network. The synapses
are discrete variables, taking a finite number of states, which evolve in time
according to stochastic learning rules; a parallel stochastic dynamics is
assumed for the neurons. Since the network maintains the same dynamics whether
it is engaged in computation or in learning new memories, a very low
probability of synaptic transitions is assumed. In the limit of large N with K
finite, the correlations of neurons and synapses can be neglected and the
dynamics can be analytically calculated by flow equations for the macroscopic
parameters of the system.
Comment: 25 pages, LaTeX file
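A toy Python sketch of this class of model; the structural choices here (binary synapses, Glauber-style parallel updates, the specific Hebbian-like transition rule) and all parameter values are assumptions for illustration, not the paper's definitions.

import numpy as np

N, K = 1000, 10
beta = 2.0            # inverse "temperature" of the parallel neuron dynamics
p_learn = 1e-3        # low probability of a synaptic transition
rng = np.random.default_rng(0)

inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
J = rng.choice([-1, 1], size=(N, K)).astype(float)    # two-state synapses
s = rng.choice([-1, 1], size=N)                       # neuron states

for t in range(100):
    # parallel stochastic neuron update from the local fields
    h = np.einsum('ik,ik->i', J, s[inputs])
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    s = np.where(rng.random(N) < p_up, 1, -1)
    # stochastic learning: with small probability, align each synapse with the
    # product of post- and pre-synaptic activity (Hebbian-like transition)
    hebb = s[:, None] * s[inputs]
    flip = rng.random((N, K)) < p_learn
    J = np.where(flip, hebb, J)

print("mean activity:", s.mean(), " mean synapse:", J.mean())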
Von Neumann's expanding model on random graphs
Within the framework of Von Neumann's expanding model, we study the maximum
growth rate r achievable by an autocatalytic reaction network in which
reactions involve a finite (fixed or fluctuating) number D of reagents. r is
calculated numerically using a variant of the Minover algorithm, and
analytically via the cavity method for disordered systems. As the ratio between
the number of reactions and that of reagents increases, the system passes from
a contracting (r < 1) to an expanding (r > 1) phase. These results extend the
scenario derived in the fully connected model (D\to\infty), with the
important difference that, generically, larger growth rates are achievable in
the expanding phase for finite D and in more diluted networks. Moreover, the
range of attainable values of r shrinks as the connectivity increases.
Comment: 20 pages
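The underlying optimization can be phrased as: find the largest r for which some non-negative flux vector s satisfies (B - r*A)s >= 0, with A and B the input and output stoichiometric matrices. The sketch below solves a small random instance by bisection on r with a linear-programming feasibility check, rather than the Minover variant or the cavity method used in the paper; the instance size and matrix entries are assumptions.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, P, D = 60, 40, 4                     # reactions, reagents, inputs/outputs per reaction
A = np.zeros((P, M))
B = np.zeros((P, M))
for mu in range(M):
    A[rng.choice(P, D, replace=False), mu] = rng.uniform(0.1, 1.0, D)   # consumed
    B[rng.choice(P, D, replace=False), mu] = rng.uniform(0.1, 1.5, D)   # produced

def feasible(r):
    # does some s >= 0 with sum(s) = 1 satisfy (B - r*A) s >= 0 ?
    res = linprog(c=np.zeros(M),
                  A_ub=-(B - r * A), b_ub=np.zeros(P),
                  A_eq=np.ones((1, M)), b_eq=[1.0],
                  bounds=[(0, None)] * M, method="highs")
    return res.status == 0

lo = 0.0                                              # r = 0 is always feasible
hi = float((B.sum(axis=0) / A.sum(axis=0)).max())     # simple upper bracket on r
for _ in range(40):                                   # bisection (feasibility is monotone in r)
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print(f"maximum growth rate r ~ {lo:.3f}")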
The Effect of Nonstationarity on Models Inferred from Neural Data
Neurons subject to a common non-stationary input may exhibit a correlated
firing behavior. Correlations in the statistics of neural spike trains also
arise as the effect of interaction between neurons. Here we show that these two
situations can be distinguished, with machine learning techniques, provided the
data are rich enough. In order to do this, we study the problem of inferring a
kinetic Ising model, stationary or nonstationary, from the available data. We
apply the inference procedure to two data sets: one from salamander retinal
ganglion cells and the other from a realistic computational cortical network
model. We show that many aspects of the concerted activity of the salamander
retinal neurons can be traced simply to the external input. A model of
non-interacting neurons subject to a non-stationary external field outperforms
a model with couplings between neurons but stationary input, even accounting
for the difference in the number of model parameters. When couplings are added
to the non-stationary model, for the retinal data, little is gained: the
inferred couplings are generally not significant. Likewise, the distribution of
the sizes of sets of neurons that spike simultaneously and the frequency of
spike patterns as a function of their rank (Zipf plots) are well-explained by an
independent-neuron model with time-dependent external input, and adding
connections to such a model does not offer significant improvement. For the
cortical model data, robust couplings, well correlated with the real
connections, can be inferred using the non-stationary model. Adding connections
to this model slightly improves the agreement with the data for the probability
of synchronous spikes but hardly affects the Zipf plot.
Comment: version in press in J Stat Mech
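For the stationary case, maximum-likelihood inference of a kinetic Ising model reduces to fitting each neuron's next state against the current state of the network; the nonstationary model discussed above additionally fits a time-dependent external field across repeated trials. The Python sketch below shows only the stationary version, on synthetic data with assumed parameters.

import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 20000
J_true = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
h_true = np.zeros(N)

# generate data with parallel Glauber dynamics: P(s_i(t+1)=+1) = 1/(1+exp(-2*H_i(t)))
S = np.empty((T, N))
S[0] = rng.choice([-1, 1], N)
for t in range(T - 1):
    H = h_true + J_true @ S[t]
    S[t + 1] = np.where(rng.random(N) < 1.0 / (1.0 + np.exp(-2.0 * H)), 1, -1)

# maximum-likelihood inference by gradient ascent on sum_t log P(s(t+1)|s(t))
J, h = np.zeros((N, N)), np.zeros(N)
lr = 0.1
for _ in range(500):
    H = h + S[:-1] @ J.T                  # local fields at every time step
    err = S[1:] - np.tanh(H)              # gradient of the log-likelihood
    J += lr * err.T @ S[:-1] / (T - 1)
    h += lr * err.mean(axis=0)

print("correlation between true and inferred couplings:",
      np.corrcoef(J_true.ravel(), J.ravel())[0, 1])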
Dynamics and robustness of familiarity memory
When presented with an item or a face, one might have a sense of recognition without the ability to recall when or where the stimulus has been encountered before. This sense of recognition is called familiarity memory. Following previous computational studies of familiarity memory, we investigate the dynamical properties of familiarity discrimination and contrast two different familiarity discriminators: one based on the energy of the neural network and the other based on the time derivative of the energy. We show how the familiarity signal decays rapidly after stimulus presentation. For both discriminators, we calculate the capacity using mean-field analysis. Compared to recall capacity (classical associative memory in Hopfield nets), both the energy and the slope discriminators have a larger capacity, yet the energy-based discriminator has a higher capacity than the one based on its time derivative. Finally, the two discriminators are found to differ in their dependence on noise.
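A minimal sketch of an energy-based discriminator in a Hopfield-style network (the network size, pattern load, and Hebbian storage rule below are assumptions for illustration): stored patterns sit at markedly lower energy than novel ones, so a simple threshold on the energy signals familiarity without running any recall dynamics.

import numpy as np

N, P = 1000, 200
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(P, N))

W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)                     # Hebbian couplings, no self-connections

def energy(s):
    return -0.5 * s @ W @ s

E_fam = np.array([energy(p) for p in patterns[:50]])
E_new = np.array([energy(rng.choice([-1, 1], size=N)) for _ in range(50)])
print(f"familiar:   E = {E_fam.mean():9.1f} +/- {E_fam.std():.1f}")
print(f"unfamiliar: E = {E_new.mean():9.1f} +/- {E_new.std():.1f}")
# a threshold midway between the two means implements the discriminator

The slope-based discriminator discussed above would instead monitor the time derivative of this energy during the relaxation that follows stimulus presentation.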