A constructive mean field analysis of multi population neural networks with random synaptic weights and stochastic inputs
We deal with the problem of bridging the gap between two scales in neuronal
modeling. At the first (microscopic) scale, neurons are considered individually
and their behavior described by stochastic differential equations that govern
the time variations of their membrane potentials. They are coupled by synaptic
connections acting on their resulting activity, a nonlinear function of their
membrane potential. At the second (mesoscopic) scale, interacting populations
of neurons are described individually by similar equations. The equations
describing the dynamical and the stationary mean field behaviors are considered
as functional equations on a set of stochastic processes. Using this new point
of view allows us to prove that these equations are well-posed on any finite
time interval and to provide a constructive method for effectively computing
their unique solution. This method is proved to converge to the unique solution
and we characterize its complexity and convergence rate. We also provide
partial results for the stationary problem on infinite time intervals. These
results shed some new light on such neural mass models as the one of Jansen and
Rit \cite{jansen-rit:95}: their dynamics appears as a coarse approximation of
the much richer dynamics that emerges from our analysis. Our numerical
experiments confirm that the framework we propose and the numerical methods we
derive from it provide a new and powerful tool for the exploration of neural
behaviors at different scales.

Comment: 55 pages, 4 figures; to appear in "Frontiers in Neuroscience".
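The constructive, fixed-point view of the mean-field equations can be illustrated with a deliberately simplified sketch: a one-population rate model whose stationary mean activity must satisfy m = S(Jm + I). The names and parameter values below are illustrative assumptions, not taken from the paper, and the scalar Picard iteration merely stands in for the paper's iteration on a space of stochastic processes.

```python
import math

def sigmoid(x):
    """Smooth activation standing in for the firing-rate nonlinearity."""
    return 1.0 / (1.0 + math.exp(-x))

def mean_field_fixed_point(J=0.5, I=0.2, tol=1e-12, max_iter=10_000):
    """Picard iteration m <- S(J*m + I). Since |S'| <= 1/4, the map is a
    contraction whenever |J| < 4, so the iteration converges geometrically
    to the unique self-consistent mean activity."""
    m = 0.0
    for _ in range(max_iter):
        m_next = sigmoid(J * m + I)
        if abs(m_next - m) < tol:
            return m_next
        m = m_next
    return m

m_star = mean_field_fixed_point()
# self-consistency check: m_star solves m = S(J*m + I)
assert abs(m_star - sigmoid(0.5 * m_star + 0.2)) < 1e-9
```

The geometric convergence rate of such a contraction iteration is what makes the constructive method's complexity analysis tractable in the scalar caricature; the paper's actual iterates live on stochastic processes over a finite time interval.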
Full-field and anomaly initialization using a low-order climate model: a comparison and proposals for advanced formulations
Initialization techniques for seasonal-to-decadal climate predictions fall into two main categories: full-field initialization (FFI) and anomaly initialization (AI). In the FFI case the initial model state is replaced by the best available estimate of the real state. This efficiently reduces the initial error but, because of unavoidable model deficiencies, once the model runs freely its trajectory drifts away from the observations no matter how small the initial error is. This problem is partly overcome with AI, where the aim is to forecast future anomalies by assimilating observed anomalies onto an estimate of the model climate.
The large variety of experimental setups, models and observational networks adopted worldwide makes it difficult to draw firm conclusions on the respective advantages and drawbacks of FFI and AI, or to identify distinctive lines for improvement. The lack of a unified mathematical framework adds a further difficulty to the design of initialization strategies suited to the desired forecast horizon, observational network and model at hand.
Here we compare FFI and AI using a low-order climate model of nine ordinary differential equations and use the notation and concepts of data assimilation theory to highlight their error scaling properties. This analysis suggests better performances using FFI when a good observational network is available and reveals the direct relation of its skill with the observational accuracy. The skill of AI appears, however, mostly related to the model quality and clear increases of skill can only be expected in coincidence with model upgrades.
We have compared FFI and AI in experiments in which either the full system or the atmosphere and ocean were independently initialized. In the former case FFI shows better and longer-lasting improvements, with skillful predictions until month 30. When single components are initialized, the best performance is obtained by initializing the more stable component of the model (the ocean), but with FFI some predictive skill is possible even when only the most unstable component (the extratropical atmosphere) is observed.
Two advanced formulations, least-square initialization (LSI) and exploring parameter uncertainty (EPU), are introduced. Using LSI the initialization makes use of model statistics to propagate information from observation locations to the entire model domain. Numerical results show that LSI improves the performance of FFI in all the situations when only a portion of the system's state is observed. EPU is an online drift correction method in which the drift caused by the parametric error is estimated using a short-time evolution law and is then removed during the forecast run. Its implementation in conjunction with FFI allows us to improve the prediction skill within the first forecast year.
Finally, the application of these results in the context of realistic climate models is discussed.
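The contrast between FFI and AI can be sketched on a deliberately tiny "climate": a scalar AR(1) system whose model counterpart carries a forcing bias, so that truth and model have different stationary climates. All names and numbers below are illustrative assumptions, not values from the study.

```python
# Truth and model share the persistence a but differ in the forcing b,
# so their stationary climates differ: clim = b / (1 - a).
a = 0.9
b_true, b_model = 1.0, 1.2
clim_true = b_true / (1.0 - a)    # 10.0
clim_model = b_model / (1.0 - a)  # 12.0

def model_forecast(x0, n_steps):
    """Deterministic forecast with the biased model: x <- a*x + b_model."""
    x = x0
    for _ in range(n_steps):
        x = a * x + b_model
    return x

obs = 9.5  # a (perfect) observation of the true state

x_ffi = obs                            # full-field: start from the observation
x_ai = clim_model + (obs - clim_true)  # anomaly: observed anomaly on model climate

# AI preserves the observed anomaly: relative to the model climate it simply
# decays like a**n, with no drift shock.
anom_ai_20 = model_forecast(x_ai, 20) - clim_model

# FFI starts closer to the truth but drifts toward the model climate as the
# forecast proceeds, ending up offset by clim_model - clim_true.
drift_ffi = model_forecast(x_ffi, 200) - clim_model
```

Even this toy reproduces the qualitative trade-off described above: FFI's small initial error is paid for by drift toward the biased model climate, while AI trades a larger initial error for a drift-free anomaly forecast.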
Correction-to-scaling exponents for two-dimensional self-avoiding walks
We study the correction-to-scaling exponents for the two-dimensional
self-avoiding walk, using a combination of series-extrapolation and Monte Carlo
methods. We enumerate all self-avoiding walks up to 59 steps on the square
lattice, and up to 40 steps on the triangular lattice, measuring the
mean-square end-to-end distance, the mean-square radius of gyration and the
mean-square distance of a monomer from the endpoints. The complete endpoint
distribution is also calculated for self-avoiding walks up to 32 steps (square)
and up to 22 steps (triangular). We also generate self-avoiding walks on the
square lattice by Monte Carlo, using the pivot algorithm, obtaining the
mean-square radii to ~0.01% accuracy up to N = 4000. We give compelling
evidence that the first non-analytic correction term for two-dimensional
self-avoiding walks is Delta_1 = 3/2. We compute several moments of the
endpoint distribution function, finding good agreement with the field-theoretic
predictions. Finally, we study a particular invariant ratio that can be shown,
by conformal-field-theory arguments, to vanish asymptotically, and we find the
cancellation of the leading analytic correction.

Comment: LaTeX 2.09, 56 pages. Version 2 adds a renormalization-group discussion near the end of Section 2.2 and makes many small improvements in the exposition. To be published in the Journal of Statistical Physics.
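The exhaustive enumeration underlying the series analysis can be illustrated with a brute-force depth-first count of short self-avoiding walks on the square lattice. The paper's 59-step enumeration relies on far more sophisticated techniques; this sketch is only feasible for small n, but it computes the same observables (walk counts and the mean-square end-to-end distance).

```python
def enumerate_saws(n):
    """Depth-first enumeration of all n-step self-avoiding walks on the
    square lattice starting from the origin. Returns (count, sum of squared
    end-to-end distances). Exponential cost: practical only for small n."""
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    count = 0
    sum_r2 = 0

    def extend(x, y, remaining, visited):
        nonlocal count, sum_r2
        if remaining == 0:
            count += 1
            sum_r2 += x * x + y * y
            return
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in visited:          # self-avoidance constraint
                visited.add(nxt)
                extend(nxt[0], nxt[1], remaining - 1, visited)
                visited.remove(nxt)

    extend(0, 0, n, {(0, 0)})
    return count, sum_r2

# known series values c_n on the square lattice for n = 1..4: 4, 12, 36, 100
counts = [enumerate_saws(n)[0] for n in range(1, 5)]
```

The mean-square end-to-end distance is then `sum_r2 / count`; fitting its n-dependence (with correction-to-scaling terms) is what the series-extrapolation part of the analysis does.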
Multiscale Computations on Neural Networks: From the Individual Neuron Interactions to the Macroscopic-Level Analysis
We show how the Equation-Free approach for multi-scale computations can be
exploited to systematically study the dynamics of neural interactions on a
random regular connected graph under a pairwise representation perspective.
Using an individual-based microscopic simulator as a black box coarse-grained
timestepper and with the aid of simulated annealing we compute the
coarse-grained equilibrium bifurcation diagram and analyze the stability of the
stationary states sidestepping the necessity of obtaining explicit closures at
the macroscopic level. We also exploit the scheme to perform a rare-events
analysis by estimating an effective Fokker-Planck equation describing the evolving probability density function of the corresponding coarse-grained observables.
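The lift-run-restrict pattern at the heart of the Equation-Free approach can be sketched with a toy "microscopic" simulator. Here a deterministic logistic map stands in for the individual-based network simulator (noise and the graph structure are omitted), and a Newton iteration on the coarse timestepper finds an equilibrium without ever writing down a macroscopic closure. All function names and parameters are illustrative.

```python
def micro_run(ensemble, r, n_steps):
    """Black-box 'microscopic' simulator: a logistic map applied to each
    copy. In the paper's setting this would be the individual-based
    network simulator treated as an opaque timestepper."""
    for _ in range(n_steps):
        ensemble = [r * x * (1.0 - x) for x in ensemble]
    return ensemble

def coarse_timestepper(m, r, n_copies=100, n_steps=10):
    ensemble = [m] * n_copies            # lift: coarse mean -> micro states
    ensemble = micro_run(ensemble, r, n_steps)
    return sum(ensemble) / n_copies      # restrict: micro states -> coarse mean

def coarse_fixed_point(r, m0, tol=1e-12):
    """Newton iteration on Phi(m) = coarse_timestepper(m) - m, with a
    finite-difference derivative: no explicit macroscopic equation needed."""
    m, h = m0, 1e-7
    for _ in range(100):
        f = coarse_timestepper(m, r) - m
        df = (coarse_timestepper(m + h, r) - (m + h) - f) / h
        m_new = m - f / df
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# for the logistic map the nontrivial equilibrium is 1 - 1/r
m_star = coarse_fixed_point(2.5, 0.5)
```

Sweeping `r` and repeating the Newton solve traces out the coarse bifurcation diagram, and the sign of the finite-difference derivative of the coarse map gives the stability information, exactly as the abstract describes at the level of the full network model.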
Dynamical stability analysis of the HD202206 system and constraints to the planetary orbits
Long-term precise Doppler measurements with the CORALIE spectrograph revealed
the presence of two massive companions to the solar-type star HD202206.
Although the three-body fit of the system is unstable, it was shown that a 5:1
mean motion resonance exists close to the best fit, where the system is stable.
We present here an extensive dynamical study of the HD202206 system aiming at
constraining the inclinations of the two known companions, from which we derive
possible ranges of value for the companion masses.
We study the long term stability of the system in a small neighborhood of the
best fit using Laskar's frequency map analysis. We also introduce a numerical
method based on frequency analysis to determine the center of libration mode
inside a mean motion resonance.
We find that acceptable coplanar configurations are limited to inclinations
to the line of sight between 30 and 90 degrees. This limits the masses of both
companions to roughly twice the minimum. Non-coplanar configurations are
possible for a wide range of mutual inclinations from 0 to 90 degrees, although
some configurations seem to be favored. We also confirm the
5:1 mean motion resonance to be most likely. In the coplanar edge-on case, we
provide a very good stable solution in the resonance that does not
differ significantly from the best fit. Using our method to determine the
center of libration, we further refine this solution to obtain an orbit with a
very low amplitude of libration, as we expect dissipative effects to have
dampened the libration.

Comment: 14 pages, 18 figures.
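Laskar's frequency map analysis rests on determining the fundamental frequencies of orbital signals to high precision. A crude pure-Python stand-in (a grid scan of a Hann-windowed Fourier amplitude, without NAFF's iterative refinement) illustrates the idea; the function name, test signal and frequency band are made up for the demonstration.

```python
import cmath
import math

def dominant_frequency(signal, dt, f_lo, f_hi, n_grid=2000):
    """Estimate the leading frequency of a complex signal by scanning the
    Hann-windowed Fourier amplitude over [f_lo, f_hi]. For a pure signal
    the windowed amplitude peaks exactly at the true frequency, so the
    grid spacing sets the accuracy."""
    n = len(signal)
    window = [math.sin(math.pi * k / (n - 1)) ** 2 for k in range(n)]  # Hann
    best_f, best_amp = f_lo, -1.0
    for j in range(n_grid + 1):
        f = f_lo + (f_hi - f_lo) * j / n_grid
        amp = abs(sum(w * z * cmath.exp(-1j * f * k * dt)
                      for k, (w, z) in enumerate(zip(window, signal))))
        if amp > best_amp:
            best_f, best_amp = f, amp
    return best_f

# a pure quasi-periodic test signal with frequency 0.37 rad per sample
sig = [cmath.exp(1j * 0.37 * k) for k in range(256)]
f_est = dominant_frequency(sig, 1.0, 0.3, 0.5)
```

In frequency map analysis, the diffusion of such frequencies across nearby initial conditions diagnoses chaos versus long-term stability; locating where a resonant combination of frequencies stops librating is also the basis of the center-of-libration search described above.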
Lognormal Distributions and Geometric Averages of Positive Definite Matrices
This article gives a formal definition of a lognormal family of probability
distributions on the set of symmetric positive definite (PD) matrices, seen as
a matrix-variate extension of the univariate lognormal family of distributions.
Two forms of this distribution are obtained as the large sample limiting
distribution via the central limit theorem of two types of geometric averages
of i.i.d. PD matrices: the log-Euclidean average and the canonical geometric
average. These averages correspond to two different geometries imposed on the
set of PD matrices. The limiting distributions of these averages are used to
provide large-sample confidence regions for the corresponding population means.
The methods are illustrated on a voxelwise analysis of diffusion tensor imaging
data, permitting a comparison between the various average types from the point
of view of their sampling variability.

Comment: 28 pages, 8 figures.
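The log-Euclidean average itself is simple to state: take matrix logarithms, average them arithmetically, and exponentiate back. A self-contained sketch for 2x2 symmetric PD matrices (the paper's diffusion-tensor application uses 3x3 tensors; the spectral helper below is written for the 2x2 case only) makes the construction concrete.

```python
import math

def sym2_apply(f, M):
    """Apply a scalar function f spectrally to a symmetric 2x2 matrix M,
    i.e. f(M) = U diag(f(l1), f(l2)) U^T via closed-form eigendecomposition."""
    (a, b), (_, c) = M
    disc = math.sqrt((a - c) ** 2 + 4.0 * b * b)
    l1, l2 = (a + c + disc) / 2.0, (a + c - disc) / 2.0
    if disc < 1e-14:                     # (near-)isotropic: M ~ l1 * I
        return [[f(l1), 0.0], [0.0, f(l1)]]
    # eigenvector for l1: pick the better-conditioned of two equivalent forms
    vx, vy = (b, l1 - a) if abs(l1 - a) >= abs(l1 - c) else (l1 - c, b)
    n = math.hypot(vx, vy)
    vx, vy = vx / n, vy / n
    f1, f2 = f(l1), f(l2)
    return [[f1 * vx * vx + f2 * vy * vy, (f1 - f2) * vx * vy],
            [(f1 - f2) * vx * vy, f1 * vy * vy + f2 * vx * vx]]

def log_euclidean_mean(mats):
    """Log-Euclidean average: exp of the arithmetic mean of matrix logs."""
    logs = [sym2_apply(math.log, M) for M in mats]
    avg = [[sum(L[i][j] for L in logs) / len(logs) for j in range(2)]
           for i in range(2)]
    return sym2_apply(math.exp, avg)

# diag(1, 4) and diag(4, 1) average to diag(2, 2): eigenvalues combine
# geometrically, unlike the arithmetic mean diag(2.5, 2.5)
geo = log_euclidean_mean([[[1.0, 0.0], [0.0, 4.0]], [[4.0, 0.0], [0.0, 1.0]]])
```

The canonical (affine-invariant) geometric average discussed in the article requires an iterative computation on the PD manifold rather than this one-shot log/exp formula; the article's central-limit results give the limiting lognormal distributions of both averages.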
The 1999 Center for Simulation of Dynamic Response in Materials Annual Technical Report
Introduction:
This annual report describes research accomplishments for FY 99 of the Center
for Simulation of Dynamic Response of Materials. The Center is constructing a
virtual shock physics facility in which the full three-dimensional response of a
variety of target materials can be computed for a wide range of compressive,
tensional, and shear loadings, including those produced by detonation of energetic
materials. The goals are to facilitate computation of a variety of experiments
in which strong shock and detonation waves are made to impinge on targets
consisting of various combinations of materials, compute the subsequent dynamic
response of the target materials, and validate these computations against
experimental data.