
    First passage times of two-dimensional correlated processes: analytical results for the Wiener process and a numerical method for diffusion processes

    Given a two-dimensional correlated diffusion process, we determine the joint density of the first passage times of the process to some constant boundaries. This quantity depends on the joint density of the first passage time of the first crossing component and of the position of the second component before its crossing time. First we show that these densities are solutions of a system of first-kind Volterra-Fredholm integral equations. Then we propose a numerical algorithm to solve this system and describe how to use it to approximate the joint density of the first passage times. The convergence of the method is proved theoretically for bivariate diffusion processes. We derive explicit expressions for these and other quantities of interest in the case of a bivariate Wiener process, correcting previous misprints appearing in the literature. Finally we illustrate the application of the method through a set of examples. Comment: 18 pages, 3 figures
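The integral-equation algorithm itself is specific to the paper, but the quantity it targets can be cross-checked with a naive Monte Carlo sketch. The following Python snippet (all parameter values are illustrative assumptions, not taken from the paper) draws samples of the pair of first passage times of a correlated bivariate Wiener process through two constant boundaries via an Euler scheme:

```python
import numpy as np

def joint_fpt_samples(mu, sigma, rho, b1, b2, dt=1e-2, t_max=10.0,
                      n_paths=5000, seed=0):
    """Crude Monte Carlo sampling of the pair of first passage times (T1, T2)
    of a correlated bivariate Wiener process through constant boundaries
    b1, b2.  Paths that have not crossed by t_max are left as np.inf."""
    rng = np.random.default_rng(seed)
    # Cholesky factor reproducing correlation rho between the two noises.
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    n_steps = int(t_max / dt)
    x = np.zeros((n_paths, 2))
    fpt = np.full((n_paths, 2), np.inf)
    for k in range(1, n_steps + 1):
        z = rng.standard_normal((n_paths, 2)) @ L.T  # correlated increments
        x += mu * dt + sigma * np.sqrt(dt) * z
        for j, b in enumerate((b1, b2)):
            newly = np.isinf(fpt[:, j]) & (x[:, j] >= b)
            fpt[newly, j] = k * dt  # record only the first crossing
    return fpt

samples = joint_fpt_samples(mu=np.array([1.0, 1.0]), sigma=1.0, rho=0.5,
                            b1=1.0, b2=1.0)
crossed = np.isfinite(samples).all(axis=1)
print(f"both components crossed within t_max in {crossed.mean():.0%} of paths")
```

A 2D histogram of the finite samples then approximates the joint first-passage density that the paper computes by solving the integral equations.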

    Sample Path Analysis of Integrate-and-Fire Neurons

    Computational neuroscience is concerned with answering two intertwined questions that are based on the assumption that spatio-temporal patterns of spikes form the universal language of the nervous system. First, what function does a specific neural circuitry perform in the elaboration of a behavior? Second, how do neural circuits process behaviorally relevant information? Non-linear system analysis has proven instrumental in understanding the coding strategies of early neural processing in various sensory modalities. Yet, at higher levels of integration, it fails to help in deciphering the response of assemblies of neurons to complex naturalistic stimuli. Whereas neural activity can be assumed to be primarily driven by the stimulus at early stages of processing, at the cortical level the intrinsic activity of neural circuits interacts with their high-dimensional input to transform it in a stochastic non-linear fashion. As a consequence, any attempt to fully understand the brain through a system-analysis approach becomes illusory. However, it is increasingly advocated that neural noise plays a constructive role in neural processing, facilitating information transmission. This prompts us to seek insight into the neural code by studying the stochasticity of neuronal activity, which is viewed as biologically relevant. Such an endeavor requires the design of guiding theoretical principles to assess the potential benefits of neural noise. In this context, meeting the requirements of biological relevance and computational tractability, while providing a stochastic description of neural activity, prescribes the adoption of the integrate-and-fire model. In this thesis, building on the path-wise description of neuronal activity, we propose to further the stochastic analysis of the integrate-and-fire model through a combination of numerical and theoretical techniques.
To begin, we expand upon the path-wise construction of linear diffusions as inhomogeneous Markov chains, which offers a natural setting to describe leaky integrate-and-fire neurons. Based on the theoretical analysis of the first-passage problem, we then explore the interplay between the internal neuronal noise and the statistics of injected perturbations at the single-unit level, and examine its implications for neural coding. At the population level, we also develop an exact event-driven implementation of a Markov network of perfect integrate-and-fire neurons with both time-delayed and instantaneous interactions and arbitrary topology. We hope our approach will provide new paradigms to understand how sensory inputs perturb neural intrinsic activity and accomplish the goal of developing a new technique for identifying relevant patterns of population activity. From a perturbative perspective, our study shows how injecting frozen noise in different flavors can help characterize internal neuronal noise, which is presumably functionally relevant to information processing. From a simulation perspective, our event-driven framework is well suited to scrutinizing the stochastic behavior of simple recurrent motifs as well as the temporal dynamics of large-scale networks under spike-timing-dependent plasticity.
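The event-driven idea for a single perfect integrate-and-fire neuron can be illustrated with a short sketch (not the thesis code; parameter values are made-up assumptions). It exploits the fact that the first passage time of a drifted Wiener process through a constant threshold is inverse-Gaussian distributed, so spike times can be sampled directly, without ever discretising the membrane potential:

```python
import numpy as np

def event_driven_spikes(mu, sigma, threshold, t_end, seed=0):
    """Event-driven simulation of a perfect integrate-and-fire neuron driven
    by white noise with drift: V(t) = mu*t + sigma*W(t), reset to 0 at the
    threshold.  Interspike intervals are i.i.d. inverse-Gaussian first
    passage times, so we jump from spike to spike."""
    rng = np.random.default_rng(seed)
    mean = threshold / mu               # mean of the inverse-Gaussian ISI
    shape = threshold**2 / sigma**2     # shape parameter of the ISI law
    spikes, t = [], 0.0
    while True:
        t += rng.wald(mean, shape)      # NumPy's Wald == inverse Gaussian
        if t > t_end:
            break
        spikes.append(t)
    return np.array(spikes)

spikes = event_driven_spikes(mu=1.0, sigma=0.5, threshold=1.0, t_end=1000.0)
print(f"{len(spikes)} spikes, mean ISI = {np.diff(spikes).mean():.3f}")
```

Compared with Euler integration of the membrane potential, the cost per spike is a single random draw, which is what makes the exact event-driven treatment of whole networks tractable.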

    The stellar atmosphere simulation code Bifrost

    Context: Numerical simulations of stellar convection and photospheres have been developed to the point where detailed shapes of observed spectral lines can be explained. Stellar atmospheres are very complex, and very different physical regimes are present in the convection zone, photosphere, chromosphere, transition region and corona. To understand the details of the atmosphere it is necessary to simulate the whole atmosphere, since the different layers interact strongly. These physical regimes are very diverse, and it takes a highly efficient massively parallel numerical code to solve the associated equations. Aims: The design, implementation and validation of the massively parallel numerical code Bifrost for simulating stellar atmospheres from the convection zone to the corona. Methods: The code is subjected to a number of validation tests, among them the Sod shock tube test, the Orszag-Tang colliding shock test, boundary condition tests and tests of how the code treats magnetic field advection, chromospheric radiation, radiative transfer in an isothermal scattering atmosphere, hydrogen ionization and thermal conduction. Results: Bifrost completes the tests with good results and shows near-linear efficiency scaling to thousands of computing cores.
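For a flavor of what the Sod shock tube validation involves, here is a deliberately minimal 1D Euler solver using a Lax-Friedrichs flux. It is an illustrative sketch only, far more diffusive than the scheme Bifrost actually uses; grid size and CFL number are arbitrary choices:

```python
import numpy as np

def sod_shock_tube(nx=400, t_end=0.2, gamma=1.4, cfl=0.5):
    """Minimal 1D Euler solver with a Lax-Friedrichs flux, run on the Sod
    shock tube initial data (rho, u, p) = (1, 0, 1) | (0.125, 0, 0.1)."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    rho = np.where(x < 0.5, 1.0, 0.125)
    u = np.zeros(nx)
    p = np.where(x < 0.5, 1.0, 0.1)
    # Conserved variables: density, momentum, total energy.
    U = np.array([rho, rho * u, p / (gamma - 1) + 0.5 * rho * u**2])

    def flux(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1) * (E - 0.5 * rho * u**2)
        return np.array([mom, mom * u + p, (E + p) * u])

    t = 0.0
    while t < t_end:
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1) * (E - 0.5 * rho * u**2)
        c = np.sqrt(gamma * p / rho)                 # sound speed
        dt = min(cfl * dx / np.max(np.abs(u) + c), t_end - t)
        F = flux(U)
        Un = U.copy()                                # keep boundary cells fixed
        Un[:, 1:-1] = (0.5 * (U[:, 2:] + U[:, :-2])
                       - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2]))
        U = Un
        t += dt
    return x, U[0]

x, rho = sod_shock_tube()
print(f"density after t = 0.2 lies in [{rho.min():.3f}, {rho.max():.3f}]")
```

Validation then consists of comparing the computed rarefaction, contact discontinuity and shock positions against the exact Riemann solution, exactly the kind of check the abstract describes.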

    On the estimation of the persistence exponent for a fractionally integrated Brownian motion by numerical simulations

    For a fractionally integrated Brownian motion (FIBM) of order α ∈ (0, 1], X_α(t), we investigate the decay rate of P(τ_α(S) > t) as t → +∞, where τ_α(S) = inf{t > 0 : X_α(t) ≥ S} is the first-passage time (FPT) of X_α(t) through the barrier S > 0. Precisely, we study the so-called persistence exponent θ = θ(α) of the FPT tail, such that P(τ_α(S) > t) = t^{−θ + o(1)} as t → +∞, and by means of numerical simulation of long enough trajectories of the process X_α(t) we are able to estimate θ(α) and to show that it is a non-increasing function of α ∈ (0, 1], with 1/4 ≤ θ(α) ≤ 1/2. In particular, we are able to validate numerically a new conjecture about the analytical expression of the function θ = θ(α) for α ∈ (0, 1]. Such a numerical validation is carried out in two ways: in the first, we estimate θ(α) by using the simulated FPT density, obtained for any α ∈ (0, 1]; in the second, we estimate the persistence exponent by directly calculating P(max_{0 ≤ s ≤ t} X_α(s) < 1). Both ways confirm our conclusions within the limits of numerical approximation. Finally, we investigate the self-similarity property of X_α(t) and find the upper bound of its covariance function.
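The second estimation route, based on P(max_{0 ≤ s ≤ t} X_α(s) < 1), can be sketched for the one member of the family whose persistence exponent is classically known: α = 1, i.e. integrated Brownian motion, for which θ = 1/4. The following Monte Carlo sketch (path counts, step size and time grid are illustrative choices, not the paper's) estimates the survival probability on a grid of times and fits the tail exponent:

```python
import numpy as np

def survival_probabilities(t_grid, n_paths=4000, dt=0.02, seed=1):
    """Monte Carlo estimate of P(max_{0<=s<=t} X(s) < 1) for integrated
    Brownian motion X(t) = int_0^t B(s) ds, i.e. the alpha = 1 case,
    whose persistence exponent is known to be 1/4."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(max(t_grid) / dt))
    targets = {int(round(t / dt)): t for t in t_grid}
    B = np.zeros(n_paths)
    X = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)   # running max still below 1
    out = {}
    for k in range(1, n_steps + 1):
        B += np.sqrt(dt) * rng.standard_normal(n_paths)
        X += B * dt
        alive &= X < 1.0
        if k in targets:
            out[targets[k]] = alive.mean()
    return np.array([out[t] for t in t_grid])

t_grid = [10.0, 20.0, 40.0, 80.0]
surv = survival_probabilities(t_grid)
# Slope of log P(tau > t) against log t estimates -theta.
theta = -np.polyfit(np.log(t_grid), np.log(surv), 1)[0]
print(f"estimated persistence exponent: {theta:.2f}")
```

For general α ∈ (0, 1) the same scheme applies once the fractional integration of the Brownian path is carried out, which is the computationally heavy part of the study.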

    A Markovian event-based framework for stochastic spiking neural networks

    In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce from a spike train the next spike time, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e. one based on the computation of the spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. Where the Markovian model can be developed, the transition probability is derived explicitly for such classical neural network models as the linear integrate-and-fire neuron with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.
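The simplest instance of this Markov structure is an isolated noisy leaky integrate-and-fire neuron: its interspike intervals are i.i.d. first passage times of an Ornstein-Uhlenbeck process through the threshold, so the spike times form a renewal (hence Markov) chain. A small sketch, with Euler-Maruyama sampling of the first passage time and purely illustrative parameters:

```python
import numpy as np

def lif_isi(mu, tau_m, sigma, v_th, dt=1e-3, rng=None):
    """One interspike interval of a noisy leaky integrate-and-fire neuron,
    dV = (-V/tau_m + mu) dt + sigma dW, reset at 0, threshold v_th,
    sampled as an Euler-Maruyama first passage time."""
    v, t = 0.0, 0.0
    while v < v_th:
        v += (-v / tau_m + mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

rng = np.random.default_rng(0)
# Because the ISIs are i.i.d., the spike times t_1 < t_2 < ... form a
# Markov (in fact renewal) chain: the next spike time depends on the
# past only through the last spike time.
spikes = np.cumsum([lif_isi(mu=1.5, tau_m=1.0, sigma=0.5, v_th=1.0, rng=rng)
                    for _ in range(300)])
isi = np.diff(spikes)
print(f"mean ISI = {isi.mean():.3f}, CV = {isi.std() / isi.mean():.2f}")
```

In a network, the interactions make the transition probability depend on the joint state of all neurons at the last firing times, which is the object the article derives explicitly.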