    Stationary probability density of stochastic search processes in global optimization

    A method for the construction of approximate analytical expressions for the stationary marginal densities of general stochastic search processes is proposed. From these marginal densities, regions of the search space that contain the global optima with high probability can be readily identified. The density estimation procedure involves a controlled number of linear operations, with a computational cost per iteration that grows linearly with problem size.
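    To make the idea concrete, here is a minimal Python sketch, not the paper's analytical construction: the objective function, step size, and temperature below are illustrative assumptions, and a histogram of a long fixed-temperature Metropolis run stands in for the stationary marginal density, from which high-probability regions are read off.

```python
import numpy as np

def objective(x):
    # Illustrative multimodal test function; an assumption, not taken from the paper.
    return x**4 - 3.0 * x**2 + 0.5 * x

def metropolis_search(n_steps=200_000, step=0.3, temperature=0.2, x0=2.0, seed=0):
    """Fixed-temperature Metropolis search; returns every visited point."""
    rng = np.random.default_rng(seed)
    x, fx = x0, objective(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        y = x + step * rng.normal()
        fy = objective(y)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fy < fx or rng.random() < np.exp(-(fy - fx) / temperature):
            x, fx = y, fy
        samples[i] = x
    return samples

samples = metropolis_search()[50_000:]              # discard burn-in
density, edges = np.histogram(samples, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# The bins carrying the most stationary mass flag candidate regions for the global optima.
top_bins = centers[np.argsort(density)[-5:]]
print("high-probability regions near x =", np.sort(top_bins))
```

    In the paper's setting the density is obtained analytically rather than by sampling, but the interpretation of its high-probability regions is the same.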

    Characterization of the convergence of stationary Fokker-Planck learning

    The convergence properties of the stationary Fokker-Planck algorithm for the estimation of the asymptotic density of stochastic search processes are studied. Theoretical and empirical arguments for the characterization of convergence of the estimation in the case of separable and nonseparable nonlinear optimization problems are given. Some implications of the convergence of stationary Fokker-Planck learning for the inference of parameters in artificial neural network models are outlined.
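    For orientation, the following generic relations (the notation is assumed here, not quoted from the paper) indicate why separability matters for this kind of density estimation: the stationary Fokker-Planck density of an overdamped Langevin search is of Gibbs form, and for a separable objective it factorizes into one-dimensional marginals.

```latex
% Generic stationary Fokker-Planck relation for an overdamped Langevin search with
% potential V (the objective) and diffusion constant D (notation assumed):
\partial_t p = \nabla \cdot \bigl( p\, \nabla V \bigr) + D\, \nabla^{2} p
\;\;\Longrightarrow\;\;
p^{*}(x) \propto e^{-V(x)/D} .
% Separable problems factorize, which is what makes marginal-density estimation tractable:
V(x) = \sum_{i=1}^{d} V_i(x_i)
\;\;\Longrightarrow\;\;
p^{*}(x) = \prod_{i=1}^{d} p_i^{*}(x_i).
```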

    Accumulation time of stochastic processes with resetting

    One of the characteristic features of a stochastic process under resetting is that the probability density converges to a non-equilibrium stationary state (NESS). In addition, the approach to the stationary state exhibits a dynamical phase transition, which can be interpreted as a traveling front separating spatial regions for which the probability density has relaxed to the NESS from those where transients persist. A very different mechanism for generating an NESS occurs within the context of diffusion-based morphogenesis, in which an extrinsic localized current source combined with degradation within the interior of the domain leads to the formation of a protein concentration gradient. A common method for characterizing the relaxation process is to calculate the so-called accumulation time. The latter is the analog of the mean first passage time of a search process, in which the survival probability density is replaced by an accumulation fraction density. In this paper, we extend the definition of the accumulation time to stochastic processes with resetting by showing how the probability density associated with trajectories that reset at least once evolves in an analogous fashion to protein concentration gradients. We consider a range of examples, including diffusion with instantaneous resetting, resetting with refractory periods and finite return times, and non-diffusive processes such as run-and-tumble particles. In each case we calculate the accumulation time as a function of the spatial separation from the reset point.
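    As a reference point, one common formulation of these quantities (the notation below is assumed, not quoted from the paper) defines the accumulation time through the accumulation fraction of the density relative to the NESS; the second display recalls the well-known Laplace-shaped NESS of one-dimensional diffusion with Poissonian resetting, the simplest example in the list above.

```latex
% Accumulation fraction Z and accumulation time tau for a density p(x,t) relaxing
% monotonically to a non-equilibrium stationary state p*(x) (notation assumed):
Z(x,t) = \frac{p(x,t)}{p^{*}(x)}, \qquad
\tau(x) = \int_{0}^{\infty} \bigl[1 - Z(x,t)\bigr]\, dt
        = \int_{0}^{\infty} t\, \frac{\partial Z(x,t)}{\partial t}\, dt .
% Simplest example: 1D diffusion (diffusivity D) with Poissonian resetting at rate r
% to x_0 has the Laplace-shaped NESS of Evans and Majumdar:
p^{*}(x) = \frac{\alpha_0}{2}\, e^{-\alpha_0 |x - x_0|}, \qquad \alpha_0 = \sqrt{r/D}.
```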

    Filling of a Poisson trap by a population of random intermittent searchers

    We extend the continuum theory of random intermittent search processes to the case of $N$ independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to $n$ successive particles can find the target and deliver its cargo. Assuming that the rate of target detection scales as $1/N$, we show that there exists a well-defined mean field limit $N\rightarrow\infty$, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling $\lambda(t)$ depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with $n$ particles in terms of the waiting time density $f_n(t)$. The latter is determined by the integrated Poisson rate $\mu(t)=\int_0^t\lambda(s)\,ds$, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasi-steady-state analysis. We compare our analytical results for the mean-field model with Monte Carlo simulations for finite $N$. We thus determine how the mean first passage time (MFPT) for filling the target depends on $N$ and $n$.
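    The dependence of the waiting-time density on the integrated rate can be written out explicitly; the following is the standard inhomogeneous-Poisson form consistent with the abstract's description (notation matching the abstract).

```latex
% Time of the n-th detection event of an inhomogeneous Poisson process with rate \lambda(t):
\mu(t) = \int_{0}^{t} \lambda(s)\, ds, \qquad
f_n(t) = \lambda(t)\, \frac{\mu(t)^{\,n-1}}{(n-1)!}\, e^{-\mu(t)} .
```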

    Fluctuations in the weakly asymmetric exclusion process with open boundary conditions

    Accepted in Journal of Statistical Physics. We investigate the fluctuations around the average density profile in the weakly asymmetric exclusion process with open boundaries in the steady state. We show that these fluctuations are given, in the macroscopic limit, by a centered Gaussian field, and we compute its covariance function explicitly. We use two approaches. The first method is dynamical and based on fluctuations around the hydrodynamic limit. We prove that the density fluctuations evolve macroscopically according to an autonomous stochastic equation, and we search for the stationary distribution of this evolution. The second approach, which is based on a representation of the steady state as a sum over paths, allows one to write the density fluctuations in the steady state as a sum over two independent processes, one of which is the derivative of a Brownian motion, the other being related to a random path in a potential.
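    Schematically, and in generic notation that is only assumed here, the statement about a centered Gaussian field can be read as follows: the rescaled deviation of the empirical density from the stationary profile, tested against a smooth function, converges to a Gaussian whose variance is determined by the covariance kernel $C(x,y)$ that the paper computes.

```latex
% Schematic reading of "centered Gaussian field with covariance C" (generic notation):
% \rho_N is the empirical density of the exclusion process on [0,1], \bar\rho the
% stationary (hydrodynamic) profile, \phi a smooth test function.
\xi_N(\phi) = \sqrt{N} \int_{0}^{1} \phi(x)\, \bigl(\rho_N(x) - \bar\rho(x)\bigr)\, dx
\;\xrightarrow[N\to\infty]{}\;
\xi(\phi) \sim \mathcal{N}\!\Bigl(0,\; \int_{0}^{1}\!\!\int_{0}^{1} \phi(x)\, C(x,y)\, \phi(y)\, dx\, dy \Bigr).
```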

    Spatial multi-level interacting particle simulations and information theory-based error quantification

    We propose a hierarchy of multi-level kinetic Monte Carlo methods for sampling high-dimensional, stochastic lattice particle dynamics with complex interactions. The method is based on the efficient coupling of different spatial resolution levels, taking advantage of the low sampling cost in a coarse space and developing local reconstruction strategies from coarse-grained dynamics. Microscopic reconstruction corrects possibly significant errors introduced through coarse-graining, leading to a controlled-error approximation of the sampled stochastic process. In this manner, the proposed multi-level algorithm overcomes known shortcomings of coarse-graining of particle systems with complex interactions, such as combined long- and short-range particle interactions and/or complex lattice geometries. Specifically, we provide error analysis for the approximation of long-time stationary dynamics in terms of relative entropy, and prove that the information loss in the multi-level methods grows linearly in time, which in turn implies that an appropriate observable in the stationary regime is the information loss of the path measures per unit time. We show that this observable can either be estimated a priori or be tracked computationally a posteriori in the course of a simulation. The stationary regime is of critical importance to molecular simulations, as it is relevant to long-time sampling, obtaining phase diagrams, and studying metastability properties of high-dimensional complex systems. Finally, the multi-level nature of the method provides flexibility in combining rejection-free and null-event implementations, generating a hierarchy of algorithms with an adjustable number of rejections that includes well-known rejection-free and null-event algorithms. Comment: 34 pages.
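    Written schematically (the symbols below are assumed, not the paper's), the stationary-regime observable amounts to the relative entropy rate of the exact path measure with respect to the multi-level approximation, which is well defined precisely because the information loss grows linearly in time.

```latex
% P_{[0,T]}: path measure of the microscopic process up to time T;
% \bar P_{[0,T]}: path measure of the multi-level approximation (notation assumed).
\mathcal{R}\bigl(P_{[0,T]} \,\|\, \bar P_{[0,T]}\bigr) \sim T\, \mathcal{H}(P \,\|\, \bar P)
\quad (T \to \infty),
\qquad
\mathcal{H}(P \,\|\, \bar P) := \lim_{T\to\infty} \frac{1}{T}\,
\mathcal{R}\bigl(P_{[0,T]} \,\|\, \bar P_{[0,T]}\bigr).
```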

    Non-equilibrium steady states of stochastic processes with intermittent resetting

    Stochastic processes that are randomly reset to an initial condition serve as a showcase for investigating non-equilibrium steady states. However, all existing results have been restricted to the special case of memoryless resetting protocols. Here, we obtain the general solution for the distribution of processes in which waiting times between reset events are drawn from an arbitrary distribution. This allows for the investigation of a broader class of much more realistic processes. As an example, our results are applied to the analysis of the efficiency of constrained random search processes. Comment: 5 pages, 4 figures.
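    A renewal-theory sketch of the kind of stationary state described (the symbols are assumed, not quoted from the paper): in the long-time limit the position depends only on the age of the current reset interval, whose stationary density is the scaled survival function of the waiting-time distribution, so the NESS is the reset-free propagator averaged over that age; memoryless resetting is recovered as a special case.

```latex
% p_0(x,t): reset-free propagator started from the reset point;
% \psi: waiting-time density between resets; \Psi: its survival function;
% \langle\tau\rangle: mean waiting time (assumed finite).
\Psi(\tau) = \int_{\tau}^{\infty} \psi(s)\, ds, \qquad
p^{*}(x) = \frac{1}{\langle \tau \rangle} \int_{0}^{\infty} \Psi(\tau)\, p_0(x,\tau)\, d\tau .
% Memoryless resetting, \psi(\tau) = r e^{-r\tau}, recovers the familiar
% p^{*}(x) = r \int_{0}^{\infty} e^{-r\tau}\, p_0(x,\tau)\, d\tau .
```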