
    Excited Brownian motions as limits of excited random walks

    We obtain the convergence in law of a sequence of excited (also called cookie) random walks toward an excited Brownian motion. The latter process is a continuous semi-martingale whose drift is a function, say ϕ, of its local time. It was introduced by Norris, Rogers and Williams as a simplified version of Brownian polymers, and then recently further studied by the authors. To get our results we need to renormalize the sequence of cookies, the time and the space together in a convenient way. The proof follows a general approach already taken by Tóth and his coauthors on multiple occasions, which goes through Ray-Knight type results. Namely, we first prove, when ϕ is bounded and Lipschitz, that the convergence holds at the level of the local time processes. This is done via a careful study of the transition kernel of an auxiliary Markov chain which describes the local time at a given level. We then prove a tightness result and deduce the convergence at the level of the full processes.
    Comment: v.3: main result improved: hypothesis of recurrence removed. To appear in P.T.R.
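    The scaling limit above acts on a discrete cookie walk; a minimal simulation sketch of such a walk follows. The number of cookies per site, the bias p, and all names below are illustrative assumptions, not taken from the paper.

```python
import random

def excited_random_walk(steps, num_cookies=2, p=0.75, seed=0):
    """Simulate a 1-D excited (cookie) random walk.

    On each of the first `num_cookies` visits to a site, the walker
    steps right with probability `p`; once the cookies at that site
    are consumed, the walk is symmetric there.
    """
    rng = random.Random(seed)
    cookies = {}          # site -> number of cookies already consumed there
    x = 0
    path = [0]
    for _ in range(steps):
        used = cookies.get(x, 0)
        prob_right = p if used < num_cookies else 0.5
        cookies[x] = used + 1
        x += 1 if rng.random() < prob_right else -1
        path.append(x)
    return path

path = excited_random_walk(10_000)
```

    With `num_cookies = 0` this reduces to the simple symmetric random walk; increasing `num_cookies` or `p` strengthens the excitation.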

    Stability of the Einstein-Lichnerowicz constraints system

    We study the Einstein-Lichnerowicz constraints system, obtained through the conformal method when addressing the initial data problem for the Einstein equations in a scalar field theory. We prove that this system is stable with respect to the physics data when posed on the standard 3-sphere.
    Comment: Minor changes, some typos fixed and references added

    A constructive mean field analysis of multi population neural networks with random synaptic weights and stochastic inputs

    We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior is described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit \cite{jansen-rit:95}: their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales.
    Comment: 55 pages, 4 figures, to appear in "Frontiers in Neuroscience"
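    A toy Euler-Maruyama discretization of the microscopic scale described above (n neurons whose membrane potentials follow coupled SDEs with a sigmoidal interaction) can be sketched as follows; the drift, the sigmoid, and every parameter name are illustrative assumptions, not the paper's equations.

```python
import math
import random

def simulate_network(n=100, steps=1000, dt=1e-3, tau=0.02,
                     J=0.5, sigma=0.1, seed=0):
    """Euler-Maruyama sketch of n coupled stochastic neurons:
        dV_i = (-V_i/tau + (J/n) * sum_j S(V_j)) dt + sigma dW_i,
    with a sigmoidal firing-rate function S.  A toy microscopic model
    in the spirit of the abstract; the actual equations there differ.
    """
    rng = random.Random(seed)
    S = lambda v: 1.0 / (1.0 + math.exp(-v))   # sigmoidal activity
    V = [0.0] * n
    sqdt = math.sqrt(dt)
    for _ in range(steps):
        # Mean-field coupling: every neuron feels the population activity
        mean_field = (J / n) * sum(S(v) for v in V)
        V = [v + (-v / tau + mean_field) * dt + sigma * sqdt * rng.gauss(0, 1)
             for v in V]
    return V

V = simulate_network()
```

    As n grows, the shared `mean_field` term concentrates, which is the mechanism the mean-field limit formalizes.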

    Forward Vertical Integration: The Fixed-Proportion Case Revisited

    Assuming a fixed-proportion downstream production technology, partial forward integration by an upstream monopolist may be observed whether the monopolist is advantaged or disadvantaged cost-wise relative to fringe firms in the downstream market. Integration need not induce cost-predation and the profits of the fringe may increase. The output price falls and welfare unambiguously rises.

    Over-education for the rich, under-education for the poor: a search-theoretic microfoundation

    This paper studies the efficiency of educational choices in a two-sector/two-schooling-level matching model of the labour market, where a continuum of heterogeneous workers allocate themselves between sectors depending on their decision to invest in education. Individuals differ in ability and schooling cost, the search market is segmented by education, and there is free entry of new firms in each sector. Self-selection in education generates composition effects in the distribution of skills across sectors. This in turn modifies the intensity of job creation, implying that the private and social returns to schooling always differ. Provided that ability and schooling cost are not too positively correlated, agents with large schooling costs (the ‘poor’) select themselves too much, while there is too little self-selection among low-schooling-cost individuals (the ‘rich’). We also show that education should be taxed rather than subsidized when the Hosios condition holds.
    Keywords: ability; schooling cost; heterogeneity; matching frictions; efficiency

    Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we review recent results on spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte-Carlo sampling which is suited to fitting large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential for fitting MaxEnt spatio-temporal models to large neural ensembles.
    Comment: 41 pages, 10 figures
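    For the spatial (synchronous) case, the kind of Monte-Carlo machinery the abstract refers to rests on sampling a pairwise MaxEnt (Ising-like) distribution over binary spike words. A minimal Metropolis sampler follows; the fields h, couplings J, and the toy model at the end are illustrative assumptions, not taken from the paper.

```python
import math
import random

def metropolis_sample(h, J, n_steps=5000, seed=0):
    """Metropolis sampler for a pairwise (Ising-like) MaxEnt model on
    binary spike words s in {0,1}^N with energy
        E(s) = -sum_i h[i]*s[i] - sum_{i<j} J[i][j]*s[i]*s[j],
    where J is symmetric with zero diagonal.  Spatial case only.
    """
    rng = random.Random(seed)
    N = len(h)
    s = [rng.randint(0, 1) for _ in range(N)]
    samples = []
    for _ in range(n_steps):
        i = rng.randrange(N)
        # Energy change when flipping neuron i: dE = -(1 - 2*s[i]) * local field
        local = h[i] + sum(J[i][j] * s[j] for j in range(N) if j != i)
        dE = -(1 - 2 * s[i]) * local
        if dE <= 0 or rng.random() < math.exp(-dE):
            s[i] = 1 - s[i]
        samples.append(list(s))
    return samples

# Toy model: 3 neurons with assumed fields and symmetric couplings
h = [0.2, -0.1, 0.0]
J = [[0.0, 0.3, -0.2],
     [0.3, 0.0, 0.1],
     [-0.2, 0.1, 0.0]]
samples = metropolis_sample(h, J)
```

    Empirical firing rates and pairwise correlations estimated from such samples are what a fitting loop would match against the data's statistics.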

    A Theory of Regularized Markov Decision Processes

    Many recent successful (deep) reinforcement learning algorithms make use of regularization, generally based on entropy or Kullback-Leibler divergence. We propose a general theory of regularized Markov Decision Processes that generalizes these approaches in two directions: we consider a larger class of regularizers, and we consider the general modified policy iteration approach, encompassing both policy iteration and value iteration. The core building blocks of this theory are a notion of regularized Bellman operator and the Legendre-Fenchel transform, a classical tool of convex optimization. This approach allows for error propagation analyses of general algorithmic schemes of which (possibly variants of) classical algorithms such as Trust Region Policy Optimization, Soft Q-learning, Stochastic Actor Critic or Dynamic Policy Programming are special cases. This also draws connections to proximal convex optimization, especially to Mirror Descent.
    Comment: ICML 2019
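    For the negative-entropy regularizer, the Legendre-Fenchel transform mentioned above turns the max in the Bellman operator into a scaled log-sum-exp, recovering soft value iteration. A minimal sketch on an assumed two-state toy MDP follows; the rewards, transitions, and temperature tau are illustrative, not from the paper.

```python
import math

def soft_value_iteration(R, P, gamma=0.95, tau=0.1, iters=500):
    """Value iteration with entropy regularization.  The regularized
    Bellman operator replaces max_a by the conjugate of the scaled
    negative entropy, i.e. a log-sum-exp:
        V(s) = tau * log sum_a exp(Q(s, a) / tau),
    where Q(s, a) = R[s][a] + gamma * sum_s' P[s][a][s'] * V(s').
    """
    nS, nA = len(R), len(R[0])
    V = [0.0] * nS
    for _ in range(iters):
        newV = []
        for s in range(nS):
            Q = [R[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(nS))
                 for a in range(nA)]
            m = max(Q)  # stabilized log-sum-exp
            newV.append(m + tau * math.log(sum(math.exp((q - m) / tau) for q in Q)))
        V = newV
    return V

# Assumed two-state, two-action toy MDP
R = [[1.0, 0.0], [0.0, 0.5]]
P = [[[0.9, 0.1], [0.1, 0.9]],
     [[0.5, 0.5], [0.8, 0.2]]]
V = soft_value_iteration(R, P)
```

    As tau goes to 0 the log-sum-exp approaches the hard max and standard value iteration is recovered; larger tau smooths the greedy step, which is the mechanism behind Soft Q-learning-style methods.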