37 research outputs found

    Prevalent Behavior of Strongly Order Preserving Semiflows

    Classical results in the theory of monotone semiflows give sufficient conditions for the generic solution to converge towards an equilibrium or towards the set of equilibria (quasiconvergence). In this paper, we provide new formulations of these results in terms of the measure-theoretic notion of prevalence. For monotone reaction-diffusion systems with Neumann boundary conditions on convex domains, we show that the set of continuous initial data corresponding to solutions that converge to a spatially homogeneous equilibrium is prevalent. We also extend a previous generic convergence result to allow its use on Sobolev spaces. Careful attention is given to the measurability of the various sets involved. Comment: 18 pages
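    Since the abstract hinges on the notion of prevalence, the standard Hunt-Sauer-Yorke definition is sketched below in LaTeX as background; this is a paraphrase of the standard literature, not text from the paper.

    % Background sketch: prevalence in a complete metrizable topological
    % vector space $V$ (Hunt-Sauer-Yorke); not quoted from the paper.
    A Borel set $S \subset V$ is \emph{shy} if there exists a compactly
    supported Borel probability measure $\mu$ on $V$ such that
    \[
      \mu(S + v) = 0 \quad \text{for every } v \in V .
    \]
    A set is \emph{prevalent} if its complement is shy. When
    $V = \mathbb{R}^n$, prevalence of $S$ is equivalent to
    $\mathrm{Leb}(\mathbb{R}^n \setminus S) = 0$, which is why prevalence
    is read as a measure-theoretic "almost always".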

    On the number of Mather measures of Lagrangian systems

    In 1996, Ricardo Mañé discovered that Mather measures are in fact the minimizers of a "universal" infinite-dimensional linear programming problem. This fundamental result has many applications, one of the most important being the estimation of the generic number of Mather measures. Mañé obtained the first estimate of this kind by using finite-dimensional approximations. Recently, together with Gonzalo Contreras, we used this method of finite-dimensional approximation to solve a conjecture of John Mather concerning the generic number of Mather measures for families of Lagrangian systems. In the present paper we obtain finer results in that direction by applying classical tools of convex analysis directly to the infinite-dimensional problem. We use a notion of countably rectifiable sets of finite codimension in Banach (and Fréchet) spaces which may be of independent interest.
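    For orientation, the linear programming formulation mentioned above can be sketched as follows; this is a schematic statement of the standard setting, with notation assumed here rather than taken from the paper.

    % Schematic form of Mañé's linear programming problem for a Tonelli
    % Lagrangian $L \colon TM \to \mathbb{R}$ on a closed manifold $M$.
    % $\mathcal{H}$ denotes the convex set of holonomic (closed) probability
    % measures on $TM$, which contains all Euler-Lagrange invariant measures.
    \[
      \min_{\mu \in \mathcal{H}} \int_{TM} L \, d\mu .
    \]
    % The functional $\mu \mapsto \int L \, d\mu$ is linear and $\mathcal{H}$
    % is convex, so this is an infinite-dimensional linear program; Mather
    % measures are exactly its minimizers.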

    Adaptation of the generic PDE's results to the notion of prevalence

    Many generic results have been proved, especially concerning the qualitative behaviour of solutions of partial differential equations. Recently, a new notion of "almost always", prevalence, has been developed for vector spaces. This notion is interesting because, for example, prevalent sets coincide with sets of full Lebesgue measure in finite-dimensional spaces. The purpose of this article is to adapt the generic PDE results to the notion of prevalence. In particular, we consider the cases where Sard-Smale theorems or arguments of analytic perturbation of the parameters are used.

    ∊-Optimal Discretized Linear Reward-Penalty Learning Automata

    In this paper we consider variable structure stochastic automata (VSSA), which interact with an environment and dynamically learn the optimal action that the environment offers. Like all VSSA, the automata are fully defined by a set of action probability updating rules [4], [19], [22]. However, to minimize the requirements on the random number generator used to implement the VSSA, and to increase the speed of convergence of the automaton, we consider the case in which the probability updating functions can assume only a finite number of values. These values discretize the probability space [0,1], and hence the automata are called discretized learning automata. The discretized automata are linear because the subintervals of [0,1] are of equal length. We prove the following results: a) two-action discretized linear reward-penalty automata are ergodic and ∊-optimal in all environments whose minimum penalty probability is less than 0.5; b) there exist discretized two-action linear reward-penalty automata that are ergodic and ∊-optimal in all random environments; and c) discretized two-action linear reward-penalty automata with artificially created absorbing barriers are ∊-optimal in all random environments. Apart from these theoretical results, simulation results are presented that indicate the properties of the automata discussed. The rate of convergence of all these automata and some open problems are also discussed.
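    To make the discretized updating scheme concrete, here is a minimal Python sketch of a two-action discretized linear reward-penalty automaton in a stationary random environment; the class name, step count, and environment interface are illustrative assumptions, not taken from the paper.

    import random


    class DiscretizedLRPAutomaton:
        """Illustrative two-action discretized linear reward-penalty automaton.

        The probability of choosing action 0 is kept on the lattice
        {0, 1/n, 2/n, ..., 1}, i.e. [0, 1] split into n equal subintervals.
        The chosen action's probability moves one lattice step toward 1 on
        reward and one step toward 0 on penalty (linear, equal-length steps).
        """

        def __init__(self, n_steps=100):
            self.n = n_steps          # number of equal subintervals of [0, 1]
            self.k = n_steps // 2     # P(action 0) = k / n, start near 0.5

        def choose_action(self):
            return 0 if random.random() < self.k / self.n else 1

        def update(self, action, rewarded):
            # Reward pushes the chosen action's probability up by one step,
            # penalty pushes it down; clamp so k stays in {0, ..., n}.
            if action == 0:
                delta = 1 if rewarded else -1
            else:
                delta = -1 if rewarded else 1
            self.k = min(self.n, max(0, self.k + delta))


    def simulate(penalty_probs=(0.2, 0.7), steps=10_000):
        # Hypothetical stationary environment: action i is penalized with
        # probability penalty_probs[i]; action 0 is the optimal one here.
        automaton = DiscretizedLRPAutomaton(n_steps=100)
        for _ in range(steps):
            a = automaton.choose_action()
            rewarded = random.random() >= penalty_probs[a]
            automaton.update(a, rewarded)
        return automaton.k / automaton.n  # typically ends up close to 1


    if __name__ == "__main__":
        print(simulate())

    In this sketch the clamping at the lattice endpoints acts as a reflecting barrier, keeping the chain ergodic as in results a) and b); the variant in result c) would instead introduce artificially absorbing barriers at the endpoints.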

    ON THREE FAMILIES OF ASYMPTOTICALLY OPTIMAL LINEAR REWARD-PENALTY LEARNING AUTOMATA.

    The authors consider variable-structure stochastic automata (VSSA) that interact with an environment and dynamically learn the optimal action available to them. Like all VSSA, the automata are fully defined by a set of action probability updating rules. They examine the case in which the probability updating functions can assume only a finite number of values. These values discretize the probability space [0,1], and hence the automata are called discretized learning automata. The discretized automata are linear because the subintervals of [0,1] are of equal length. The authors prove the following results: (i) two-action discretized linear reward-penalty automata are ergodic and ∊-optimal in all environments where the minimum penalty probability is less than 0.5; (ii) there exist discretized two-action linear reward-penalty automata that are ergodic and ∊-optimal in all random environments; and (iii) discretized two-action linear reward-penalty automata with artificially created absorbing barriers are ∊-optimal in all random environments.