
    On Bounded Positive Stationary Solutions for a Nonlocal Fisher-KPP Equation

    We study the existence of stationary solutions for a nonlocal version of the Fisher-Kolmogorov-Petrovskii-Piscounov (FKPP) equation. The main motivation is a recent study by Berestycki et al. [Nonlinearity 22 (2009), pp. 2813-2844], where the nonlocal FKPP equation was studied and it was shown, for the spatial domain ℝ and sufficiently small nonlocality, that there are only two bounded non-negative stationary solutions. Here we provide a similar result for ℝ^d using a completely different approach. In particular, an abstract perturbation argument is used in suitable weighted Sobolev spaces. One aim of the alternative strategy is that it can eventually be generalized to obtain persistence results for hyperbolic invariant sets for other nonlocal evolution equations on unbounded domains with small nonlocality, i.e., to improve our understanding in applications of when a small nonlocal influence alters the dynamics and when it does not.
    Comment: 24 pages, 1 figure; revised version
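For orientation, the nonlocal FKPP equation studied in this line of work replaces the local reaction term u(1 - u) by a convolution. A common form (a sketch of the standard setup, not necessarily the paper's exact statement; μ and the kernel φ are generic placeholders) is

```latex
\partial_t u = \Delta u + \mu\, u\,\bigl(1 - \phi * u\bigr),
\qquad
(\phi * u)(x) = \int_{\mathbb{R}^d} \phi(x - y)\, u(y)\, \mathrm{d}y,
```

so bounded stationary solutions solve \(\Delta u + \mu\, u\,(1 - \phi * u) = 0\). When \(\phi\) is a Dirac mass this reduces to the local FKPP equation, whose bounded non-negative stationary states are \(u \equiv 0\) and \(u \equiv 1\); the "small nonlocality" results concern kernels close to that limit.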

    Extended Kalman filtering with stochastic nonlinearities and multiple missing measurements

    Copyright © 2012 Elsevier.
    In this paper, the extended Kalman filtering problem is investigated for a class of nonlinear systems with multiple missing measurements over a finite horizon. Both deterministic and stochastic nonlinearities are included in the system model, where the stochastic nonlinearities are described by statistical means that could reflect the multiplicative stochastic disturbances. The phenomenon of measurement missing occurs in a random way, and the missing probability for each sensor is governed by an individual random variable satisfying a certain probability distribution over the interval [0,1]. Such a probability distribution is allowed to be any commonly used distribution over the interval [0,1] with known conditional probability. The aim of the addressed filtering problem is to design a filter such that, in the presence of both the stochastic nonlinearities and multiple missing measurements, there exists an upper bound for the filtering error covariance. Subsequently, such an upper bound is minimized by properly designing the filter gain at each sampling instant. It is shown that the desired filter can be obtained in terms of the solutions to two Riccati-like difference equations that are of a form suitable for recursive computation in online applications. An illustrative example is given to demonstrate the effectiveness of the proposed filter design scheme.
    This work was supported in part by the National 973 Project under Grant 2009CB320600, the National Natural Science Foundation of China under Grants 61028008, 61134009 and 60825303, the State Key Laboratory of Integrated Automation for the Process Industry (Northeastern University) of China, the Engineering and Physical Sciences Research Council (EPSRC) of the U.K. under Grant GR/S27658/01, the Royal Society of the U.K., and the Alexander von Humboldt Foundation of Germany.
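The missing-measurement mechanism can be illustrated with a much simpler sketch than the paper's setting: a plain linear Kalman filter where each measurement arrives only with some probability, and the update step is skipped on misses. All model matrices and the arrival probability below are hypothetical choices for illustration; the paper itself treats nonlinear dynamics via an EKF and derives the gain from two Riccati-like difference equations rather than skipping updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state constant-velocity model with one sensor.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # state transition
C = np.array([[1.0, 0.0]])         # position is measured
Q = 0.01 * np.eye(2)               # process noise covariance
R = np.array([[0.1]])              # measurement noise covariance
p_arrive = 0.8                     # P(measurement arrives), assumed known

x_true = np.array([0.0, 1.0])
x_est = np.zeros(2)
P = np.eye(2)                      # filtering error covariance

for k in range(50):
    # Simulate the true state and a Bernoulli measurement arrival.
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    gamma = rng.random() < p_arrive
    y = C @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1) if gamma else None

    # Predict step (always performed).
    x_est = A @ x_est
    P = A @ P @ A.T + Q

    # Update step only when the measurement actually arrives.
    if y is not None:
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x_est = x_est + (K @ (y - C @ x_est)).ravel()
        P = (np.eye(2) - K @ C) @ P

print(np.trace(P))  # trace of the error covariance; it remains bounded here
```

The paper's stronger result is that a bound on P can be guaranteed and minimized even with stochastic nonlinearities and per-sensor missing probabilities, by folding the arrival statistics into the gain computation instead of simply skipping updates.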

    Second-order Quantile Methods for Experts and Combinatorial Games

    We aim to design strategies for sequential decision making that adjust to the difficulty of the learning problem. We study this question both in the setting of prediction with expert advice, and for more general combinatorial decision tasks. We are not satisfied with just guaranteeing minimax regret rates; we want our algorithms to perform significantly better on easy data. Two popular ways to formalize such adaptivity are second-order regret bounds and quantile bounds. The underlying notions of 'easy data', which may be paraphrased as "the learning problem has small variance" and "multiple decisions are useful", are synergetic. But even though there are sophisticated algorithms that exploit one of the two, no existing algorithm is able to adapt to both. In this paper we outline a new method for obtaining such adaptive algorithms, based on a potential function that aggregates a range of learning rates (which are essential tuning parameters). By choosing the right prior we construct efficient algorithms and show that they reap both benefits by proving the first bounds that are both second-order and incorporate quantiles.
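The "potential that aggregates a range of learning rates" idea can be sketched numerically: maintain each expert's cumulative instantaneous regret R and its cumulative squared regret V (the second-order quantity), and weight experts by a prior times an average over a grid of learning rates η of η·exp(η R − η² V). This is a crude grid-based stand-in for the paper's construction; the expert losses, the η grid, and all constants are illustrative assumptions, not the paper's algorithm verbatim.

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 10, 300
etas = 2.0 ** -np.arange(1, 10)    # grid approximating a prior over learning rates
prior = np.full(K, 1.0 / K)        # uniform prior over experts

R = np.zeros(K)   # cumulative instantaneous regrets  r_t(i) = <w_t, l_t> - l_t(i)
V = np.zeros(K)   # cumulative squared regrets (the second-order term)

regret_alg = 0.0
cum_expert = np.zeros(K)

for t in range(T):
    # Potential per expert: average over the eta grid of eta * exp(eta*R - eta^2*V).
    pot = (etas[:, None] * np.exp(np.outer(etas, R)
                                  - np.outer(etas**2, V))).mean(axis=0)
    w = prior * pot
    w /= w.sum()

    losses = rng.random(K)
    losses[0] *= 0.1               # expert 0 is the 'easy' (good) one

    mix = w @ losses               # algorithm's loss this round
    r = mix - losses
    R += r
    V += r**2
    regret_alg += mix
    cum_expert += losses

regret = regret_alg - cum_expert.min()
print(regret)
```

Because the potential automatically favors whichever η fits the observed variance, the weights lock onto the good expert quickly and the realized regret stays far below the worst case; the paper proves bounds of this flavor that are simultaneously second-order and quantile-style.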

    New Trends in Differential and Difference Equations and Applications

    This is a reprint of articles from the Special Issue published online in the open-access journal Axioms (ISSN 2075-1680) from 2018 to 2019 (available at https://www.mdpi.com/journal/axioms/special_issues/differential_difference_equations).