70 research outputs found

    Extended Kalman Filter for Turbo-synchronization: Application to DVB-RCS

    Get PDF
    In this paper we present a new algorithm that embeds the so-called Extended Kalman Filter, a technique from stochastic filtering, within a turbo-synchronization loop. Simulated numerical results are presented in the last section.
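
    The abstract does not detail the synchronization model, so the sketch below only illustrates the underlying Extended Kalman Filter idea on a stand-in problem: a scalar EKF tracking a slowly drifting carrier phase from noisy I/Q observations of a known unit pilot symbol. The observation model, the noise levels, and the decoupling from the turbo decoder are all assumptions made for the example, not the paper's algorithm.

```python
import numpy as np

def ekf_phase_track(y, q=1e-4, r=1e-2):
    """Scalar EKF tracking a slowly drifting phase from noisy (I, Q) pairs."""
    theta, p = 0.0, 1.0                                # phase estimate and its variance
    est = []
    for yk in y:
        p = p + q                                      # predict: random-walk phase model
        h = np.array([np.cos(theta), np.sin(theta)])   # predicted I/Q of the unit pilot
        H = np.array([-np.sin(theta), np.cos(theta)])  # Jacobian dh/dtheta
        s = p * (H @ H) + r                            # innovation scale (H @ H == 1 here)
        k = p * H / s                                  # Kalman gain for R = r * I
        theta += k @ (np.asarray(yk) - h)              # correct with the innovation
        p *= 1.0 - k @ H
        est.append(theta)
    return np.array(est)

# usage: a pilot with a slow linear phase drift observed in additive Gaussian noise
rng = np.random.default_rng(0)
true_phase = 0.01 * np.arange(500)
obs = np.c_[np.cos(true_phase), np.sin(true_phase)] + 0.1 * rng.standard_normal((500, 2))
print(ekf_phase_track(obs)[-1], true_phase[-1])
```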

    On the length of one-dimensional reactive paths

    Full text link
    Motivated by some numerical observations on molecular dynamics simulations, we analyze metastable trajectories in a very simple setting, namely paths generated by a one-dimensional overdamped Langevin equation for a double-well potential. More precisely, we are interested in so-called reactive paths, namely trajectories which definitively leave one well and reach the other one. The aim of this paper is to precisely analyze the distribution of the lengths of reactive paths in the limit of small temperature, and to compare the theoretical results to numerical results obtained by a Monte Carlo method, namely the multi-level splitting approach.
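
    For readers who want to see the setting concretely, the sketch below runs a plain Euler-Maruyama discretization of dX_t = -V'(X_t) dt + sqrt(2 eps) dW_t with V(x) = (x^2 - 1)^2 / 4 and records the durations of excursions that leave the left region A = {x <= -0.8} and reach the right region B = {x >= 0.8} without returning to A. This is direct simulation only; the multi-level splitting method mentioned in the abstract is not implemented, and the thresholds and parameters are illustrative choices.

```python
import numpy as np

def reactive_path_lengths(eps=0.1, dt=1e-3, n_steps=1_000_000, seed=0):
    """Durations of A -> B reactive paths along one long Langevin trajectory."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * eps * dt) * rng.standard_normal(n_steps)
    x, t_exit_a, lengths, in_transit = -1.0, 0.0, [], False
    for n in range(n_steps):
        x += -x * (x * x - 1.0) * dt + noise[n]  # Euler-Maruyama step, drift -V'(x)
        t = (n + 1) * dt
        if x <= -0.8:                    # in (or back in) region A: restart the clock
            t_exit_a, in_transit = t, True
        elif x >= 0.8 and in_transit:    # reached region B: one reactive path completed
            lengths.append(t - t_exit_a)
            in_transit = False
    return np.array(lengths)

lengths = reactive_path_lengths()
print(len(lengths), lengths.mean() if len(lengths) else float("nan"))
```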

    Nearest neighbor classification in infinite dimension

    Get PDF
    Let $X$ be a random element in a metric space $(\mathcal{F},d)$, and let $Y$ be a random variable with value 0 or 1. $Y$ is called the class, or the label, of $X$. Assume $n$ i.i.d. copies $(X_i,Y_i)_{1\le i\le n}$ are given. The problem of classification is to predict the label of a new random element $X$. The $k$-nearest neighbor classifier consists in the following simple rule: look at the $k$ nearest neighbors of $X$ and choose 0 or 1 for its label according to the majority vote. If $(\mathcal{F},d)=(\mathbb{R}^d,\|\cdot\|)$, Stone proved in 1977 the universal consistency of this classifier: its probability of error converges to the Bayes error, whatever the distribution of $(X,Y)$. We show in this paper that this result is no longer valid in general metric spaces. However, if $(\mathcal{F},d)$ is separable and a regularity condition is assumed, then the $k$-nearest neighbor classifier is weakly consistent.
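
    A minimal version of the majority-vote rule described above, written for Euclidean data but taking the metric as a parameter so that it matches the general metric-space setting; the toy data in the usage lines are invented for the example.

```python
import numpy as np

def knn_classify(x, X, Y, k=5, dist=lambda a, b: np.linalg.norm(a - b)):
    """k-nearest-neighbor majority vote for labels in {0, 1} (ties go to 1)."""
    d = np.array([dist(x, xi) for xi in X])   # distances from x to the n training points
    nearest = np.argsort(d)[:k]               # indices of the k nearest neighbors
    return int(2 * Y[nearest].sum() >= k)     # majority vote over the neighbor labels

# usage on a small synthetic sample
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + np.where(rng.random(200) < 0.5, 0.0, 2.0)[:, None]
Y = (X[:, 0] + X[:, 1] > 2.0).astype(int)
print(knn_classify(np.array([3.0, 3.0]), X, Y, k=7))
```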

    Long time asymptotics for some dynamical noise free non-linear filtering problems

    Get PDF
    Available in the files attached to this document.

    Long time asymptotics for some dynamical noise free non-linear filtering problems, new cases

    Get PDF
    We are interested here in the long-time behaviour of the conditional law for a special case of the filtering problem: there is no noise on the state equation, and the prior law of the state process concentrates quickly in some neighborhood of a limit cycle with strictly negative characteristic exponents. Assuming a deterministic observability property on the cycle, we then show the concentration of the conditional law on an arbitrary neighborhood of the current (unknown) state as time goes to infinity. This work can be seen as illustrating how tools from dynamical systems theory can be used to study the long-time behavior of the filtering process.
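
    The sketch below is a toy caricature of this noise-free setting, not the paper's model: the state is a phase rotating deterministically on a circle (standing in for a limit cycle), only its cosine is observed in Gaussian noise, and a weighted-particle approximation of the conditional law is run. Because the state equation has no noise, the particles follow deterministic trajectories and only their weights evolve; they are expected to concentrate around the true (unknown) phase as time grows.

```python
import numpy as np

rng = np.random.default_rng(0)
omega, sigma, n_part, n_steps = 0.3, 0.2, 2000, 200
theta_true = 1.0                                  # unknown true initial phase
thetas = rng.uniform(0.0, 2 * np.pi, n_part)      # particles: candidate initial phases
logw = np.zeros(n_part)                           # log-weights of the particles

for _ in range(n_steps):
    theta_true += omega                           # deterministic (noise-free) dynamics
    thetas += omega
    y = np.cos(theta_true) + sigma * rng.standard_normal()  # noisy observation
    logw += -0.5 * ((y - np.cos(thetas)) / sigma) ** 2      # Bayes weight update

w = np.exp(logw - logw.max())
w /= w.sum()
estimate = np.angle(np.sum(w * np.exp(1j * thetas)))        # circular posterior mean
print(estimate % (2 * np.pi), theta_true % (2 * np.pi))
```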

    Sur la vitesse de convergence de l'estimateur du plus proche voisin baggé

    Get PDF
    In this communication, we are interested in the estimation of the…

    Fluctuations of Rare Event Simulation with Monte Carlo Splitting in the Small Noise Asymptotics

    Full text link
    Diffusion processes with small noise conditioned to reach a target set are considered. The AMS algorithm is a Monte Carlo method that is used to sample such rare events by iteratively simulating clones of the process and selecting trajectories that have reached the highest value of a so-called importance function. In this paper, the large sample size relative variance of the AMS small probability estimator is considered. The main result is a large deviations logarithmic equivalent of the latter in the small noise asymptotics, which is rigorously derived. It is given as a maximisation problem explicit in terms of the quasi-potential cost function associated with the underlying small noise large deviations. Necessary and sufficient geometric conditions ensuring the vanishing of the obtained quantity ('weak' asymptotic efficiency) are provided. Interpretations and practical consequences are discussed
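
    The sketch below is a compact toy implementation of the selection/cloning mechanism described above, on a 1-D diffusion in the double-well potential V(x) = (x^2 - 1)^2 / 4 started near the left well and conditioned to reach B = {x >= 1} before falling back to A = {x <= -1}, with importance function xi(x) = x. The dynamics, sets, importance function and parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate(x0, eps, dt, rng):
    """Run dX = -V'(X) dt + sqrt(2 eps) dW from x0 until X enters A or B."""
    path, x = [x0], x0
    while -1.0 < x < 1.0:
        x += -x * (x * x - 1.0) * dt + np.sqrt(2.0 * eps * dt) * rng.standard_normal()
        path.append(x)
    return np.array(path)

def ams(n_rep=50, x0=-0.9, eps=0.05, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    paths = [simulate(x0, eps, dt, rng) for _ in range(n_rep)]
    levels = np.array([p.max() for p in paths])    # max of the importance function
    p_hat = 1.0
    while levels.min() < 1.0:                      # until every replica has reached B
        kill = int(np.argmin(levels))              # discard the lowest-performing replica
        p_hat *= (n_rep - 1) / n_rep               # one replica out of n_rep is discarded
        donor = rng.choice([i for i in range(n_rep) if i != kill])
        cross = int(np.argmax(paths[donor] >= levels[kill]))  # first crossing of that level
        new = np.concatenate([paths[donor][:cross + 1],
                              simulate(paths[donor][cross], eps, dt, rng)[1:]])
        paths[kill], levels[kill] = new, new.max()
    return p_hat * np.mean([p[-1] >= 1.0 for p in paths])

print(ams())
```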

    On the Rate of Convergence of the Functional k-NN Estimates

    Get PDF
    Let $\mathcal{F}$ be a general separable metric space and denote by $\mathcal{D}_n=\{(\mathbf{X}_1,Y_1),\ldots,(\mathbf{X}_n,Y_n)\}$ independent and identically distributed $\mathcal{F}\times\mathbb{R}$-valued random variables with the same distribution as a generic pair $(\mathbf{X},Y)$. In the regression function estimation problem, the goal is to estimate, for fixed $\mathbf{x}\in\mathcal{F}$, the regression function $r(\mathbf{x})=\mathbb{E}[Y\mid\mathbf{X}=\mathbf{x}]$ using the data $\mathcal{D}_n$. Motivated by a broad range of potential applications, we propose, in the present contribution, to investigate the properties of the so-called $k_n$-nearest neighbor regression estimate. We present explicit general finite sample upper bounds, and particularize our results to important function spaces, such as reproducing kernel Hilbert spaces, Sobolev spaces or Besov spaces.
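
    A minimal form of the $k_n$-nearest-neighbor regression estimate discussed above: $r_n(\mathbf{x})$ is the average of the responses attached to the $k_n$ sample points closest to $\mathbf{x}$. The Euclidean distance and the toy data are assumptions made for the example; the abstract allows a general separable metric space.

```python
import numpy as np

def knn_regress(x, X, Y, k):
    """Average the responses of the k training points nearest to x."""
    nearest = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return Y[nearest].mean()

# usage: noisy observations of r(x) = sin(x1) + x2
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 2))
Y = np.sin(X[:, 0]) + X[:, 1] + 0.1 * rng.standard_normal(500)
x0 = np.array([0.5, -0.3])
print(knn_regress(x0, X, Y, k=int(np.sqrt(500))), np.sin(0.5) - 0.3)
```
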
    • …