
    A deconvolution approach to estimation of a common shape in a shifted curves model

    This paper considers the problem of adaptive estimation of a mean pattern in a randomly shifted curve model. We show that this problem can be transformed into a linear inverse problem, where the density of the random shifts plays the role of a convolution operator. An adaptive estimator of the mean pattern, based on wavelet thresholding, is proposed. We study its consistency for the quadratic risk as the number of observed curves tends to infinity, and this estimator is shown to achieve a near-minimax rate of convergence over a large class of Besov balls. This rate depends both on the smoothness of the common shape of the curves and on the decay of the Fourier coefficients of the density of the random shifts. Hence, this paper makes a connection between mean pattern estimation and the statistical analysis of linear inverse problems, which is a new point of view on curve registration and image warping problems. We also provide a new method to estimate the unknown random shifts between curves. Some numerical experiments are given to illustrate the performance of our approach and to compare it with an existing algorithm from the literature.
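
    To make the deconvolution viewpoint concrete, here is a minimal sketch (not the paper's estimator): curves are perturbed by Gaussian random shifts of known standard deviation, their Fourier coefficients are averaged, and the average is divided by the characteristic function of the shift density, with a crude spectral cut-off standing in for the wavelet thresholding described above. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 200, 256                          # number of curves, samples per curve
t = np.arange(T) / T
f = np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)   # true common shape (illustrative)

sigma_shift = 0.05                       # std of the Gaussian random shifts (assumption)
shifts = rng.normal(0.0, sigma_shift, size=n)
curves = np.array([np.interp((t - s) % 1.0, t, f, period=1.0) for s in shifts])
curves += 0.1 * rng.standard_normal((n, T))               # additive observation noise

freqs = np.fft.fftfreq(T, d=1.0 / T)                       # integer frequencies
mean_coeffs = np.fft.fft(curves, axis=1).mean(axis=0) / T  # averaged Fourier coefficients
char_fn = np.exp(-0.5 * (2 * np.pi * freqs * sigma_shift) ** 2)  # E[exp(-2i*pi*k*shift)]

keep = np.abs(char_fn) > 0.1             # crude spectral cut-off instead of wavelet thresholding
est_coeffs = np.zeros_like(mean_coeffs)
est_coeffs[keep] = mean_coeffs[keep] / char_fn[keep]
f_hat = np.real(np.fft.ifft(est_coeffs)) * T               # estimated mean pattern

print("relative L2 error:", np.linalg.norm(f_hat - f) / np.linalg.norm(f))
```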

    Sharp template estimation in a shifted curves model

    This paper considers the problem of adaptive estimation of a template in a randomly shifted curve model. Using the Fourier transform of the data, we show that this problem can be transformed into a stochastic linear inverse problem. Our aim is to approach the estimator that has the smallest risk on the true template over a finite set of linear estimators defined in the Fourier domain. Based on the principle of unbiased empirical risk minimization, we derive a nonasymptotic oracle inequality in the case where the law of the random shifts is known. This inequality can then be used to obtain adaptive results on Sobolev spaces as the number of observed curves tends to infinity. Some numerical experiments are given to illustrate the performance of our approach.
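
    The idea of selecting a linear estimator in the Fourier domain by unbiased empirical risk minimization can be sketched as follows; the filter family (spectral cut-offs), the noise model and the risk formula are simplifying assumptions for illustration, not the paper's exact construction (random shifts are omitted here).

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, sigma = 100, 128, 0.2              # curves, grid size, noise level (assumptions)
t = np.arange(T) / T
template = np.sin(2 * np.pi * t) ** 3    # illustrative true template

data = template + sigma * rng.standard_normal((n, T))     # shifts omitted in this toy version
y = np.fft.fft(data, axis=1).mean(axis=0) / T             # averaged Fourier coefficients
noise_var = sigma ** 2 / (n * T)                           # approx. variance per coefficient
freqs = np.abs(np.fft.fftfreq(T, d=1.0 / T))

best_m, best_risk = None, np.inf
for m in range(1, T // 2):               # finite family of spectral cut-off filters
    filt = (freqs <= m).astype(float)
    # Unbiased risk estimate (up to a constant) for the diagonal filter `filt`.
    risk = np.sum((filt - 1.0) ** 2 * np.abs(y) ** 2) + 2.0 * noise_var * filt.sum()
    if risk < best_risk:
        best_m, best_risk = m, risk

filt = (freqs <= best_m).astype(float)
estimate = np.real(np.fft.ifft(filt * y)) * T              # selected linear estimator
print("selected cut-off frequency:", best_m)
```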

    Classification with the nearest neighbor rule in general finite dimensional spaces: necessary and sufficient conditions

    Given an $n$-sample of random vectors $(X_i,Y_i)_{1 \leq i \leq n}$ whose joint law is unknown, the long-standing problem of supervised classification aims to optimally predict the label $Y$ of a new observation $X$. In this context, the nearest neighbor rule is a popular, flexible and intuitive method in non-parametric situations. Even though this algorithm is commonly used in the machine learning and statistics communities, little is known about its prediction ability in general finite dimensional spaces, especially when the support of the density of the observations is $\mathbb{R}^d$. This paper is devoted to the study of the statistical properties of the nearest neighbor rule in various situations. In particular, attention is paid to the marginal law of $X$, as well as the smoothness and margin properties of the regression function $\eta(X) = \mathbb{E}[Y \mid X]$. We identify two necessary and sufficient conditions to obtain uniform consistency rates of classification and to derive sharp estimates in the case of the nearest neighbor rule. Some numerical experiments are proposed at the end of the paper to help illustrate the discussion.
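
    A minimal implementation of the nearest neighbor rule studied above; the data-generating model, the regression function and the choice of $k$ are arbitrary illustrations rather than the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_train, n_test, k = 2, 500, 200, 5   # dimension, sample sizes, number of neighbors

def eta(X):
    """Regression function eta(x) = P(Y = 1 | X = x) (illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-3.0 * X[:, 0]))

X_train = rng.standard_normal((n_train, d))
y_train = (rng.random(n_train) < eta(X_train)).astype(int)
X_test = rng.standard_normal((n_test, d))
y_test = (rng.random(n_test) < eta(X_test)).astype(int)

def knn_predict(x, X, y, k):
    """Majority vote among the k nearest neighbors of x in the training sample."""
    dists = np.linalg.norm(X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(y[nearest].mean() >= 0.5)

preds = np.array([knn_predict(x, X_train, y_train, k) for x in X_test])
print("test misclassification rate:", np.mean(preds != y_test))
```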

    Regret bounds for Narendra-Shapiro bandit algorithms

    Narendra-Shapiro (NS) algorithms are bandit-type algorithms introduced in the sixties (with a view to applications in psychology and learning automata), whose convergence has been intensively studied in the stochastic algorithm literature. In this paper, we address the following question: are NS bandit algorithms competitive from a regret point of view? In our main result, we show that competitive bounds can be obtained for such algorithms in their penalized version (introduced in \cite{Lamberton_Pages}). More precisely, up to an over-penalization modification, the pseudo-regret $\bar{R}_n$ of the penalized two-armed bandit algorithm is uniformly bounded by $C\sqrt{n}$ (where $C$ is made explicit in the paper). We also generalize existing convergence and rate-of-convergence results to the multi-armed case of the over-penalized bandit algorithm, including convergence toward the invariant measure of a Piecewise Deterministic Markov Process (PDMP) after a suitable renormalization. Finally, ergodic properties of this PDMP are given in the multi-armed case.
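
    A schematic sketch of a penalized Narendra-Shapiro-type two-armed bandit update; the step-size and penalty schedules below are illustrative choices and do not reproduce the exact over-penalized scheme analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
p = np.array([0.7, 0.5])                 # unknown success probabilities of the two arms
n_steps = 10_000
x = 0.5                                  # current probability of playing arm 0
pseudo_regret = 0.0

for n in range(1, n_steps + 1):
    gamma = 1.0 / np.sqrt(n)             # step size (illustrative)
    rho = 0.5 * gamma                    # penalty intensity (illustrative)
    arm = 0 if rng.random() < x else 1
    reward = float(rng.random() < p[arm])
    pseudo_regret += p.max() - p[arm]
    if arm == 0:
        # Reward reinforces arm 0; failure penalizes it.
        x += gamma * reward * (1.0 - x) - rho * (1.0 - reward) * x
    else:
        # Reward reinforces arm 1; failure penalizes it (i.e. pushes mass back to arm 0).
        x += -gamma * reward * x + rho * (1.0 - reward) * (1.0 - x)
    x = min(max(x, 0.0), 1.0)

print("pseudo-regret:", pseudo_regret, "final probability of best arm:", x)
```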

    L2 Boosting on generalized Hoeffding decomposition for dependent variables. Application to Sensitivity Analysis

    This paper is dedicated to the study of an estimator of the generalized Hoeffding decomposition. We build such an estimator using an empirical Gram-Schmidt approach and derive a consistency rate in a large-dimensional setting. We then apply a greedy algorithm based on these estimators to sensitivity analysis. We also establish the consistency of this $\mathbb{L}_2$-boosting under sparsity assumptions on the signal to be analysed. We end the paper with numerical experiments, which demonstrate the low computational cost of our method as well as its efficiency on standard sensitivity analysis benchmarks.
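
    A rough sketch of the two ingredients described above: an empirical Gram-Schmidt orthonormalization of a candidate dictionary, followed by a greedy $\mathbb{L}_2$-boosting (matching-pursuit style) fit. The dictionary, data model and shrinkage factor are assumptions made for illustration and do not correspond to the paper's Hoeffding basis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
X = rng.random((n, 3))                                     # three (possibly dependent) inputs
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

# Dictionary of candidate functions of the inputs (illustrative choice).
dictionary = [X[:, 0], X[:, 0] ** 2, np.sin(2 * np.pi * X[:, 0]),
              X[:, 1], X[:, 1] ** 2, X[:, 2]]

# Empirical Gram-Schmidt: orthonormalize the dictionary w.r.t. the empirical inner product.
basis = []
for g in dictionary:
    h = g - g.mean()
    for b in basis:
        h = h - (h @ b) / n * b
    norm = np.sqrt((h @ h) / n)
    if norm > 1e-10:
        basis.append(h / norm)

# Greedy L2-boosting: repeatedly add the basis element most correlated with the residual.
residual = y - y.mean()
shrink = 0.5                                               # shrinkage factor (illustrative)
for _ in range(50):
    corrs = [abs(residual @ b) / n for b in basis]
    j = int(np.argmax(corrs))
    coef = (residual @ basis[j]) / n
    residual -= shrink * coef * basis[j]

print("residual mean squared error:", np.mean(residual ** 2))
```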

    Long time behaviour and stationary regime of memory gradient diffusions

    In this paper, we are interested in a diffusion process based on a gradient descent. The process is non-Markovian and has a memory term built as a weighted average of the drift along the past of the trajectory. For this type of diffusion, we study the long-time behaviour of the process in terms of the memory. We exhibit conditions for the long-time stability of the dynamical system and, when it is stable, provide convergence properties of the occupation measures and of the marginal distributions to the associated steady regimes. When the memory is too long, we show that the dynamical system generally tends to explode, and in the particular Gaussian case we obtain the rate of divergence explicitly.
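
    As an illustration, a simple Euler discretization of a memory gradient diffusion in which the drift is an exponentially weighted average of past gradients (one particular choice of memory kernel; the paper covers more general weightings). The potential, noise level and decay rate below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def grad_f(x):
    """Gradient of the potential f(x) = x^2 / 2 (illustrative choice)."""
    return x

dt, sigma, lam = 1e-3, 1.0, 2.0          # time step, noise level, memory decay rate (assumptions)
n_steps = 100_000

x, m = 3.0, 0.0                          # position and memory (weighted average of past drifts)
traj = np.empty(n_steps)
for i in range(n_steps):
    m += lam * (grad_f(x) - m) * dt      # exponentially weighted memory of the gradient
    x += -m * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    traj[i] = x

# Occupation-measure statistics over the second half of the trajectory (stationary regime).
print("empirical mean and variance:", traj[n_steps // 2:].mean(), traj[n_steps // 2:].var())
```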

    Adaptive sequential design for regression on Schauder Basis

    We present a new sequential algorithm to perform both optimal design and model selection in a multi-resolution family of functions. This algorithm relies on a localization property of discrete sequential D- and A-optimal designs for the Schauder basis. We combine this property with a simulated annealing strategy to obtain our stochastic algorithm. We illustrate its efficiency in several numerical experiments.
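
    A toy sketch of sequential D-optimal point selection with a simulated-annealing acceptance step, using triangular (hat) functions as a stand-in for a Schauder-type basis; the basis, candidate grid, cooling schedule and acceptance rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

def hat(x, center, width):
    """Triangular (hat) function, the building block of Schauder-type bases."""
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, None)

centers = np.linspace(0.125, 0.875, 7)

def features(x):
    return np.array([1.0, x] + [hat(x, c, 0.125) for c in centers])

candidates = np.linspace(0.0, 1.0, 201)
design = list(rng.choice(candidates, size=len(features(0.0)), replace=False))  # initial design

def log_det_info(points):
    """Log-determinant of the (regularized) information matrix: the D-optimality criterion."""
    F = np.array([features(x) for x in points])
    _, logdet = np.linalg.slogdet(F.T @ F + 1e-8 * np.eye(F.shape[1]))
    return logdet

for step in range(200):
    temp = 1.0 / (1.0 + step)                    # cooling schedule (illustrative)
    x_new = rng.choice(candidates)
    gain = log_det_info(design + [x_new]) - log_det_info(design)
    # Accept a candidate point if it improves the D-criterion, or with a
    # Boltzmann probability otherwise (simulated-annealing flavor).
    if gain > 0 or rng.random() < np.exp(gain / temp):
        design.append(x_new)

print("final design size:", len(design))
```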

    Intensity estimation of non-homogeneous Poisson processes from shifted trajectories

    This paper considers the problem of adaptive estimation of a non-homogeneous intensity function from the observation of $n$ independent Poisson processes having a common intensity that is randomly shifted for each observed trajectory. We show that estimating this intensity is a deconvolution problem for which the density of the random shifts plays the role of the convolution operator. In an asymptotic setting where the number $n$ of observed trajectories tends to infinity, we derive upper and lower bounds for the minimax quadratic risk over Besov balls. Non-linear thresholding in a Meyer wavelet basis is used to derive an adaptive estimator of the intensity. The proposed estimator is shown to achieve a near-minimax rate of convergence. This rate depends both on the smoothness of the intensity function and on the density of the random shifts, which makes a connection between the classical deconvolution problem in nonparametric statistics and the estimation of a mean intensity from the observations of independent Poisson processes.
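
    A simplified sketch of the deconvolution step for randomly shifted Poisson trajectories: binned counts are averaged and deconvolved in the Fourier domain, with a crude spectral cut-off standing in for the Meyer-wavelet thresholding described above. Gaussian shifts with known variance and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, T = 300, 256                          # number of trajectories, number of bins
t = np.arange(T) / T
intensity = 50.0 * (1.0 + np.sin(2 * np.pi * t))            # common intensity on [0, 1]

sigma_shift = 0.05                       # std of the Gaussian random shifts (assumption)
counts = np.zeros((n, T))
for i in range(n):
    shift = rng.normal(0.0, sigma_shift)
    lam_i = np.interp((t - shift) % 1.0, t, intensity, period=1.0)
    counts[i] = rng.poisson(lam_i / T)                       # binned counts of the i-th process

mean_coeffs = np.fft.fft(counts.mean(axis=0) * T) / T        # Fourier coefficients of the mean
freqs = np.fft.fftfreq(T, d=1.0 / T)
char_fn = np.exp(-0.5 * (2 * np.pi * freqs * sigma_shift) ** 2)

keep = np.abs(char_fn) > 0.1             # spectral cut-off in place of wavelet thresholding
est_coeffs = np.zeros_like(mean_coeffs)
est_coeffs[keep] = mean_coeffs[keep] / char_fn[keep]
lam_hat = np.real(np.fft.ifft(est_coeffs)) * T               # estimated intensity

print("relative L2 error:", np.linalg.norm(lam_hat - intensity) / np.linalg.norm(intensity))
```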

    Intensity estimation of non-homogeneous Poisson processes from shifted trajectories

    In this paper, we consider the problem of estimating nonparametrically a mean pattern intensity $\lambda$ from the observation of $n$ independent and non-homogeneous Poisson processes $N_1,\ldots,N_n$ on the interval $[0,1]$. This problem arises when data (counts) are collected independently from $n$ individuals according to similar Poisson processes. We show that estimating this intensity is a deconvolution problem for which the density of the random shifts plays the role of the convolution operator. In an asymptotic setting where the number $n$ of observed trajectories tends to infinity, we derive upper and lower bounds for the minimax quadratic risk over Besov balls. Non-linear thresholding in a Meyer wavelet basis is used to derive an adaptive estimator of the intensity. The proposed estimator is shown to achieve a near-minimax rate of convergence. This rate depends both on the smoothness of the intensity function and on the density of the random shifts, which makes a connection between the classical deconvolution problem in nonparametric statistics and the estimation of a mean intensity from the observations of independent Poisson processes.
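
    As a complement to the Fourier-domain sketch above, here is a small illustration of the non-linear thresholding step itself, applied to a noisy intensity-like signal using the discrete Meyer wavelet ('dmey') from the PyWavelets package; the universal threshold and the Gaussian noise model are standard illustrative choices, not the paper's exact procedure.

```python
import numpy as np
import pywt

rng = np.random.default_rng(8)
T = 256
t = np.arange(T) / T
signal = 50.0 * (1.0 + np.sin(2 * np.pi * t))              # intensity-like signal (illustrative)
noisy = signal + 5.0 * rng.standard_normal(T)

coeffs = pywt.wavedec(noisy, 'dmey', mode='periodization')
sigma = np.median(np.abs(coeffs[-1])) / 0.6745             # noise level from the finest scale
thresh = sigma * np.sqrt(2.0 * np.log(T))                  # universal threshold (illustrative)
den_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode='hard') for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, 'dmey', mode='periodization')

print("L2 error before / after thresholding:",
      np.linalg.norm(noisy - signal), np.linalg.norm(denoised - signal))
```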