90 research outputs found

    A new adaptive local polynomial density estimation procedure on complicated domains

    Full text link
    This paper presents a novel approach for pointwise estimation of multivariate density functions on known domains of arbitrary dimension using nonparametric local polynomial estimators. Our method is highly flexible: it applies both to simple domains, such as open connected sets, and to more complicated domains that are not star-shaped around the point of estimation. This enables us to handle domains with sharp concavities, holes, and local pinches, such as polynomial sectors. Additionally, we introduce a data-driven selection rule based on the general ideas of Goldenshluger and Lepski. Our results demonstrate that the local polynomial estimators are minimax under the L^2 risk across a wide range of H\"older-type functional classes. In the adaptive case, we provide oracle inequalities and explicitly determine the convergence rate of our statistical procedure. Simulations on polynomial sectors show that our oracle estimates outperform those of the most popular alternative method, found in the sparr package for the R software. Our statistical procedure is implemented in a readily accessible online R package. Comment: 35 pages, 4 figures
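The abstract gives no code, so as a rough illustration of why estimation on a known domain needs boundary-aware machinery, the sketch below implements the classical cut-and-normalize kernel correction on [0, 1] in one dimension. This is a much simpler device than the paper's local polynomial construction (and the `sparr` package it benchmarks against); all function names and parameter choices here are ours, for illustration only.

```python
import math
import numpy as np

def norm_pdf(z):
    """Standard Gaussian density."""
    return np.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def boundary_corrected_kde(x, data, h, domain=(0.0, 1.0)):
    """Pointwise Gaussian kernel density estimate at x, renormalized so
    that only the kernel mass falling inside the known domain counts
    (the classical 'cut-and-normalize' boundary correction)."""
    lo, hi = domain
    raw = norm_pdf((x - data) / h).mean() / h
    inside_mass = norm_cdf((hi - x) / h) - norm_cdf((lo - x) / h)
    return raw / inside_mass

rng = np.random.default_rng(0)
sample = rng.uniform(0.0, 1.0, size=5000)   # true density is 1 on [0, 1]
corrected = boundary_corrected_kde(0.02, sample, h=0.05)
uncorrected = norm_pdf((0.02 - sample) / 0.05).mean() / 0.05
```

Near the boundary the uncorrected estimate loses the kernel mass spilling outside [0, 1] (it comes out well below the true value 1), while dividing by the retained mass restores first-order consistency.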

    Maximum likelihood estimators and random walks in long memory models

    No full text
    To appear in "Statistics". We consider statistical models driven by Gaussian and non-Gaussian self-similar processes with long memory, and we construct maximum likelihood estimators (MLE) for the drift parameter. Our approach is based on the approximation of the driving noise by random walks. We study the asymptotic behavior of the estimators and give some numerical simulations to illustrate our results.

    Minimax properties of Dirichlet kernel density estimators

    Full text link
    This paper is concerned with the asymptotic behavior, in \beta-H\"older spaces and under L^p losses, of a Dirichlet kernel density estimator proposed by Aitchison & Lauder (1985) for the analysis of compositional data. In recent work, Ouimet & Tolosana-Delgado (2022) established the uniform strong consistency and asymptotic normality of this nonparametric estimator. As a complement, it is shown here that for p \in [1, 3) and \beta \in (0, 2], the Aitchison--Lauder estimator can achieve the minimax rate asymptotically for a suitable choice of bandwidth, but that this estimator cannot be minimax when either p \in [4, \infty) or \beta \in (2, \infty). These results extend to the multivariate case and also rectify, in a minor way, earlier findings of Bertin & Klutchnikoff (2011) concerning the minimax properties of Beta kernel estimators. Comment: 15 pages, 1 figure
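As a concrete anchor for the estimator under discussion, here is a minimal sketch of its univariate special case: the Beta kernel estimator on [0, 1], to which the Aitchison--Lauder Dirichlet kernel estimator reduces on the one-dimensional simplex. The bandwidth value and helper names are our own illustrative assumptions, not choices made in the paper.

```python
import math
import numpy as np

def beta_pdf(x, a, b):
    """Beta(a, b) density evaluated at an array x in (0, 1)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return np.exp(log_norm + (a - 1.0) * np.log(x) + (b - 1.0) * np.log(1.0 - x))

def beta_kernel_density(x, data, b):
    """Beta kernel density estimate at x: averages the
    Beta(x/b + 1, (1 - x)/b + 1) kernel over the sample, so the kernel
    shape adapts to the boundary of [0, 1] instead of spilling over it."""
    return beta_pdf(data, x / b + 1.0, (1.0 - x) / b + 1.0).mean()

rng = np.random.default_rng(1)
sample = rng.beta(2.0, 5.0, size=4000)          # true density: Beta(2, 5)
est = beta_kernel_density(0.2, sample, b=0.01)
true_val = beta_pdf(np.array([0.2]), 2.0, 5.0)[0]   # 30 * 0.2 * 0.8^4 = 2.4576
```

The asymmetric, location-dependent kernel is what distinguishes this estimator from ordinary convolution kernels, and it is the source of the bandwidth-order effects that drive the minimax results above.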

    Estimateur de type Lasso pour modèle mixte non-paramétrique

    Get PDF
    National audience. The penalization of the likelihood by an L1 norm has become relatively standard in high dimension when the model is assumed to be based on n independent and identically distributed observations. These techniques can improve prediction accuracy (regularization implies variance reduction) while remaining interpretable (sparsity identifies a subset of variables with strong effects). From a computational point of view, these penalties are attractive, and their theoretical properties have been extensively studied in recent years. Several authors have recently suggested methods for analyzing high-dimensional longitudinal or clustered data using an L1 penalization in mixed models. These approaches have been developed for variable selection in linear mixed models and generalized linear mixed models, but less so for nonlinear mixed models. Few works have considered the problem of selecting nonlinear functions using an L1-type penalization method in a nonparametric mixed model, with or without covariates. In this setting, the nonlinear functions are approximated by a linear combination of smoothing functions (splines, wavelets, or Fourier bases), possibly combined with irregular functions (spiky bases).
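The mechanism the abstract builds on, L1 penalization driving most coefficients exactly to zero, can be sketched for the plain (non-mixed) linear model. The coordinate-descent routine and the simulated design below are our own illustration of that basic lasso mechanism, not the nonparametric mixed-model procedure of the paper.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding: the proximal operator of the L1 norm,
    which is what makes lasso estimates exactly sparse."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n    # per-coordinate curvature
    r = y - X @ b                        # current residual
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * b[j]       # put coordinate j back in the residual
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r = r - X[:, j] * b[j]       # take the updated coordinate out again
    return b

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]         # only three active variables
y = X @ beta_true + rng.normal(scale=0.5, size=n)

b_hat = lasso_cd(X, y, lam=0.1)
n_selected = int(np.count_nonzero(b_hat))
```

The three strong effects survive the penalty while almost all of the 47 inert coordinates are set exactly to zero, which is the variance-reduction-plus-interpretability trade-off described above; the mixed-model extensions apply the same penalty to basis coefficients of the nonlinear functions.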