35 research outputs found

    Analysis of the rate of convergence of an over-parametrized deep neural network estimate learned by gradient descent

    Full text link
    Estimation of a regression function from independent and identically distributed random variables is considered. The L_2 error, with integration with respect to the design measure, is used as the error criterion. Over-parametrized deep neural network estimates are defined in which all the weights are learned by gradient descent. It is shown that the expected L_2 error of these estimates converges to zero at a rate close to n^{-1/(1+d)} when the regression function is Hölder smooth with Hölder exponent p ∈ [1/2, 1]. In the case of an interaction model, where the regression function is assumed to be a sum of Hölder smooth functions each of which depends on only d^* of the d components of the design variable, it is shown that these estimates achieve the corresponding d^*-dimensional rate of convergence.
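    For reference, the error criterion and the two rate statements can be written out as follows (a sketch using notation not fixed by the abstract: m_n denotes the estimate, P_X the design measure, "close to" is rendered as an arbitrarily small ε > 0 in the exponent, and the d^*-dimensional rate is read as n^{-1/(1+d^*)}):

    ```latex
    % expected L2 error, integrated with respect to the design measure P_X,
    % for a Hölder smooth regression function with exponent p in [1/2, 1]
    \[
      \mathbf{E} \int |m_n(x) - m(x)|^2 \, \mathbf{P}_X(dx)
        \le c_1 \, n^{-1/(1+d) + \epsilon}
    \]
    % interaction model: a sum of Hölder smooth functions, each depending on
    % only d^* of the d components x_{I_k} of the design variable
    \[
      m(x) = \sum_{k=1}^{K} m_k\bigl(x_{I_k}\bigr), \quad |I_k| = d^*
      \qquad \Longrightarrow \qquad
      \mathbf{E} \int |m_n(x) - m(x)|^2 \, \mathbf{P}_X(dx)
        \le c_2 \, n^{-1/(1+d^*) + \epsilon}
    \]
    ```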

    Estimation of a function of low local dimensionality by deep neural networks

    Get PDF
    Deep neural networks (DNNs) achieve impressive results on complicated tasks such as object detection in images and speech recognition. Motivated by this practical success, there is now strong interest in establishing good theoretical properties of DNNs. Understanding their performance is a key challenge in describing for which tasks DNNs perform well and when they fail. The aim of this paper is to contribute to the current statistical theory of DNNs. We apply DNNs to high-dimensional data and show that least squares regression estimates based on DNNs are able to achieve dimensionality reduction when the regression function has locally low dimensionality. Consequently, the rate of convergence of the estimate does not depend on the input dimension d but on the local dimension d^*, and the DNNs are able to circumvent the curse of dimensionality when d^* is much smaller than d. In a simulation study we provide numerical experiments supporting our theoretical result and compare our estimate with other conventional nonparametric regression estimates. The performance of our estimates is also validated in experiments with real data.
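    As a toy illustration of this setting, the following minimal numpy sketch (not the construction from the paper) fits a small one-hidden-layer ReLU network by gradient descent on the empirical L_2 risk; the regression function, network size, learning rate, and the choice of d^* = 2 relevant out of d = 10 components are hypothetical choices for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # d input components, of which only the first d* = 2 matter (hypothetical example)
    d, n = 10, 500
    X = rng.uniform(-1, 1, size=(n, d))
    m = lambda x: np.sin(np.pi * x[:, 0]) * x[:, 1]   # local dimension d* = 2
    Y = m(X) + 0.1 * rng.standard_normal(n)

    # one-hidden-layer ReLU network, fitted by gradient descent on the least squares risk
    K = 50
    W = rng.standard_normal((d, K)) / np.sqrt(d)      # input weights
    b = np.zeros(K)                                   # hidden biases
    v = rng.standard_normal(K) / np.sqrt(K)           # output weights

    lr = 0.05
    for _ in range(2000):
        H = np.maximum(X @ W + b, 0.0)                # hidden activations, shape (n, K)
        err = H @ v - Y                               # residuals of the current fit
        # gradients of the empirical L2 risk (1/n) * sum(err**2)
        dH = np.outer(err, v) * (H > 0) * (2.0 / n)
        v -= lr * (H.T @ err) * (2.0 / n)
        W -= lr * (X.T @ dH)
        b -= lr * dH.sum(axis=0)

    # empirical L2 error on fresh data drawn from the same design distribution
    X_new = rng.uniform(-1, 1, size=(2000, d))
    fit = np.maximum(X_new @ W + b, 0.0) @ v
    print("empirical L2 error:", np.mean((fit - m(X_new)) ** 2))
    ```

    Because m depends on only two components, the theory discussed above suggests that enlarging d while keeping d^* fixed should not change the achievable rate, which is what such toy experiments are meant to probe.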

    Nephrocutaneous Fistula Due to Xanthogranulomatous Pyelonephritis

    Get PDF
    While the development of a fistulous tract from the kidney to adjacent organs is relatively common, a tract leading to the skin is a rare occurrence. The primary cause of such a fistula is prior surgical intervention or malignancy leading to abscess formation. Our case involves xanthogranulomatous pyelonephritis (XGP) causing a longstanding lobulated abscess, ultimately leading to the formation of a fistulous tract.

    Estimation of extreme quantiles in a simulation model

    No full text

    Estimating quantiles in imperfect simulation models using conditional density estimation

    No full text

    Asymptotic confidence intervals for Poisson regression

    Get PDF
    Let (X, Y) be an ℝ^d × ℕ_0-valued random vector, where the conditional distribution of Y given X = x is a Poisson distribution with mean m(x). We estimate m by a local polynomial kernel estimate defined by maximizing a localized log-likelihood function. We use this estimate of m(x) to estimate the conditional distribution of Y given X = x by the corresponding Poisson distribution and to construct confidence intervals of level α for Y given X = x. Under mild regularity conditions on m(x) and on the distribution of X, we show strong convergence of the integrated L_1 distance between the Poisson distribution and its estimate. We also demonstrate that the corresponding confidence interval asymptotically (i.e., for sample size tending to infinity) has level α, and that the probability that the length of this confidence interval deviates from the optimal length by more than one converges to zero as the number of samples tends to infinity.
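    The estimate can be illustrated in its simplest form: the paper maximizes a localized log-likelihood over local polynomials, and in the local constant (degree-0) special case the maximizer has a closed form, namely a kernel-weighted mean. The sketch below uses this special case together with Poisson quantiles to form a level-α interval for Y given X = x; the kernel, bandwidth, and mean function are hypothetical choices, not taken from the paper:

    ```python
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(1)

    # hypothetical one-dimensional example (the paper allows X with values in R^d)
    n = 1000
    X = rng.uniform(0.0, 1.0, size=n)
    m = lambda x: 3.0 + 2.0 * np.sin(2.0 * np.pi * x)   # positive mean function
    Y = rng.poisson(m(X))

    def local_poisson_mean(x, h=0.1):
        """Maximize the localized Poisson log-likelihood
        sum_i K((x - X_i) / h) * (Y_i * log(lam) - lam) over lam;
        in the local constant case the maximizer is the kernel-weighted mean."""
        w = np.exp(-0.5 * ((x - X) / h) ** 2)           # Gaussian kernel weights
        return np.sum(w * Y) / np.sum(w)

    def confidence_interval(x, alpha=0.9, h=0.1):
        """Level-alpha interval for Y given X = x, taken from the quantiles of
        the Poisson distribution with the estimated mean m_hat(x)."""
        lam = local_poisson_mean(x, h)
        return poisson.ppf((1.0 - alpha) / 2.0, lam), poisson.ppf((1.0 + alpha) / 2.0, lam)

    x0 = 0.3
    print("m(x0) =", m(x0), " estimate =", local_poisson_mean(x0))
    print("level-0.9 interval for Y given X = x0:", confidence_interval(x0))
    ```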