Analysis of the rate of convergence of an over-parametrized deep neural network estimate learned by gradient descent
Estimation of a regression function from independent and identically
distributed random variables is considered. The $L_2$ error with integration
with respect to the design measure is used as an error criterion.
Over-parametrized deep neural network estimates are defined where all the
weights are learned by gradient descent. It is shown that the expected
$L_2$ error of these estimates converges to zero with a rate close to
$n^{-1/(1+d)}$ in case that the regression function is Hölder smooth with
Hölder exponent $p \in [1/2, 1]$. In case of an interaction model, where the
regression function is assumed to be a sum of Hölder smooth functions each of
which depends on only $d^*$ of the $d$ components of the
design variable, it is shown that these estimates achieve the corresponding
$d^*$-dimensional rate of convergence.
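A minimal sketch of the setting, assuming PyTorch: an over-parametrized fully connected network whose weights are all updated by plain full-batch gradient descent on i.i.d. regression data, with the $L_2$ error estimated by Monte Carlo over fresh draws from the design measure. The width, step size, step count, and the Hölder smooth target are illustrative assumptions, not the paper's construction.

```python
# Illustrative only: an over-parametrized network whose weights are all
# learned by plain full-batch gradient descent on i.i.d. regression data.
# Width, step size, step count, and the Hölder smooth target (exponent
# p = 0.7) are assumptions for the sketch, not the paper's construction.
import torch

torch.manual_seed(0)
n, d, width = 200, 3, 512            # far more weights than samples

X = torch.rand(n, d)                 # i.i.d. design on [0, 1]^d
m = lambda x: torch.abs(x.sum(dim=1) - 1.0) ** 0.7
Y = m(X) + 0.1 * torch.randn(n)      # noisy observations

net = torch.nn.Sequential(           # over-parametrized deep network
    torch.nn.Linear(d, width), torch.nn.ReLU(),
    torch.nn.Linear(width, width), torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)

opt = torch.optim.SGD(net.parameters(), lr=1e-3)   # plain gradient descent
for step in range(2000):             # every weight is updated
    opt.zero_grad()
    loss = ((net(X).squeeze(1) - Y) ** 2).mean()   # empirical L2 risk
    loss.backward()
    opt.step()

# Monte Carlo estimate of the L2 error w.r.t. the design measure
X_test = torch.rand(20000, d)
with torch.no_grad():
    l2 = ((net(X_test).squeeze(1) - m(X_test)) ** 2).mean()
print(f"estimated L2 error: {l2.item():.4f}")
```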
Estimation of a function of low local dimensionality by deep neural networks
Deep neural networks (DNNs) achieve impressive results for complicated tasks
like object detection on images and speech recognition. Motivated by this
practical success, there is now a strong interest in showing good theoretical
properties of DNNs. A key challenge in understanding their performance is to
describe for which tasks DNNs perform well and when they fail. The aim of this
paper is to contribute to the current statistical theory of DNNs. We apply DNNs
on high dimensional data and we show that the least squares regression
estimates using DNNs are able to achieve dimensionality reduction in case that
the regression function has locally low dimensionality. Consequently, the rate
of convergence of the estimate does not depend on its input dimension $d$, but
on its local dimension $d^*$, and the DNNs are able to circumvent the curse of
dimensionality in case that $d^*$ is much smaller than $d$. In our simulation
study we provide numerical experiments to support our theoretical result and we
compare our estimate with other conventional nonparametric regression
estimates. The performance of our estimates is also validated in experiments
with real data.
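As a concrete illustration of the assumption (our own toy construction, not the paper's), the sketch below builds a regression function on $[0,1]^{10}$ with local dimension $d^* = 2$: on each cell of a partition of the input space, the function varies in only two of the ten coordinates.

```python
# Our own toy construction (not the paper's): a regression function on
# [0,1]^10 whose local dimension is d* = 2 -- on each cell of a partition
# of the input space it varies in only two of the ten coordinates.
import numpy as np

rng = np.random.default_rng(0)
d, d_star, n = 10, 2, 1000

def m(x):
    # Partition [0,1]^d by the first coordinate; within each cell the
    # function depends on only d* = 2 coordinates.
    if x[0] < 0.5:
        return np.sin(4 * x[1]) + x[2] ** 2        # uses x[1], x[2] only
    return np.exp(-x[3]) * np.cos(4 * x[4])        # uses x[3], x[4] only

X = rng.uniform(size=(n, d))
Y = np.array([m(x) for x in X]) + 0.1 * rng.standard_normal(n)
# An estimate adapting to the local dimension would converge at the
# d*-dimensional rate rather than the much slower d-dimensional one.
```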
Nephrocutaneous Fistula Due to Xanthogranulomatous Pyelonephritis
While the development of a fistulous tract from the kidney to the proximal adjacent organs is relatively common, a tract leading to the skin is a rare occurrence. The primary cause of a fistula is prior surgical intervention or malignancy leading to abscess formation. Our case involves xanthogranulomatous pyelonephritis (XGP) causing a longstanding lobulated abscess, ultimately leading to the formation of a fistulous tract.
Progress in upscaling Miscanthus biomass production for the European bio-economy with seed-based hybrids
Funded by the UK's Biotechnology and Biological Sciences Research Council (BBSRC) and Department for Environment, Food and Rural Affairs (DEFRA), grant number LK0863; the BBSRC strategic programme grant on Energy Grasses & Bio-refining, grant number BBS/E/W/10963A01; OPTIMISC, grant number FP7-289159; WATBIO, grant number FP7-311929; and Innovate UK/BBSRC 'MUST', grant number BB/N016149/1.
Asymptotic confidence intervals for Poisson regression
Let $(X, Y)$ be an $\mathbb{R}^d \times \mathbb{N}_0$-valued random vector where the conditional distribution of $Y$ given $X = x$ is a Poisson distribution with mean $m(x)$. We estimate $m$ by a local polynomial kernel estimate defined by maximizing a localized log-likelihood function. We use this estimate of $m(x)$ to estimate the conditional distribution of $Y$ given $X = x$ by a corresponding Poisson distribution and to construct confidence intervals of level $\alpha$ for $Y$ given $X = x$. Under mild regularity conditions on $m(x)$ and on the distribution of $X$, we show strong convergence of the integrated $L_1$ distance between the Poisson distribution and its estimate. We also demonstrate that the corresponding confidence interval asymptotically (i.e., for sample size tending to infinity) has level $\alpha$, and that the probability that the length of this confidence interval deviates from the optimal length by more than one converges to zero as the sample size tends to infinity.
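A minimal sketch of the local-constant special case (a degree-0 local polynomial), assuming NumPy/SciPy: the kernel-localized Poisson log-likelihood $\sum_i K((X_i - x)/h)(Y_i \log \lambda - \lambda)$ is maximized in closed form by $\hat\lambda(x) = \sum_i K_i Y_i / \sum_i K_i$, and the fitted Poisson distribution yields an equal-tailed interval for $Y$ given $X = x$. The Gaussian kernel, the bandwidth, and the convention that level $\alpha$ means coverage at least $\alpha$ are assumptions of the sketch.

```python
# Sketch of the local-constant special case (degree-0 local polynomial):
# maximizing the kernel-localized Poisson log-likelihood
#   sum_i K((X_i - x)/h) * (Y_i * log(lam) - lam)
# in lam has the closed form lam_hat(x) = sum_i K_i Y_i / sum_i K_i.
# Kernel, bandwidth, and the interval convention below are assumptions.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
n, h, alpha = 2000, 0.1, 0.9

X = rng.uniform(-1, 1, size=n)
m = lambda x: np.exp(1 + np.sin(3 * x))     # true Poisson mean m(x)
Y = rng.poisson(m(X))

def lam_hat(x):
    K = np.exp(-0.5 * ((X - x) / h) ** 2)   # Gaussian kernel weights
    return np.sum(K * Y) / np.sum(K)        # localized Poisson MLE

def interval(x):
    # Equal-tailed interval for Y given X = x from the fitted Poisson
    # distribution, covering with probability at least alpha.
    lam = lam_hat(x)
    lo = poisson.ppf((1 - alpha) / 2, lam)
    hi = poisson.ppf(1 - (1 - alpha) / 2, lam)
    return int(lo), int(hi)

x0 = 0.3
print(f"m({x0}) = {m(x0):.2f}, estimate = {lam_hat(x0):.2f}, "
      f"{int(100 * alpha)}% interval for Y: {interval(x0)}")
```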