A Modern Take on the Bias-Variance Tradeoff in Neural Networks
The bias-variance tradeoff tells us that as model complexity increases, bias
falls and variance increases, leading to a U-shaped test error curve. However,
recent empirical results with over-parameterized neural networks are marked by
a striking absence of the classic U-shaped test error curve: test error keeps
decreasing in wider networks. This suggests that there might not be a
bias-variance tradeoff in neural networks with respect to network width,
contrary to what was originally claimed by, e.g., Geman et al. (1992).
Motivated by the shaky
evidence used to support this claim in neural networks, we measure bias and
variance in the modern setting. We find that both bias and variance can
decrease as the number of parameters grows. To better understand this, we
introduce a new decomposition of the variance to disentangle the effects of
optimization and data sampling. We also provide theoretical analysis in a
simplified setting that is consistent with our empirical findings.
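As a rough illustration of how such a decomposition can be estimated, the sketch below trains models across several resampled datasets and several optimization seeds, then splits the total prediction variance into a data-sampling term and an optimization term via the law of total variance. The toy sine-regression task, the small scikit-learn MLP, and the sample counts are illustrative placeholders, not the paper's actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def sample_dataset(n=200):
    # Hypothetical toy regression task standing in for the paper's setups.
    x = rng.uniform(-3, 3, size=(n, 1))
    y = np.sin(x).ravel() + rng.normal(scale=0.1, size=n)
    return x, y

x_test = np.linspace(-3, 3, 100).reshape(-1, 1)
y_test = np.sin(x_test).ravel()

n_datasets, n_seeds = 5, 5
preds = np.empty((n_datasets, n_seeds, len(x_test)))
for i in range(n_datasets):          # randomness from data sampling
    x_tr, y_tr = sample_dataset()
    for j in range(n_seeds):         # randomness from optimization (init)
        model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                             random_state=j)
        preds[i, j] = model.fit(x_tr, y_tr).predict(x_test)

mean_pred = preds.mean(axis=(0, 1))
bias_sq   = np.mean((mean_pred - y_test) ** 2)
var_total = preds.var(axis=(0, 1)).mean()
# Law of total variance: the total splits exactly into a data-sampling
# term (variance of the seed-averaged predictor across datasets) and an
# optimization term (variance across seeds, averaged over datasets).
var_data = preds.mean(axis=1).var(axis=0).mean()
var_opt  = preds.var(axis=1).mean()

print(f"bias^2={bias_sq:.4f}  var_data={var_data:.4f}  var_opt={var_opt:.4f}")
```

With this estimator, repeating the loop for increasing hidden widths lets one trace how each variance component behaves as the parameter count grows.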
Learning a Dilated Residual Network for SAR Image Despeckling
In this paper, to break the limit of the traditional linear models for
synthetic aperture radar (SAR) image despeckling, we propose a novel deep
learning approach by learning a non-linear end-to-end mapping between the noisy
and clean SAR images with a dilated residual network (SAR-DRN). SAR-DRN is
based on dilated convolutions, which can both enlarge the receptive field and
maintain the filter size and layer depth with a lightweight structure. In
addition, skip connections and a residual learning strategy are added to the
despeckling model to preserve image details and mitigate the vanishing
gradient problem. Compared with traditional despeckling methods, the proposed
method shows superior performance, outperforming state-of-the-art methods on
both quantitative and visual assessments, especially for strong speckle noise.
Comment: 18 pages, 13 figures, 7 tables
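To make the described architecture concrete, here is a minimal PyTorch sketch of a SAR-DRN-style network: a stack of dilated 3x3 convolutions plus a global residual connection, so the network predicts the speckle component and subtracts it from the input. The dilation schedule, channel width, and layer count are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DilatedResidualDespeckler(nn.Module):
    """Sketch of a dilated residual despeckling network.

    Dilated convolutions enlarge the receptive field while keeping the
    3x3 filter size and a shallow, lightweight layer stack; the global
    residual connection lets the body learn the noise component, which
    preserves image detail and eases gradient flow.
    """

    def __init__(self, channels=64, dilations=(1, 2, 3, 4, 3, 2, 1)):
        super().__init__()
        layers, in_ch = [], 1  # single-channel SAR intensity image
        for d in dilations:
            # padding=d with a 3x3 kernel keeps the spatial size fixed.
            layers += [nn.Conv2d(in_ch, channels, kernel_size=3,
                                 padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            in_ch = channels
        layers.append(nn.Conv2d(channels, 1, kernel_size=3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: estimate the noise and subtract it.
        return x - self.body(x)

# Usage: despeckle a batch of single-channel images.
net = DilatedResidualDespeckler()
noisy = torch.rand(4, 1, 64, 64)
clean_est = net(noisy)
print(clean_est.shape)  # torch.Size([4, 1, 64, 64])
```

Such a model would typically be trained end-to-end with a pixel-wise loss (e.g., MSE) between its output and the clean reference image.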