On the growth of von Neumann dimension of harmonic spaces of semipositive line bundles over covering manifolds
We study the harmonic space of line-bundle-valued forms over a covering
manifold with a discrete group action Γ, and obtain an asymptotic
estimate for the Γ-dimension (von Neumann dimension) of the harmonic space
with respect to the tensor power of the holomorphic line bundle and the type
of the differential form, when the line bundle is semipositive. In particular,
we estimate the Γ-dimension of the corresponding reduced L²-Dolbeault
cohomology group. Essentially, we obtain a local estimate of the pointwise norm
of harmonic forms with values in semipositive line bundles over Hermitian
manifolds.
Stability of matrix factorization for collaborative filtering
We study the stability, vis-à-vis adversarial noise, of the matrix
factorization algorithm for matrix completion. In particular, our results
include: (I) we bound the gap between the solution matrix of the factorization
method and the ground truth in terms of root mean square error; (II) we treat
matrix factorization as a subspace-fitting problem and analyze the difference
between the solution subspace and the ground truth; (III) we analyze the
prediction error for individual users based on the subspace stability. We apply
these results to the problem of collaborative filtering under manipulator
attack, which leads to useful insights and guidelines for collaborative
filtering system design.
Comment: ICML201
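To make the setting concrete, here is a minimal NumPy sketch of matrix factorization for completion: fit the observed entries of a ratings matrix with a low-rank product and measure the root-mean-square error the abstract's bound (I) refers to. This is an illustrative toy, not the algorithm analyzed in the paper; all names and hyperparameters are assumptions.

```python
import numpy as np

def factorize(R, mask, k=2, lr=0.02, reg=0.01, epochs=1000, seed=0):
    """Fit R ~ U @ V.T on observed entries (mask == 1) by SGD.

    Toy illustration of matrix-factorization completion; hyperparameters
    are arbitrary, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.normal(scale=0.1, size=(m, k))
    V = rng.normal(scale=0.1, size=(n, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]          # residual on one rating
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

def observed_rmse(R, mask, U, V):
    """Root mean square error over the observed entries only."""
    rows, cols = np.nonzero(mask)
    diff = (U @ V.T)[rows, cols] - R[rows, cols]
    return float(np.sqrt(np.mean(diff ** 2)))
```

The paper's results concern how far such a solution can drift from the ground truth when the observed entries are adversarially perturbed; the sketch above only sets up the objective being analyzed.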
Data Dropout: Optimizing Training Data for Convolutional Neural Networks
Deep learning models learn to fit the training data while being expected to
generalize well to test data. Most works aim to find such models by creatively
designing architectures and fine-tuning parameters. To adapt to particular
tasks, hand-crafted information such as image priors has also been incorporated
into end-to-end learning. However, very little progress has been made on
investigating how an individual training sample influences the generalization
ability of a model. In other words, to achieve high generalization accuracy, do
we really need all the samples in a training dataset? In this paper, we
demonstrate that deep learning models such as convolutional neural networks may
not favor all training samples, and that generalization accuracy can be further
improved by dropping the unfavorable ones. Specifically, the influence of
removing a training sample is quantifiable, and we propose a Two-Round Training
approach aimed at achieving higher generalization accuracy: we locate
unfavorable samples after the first round of training, and then retrain the
model from scratch on the reduced training dataset in the second round. Since
our approach is essentially different from fine-tuning or further training, the
computational cost should not be a concern. Our extensive experimental results
indicate that, under identical settings, the proposed approach can boost the
performance of well-known networks on both high-level computer vision problems
such as image classification and low-level vision problems such as image
denoising.
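The two-round scheme described above can be sketched in miniature: train once, score each training sample, drop the highest-scoring ("unfavorable") ones, then retrain from scratch on the reduced set. The sketch below uses plain logistic regression as a stand-in for a CNN and per-sample training loss as a stand-in proxy for the paper's influence measure; both substitutions, and all names, are assumptions for illustration.

```python
import numpy as np

def train(X, y, epochs=200, lr=0.1):
    """Logistic regression by gradient descent; stands in for the CNN."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def two_round_training(X, y, drop_frac=0.1):
    # Round 1: train on the full dataset, then score every sample.
    w1 = train(X, y)
    p = 1.0 / (1.0 + np.exp(-X @ w1))
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Drop the highest-loss samples -- a simple proxy, NOT the paper's
    # influence-based criterion for locating unfavorable samples.
    keep = np.argsort(loss)[: int(len(y) * (1 - drop_frac))]
    # Round 2: retrain from scratch on the reduced training set.
    return train(X[keep], y[keep])
```

On data with a few mislabeled points, the high-loss samples dropped after round one tend to be the noisy ones, so the round-two model is trained on a cleaner set; this mirrors the paper's observation that removing unfavorable samples can raise generalization accuracy.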
- …