Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions
Pruning is one of the predominant approaches for compressing deep neural
networks (DNNs). Lately, coresets (provable data summarizations) were leveraged
for pruning DNNs, adding the advantage of theoretical guarantees on the
trade-off between the compression rate and the approximation error. However,
coresets in this domain were either data-dependent or generated under
restrictive assumptions on both the model's weights and inputs. In real-world
scenarios, such assumptions are rarely satisfied, limiting the applicability of
coresets. To this end, we suggest a novel and robust framework for computing
such coresets under mild assumptions on the model's weights and without any
assumption on the training data. The idea is to compute the importance of each
neuron in each layer with respect to the output of the following layer. This is
achieved by a combination of the Löwner ellipsoid and Carathéodory's theorem. Our
method is simultaneously data-independent, applicable to various networks and
datasets (due to the simplified assumptions), and theoretically supported.
Experimental results show that our method outperforms existing coreset-based
neural pruning approaches across a wide range of networks and datasets; for
example, on ResNet50 trained on ImageNet it achieves a favorable trade-off
between the compression rate and the drop in accuracy.
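The Carathéodory step can be illustrated generically: any convex combination of n points in R^d equals a convex combination of at most d + 1 of them. Below is a minimal sketch of that reduction (a hypothetical `caratheodory` helper illustrating the theorem only, not the paper's neuron-importance pipeline):

```python
import numpy as np

def caratheodory(X, w, tol=1e-9):
    """Reduce a convex combination of n points in R^d to an equivalent
    combination of at most d + 1 of them (Caratheodory's theorem).
    X: (n, d) points; w: (n,) nonnegative weights summing to 1.
    Returns the kept points and their new weights; the weighted mean
    w @ X is preserved."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float).copy()
    while True:
        keep = w > tol                 # drop points whose weight hit zero
        X, w = X[keep], w[keep]
        n, d = X.shape
        if n <= d + 1:
            return X, w
        # n - 1 > d difference vectors in R^d are linearly dependent:
        # find u with sum_i u_i (x_i - x_0) = 0 via the SVD nullspace.
        A = (X[1:] - X[0]).T           # shape (d, n - 1), rank <= d < n - 1
        u = np.linalg.svd(A)[2][-1]    # right singular vector with A @ u ~ 0
        v = np.concatenate(([-u.sum()], u))  # sum(v) = 0 and v @ X ~ 0
        if not np.any(v > tol):        # make sure v has a positive entry
            v = -v
        # Shift weights along v until the first one reaches zero; this
        # keeps w >= 0 and sum(w) = 1, and leaves w @ X unchanged.
        pos = v > tol
        alpha = np.min(w[pos] / v[pos])
        w = w - alpha * v
```

Each pass zeroes out at least one weight, so the loop terminates after at most n - (d + 1) rounds.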
Efficient NTK using Dimensionality Reduction
Recently, neural tangent kernel (NTK) has been used to explain the dynamics
of learning parameters of neural networks, at the large width limit.
Quantitative analyses of NTK give rise to network widths that are often
impractical and incur high costs in time and energy in both training and
deployment. Using a matrix factorization technique, we show how to obtain
similar guarantees to those obtained by a prior analysis while reducing
training and inference resource costs. The importance of our result further
increases when the input points' data dimension is in the same order as the
number of input points. More generally, our work suggests how to analyze
large-width networks in which dense linear layers are replaced with a
low-complexity factorization, thus reducing the heavy dependence on large
width.
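The general factorization idea can be sketched with a truncated SVD: a dense layer's weight matrix is replaced by two thin factors, cutting the per-example cost from out·in to rank·(out + in) multiplies. This is a hypothetical `factorize_dense` helper for illustration, not the paper's specific construction or guarantee:

```python
import numpy as np

def factorize_dense(W, rank):
    """Replace a dense weight matrix W (out x in) by thin factors
    A (out x rank) and B (rank x in) via truncated SVD, so that
    x -> A @ (B @ x) costs rank*(out + in) multiplies instead of
    out*in. Exact when W has rank <= `rank`, else a best
    rank-`rank` approximation in Frobenius norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into A's columns
    B = Vt[:rank]
    return A, B
```

For a 64x128 layer factored at rank 8, a matvec drops from 8192 to 1536 multiplies.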
Coresets for the Nearest-Neighbor Rule
Given a training set of labeled points, the nearest-neighbor rule
predicts the class of an unlabeled query point as the label of its closest
point in the set. To improve the time and space complexity of classification, a
natural question is how to reduce the training set without significantly
affecting the accuracy of the nearest-neighbor rule. Nearest-neighbor
condensation deals with finding a subset of the training set such that, for
every training point, its nearest neighbor in the subset has the same label as
the point itself. This relates to the concept of coresets, which can be broadly
defined as subsets of the original set such that an exact result on the coreset
corresponds to an approximate result on the original set. Unlike condensation
criteria, however, the guarantees of a coreset hold for any query point, and
not only for the points of the training set.
This paper introduces the concept of coresets for nearest-neighbor
classification. We extend existing criteria used for condensation, and prove
sufficient conditions to correctly classify any query point when using these
subsets. Additionally, we prove that finding such subsets of minimum
cardinality is NP-hard, and propose quadratic-time approximation algorithms
with provable upper-bounds on the size of their selected subsets. Moreover, we
show how to improve one of these algorithms to achieve subquadratic runtime,
the first such result for nearest-neighbor condensation.
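Classical condensation can be sketched with Hart's condensed nearest-neighbor heuristic, which enforces the consistency criterion above (every training point's nearest neighbor in the subset shares its label). This is a standard baseline for illustration, not the approximation algorithm proposed here:

```python
import numpy as np

def condense(X, y, rng=None):
    """Hart's condensed nearest-neighbor heuristic: greedily add each
    point that the current subset misclassifies, sweeping the training
    set until no point's nearest neighbor in the subset has a
    different label. Returns the sorted indices of the kept subset."""
    rng = np.random.default_rng(rng)
    order = rng.permutation(len(X))
    subset = [int(order[0])]          # seed with one arbitrary point
    changed = True
    while changed:
        changed = False
        for i in order:
            d = np.linalg.norm(X[subset] - X[i], axis=1)
            nn = subset[int(np.argmin(d))]
            if y[nn] != y[i]:         # misclassified: keep this point
                subset.append(int(i))
                changed = True
    return np.array(sorted(subset))
```

The subset only grows, so the outer loop terminates; the result is consistent on the training set but, as the paper notes, carries no guarantee for arbitrary query points.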
A Survey of Dataset Refinement for Problems in Computer Vision Datasets
Large-scale datasets have played a crucial role in the advancement of
computer vision. However, they often suffer from problems such as class
imbalance, noisy labels, dataset bias, or high resource costs, which can
inhibit model performance and reduce trustworthiness. With the growing
advocacy for data-centric research, various data-centric solutions have been
proposed to address the dataset problems mentioned above. They improve the
quality of datasets by re-organizing them, a process we call dataset
refinement. In this
survey, we provide a comprehensive and structured overview of recent advances
in dataset refinement for problematic computer vision datasets. Firstly, we
summarize and analyze the various problems encountered in large-scale computer
vision datasets. Then, we classify the dataset refinement algorithms into three
categories based on the refinement process: data sampling, data subset
selection, and active learning. In addition, we organize these dataset
refinement methods according to the addressed data problems and provide a
systematic comparative description. We point out that these three types of
dataset refinement have distinct advantages and disadvantages for dataset
problems, which informs the choice of the data-centric method appropriate to a
particular research objective. Finally, we summarize the current literature and
propose potential future research topics.
Comment: 33 pages, 10 figures, to be published in ACM Computing Surveys
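As a concrete instance of the data-sampling category, random undersampling rebalances class frequencies by discarding majority-class points. A hypothetical minimal sketch (an illustration of the category, not a method drawn from the survey itself):

```python
import numpy as np

def undersample(X, y, rng=None):
    """Random undersampling for class imbalance: keep only as many
    points of each class as the rarest class has, chosen uniformly
    at random. Returns the rebalanced (X, y)."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    m = counts.min()                  # size of the rarest class
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=m, replace=False)
        for c in classes
    ])
    keep.sort()                       # preserve the original ordering
    return X[keep], y[keep]
```

Undersampling trades data for balance; the other two categories (subset selection, active learning) instead choose points by informativeness.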