Worst-Case Linear Discriminant Analysis as Scalable Semidefinite Feasibility Problems
In this paper, we propose an efficient semidefinite programming (SDP)
approach to worst-case linear discriminant analysis (WLDA). Compared with the
traditional LDA, WLDA considers the dimensionality reduction problem from the
worst-case viewpoint, which is in general more robust for classification.
However, the original problem of WLDA is non-convex and difficult to optimize.
In this paper, we reformulate the optimization problem of WLDA into a sequence
of semidefinite feasibility problems. To efficiently solve the semidefinite
feasibility problems, we design a new scalable optimization method with
quasi-Newton methods and eigen-decomposition being the core components. The
proposed method is orders of magnitude faster than standard interior-point
based SDP solvers.
Experiments on a variety of classification problems demonstrate that our
approach achieves better performance than standard LDA. Our method is also much
faster and more scalable than WLDA solved with standard interior-point SDP
solvers. For an SDP with a given number of constraints and matrix size, the
computational complexity is roughly reduced by orders of magnitude compared
with interior-point methods.
Comment: 14 pages
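The eigen-decomposition at the core of such first-order SDP methods is typically used to project onto the positive semidefinite cone. A minimal NumPy sketch of that projection step (the quasi-Newton outer loop and the paper's full algorithm are omitted; this is only an illustration):

```python
import numpy as np

def project_psd(A):
    """Project a symmetric matrix onto the positive semidefinite cone
    by clipping negative eigenvalues at zero (the eigen-decomposition
    step used inside first-order SDP solvers)."""
    A = (A + A.T) / 2.0          # symmetrize to guard against round-off
    w, V = np.linalg.eigh(A)     # eigen-decomposition of a symmetric matrix
    w = np.clip(w, 0.0, None)    # drop negative eigenvalues
    return (V * w) @ V.T

# Example: the projection of an indefinite matrix is PSD.
X = np.array([[1.0, 2.0], [2.0, -3.0]])
P = project_psd(X)
print(np.linalg.eigvalsh(P).min() >= -1e-10)  # True: all eigenvalues >= 0
```

Each eigen-decomposition costs O(D^3) for a D-by-D matrix, which is what makes this approach far cheaper per iteration than forming and factorizing the large systems inside an interior-point solver.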
RandomBoost: Simplified Multi-class Boosting through Randomization
We propose a novel boosting approach to multi-class classification problems,
in which multiple classes are distinguished, in essence, by a set of random
projection matrices. The approach uses random projections to alleviate the
proliferation of binary classifiers typically required to perform multi-class
classification. The result is a multi-class classifier with a single
vector-valued parameter, irrespective of the number of classes involved. Two
variants of this approach are proposed. The first method randomly projects the
original data into new spaces, while the second method randomly projects the
outputs of learned weak classifiers. These methods are not only conceptually
simple but also effective and easy to implement. A series of experiments on
synthetic, machine learning and visual recognition data sets demonstrate that
our proposed methods compare favorably to existing multi-class boosting
algorithms in terms of both convergence rate and classification accuracy.
Comment: 15 pages
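To make the first variant concrete, here is a hedged sketch: class scores are obtained by applying class-specific random projection matrices to the data and a single shared weight vector. All names, dimensions, and the scoring rule are illustrative simplifications, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection_scores(X, w, projections):
    """Score each class by applying its random projection to the data
    and then a single shared weight vector w (a simplified sketch of
    the 'project the original data' variant)."""
    # projections: one (d_proj, d_in) random matrix per class
    return np.stack([(X @ P.T) @ w for P in projections], axis=1)

d_in, d_proj, n_classes = 5, 3, 4
projections = [rng.standard_normal((d_proj, d_in)) for _ in range(n_classes)]
w = rng.standard_normal(d_proj)           # the single vector-valued parameter
X = rng.standard_normal((10, d_in))
scores = random_projection_scores(X, w, projections)
pred = scores.argmax(axis=1)              # predicted class per sample
print(scores.shape)  # (10, 4): one score per class, yet only one w is learned
```

The point of the sketch is the parameter count: adding classes adds fixed random matrices, not learned parameters, so the classifier keeps a single vector-valued parameter regardless of the number of classes.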
Hash kernels and structured learning
With vast amounts of data being generated, processing massive data remains a challenge for machine learning algorithms. We propose hash kernels to facilitate efficient kernel methods that can deal with massive multi-class problems. We show a principled way to compute the kernel matrix for data streams and sparse feature spaces, and further generalise it via sampling to graphs. Later we exploit the connection between hash kernels and compressed sensing, and apply hashing to face recognition, which significantly speeds up the state-of-the-art while achieving competitive accuracy. We also give a recovery rate on the sparse representation and a bounded recognition rate.
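A generic feature-hashing sketch in the spirit of hash kernels (the bin count, hash function, and sign trick below are illustrative choices, not necessarily those used in the thesis):

```python
import numpy as np
import zlib

def hash_features(tokens, n_bins=16):
    """Map a variable-length token sequence to a fixed-size vector by
    hashing each token to a bin and a sign (a generic feature-hashing
    sketch; the bin count is illustrative)."""
    x = np.zeros(n_bins)
    for t in tokens:
        h = zlib.crc32(t.encode("utf-8"))
        sign = 1.0 if (h >> 31) & 1 == 0 else -1.0  # sign bit reduces hash bias
        x[h % n_bins] += sign
    return x

# The inner product of hashed vectors approximates the exact kernel
# between the original sparse feature sets.
a = hash_features(["cat", "sat", "mat"])
b = hash_features(["cat", "hat", "mat"])
print(float(a @ b))  # shared tokens "cat" and "mat" contribute positively
```

Because the output dimension is fixed, the kernel matrix can be computed over data streams and huge sparse vocabularies without ever materializing the original feature space.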
As hash kernels can deal with data with structures in the input such as graphs and face images, the second part of the thesis moves on to an even more challenging task - dealing with data with structures in the output.
Recent advances in machine learning exploit the dependency among data outputs, making it possible to deal with complex, structured data. We study the most popular structured learning algorithms and categorise them into two categories - probabilistic approaches and Max Margin approaches. We show the connections between different algorithms, reformulate them in the empirical risk minimisation framework, and compare their advantages and disadvantages, which helps in choosing suitable algorithms according to the characteristics of the application.
We have made practical and theoretical contributions in this thesis.
We show some real-world applications using structured learning as follows: a) We propose a novel approach for automatic paragraph segmentation, namely training Semi-Markov models discriminatively using a Max-Margin method. This method allows us to model the sequential nature of the problem and to incorporate features of a whole paragraph, such as paragraph coherence which cannot be used in previous models. b) We jointly segment and recognise actions in video sequences with a discriminative semi-Markov model framework, which incorporates features that capture the characteristics on boundary frames, action segments and neighbouring action segments. A Viterbi-like algorithm is devised to help efficiently solve the induced optimisation problem. c) We propose a novel hybrid loss of Conditional Random Fields (CRFs) and Support Vector Machines (SVMs). We apply the hybrid loss to various applications such as Text chunking, Named Entity Recognition and Joint Image Categorisation.
We have made the following theoretical contributions: a) We study recent advances in PAC-Bayes bounds and apply them to structured learning. b) We propose a more refined notion of Fisher consistency, namely Conditional Fisher Consistency for Classification (CFCC), which conditions on knowledge of the true distribution of class labels. c) We show that the hybrid loss has the advantages of both CRFs and SVMs - it is consistent and has a tight PAC-Bayes bound which shrinks as the margin increases. d) We introduce probabilistic margins, which take the label distribution into account, and show that many existing algorithms can be viewed as special cases of this new margin concept, which may help in understanding existing algorithms as well as designing new ones.
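As a rough illustration of contribution c), a hybrid loss can be written as a convex combination of a CRF-style log loss and an SVM-style hinge loss for a single multi-class example; the thesis's exact structured formulation may differ:

```python
import numpy as np

def hybrid_loss(scores, y, alpha=0.5):
    """Blend a CRF-style negative log-likelihood with an SVM-style
    multi-class hinge loss for one example (an illustrative convex
    combination, not necessarily the thesis's exact definition)."""
    # CRF part: negative log-likelihood under a softmax over scores
    log_z = np.log(np.exp(scores - scores.max()).sum()) + scores.max()
    nll = log_z - scores[y]
    # SVM part: multi-class hinge loss with a unit margin
    margins = scores - scores[y] + 1.0
    margins[y] = 0.0
    hinge = max(0.0, margins.max())
    return alpha * nll + (1.0 - alpha) * hinge

scores = np.array([2.0, 0.5, -1.0])
print(hybrid_loss(scores, y=0) >= 0.0)  # both components are non-negative
```

Setting alpha to 1 recovers the pure log loss and alpha to 0 the pure hinge loss, so the blend interpolates between the consistency of CRFs and the margin behaviour of SVMs.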
Finally, we discuss some future directions, such as tightening PAC-Bayes bounds, adaptive hybrid losses, and graphical model inference via compressed sensing.
Fast Supervised Hashing with Decision Trees for High-Dimensional Data
Supervised hashing aims to map the original features to compact binary codes
that are able to preserve label based similarity in the Hamming space.
Non-linear hash functions have demonstrated an advantage over linear ones due
to their powerful generalization capability. In the literature, kernel
functions are typically used to achieve non-linearity in hashing, achieving
encouraging retrieval performance at the price of slow evaluation and training
time. Here we propose to use boosted decision trees for achieving non-linearity
in hashing, which are fast to train and evaluate, hence more suitable for
hashing with high dimensional data. In our approach, we first propose
sub-modular formulations for the hashing binary code inference problem and an
efficient GraphCut based block search method for solving large-scale inference.
Then we learn hash functions by training boosted decision trees to fit the
binary codes. Experiments demonstrate that our proposed method significantly
outperforms most state-of-the-art methods in retrieval precision and training
time. Especially for high-dimensional data, our method is orders of magnitude
faster than many methods in terms of training time.
Comment: Appearing in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2014, Ohio, US
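To illustrate the second stage (fitting hash functions to the inferred codes), the sketch below fits a single depth-1 decision stump to one bit of the target binary codes. The paper trains boosted trees per bit, so this is a deliberate simplification with illustrative data:

```python
import numpy as np

def fit_stump(X, bits):
    """Fit one decision stump (a depth-1 tree) to one column of target
    binary codes -- a minimal stand-in for the boosted trees the method
    trains per hash bit."""
    best = (0, 0.0, 1, -1.0)  # (feature, threshold, polarity, accuracy)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, 0)
                acc = (pred == bits).mean()
                if acc > best[3]:
                    best = (j, thr, pol, acc)
    return best

def stump_hash(x, stump):
    """Evaluate the learned hash bit on a single sample."""
    j, thr, pol, _ = stump
    return 1 if pol * (x[j] - thr) >= 0 else 0

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
bits = (X[:, 2] > 0).astype(int)      # toy codes that depend on feature 2
stump = fit_stump(X, bits)
print(stump[3])  # training accuracy of the learned hash bit
```

Evaluating a tree-based hash function needs only a handful of feature comparisons per bit, which is why this family is so much faster than kernel hash functions on high-dimensional data.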
A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations
Modern deep neural networks, particularly recent large language models, come
with massive model sizes that require significant computational and storage
resources. To enable the deployment of modern models on resource-constrained
environments and accelerate inference time, researchers have increasingly
explored pruning techniques as a popular research direction in neural network
compression. However, there is a dearth of up-to-date comprehensive review
papers on pruning. To address this issue, in this survey, we provide a
comprehensive review of existing research works on deep neural network pruning
in a taxonomy of 1) universal/specific speedup, 2) when to prune, 3) how to
prune, and 4) fusion of pruning and other compression techniques. We then
provide a thorough comparative analysis of seven pairs of contrast settings for
pruning (e.g., unstructured/structured) and explore emerging topics, including
post-training pruning, different levels of supervision for pruning, and broader
applications (e.g., adversarial robustness) to shed light on the commonalities
and differences of existing methods and lay the foundation for further method
development. To facilitate future research, we build a curated collection of
datasets, networks, and evaluations on different applications. Finally, we
provide some valuable recommendations on selecting pruning methods and
highlight promising research directions. We build a repository at
https://github.com/hrcheng1066/awesome-pruning
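As a concrete illustration of one contrast setting the survey analyses (unstructured versus structured pruning), here is a minimal magnitude-based sketch; the sparsity levels and weight shapes are illustrative:

```python
import numpy as np

def unstructured_prune(W, sparsity):
    """Zero out the smallest-magnitude individual weights (unstructured
    pruning: fine-grained, but yields irregular sparsity patterns)."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) <= thresh, 0.0, W)

def structured_prune(W, n_rows):
    """Zero out whole rows (e.g. output channels) with the smallest L2
    norm (structured pruning keeps hardware-friendly dense shapes)."""
    norms = np.linalg.norm(W, axis=1)
    drop = np.argsort(norms)[:n_rows]
    W = W.copy()
    W[drop] = 0.0
    return W

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 16))
print((unstructured_prune(W, 0.5) == 0).mean())       # ~0.5 of weights removed
print((structured_prune(W, 2) == 0).all(axis=1).sum())  # 2 full rows zeroed
```

The trade-off the survey dissects is visible even here: unstructured pruning removes exactly the least important weights, while structured pruning removes coarser units that translate directly into real speedup on dense hardware.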
Non-sparse Linear Representations for Visual Tracking with Online Reservoir Metric Learning
Most sparse linear representation-based trackers need to solve a
computationally expensive L1-regularized optimization problem. To address this
problem, we propose a visual tracker based on non-sparse linear
representations, which admit an efficient closed-form solution without
sacrificing accuracy. Moreover, in order to capture the correlation information
between different feature dimensions, we learn a Mahalanobis distance metric in
an online fashion and incorporate the learned metric into the optimization
problem for obtaining the linear representation. We show that online metric
learning using proximity comparison significantly improves the robustness of
the tracking, especially on those sequences exhibiting drastic appearance
changes. Furthermore, in order to prevent the unbounded growth in the number of
training samples for the metric learning, we design a time-weighted reservoir
sampling method to maintain and update limited-sized foreground and background
sample buffers for balancing sample diversity and adaptability. Experimental
results on challenging videos demonstrate the effectiveness and robustness of
the proposed tracker.
Comment: Appearing in IEEE Conf. Computer Vision and Pattern Recognition, 201
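The closed-form representation can be sketched as a metric-weighted ridge regression; the template matrix, metric, and regularization weight below are illustrative stand-ins for the tracker's actual quantities:

```python
import numpy as np

def linear_representation(T, y, M, lam=0.1):
    """Closed-form non-sparse coefficients for representing observation y
    over template matrix T under a Mahalanobis metric M (an L2-regularized
    least-squares sketch; lam is an illustrative regularization weight)."""
    d = T.shape[1]
    # Solve (T' M T + lam I) c = T' M y -- no iterative L1 solver needed
    return np.linalg.solve(T.T @ M @ T + lam * np.eye(d), T.T @ M @ y)

rng = np.random.default_rng(2)
T = rng.standard_normal((20, 5))              # 5 templates, 20-dim features
y = T @ np.array([1.0, 0.5, 0.0, 0.0, 0.0])   # target built from templates
M = np.eye(20)                                # Euclidean metric placeholder
c = linear_representation(T, y, M, lam=1e-6)
print(np.round(c, 2))  # close to [1.0, 0.5, 0.0, 0.0, 0.0]
```

Swapping the identity for a learned Mahalanobis matrix M re-weights feature dimensions by their discriminative correlations, while the solution stays a single linear solve per candidate, which is the efficiency argument against L1-regularized trackers.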
Influence Function Based Second-Order Channel Pruning: Evaluating True Loss Changes for Pruning Is Possible Without Retraining
A challenge of channel pruning is designing efficient and effective criteria
to select channels to prune. A widely used criterion is minimal performance
degeneration. Accurately evaluating the true performance degeneration
requires retraining the surviving weights to convergence, which is prohibitively
slow. Hence, existing pruning methods use the previous weights (without retraining)
to evaluate the performance degeneration. However, we observe that the loss changes
differ significantly with and without retraining. This motivates us to develop a
technique to evaluate true loss changes without retraining, with which channels
to prune can be selected more reliably and confidently. We first derive a
closed-form estimator of the true loss change per pruning mask change, using
influence functions without retraining. The influence function, a tool from robust
statistics, reveals the impact of a training sample on the model's predictions;
we repurpose it to assess impacts on true loss changes. We then show
how to assess the importance of all channels simultaneously and develop a novel
global channel pruning algorithm accordingly. We conduct extensive experiments
to verify the effectiveness of the proposed algorithm. To the best of our
knowledge, we are the first to show that evaluating true loss changes for pruning
without retraining is possible. This finding will open up opportunities for a
series of new paradigms to emerge that differ from existing pruning methods.
The code is available at https://github.com/hrcheng1066/IFSO.
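On a quadratic toy loss, a second-order Taylor estimate of the loss change under a pruning perturbation is exact, which conveys the flavour of a closed-form estimate made without retraining. The paper's influence-function machinery handles real networks; this is only a sketch with an illustrative loss:

```python
import numpy as np

def loss(theta, A, b):
    """Quadratic toy loss 0.5 * theta^T A theta - b^T theta."""
    return 0.5 * theta @ A @ theta - b @ theta

def estimated_change(theta, A, b, i):
    """Second-order Taylor estimate of the loss change when weight i is
    pruned (set to zero), evaluated without any retraining."""
    g = A @ theta - b          # gradient at the current weights
    H_ii = A[i, i]             # diagonal Hessian entry
    d = -theta[i]              # parameter perturbation induced by pruning
    return g[i] * d + 0.5 * H_ii * d * d

rng = np.random.default_rng(3)
Q = rng.standard_normal((4, 4))
A = Q @ Q.T + np.eye(4)        # positive definite Hessian
b = rng.standard_normal(4)
theta = np.linalg.solve(A, b)  # trained (converged) weights
pruned = theta.copy()
pruned[1] = 0.0                # prune the second weight
true_change = loss(pruned, A, b) - loss(theta, A, b)
print(np.isclose(estimated_change(theta, A, b, 1), true_change))  # exact on a quadratic
```

At a converged minimum the gradient term vanishes, so the estimate reduces to the curvature term alone; this is why second-order information is essential for ranking channels reliably without retraining.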