ForestHash: Semantic Hashing With Shallow Random Forests and Tiny Convolutional Networks
Hash codes are efficient data representations for coping with the ever
growing amounts of data. In this paper, we introduce a random forest semantic
hashing scheme that embeds tiny convolutional neural networks (CNN) into
shallow random forests, with near-optimal information-theoretic code
aggregation among trees. We start with a simple hashing scheme, where random
trees in a forest act as hashing functions by setting `1' for the visited tree
leaf, and `0' for the rest. We show that traditional random forests fail to
generate hashes that preserve the underlying similarity across trees,
rendering the random forests approach to hashing challenging. To address this,
we propose to first randomly group arriving classes at each tree split node
into two groups, obtaining a significantly simplified two-class classification
problem, which can be handled using a light-weight CNN weak learner. Such a
random class grouping scheme enables code uniqueness by forcing each class to
share its code with different classes in different trees. A non-conventional
low-rank loss is further adopted for the CNN weak learners to encourage code
consistency by minimizing intra-class variations and maximizing inter-class
distance for the two random class groups. Finally, we introduce an
information-theoretic approach for aggregating codes of individual trees into a
single hash code, producing a near-optimal unique hash for each class. The
proposed approach significantly outperforms state-of-the-art hashing methods
for image retrieval on large-scale public datasets, while performing on par
with state-of-the-art image classification techniques and using a more
compact, efficiently scalable representation. This work
proposes a principled and robust procedure to train and deploy in parallel an
ensemble of light-weight CNNs, instead of simply going deeper.
Comment: Accepted to ECCV 201
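As a toy illustration of the leaf-based hashing scheme described in the abstract (each tree sets `1` for the visited leaf and `0` for the rest, and the per-tree codes are concatenated), here is a minimal sketch using randomly generated axis-aligned trees. All function names and the random-split tree construction are illustrative assumptions, not the paper's method, which learns CNN weak learners at the split nodes:

```python
import random

def build_random_tree(depth, n_features):
    # A complete binary tree of random axis-aligned splits:
    # internal node i stores (feature index, threshold); children are 2i+1, 2i+2.
    return [(random.randrange(n_features), random.random())
            for _ in range(2 ** depth - 1)]

def leaf_index(tree, x, depth):
    # Route sample x down the tree; return the index of the leaf it reaches.
    node = 0
    for _ in range(depth):
        f, t = tree[node]
        node = 2 * node + (2 if x[f] > t else 1)
    return node - (2 ** depth - 1)

def forest_hash(forest, x, depth):
    # Each tree contributes a one-hot block: '1' at the visited leaf, '0' elsewhere.
    code = []
    for tree in forest:
        block = [0] * (2 ** depth)
        block[leaf_index(tree, x, depth)] = 1
        code.extend(block)
    return code
```

For a forest of T trees of depth d, the resulting code has T * 2^d bits, of which exactly T are set; the abstract's information-theoretic aggregation step then compresses these per-tree codes into a single compact hash.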
Strengthening the Effectiveness of Pedestrian Detection with Spatially Pooled Features
We propose a simple yet effective approach to the problem of pedestrian
detection which outperforms the current state-of-the-art. Our new features are
built on the basis of low-level visual features and spatial pooling.
Incorporating spatial pooling improves the translational invariance and thus
the robustness of the detection process. We then directly optimise the partial
area under the ROC curve (pAUC) measure, which concentrates detection
performance in the range of most practical importance. The combination of these
factors leads to a pedestrian detector which outperforms all competitors on all
of the standard benchmark datasets. We advance state-of-the-art results by
lowering the average miss rate from to on the INRIA benchmark,
to on the ETH benchmark, to on the TUD-Brussels
benchmark and to on the Caltech-USA benchmark.
Comment: 16 pages. Appearing in Proc. European Conf. Computer Vision (ECCV) 201
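A hedged sketch of the partial-AUC quantity the abstract optimises: empirical pAUC restricted to a false-positive-rate band, computed here by pairwise comparisons against only the negatives whose rank falls inside that band. The function name and the normalization to [0, 1] are illustrative assumptions; the paper optimises this measure during detector training rather than merely evaluating it:

```python
def partial_auc(pos_scores, neg_scores, fpr_lo=0.0, fpr_hi=0.1):
    # Empirical partial AUC over the FPR band [fpr_lo, fpr_hi]:
    # only negatives ranked (by score) inside the band contribute,
    # and the result is normalized to [0, 1] within that band.
    neg_sorted = sorted(neg_scores, reverse=True)
    lo = int(fpr_lo * len(neg_sorted))
    hi = int(fpr_hi * len(neg_sorted))
    band = neg_sorted[lo:hi]          # negatives inside the FPR band
    if not band or not pos_scores:
        return 0.0
    wins = sum(1 for p in pos_scores for n in band if p > n)
    return wins / (len(pos_scores) * len(band))
```

Restricting the band to low FPR values is what focuses the measure on the operating range that matters in practice for detection.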
Gradient Boosting With Piece-Wise Linear Regression Trees
Gradient Boosted Decision Trees (GBDT) is a very successful ensemble learning
algorithm widely used across a variety of applications. Recently, several
variants of GBDT training algorithms and implementations have been designed and
heavily optimized in several very popular open-source toolkits, including XGBoost,
LightGBM and CatBoost. In this paper, we show that both the accuracy and
efficiency of GBDT can be further enhanced by using more complex base learners.
Specifically, we extend gradient boosting to use piecewise linear regression
trees (PL Trees), instead of piecewise constant regression trees, as base
learners. We show that PL Trees can accelerate convergence of GBDT and improve
the accuracy. We also propose some optimization tricks to substantially reduce
the training time of PL Trees, with little sacrifice of accuracy. Moreover, we
propose several implementation techniques to speed up our algorithm on modern
computer architectures with powerful Single Instruction Multiple Data (SIMD)
parallelism. The experimental results show that GBDT with PL Trees can provide
very competitive testing accuracy with comparable or shorter training time …
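To illustrate the base-learner change the abstract describes, a minimal sketch of a piecewise-linear leaf: fit a least-squares line to the samples that reach the leaf instead of predicting their mean, as a piecewise-constant tree would. This is only a schematic of the leaf model, not the paper's SIMD-optimized training algorithm; all names are assumptions:

```python
def fit_linear_leaf(xs, ys):
    # Least-squares line y = a*x + b over the samples reaching this leaf
    # (a piecewise-constant leaf would instead predict the mean of ys).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var if var else 0.0
    return a, my - a * mx

def leaf_predict(model, x):
    a, b = model
    return a * x + b
```

On samples following y = 2x, for instance, the linear leaf recovers the trend exactly (a = 2, b = 0), where a constant leaf could only predict the mean; this extra per-leaf expressiveness is what lets PL Trees reach a given accuracy with fewer, shallower trees.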