Accelerated face detector training using the PSL framework
We train a face detection system using the PSL framework [1] which combines the AdaBoost
learning algorithm and Haar-like features. We demonstrate the ability of this framework to
overcome some of the challenges inherent in training classifiers that are structured in cascades
of boosted ensembles (CoBE). The PSL classifiers are compared to Viola-Jones-style cascaded
classifiers. We establish the ability of the PSL framework to produce classifiers for a
complex domain in a significantly reduced time frame. The resulting classifiers also comprise
fewer boosted ensembles, albeit at the price of increased false detection rates on our test
dataset. We also report results from a broader range of experiments on the PSL framework,
in order to shed more light on the effects of variations in its adjustable training parameters.
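The cascade-of-boosted-ensembles structure both detector families share can be sketched as follows. This is a minimal illustration of early rejection in a cascade; the stage scores, thresholds, and helper names are hypothetical and do not reflect the actual PSL training procedure or Haar-feature evaluation.

```python
# Minimal sketch of a cascade of boosted ensembles (CoBE): a window is
# only accepted as a face if it passes every stage, so cheap early
# stages reject most non-face windows before expensive stages run.
# Thresholds and the scalar "window score" are illustrative placeholders.

def make_stage(threshold):
    """Build a stage that accepts a window whose score meets the threshold."""
    def stage(window_score):
        return window_score >= threshold
    return stage

def cascade_classify(window_score, stages):
    """Return True only if the window passes every stage in order."""
    for stage in stages:
        if not stage(window_score):
            return False  # rejected early; later stages never run
    return True

# Three stages with increasing strictness (illustrative thresholds).
stages = [make_stage(t) for t in (0.2, 0.5, 0.8)]

print(cascade_classify(0.9, stages))  # passes all three stages: True
print(cascade_classify(0.3, stages))  # rejected at the second stage: False
```

In a real detector each stage would be a boosted ensemble of weak classifiers over Haar-like features rather than a single threshold, but the early-exit control flow is the same.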
Physical Representation-based Predicate Optimization for a Visual Analytics Database
Querying the content of images, video, and other non-textual data sources
requires expensive content extraction methods. Modern extraction techniques are
based on deep convolutional neural networks (CNNs) and can classify objects
within images with astounding accuracy. Unfortunately, these methods are slow:
processing a single image can take about 10 milliseconds on modern GPU-based
hardware. As massive video libraries become ubiquitous, running a content-based
query over millions of video frames is prohibitive.
One promising approach to reduce the runtime cost of queries of visual
content is to use a hierarchical model, such as a cascade, where simple cases
are handled by an inexpensive classifier. Prior work has sought to design
cascades that optimize the computational cost of inference by, for example,
using smaller CNNs. However, we observe that there are critical factors besides
the inference time that dramatically impact the overall query time. Notably, by
treating the physical representation of the input image as part of our query
optimization---that is, by including image transforms, such as resolution
scaling or color-depth reduction, within the cascade---we can optimize data
handling costs and enable drastically more efficient classifier cascades.
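The idea of folding the input representation into the cascade can be sketched as below. All names, costs, and classifiers here are hypothetical stand-ins, not Tahoma's API: the point is only that each stage pairs a cheap representation (e.g., a downscaled image) with a model, and later stages pay for richer input only when earlier stages are unsure.

```python
# Illustrative sketch: make the input representation (here, resolution)
# part of the cascade. Cheap stages run a small model on a heavily
# downscaled image; only low-confidence cases fall through to the
# full-resolution model. All models and thresholds are hypothetical.

def downscale(image, factor):
    """Naive subsampling of a 2D list-of-lists image (illustrative)."""
    return [row[::factor] for row in image[::factor]]

def cascade_query(image, stages):
    """stages: list of (scale_factor, classifier, confidence_threshold).
    Each classifier returns (label, confidence); a confident stage
    answers immediately, otherwise the next, costlier stage runs."""
    for factor, classifier, threshold in stages:
        label, confidence = classifier(downscale(image, factor))
        if confidence >= threshold:
            return label
    return label  # last stage's answer is final

# Hypothetical classifiers: a tiny model on 1/8-scale input, then a
# full model on the original resolution (threshold 0.0 = always answer).
def tiny_model(img):
    return ("cat", 0.95 if len(img) < 4 else 0.4)

def full_model(img):
    return ("dog", 0.99)

image = [[0] * 16 for _ in range(16)]
stages = [(8, tiny_model, 0.9), (1, full_model, 0.0)]
print(cascade_query(image, stages))  # tiny model is confident: cat
```

The design point this illustrates is that a stage's cost includes decoding and transforming its input, not just inference, which is why jointly choosing the representation and the model can beat choosing smaller CNNs alone.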
In this paper, we propose Tahoma, which generates and evaluates many
potential classifier cascades that jointly optimize the CNN architecture and
input data representation. Our experiments on a subset of ImageNet show that
Tahoma's input transformations speed up cascades by up to 35 times. We also
find up to a 98x speedup over the ResNet50 classifier with no loss in accuracy,
and a 280x speedup if some accuracy is sacrificed.

Comment: Camera-ready version of the paper submitted to ICDE 2019, in Proceedings of the
35th IEEE International Conference on Data Engineering (ICDE 2019).